HardSigmoid#
HardSigmoid - 6#
Version
name: HardSigmoid
domain: main
since_version: 6
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 6.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
alpha: Value of alpha. Default value is 0.20000000298023224.
beta: Value of beta. Default value is 0.5.
Inputs
X (heterogeneous) - T: Input tensor
Outputs
Y (heterogeneous) - T: Output tensor
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Examples
hardsigmoid_default
default_alpha = 0.2
default_beta = 0.5
import numpy as np
import onnx

node = onnx.helper.make_node(
    'HardSigmoid',
    inputs=['x'],
    outputs=['y'],
)
x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.clip(x * default_alpha + default_beta, 0, 1)
expect(node, inputs=[x], outputs=[y],
       name='test_hardsigmoid_default')
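The clipping behavior at both ends of the range can also be illustrated without the ONNX test harness. The following is a minimal NumPy-only sketch of the same elementwise computation, using hypothetical non-default values of alpha and beta chosen for illustration (they are not part of the operator's defaults):

```python
import numpy as np

# Hypothetical values for illustration; the operator defaults are
# alpha=0.2 and beta=0.5.
alpha = 0.5
beta = 0.6

def hardsigmoid(x, alpha, beta):
    # y = max(0, min(1, alpha * x + beta)), applied elementwise
    return np.clip(alpha * x + beta, 0, 1).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0], dtype=np.float32)
y = hardsigmoid(x, alpha, beta)
# Inputs below (1 - beta) / alpha saturate at 1; inputs below
# -beta / alpha saturate at 0; the middle is linear.
print(y)
```

Note that the saturation thresholds depend on alpha and beta: with these values, any input at or above 0.8 maps to 1 and any input at or below -1.2 maps to 0.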
Differences
HardSigmoid-6 differs from HardSigmoid-1 as follows:

- The legacy optimization attribute consumed_inputs, present in HardSigmoid-1, was removed.
- The attribute descriptions were shortened: "Value of alpha default to 0.2" became "Value of alpha", and "Value of beta default to 0.5" became "Value of beta".
- Shape inference is supported from version 6 onward.

The summary, inputs, outputs, and type constraints are unchanged.
HardSigmoid - 1#
Version
name: HardSigmoid
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: False
This version of the operator has been available since version 1.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
alpha: Value of alpha default to 0.2. Default value is 0.20000000298023224.
beta: Value of beta default to 0.5. Default value is 0.5.
consumed_inputs: legacy optimization attribute.
Inputs
X (heterogeneous) - T: Input tensor
Outputs
Y (heterogeneous) - T: Output tensor
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.