HardSigmoid - 1 vs 6
HardSigmoid1 → HardSigmoid6
RENAMED
```diff
@@ -1 +1 @@
 HardSigmoid takes one input data (Tensor<T>) and produces one output data
 (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)),
 is applied to the tensor elementwise.
 **Attributes**
 * **alpha**:
-  Value of alpha
+  Value of alpha.
 * **beta**:
-  Value of beta default to 0.5
-* **consumed_inputs**:
-  legacy optimization attribute.
+  Value of beta.
 **Inputs**
 * **X** (heterogeneous) - **T**:
   Input tensor
 **Outputs**
 * **Y** (heterogeneous) - **T**:
   Output tensor
 **Type Constraints**
 * **T** in (
   tensor(double),
   tensor(float),
   tensor(float16)
 ):
   Constrain input and output types to float tensors.
```
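The HardSigmoid function, y = max(0, min(1, alpha * x + beta)), can be sketched in a few lines of NumPy. The default values used here (alpha=0.2, beta=0.5) follow the ONNX HardSigmoid-6 specification; only the beta default of 0.5 appears in the version-1 text above, so treat the alpha default as an assumption:

```python
import numpy as np

def hard_sigmoid(x, alpha=0.2, beta=0.5):
    """Elementwise HardSigmoid: y = max(0, min(1, alpha * x + beta)).

    Defaults assumed from the ONNX HardSigmoid-6 spec.
    """
    # np.clip applies both the min(1, ...) and max(0, ...) bounds.
    return np.clip(alpha * x + beta, 0.0, 1.0)

x = np.array([-5.0, 0.0, 5.0], dtype=np.float32)
print(hard_sigmoid(x))  # saturates at 0 below x = -beta/alpha, at 1 above (1-beta)/alpha
```

With the defaults, the function is linear on [-2.5, 2.5] and saturates outside that interval, which makes it a cheap piecewise-linear approximation of the logistic sigmoid.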