Relu - 6 vs 14
The next section compares an older version of the operator with a newer one, after both definitions are converted into markdown text. Green means an addition in the newer version, red means a deletion. Anything else is unchanged.
- Relu6 → Relu14 (renamed): +2 lines added, -7 lines removed
```diff
@@ -1 +1 @@
 Relu takes one input data (Tensor<T>) and produces one output data
 (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to
 the tensor elementwise.
 **Inputs**
 * **X** (heterogeneous) - **T**:
   Input tensor
 **Outputs**
 * **Y** (heterogeneous) - **T**:
   Output tensor
 **Type Constraints**
 * **T** in (
-  tensor(bfloat16),
   tensor(double),
   tensor(float),
-  tensor(float16),
+  tensor(float16)
-  tensor(int16),
-  tensor(int32),
-  tensor(int64),
-  tensor(int8)
   ):
-  Constrain input and output types to signed numeric tensors.
+  Constrain input and output types to float tensors.
```
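The substance of this change is the **T** type constraint: Relu-14 broadens it from float tensors to signed numeric tensors, so integer inputs become legal. Below is a minimal sketch of the effect using the `onnx` Python helpers (the graph name and tensor shape are arbitrary, and the exact checker error text will vary by onnx version):

```python
import onnx
from onnx import TensorProto, helper

# One-node graph: Y = Relu(X), i.e. y = max(0, x) elementwise, on int32 data.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "relu_int32",  # arbitrary graph name
    [helper.make_tensor_value_info("X", TensorProto.INT32, [3])],
    [helper.make_tensor_value_info("Y", TensorProto.INT32, [3])],
)

# Under opset 14, T includes the integer types, so the model validates.
model_14 = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 14)])
onnx.checker.check_model(model_14, full_check=True)

# Under opset 6, T is restricted to float tensors; full_check runs strict
# shape inference, which should reject the int32 input.
model_6 = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 6)])
try:
    onnx.checker.check_model(model_6, full_check=True)
except Exception as exc:
    print(f"opset 6 rejects int32 Relu: {exc}")
```

The function itself is unchanged between the two versions, and nothing ONNX-specific is needed to compute it: `np.maximum(x, 0)` implements y = max(0, x).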