# PRelu - 1 vs 16
The next section compares an older version of the operator to a newer one after both definitions are converted into markdown text. Green means an addition to the newer version, red means a deletion. Anything else is unchanged.
- PRelu1 → PRelu16: +9 lines, -13 lines
```diff
@@ -1,27 +1,23 @@
 PRelu takes input data (Tensor<T>) and slope tensor as input, and produces one
 output data (Tensor<T>) where the function f(x) = slope * x for x < 0,
 f(x) = x for x >= 0., is applied to the data tensor elementwise.
-**
-
-
+**Attributes**
+
+* **consumed_inputs**:
+  legacy optimization attribute.
 **Inputs**
 * **X** (heterogeneous) - **T**:
   Input tensor
 * **slope** (heterogeneous) - **T**:
-  Slope tensor.
-
+  Slope tensor. If Slope is of size 1, the value is shared across
+  different channels
 **Outputs**
 * **Y** (heterogeneous) - **T**:
-  Output tensor
+  Output tensor
 **Type Constraints**
 * **T** in (
-  tensor(bfloat16),
   tensor(double),
   tensor(float),
-  tensor(float16),
+  tensor(float16)
-  tensor(int32),
-  tensor(int64),
-  tensor(uint32),
-  tensor(uint64)
   ):
-  Constrain input and output types to float
+  Constrain input and output types to float tensors.
```
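The diff above touches two behavioral points: the element-wise function f(x) = slope * x for x < 0, f(x) = x for x >= 0, and the note that a slope of size 1 is shared across channels. A minimal NumPy sketch of these semantics, not the ONNX reference implementation; the function name and test values are illustrative only:

```python
import numpy as np

def prelu_reference(x: np.ndarray, slope: np.ndarray) -> np.ndarray:
    """Sketch of PRelu: f(x) = slope * x for x < 0, f(x) = x for x >= 0.

    A size-1 slope is broadcast, i.e. the single value is shared
    across all channels/elements of x.
    """
    return np.where(x < 0, slope * x, x)

# Element-wise slope (same shape as x) versus a shared, size-1 slope.
x = np.array([[-2.0, -1.0], [0.0, 3.0]], dtype=np.float32)
per_element_slope = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
shared_slope = np.array([0.25], dtype=np.float32)  # size 1: shared across channels

print(prelu_reference(x, per_element_slope))  # [[-0.2 -0.2] [ 0.  3.]]
print(prelu_reference(x, shared_slope))       # [[-0.5 -0.25] [ 0.  3.]]
```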