# LeakyRelu - 1 vs 6
The next section compares an older version of the operator to a newer one after both definitions are converted into markdown text. Lines prefixed with + (green) are additions in the newer version, lines prefixed with - (red) are deletions, and anything else is unchanged.
- LeakyRelu1 → LeakyRelu6 +3 -1
LeakyRelu1 → LeakyRelu6 RENAMED
```diff
@@ -1 +1 @@
 LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one
 output data (Tensor<T>) where the function f(x) = alpha * x for x < 0,
 f(x) = x for x >= 0, is applied to the data tensor elementwise.
 **Attributes**
 * **alpha**:
-  Coefficient of leakage.
+  Coefficient of leakage default to 0.01.
+* **consumed_inputs**:
+  legacy optimization attribute.
 **Inputs**
 * **X** (heterogeneous) - **T**:
   Input tensor
 **Outputs**
 * **Y** (heterogeneous) - **T**:
   Output tensor
 **Type Constraints**
 * **T** in (
   tensor(double),
   tensor(float),
   tensor(float16)
 ):
   Constrain input and output types to float tensors.
```
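As an illustration of the elementwise semantics shared by both versions, the sketch below implements f(x) in NumPy and builds a LeakyRelu node with onnx.helper. The sample tensor values and the tensor names X and Y are assumptions made for this example, not part of the specification.

```python
import numpy as np
from onnx import helper

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    # f(x) = alpha * x for x < 0, f(x) = x for x >= 0, applied elementwise.
    # The default alpha of 0.01 matches the attribute default stated in the newer definition.
    return np.where(x < 0, alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.5], dtype=np.float32)
print(leaky_relu(x))  # -> [-0.02, -0.005, 0.0, 1.5]

# A LeakyRelu node with an explicit alpha attribute (tensor names are illustrative).
node = helper.make_node("LeakyRelu", inputs=["X"], outputs=["Y"], alpha=0.01)
```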