PRelu - 1 vs 16

The next section compares an older version of the same operator to a newer one after both definitions are converted into markdown text. Green means an addition to the newer version, red means a deletion. Anything else is unchanged. A short NumPy sketch of the PRelu computation follows the diff.

Files changed (1)
  1. PRelu1 → PRelu16 +9 -13
PRelu1 → PRelu16 RENAMED
@@ -1 +1 @@
  PRelu takes input data (Tensor<T>) and slope tensor as input, and produces one
  output data (Tensor<T>) where the function f(x) = slope * x for x < 0,
  f(x) = x for x >= 0, is applied to the data tensor elementwise.
- **History**
- - Version 16 adds bfloat16 to the types allowed.
- This operator supports **unidirectional broadcasting** (tensor slope should be unidirectional broadcastable to input tensor X); for more details please check Broadcasting in ONNX <https://github.com/onnx/onnx/blob/master/docs/Broadcasting.md>_.
+ **Attributes**
+
+ * **consumed_inputs**:
+ legacy optimization attribute.
  **Inputs**
  * **X** (heterogeneous) - **T**:
  Input tensor
  * **slope** (heterogeneous) - **T**:
- Slope tensor. The shape of slope can be smaller than first input X;
+ Slope tensor. If Slope is of size 1, the value is shared across
- if so, its shape must be unidirectional broadcastable to X
+ different channels
  **Outputs**
  * **Y** (heterogeneous) - **T**:
- Output tensor (same size as X)
+ Output tensor
  **Type Constraints**
  * **T** in (
- tensor(bfloat16),
  tensor(double),
  tensor(float),
- tensor(float16),
+ tensor(float16)
- tensor(int32),
- tensor(int64),
- tensor(uint32),
- tensor(uint64)
  ):
- Constrain input and output types to float/int tensors.
+ Constrain input and output types to float tensors.
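
To make the definitions above concrete, here is a minimal NumPy sketch of the PRelu computation, f(x) = slope * x for x < 0 and f(x) = x for x >= 0, applied elementwise. The `prelu` helper and the sample tensors are illustrative only, not part of the ONNX spec, and the example assumes slope is unidirectionally broadcastable to X as required in opset 16; in opset 1 the slope is instead either matched elementwise or a single value shared across channels.

```python
import numpy as np

def prelu(x: np.ndarray, slope: np.ndarray) -> np.ndarray:
    """Elementwise f(x) = slope * x for x < 0, f(x) = x for x >= 0."""
    # np.where broadcasts slope against x, mirroring the opset-16 rule
    # that slope must be unidirectionally broadcastable to X.
    return np.where(x < 0, slope * x, x)

# Per-column slope broadcast over a (2, 3) input (hypothetical sample values).
x = np.array([[-1.0, 2.0, -3.0],
              [4.0, -5.0, 6.0]], dtype=np.float32)
slope = np.array([0.25, 0.5, 0.125], dtype=np.float32)
print(prelu(x, slope))
# -> [[-0.25, 2., -0.375], [4., -2.5, 6.]]
```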