Relu

Relu - 14

Version

  • name: Relu (GitHub)

  • domain: main

  • since_version: 14

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 14.
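since_version determines which schema a model resolves to: a model importing opset N uses the highest Relu schema whose since_version is at most N. As a hedged sketch (assuming the onnx Python package is installed), this resolution can be queried directly via onnx.defs.get_schema:

```python
import onnx.defs

# A model importing opset 14 (or later) resolves to this schema:
# get_schema returns the schema with the highest since_version that is
# <= the requested version for the given op type.
schema = onnx.defs.get_schema("Relu", 14)
print(schema.name, schema.since_version)  # Relu 14

# A model importing opset 13 resolves to the previous schema instead.
print(onnx.defs.get_schema("Relu", 13).since_version)  # 13
```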

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous) - T: Input tensor

Outputs

  • Y (heterogeneous) - T: Output tensor

Type Constraints

  • T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8) ): Constrain input and output types to signed numeric tensors.

Examples
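As a minimal sketch (not the official test case; the tensor names, shapes, and values are illustrative), the following builds a one-node Relu graph with the onnx helper API and computes the reference result, y = max(0, x), with numpy:

```python
import numpy as np
import onnx
from onnx import helper, TensorProto

# One-node graph computing Y = Relu(X), declared against opset 14.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "relu_example",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 14)])
onnx.checker.check_model(model)

# Reference computation: the rectified linear function, elementwise.
x = np.array([-1.5, 0.0, 0.5, 2.0], dtype=np.float32)
y = np.maximum(x, 0.0)
print(y)  # [0.  0.  0.5 2. ]
```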

Differences

Relu - 14 differs from Relu - 13 as follows:

  • Type constraints: tensor(int16), tensor(int32), tensor(int64) and tensor(int8) are added to T.

  • The constraint description changes from "Constrain input and output types to float tensors." to "Constrain input and output types to signed numeric tensors.".

  • The summary, inputs and outputs are unchanged.
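Opset 14 is the first version that accepts signed integer inputs. A short hedged sketch of the reference behaviour on an int32 tensor (values are illustrative):

```python
import numpy as np

# Under Relu-14, T also admits tensor(int8/int16/int32/int64), so the
# same elementwise max applies to integer data.
x = np.array([-3, 0, 7], dtype=np.int32)
y = np.maximum(x, 0)
print(y, y.dtype)  # [0 0 7] int32
```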

Relu - 13

Version

  • name: Relu (GitHub)

  • domain: main

  • since_version: 13

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 13.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous) - T: Input tensor

Outputs

  • Y (heterogeneous) - T: Output tensor

Type Constraints

  • T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

Differences

Relu - 13 differs from Relu - 6 as follows:

  • Type constraints: tensor(bfloat16) is added to T.

  • The summary, inputs, outputs and constraint description are unchanged.

Relu - 6

Version

  • name: Relu (GitHub)

  • domain: main

  • since_version: 6

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 6.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous) - T: Input tensor

Outputs

  • Y (heterogeneous) - T: Output tensor

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

Differences

Relu - 6 differs from Relu - 1 as follows:

  • The consumed_inputs attribute (a legacy optimization attribute) is removed.

  • The summary, inputs, outputs and type constraints are unchanged.

Relu - 1

Version

  • name: Relu (GitHub)

  • domain: main

  • since_version: 1

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: False

This version of the operator has been available since version 1.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Attributes

  • consumed_inputs: legacy optimization attribute.

Inputs

  • X (heterogeneous) - T: Input tensor

Outputs

  • Y (heterogeneous) - T: Output tensor

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.