RNN

RNN - 14

Version

  • name: RNN (GitHub)

  • domain: main

  • since_version: 14

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 14.

Summary

Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN.

Notations:

X - input tensor

i - input gate

t - time step (t-1 means previous time step)

Wi - W parameter weight matrix for input gate

Ri - R recurrence weight matrix for input gate

Wbi - W parameter bias vector for input gate

Rbi - R parameter bias vector for input gate

WBi - W parameter weight matrix for backward input gate

RBi - R recurrence weight matrix for backward input gate

WBbi - WR bias vectors for backward input gate

RBbi - RR bias vectors for backward input gate

H - Hidden state

num_directions - 2 if direction == bidirectional else 1

Activation functions:

Relu(x) - max(0, x)

Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})

Sigmoid(x) - 1/(1 + e^{-x})

(NOTE: Below are optional)

Affine(x) - alpha*x + beta

LeakyRelu(x) - x if x >= 0 else alpha * x

ThresholdedRelu(x) - x if x >= alpha else 0

ScaledTanh(x) - alpha*Tanh(beta*x)

HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)

Elu(x) - x if x >= 0 else alpha*(e^x - 1)

Softsign(x) - x/(1 + |x|)

Softplus(x) - log(1 + e^x)

Equations (Default: f=Tanh):

  • Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)

This operator has optional inputs/outputs. See the ONNX IR documentation (https://github.com/onnx/onnx/blob/master/docs/IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
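
To make the recurrence concrete, the following is a minimal NumPy sketch of one forward direction with layout = 0 (an illustrative reference only; the helper name rnn_forward is hypothetical and this is not the ONNX runtime implementation):

import numpy as np

def rnn_forward(X, W, R, B=None, initial_h=None, f=np.tanh):
    # X: [seq_length, batch_size, input_size] (layout = 0, one direction)
    # W: [hidden_size, input_size], R: [hidden_size, hidden_size]
    # B: [2*hidden_size], the concatenation of Wbi and Rbi (optional)
    seq_length, batch_size, _ = X.shape
    hidden_size = W.shape[0]
    Wbi, Rbi = np.split(B, 2) if B is not None else (0.0, 0.0)
    H = initial_h if initial_h is not None else np.zeros((batch_size, hidden_size))
    Y = np.empty((seq_length, batch_size, hidden_size))
    for t in range(seq_length):
        # Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)
        H = f(X[t] @ W.T + H @ R.T + Wbi + Rbi)
        Y[t] = H
    return Y, H  # all intermediate hidden states, and the last hidden state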

Attributes

  • activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators. For example, with LeakyRelu the default alpha is 0.01.

  • activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators.

  • activations: One (or two if bidirectional) activation function for input gate. The activation function must be one of the activation functions specified above. Optional: Default Tanh if not specified. Default value is [b'Tanh' b'Tanh'].

  • clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified.

  • direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is 'forward'.

  • hidden_size: Number of neurons in the hidden layer

  • layout: The shape format of inputs X, initial_h and outputs Y, Y_h. If 0, the following shapes are expected: X.shape = [seq_length, batch_size, input_size], Y.shape = [seq_length, num_directions, batch_size, hidden_size], initial_h.shape = Y_h.shape = [num_directions, batch_size, hidden_size]. If 1, the following shapes are expected: X.shape = [batch_size, seq_length, input_size], Y.shape = [batch_size, seq_length, num_directions, hidden_size], initial_h.shape = Y_h.shape = [batch_size, num_directions, hidden_size]. Default value is 0.
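
The attributes above are passed as keyword arguments to onnx.helper.make_node. As a hypothetical illustration (not taken from the examples below), a bidirectional RNN node with Relu activations could be declared as follows; W and R would then need num_directions = 2:

import onnx

node = onnx.helper.make_node(
    'RNN',
    inputs=['X', 'W', 'R'],
    outputs=['Y', 'Y_h'],
    hidden_size=8,
    direction='bidirectional',
    activations=['Relu', 'Relu'],  # one activation per direction
)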

Inputs

Between 3 and 6 inputs.

  • X (heterogeneous) - T: The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].

  • W (heterogeneous) - T: The weight tensor for input gate. Concatenation of Wi and WBi (if bidirectional). The tensor has shape [num_directions, hidden_size, input_size].

  • R (heterogeneous) - T: The recurrence weight tensor. Concatenation of Ri and RBi (if bidirectional). The tensor has shape [num_directions, hidden_size, hidden_size].

  • B (optional, heterogeneous) - T: The bias tensor for input gate. Concatenation of [Wbi, Rbi] and [WBbi, RBbi] (if bidirectional). The tensor has shape [num_directions, 2*hidden_size]. Optional: If not specified - assumed to be 0.

  • sequence_lens (optional, heterogeneous) - T1: Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].

  • initial_h (optional, heterogeneous) - T: Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].

Outputs

Between 0 and 2 outputs.

  • Y (optional, heterogeneous) - T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].

  • Y_h (optional, heterogeneous) - T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

  • T1 in ( tensor(int32) ): Constrain seq_lens to integer tensor.

Examples
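
The examples below come from the ONNX node tests and assume the following setup. The import paths for the expect and RNN_Helper test utilities are an assumption and may differ between ONNX versions:

import numpy as np
import onnx

# Test utilities from the ONNX repository (assumed locations):
from onnx.backend.test.case.node import expect
from onnx.backend.test.case.node.rnn import RNN_Helper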

defaults

input = np.array([[[1., 2.], [3., 4.], [5., 6.]]]).astype(np.float32)

input_size = 2
hidden_size = 4
weight_scale = 0.1

node = onnx.helper.make_node(
    'RNN',
    inputs=['X', 'W', 'R'],
    outputs=['', 'Y_h'],
    hidden_size=hidden_size
)

W = weight_scale * np.ones((1, hidden_size, input_size)).astype(np.float32)
R = weight_scale * np.ones((1, hidden_size, hidden_size)).astype(np.float32)

rnn = RNN_Helper(X=input, W=W, R=R)
_, Y_h = rnn.step()
expect(node, inputs=[input, W, R], outputs=[Y_h.astype(np.float32)], name='test_simple_rnn_defaults')
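
As a sanity check (not part of the test itself), the expected Y_h can be reproduced with plain NumPy: with one time step, no bias, and a zero initial hidden state, the recurrence collapses to tanh(X[0] @ W[0].T):

import numpy as np

X = np.array([[[1., 2.], [3., 4.], [5., 6.]]], dtype=np.float32)
W = 0.1 * np.ones((1, 4, 2), dtype=np.float32)

Y_h = np.tanh(X[0] @ W[0].T)[np.newaxis]  # shape (1, 3, 4): [num_directions, batch_size, hidden_size]
print(Y_h[0, :, 0])                       # approx [0.2913, 0.6044, 0.8005]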

initial_bias

input = np.array([[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]]).astype(np.float32)

input_size = 3
hidden_size = 5
custom_bias = 0.1
weight_scale = 0.1

node = onnx.helper.make_node(
    'RNN',
    inputs=['X', 'W', 'R', 'B'],
    outputs=['', 'Y_h'],
    hidden_size=hidden_size
)

W = weight_scale * np.ones((1, hidden_size, input_size)).astype(np.float32)
R = weight_scale * np.ones((1, hidden_size, hidden_size)).astype(np.float32)

# Adding custom bias
W_B = custom_bias * np.ones((1, hidden_size)).astype(np.float32)
R_B = np.zeros((1, hidden_size)).astype(np.float32)
B = np.concatenate((W_B, R_B), axis=1)

rnn = RNN_Helper(X=input, W=W, R=R, B=B)
_, Y_h = rnn.step()
expect(node, inputs=[input, W, R, B], outputs=[Y_h.astype(np.float32)],
       name='test_simple_rnn_with_initial_bias')

seq_length

input = np.array([[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]],
                  [[10., 11., 12.], [13., 14., 15.], [16., 17., 18.]]]).astype(np.float32)

input_size = 3
hidden_size = 5

node = onnx.helper.make_node(
    'RNN',
    inputs=['X', 'W', 'R', 'B'],
    outputs=['', 'Y_h'],
    hidden_size=hidden_size
)

W = np.random.randn(1, hidden_size, input_size).astype(np.float32)
R = np.random.randn(1, hidden_size, hidden_size).astype(np.float32)

# Adding custom bias
W_B = np.random.randn(1, hidden_size).astype(np.float32)
R_B = np.random.randn(1, hidden_size).astype(np.float32)
B = np.concatenate((W_B, R_B), axis=1)

rnn = RNN_Helper(X=input, W=W, R=R, B=B)
_, Y_h = rnn.step()
expect(node, inputs=[input, W, R, B], outputs=[Y_h.astype(np.float32)], name='test_rnn_seq_length')

batchwise

input = np.array([[[1., 2.]], [[3., 4.]], [[5., 6.]]]).astype(np.float32)

input_size = 2
hidden_size = 4
weight_scale = 0.5
layout = 1

node = onnx.helper.make_node(
    'RNN',
    inputs=['X', 'W', 'R'],
    outputs=['Y', 'Y_h'],
    hidden_size=hidden_size,
    layout=layout
)

W = weight_scale * np.ones((1, hidden_size, input_size)).astype(np.float32)
R = weight_scale * np.ones((1, hidden_size, hidden_size)).astype(np.float32)

rnn = RNN_Helper(X=input, W=W, R=R, layout=layout)
Y, Y_h = rnn.step()
expect(node, inputs=[input, W, R], outputs=[Y.astype(np.float32), Y_h.astype(np.float32)], name='test_simple_rnn_batchwise')
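
Note that the input above is interpreted with layout = 1, i.e. as [batch_size, seq_length, input_size]. A small sketch (not from the test suite) of converting between this batch-major form and the default time-major layout with a transpose:

import numpy as np

x_batch_major = np.array([[[1., 2.]], [[3., 4.]], [[5., 6.]]], dtype=np.float32)  # [batch_size, seq_length, input_size]
x_time_major = np.transpose(x_batch_major, (1, 0, 2))                             # [seq_length, batch_size, input_size]
assert x_time_major.shape == (1, 3, 2)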

Differences

RNN-14 differs from RNN-7 only by the addition of the layout attribute. With layout = 0 (the previous, and still default, behavior) the inputs and outputs are time-major: X.shape = [seq_length, batch_size, input_size], Y.shape = [seq_length, num_directions, batch_size, hidden_size], and initial_h.shape = Y_h.shape = [num_directions, batch_size, hidden_size]. With layout = 1 they are batch-major: X.shape = [batch_size, seq_length, input_size], Y.shape = [batch_size, seq_length, num_directions, hidden_size], and initial_h.shape = Y_h.shape = [batch_size, num_directions, hidden_size]. The summary, the other attributes, the inputs and outputs, and the type constraints are unchanged.

RNN - 7

Version

  • name: RNN (GitHub)

  • domain: main

  • since_version: 7

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 7.

Summary

Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN.

Notations:

X - input tensor

i - input gate

t - time step (t-1 means previous time step)

Wi - W parameter weight matrix for input gate

Ri - R recurrence weight matrix for input gate

Wbi - W parameter bias vector for input gate

Rbi - R parameter bias vector for input gate

WBi - W parameter weight matrix for backward input gate

RBi - R recurrence weight matrix for backward input gate

WBbi - WR bias vectors for backward input gate

RBbi - RR bias vectors for backward input gate

H - Hidden state

num_directions - 2 if direction == bidirectional else 1

Activation functions:

Relu(x) - max(0, x)

Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})

Sigmoid(x) - 1/(1 + e^{-x})

(NOTE: Below are optional)

Affine(x) - alpha*x + beta

LeakyRelu(x) - x if x >= 0 else alpha * x

ThresholdedRelu(x) - x if x >= alpha else 0

ScaledTanh(x) - alpha*Tanh(beta*x)

HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)

Elu(x) - x if x >= 0 else alpha*(e^x - 1)

Softsign(x) - x/(1 + |x|)

Softplus(x) - log(1 + e^x)

Equations (Default: f=Tanh):

  • Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)

This operator has optional inputs/outputs. See the ONNX IR documentation (https://github.com/onnx/onnx/blob/master/docs/IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators. For example, with LeakyRelu the default alpha is 0.01.

  • activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators.

  • activations: One (or two if bidirectional) activation function for input gate. The activation function must be one of the activation functions specified above. Optional: Default Tanh if not specified. Default value is [b'Tanh' b'Tanh'].

  • clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified.

  • direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is 'forward'.

  • hidden_size: Number of neurons in the hidden layer

Inputs

Between 3 and 6 inputs.

  • X (heterogeneous) - T: The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].

  • W (heterogeneous) - T: The weight tensor for input gate. Concatenation of Wi and WBi (if bidirectional). The tensor has shape [num_directions, hidden_size, input_size].

  • R (heterogeneous) - T: The recurrence weight tensor. Concatenation of Ri and RBi (if bidirectional). The tensor has shape [num_directions, hidden_size, hidden_size].

  • B (optional, heterogeneous) - T: The bias tensor for input gate. Concatenation of [Wbi, Rbi] and [WBbi, RBbi] (if bidirectional). The tensor has shape [num_directions, 2*hidden_size]. Optional: If not specified - assumed to be 0.

  • sequence_lens (optional, heterogeneous) - T1: Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].

  • initial_h (optional, heterogeneous) - T: Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].

Outputs

Between 0 and 2 outputs.

  • Y (optional, heterogeneous) - T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].

  • Y_h (optional, heterogeneous) - T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

  • T1 in ( tensor(int32) ): Constrain seq_lens to integer tensor.

Differences

RNN-7 differs from RNN-1 as follows:

  • The recurrence term of the equation is transposed: Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi) replaces Ht = f(Xt*(Wi^T) + Ht-1*Ri + Wbi + Rbi).

  • The note on optional inputs/outputs and the representation of missing arguments is added to the summary.

  • The output_sequence attribute is removed, and output Y is no longer described as optional depending on output_sequence.

The remaining attributes, inputs, outputs, and type constraints are unchanged.

RNN - 1

Version

  • name: RNN (GitHub)

  • domain: main

  • since_version: 1

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 1.

Summary

Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN.

Notations:

X - input tensor

i - input gate

t - time step (t-1 means previous time step)

Wi - W parameter weight matrix for input gate

Ri - R recurrence weight matrix for input gate

Wbi - W parameter bias vector for input gate

Rbi - R parameter bias vector for input gate

WBi - W parameter weight matrix for backward input gate

RBi - R recurrence weight matrix for backward input gate

WBbi - WR bias vectors for backward input gate

RBbi - RR bias vectors for backward input gate

H - Hidden state

num_directions - 2 if direction == bidirectional else 1

Activation functions:

Relu(x) - max(0, x)

Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})

Sigmoid(x) - 1/(1 + e^{-x})

(NOTE: Below are optional)

Affine(x) - alpha*x + beta

LeakyRelu(x) - x if x >= 0 else alpha * x

ThresholdedRelu(x) - x if x >= alpha else 0

ScaledTanh(x) - alpha*Tanh(beta*x)

HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)

Elu(x) - x if x >= 0 else alpha*(e^x - 1)

Softsign(x) - x/(1 + |x|)

Softplus(x) - log(1 + e^x)

Equations (Default: f=Tanh):

  • Ht = f(Xt*(Wi^T) + Ht-1*Ri + Wbi + Rbi)

Attributes

  • activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators. For example, with LeakyRelu the default alpha is 0.01.

  • activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators.

  • activations: One (or two if bidirectional) activation function for input gate. The activation function must be one of the activation functions specified above. Optional: Default Tanh if not specified. Default value is [b'Tanh' b'Tanh'].

  • clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified.

  • direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is 'forward'.

  • hidden_size: Number of neurons in the hidden layer

  • output_sequence: The sequence output for the hidden is optional if 0. Default value is 0.

Inputs

Between 3 and 6 inputs.

  • X (heterogeneous) - T: The input sequences packed (and potentially padded) into one 3-D tensor with the shape of [seq_length, batch_size, input_size].

  • W (heterogeneous) - T: The weight tensor for input gate. Concatenation of Wi and WBi (if bidirectional). The tensor has shape [num_directions, hidden_size, input_size].

  • R (heterogeneous) - T: The recurrence weight tensor. Concatenation of Ri and RBi (if bidirectional). The tensor has shape [num_directions, hidden_size, hidden_size].

  • B (optional, heterogeneous) - T: The bias tensor for input gate. Concatenation of [Wbi, Rbi] and [WBbi, RBbi] (if bidirectional). The tensor has shape [num_directions, 2*hidden_size]. Optional: If not specified - assumed to be 0.

  • sequence_lens (optional, heterogeneous) - T1: Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].

  • initial_h (optional, heterogeneous) - T: Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].

Outputs

Between 0 and 2 outputs.

  • Y (optional, heterogeneous) - T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.

  • Y_h (optional, heterogeneous) - T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

  • T1 in ( tensor(int32) ): Constrain seq_lens to integer tensor.