RNN
RNN - 1
Version
name: RNN
domain: main
since_version: 1
function:
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN.

Notations: X is the input tensor, t is the time step (t-1 is the previous time step), Wi and Ri are the input and recurrence weight matrices, Wbi and Rbi are the corresponding bias vectors, WBi/RBi/WBbi/RBbi are their backward-direction counterparts, H is the hidden state, and num_directions is 2 if direction is bidirectional, otherwise 1.

Activation functions: Relu(x), Tanh(x) and Sigmoid(x) are always available; implementations may additionally support Affine, LeakyRelu, ThresholdedRelu, ScaledTanh, HardSigmoid, Elu, Softsign and Softplus.

Equation (default: f = Tanh):

Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)
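The recurrence above can be made concrete with a minimal NumPy sketch of a single forward direction using the default Tanh activation; all names and sizes here are illustrative and not part of the operator definition.

```python
import numpy as np

seq_length, batch_size, input_size, hidden_size = 4, 2, 3, 5

X = np.random.randn(seq_length, batch_size, input_size).astype(np.float32)
W = np.random.randn(hidden_size, input_size).astype(np.float32)   # Wi
R = np.random.randn(hidden_size, hidden_size).astype(np.float32)  # Ri
Wb = np.zeros(hidden_size, dtype=np.float32)                      # Wbi
Rb = np.zeros(hidden_size, dtype=np.float32)                      # Rbi

H = np.zeros((batch_size, hidden_size), dtype=np.float32)         # initial_h
outputs = []
for t in range(seq_length):
    # Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi), with f = Tanh
    H = np.tanh(X[t] @ W.T + H @ R.T + Wb + Rb)
    outputs.append(H)

Y = np.stack(outputs)  # [seq_length, batch_size, hidden_size] for one direction
Y_h = H                # last hidden state
```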
Attributes
activation_alpha - FLOATS : Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators. For example, with LeakyRelu the default alpha is 0.01.
activation_beta - FLOATS : Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as those of the corresponding ONNX operators.
activations - STRINGS : One activation function (or two if bidirectional) for the input gate. It must be one of the activation functions listed in the summary above. Optional: defaults to Tanh if not specified.
clip - FLOAT : Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified.
direction - STRING : Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional.
hidden_size - INT : Number of neurons in the hidden layer.
output_sequence - INT : If 0, the sequence output Y of the hidden state is not produced. Default 0. A node-construction sketch showing how these attributes are set follows this list.
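As an illustration of how these attributes are supplied, the following sketch builds a bidirectional RNN node with onnx.helper.make_node; the attribute values and tensor names are example choices, not defaults mandated by the specification. Trailing optional inputs that are not needed can simply be omitted from the inputs list.

```python
from onnx import helper

# Illustrative only: attribute values here are examples, not defaults.
rnn_node = helper.make_node(
    "RNN",
    inputs=["X", "W", "R", "B"],   # sequence_lens and initial_h omitted
    outputs=["Y", "Y_h"],
    hidden_size=8,                 # number of hidden units
    direction="bidirectional",     # forward (default), reverse, or bidirectional
    activations=["Tanh", "Tanh"],  # one activation per direction
    clip=3.0,                      # clip pre-activation values to [-3, 3]
    output_sequence=1,             # emit the full hidden sequence Y
)
```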
Inputs
Between 3 and 6 inputs.
X (heterogeneous) - T: The input sequences packed (and potentially padded) into one 3-D tensor with shape [seq_length, batch_size, input_size].
W (heterogeneous) - T: The weight tensor for the input gate. Concatenation of Wi and WBi (if bidirectional). Shape [num_directions, hidden_size, input_size].
R (heterogeneous) - T: The recurrence weight tensor. Concatenation of Ri and RBi (if bidirectional). Shape [num_directions, hidden_size, hidden_size].
B (optional, heterogeneous) - T: The bias tensor for the input gate. Concatenation of [Wbi, Rbi] and [WBbi, RBbi] (if bidirectional). Shape [num_directions, 2*hidden_size]. Optional: if not specified, assumed to be 0.
sequence_lens (optional, heterogeneous) - T1: Optional tensor specifying the lengths of the sequences in a batch. If not specified, all sequences in the batch are assumed to have length seq_length. Shape [batch_size].
initial_h (optional, heterogeneous) - T: Optional initial value of the hidden state. If not specified, assumed to be 0. Shape [num_directions, batch_size, hidden_size].
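As a quick shape check, here is a small NumPy sketch, assuming a bidirectional configuration, of placeholder arrays matching the shapes documented above; the sizes are arbitrary example values.

```python
import numpy as np

seq_length, batch_size, input_size, hidden_size = 4, 2, 3, 5
num_directions = 2  # bidirectional

X = np.zeros((seq_length, batch_size, input_size), dtype=np.float32)
W = np.zeros((num_directions, hidden_size, input_size), dtype=np.float32)
R = np.zeros((num_directions, hidden_size, hidden_size), dtype=np.float32)
B = np.zeros((num_directions, 2 * hidden_size), dtype=np.float32)  # [Wb, Rb] concatenated
sequence_lens = np.full((batch_size,), seq_length, dtype=np.int32)
initial_h = np.zeros((num_directions, batch_size, hidden_size), dtype=np.float32)
```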
Outputs
Between 0 and 2 outputs.
Y (optional, heterogeneous) - T: A tensor that concatenates all the intermediate output values of the hidden state. Shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.
Y_h (optional, heterogeneous) - T: The last output value of the hidden state. Shape [num_directions, batch_size, hidden_size].
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
T1 in ( tensor(int32) ): Constrain seq_lens to integer tensor.
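Tying the pieces together, the sketch below wraps an RNN node in a graph and runs the ONNX checker and shape inference; it assumes the onnx Python package, uses only the three required inputs, and requests just Y_h. Depending on the installed onnx version you may need to adjust the opset version.

```python
import onnx
from onnx import helper, TensorProto, shape_inference

seq_length, batch_size, input_size, hidden_size = 4, 2, 3, 5
num_directions = 1  # forward

node = helper.make_node(
    "RNN",
    inputs=["X", "W", "R"],   # B, sequence_lens, initial_h left out (optional)
    outputs=["", "Y_h"],      # skip Y (output_sequence defaults to 0), keep Y_h
    hidden_size=hidden_size,
)

graph = helper.make_graph(
    [node],
    "rnn_example",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [seq_length, batch_size, input_size]),
        helper.make_tensor_value_info("W", TensorProto.FLOAT, [num_directions, hidden_size, input_size]),
        helper.make_tensor_value_info("R", TensorProto.FLOAT, [num_directions, hidden_size, hidden_size]),
    ],
    outputs=[
        helper.make_tensor_value_info("Y_h", TensorProto.FLOAT, [num_directions, batch_size, hidden_size]),
    ],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 1)])
onnx.checker.check_model(model)
inferred = shape_inference.infer_shapes(model)
print(inferred.graph.output)  # Y_h: [num_directions, batch_size, hidden_size]
```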