GRU
GRU - 1
Version
name: GRU
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: False
This version of the operator has been available since version 1.
Summary
Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN.

Notations: X is the input tensor, z the update gate, r the reset gate, h the hidden gate, H the hidden state, t the time step, W[zrh] the parameter weight matrices for the gates, R[zrh] the recurrence weight matrices, and Wb[zrh], Rb[zrh] the corresponding bias vectors (WB, RB, WBb, RBb denote the backward-direction parameters). * denotes matrix multiplication and (.) element-wise (Hadamard) product.

Activation functions: Relu, Tanh, Sigmoid (additional optional functions: Affine, LeakyRelu, ThresholdedRelu, ScaledTanh, HardSigmoid, Elu, Softsign, Softplus).

Equations (defaults: f=Sigmoid, g=Tanh):

zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz)
rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr)
ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh)
Ht = (1 - zt) (.) ht + zt (.) Ht-1
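As an illustration of the equations above, the following is a minimal NumPy sketch of a single forward GRU step for one direction. The function name gru_cell and the splitting of the stacked (z, r, h) weight blocks are illustrative assumptions, not part of the operator specification.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gru_cell(x_t, h_prev, W, R, Wb, Rb, f=sigmoid, g=np.tanh):
    """One GRU step for a single direction (hypothetical reference sketch).

    x_t:    [batch_size, input_size]
    h_prev: [batch_size, hidden_size]
    W:      [3*hidden_size, input_size]   stacked as (z, r, h)
    R:      [3*hidden_size, hidden_size]  stacked as (z, r, h)
    Wb, Rb: [3*hidden_size]               stacked as (z, r, h)
    """
    Wz, Wr, Wh = np.split(W, 3, axis=0)
    Rz, Rr, Rh = np.split(R, 3, axis=0)
    Wbz, Wbr, Wbh = np.split(Wb, 3)
    Rbz, Rbr, Rbh = np.split(Rb, 3)

    z = f(x_t @ Wz.T + h_prev @ Rz.T + Wbz + Rbz)         # update gate
    r = f(x_t @ Wr.T + h_prev @ Rr.T + Wbr + Rbr)         # reset gate
    h = g(x_t @ Wh.T + (r * h_prev) @ Rh.T + Wbh + Rbh)   # hidden gate
    return (1.0 - z) * h + z * h_prev                     # new hidden state Ht
```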
Attributes
activation_alpha - FLOATS : Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM.
activation_beta - FLOATS : Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM.
activations - STRINGS : A list of 2 (or 4 if bidirectional) activation functions: one (f) shared by the update and reset gates and one (g) for the hidden gate. The activation functions must be chosen from the activation functions listed in the Summary. Optional: See the equations for the defaults if not specified.
clip - FLOAT : Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified.
direction - STRING : Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional.
hidden_size - INT : Number of neurons in the hidden layer
output_sequence - INT : If 0, the full sequence output Y of the hidden state is optional and may be omitted. Default 0.
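As a sketch of how these attributes might be set in practice, the following builds a GRU node with the ONNX Python helpers; the tensor names and the hidden size of 8 are illustrative assumptions, not part of the specification.

```python
from onnx import helper

# Hypothetical bidirectional GRU node exercising the attributes above.
gru_node = helper.make_node(
    "GRU",
    inputs=["X", "W", "R", "B", "sequence_lens", "initial_h"],
    outputs=["Y", "Y_h"],
    hidden_size=8,
    direction="bidirectional",
    activations=["Sigmoid", "Tanh", "Sigmoid", "Tanh"],  # (f, g) per direction
    clip=10.0,
    output_sequence=1,
)
```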
Inputs
Between 3 and 6 inputs.
X (heterogeneous) - T: The input sequences packed (and potentially padded) into one 3-D tensor with the shape [seq_length, batch_size, input_size].
W (heterogeneous) - T: The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
R (heterogeneous) - T: The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
B (optional, heterogeneous) - T: The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: if not specified, assumed to be 0.
sequence_lens (optional, heterogeneous) - T1: Optional tensor specifying the lengths of the sequences in a batch. If not specified, all sequences in the batch are assumed to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous) - T: Optional initial value of the hidden state. If not specified, assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
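To make the shapes above concrete, here is a small NumPy sketch that allocates each input for a forward (single-direction) GRU; the concrete sizes (seq_length=5, batch_size=2, input_size=4, hidden_size=8) are arbitrary illustrative choices.

```python
import numpy as np

seq_length, batch_size, input_size, hidden_size = 5, 2, 4, 8
num_directions = 1  # direction="forward"

X = np.random.randn(seq_length, batch_size, input_size).astype(np.float32)
W = np.random.randn(num_directions, 3 * hidden_size, input_size).astype(np.float32)
R = np.random.randn(num_directions, 3 * hidden_size, hidden_size).astype(np.float32)
B = np.zeros((num_directions, 6 * hidden_size), dtype=np.float32)  # [Wb[zrh], Rb[zrh]]
sequence_lens = np.full((batch_size,), seq_length, dtype=np.int32)
initial_h = np.zeros((num_directions, batch_size, hidden_size), dtype=np.float32)
```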
Outputs
Y (optional, heterogeneous) - T: A tensor that concatenates all the intermediate output values of the hidden state. It has shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.
Y_h (heterogeneous) - T: The last output value of the hidden state. It has shape [num_directions, batch_size, hidden_size].
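Continuing the hypothetical NumPy sketches above (the gru_cell function from the Summary and the forward-direction input tensors from the Inputs section), a loop over the time axis yields outputs with exactly the shapes listed here.

```python
# Run the single-direction cell over the sequence (forward direction only).
h = initial_h[0]                                 # [batch_size, hidden_size]
ys = []
for t in range(seq_length):
    h = gru_cell(X[t], h, W[0], R[0],
                 B[0, :3 * hidden_size], B[0, 3 * hidden_size:])
    ys.append(h)

Y = np.stack(ys)[:, np.newaxis]   # [seq_length, num_directions, batch_size, hidden_size]
Y_h = h[np.newaxis]               # [num_directions, batch_size, hidden_size]
```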
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
T1 in ( tensor(int32) ): Constrain seq_lens to integer tensor.
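As a closing sketch, the bidirectional node from the Attributes example could be assembled into a minimal model whose value_info types follow the Type Constraints (float32 for T, int32 for T1) and whose shapes follow the Inputs/Outputs sections. The graph name, the concrete dimensions, and the use of onnx.checker are illustrative assumptions.

```python
from onnx import TensorProto, checker, helper

# Hypothetical model around the bidirectional gru_node built above
# (seq_length=5, batch_size=2, input_size=4, hidden_size=8, num_directions=2),
# pinned to opset 1 so this version-1 signature (including output_sequence) applies.
graph = helper.make_graph(
    [gru_node],
    "gru_v1_example",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [5, 2, 4]),
        helper.make_tensor_value_info("W", TensorProto.FLOAT, [2, 24, 4]),
        helper.make_tensor_value_info("R", TensorProto.FLOAT, [2, 24, 8]),
        helper.make_tensor_value_info("B", TensorProto.FLOAT, [2, 48]),
        helper.make_tensor_value_info("sequence_lens", TensorProto.INT32, [2]),
        helper.make_tensor_value_info("initial_h", TensorProto.FLOAT, [2, 2, 8]),
    ],
    outputs=[
        helper.make_tensor_value_info("Y", TensorProto.FLOAT, [5, 2, 2, 8]),
        helper.make_tensor_value_info("Y_h", TensorProto.FLOAT, [2, 2, 8]),
    ],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 1)])
checker.check_model(model)
```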