If

If - 16

Version

  • name: If (GitHub)

  • domain: main

  • since_version: 16

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 16.

Summary

If conditional

Attributes

  • else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.

  • then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.

Inputs

  • cond (heterogeneous) - B: Condition for the if

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the If node (if present) must be compatible with both S1 and S2, as it represents the union of both possible shapes. For example, if in a model file the first output of then_branch is a float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If’s first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2], since [2] and [3] are not compatible.
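
As a hedged illustration of option (c) above (not part of the specification text), the If output can be declared with onnx.helper using a rank-1 shape whose only dimension is a symbolic dim_param, which is compatible with both branch shapes:

import onnx

# Assume then_branch produces a float tensor of shape [2] and else_branch one of
# shape [3]. A symbolic dimension ("N") keeps the If output compatible with both.
if_out = onnx.helper.make_tensor_value_info(
    "res", onnx.TensorProto.FLOAT, ["N"]  # rank 1, dim_param set, no dim_value
)
# Passing [None] instead would leave the dimension unset entirely (option (b)).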

Type Constraints

  • V in ( optional(seq(tensor(bfloat16))), optional(seq(tensor(bool))), optional(seq(tensor(complex128))), optional(seq(tensor(complex64))), optional(seq(tensor(double))), optional(seq(tensor(float))), optional(seq(tensor(float16))), optional(seq(tensor(int16))), optional(seq(tensor(int32))), optional(seq(tensor(int64))), optional(seq(tensor(int8))), optional(seq(tensor(string))), optional(seq(tensor(uint16))), optional(seq(tensor(uint32))), optional(seq(tensor(uint64))), optional(seq(tensor(uint8))), optional(tensor(bfloat16)), optional(tensor(bool)), optional(tensor(complex128)), optional(tensor(complex64)), optional(tensor(double)), optional(tensor(float)), optional(tensor(float16)), optional(tensor(int16)), optional(tensor(int32)), optional(tensor(int64)), optional(tensor(int8)), optional(tensor(string)), optional(tensor(uint16)), optional(tensor(uint32)), optional(tensor(uint64)), optional(tensor(uint8)), seq(tensor(bfloat16)), seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8)), tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor, Sequence(Tensor), Optional(Tensor), and Optional(Sequence(Tensor)) types

  • B in ( tensor(bool) ): Only bool

Examples

_if

import numpy as np
import onnx

# Given a bool scalar input cond, return constant tensor x if cond is True,
# otherwise return constant tensor y.

then_out = onnx.helper.make_tensor_value_info(
    "then_out", onnx.TensorProto.FLOAT, [5]
)
else_out = onnx.helper.make_tensor_value_info(
    "else_out", onnx.TensorProto.FLOAT, [5]
)

x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
y = np.array([5, 4, 3, 2, 1]).astype(np.float32)

then_const_node = onnx.helper.make_node(
    "Constant",
    inputs=[],
    outputs=["then_out"],
    value=onnx.numpy_helper.from_array(x),
)

else_const_node = onnx.helper.make_node(
    "Constant",
    inputs=[],
    outputs=["else_out"],
    value=onnx.numpy_helper.from_array(y),
)

then_body = onnx.helper.make_graph(
    [then_const_node], "then_body", [], [then_out]
)

else_body = onnx.helper.make_graph(
    [else_const_node], "else_body", [], [else_out]
)

if_node = onnx.helper.make_node(
    "If",
    inputs=["cond"],
    outputs=["res"],
    then_branch=then_body,
    else_branch=else_body,
)

cond = np.array(1).astype(bool)
res = x if cond else y
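# Note: `expect` is the ONNX backend-test helper (not part of the public onnx
# API); it wraps the node in a model, runs it, and compares the outputs.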
expect(
    if_node,
    inputs=[cond],
    outputs=[res],
    name="test_if",
    opset_imports=[onnx.helper.make_opsetid("", 11)],
)

_if_seq

import numpy as np
import onnx

# Given a bool scalar input cond, return constant sequence x if cond is True,
# otherwise return constant sequence y.

then_out = onnx.helper.make_tensor_sequence_value_info(
    "then_out", onnx.TensorProto.FLOAT, shape=[5]
)
else_out = onnx.helper.make_tensor_sequence_value_info(
    "else_out", onnx.TensorProto.FLOAT, shape=[5]
)

x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
y = [np.array([5, 4, 3, 2, 1]).astype(np.float32)]

then_const_node = onnx.helper.make_node(
    "Constant",
    inputs=[],
    outputs=["x"],
    value=onnx.numpy_helper.from_array(x[0]),
)

then_seq_node = onnx.helper.make_node(
    "SequenceConstruct", inputs=["x"], outputs=["then_out"]
)

else_const_node = onnx.helper.make_node(
    "Constant",
    inputs=[],
    outputs=["y"],
    value=onnx.numpy_helper.from_array(y[0]),
)

else_seq_node = onnx.helper.make_node(
    "SequenceConstruct", inputs=["y"], outputs=["else_out"]
)

then_body = onnx.helper.make_graph(
    [then_const_node, then_seq_node], "then_body", [], [then_out]
)

else_body = onnx.helper.make_graph(
    [else_const_node, else_seq_node], "else_body", [], [else_out]
)

if_node = onnx.helper.make_node(
    "If",
    inputs=["cond"],
    outputs=["res"],
    then_branch=then_body,
    else_branch=else_body,
)

cond = np.array(1).astype(bool)
res = x if cond else y
expect(
    if_node,
    inputs=[cond],
    outputs=[res],
    name="test_if_seq",
    opset_imports=[onnx.helper.make_opsetid("", 13)],
)

_if_optional

import numpy as np
import onnx

# Given a bool scalar input cond, return an empty optional sequence of
# tensors if cond is True; otherwise return an optional sequence
# containing the constant tensor x.

ten_in_tp = onnx.helper.make_tensor_type_proto(
    onnx.TensorProto.FLOAT, shape=[5]
)
seq_in_tp = onnx.helper.make_sequence_type_proto(ten_in_tp)

then_out_tensor_tp = onnx.helper.make_tensor_type_proto(
    onnx.TensorProto.FLOAT, shape=[5]
)
then_out_seq_tp = onnx.helper.make_sequence_type_proto(then_out_tensor_tp)
then_out_opt_tp = onnx.helper.make_optional_type_proto(then_out_seq_tp)
then_out = onnx.helper.make_value_info("optional_empty", then_out_opt_tp)

else_out_tensor_tp = onnx.helper.make_tensor_type_proto(
    onnx.TensorProto.FLOAT, shape=[5]
)
else_out_seq_tp = onnx.helper.make_sequence_type_proto(else_out_tensor_tp)
else_out_opt_tp = onnx.helper.make_optional_type_proto(else_out_seq_tp)
else_out = onnx.helper.make_value_info("else_opt", else_out_opt_tp)

x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
cond = np.array(0).astype(bool)
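# `compute_if_outputs` is a helper from the ONNX backend test suite: per the
# comment above, it produces the expected output, i.e. an empty optional
# sequence when cond is True, and x otherwise.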
res = compute_if_outputs(x, cond)

opt_empty_in = onnx.helper.make_node(
    "Optional", inputs=[], outputs=["optional_empty"], type=seq_in_tp
)

then_body = onnx.helper.make_graph([opt_empty_in], "then_body", [], [then_out])

else_const_node = onnx.helper.make_node(
    "Constant",
    inputs=[],
    outputs=["x"],
    value=onnx.numpy_helper.from_array(x[0]),
)

else_seq_node = onnx.helper.make_node(
    "SequenceConstruct", inputs=["x"], outputs=["else_seq"]
)

else_optional_seq_node = onnx.helper.make_node(
    "Optional", inputs=["else_seq"], outputs=["else_opt"]
)

else_body = onnx.helper.make_graph(
    [else_const_node, else_seq_node, else_optional_seq_node],
    "else_body",
    [],
    [else_out],
)

if_node = onnx.helper.make_node(
    "If",
    inputs=["cond"],
    outputs=["sequence"],
    then_branch=then_body,
    else_branch=else_body,
)

expect(
    if_node,
    inputs=[cond],
    outputs=[res],
    name="test_if_opt",
    output_type_protos=[else_out_opt_tp],
    opset_imports=[onnx.helper.make_opsetid("", 16)],
)

If - 13

Version

  • name: If (GitHub)

  • domain: main

  • since_version: 13

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 13.

Summary

If conditional

Attributes

  • else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.

  • then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.

Inputs

  • cond (heterogeneous) - B: Condition for the if

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the If node (if present) must be compatible with both S1 and S2, as it represents the union of both possible shapes. For example, if in a model file the first output of then_branch is a float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If’s first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2], since [2] and [3] are not compatible.

Type Constraints

  • V in ( seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8)), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor and Sequence types

  • B in ( tensor(bool) ): Only bool

If - 11

Version

  • name: If (GitHub)

  • domain: main

  • since_version: 11

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 11.

Summary

If conditional

Attributes

  • else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.

  • then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.

Inputs

  • cond (heterogeneous) - B: Condition for the if

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the If node (if present) must be compatible with both S1 and S2, as it represents the union of both possible shapes. For example, if in a model file the first output of then_branch is a float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If’s first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2], since [2] and [3] are not compatible.

Type Constraints

  • V in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor types

  • B in ( tensor(bool) ): Only bool

If - 1

Version

  • name: If (GitHub)

  • domain: main

  • since_version: 1

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 1.

Summary

If conditional

Attributes

  • else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.

  • then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.

Inputs

  • cond (heterogeneous) - B: Condition for the if

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same shape and same data type.

Type Constraints

  • V in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor types

  • B in ( tensor(bool) ): Only bool