If
If - 16
Version
name: If (GitHub)
domain: main
since_version: 16
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 16.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.
Inputs
cond (heterogeneous) - B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes. For example, if in a model file, the first output of then_branch is typed float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.
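For case (c) above, a minimal sketch (names here are illustrative) of declaring the If node's first output with a rank-1 shape whose single dimension is a symbolic dim_param, making it compatible with both [2] and [3]:

import onnx

# Rank-1 output with a unique symbolic dimension instead of a fixed dim_value;
# string entries in the shape become dim_param in the resulting TypeProto.
if_out = onnx.helper.make_tensor_value_info(
    'if_out', onnx.TensorProto.FLOAT, ['if_out_dim']
)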
Type Constraints
V in ( optional(seq(tensor(bfloat16))), optional(seq(tensor(bool))), optional(seq(tensor(complex128))), optional(seq(tensor(complex64))), optional(seq(tensor(double))), optional(seq(tensor(float))), optional(seq(tensor(float16))), optional(seq(tensor(int16))), optional(seq(tensor(int32))), optional(seq(tensor(int64))), optional(seq(tensor(int8))), optional(seq(tensor(string))), optional(seq(tensor(uint16))), optional(seq(tensor(uint32))), optional(seq(tensor(uint64))), optional(seq(tensor(uint8))), optional(tensor(bfloat16)), optional(tensor(bool)), optional(tensor(complex128)), optional(tensor(complex64)), optional(tensor(double)), optional(tensor(float)), optional(tensor(float16)), optional(tensor(int16)), optional(tensor(int32)), optional(tensor(int64)), optional(tensor(int8)), optional(tensor(string)), optional(tensor(uint16)), optional(tensor(uint32)), optional(tensor(uint64)), optional(tensor(uint8)), seq(tensor(bfloat16)), seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8)), tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor, Sequence(Tensor), Optional(Tensor), and Optional(Sequence(Tensor)) types
B in ( tensor(bool) ): Only bool
Examples
if
# Given a bool scalar input cond,
# return constant tensor x if cond is True, otherwise return constant tensor y.
# (expect is the test helper used by the ONNX backend test suite.)
import numpy as np
import onnx

then_out = onnx.helper.make_tensor_value_info('then_out', onnx.TensorProto.FLOAT, [5])
else_out = onnx.helper.make_tensor_value_info('else_out', onnx.TensorProto.FLOAT, [5])

x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
y = np.array([5, 4, 3, 2, 1]).astype(np.float32)

# Each branch is a subgraph with no inputs and a single Constant output.
then_const_node = onnx.helper.make_node(
    'Constant',
    inputs=[],
    outputs=['then_out'],
    value=onnx.numpy_helper.from_array(x)
)
else_const_node = onnx.helper.make_node(
    'Constant',
    inputs=[],
    outputs=['else_out'],
    value=onnx.numpy_helper.from_array(y)
)

then_body = onnx.helper.make_graph(
    [then_const_node],
    'then_body',
    [],
    [then_out]
)
else_body = onnx.helper.make_graph(
    [else_const_node],
    'else_body',
    [],
    [else_out]
)

if_node = onnx.helper.make_node(
    'If',
    inputs=['cond'],
    outputs=['res'],
    then_branch=then_body,
    else_branch=else_body
)

cond = np.array(1).astype(bool)
res = x if cond else y
expect(if_node, inputs=[cond], outputs=[res], name='test_if',
       opset_imports=[onnx.helper.make_opsetid("", 11)])
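The expect call relies on the ONNX backend-test harness. As a rough alternative, a minimal sketch (assuming onnxruntime is installed; the graph and model names are illustrative) of wrapping the same if_node in a full model and running it:

import onnxruntime as ort

cond_in = onnx.helper.make_tensor_value_info('cond', onnx.TensorProto.BOOL, [])
res_out = onnx.helper.make_tensor_value_info('res', onnx.TensorProto.FLOAT, [5])
graph = onnx.helper.make_graph([if_node], 'if_graph', [cond_in], [res_out])
model = onnx.helper.make_model(
    graph, opset_imports=[onnx.helper.make_opsetid("", 16)]
)
onnx.checker.check_model(model)

sess = ort.InferenceSession(model.SerializeToString(),
                            providers=['CPUExecutionProvider'])
print(sess.run(None, {'cond': np.array(True)}))
# [array([1., 2., 3., 4., 5.], dtype=float32)]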
if_seq
# Given a bool scalar input cond.
# return constant sequence x if cond is True, otherwise return constant sequence y.
then_out = onnx.helper.make_tensor_sequence_value_info('then_out', onnx.TensorProto.FLOAT, shape=[5])
else_out = onnx.helper.make_tensor_sequence_value_info('else_out', onnx.TensorProto.FLOAT, shape=[5])
x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
y = [np.array([5, 4, 3, 2, 1]).astype(np.float32)]
then_const_node = onnx.helper.make_node(
    'Constant',
    inputs=[],
    outputs=['x'],
    value=onnx.numpy_helper.from_array(x[0])
)
then_seq_node = onnx.helper.make_node(
    'SequenceConstruct',
    inputs=['x'],
    outputs=['then_out']
)
else_const_node = onnx.helper.make_node(
    'Constant',
    inputs=[],
    outputs=['y'],
    value=onnx.numpy_helper.from_array(y[0])
)
else_seq_node = onnx.helper.make_node(
    'SequenceConstruct',
    inputs=['y'],
    outputs=['else_out']
)

then_body = onnx.helper.make_graph(
    [then_const_node, then_seq_node],
    'then_body',
    [],
    [then_out]
)
else_body = onnx.helper.make_graph(
    [else_const_node, else_seq_node],
    'else_body',
    [],
    [else_out]
)

if_node = onnx.helper.make_node(
    'If',
    inputs=['cond'],
    outputs=['res'],
    then_branch=then_body,
    else_branch=else_body
)

cond = np.array(1).astype(bool)
res = x if cond else y
expect(if_node, inputs=[cond], outputs=[res], name='test_if_seq',
       opset_imports=[onnx.helper.make_opsetid("", 13)])
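When building a standalone model around this if_node, its res output must itself be declared with a sequence type; a minimal sketch (an assumption, mirroring the value_info style used above):

cond_in = onnx.helper.make_tensor_value_info('cond', onnx.TensorProto.BOOL, [])
res_out = onnx.helper.make_tensor_sequence_value_info('res', onnx.TensorProto.FLOAT, shape=[5])
seq_graph = onnx.helper.make_graph([if_node], 'if_seq_graph', [cond_in], [res_out])
seq_model = onnx.helper.make_model(
    seq_graph, opset_imports=[onnx.helper.make_opsetid("", 13)]
)
onnx.checker.check_model(seq_model)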
if_optional
# Given a bool scalar input cond, return an empty optional sequence of
# tensor if True, return an optional sequence with value x
# (the input optional sequence) otherwise.
ten_in_tp = onnx.helper.make_tensor_type_proto(onnx.TensorProto.FLOAT, shape=[5])
seq_in_tp = onnx.helper.make_sequence_type_proto(ten_in_tp)
then_out_tensor_tp = onnx.helper.make_tensor_type_proto(onnx.TensorProto.FLOAT, shape=[5])
then_out_seq_tp = onnx.helper.make_sequence_type_proto(then_out_tensor_tp)
then_out_opt_tp = onnx.helper.make_optional_type_proto(then_out_seq_tp)
then_out = onnx.helper.make_value_info('optional_empty', then_out_opt_tp)
else_out_tensor_tp = onnx.helper.make_tensor_type_proto(onnx.TensorProto.FLOAT, shape=[5])
else_out_seq_tp = onnx.helper.make_sequence_type_proto(else_out_tensor_tp)
else_out_opt_tp = onnx.helper.make_optional_type_proto(else_out_seq_tp)
else_out = onnx.helper.make_value_info('else_opt', else_out_opt_tp)
x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
cond = np.array(0).astype(bool)
# compute_if_outputs is a small helper from the ONNX backend test suite:
# per the comment above, it produces the empty optional when cond is True
# and the optional sequence holding x otherwise.
res = compute_if_outputs(x, cond)

opt_empty_in = onnx.helper.make_node(
    'Optional',
    inputs=[],
    outputs=['optional_empty'],
    type=seq_in_tp
)

then_body = onnx.helper.make_graph(
    [opt_empty_in],
    'then_body',
    [],
    [then_out]
)

else_const_node = onnx.helper.make_node(
    'Constant',
    inputs=[],
    outputs=['x'],
    value=onnx.numpy_helper.from_array(x[0])
)
else_seq_node = onnx.helper.make_node(
    'SequenceConstruct',
    inputs=['x'],
    outputs=['else_seq']
)
else_optional_seq_node = onnx.helper.make_node(
    'Optional',
    inputs=['else_seq'],
    outputs=['else_opt']
)

else_body = onnx.helper.make_graph(
    [else_const_node, else_seq_node, else_optional_seq_node],
    'else_body',
    [],
    [else_out]
)

if_node = onnx.helper.make_node(
    'If',
    inputs=['cond'],
    outputs=['sequence'],
    then_branch=then_body,
    else_branch=else_body
)

expect(if_node, inputs=[cond], outputs=[res], name='test_if_opt',
       output_type_protos=[else_out_opt_tp],
       opset_imports=[onnx.helper.make_opsetid("", 16)])
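The compute_if_outputs helper is defined in the ONNX backend test suite and is not shown in the snippet above. A minimal definition consistent with the example's description (a sketch, not the verbatim source):

def compute_if_outputs(x, cond):
    # Empty result (the empty optional) when cond is True,
    # otherwise the optional sequence that holds x.
    return [] if cond else x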
Differences
Relative to If - 13, only the type constraint V changed: it now also accepts bfloat16 tensors (tensor(bfloat16) and seq(tensor(bfloat16))) as well as every optional(tensor(...)) and optional(seq(tensor(...))) type, and its description changed from "All Tensor and Sequence types" to "All Tensor, Sequence(Tensor), Optional(Tensor), and Optional(Sequence(Tensor)) types". The attributes, inputs, outputs, and the B constraint are unchanged.
If - 13
Version
name: If (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.
Inputs
cond (heterogeneous) - B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes. For example, if in a model file, the first output of then_branch is typed float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.
Type Constraints
V in ( seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8)), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor and Sequence types
B in ( tensor(bool) ): Only bool
Differences
Relative to If - 11, only the type constraint V changed: it now also accepts sequences of tensors (seq(tensor(...)) for every tensor element type), and its description changed from "All Tensor types" to "All Tensor and Sequence types". The attributes, inputs, outputs, and the B constraint are unchanged.
If - 11
Version
name: If (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.
Inputs
cond (heterogeneous) - B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes. For example, if in a model file, the first output of then_branch is typed float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.
Type Constraints
V in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor types
B in ( tensor(bool) ): Only bool
Differences
Relative to If - 1, the type constraints are unchanged; only the description of outputs changed. In If - 1 the return values of then_branch and else_branch had to be of the same shape and same data type; since If - 11 they only need to share the same data type, and the branches may produce tensors with different static shapes, in which case the If node's output shape must be compatible with both, as described above.
If - 1
Version
name: If (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch.
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch.
Inputs
cond (heterogeneous) - B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic) - V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same shape and same data type.
Type Constraints
V in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): All Tensor types
B in ( tensor(bool) ): Only bool