ReduceL2
ReduceL2 - 13
Version
name: ReduceL2 (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
Computes the L2 norm of the input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the reduced dimensions are pruned from the resulting tensor.
This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.
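In numpy terms, the reduction can be sketched as follows (a minimal reference sketch drawn from the examples further down; the helper name reduce_l2_reference is illustrative, not part of the specification):

import numpy as np

def reduce_l2_reference(data, axes=None, keepdims=1):
    # Square the elements, sum along the requested axes, then take the
    # square root. axes=None reduces over all dimensions, matching the
    # operator's default; keepdims follows the ONNX convention (1 = keep).
    axis = tuple(axes) if axes is not None else None
    return np.sqrt(np.sum(np.square(data), axis=axis, keepdims=keepdims == 1))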
Attributes
axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).
keepdims: Keep the reduced dimension or not; 1 means keep the reduced dimension. Default value is 1.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
Examples
do_not_keepdims
import numpy as np
import onnx
# expect() is the ONNX backend-test helper used by all of these examples.

shape = [3, 2, 2]
axes = [2]
keepdims = 0

node = onnx.helper.make_node(
    'ReduceL2',
    inputs=['data'],
    outputs=['reduced'],
    axes=axes,
    keepdims=keepdims
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
# print(reduced)
# [[2.23606798, 5.],
#  [7.81024968, 10.63014581],
#  [13.45362405, 16.2788206]]

expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_do_not_keepdims_example')

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_do_not_keepdims_random')
keepdims
shape = [3, 2, 2]
axes = [2]
keepdims = 1

node = onnx.helper.make_node(
    'ReduceL2',
    inputs=['data'],
    outputs=['reduced'],
    axes=axes,
    keepdims=keepdims
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
# print(reduced)
# [[[2.23606798], [5.]]
#  [[7.81024968], [10.63014581]]
#  [[13.45362405], [16.2788206 ]]]

expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_keep_dims_example')

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_keep_dims_random')
default_axes_keepdims
shape = [3, 2, 2]
axes = None
keepdims = 1

node = onnx.helper.make_node(
    'ReduceL2',
    inputs=['data'],
    outputs=['reduced'],
    keepdims=keepdims
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sqrt(np.sum(
    a=np.square(data), axis=axes, keepdims=keepdims == 1))
# print(reduced)
# [[[25.49509757]]]

expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_default_axes_keepdims_example')

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
    a=np.square(data), axis=axes, keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_default_axes_keepdims_random')
negative_axes_keepdims
shape = [3, 2, 2]
axes = [-1]
keepdims = 1

node = onnx.helper.make_node(
    'ReduceL2',
    inputs=['data'],
    outputs=['reduced'],
    axes=axes,
    keepdims=keepdims
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
# print(reduced)
# [[[2.23606798], [5.]]
#  [[7.81024968], [10.63014581]]
#  [[13.45362405], [16.2788206 ]]]

expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_negative_axes_keep_dims_example')

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
    a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
       name='test_reduce_l2_negative_axes_keep_dims_random')
Differences
Compared with ReduceL2 - 11, version 13 only extends the type constraint T with tensor(bfloat16); the summary, attributes, inputs, and outputs are unchanged.
ReduceL2 - 11
Version
name: ReduceL2 (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
Computes the L2 norm of the input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the reduced dimensions are pruned from the resulting tensor.
This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.
Attributes
axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).
keepdims: Keep the reduced dimension or not; 1 means keep the reduced dimension. Default value is 1.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
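As a usage sketch (not part of the specification), the snippet below builds a minimal one-node model pinned to opset 11 and validates it with the ONNX checker; the graph name, value names, and shapes are illustrative assumptions:

import numpy as np
import onnx
from onnx import helper, TensorProto

# Illustrative one-node graph using ReduceL2 as defined in opset 11.
node = helper.make_node('ReduceL2', inputs=['data'], outputs=['reduced'],
                        axes=[1], keepdims=1)
graph = helper.make_graph(
    [node], 'reduce_l2_example',
    inputs=[helper.make_tensor_value_info('data', TensorProto.FLOAT, [3, 4])],
    outputs=[helper.make_tensor_value_info('reduced', TensorProto.FLOAT, [3, 1])],
)
# Pin the default domain to opset 11 so this version of the operator is used.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 11)])
onnx.checker.check_model(model)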
Differences
Compared with ReduceL2 - 1, version 11 only extends the axes attribute description with the accepted range [-r, r-1] where r = rank(data) (which permits negative axes); the summary, other attributes, inputs, outputs, and type constraints are unchanged.
ReduceL2 - 1
Version
name: ReduceL2 (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
Computes the L2 norm of the input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the reduced dimensions are pruned from the resulting tensor.
This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.
Attributes
axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor.
keepdims: Keep the reduced dimension or not; 1 means keep the reduced dimension. Default value is 1.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
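Since this first version does not document the [-r, r-1] range for axes, a minimal sketch of the default behavior, reducing over all dimensions with keepdims left at 1 and computed with the numpy formula used throughout this page, is shown below; the concrete values are illustrative:

import numpy as np

# Default ReduceL2 behavior: reduce over every dimension with keepdims=1,
# so a [3, 2, 2] input yields a [1, 1, 1] output holding the full L2 norm.
data = np.reshape(np.arange(1, 13, dtype=np.float32), [3, 2, 2])
reduced = np.sqrt(np.sum(np.square(data), axis=None, keepdims=True))
print(reduced.shape)  # (1, 1, 1)
print(reduced)        # [[[25.495098]]]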