ReduceL1
ReduceL1 - 13
Version
name: ReduceL1
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
Computes the L1 norm of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.
The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
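Concretely, for a single reduction axis a of a rank-r input (the multi-axis case sums over every reduced axis), the computation matches the numpy reference used in the examples below:

\[
\mathrm{reduced}[i_0,\dots,i_{a-1},i_{a+1},\dots,i_{r-1}]
  = \sum_{j=0}^{d_a-1} \bigl|\,\mathrm{data}[i_0,\dots,i_{a-1},j,i_{a+1},\dots,i_{r-1}]\,\bigr|
\]

where d_a is the size of axis a. With keepdims equal to 1, axis a is retained in the output with size 1 rather than removed.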
Attributes
axes: A list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).
keepdims: Keep the reduced dimension or not; the default value of 1 means keep the reduced dimension.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
Examples
_do_not_keepdims
import numpy as np
import onnx

# `expect` below is the helper from the ONNX backend-test harness (the node
# test cases in the onnx repository); it runs the node on the given inputs
# and checks the outputs.

shape = [3, 2, 2]
axes = [2]
keepdims = 0

node = onnx.helper.make_node(
    "ReduceL1",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[3., 7.], [11., 15.], [19., 23.]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_do_not_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_do_not_keepdims_random",
)
_keepdims
import numpy as np
import onnx

shape = [3, 2, 2]
axes = [2]
keepdims = 1

node = onnx.helper.make_node(
    "ReduceL1",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[[3.], [7.]], [[11.], [15.]], [[19.], [23.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_keep_dims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_keep_dims_random",
)
_default_axes_keepdims
import numpy as np
import onnx

shape = [3, 2, 2]
axes = None
keepdims = 1

node = onnx.helper.make_node(
    "ReduceL1", inputs=["data"], outputs=["reduced"], keepdims=keepdims
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sum(a=np.abs(data), axis=axes, keepdims=keepdims == 1)
# print(reduced)
# [[[78.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_default_axes_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(a=np.abs(data), axis=axes, keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_default_axes_keepdims_random",
)
_negative_axes_keepdims
import numpy as np
import onnx

shape = [3, 2, 2]
axes = [-1]
keepdims = 1

node = onnx.helper.make_node(
    "ReduceL1",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
# [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]

reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[[3.], [7.]], [[11.], [15.]], [[19.], [23.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_negative_axes_keep_dims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(a=np.abs(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_l1_negative_axes_keep_dims_random",
)
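The examples above rely on the `expect` helper from the ONNX backend-test harness, so they are not directly runnable on their own. The following is a minimal, self-contained sketch (not part of the official test cases) that builds a one-node model and evaluates it with onnx.reference.ReferenceEvaluator; it assumes onnx 1.13 or later, where that evaluator is available, and the graph and tensor names are illustrative only.

import numpy as np
import onnx
from onnx import TensorProto
from onnx.helper import (
    make_graph,
    make_model,
    make_node,
    make_opsetid,
    make_tensor_value_info,
)
from onnx.reference import ReferenceEvaluator

# One ReduceL1-13 node: axes is still an attribute at this opset.
node = make_node("ReduceL1", inputs=["data"], outputs=["reduced"], axes=[2], keepdims=1)
graph = make_graph(
    [node],
    "reduce_l1_sketch",  # illustrative graph name
    [make_tensor_value_info("data", TensorProto.FLOAT, [3, 2, 2])],
    [make_tensor_value_info("reduced", TensorProto.FLOAT, [3, 2, 1])],
)
model = make_model(graph, opset_imports=[make_opsetid("", 13)])
onnx.checker.check_model(model)

data = np.reshape(np.arange(1, 13, dtype=np.float32), [3, 2, 2])
result = ReferenceEvaluator(model).run(None, {"data": data})[0]
print(result)  # [[[3.], [7.]], [[11.], [15.]], [[19.], [23.]]]

# Cross-check against the numpy reference used throughout this page.
assert np.allclose(result, np.sum(np.abs(data), axis=(2,), keepdims=True))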
ReduceL1 - 11
Version
name: ReduceL1
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
Computes the L1 norm of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.
The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
Attributes
axes: A list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).
keepdims: Keep the reduced dimension or not; the default value of 1 means keep the reduced dimension.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
ReduceL1 - 1
Version
name: ReduceL1
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
Computes the L1 norm of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.
The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
Attributes
axes: A list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor.
keepdims: Keep the reduced dimension or not; the default value of 1 means keep the reduced dimension.
Inputs
data (heterogeneous) - T: An input tensor.
Outputs
reduced (heterogeneous) - T: Reduced output tensor.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.
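Unlike ReduceL1-11 and later, this version does not document support for negative axis values, so axes should be given in the range [0, r-1]. A small illustrative helper (hypothetical, not part of the ONNX API) that normalizes negative axes before building a node targeting opset 1:

import onnx

def normalize_axes(axes, rank):
    # Map negative axis values (e.g. -1) to their non-negative equivalents,
    # since opset-1 ReduceL1 only documents axes in [0, rank - 1].
    return [a % rank for a in axes]

data_rank = 3
node = onnx.helper.make_node(
    "ReduceL1",
    inputs=["data"],
    outputs=["reduced"],
    axes=normalize_axes([-1], data_rank),  # [-1] becomes [2]
    keepdims=1,
)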