ReduceSumSquare

ReduceSumSquare - 13

Version

  • name: ReduceSumSquare (GitHub)

  • domain: main

  • since_version: 13

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 13.

Summary

Computes the sum square of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
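As a rough sketch (not the normative definition), the behavior can be expressed with NumPy; the helper name reduce_sum_square below is illustrative only:

import numpy as np

def reduce_sum_square(data, axes=None, keepdims=1):
    # Square every element, then sum along the requested axes.
    # axes=None reduces over all dimensions, mirroring the attribute default.
    axis = tuple(axes) if axes is not None else None
    return np.sum(np.square(data), axis=axis, keepdims=keepdims == 1)

x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
print(reduce_sum_square(x, axes=[1], keepdims=1))  # [[5.], [25.]] - rank preserved
print(reduce_sum_square(x, axes=[1], keepdims=0))  # [5., 25.] - axis 1 pruned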

Attributes

  • axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).

  • keepdims: Keep the reduced dimension or not; the default value of 1 means the reduced dimension is kept.

Inputs

  • data (heterogeneous) - T: An input tensor.

Outputs

  • reduced (heterogeneous) - T: Reduced output tensor.

Type Constraints

  • T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.

Examples

_do_not_keepdims

import numpy as np
import onnx
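# `expect` (not imported above) is the helper used by ONNX's node test
# definitions: it pairs the node with the given inputs and expected outputs
# to form a backend test case.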

shape = [3, 2, 2]
axes = [1]
keepdims = 0

node = onnx.helper.make_node(
    "ReduceSumSquare",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.array(
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]], dtype=np.float32
)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[10., 20.]
# [74., 100.]
# [202., 244.]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_do_not_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_do_not_keepdims_random",
)

_keepdims

import numpy as np
import onnx

shape = [3, 2, 2]
axes = [1]
keepdims = 1

node = onnx.helper.make_node(
    "ReduceSumSquare",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.array(
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]], dtype=np.float32
)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[[10., 20.]]
# [[74., 100.]]
# [[202., 244.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_keepdims_random",
)

_default_axes_keepdims

import numpy as np
import onnx

shape = [3, 2, 2]
axes = None
keepdims = 1

node = onnx.helper.make_node(
    "ReduceSumSquare", inputs=["data"], outputs=["reduced"], keepdims=keepdims
)

data = np.array(
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]], dtype=np.float32
)
reduced = np.sum(np.square(data), axis=axes, keepdims=keepdims == 1)
# print(reduced)
# [[[650.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_default_axes_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(np.square(data), axis=axes, keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_default_axes_keepdims_random",
)

_negative_axes_keepdims

import numpy as np
import onnx

shape = [3, 2, 2]
axes = [-2]
keepdims = 1

node = onnx.helper.make_node(
    "ReduceSumSquare",
    inputs=["data"],
    outputs=["reduced"],
    axes=axes,
    keepdims=keepdims,
)

data = np.array(
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]], dtype=np.float32
)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)
# print(reduced)
# [[[10., 20.]]
# [[74., 100.]]
# [[202., 244.]]]

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_negative_axes_keepdims_example",
)

np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sum(np.square(data), axis=tuple(axes), keepdims=keepdims == 1)

expect(
    node,
    inputs=[data],
    outputs=[reduced],
    name="test_reduce_sum_square_negative_axes_keepdims_random",
)
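To actually execute one of these nodes, it can be wrapped in a model and run with onnx's reference evaluator. The snippet below is a minimal, illustrative sketch (the model, graph, and tensor names are chosen here, not taken from the test suite), assuming onnx.reference.ReferenceEvaluator is available:

import numpy as np
import onnx
from onnx.reference import ReferenceEvaluator

# Wrap a single ReduceSumSquare-13 node in a model (names are illustrative).
node = onnx.helper.make_node(
    "ReduceSumSquare", inputs=["data"], outputs=["reduced"], axes=[1], keepdims=1
)
graph = onnx.helper.make_graph(
    [node],
    "reduce_sum_square_demo",
    [onnx.helper.make_tensor_value_info("data", onnx.TensorProto.FLOAT, [3, 2, 2])],
    [onnx.helper.make_tensor_value_info("reduced", onnx.TensorProto.FLOAT, None)],
)
model = onnx.helper.make_model(
    graph, opset_imports=[onnx.helper.make_opsetid("", 13)]
)
onnx.checker.check_model(model)

data = np.arange(1, 13, dtype=np.float32).reshape(3, 2, 2)
(reduced,) = ReferenceEvaluator(model).run(None, {"data": data})
assert np.allclose(reduced, np.sum(np.square(data), axis=(1,), keepdims=True))

Pinning the opset to 13 keeps axes as an attribute, matching the signature documented above; later opsets move axes to an optional input.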

ReduceSumSquare - 11

Version

  • name: ReduceSumSquare (GitHub)

  • domain: main

  • since_version: 11

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 11.

Summary

Computes the sum square of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.

Attributes

  • axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).

  • keepdims: Keep the reduced dimension or not; the default value of 1 means the reduced dimension is kept.

Inputs

  • data (heterogeneous) - T: An input tensor.

Outputs

  • reduced (heterogeneous) - T: Reduced output tensor.

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.

ReduceSumSquare - 1

Version

  • name: ReduceSumSquare (GitHub)

  • domain: main

  • since_version: 1

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 1.

Summary

Computes the sum square of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.

Attributes

  • axes: A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor.

  • keepdims: Keep the reduced dimension or not; the default value of 1 means the reduced dimension is kept.

Inputs

  • data (heterogeneous) - T: An input tensor.

Outputs

  • reduced (heterogeneous) - T: Reduced output tensor.

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64) ): Constrain input and output types to high-precision numeric tensors.