AveragePool#
AveragePool - 11#
Version
name: AveragePool (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is computed as follows:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
if ceil_mode is enabled
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is still used, the output spatial shape is computed as follows:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape is computed as follows if SAME_UPPER or SAME_LOWER is used:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
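As an illustration of these formulas, the following is a minimal sketch (the helper name output_spatial_shape is hypothetical, not part of the onnx package) that computes the output spatial shape from the input shape, kernel shape, strides, explicit pads, and ceil_mode:
import math

def output_spatial_shape(input_spatial_shape, kernel_spatial_shape,
                         strides_spatial_shape, pads, ceil_mode=0):
    # pads follows the ONNX layout [x1_begin, x2_begin, ..., x1_end, x2_end, ...];
    # pad_shape[i] is the sum of pads along axis i.
    n = len(input_spatial_shape)
    pad_shape = [pads[i] + pads[i + n] for i in range(n)]
    rounding = math.ceil if ceil_mode else math.floor
    return [
        rounding(
            (input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i])
            / strides_spatial_shape[i]
            + 1
        )
        for i in range(n)
    ]

# A 5x5 input with a 5x5 kernel, stride 1 and pads [2, 2, 2, 2] keeps its
# spatial shape, matching the precomputed_pads example below.
print(output_spatial_shape([5, 5], [5, 5], [1, 1], [2, 2, 2, 2]))  # [5, 5]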
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER (see the sketch after this attribute list).
ceil_mode: Whether to use ceil or floor (default) to compute the output shape.
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (pad pixels are not counted).
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
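For the auto_pad attribute, here is a minimal sketch (the helper name same_auto_pad is hypothetical) of how SAME_UPPER and SAME_LOWER split the total padding between the two sides of each axis:
import math

def same_auto_pad(input_spatial_shape, kernel_spatial_shape,
                  strides_spatial_shape, auto_pad="SAME_UPPER"):
    # Returns pads in the ONNX layout [x1_begin, x2_begin, ..., x1_end, x2_end, ...].
    begins, ends = [], []
    for i, size in enumerate(input_spatial_shape):
        out = math.ceil(size / strides_spatial_shape[i])
        total = max(
            (out - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - size, 0
        )
        half = total // 2
        if auto_pad == "SAME_UPPER":
            begins.append(half)          # any extra padding goes at the end
            ends.append(total - half)
        else:  # SAME_LOWER: any extra padding goes at the beginning
            begins.append(total - half)
            ends.append(half)
    return begins + ends

# A 5x5 input, 3x3 kernel and stride 2 under SAME_UPPER gives pads [1, 1, 1, 1],
# matching the precomputed_same_upper example below.
print(same_auto_pad([5, 5], [3, 3], [2, 2]))  # [1, 1, 1, 1]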
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output data tensor from average pooling across the input tensor. Dimensions will vary based on kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Examples
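The snippets below are taken from the ONNX backend test cases. The names expect, pool, get_output_shape and get_pad_shape are helper utilities from that test suite rather than part of the public onnx API; to run a snippet standalone, those helpers have to be imported or reimplemented. As a rough orientation, a minimal NumPy-only sketch of 2-D average pooling over an already padded input (avg_pool_2d is a hypothetical stand-in for the pool helper, with count_include_pad=0 behaviour) could look as follows:
import numpy as np

def avg_pool_2d(padded, kernel_shape, strides):
    # padded: tensor of shape (N, C, H, W) where padded positions are np.nan,
    # mirroring how the examples below pad with constant_values=np.nan.
    n, c, h, w = padded.shape
    kh, kw = kernel_shape
    sh, sw = strides
    oh = (h - kh) // sh + 1
    ow = (w - kw) // sw + 1
    y = np.zeros((n, c, oh, ow), dtype=padded.dtype)
    for i in range(oh):
        for j in range(ow):
            window = padded[:, :, i * sh : i * sh + kh, j * sw : j * sw + kw]
            # nanmean averages only the non-padded elements, which is what
            # count_include_pad=0 specifies.
            y[:, :, i, j] = np.nanmean(window, axis=(2, 3))
    return y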
_averagepool_2d_precomputed_pads
import numpy as np
import onnx
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 5, 5]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[5, 5],
pads=[2, 2, 2, 2],
)
x = np.array(
[
[
[
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
]
]
]
).astype(np.float32)
y = np.array(
[
[
[
[7, 7.5, 8, 8.5, 9],
[9.5, 10, 10.5, 11, 11.5],
[12, 12.5, 13, 13.5, 14],
[14.5, 15, 15.5, 16, 16.5],
[17, 17.5, 18, 18.5, 19],
]
]
]
).astype(np.float32)
expect(
node, inputs=[x], outputs=[y], name="test_averagepool_2d_precomputed_pads"
)
_averagepool_2d_precomputed_pads_count_include_pad
import numpy as np
import onnx
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 5, 5]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[5, 5],
pads=[2, 2, 2, 2],
count_include_pad=1,
)
x = np.array(
[
[
[
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
]
]
]
).astype(np.float32)
y = np.array(
[
[
[
[2.5200, 3.6000, 4.8000, 4.0800, 3.2400],
[4.5600, 6.4000, 8.4000, 7.0400, 5.5200],
[7.2000, 10.0000, 13.0000, 10.8000, 8.4000],
[6.9600, 9.6000, 12.4000, 10.2400, 7.9200],
[6.1200, 8.4000, 10.8000, 8.8800, 6.8400],
]
]
]
).astype(np.float32)
expect(
node,
inputs=[x],
outputs=[y],
name="test_averagepool_2d_precomputed_pads_count_include_pad",
)
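For the top-left output value, the 5x5 window anchored at the padded corner covers only the nine in-bounds elements 1, 2, 3, 6, 7, 8, 11, 12, 13, which sum to 63. With count_include_pad=1 the sum is divided by the full kernel size 25, giving 63 / 25 = 2.52; in the previous example, where count_include_pad defaults to 0, the same sum is divided by the 9 valid elements, giving 7.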
_averagepool_2d_precomputed_strides
import numpy as np
import onnx
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2, 2],
strides=[2, 2],
)
x = np.array(
[
[
[
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
]
]
]
).astype(np.float32)
y = np.array([[[[4, 6], [14, 16]]]]).astype(np.float32)
expect(
node,
inputs=[x],
outputs=[y],
name="test_averagepool_2d_precomputed_strides",
)
_averagepool_2d_precomputed_same_upper
import numpy as np
import onnx
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 3, 3]
pad_shape: [2, 2] -> [1, 1, 1, 1] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[3, 3],
strides=[2, 2],
auto_pad="SAME_UPPER",
)
x = np.array(
[
[
[
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
]
]
]
).astype(np.float32)
y = np.array([[[[4, 5.5, 7], [11.5, 13, 14.5], [19, 20.5, 22]]]]).astype(
np.float32
)
expect(
node,
inputs=[x],
outputs=[y],
name="test_averagepool_2d_precomputed_same_upper",
)
_averagepool_1d_default
import numpy as np
import onnx
"""
input_shape: [1, 3, 32]
output_shape: [1, 3, 31]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2],
)
x = np.random.randn(1, 3, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = [2]
strides = [1]
out_shape = get_output_shape("VALID", x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, [0], "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_1d_default")
_averagepool_2d_default
import numpy as np
import onnx
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 31, 31]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2, 2],
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape("VALID", x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, (0, 0), "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_default")
_averagepool_3d_default
import numpy as np
import onnx
"""
input_shape: [1, 3, 32, 32, 32]
output_shape: [1, 3, 31, 31, 31]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2, 2, 2],
)
x = np.random.randn(1, 3, 32, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = [2, 2, 2]
strides = [1, 1, 1]
out_shape = get_output_shape("VALID", x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, [0, 0, 0], "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_3d_default")
_averagepool_2d_same_upper
import numpy as np
import onnx
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 32, 32]
pad_shape: [1, 1] -> [0, 1, 0, 1] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2, 2],
auto_pad="SAME_UPPER",
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape("SAME_UPPER", x_shape[2:], kernel_shape, strides)
pad_shape = get_pad_shape(
"SAME_UPPER", x_shape[2:], kernel_shape, strides, out_shape
)
pad_top = pad_shape[0] // 2
pad_bottom = pad_shape[0] - pad_top
pad_left = pad_shape[1] // 2
pad_right = pad_shape[1] - pad_left
padded = np.pad(
x,
((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)),
mode="constant",
constant_values=np.nan,
)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_same_upper")
_averagepool_2d_same_lower
import numpy as np
import onnx
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 32, 32]
pad_shape: [1, 1] -> [1, 0, 1, 0] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[2, 2],
auto_pad="SAME_LOWER",
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape("SAME_LOWER", x_shape[2:], kernel_shape, strides)
pad_shape = get_pad_shape(
"SAME_LOWER", x_shape[2:], kernel_shape, strides, out_shape
)
pad_bottom = pad_shape[0] // 2
pad_top = pad_shape[0] - pad_bottom
pad_right = pad_shape[1] // 2
pad_left = pad_shape[1] - pad_right
padded = np.pad(
x,
((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)),
mode="constant",
constant_values=np.nan,
)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_same_lower")
_averagepool_2d_pads
import numpy as np
import onnx
"""
input_shape: [1, 3, 28, 28]
output_shape: [1, 3, 30, 30]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[3, 3],
pads=[2, 2, 2, 2],
)
x = np.random.randn(1, 3, 28, 28).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (3, 3)
strides = (1, 1)
pad_bottom = 2
pad_top = 2
pad_right = 2
pad_left = 2
pad_shape = [pad_top + pad_bottom, pad_left + pad_right]
out_shape = get_output_shape(
"VALID", np.add(x_shape[2:], pad_shape), kernel_shape, strides
)
padded = np.pad(
x,
((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)),
mode="constant",
constant_values=np.nan,
)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_pads")
_averagepool_2d_pads_count_include_pad
import numpy as np
import onnx
"""
input_shape: [1, 3, 28, 28]
output_shape: [1, 3, 30, 30]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[3, 3],
pads=[2, 2, 2, 2],
count_include_pad=1,
)
x = np.random.randn(1, 3, 28, 28).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (3, 3)
strides = (1, 1)
pad_bottom = 2
pad_top = 2
pad_right = 2
pad_left = 2
pad_shape = [pad_top + pad_bottom, pad_left + pad_right]
out_shape = get_output_shape(
"VALID", np.add(x_shape[2:], pad_shape), kernel_shape, strides
)
padded = np.pad(
x,
((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)),
mode="constant",
constant_values=0,
)
y = pool(
padded,
x_shape,
kernel_shape,
strides,
out_shape,
pad_shape,
"AVG",
count_include_pad=1,
)
expect(
node,
inputs=[x],
outputs=[y],
name="test_averagepool_2d_pads_count_include_pad",
)
_averagepool_2d_strides
import numpy as np
import onnx
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 10, 10]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[5, 5],
strides=[3, 3],
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (5, 5)
strides = (3, 3)
out_shape = get_output_shape("VALID", x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, (0, 0), "AVG")
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_strides")
_averagepool_2d_ceil
import numpy as np
import onnx
"""
input_shape: [1, 1, 4, 4]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
"AveragePool",
inputs=["x"],
outputs=["y"],
kernel_shape=[3, 3],
strides=[2, 2],
ceil_mode=True,
)
x = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
]
).astype(np.float32)
y = np.array([[[[6, 7.5], [12, 13.5]]]]).astype(np.float32)
expect(node, inputs=[x], outputs=[y], name="test_averagepool_2d_ceil")
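To run one of these nodes outside the test harness, one option is to wrap it in a model and evaluate it with onnx.reference.ReferenceEvaluator; the following is a minimal sketch, assuming an onnx release recent enough to ship the reference evaluator:
import numpy as np
import onnx
from onnx.reference import ReferenceEvaluator

node = onnx.helper.make_node(
    "AveragePool", inputs=["x"], outputs=["y"], kernel_shape=[2, 2], strides=[2, 2]
)
graph = onnx.helper.make_graph(
    [node],
    "averagepool_example",
    [onnx.helper.make_tensor_value_info("x", onnx.TensorProto.FLOAT, [1, 1, 4, 4])],
    [onnx.helper.make_tensor_value_info("y", onnx.TensorProto.FLOAT, [1, 1, 2, 2])],
)
model = onnx.helper.make_model(
    graph, opset_imports=[onnx.helper.make_opsetid("", 11)]
)

x = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
# Each non-overlapping 2x2 window is averaged: [[2.5, 4.5], [10.5, 12.5]].
print(ReferenceEvaluator(model).run(None, {"x": x})[0])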
AveragePool - 10#
Version
name: AveragePool (GitHub)
domain: main
since_version: 10
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 10.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is computed as follows:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
if ceil_mode is enabled
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is still used, the output spatial shape is computed as follows:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape is computed as follows if SAME_UPPER or SAME_LOWER is used:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, add the extra padding at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding.
ceil_mode: Whether to use ceil or floor (default) to compute the output shape.
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (pad pixels are not counted).
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output data tensor from average pooling across the input tensor. Dimensions will vary based on kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
AveragePool - 7#
Version
name: AveragePool (GitHub)
domain: main
since_version: 7
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 7.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is computed as follows:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is still used, the output spatial shape is computed as follows:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape is computed as follows if SAME_UPPER or SAME_LOWER is used:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, add the extra padding at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding.
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (pad pixels are not counted).
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output data tensor from average pooling across the input tensor. Dimensions will vary based on kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
AveragePool - 1#
Version
name: AveragePool (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is computed as follows:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is still used, the output spatial shape is computed as follows:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape is computed as follows if SAME_UPPER or SAME_LOWER is used:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements, excluding padding.
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, add the extra padding at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding.
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output data tensor from average pooling across the input tensor. Dimensions will vary based on kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.