Resize
Resize - 18
Version
name: Resize (GitHub)
domain: main
since_version: 18
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 18.
Summary
Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average of a neighborhood (a.k.a. sampling locations) in the input tensor. If the input "sizes" is not specified, each dimension value of the output tensor is:
output_dimension = floor(input_dimension * (roi_end - roi_start) * scale)
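For instance, with the default full-axis RoI (roi_start = 0, roi_end = 1), an axis of length 4 resized with scale 0.6 yields floor(4 * 1 * 0.6) = 2. A minimal sketch of this formula (the helper name is ours, not part of the operator):

import math

# Output length along one axis when "sizes" is not given; roi defaults to the
# full axis [0, 1], so (roi_end - roi_start) is 1 unless tf_crop_and_resize is used.
def output_dim(input_dim, scale, roi_start=0.0, roi_end=1.0):
    return math.floor(input_dim * (roi_end - roi_start) * scale)

print(output_dim(4, 0.6))  # 2, matching the downsampling examples below
print(output_dim(2, 3.0))  # 6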
Attributes
antialias: If set to 1, “linear” and “cubic” interpolation modes will use an antialiasing filter when downscaling. Antialiasing is achieved by stretching the resampling filter by a factor max(1, 1 / scale), which means that when downsampling, more input pixels contribute to an output pixel.
axes: If provided, it specifies a subset of axes that ‘roi’, ‘scales’ and ‘sizes’ refer to. If not provided, all axes are assumed [0, 1, …, r-1], where r = rank(data). Non-specified dimensions are interpreted as non-resizable. Negative value means counting dimensions from the back. Accepted range is [-r, r-1], where r = rank(data). Behavior is undefined if an axis is repeated.
coordinate_transformation_mode: This attribute describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. The coordinate of each dimension is transformed individually. Let's describe a case using axis x as an example. Denote x_resized as the coordinate of axis x in the resized tensor, x_original as the coordinate of axis x in the original tensor, length_original as the length of the original tensor in axis x, length_resized as the length of the resized tensor in axis x, roi_x = (start_x, end_x) of the axis x in input "roi", and scale = length_resized / length_original. Then:
if coordinate_transformation_mode is "half_pixel": x_original = (x_resized + 0.5) / scale - 0.5
if coordinate_transformation_mode is "pytorch_half_pixel": x_original = length_resized > 1 ? (x_resized + 0.5) / scale - 0.5 : 0
if coordinate_transformation_mode is "align_corners": x_original = x_resized * (length_original - 1) / (length_resized - 1)
if coordinate_transformation_mode is "asymmetric": x_original = x_resized / scale
if coordinate_transformation_mode is "tf_crop_and_resize": x_original = length_resized > 1 ? start_x * (length_original - 1) + x_resized * (end_x - start_x) * (length_original - 1) / (length_resized - 1) : 0.5 * (start_x + end_x) * (length_original - 1)
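A minimal sketch of these transforms (a hypothetical helper, not the ONNX reference implementation); start_x and end_x only matter for "tf_crop_and_resize":

def transform_coordinate(x_resized, scale, length_original, length_resized,
                         mode="half_pixel", start_x=0.0, end_x=1.0):
    # Maps an output (resized) coordinate back to an input (original) coordinate,
    # following the formulas listed above.
    if mode == "half_pixel":
        return (x_resized + 0.5) / scale - 0.5
    if mode == "pytorch_half_pixel":
        return (x_resized + 0.5) / scale - 0.5 if length_resized > 1 else 0.0
    if mode == "align_corners":
        return x_resized * (length_original - 1) / (length_resized - 1)
    if mode == "asymmetric":
        return x_resized / scale
    if mode == "tf_crop_and_resize":
        if length_resized > 1:
            return (start_x * (length_original - 1)
                    + x_resized * (end_x - start_x) * (length_original - 1)
                    / (length_resized - 1))
        return 0.5 * (start_x + end_x) * (length_original - 1)
    raise ValueError(f"unsupported coordinate_transformation_mode: {mode!r}")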
cubic_coeff_a: The coefficient 'a' used in cubic interpolation. Two common choices are -0.5 (in some cases of TensorFlow) and -0.75 (in PyTorch). Check out Equation (4) in https://ieeexplore.ieee.org/document/1163711 for the details. This attribute is valid only if mode is "cubic".
exclude_outside: If set to 1, the weight of sampling locations outside the tensor will be set to 0 and the remaining weights will be renormalized so that they sum to 1.0. The default value is 0.
extrapolation_value: When coordinate_transformation_mode is “tf_crop_and_resize” and x_original is outside the range [0, length_original - 1], this value is used as the corresponding output value. Default is 0.0f.
keep_aspect_ratio_policy: This attribute describes how to interpret the sizes input with regard to keeping the original aspect ratio of the input, and it is not applicable when the scales input is used. Given a set of sizes, associated with a subset of axes (explicitly provided or default), and assuming d = axes[i], with i being the index of the provided sizes:
If keep_aspect_ratio_policy is "stretch", the original aspect ratio is disregarded, and the input is resized to the specified size: out_size[d] = sizes[i]
If keep_aspect_ratio_policy is "not_larger", the sizes are adjusted so that no extent of the output is larger than the specified size, while keeping the original aspect ratio: scale = Min(sizes[i] / in_size[d]); out_size[d] = round_int(scale * in_size[d])
If keep_aspect_ratio_policy is "not_smaller", the sizes are adjusted so that no extent of the output is smaller than the specified size, while keeping the original aspect ratio: scale = Max(sizes[i] / in_size[d]); out_size[d] = round_int(scale * in_size[d])
For non-resizable axes (those not specified in axes), the output size will be equal to the input size. Note: round_int stands for computing the nearest integer value, rounding halfway cases up.
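A minimal sketch of this size adjustment (a hypothetical helper, not the reference implementation):

import math

def adjust_output_sizes(in_size, sizes, axes, keep_aspect_ratio_policy="stretch"):
    # Non-resizable axes (not listed in 'axes') keep the input size.
    out_size = list(in_size)
    if keep_aspect_ratio_policy == "stretch":
        for i, d in enumerate(axes):
            out_size[d] = sizes[i]
        return out_size
    ratios = [sizes[i] / in_size[d] for i, d in enumerate(axes)]
    scale = min(ratios) if keep_aspect_ratio_policy == "not_larger" else max(ratios)
    for d in axes:
        # round_int: nearest integer, halfway cases rounded up
        out_size[d] = math.floor(scale * in_size[d] + 0.5)
    return out_size

# With a 1x1x2x2 input and sizes=[7, 8] on axes [2, 3]:
# "not_larger"  -> scale = min(3.5, 4.0) = 3.5 -> output 1x1x7x7
# "not_smaller" -> scale = max(3.5, 4.0) = 4.0 -> output 1x1x8x8
print(adjust_output_sizes([1, 1, 2, 2], [7, 8], [2, 3], "not_larger"))   # [1, 1, 7, 7]
print(adjust_output_sizes([1, 1, 2, 2], [7, 8], [2, 3], "not_smaller"))  # [1, 1, 8, 8]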
mode: Three interpolation modes: “nearest” (default), “linear” and “cubic”. The “linear” mode includes linear interpolation for 1D tensor and N-linear interpolation for N-D tensor (for example, bilinear interpolation for 2D tensor). The “cubic” mode includes cubic interpolation for 1D tensor and N-cubic interpolation for N-D tensor (for example, bicubic interpolation for 2D tensor).
nearest_mode: Four modes: "round_prefer_floor" (default, also known as round half down), "round_prefer_ceil" (also known as round half up), "floor", "ceil". Only used by nearest interpolation. It indicates how to get the "nearest" pixel in the input tensor from x_original, so this attribute is valid only if "mode" is "nearest".
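A minimal sketch of these rounding modes (a hypothetical helper, for illustration only):

import math

def nearest_index(x_original, nearest_mode="round_prefer_floor"):
    # Selects the input index used as the "nearest" pixel for a given x_original.
    if nearest_mode == "round_prefer_floor":   # round half down: 2.5 -> 2
        return math.ceil(x_original - 0.5)
    if nearest_mode == "round_prefer_ceil":    # round half up: 2.5 -> 3
        return math.floor(x_original + 0.5)
    if nearest_mode == "floor":
        return math.floor(x_original)
    if nearest_mode == "ceil":
        return math.ceil(x_original)
    raise ValueError(f"unsupported nearest_mode: {nearest_mode!r}")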
Inputs
Between 1 and 4 inputs.
X (heterogeneous) - T1: N-D tensor
roi (optional, heterogeneous) - T2: 1-D tensor given as [start1, …, startN, end1, …, endN], where N is the rank of X or the length of axes, if provided. The RoIs’ coordinates are normalized in the coordinate system of the input image. It only takes effect when coordinate_transformation_mode is “tf_crop_and_resize”
scales (optional, heterogeneous) - tensor(float): The scale array along each dimension. Each value must be greater than 0: a value less than 1 downsamples that dimension, while a value greater than 1 upsamples it. The number of elements of 'scales' should be the same as the rank of input 'X' or the length of 'axes', if provided. One of 'scales' and 'sizes' MUST be specified and it is an error if both are specified. If 'sizes' is needed, the user can use an empty string as the name of 'scales' in this operator's input list.
sizes (optional, heterogeneous) - tensor(int64): Target size of the output tensor. Its interpretation depends on the 'keep_aspect_ratio_policy' value. The number of elements of 'sizes' should be the same as the rank of input 'X', or the length of 'axes', if provided. Only one of 'scales' and 'sizes' can be specified.
Outputs
Y (heterogeneous) - T1: N-D tensor after resizing
Type Constraints
T1 in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input ‘X’ and output ‘Y’ to all tensor types.
T2 in ( tensor(double), tensor(float), tensor(float16) ): Constrain roi type to float or double.
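A minimal end-to-end sketch of running a Resize node with ONNX's reference evaluator (assuming a recent onnx release where onnx.reference.ReferenceEvaluator is available; graph and tensor names are ours):

import numpy as np
import onnx
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# Upsample a 1x1x2x2 tensor by 2x along H and 3x along W using nearest interpolation.
node = helper.make_node("Resize", inputs=["X", "", "scales"], outputs=["Y"], mode="nearest")
graph = helper.make_graph(
    [node],
    "resize_sketch",
    [
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 1, 2, 2]),
        helper.make_tensor_value_info("scales", TensorProto.FLOAT, [4]),
    ],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])

sess = ReferenceEvaluator(model)
x = np.array([[[[1, 2], [3, 4]]]], dtype=np.float32)
scales = np.array([1.0, 1.0, 2.0, 3.0], dtype=np.float32)
print(sess.run(None, {"X": x, "scales": scales})[0].shape)  # (1, 1, 4, 6)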
Examples
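The snippets below are taken from ONNX's backend test cases. They rely on helper functions that are not imported in each snippet: expect (the test harness helper) and the reference interpolation helpers interpolate_nd, nearest_coeffs, linear_coeffs, cubic_coeffs and their *_antialias variants. The import below is an assumption about the module layout in recent onnx releases and may differ between versions:

# Assumed location of the helpers used by the examples below;
# the exact module paths may vary across onnx versions.
from onnx.backend.test.case.node import expect
from onnx.reference.ops.op_resize import (
    _cubic_coeffs as cubic_coeffs,
    _cubic_coeffs_antialias as cubic_coeffs_antialias,
    _interpolate_nd as interpolate_nd,
    _linear_coeffs as linear_coeffs,
    _linear_coeffs_antialias as linear_coeffs_antialias,
    _nearest_coeffs as nearest_coeffs,
)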
_resize_upsample_scales_nearest
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="nearest",
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 3.0], dtype=np.float32)
# [[[[1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2.]
# [3. 3. 3. 4. 4. 4.]
# [3. 3. 3. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_nearest",
)
_resize_downsample_scales_nearest
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="nearest",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.6, 0.6], dtype=np.float32)
# [[[[1. 3.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_nearest",
)
_resize_upsample_sizes_nearest
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 7, 8], dtype=np.int64)
# [[[[1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest",
)
_resize_downsample_sizes_nearest
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 1, 3], dtype=np.int64)
# [[[[1. 2. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_nearest",
)
_resize_upsample_scales_linear
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="linear",
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[1. 1.25 1.75 2. ]
# [1.5 1.75 2.25 2.5 ]
# [2.5 2.75 3.25 3.5 ]
# [3. 3.25 3.75 4. ]]]]
output = interpolate_nd(
data, lambda x, _: linear_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_linear",
)
_resize_upsample_scales_linear_align_corners
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="align_corners",
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[1. 1.33333333 1.66666667 2. ]
# [1.66666667 2. 2.33333333 2.66666667]
# [2.33333333 2.66666667 3. 3.33333333]
# [3. 3.33333333 3.66666667 4. ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
scale_factors=scales,
coordinate_transformation_mode="align_corners",
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_linear_align_corners",
)
_resize_downsample_scales_linear
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="linear",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.6, 0.6], dtype=np.float32)
# [[[[2.6666665 4.3333331]]]]
output = interpolate_nd(
data, lambda x, _: linear_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_linear",
)
_resize_downsample_scales_linear_align_corners
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="align_corners",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.6, 0.6], dtype=np.float32)
# [[[[1. 3.142857]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
scale_factors=scales,
coordinate_transformation_mode="align_corners",
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_linear_align_corners",
)
_resize_upsample_scales_cubic
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[ 0.47265625 0.76953125 1.24609375 1.875 2.28125
# 2.91015625 3.38671875 3.68359375]
# [ 1.66015625 1.95703125 2.43359375 3.0625 3.46875
# 4.09765625 4.57421875 4.87109375]
# [ 3.56640625 3.86328125 4.33984375 4.96875 5.375
# 6.00390625 6.48046875 6.77734375]
# [ 6.08203125 6.37890625 6.85546875 7.484375 7.890625
# 8.51953125 8.99609375 9.29296875]
# [ 7.70703125 8.00390625 8.48046875 9.109375 9.515625
# 10.14453125 10.62109375 10.91796875]
# [10.22265625 10.51953125 10.99609375 11.625 12.03125
# 12.66015625 13.13671875 13.43359375]
# [12.12890625 12.42578125 12.90234375 13.53125 13.9375
# 14.56640625 15.04296875 15.33984375]
# [13.31640625 13.61328125 14.08984375 14.71875 15.125
# 15.75390625 16.23046875 16.52734375]]]]
output = interpolate_nd(
data, lambda x, _: cubic_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_cubic",
)
_resize_upsample_scales_cubic_align_corners
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
coordinate_transformation_mode="align_corners",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[ 1. 1.34110787 1.80029155 2.32944606 2.67055394
# 3.19970845 3.65889213 4. ]
# [ 2.36443149 2.70553936 3.16472303 3.69387755 4.03498542
# 4.56413994 5.02332362 5.36443149]
# [ 4.20116618 4.54227405 5.00145773 5.53061224 5.87172012
# 6.40087464 6.86005831 7.20116618]
# [ 6.31778426 6.65889213 7.1180758 7.64723032 7.98833819
# 8.51749271 8.97667638 9.31778426]
# [ 7.68221574 8.02332362 8.48250729 9.01166181 9.35276968
# 9.8819242 10.34110787 10.68221574]
# [ 9.79883382 10.13994169 10.59912536 11.12827988 11.46938776
# 11.99854227 12.45772595 12.79883382]
# [11.63556851 11.97667638 12.43586006 12.96501458 13.30612245
# 13.83527697 14.29446064 14.63556851]
# [13. 13.34110787 13.80029155 14.32944606 14.67055394
# 15.19970845 15.65889213 16. ]]]]
output = interpolate_nd(
data,
lambda x, _: cubic_coeffs(x),
scale_factors=scales,
coordinate_transformation_mode="align_corners",
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_cubic_align_corners",
)
_resize_downsample_scales_cubic
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.8, 0.8], dtype=np.float32)
# [[[[ 1.47119141 2.78125 4.08251953]
# [ 6.71142578 8.02148438 9.32275391]
# [11.91650391 13.2265625 14.52783203]]]]
output = interpolate_nd(
data, lambda x, _: cubic_coeffs(x), scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_cubic",
)
_resize_downsample_scales_cubic_align_corners
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
coordinate_transformation_mode="align_corners",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.8, 0.8], dtype=np.float32)
# [[[[ 1. 2.39519159 3.79038317]
# [ 6.58076634 7.97595793 9.37114951]
# [12.16153268 13.55672427 14.95191585]]]]
output = interpolate_nd(
data,
lambda x, _: cubic_coeffs(x),
scale_factors=scales,
coordinate_transformation_mode="align_corners",
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_cubic_align_corners",
)
_resize_upsample_sizes_cubic
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="cubic",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 9, 10], dtype=np.int64)
# [[[[ 0.45507922 0.64057922 0.97157922 1.42257922 1.90732922
# 2.22332922 2.70807922 3.15907922 3.49007922 3.67557922]
# [ 1.39437963 1.57987963 1.91087963 2.36187963 2.84662963
# 3.16262963 3.64737963 4.09837963 4.42937963 4.61487963]
# [ 2.95130693 3.13680693 3.46780693 3.91880693 4.40355693
# 4.71955693 5.20430693 5.65530693 5.98630693 6.17180693]
# [ 5.20525069 5.39075069 5.72175069 6.17275069 6.65750069
# 6.97350069 7.45825069 7.90925069 8.24025069 8.42575069]
# [ 6.88975 7.07525 7.40625 7.85725 8.342
# 8.658 9.14275 9.59375 9.92475 10.11025 ]
# [ 8.57424931 8.75974931 9.09074931 9.54174931 10.02649931
# 10.34249931 10.82724931 11.27824931 11.60924931 11.79474931]
# [10.82819307 11.01369307 11.34469307 11.79569307 12.28044307
# 12.59644307 13.08119307 13.53219307 13.86319307 14.04869307]
# [12.38512037 12.57062037 12.90162037 13.35262037 13.83737037
# 14.15337037 14.63812037 15.08912037 15.42012037 15.60562037]
# [13.32442078 13.50992078 13.84092078 14.29192078 14.77667078
# 15.09267078 15.57742078 16.02842078 16.35942078 16.54492078]]]]
output = interpolate_nd(
data, lambda x, _: cubic_coeffs(x), output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_cubic",
)
_resize_downsample_sizes_cubic
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="cubic",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 3, 3], dtype=np.int64)
# [[[[ 1.63078704 3.00462963 4.37847222]
# [ 7.12615741 8.5 9.87384259]
# [12.62152778 13.99537037 15.36921296]]]]
output = interpolate_nd(
data, lambda x, _: cubic_coeffs(x), output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_cubic",
)
# TensorFlow v1 bicubic with half_pixel_centers=True
_resize_upsample_scales_cubic_A_n0p5_exclude_outside
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
cubic_coeff_a=-0.5,
exclude_outside=True,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[ 0.55882353 0.81494204 1.35698249 1.89705882 2.39705882
# 2.93713516 3.47917561 3.73529412]
# [ 1.58329755 1.83941606 2.38145651 2.92153285 3.42153285
# 3.96160918 4.50364964 4.75976814]
# [ 3.75145936 4.00757787 4.54961832 5.08969466 5.58969466
# 6.12977099 6.67181144 6.92792995]
# [ 5.91176471 6.16788321 6.70992366 7.25 7.75
# 8.29007634 8.83211679 9.08823529]
# [ 7.91176471 8.16788321 8.70992366 9.25 9.75
# 10.29007634 10.83211679 11.08823529]
# [10.07207005 10.32818856 10.87022901 11.41030534 11.91030534
# 12.45038168 12.99242213 13.24854064]
# [12.24023186 12.49635036 13.03839082 13.57846715 14.07846715
# 14.61854349 15.16058394 15.41670245]
# [13.26470588 13.52082439 14.06286484 14.60294118 15.10294118
# 15.64301751 16.18505796 16.44117647]]]]
output = interpolate_nd(
data,
lambda x, _: cubic_coeffs(x, A=-0.5),
scale_factors=scales,
exclude_outside=True,
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_cubic_A_n0p5_exclude_outside",
)
_resize_downsample_scales_cubic_A_n0p5_exclude_outside
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
cubic_coeff_a=-0.5,
exclude_outside=True,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.8, 0.8], dtype=np.float32)
# [[[[ 1.36812675 2.6695014 4.0133367 ]
# [ 6.57362535 7.875 9.2188353 ]
# [11.94896657 13.25034122 14.59417652]]]]
output = interpolate_nd(
data,
lambda x, _: cubic_coeffs(x, A=-0.5),
scale_factors=scales,
exclude_outside=True,
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_cubic_A_n0p5_exclude_outside",
)
# TensorFlow v1 bicubic with half_pixel_centers=False
_resize_upsample_scales_cubic_asymmetric
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
coordinate_transformation_mode="asymmetric",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 2.0, 2.0], dtype=np.float32)
# [[[[ 1. 1.40625 2. 2.5 3. 3.59375 4.
# 4.09375]
# [ 2.625 3.03125 3.625 4.125 4.625 5.21875 5.625
# 5.71875]
# [ 5. 5.40625 6. 6.5 7. 7.59375 8.
# 8.09375]
# [ 7. 7.40625 8. 8.5 9. 9.59375 10.
# 10.09375]
# [ 9. 9.40625 10. 10.5 11. 11.59375 12.
# 12.09375]
# [11.375 11.78125 12.375 12.875 13.375 13.96875 14.375
# 14.46875]
# [13. 13.40625 14. 14.5 15. 15.59375 16.
# 16.09375]
# [13.375 13.78125 14.375 14.875 15.375 15.96875 16.375
# 16.46875]]]]
output = interpolate_nd(
data,
lambda x, _: cubic_coeffs(x, A=-0.75),
scale_factors=scales,
coordinate_transformation_mode="asymmetric",
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_cubic_asymmetric",
)
_resize_tf_crop_and_resize
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "roi", "", "sizes"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="tf_crop_and_resize",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
# Note: for some rois, the result may differ from TF's due to floating point inaccuracy
roi = np.array([0, 0, 0.4, 0.6, 1, 1, 0.6, 0.8], dtype=np.float32)
sizes = np.array([1, 1, 3, 3], dtype=np.int64)
# [[[[ 7.6000004 7.9 8.2 ]
# [ 8.8 9.1 9.400001 ]
# [10. 10.3 10.6 ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
output_size=sizes,
roi=roi,
coordinate_transformation_mode="tf_crop_and_resize",
).astype(np.float32)
expect(
node,
inputs=[data, roi, sizes],
outputs=[output],
name="test_resize_tf_crop_and_resize",
)
_resize_tf_crop_and_resize_extrapolation_value
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "roi", "", "sizes"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="tf_crop_and_resize",
extrapolation_value=10.0,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
# Note: for some rois, the result may differ from TF's due to floating point inaccuracy
roi = np.array([0, 0, 0.4, 0.6, 1, 1, 1.2, 1.7], dtype=np.float32)
sizes = np.array([1, 1, 3, 3], dtype=np.int64)
# [[[[ 7.6000004 10. 10. ]
# [12.400001 10. 10. ]
# [10. 10. 10. ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
output_size=sizes,
roi=roi,
coordinate_transformation_mode="tf_crop_and_resize",
extrapolation_value=10.0,
).astype(np.float32)
expect(
node,
inputs=[data, roi, sizes],
outputs=[output],
name="test_resize_tf_crop_and_resize",
)
_resize_downsample_sizes_linear_pytorch_half_pixel
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="pytorch_half_pixel",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 3, 1], dtype=np.int64)
# [[[[ 1.6666666]
# [ 7. ]
# [12.333333 ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
output_size=sizes,
coordinate_transformation_mode="pytorch_half_pixel",
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_linear_pytorch_half_pixel",
)
_resize_upsample_sizes_nearest_floor_align_corners
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
coordinate_transformation_mode="align_corners",
nearest_mode="floor",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 8, 8], dtype=np.int64)
# [[[[ 1. 1. 1. 2. 2. 3. 3. 4.]
# [ 1. 1. 1. 2. 2. 3. 3. 4.]
# [ 1. 1. 1. 2. 2. 3. 3. 4.]
# [ 5. 5. 5. 6. 6. 7. 7. 8.]
# [ 5. 5. 5. 6. 6. 7. 7. 8.]
# [ 9. 9. 9. 10. 10. 11. 11. 12.]
# [ 9. 9. 9. 10. 10. 11. 11. 12.]
# [13. 13. 13. 14. 14. 15. 15. 16.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x, mode="floor"),
output_size=sizes,
coordinate_transformation_mode="align_corners",
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_floor_align_corners",
)
_resize_upsample_sizes_nearest_round_prefer_ceil_asymmetric
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
coordinate_transformation_mode="asymmetric",
nearest_mode="round_prefer_ceil",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 8, 8], dtype=np.int64)
# [[[[ 1. 2. 2. 3. 3. 4. 4. 4.]
# [ 5. 6. 6. 7. 7. 8. 8. 8.]
# [ 5. 6. 6. 7. 7. 8. 8. 8.]
# [ 9. 10. 10. 11. 11. 12. 12. 12.]
# [ 9. 10. 10. 11. 11. 12. 12. 12.]
# [13. 14. 14. 15. 15. 16. 16. 16.]
# [13. 14. 14. 15. 15. 16. 16. 16.]
# [13. 14. 14. 15. 15. 16. 16. 16.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x, mode="round_prefer_ceil"),
output_size=sizes,
coordinate_transformation_mode="asymmetric",
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_round_prefer_ceil_asymmetric",
)
_resize_upsample_sizes_nearest_ceil_half_pixel
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
coordinate_transformation_mode="half_pixel",
nearest_mode="ceil",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 8, 8], dtype=np.int64)
# [[[[ 1. 2. 2. 3. 3. 4. 4. 4.]
# [ 5. 6. 6. 7. 7. 8. 8. 8.]
# [ 5. 6. 6. 7. 7. 8. 8. 8.]
# [ 9. 10. 10. 11. 11. 12. 12. 12.]
# [ 9. 10. 10. 11. 11. 12. 12. 12.]
# [13. 14. 14. 15. 15. 16. 16. 16.]
# [13. 14. 14. 15. 15. 16. 16. 16.]
# [13. 14. 14. 15. 15. 16. 16. 16.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x, mode="ceil"), output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_ceil_half_pixel",
)
_resize_downsample_scales_linear_antialias
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="linear",
antialias=1,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.6, 0.6], dtype=np.float32)
# [[[[ 2.875 4.5 ]
# [ 9.375 11. ]]]]
output = interpolate_nd(
data, linear_coeffs_antialias, scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_linear_antialias",
)
_resize_downsample_sizes_linear_antialias
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="linear",
antialias=1,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 3, 3], dtype=np.int64)
# [[[[ 2.3636363 3.590909 4.818182 ]
# [ 7.2727275 8.5 9.727273 ]
# [12.181818 13.409091 14.636364 ]]]]
output = interpolate_nd(
data, linear_coeffs_antialias, output_size=sizes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_linear_antialias",
)
_resize_downsample_scales_cubic_antialias
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="cubic",
antialias=1,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
scales = np.array([1.0, 1.0, 0.6, 0.6], dtype=np.float32)
# [[[[ 2.5180721 4.2858863]
# [ 9.589329 11.357142 ]]]]
output = interpolate_nd(
data, cubic_coeffs_antialias, scale_factors=scales
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_downsample_scales_cubic_antialias",
)
_resize_downsample_sizes_cubic_antialias
import numpy as np
import onnx
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="cubic",
antialias=1,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 1, 3, 3], dtype=np.int64)
# [[[[ 1.7750092 3.1200073 4.4650054]
# [ 7.1550016 8.5 9.844998 ]
# [12.534994 13.8799925 15.224991 ]]]]
output = interpolate_nd(data, cubic_coeffs_antialias, output_size=sizes).astype(
np.float32
)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_cubic_antialias",
)
_resize_upsample_scales_nearest_axes_2_3
import numpy as np
import onnx
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="nearest",
axes=axes,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
scales = np.array([2.0, 3.0], dtype=np.float32)
# [[[[1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2.]
# [3. 3. 3. 4. 4. 4.]
# [3. 3. 3. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), scale_factors=scales, axes=axes
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_nearest_axes_2_3",
)
_resize_upsample_scales_nearest_axes_3_2
import numpy as np
import onnx
axes = [3, 2]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "scales"],
outputs=["Y"],
mode="nearest",
axes=axes,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
scales = np.array([3.0, 2.0], dtype=np.float32)
# [[[[1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2.]
# [3. 3. 3. 4. 4. 4.]
# [3. 3. 3. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), scale_factors=scales, axes=axes
).astype(np.float32)
expect(
node,
inputs=[data, scales],
outputs=[output],
name="test_resize_upsample_scales_nearest_axes_3_2",
)
_resize_upsample_sizes_nearest_axes_2_3
import numpy as np
import onnx
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
sizes = np.array([7, 8], dtype=np.int64)
# [[[[1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), output_size=sizes, axes=axes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_axes_2_3",
)
_resize_upsample_sizes_nearest_axes_3_2
import numpy as np
import onnx
axes = [3, 2]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
sizes = np.array([8, 7], dtype=np.int64)
# [[[[1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]]]]
output = interpolate_nd(
data, lambda x, _: nearest_coeffs(x), output_size=sizes, axes=axes
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_axes_3_2",
)
_resize_tf_crop_and_resize_axes_2_3
import numpy as np
import onnx
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "roi", "", "sizes"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="tf_crop_and_resize",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
# Note: for some rois, the result may differ from TF's due to floating point inaccuracy
roi = np.array([0.4, 0.6, 0.6, 0.8], dtype=np.float32)
sizes = np.array([3, 3], dtype=np.int64)
# [[[[ 7.6000004 7.9 8.2 ]
# [ 8.8 9.1 9.400001 ]
# [10. 10.3 10.6 ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
output_size=sizes,
roi=roi,
axes=axes,
coordinate_transformation_mode="tf_crop_and_resize",
).astype(np.float32)
expect(
node,
inputs=[data, roi, sizes],
outputs=[output],
name="test_resize_tf_crop_and_resize_axes_2_3",
)
_resize_tf_crop_and_resize_axes_3_2
import numpy as np
import onnx
axes = [3, 2]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "roi", "", "sizes"],
outputs=["Y"],
mode="linear",
coordinate_transformation_mode="tf_crop_and_resize",
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],
]
]
],
dtype=np.float32,
)
# Note: for some rois, the result may differ from TF's due to floating point inaccuracy
roi = np.array([0.6, 0.4, 0.8, 0.6], dtype=np.float32)
sizes = np.array([3, 3], dtype=np.int64)
# [[[[ 7.6000004 7.9 8.2 ]
# [ 8.8 9.1 9.400001 ]
# [10. 10.3 10.6 ]]]]
output = interpolate_nd(
data,
lambda x, _: linear_coeffs(x),
output_size=sizes,
roi=roi,
axes=axes,
coordinate_transformation_mode="tf_crop_and_resize",
).astype(np.float32)
expect(
node,
inputs=[data, roi, sizes],
outputs=[output],
name="test_resize_tf_crop_and_resize_axes_3_2",
)
_resize_upsample_sizes_nearest_not_larger
import numpy as np
import onnx
keep_aspect_ratio_policy = "not_larger"
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
sizes = np.array([7, 8], dtype=np.int64) # Results in 7x7
# [[[[1. 1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2.]
# [3. 3. 3. 3. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x),
output_size=sizes,
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_not_larger",
)
_resize_upsample_sizes_nearest_not_smaller
import numpy as np
import onnx
keep_aspect_ratio_policy = "not_smaller"
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
)
data = np.array(
[
[
[
[1, 2],
[3, 4],
]
]
],
dtype=np.float32,
)
sizes = np.array([7, 8], dtype=np.int64) # Results in 8x8
# [[[[1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 1. 2. 2. 2. 2.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]
# [3. 3. 3. 3. 4. 4. 4. 4.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x),
output_size=sizes,
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_upsample_sizes_nearest_not_larger",
)
_resize_downsample_sizes_nearest_not_larger
import numpy as np
import onnx
keep_aspect_ratio_policy = "not_larger"
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 3], dtype=np.int64) # Results in 1x2
# [[[[1. 3.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x),
output_size=sizes,
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_nearest_not_larger",
)
_resize_downsample_sizes_nearest_not_smaller
import numpy as np
import onnx
keep_aspect_ratio_policy = "not_smaller"
axes = [2, 3]
node = onnx.helper.make_node(
"Resize",
inputs=["X", "", "", "sizes"],
outputs=["Y"],
mode="nearest",
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
)
data = np.array(
[
[
[
[1, 2, 3, 4],
[5, 6, 7, 8],
]
]
],
dtype=np.float32,
)
sizes = np.array([1, 3], dtype=np.int64) # Results in 2x3
# [[[[1. 2. 4.]
# [5. 6. 8.]]]]
output = interpolate_nd(
data,
lambda x, _: nearest_coeffs(x),
output_size=sizes,
axes=axes,
keep_aspect_ratio_policy=keep_aspect_ratio_policy,
).astype(np.float32)
expect(
node,
inputs=[data, sizes],
outputs=[output],
name="test_resize_downsample_sizes_nearest_not_smaller",
)
Resize - 13
Version
name: Resize (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average of a neighborhood (a.k.a. sampling locations) in the input tensor. Each dimension value of the output tensor is:
output_dimension = floor(input_dimension * (roi_end - roi_start) * scale) if input "sizes" is not specified.
Attributes
coordinate_transformation_mode: This attribute describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. The coordinate of each dimension is transformed individually. Let's describe a case using axis x as an example. Denote x_resized as the coordinate of axis x in the resized tensor, x_original as the coordinate of axis x in the original tensor, length_original as the length of the original tensor in axis x, length_resized as the length of the resized tensor in axis x, roi_x = (start_x, end_x) of the axis x in input "roi", and scale = length_resized / length_original. Then:
if coordinate_transformation_mode is "half_pixel": x_original = (x_resized + 0.5) / scale - 0.5
if coordinate_transformation_mode is "pytorch_half_pixel": x_original = length_resized > 1 ? (x_resized + 0.5) / scale - 0.5 : 0
if coordinate_transformation_mode is "align_corners": x_original = x_resized * (length_original - 1) / (length_resized - 1)
if coordinate_transformation_mode is "asymmetric": x_original = x_resized / scale
if coordinate_transformation_mode is "tf_crop_and_resize": x_original = length_resized > 1 ? start_x * (length_original - 1) + x_resized * (end_x - start_x) * (length_original - 1) / (length_resized - 1) : 0.5 * (start_x + end_x) * (length_original - 1)
cubic_coeff_a: The coefficient 'a' used in cubic interpolation. Two common choices are -0.5 (in some cases of TensorFlow) and -0.75 (in PyTorch). Check out Equation (4) in https://ieeexplore.ieee.org/document/1163711 for the details. This attribute is valid only if "mode" is "cubic".
exclude_outside: If set to 1, the weight of sampling locations outside the tensor will be set to 0 and the remaining weights will be renormalized so that they sum to 1.0. The default value is 0.
extrapolation_value: When coordinate_transformation_mode is “tf_crop_and_resize” and x_original is outside the range [0, length_original - 1], this value is used as the corresponding output value. Default is 0.0f.
mode: Three interpolation modes: nearest (default), linear and cubic. The “linear” mode includes linear interpolation for 1D tensor and N-linear interpolation for N-D tensor (for example, bilinear interpolation for 2D tensor). The “cubic” mode includes cubic interpolation for 1D tensor and N-cubic interpolation for N-D tensor (for example, bicubic interpolation for 2D tensor).
nearest_mode: Four modes: round_prefer_floor (default, also known as round half down), round_prefer_ceil (also known as round half up), floor, ceil. Only used by nearest interpolation. It indicates how to get the "nearest" pixel in the input tensor from x_original, so this attribute is valid only if "mode" is "nearest".
Inputs
Between 1 and 4 inputs.
X (heterogeneous) - T1: N-D tensor
roi (optional, heterogeneous) - T2: 1-D tensor given as [start1, …, startN, end1, …, endN], where N is the rank of X. The RoIs’ coordinates are normalized in the coordinate system of the input image. It only takes effect when coordinate_transformation_mode is “tf_crop_and_resize”
scales (optional, heterogeneous) - tensor(float): The scale array along each dimension. Each value must be greater than 0: a value less than 1 downsamples that dimension, while a value greater than 1 upsamples it. The number of elements of 'scales' should be the same as the rank of input 'X'. One of 'scales' and 'sizes' MUST be specified and it is an error if both are specified. If 'sizes' is needed, the user can use an empty string as the name of 'scales' in this operator's input list.
sizes (optional, heterogeneous) - tensor(int64): The size of the output tensor. The number of elements of ‘sizes’ should be the same as the rank of input ‘X’. Only one of ‘scales’ and ‘sizes’ can be specified.
Outputs
Y (heterogeneous) - T1: N-D tensor after resizing
Type Constraints
T1 in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input ‘X’ and output ‘Y’ to all tensor types.
T2 in ( tensor(double), tensor(float), tensor(float16) ): Constrain roi type to float or double.
Resize - 11
Version
name: Resize (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average of a neighborhood (a.k.a. sampling locations) in the input tensor. Each dimension value of the output tensor is:
output_dimension = floor(input_dimension * (roi_end - roi_start) * scale) if input "sizes" is not specified.
Attributes
coordinate_transformation_mode: This attribute describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. The coordinate of each dimension is transformed individually. Let's describe a case using axis x as an example. Denote x_resized as the coordinate of axis x in the resized tensor, x_original as the coordinate of axis x in the original tensor, length_original as the length of the original tensor in axis x, length_resized as the length of the resized tensor in axis x, roi_x = (start_x, end_x) of the axis x in input "roi", and scale = length_resized / length_original. Then:
if coordinate_transformation_mode is "half_pixel": x_original = (x_resized + 0.5) / scale - 0.5
if coordinate_transformation_mode is "pytorch_half_pixel": x_original = length_resized > 1 ? (x_resized + 0.5) / scale - 0.5 : 0
if coordinate_transformation_mode is "align_corners": x_original = x_resized * (length_original - 1) / (length_resized - 1)
if coordinate_transformation_mode is "asymmetric": x_original = x_resized / scale
if coordinate_transformation_mode is "tf_half_pixel_for_nn": x_original = (x_resized + 0.5) / scale
if coordinate_transformation_mode is "tf_crop_and_resize": x_original = length_resized > 1 ? start_x * (length_original - 1) + x_resized * (end_x - start_x) * (length_original - 1) / (length_resized - 1) : 0.5 * (start_x + end_x) * (length_original - 1)
cubic_coeff_a: The coefficient 'a' used in cubic interpolation. Two common choices are -0.5 (in some cases of TensorFlow) and -0.75 (in PyTorch). Check out Equation (4) in https://ieeexplore.ieee.org/document/1163711 for the details. This attribute is valid only if "mode" is "cubic".
exclude_outside: If set to 1, the weight of sampling locations outside the tensor will be set to 0 and the remaining weights will be renormalized so that they sum to 1.0. The default value is 0.
extrapolation_value: When coordinate_transformation_mode is “tf_crop_and_resize” and x_original is outside the range [0, length_original - 1], this value is used as the corresponding output value. Default is 0.0f.
mode: Three interpolation modes: nearest (default), linear and cubic. The “linear” mode includes linear interpolation for 1D tensor and N-linear interpolation for N-D tensor (for example, bilinear interpolation for 2D tensor). The “cubic” mode includes cubic interpolation for 1D tensor and N-cubic interpolation for N-D tensor (for example, bicubic interpolation for 2D tensor).
nearest_mode: Four modes: round_prefer_floor (default, also known as round half down), round_prefer_ceil (also known as round half up), floor, ceil. Only used by nearest interpolation. It indicates how to get the "nearest" pixel in the input tensor from x_original, so this attribute is valid only if "mode" is "nearest".
Inputs
Between 3 and 4 inputs.
X (heterogeneous) - T1: N-D tensor
roi (heterogeneous) - T2: 1-D tensor given as [start1, …, startN, end1, …, endN], where N is the rank of X. The RoIs’ coordinates are normalized in the coordinate system of the input image. It only takes effect when coordinate_transformation_mode is “tf_crop_and_resize”
scales (heterogeneous) - tensor(float): The scale array along each dimension. Each value must be greater than 0: a value less than 1 downsamples that dimension, while a value greater than 1 upsamples it. The number of elements of 'scales' should be the same as the rank of input 'X'. If 'sizes' is needed, the user must set 'scales' to an empty tensor.
sizes (optional, heterogeneous) - tensor(int64): The size of the output tensor. The number of elements of ‘sizes’ should be the same as the rank of input ‘X’. May only be set if ‘scales’ is set to an empty tensor.
Outputs
Y (heterogeneous) - T1: N-D tensor after resizing
Type Constraints
T1 in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input ‘X’ and output ‘Y’ to all tensor types.
T2 in ( tensor(double), tensor(float), tensor(float16) ): Constrain roi type to float or double.
Resize - 10
Version
name: Resize (GitHub)
domain: main
since_version: 10
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 10.
Summary
Resize the input tensor. Each dimension value of the output tensor is:
output_dimension = floor(input_dimension * scale).
Attributes
mode: Two interpolation modes: nearest (default) and linear (including bilinear, trilinear, etc.)
Inputs
X (heterogeneous) - T: N-D tensor
scales (heterogeneous) - tensor(float): The scale array along each dimension. Each value must be greater than 0: a value less than 1 downsamples that dimension, while a value greater than 1 upsamples it. The number of elements of 'scales' should be the same as the rank of input 'X'.
Outputs
Y (heterogeneous) - T: N-D tensor after resizing
Type Constraints
T in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input ‘X’ and output ‘Y’ to all tensor types.