ConvTranspose
ConvTranspose - 11
Version
name: ConvTranspose (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided, the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified, in which case the pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads == SAME_UPPER): pads[start_i] = total_padding[i] / 2; pads[end_i] = total_padding[i] - (total_padding[i] / 2)
Else: pads[start_i] = total_padding[i] - (total_padding[i] / 2); pads[end_i] = total_padding[i] / 2
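As a worked illustration, the following sketch (hypothetical helper names, not part of the ONNX API) implements these two equations as written for this opset, using integer division for the /2 terms:
def convtranspose_output_shape(input_size, kernel_shape, strides, pads,
                               dilations=None, output_padding=None):
    # output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i]
    #                   + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
    n = len(input_size)
    dilations = dilations or [1] * n
    output_padding = output_padding or [0] * n
    return [
        strides[i] * (input_size[i] - 1) + output_padding[i]
        + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[i] - pads[i + n]
        for i in range(n)
    ]

def convtranspose_pads(input_size, kernel_shape, strides, output_shape,
                       dilations=None, output_padding=None, auto_pad="NOTSET"):
    # Derive pads ([x1_begin, x2_begin, ..., x1_end, x2_end, ...]) from an explicit output_shape.
    n = len(input_size)
    dilations = dilations or [1] * n
    output_padding = output_padding or [0] * n
    begin, end = [], []
    for i in range(n):
        total = (strides[i] * (input_size[i] - 1) + output_padding[i]
                 + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i])
        if auto_pad == "SAME_UPPER":
            begin.append(total // 2)
            end.append(total - total // 2)
        else:
            begin.append(total - total // 2)
            end.append(total // 2)
    return begin + end

# 3x3 input, 3x3 kernel, strides [3, 2], output_padding [1, 1]:
print(convtranspose_output_shape([3, 3], [3, 3], [3, 2], [0, 0, 0, 0],
                                 output_padding=[1, 1]))   # [10, 8]
print(convtranspose_pads([3, 3], [3, 3], [3, 2], [10, 8],
                         output_padding=[1, 1]))           # [0, 0, 0, 0]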
Attributes
auto_pad: auto_pad must be one of NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides as equally as possible; if the total padding for an axis is odd, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER.
dilations: dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: number of groups input channels and output channels are divided into.
kernel_shape: The shape of the convolution kernel. If not present, should be inferred from input W.
output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in “output_padding” must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn’t directly affect the computed output values. It only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If “output_shape” is explicitly provided, “output_padding” does not contribute additional size to “output_shape” but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which causes the pads values to be auto generated. If output_shape is specified, the pads values are ignored. See the Summary above for the equations used to generate pads.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
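Shape inference is supported for this operator. A minimal sketch (the graph and tensor names here are illustrative, not taken from the spec) that checks the shape ONNX infers for a ConvTranspose node built from these attributes:
from onnx import TensorProto, helper, shape_inference

# Illustrative 1x1x3x3 input and 1x2x3x3 weight, strides [3, 2], output_padding [1, 1].
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 1, 3, 3])
W = helper.make_tensor_value_info("W", TensorProto.FLOAT, [1, 2, 3, 3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)  # shape left to be inferred

node = helper.make_node(
    "ConvTranspose", ["X", "W"], ["Y"], strides=[3, 2], output_padding=[1, 1]
)
graph = helper.make_graph([node], "convtranspose_shape_demo", [X, W], [Y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])

inferred = shape_inference.infer_shapes(model)
print(inferred.graph.output[0].type.tensor_type.shape)
# Per the output_shape equation above: 1 x 2 x 10 x 8.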
Inputs
Between 2 and 3 inputs.
X (heterogeneous) - T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous) - T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous) - T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous) - T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Examples
default
import numpy as np
import onnx
x = np.array(
[[[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]]] # (1, 1, 3, 3)
).astype(np.float32)
W = np.array(
[
[
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], # (1, 2, 3, 3)
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
]
]
).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"])
y = np.array(
[
[
[
[0.0, 1.0, 3.0, 3.0, 2.0], # (1, 2, 5, 5)
[3.0, 8.0, 15.0, 12.0, 7.0],
[9.0, 21.0, 36.0, 27.0, 15.0],
[9.0, 20.0, 33.0, 24.0, 13.0],
[6.0, 13.0, 21.0, 15.0, 8.0],
],
[
[0.0, 1.0, 3.0, 3.0, 2.0],
[3.0, 8.0, 15.0, 12.0, 7.0],
[9.0, 21.0, 36.0, 27.0, 15.0],
[9.0, 20.0, 33.0, 24.0, 13.0],
[6.0, 13.0, 21.0, 15.0, 8.0],
],
]
]
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose")
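The expect helper used in these snippets comes from the ONNX backend test suite and is not imported above. A minimal way to run the same node yourself, sketched under the assumption of onnx >= 1.13 (whose onnx.reference.ReferenceEvaluator accepts a single NodeProto), is:
from onnx.reference import ReferenceEvaluator

# Evaluate the ConvTranspose node above directly on the numpy inputs.
sess = ReferenceEvaluator(node)
(result,) = sess.run(None, {"X": x, "W": W})
assert np.allclose(result, y)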
_convtranspose_1d
import numpy as np
import onnx
x = np.array([[[0.0, 1.0, 2.0]]]).astype(np.float32) # (1, 1, 3)
W = np.array([[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]]).astype( # (1, 2, 3)
np.float32
)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"])
y = np.array(
[[[0.0, 1.0, 3.0, 3.0, 2.0], [0.0, 1.0, 3.0, 3.0, 2.0]]] # (1, 2, 5)
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_1d")
_convtranspose_3d
import numpy as np
import onnx
x = np.array(
[
[
[
[
[0.0, 1.0, 2.0, 3.0, 4.0], # (1, 1, 3, 4, 5)
[5.0, 6.0, 7.0, 8.0, 9.0],
[10.0, 11.0, 12.0, 13.0, 14.0],
[15.0, 16.0, 17.0, 18.0, 19.0],
],
[
[20.0, 21.0, 22.0, 23.0, 24.0],
[25.0, 26.0, 27.0, 28.0, 29.0],
[30.0, 31.0, 32.0, 33.0, 34.0],
[35.0, 36.0, 37.0, 38.0, 39.0],
],
[
[40.0, 41.0, 42.0, 43.0, 44.0],
[45.0, 46.0, 47.0, 48.0, 49.0],
[50.0, 51.0, 52.0, 53.0, 54.0],
[55.0, 56.0, 57.0, 58.0, 59.0],
],
]
]
]
).astype(np.float32)
W = np.array(
[
[
[
[
[1.0, 1.0, 1.0], # (1, 2, 3, 3, 3)
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
],
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
],
[
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
],
]
]
).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"])
y = np.array(
[
[
[
[
[0.0, 1.0, 3.0, 6.0, 9.0, 7.0, 4.0], # (1, 2, 5, 6, 7)
[5.0, 12.0, 21.0, 27.0, 33.0, 24.0, 13.0],
[15.0, 33.0, 54.0, 63.0, 72.0, 51.0, 27.0],
[30.0, 63.0, 99.0, 108.0, 117.0, 81.0, 42.0],
[25.0, 52.0, 81.0, 87.0, 93.0, 64.0, 33.0],
[15.0, 31.0, 48.0, 51.0, 54.0, 37.0, 19.0],
],
[
[20.0, 42.0, 66.0, 72.0, 78.0, 54.0, 28.0],
[50.0, 104.0, 162.0, 174.0, 186.0, 128.0, 66.0],
[90.0, 186.0, 288.0, 306.0, 324.0, 222.0, 114.0],
[120.0, 246.0, 378.0, 396.0, 414.0, 282.0, 144.0],
[90.0, 184.0, 282.0, 294.0, 306.0, 208.0, 106.0],
[50.0, 102.0, 156.0, 162.0, 168.0, 114.0, 58.0],
],
[
[60.0, 123.0, 189.0, 198.0, 207.0, 141.0, 72.0],
[135.0, 276.0, 423.0, 441.0, 459.0, 312.0, 159.0],
[225.0, 459.0, 702.0, 729.0, 756.0, 513.0, 261.0],
[270.0, 549.0, 837.0, 864.0, 891.0, 603.0, 306.0],
[195.0, 396.0, 603.0, 621.0, 639.0, 432.0, 219.0],
[105.0, 213.0, 324.0, 333.0, 342.0, 231.0, 117.0],
],
[
[60.0, 122.0, 186.0, 192.0, 198.0, 134.0, 68.0],
[130.0, 264.0, 402.0, 414.0, 426.0, 288.0, 146.0],
[210.0, 426.0, 648.0, 666.0, 684.0, 462.0, 234.0],
[240.0, 486.0, 738.0, 756.0, 774.0, 522.0, 264.0],
[170.0, 344.0, 522.0, 534.0, 546.0, 368.0, 186.0],
[90.0, 182.0, 276.0, 282.0, 288.0, 194.0, 98.0],
],
[
[40.0, 81.0, 123.0, 126.0, 129.0, 87.0, 44.0],
[85.0, 172.0, 261.0, 267.0, 273.0, 184.0, 93.0],
[135.0, 273.0, 414.0, 423.0, 432.0, 291.0, 147.0],
[150.0, 303.0, 459.0, 468.0, 477.0, 321.0, 162.0],
[105.0, 212.0, 321.0, 327.0, 333.0, 224.0, 113.0],
[55.0, 111.0, 168.0, 171.0, 174.0, 117.0, 59.0],
],
],
[
[
[0.0, 1.0, 3.0, 6.0, 9.0, 7.0, 4.0],
[5.0, 12.0, 21.0, 27.0, 33.0, 24.0, 13.0],
[15.0, 33.0, 54.0, 63.0, 72.0, 51.0, 27.0],
[30.0, 63.0, 99.0, 108.0, 117.0, 81.0, 42.0],
[25.0, 52.0, 81.0, 87.0, 93.0, 64.0, 33.0],
[15.0, 31.0, 48.0, 51.0, 54.0, 37.0, 19.0],
],
[
[20.0, 42.0, 66.0, 72.0, 78.0, 54.0, 28.0],
[50.0, 104.0, 162.0, 174.0, 186.0, 128.0, 66.0],
[90.0, 186.0, 288.0, 306.0, 324.0, 222.0, 114.0],
[120.0, 246.0, 378.0, 396.0, 414.0, 282.0, 144.0],
[90.0, 184.0, 282.0, 294.0, 306.0, 208.0, 106.0],
[50.0, 102.0, 156.0, 162.0, 168.0, 114.0, 58.0],
],
[
[60.0, 123.0, 189.0, 198.0, 207.0, 141.0, 72.0],
[135.0, 276.0, 423.0, 441.0, 459.0, 312.0, 159.0],
[225.0, 459.0, 702.0, 729.0, 756.0, 513.0, 261.0],
[270.0, 549.0, 837.0, 864.0, 891.0, 603.0, 306.0],
[195.0, 396.0, 603.0, 621.0, 639.0, 432.0, 219.0],
[105.0, 213.0, 324.0, 333.0, 342.0, 231.0, 117.0],
],
[
[60.0, 122.0, 186.0, 192.0, 198.0, 134.0, 68.0],
[130.0, 264.0, 402.0, 414.0, 426.0, 288.0, 146.0],
[210.0, 426.0, 648.0, 666.0, 684.0, 462.0, 234.0],
[240.0, 486.0, 738.0, 756.0, 774.0, 522.0, 264.0],
[170.0, 344.0, 522.0, 534.0, 546.0, 368.0, 186.0],
[90.0, 182.0, 276.0, 282.0, 288.0, 194.0, 98.0],
],
[
[40.0, 81.0, 123.0, 126.0, 129.0, 87.0, 44.0],
[85.0, 172.0, 261.0, 267.0, 273.0, 184.0, 93.0],
[135.0, 273.0, 414.0, 423.0, 432.0, 291.0, 147.0],
[150.0, 303.0, 459.0, 468.0, 477.0, 321.0, 162.0],
[105.0, 212.0, 321.0, 327.0, 333.0, 224.0, 113.0],
[55.0, 111.0, 168.0, 171.0, 174.0, 117.0, 59.0],
],
],
]
]
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_3d")
_convtranspose_attributes
import numpy as np
import onnx
x = np.array(
[[[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]]] # (1, 1, 3, 3)
).astype(np.float32)
W = np.array(
[
[
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], # (1, 2, 3, 3)
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
]
]
).astype(np.float32)
y = np.array(
[
[
[
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0], # (1, 2, 10, 8)
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0],
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
],
[
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0],
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0],
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0, 2.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0, 5.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0, 8.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
],
]
]
).astype(np.float32)
node = onnx.helper.make_node(
"ConvTranspose", ["X", "W"], ["Y"], strides=[3, 2], output_shape=[10, 8]
)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_output_shape")
node = onnx.helper.make_node(
"ConvTranspose", ["X", "W"], ["Y"], strides=[3, 2], output_padding=[1, 1]
)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_pad")
node = onnx.helper.make_node(
"ConvTranspose",
["X", "W"],
["Y"],
name="test",
strides=[3, 2],
output_shape=[10, 8],
kernel_shape=[3, 3],
output_padding=[1, 1],
)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_kernel_shape")
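All three nodes above yield the same (1, 2, 10, 8) result: with strides [3, 2] and a 3x3 kernel, output_padding = [1, 1] gives 3 * (3 - 1) + 1 + 3 = 10 rows and 2 * (3 - 1) + 1 + 3 = 8 columns, which matches the explicit output_shape = [10, 8].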
_convtranspose_pads
import numpy as np
import onnx
x = np.array(
[[[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]]] # (1, 1, 3, 3)
).astype(np.float32)
W = np.array(
[
[
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], # (1, 2, 3, 3)
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
]
]
).astype(np.float32)
node = onnx.helper.make_node(
"ConvTranspose", ["X", "W"], ["Y"], strides=[3, 2], pads=[1, 2, 1, 2]
)
y = np.array(
[
[
[
[1.0, 1.0, 3.0], # (1, 2, 7, 3)
[1.0, 1.0, 3.0],
[7.0, 4.0, 9.0],
[7.0, 4.0, 9.0],
[7.0, 4.0, 9.0],
[13.0, 7.0, 15.0],
[13.0, 7.0, 15.0],
],
[
[1.0, 1.0, 3.0],
[1.0, 1.0, 3.0],
[7.0, 4.0, 9.0],
[7.0, 4.0, 9.0],
[7.0, 4.0, 9.0],
[13.0, 7.0, 15.0],
[13.0, 7.0, 15.0],
],
]
]
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_pads")
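As a check against the output-shape equation in the Summary: 3 * (3 - 1) + 3 - 1 - 1 = 7 for the height axis and 2 * (3 - 1) + 3 - 2 - 2 = 3 for the width axis, giving the (1, 2, 7, 3) output above.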
_convtranspose_dilations
import numpy as np
import onnx
x = np.array(
[[[[3.0, 8.0, 1.0], [9.0, 5.0, 7.0], [3.0, 2.0, 6.0]]]] # (1, 1, 3, 3)
).astype(np.float32)
W = np.array([[[[7.0, 2.0], [1.0, 9.0]]]]).astype(np.float32) # (1, 1, 2, 2)
node = onnx.helper.make_node(
"ConvTranspose", ["X", "W"], ["Y"], dilations=[2, 2]
)
y = np.array(
[
[
[
[21.0, 56.0, 13.0, 16.0, 2.0], # [1, 1, 5, 5]
[63.0, 35.0, 67.0, 10.0, 14.0],
[24.0, 22.0, 76.0, 76.0, 21.0],
[9.0, 5.0, 88.0, 45.0, 63.0],
[3.0, 2.0, 33.0, 18.0, 54.0],
]
]
]
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_dilations")
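Here the effective kernel extent is (2 - 1) * 2 + 1 = 3 along each axis, so the output spatial size is 1 * (3 - 1) + 3 = 5, i.e. a (1, 1, 5, 5) output.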
_convtranspose_autopad_same
import numpy as np
import onnx
x = np.array(
[[[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]]] # (1, 1, 3, 3)
).astype(np.float32)
W = np.array(
[
[
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], # (1, 2, 3, 3)
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
]
]
).astype(np.float32)
node = onnx.helper.make_node(
"ConvTranspose", ["X", "W"], ["Y"], auto_pad="SAME_UPPER", strides=[2, 2]
)
y = np.array(
[
[
[
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0],
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0],
[3.0, 3.0, 8.0, 5.0, 12.0, 7.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0],
[9.0, 9.0, 20.0, 11.0, 24.0, 13.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0],
],
[
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0],
[0.0, 0.0, 1.0, 1.0, 3.0, 2.0],
[3.0, 3.0, 8.0, 5.0, 12.0, 7.0],
[3.0, 3.0, 7.0, 4.0, 9.0, 5.0],
[9.0, 9.0, 20.0, 11.0, 24.0, 13.0],
[6.0, 6.0, 13.0, 7.0, 15.0, 8.0],
],
]
]
).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name="test_convtranspose_autopad_same")
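With auto_pad = SAME_UPPER the output spatial size is input_shape[i] * strides[i] = 3 * 2 = 6 along each axis, giving the (1, 2, 6, 6) output above.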
ConvTranspose - 1
Version
name: ConvTranspose (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided, the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified, in which case the pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i] / 2; pads[end_i] = total_padding[i] - (total_padding[i] / 2)
Else: pads[start_i] = total_padding[i] - (total_padding[i] / 2); pads[end_i] = total_padding[i] / 2
Attributes
auto_pad: auto_pad must be one of NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. If the total padding for an axis is odd, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding.
dilations: dilation value along each spatial axis of the filter.
group: number of groups input channels and output channels are divided into.
kernel_shape: The shape of the convolution kernel. If not present, should be inferred from input W.
output_padding: The zero-padding added to one side of the output. This is also called adjs/adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which causes the pads values to be auto generated. If output_shape is specified, the pads values are ignored. See the Summary above for the equations used to generate pads.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous) - T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous) - T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous) - T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous) - T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.