ConvTranspose
ConvTranspose - 11
Version
name: ConvTranspose (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided, the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
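For example, in the convtranspose_pads example below (3 x 3 input, 3 x 3 kernel, strides [3, 2], pads [1, 2, 1, 2]): output_shape[0] = 3 * (3 - 1) + 0 + 3 - 1 - 1 = 7 and output_shape[1] = 2 * (3 - 1) + 0 + 3 - 2 - 2 = 3, giving the (1, 2, 7, 3) result shown there.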
output_shape can also be explicitly specified, in which case pads values are auto-generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads == SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)
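As a concrete case (the convtranspose_autopad_same example below): with a 3 x 3 input, a 3 x 3 kernel, strides [2, 2], and auto_pad SAME_UPPER, the target output_shape is 3 * 2 = 6 per axis, so total_padding[i] = 2 * (3 - 1) + 0 + 3 - 6 = 1, and SAME_UPPER yields pads[start_i] = 0 and pads[end_i] = 1.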
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. NOTSET, the default, means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is 'NOTSET'.
dilations: dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, should be inferred from input W.
output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in “output_padding” must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn’t directly affect the computed output values. It only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If “output_shape” is explicitly provided, “output_padding” does not contribute additional size to “output_shape” but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto-generated. If output_shape is specified, pads values are ignored. See the equations in the Summary for how the pads values are generated.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
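The shape arithmetic above, with the attribute defaults just described, can be captured in a small illustrative helper. This is a sketch for convenience, not part of the ONNX API; the function name and keyword defaults are this sketch's own:

from typing import List, Optional

def convtranspose_output_shape(
    input_size: List[int],
    kernel_shape: List[int],
    strides: Optional[List[int]] = None,
    pads: Optional[List[int]] = None,  # [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
    dilations: Optional[List[int]] = None,
    output_padding: Optional[List[int]] = None,
) -> List[int]:
    """Spatial output shape per the explicit-pads equation in the Summary."""
    n = len(input_size)
    strides = strides or [1] * n
    dilations = dilations or [1] * n
    output_padding = output_padding or [0] * n
    pads = pads or [0] * (2 * n)
    return [
        strides[i] * (input_size[i] - 1)
        + output_padding[i]
        + ((kernel_shape[i] - 1) * dilations[i] + 1)
        - pads[i]      # pads[start_i]
        - pads[n + i]  # pads[end_i]
        for i in range(n)
    ]

# Matches the convtranspose_pads example below: (1, 1, 3, 3) input,
# strides [3, 2], pads [1, 2, 1, 2] -> spatial output (7, 3).
assert convtranspose_output_shape([3, 3], [3, 3], strides=[3, 2], pads=[1, 2, 1, 2]) == [7, 3]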
Inputs
Between 2 and 3 inputs.
X (heterogeneous) - T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous) - T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous) - T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous) - T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Examples
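The snippets below are drawn from ONNX's backend test suite; they assume import numpy as np, import onnx, and the expect helper from ONNX's test utilities. To run them standalone, a minimal stand-in for expect might look like the following (a sketch, assuming onnx >= 1.13 so that onnx.reference is available):

import numpy as np
import onnx
from onnx.reference import ReferenceEvaluator

def expect(node, inputs, outputs, name):
    # Wrap the single node in a model. Shapes are left unspecified (None)
    # so the same helper serves every example on this page.
    def vi(tensor_name):
        return onnx.helper.make_tensor_value_info(tensor_name, onnx.TensorProto.FLOAT, None)

    graph = onnx.helper.make_graph([node], name,
                                   [vi(n) for n in node.input],
                                   [vi(n) for n in node.output])
    model = onnx.helper.make_model(graph)
    results = ReferenceEvaluator(model).run(None, dict(zip(node.input, inputs)))
    for expected, got in zip(outputs, results):
        np.testing.assert_allclose(expected, got, rtol=1e-5)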
convtranspose_1d
x = np.array([[[0., 1., 2.]]]).astype(np.float32)  # (1, 1, 3)
W = np.array([[[1., 1., 1.],  # (1, 2, 3)
               [1., 1., 1.]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"])
y = np.array([[[0., 1., 3., 3., 2.],  # (1, 2, 5)
               [0., 1., 3., 3., 2.]]]).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_1d')
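Since the kernel is all ones and the stride defaults to 1, each input element x[i] contributes to output positions i through i + 2, so each output channel is [0, 0 + 1, 0 + 1 + 2, 1 + 2, 2] = [0, 1, 3, 3, 2], of length 3 + 3 - 1 = 5.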
convtranspose_3d
x = np.array([[[[[0., 1., 2., 3., 4.],  # (1, 1, 3, 4, 5)
                 [5., 6., 7., 8., 9.],
                 [10., 11., 12., 13., 14.],
                 [15., 16., 17., 18., 19.]],
                [[20., 21., 22., 23., 24.],
                 [25., 26., 27., 28., 29.],
                 [30., 31., 32., 33., 34.],
                 [35., 36., 37., 38., 39.]],
                [[40., 41., 42., 43., 44.],
                 [45., 46., 47., 48., 49.],
                 [50., 51., 52., 53., 54.],
                 [55., 56., 57., 58., 59.]]]]]).astype(np.float32)
W = np.array([[[[[1., 1., 1.],  # (1, 2, 3, 3, 3)
                 [1., 1., 1.],
                 [1., 1., 1.]],
                [[1., 1., 1.],
                 [1., 1., 1.],
                 [1., 1., 1.]],
                [[1., 1., 1.],
                 [1., 1., 1.],
                 [1., 1., 1.]]],
               [[[1., 1., 1.],
                 [1., 1., 1.],
                 [1., 1., 1.]],
                [[1., 1., 1.],
                 [1., 1., 1.],
                 [1., 1., 1.]],
                [[1., 1., 1.],
                 [1., 1., 1.],
                 [1., 1., 1.]]]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"])
y = np.array([[[[[0., 1., 3., 6., 9., 7., 4.],  # (1, 2, 5, 6, 7)
                 [5., 12., 21., 27., 33., 24., 13.],
                 [15., 33., 54., 63., 72., 51., 27.],
                 [30., 63., 99., 108., 117., 81., 42.],
                 [25., 52., 81., 87., 93., 64., 33.],
                 [15., 31., 48., 51., 54., 37., 19.]],
                [[20., 42., 66., 72., 78., 54., 28.],
                 [50., 104., 162., 174., 186., 128., 66.],
                 [90., 186., 288., 306., 324., 222., 114.],
                 [120., 246., 378., 396., 414., 282., 144.],
                 [90., 184., 282., 294., 306., 208., 106.],
                 [50., 102., 156., 162., 168., 114., 58.]],
                [[60., 123., 189., 198., 207., 141., 72.],
                 [135., 276., 423., 441., 459., 312., 159.],
                 [225., 459., 702., 729., 756., 513., 261.],
                 [270., 549., 837., 864., 891., 603., 306.],
                 [195., 396., 603., 621., 639., 432., 219.],
                 [105., 213., 324., 333., 342., 231., 117.]],
                [[60., 122., 186., 192., 198., 134., 68.],
                 [130., 264., 402., 414., 426., 288., 146.],
                 [210., 426., 648., 666., 684., 462., 234.],
                 [240., 486., 738., 756., 774., 522., 264.],
                 [170., 344., 522., 534., 546., 368., 186.],
                 [90., 182., 276., 282., 288., 194., 98.]],
                [[40., 81., 123., 126., 129., 87., 44.],
                 [85., 172., 261., 267., 273., 184., 93.],
                 [135., 273., 414., 423., 432., 291., 147.],
                 [150., 303., 459., 468., 477., 321., 162.],
                 [105., 212., 321., 327., 333., 224., 113.],
                 [55., 111., 168., 171., 174., 117., 59.]]],
               [[[0., 1., 3., 6., 9., 7., 4.],
                 [5., 12., 21., 27., 33., 24., 13.],
                 [15., 33., 54., 63., 72., 51., 27.],
                 [30., 63., 99., 108., 117., 81., 42.],
                 [25., 52., 81., 87., 93., 64., 33.],
                 [15., 31., 48., 51., 54., 37., 19.]],
                [[20., 42., 66., 72., 78., 54., 28.],
                 [50., 104., 162., 174., 186., 128., 66.],
                 [90., 186., 288., 306., 324., 222., 114.],
                 [120., 246., 378., 396., 414., 282., 144.],
                 [90., 184., 282., 294., 306., 208., 106.],
                 [50., 102., 156., 162., 168., 114., 58.]],
                [[60., 123., 189., 198., 207., 141., 72.],
                 [135., 276., 423., 441., 459., 312., 159.],
                 [225., 459., 702., 729., 756., 513., 261.],
                 [270., 549., 837., 864., 891., 603., 306.],
                 [195., 396., 603., 621., 639., 432., 219.],
                 [105., 213., 324., 333., 342., 231., 117.]],
                [[60., 122., 186., 192., 198., 134., 68.],
                 [130., 264., 402., 414., 426., 288., 146.],
                 [210., 426., 648., 666., 684., 462., 234.],
                 [240., 486., 738., 756., 774., 522., 264.],
                 [170., 344., 522., 534., 546., 368., 186.],
                 [90., 182., 276., 282., 288., 194., 98.]],
                [[40., 81., 123., 126., 129., 87., 44.],
                 [85., 172., 261., 267., 273., 184., 93.],
                 [135., 273., 414., 423., 432., 291., 147.],
                 [150., 303., 459., 468., 477., 321., 162.],
                 [105., 212., 321., 327., 333., 224., 113.],
                 [55., 111., 168., 171., 174., 117., 59.]]]]]).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_3d')
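With the default stride of 1, each spatial axis grows by kernel_size - 1: (3, 4, 5) becomes (5, 6, 7); the two output channels are identical because both kernel channels are all ones.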
convtranspose_attributes
x = np.array([[[[0., 1., 2.],  # (1, 1, 3, 3)
                [3., 4., 5.],
                [6., 7., 8.]]]]).astype(np.float32)
W = np.array([[[[1., 1., 1.],  # (1, 2, 3, 3)
                [1., 1., 1.],
                [1., 1., 1.]],
               [[1., 1., 1.],
                [1., 1., 1.],
                [1., 1., 1.]]]]).astype(np.float32)
y = np.array([[[[0., 0., 1., 1., 3., 2., 2., 0.],  # (1, 2, 10, 8)
                [0., 0., 1., 1., 3., 2., 2., 0.],
                [0., 0., 1., 1., 3., 2., 2., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [0., 0., 0., 0., 0., 0., 0., 0.]],
               [[0., 0., 1., 1., 3., 2., 2., 0.],
                [0., 0., 1., 1., 3., 2., 2., 0.],
                [0., 0., 1., 1., 3., 2., 2., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [3., 3., 7., 4., 9., 5., 5., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [6., 6., 13., 7., 15., 8., 8., 0.],
                [0., 0., 0., 0., 0., 0., 0., 0.]]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"],
                             strides=[3, 2],
                             output_shape=[10, 8])
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_output_shape')
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"],
                             strides=[3, 2],
                             output_padding=[1, 1])
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_pad')
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"],
                             name="test",
                             strides=[3, 2],
                             output_shape=[10, 8],
                             kernel_shape=[3, 3],
                             output_padding=[1, 1])
expect(node, inputs=[x, W], outputs=[y],
       name='test_convtranspose_kernel_shape')
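All three nodes above produce the same (1, 2, 10, 8) result: with strides [3, 2] and a 3 x 3 kernel, the unpadded output of a 3 x 3 input would be 9 x 7, so requesting output_shape=[10, 8] adds the same trailing elements as output_padding=[1, 1].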
convtranspose_pads
x = np.array([[[[0., 1., 2.],  # (1, 1, 3, 3)
                [3., 4., 5.],
                [6., 7., 8.]]]]).astype(np.float32)
W = np.array([[[[1., 1., 1.],  # (1, 2, 3, 3)
                [1., 1., 1.],
                [1., 1., 1.]],
               [[1., 1., 1.],
                [1., 1., 1.],
                [1., 1., 1.]]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"],
                             strides=[3, 2],
                             pads=[1, 2, 1, 2])
y = np.array([[[[1., 1., 3.],  # (1, 2, 7, 3)
                [1., 1., 3.],
                [7., 4., 9.],
                [7., 4., 9.],
                [7., 4., 9.],
                [13., 7., 15.],
                [13., 7., 15.]],
               [[1., 1., 3.],
                [1., 1., 3.],
                [7., 4., 9.],
                [7., 4., 9.],
                [7., 4., 9.],
                [13., 7., 15.],
                [13., 7., 15.]]]]).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_pads')
convtranspose_dilations
x = np.array([[[[3., 8., 1.],  # (1, 1, 3, 3)
                [9., 5., 7.],
                [3., 2., 6.]]]]).astype(np.float32)
W = np.array([[[[7., 2.],  # (1, 1, 2, 2)
                [1., 9.]]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"], dilations=[2, 2])
y = np.array([[[[21., 56., 13., 16., 2.],  # (1, 1, 5, 5)
                [63., 35., 67., 10., 14.],
                [24., 22., 76., 76., 21.],
                [9., 5., 88., 45., 63.],
                [3., 2., 33., 18., 54.]]]]).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_dilations')
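With dilations=[2, 2], the 2 x 2 kernel has an effective 3 x 3 footprint (taps two pixels apart), so each output axis has size (3 - 1) + ((2 - 1) * 2 + 1) = 5; the corner value 21, for instance, is x[0, 0] * W[0, 0] = 3 * 7.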
convtranspose_autopad_same
x = np.array([[[[0., 1., 2.],  # (1, 1, 3, 3)
                [3., 4., 5.],
                [6., 7., 8.]]]]).astype(np.float32)
W = np.array([[[[1., 1., 1.],  # (1, 2, 3, 3)
                [1., 1., 1.],
                [1., 1., 1.]],
               [[1., 1., 1.],
                [1., 1., 1.],
                [1., 1., 1.]]]]).astype(np.float32)
node = onnx.helper.make_node("ConvTranspose", ["X", "W"], ["Y"], auto_pad="SAME_UPPER", strides=[2, 2])
y = np.array([[[[0., 0., 1., 1., 3., 2.],
                [0., 0., 1., 1., 3., 2.],
                [3., 3., 8., 5., 12., 7.],
                [3., 3., 7., 4., 9., 5.],
                [9., 9., 20., 11., 24., 13.],
                [6., 6., 13., 7., 15., 8.]],
               [[0., 0., 1., 1., 3., 2.],
                [0., 0., 1., 1., 3., 2.],
                [3., 3., 8., 5., 12., 7.],
                [3., 3., 7., 4., 9., 5.],
                [9., 9., 20., 11., 24., 13.],
                [6., 6., 13., 7., 15., 8.]]]]).astype(np.float32)
expect(node, inputs=[x, W], outputs=[y], name='test_convtranspose_autopad_same')
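Here output_shape[i] = input_shape[i] * strides[i] = 6 per axis; as worked through in the Summary, SAME_UPPER auto-generates pads of [0, 0, 1, 1], placing the odd leftover padding at the end.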
Differences
The diff between ConvTranspose-1 and ConvTranspose-11 reduces to the following changes:
pads auto-generation: the condition was inverted from "If (auto_pads != SAME_UPPER)" in version 1 to "If (auto_pads == SAME_UPPER)" in version 11, so SAME_UPPER now places the extra padding (for an odd total_padding) at the end rather than the beginning.
auto_pad: the SAME_UPPER/SAME_LOWER behavior is now stated precisely as output_shape[i] = input_shape[i] * strides[i] for each axis i, with an explicit rule for splitting odd padding, replacing the looser "output spatial size match the input" wording of version 1.
dilations and strides: both now document that they default to 1 along each spatial axis when the attribute is absent.
output_padding: the one-line version-1 description ("The zero-padding added to one side of the output") was replaced by the detailed semantics given above: each value must be less than the corresponding stride/dilation, the attribute only selects among computed values, and it interacts with output_shape only through the padding computation.
The Inputs, Outputs, and Type Constraints sections are unchanged.
ConvTranspose - 1
Version
name: ConvTranspose (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided, the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified, in which case pads values are auto-generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. NOTSET, the default, means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, add the extra padding at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is 'NOTSET'.
dilations: dilation value along each spatial axis of the filter.
group: number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, should be inferred from input W.
output_padding: The zero-padding added to one side of the output. This is also called adjs/adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto-generated. If output_shape is specified, pads values are ignored. See the equations in the Summary for how the pads values are generated.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous) - T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous) - T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous) - T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous) - T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.