ConvTranspose - 1 vs 11

Files changed (1)
  1. ConvTranspose1 → ConvTranspose11 +22 -9
ConvTranspose1 → ConvTranspose11 RENAMED
@@ -1 +1 @@
  The convolution transpose operator consumes an input tensor and a filter,
  and computes the output.
  If the pads parameter is provided the shape of the output is calculated via the following equation:
  output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
  output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
  total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
- If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
+ If (auto_pads == SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
  Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2).
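The two equations above (the output size from explicit pads, and the auto-generated pads from an explicit output_shape) can be spelled out in a short Python sketch. This is only an illustration of the formulas as written in the opset-11 text; the function names and sample values are hypothetical, not the ONNX reference implementation, and the branch follows the corrected `==` comparison from the added (+) line.

```python
def convtranspose_output_size(input_size, kernel_shape, stride=1, dilation=1,
                              output_padding=0, pad_begin=0, pad_end=0):
    # output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i]
    #                   + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
    return (stride * (input_size - 1) + output_padding
            + ((kernel_shape - 1) * dilation + 1) - pad_begin - pad_end)

def convtranspose_auto_pads(input_size, kernel_shape, output_size, stride=1,
                            dilation=1, output_padding=0, auto_pad="SAME_UPPER"):
    # total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i]
    #                    + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
    total = (stride * (input_size - 1) + output_padding
             + ((kernel_shape - 1) * dilation + 1) - output_size)
    if auto_pad == "SAME_UPPER":            # odd extra padding goes to the end
        return total // 2, total - total // 2
    return total - total // 2, total // 2   # SAME_LOWER: odd extra padding at the start

# Hypothetical stride-2, kernel-3 example on a length-5 axis, no padding:
assert convtranspose_output_size(5, 3, stride=2) == 11
# Requesting output size 10 instead forces one pixel of total padding,
# placed at the end under the opset-11 SAME_UPPER branch:
assert convtranspose_auto_pads(5, 3, output_size=10, stride=2) == (0, 1)
```

As the diff shows, the version-1 text used `!=`, which placed the odd extra pad at the beginning for SAME_UPPER; the opset-11 text flips the comparison so the extra pad lands at the end, matching the auto_pad description below.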
  **Attributes**
  * **auto_pad**:
  auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID.
  Where default value is NOTSET, which means explicit padding is used.
- SAME_UPPER or SAME_LOWER mean pad the input so that the output
- spatial size match the input.In case of odd number add the extra
- padding at the end for SAME_UPPER and at the beginning for
- SAME_LOWER. VALID mean no padding.
+ SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i]
+ = input_shape[i] * strides[i] for each axis i. The padding is
+ split between the two sides equally or almost equally (depending on
+ whether it is even or odd). In case the padding is an odd number,
+ the extra padding is added at the end for SAME_UPPER and at the
+ beginning for SAME_LOWER.
  * **dilations**:
- dilation value along each spatial axis of the filter.
+ dilation value along each spatial axis of the filter. If not
+ present, the dilation defaults to 1 along each spatial axis.
  * **group**:
  number of groups input channels and output channels are divided
  into.
  * **kernel_shape**:
  The shape of the convolution kernel. If not present, should be
  inferred from input W.
  * **output_padding**:
- The zero-padding added to one side of the output. This is also
- called adjs/adjustment in some frameworks.
+ Additional elements added to the side with higher coordinate indices
+ in the output. Each padding value in "output_padding" must be less
+ than the corresponding stride/dilation dimension. By default, this
+ attribute is a zero vector. Note that this attribute doesn't
+ directly affect the computed output values. It only controls the
+ selection of the computed values, so changing this attribute only
+ adds or removes output elements. If "output_shape" is explicitly
+ provided, "output_padding" does not contribute additional size to
+ "output_shape" but participates in the computation of the needed
+ padding amount. This is also called adjs or adjustment in some
+ frameworks.
  * **output_shape**:
  The shape of the output can be explicitly set which will cause pads
  values to be auto generated. If output_shape is specified pads
  values are ignored. See doc for details for equations to generate
  pads
  * **pads**:
  Padding for the beginning and ending along each spatial axis, it can
  take any value greater than or equal to 0. The value represent the
  number of pixels added to the beginning and end part of the
  corresponding axis. pads format should be as follow [x1_begin,
  x2_begin...x1_end, x2_end,...], where xi_begin the number of pixels
  added at the beginning of axis i and xi_end, the number of pixels
  added at the end of axis i. This attribute cannot be used
  simultaneously with auto_pad attribute. If not present, the padding
  defaults to 0 along start and end of each spatial axis.
  * **strides**:
- Stride along each spatial axis.
+ Stride along each spatial axis. If not present, the stride defaults
+ to 1 along each spatial axis.
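As a concrete check of the attribute semantics listed above (the opset-11 defaults for strides and dilations, and auto_pad=SAME_UPPER targeting output_shape[i] = input_shape[i] * strides[i]), here is a hedged end-to-end sketch. It assumes reasonably matched onnx and onnxruntime packages are installed; the graph name and tensor sizes are made up for illustration.

```python
import numpy as np
import onnxruntime as ort
from onnx import TensorProto, helper

# ConvTranspose with auto_pad=SAME_UPPER; kernel_shape is inferred from W,
# dilations and output_padding are left at their defaults.
node = helper.make_node(
    "ConvTranspose", ["X", "W"], ["Y"],
    strides=[2, 2], auto_pad="SAME_UPPER",
)
graph = helper.make_graph(
    [node], "convtranspose_same_upper",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 1, 4, 4]),
     helper.make_tensor_value_info("W", TensorProto.FLOAT, [1, 1, 3, 3])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])

sess = ort.InferenceSession(model.SerializeToString(),
                            providers=["CPUExecutionProvider"])
x = np.random.rand(1, 1, 4, 4).astype(np.float32)
w = np.random.rand(1, 1, 3, 3).astype(np.float32)
(y,) = sess.run(None, {"X": x, "W": w})

# SAME_UPPER: output spatial size == input spatial size * stride (4 * 2 = 8).
assert y.shape == (1, 1, 8, 8)
```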
  **Inputs**
  Between 2 and 3 inputs.
  * **X** (heterogeneous) - **T**:
  Input data tensor from previous layer; has size (N x C x H x W),
  where N is the batch size, C is the number of channels, and H and W
  are the height and width. Note that this is for the 2D image.
  Otherwise the size is (N x C x D1 x D2 ... x Dn)
  * **W** (heterogeneous) - **T**:
  The weight tensor that will be used in the convolutions; has size (C
  x M/group x kH x kW), where C is the number of channels, and kH and
  kW are the height and width of the kernel, and M is the number of
  feature maps. For more than 2 dimensions, the weight shape will be
  (C x M/group x k1 x k2 x ... x kn), where (k1 x k2 x ... x kn) is
  the dimension of the kernel. The number of channels in the output
  should be equal to W.shape[1] * group (assuming zero based indices
  of the shape array)
  * **B** (optional, heterogeneous) - **T**:
  Optional 1D bias to be added to the convolution, has size of M.
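A small shape sketch for the input conventions above, with made-up sizes: the weight is laid out as (C x M/group x kH x kW), so the number of output feature maps M is W.shape[1] * group, and the bias has length M.

```python
import numpy as np

group = 2
X = np.zeros((1, 4, 8, 8), dtype=np.float32)  # (N, C, H, W)
W = np.zeros((4, 3, 3, 3), dtype=np.float32)  # (C, M/group, kH, kW)

M = W.shape[1] * group                        # number of output feature maps
B = np.zeros(M, dtype=np.float32)             # optional 1D bias of size M

assert X.shape[1] == W.shape[0]               # C must match between X and W
assert M == 6
```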
  **Outputs**
  * **Y** (heterogeneous) - **T**:
  Output data tensor that contains the result of the convolution. The
  output dimensions are functions of the kernel size, stride size, pad
  lengths and group count. The number of channels in the output should
  be equal to W.shape[1] * group (assuming zero based indices of the
  shape array)
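To tie the inputs and the output together, below is a naive numpy scatter-add sketch of the 2D case. It only illustrates the documented shapes, the pads layout, and the W.shape[1] * group channel rule; the function name, loop structure, and test values are assumptions for this illustration, not the ONNX reference implementation.

```python
import numpy as np

def conv_transpose_2d(X, W, B=None, strides=(1, 1), pads=(0, 0, 0, 0),
                      dilations=(1, 1), output_padding=(0, 0), group=1):
    """Naive 2D ConvTranspose: X is (N, C, H, W), W is (C, M/group, kH, kW), B is (M,)."""
    N, C, H, Win = X.shape
    _, M_per_g, kH, kW = W.shape
    M = M_per_g * group                        # output channels = W.shape[1] * group
    sH, sW = strides
    dH, dW = dilations
    pt, pl, pb, pr = pads                      # [x1_begin, x2_begin, x1_end, x2_end]
    oH = sH * (H - 1) + output_padding[0] + ((kH - 1) * dH + 1) - pt - pb
    oW = sW * (Win - 1) + output_padding[1] + ((kW - 1) * dW + 1) - pl - pr
    Y = np.zeros((N, M, oH, oW), dtype=X.dtype)
    C_per_g = C // group
    for n in range(N):
        for g in range(group):
            for c in range(g * C_per_g, (g + 1) * C_per_g):
                for m in range(M_per_g):
                    out_c = g * M_per_g + m
                    for i in range(H):
                        for j in range(Win):
                            for ki in range(kH):
                                for kj in range(kW):
                                    # each input element scatters to i*stride + k*dilation - pad_begin
                                    oi = i * sH + ki * dH - pt
                                    oj = j * sW + kj * dW - pl
                                    if 0 <= oi < oH and 0 <= oj < oW:
                                        Y[n, out_c, oi, oj] += X[n, c, i, j] * W[c, m, ki, kj]
    if B is not None:
        Y += B.reshape(1, -1, 1, 1)            # per-channel bias of size M
    return Y

# A 1x1 input with a 3x3 kernel of ones produces a 3x3 output equal to the kernel.
X = np.ones((1, 1, 1, 1), dtype=np.float32)
W = np.ones((1, 1, 3, 3), dtype=np.float32)
assert conv_transpose_2d(X, W).shape == (1, 1, 3, 3)
```

The scatter-add form mirrors the shape formula quoted at the top of this page: the last input index (input_size - 1) lands at stride * (input_size - 1) + (kernel_shape - 1) * dilation - pad_begin, which is why the output length is stride * (input_size - 1) + ((kernel_shape - 1) * dilation + 1) minus both pads, plus any output_padding.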
  **Type Constraints**
  * **T** in (
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.