MaxPool - 11 vs 12

The next section compares an older version of the operator with a newer one after both definitions are converted into markdown text. Green (lines prefixed with +) means an addition to the newer version, red (lines prefixed with -) means a deletion. Anything else is unchanged.

Files changed (1)
  1. MaxPool11 → MaxPool12 +6 -10
MaxPool11 → MaxPool12 RENAMED
@@ -1 +1 @@
  MaxPool consumes an input tensor X and applies max pooling across
  the tensor according to kernel sizes, stride sizes, and pad lengths.
  max pooling consisting of computing the max on all values of a
  subset of the input tensor according to the kernel size and downsampling the
  data into the output tensor Y for further processing. The output spatial shape will be following:
  ::
  output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
  or
  ::
  output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
  if ceil_mode is enabled
  ::
  * pad_shape[i] is sum of pads along axis i
  auto_pad is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:
  ::
  VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
  SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
  And pad shape will be following if SAME_UPPER or SAME_LOWER:
  ::
  pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
  The output of each pooling window is maximum number of elements exclude pad.
  **Attributes**
  * **auto_pad**:
  auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID.
  Where default value is NOTSET, which means explicit padding is used.
- SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i]
+ SAME_UPPER or SAME_LOWER mean pad the input so that the output
+ spatial size match the input. In case of odd number add the extra
+ padding at the end for SAME_UPPER and at the beginning for
+ SAME_LOWER. VALID mean no padding.
- = ceil(input_shape[i] / strides[i]) for each axis i. The padding
- is split between the two sides equally or almost equally (depending
- on whether it is even or odd). In case the padding is an odd number,
- the extra padding is added at the end for SAME_UPPER and at the
- beginning for SAME_LOWER.
  * **ceil_mode**:
  Whether to use ceil or floor (default) to compute the output shape.
  * **dilations**:
  Dilation value along each spatial axis of filter. If not present,
  the dilation defaults to 1 along each spatial axis.
  * **kernel_shape** (required):
  The size of the kernel along each axis.
  * **pads**:
  Padding for the beginning and ending along each spatial axis, it can
  take any value greater than or equal to 0. The value represent the
  number of pixels added to the beginning and end part of the
  corresponding axis. pads format should be as follow [x1_begin,
  x2_begin...x1_end, x2_end,...], where xi_begin the number of pixels
  added at the beginning of axis i and xi_end, the number of pixels
  added at the end of axis i. This attribute cannot be used
  simultaneously with auto_pad attribute. If not present, the padding
  defaults to 0 along start and end of each spatial axis.
  * **storage_order**:
  The storage order of the tensor. 0 is row major, and 1 is column
  major.
  * **strides**:
  Stride along each spatial axis. If not present, the stride defaults
  to 1 along each spatial axis.
  **Inputs**
  * **X** (heterogeneous) - **T**:
  Input data tensor from the previous operator; dimensions for image
  case are (N x C x H x W), where N is the batch size, C is the number
  of channels, and H and W are the height and the width of the data.
  For non image case, the dimensions are in the form of (N x C x D1 x
  D2 ... Dn), where N is the batch size. Optionally, if dimension
  denotation is in effect, the operation expects the input data tensor
  to arrive with the dimension denotation of [DATA_BATCH,
  DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
  **Outputs**
  Between 1 and 2 outputs.
  * **Y** (heterogeneous) - **T**:
  Output data tensor from average or max pooling across the input
  tensor. Dimensions will vary based on various kernel, stride, and
  pad sizes. Floor value of the dimension is used
  * **Indices** (optional, heterogeneous) - **I**:
  Indices tensor from max pooling across the input tensor. The
  dimensions of indices are the same as output tensor. The values in
  indices of are the indices of the selected values during pooling.
  The indices are computed as flatten 1-D tensor, and the indices do
  not consider padding. So the values in indices are in [0, N x C x D1
  x ... x Dn).
  **Type Constraints**
  * **T** in (
  tensor(double),
  tensor(float),
- tensor(float16),
+ tensor(float16)
- tensor(int8),
- tensor(uint8)
  ):
- Constrain input and output types to float and 8 bit tensors.
+ Constrain input and output types to float tensors.
  * **I** in (
  tensor(int64)
  ):
  Constrain index tensor to int64
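
For explicit padding (the non-deprecated path), the floor/ceil output-shape formulas quoted in the diff can be checked with a small Python sketch. The helper below is illustrative only; its name and signature are invented for this example and are not part of either operator definition.

```python
import math

def maxpool_output_shape(input_spatial_shape, kernel_spatial_shape, strides,
                         dilations=None, pads=None, ceil_mode=False):
    """Apply the explicit-padding output-shape formula quoted above (illustrative only)."""
    rank = len(input_spatial_shape)
    dilations = dilations or [1] * rank
    pads = pads or [0] * (2 * rank)   # [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
    rounding = math.ceil if ceil_mode else math.floor
    out = []
    for i in range(rank):
        pad_shape_i = pads[i] + pads[i + rank]                      # sum of pads along axis i
        effective_kernel = (kernel_spatial_shape[i] - 1) * dilations[i] + 1
        out.append(rounding(
            (input_spatial_shape[i] + pad_shape_i - effective_kernel) / strides[i] + 1))
    return out

# Example: 32x32 input, 3x3 kernel, stride 2, no padding -> [15, 15]
print(maxpool_output_shape([32, 32], [3, 3], [2, 2]))
```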
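
The deprecated auto_pad path can be sketched the same way. Again, this is a sketch of the quoted formulas under the assumption of a single total pad value per axis, not an official reference implementation.

```python
import math

def maxpool_auto_pad_shapes(input_spatial_shape, kernel_spatial_shape, strides,
                            dilations=None, auto_pad="SAME_UPPER"):
    """Apply the deprecated auto_pad formulas quoted above (illustrative only)."""
    rank = len(input_spatial_shape)
    dilations = dilations or [1] * rank
    out, pad_shape = [], []
    for i in range(rank):
        effective_kernel = (kernel_spatial_shape[i] - 1) * dilations[i] + 1
        if auto_pad == "VALID":
            out.append(math.ceil(
                (input_spatial_shape[i] - effective_kernel + 1) / strides[i]))
            pad_shape.append(0)   # VALID means no padding
        else:                     # SAME_UPPER or SAME_LOWER
            out.append(math.ceil(input_spatial_shape[i] / strides[i]))
            pad_shape.append((out[i] - 1) * strides[i] + effective_kernel
                             - input_spatial_shape[i])
    return out, pad_shape

# Example: 5x5 input, 3x3 kernel, stride 2, SAME_UPPER -> output [3, 3], total pads [2, 2]
print(maxpool_auto_pad_shapes([5, 5], [3, 3], [2, 2]))
```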
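
The optional Indices output stores, for each pooling window, the position of the selected value as an index into the flattened (unpadded) input tensor, with storage_order controlling row-major versus column-major flattening. As a hypothetical illustration, such a flat index can be decoded back to coordinates with numpy; the concrete index value used here is made up.

```python
import numpy as np

# Suppose Indices reports flat index 5 for an input of shape (N, C, H, W) = (1, 2, 3, 4).
flat_index = 5
input_shape = (1, 2, 3, 4)

# Default storage_order = 0 (row major):
print(np.unravel_index(flat_index, input_shape))               # -> (0, 0, 1, 1)

# storage_order = 1 (column major) decodes the same flat index differently:
print(np.unravel_index(flat_index, input_shape, order="F"))    # -> (0, 1, 2, 0)
```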