AveragePool - 10 vs 11

Files changed (1)
  1. AveragePool10 → AveragePool11 +8 -5
AveragePool10 → AveragePool11 RENAMED
@@ -1 +1 @@
  AveragePool consumes an input tensor X and applies average pooling across
  the tensor according to kernel sizes, stride sizes, and pad lengths.
  Average pooling consists of computing the average of all values in a
  subset of the input tensor according to the kernel size and downsampling the
  data into the output tensor Y for further processing. The output spatial shape will be the following:
  ::
  output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
  or
  ::
  output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
  if ceil_mode is enabled
  ::
  * pad_shape[i] is the sum of pads along axis i
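A minimal sketch of the explicit-padding output-shape formulas above, assuming auto_pad is NOTSET; the helper name and argument layout are illustrative, not part of the specification text:

::

    import math

    def average_pool_output_shape(input_spatial_shape, kernel_spatial_shape,
                                   strides_spatial_shape, pad_shape, ceil_mode=False):
        """pad_shape[i] is the sum of the begin and end pads along axis i."""
        rounding = math.ceil if ceil_mode else math.floor
        return [
            int(rounding((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i])
                         / strides_spatial_shape[i] + 1))
            for i in range(len(input_spatial_shape))
        ]

    # e.g. a 32x32 feature map, 3x3 kernel, stride 2, one pixel of padding on each side:
    # average_pool_output_shape([32, 32], [3, 3], [2, 2], [2, 2]) -> [16, 16]
    # with ceil_mode=True the same call returns [17, 17]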
  auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape will be the following:
  ::
  VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
  SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
  And the pad shape will be the following if SAME_UPPER or SAME_LOWER:
  ::
  pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
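A minimal per-axis sketch of the deprecated auto_pad formulas above; the function names are assumptions made for illustration:

::

    import math

    def same_output_shape(input_size, stride):
        # SAME_UPPER / SAME_LOWER: output depends only on input size and stride.
        return math.ceil(input_size / stride)

    def same_pads(input_size, kernel, stride, upper=True):
        out = same_output_shape(input_size, stride)
        total_pad = (out - 1) * stride + kernel - input_size
        small, large = total_pad // 2, total_pad - total_pad // 2
        # SAME_UPPER puts the extra pixel at the end, SAME_LOWER at the beginning.
        return (small, large) if upper else (large, small)

    def valid_output_shape(input_size, kernel, stride):
        return math.ceil((input_size - kernel + 1) / stride)

    # same_pads(7, 3, 2) -> (1, 1): output is ceil(7/2) = 4, total pad (4-1)*2 + 3 - 7 = 2
    # valid_output_shape(7, 3, 2) -> 3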
  The output of each pooling window is divided by the number of elements (excluding pad elements when the attribute count_include_pad is zero).
  **Attributes**
  * **auto_pad**:
  auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID.
  The default value is NOTSET, which means explicit padding is used.
- SAME_UPPER or SAME_LOWER mean pad the input so that the output
- spatial size match the input.In case of odd number add the extra
- padding at the end for SAME_UPPER and at the beginning for
- SAME_LOWER. VALID mean no padding.
+ SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i]
+ = ceil(input_shape[i] / strides[i]) for each axis i. The padding
+ is split between the two sides equally or almost equally (depending
+ on whether it is even or odd). In case the padding is an odd number,
+ the extra padding is added at the end for SAME_UPPER and at the
+ beginning for SAME_LOWER.
  * **ceil_mode**:
  Whether to use ceil or floor (default) to compute the output shape.
  * **count_include_pad**:
  Whether to include pad pixels when calculating values for the edges.
  Default is 0, which does not include pad pixels.
  * **kernel_shape** (required):
  The size of the kernel along each axis.
  * **pads**:
  Padding for the beginning and ending along each spatial axis; it can
  take any value greater than or equal to 0. The value represents the
  number of pixels added to the beginning and end part of the
  corresponding axis. The pads format should be as follows: [x1_begin,
  x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels
  added at the beginning of axis i and xi_end is the number of pixels
  added at the end of axis i. This attribute cannot be used
  simultaneously with the auto_pad attribute. If not present, the padding
  defaults to 0 along the start and end of each spatial axis.
  * **strides**:
- Stride along each spatial axis.
+ Stride along each spatial axis. If not present, the stride defaults
+ to 1 along each spatial axis.
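The attributes above map directly onto node attributes when the operator is built with the onnx Python helper. A minimal sketch follows; the tensor names "X" and "Y" are arbitrary placeholders, not part of the specification:

::

    from onnx import helper

    # A 3x3 average pool with stride 2 and one pixel of explicit padding on
    # every side of both spatial axes (pads = [x1_begin, x2_begin, x1_end, x2_end]).
    node = helper.make_node(
        "AveragePool",
        inputs=["X"],
        outputs=["Y"],
        kernel_shape=[3, 3],
        strides=[2, 2],
        pads=[1, 1, 1, 1],
        ceil_mode=0,
        count_include_pad=0,
    )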
  **Inputs**
  * **X** (heterogeneous) - **T**:
  Input data tensor from the previous operator; dimensions for image
  case are (N x C x H x W), where N is the batch size, C is the number
  of channels, and H and W are the height and the width of the data.
  For non image case, the dimensions are in the form of (N x C x D1 x
  D2 ... Dn), where N is the batch size. Optionally, if dimension
  denotation is in effect, the operation expects the input data tensor
  to arrive with the dimension denotation of [DATA_BATCH,
  DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
  **Outputs**
  * **Y** (heterogeneous) - **T**:
  Output data tensor from average or max pooling across the input
  tensor. Dimensions will vary based on various kernel, stride, and
  pad sizes. Floor value of the dimension is used
  **Type Constraints**
  * **T** in (
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.
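For reference, a small NumPy sketch of 2-D average pooling over an (N x C x H x W) input, following the formulas and attributes above with explicit pads and ceil_mode = 0. This is an illustrative implementation, not the official ONNX reference code; the function name and argument order are assumptions:

::

    import numpy as np

    def average_pool_2d(x, kernel_shape, strides, pads, count_include_pad=0):
        """x has shape (N, C, H, W); pads = [h_begin, w_begin, h_end, w_end]."""
        n, c, h, w = x.shape
        kh, kw = kernel_shape
        sh, sw = strides
        hb, wb, he, we = pads
        out_h = (h + hb + he - kh) // sh + 1
        out_w = (w + wb + we - kw) // sw + 1
        padded = np.pad(x, ((0, 0), (0, 0), (hb, he), (wb, we)), mode="constant")
        y = np.zeros((n, c, out_h, out_w), dtype=x.dtype)
        for i in range(out_h):
            for j in range(out_w):
                window = padded[:, :, i * sh:i * sh + kh, j * sw:j * sw + kw]
                if count_include_pad:
                    # Divide by the full window size, pad pixels included.
                    y[:, :, i, j] = window.mean(axis=(2, 3))
                else:
                    # Divide only by the positions that fall inside the unpadded input.
                    h0, w0 = i * sh - hb, j * sw - wb
                    valid_h = min(h0 + kh, h) - max(h0, 0)
                    valid_w = min(w0 + kw, w) - max(w0, 0)
                    y[:, :, i, j] = window.sum(axis=(2, 3)) / (valid_h * valid_w)
        return y

    # x = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
    # average_pool_2d(x, [2, 2], [2, 2], [0, 0, 0, 0]) has shape (1, 1, 2, 2)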