Conv - 1 vs 11

Conv1 → Conv11 (+16 -9, renamed)
The convolution operator consumes an input tensor and a filter, and
computes the output.

**Attributes**

* **auto_pad**:
  auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The
  default value is NOTSET, which means explicit padding is used.
  SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i]
  = ceil(input_shape[i] / strides[i]) for each axis i. The padding is
  split between the two sides equally or almost equally (depending on
  whether it is even or odd). In case the padding is an odd number,
  the extra padding is added at the end for SAME_UPPER and at the
  beginning for SAME_LOWER.
* **dilations**:
  dilation value along each spatial axis of the filter. If not
  present, the dilation defaults to 1 along each spatial axis.
* **group**:
  number of groups input channels and output channels are divided
  into.
* **kernel_shape**:
  The shape of the convolution kernel. If not present, it should be
  inferred from input W.
* **pads**:
  Padding for the beginning and ending along each spatial axis; it can
  take any value greater than or equal to 0. The value represents the
  number of pixels added to the beginning and end part of the
  corresponding axis. pads format should be as follows: [x1_begin,
  x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of
  pixels added at the beginning of axis i and xi_end the number of
  pixels added at the end of axis i. This attribute cannot be used
  simultaneously with the auto_pad attribute. If not present, the
  padding defaults to 0 along the start and end of each spatial axis.
* **strides**:
  Stride along each spatial axis. If not present, the stride defaults
  to 1 along each spatial axis.
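The SAME_UPPER/SAME_LOWER rule above can be sketched in plain Python. This is a hedged illustration derived from the stated relation output_shape[i] = ceil(input_shape[i] / strides[i]), not the reference implementation; the helper name `same_pad_1d` is made up here:

```python
import math

def same_pad_1d(in_size, kernel, stride=1, dilation=1, mode="SAME_UPPER"):
    """Compute (pad_begin, pad_end) for one spatial axis so that the
    output size equals ceil(in_size / stride), per the auto_pad rule."""
    eff_kernel = (kernel - 1) * dilation + 1              # dilated kernel extent
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + eff_kernel - in_size, 0)
    half = total // 2
    if mode == "SAME_UPPER":                              # extra pad goes at the end
        return half, total - half
    return total - half, half                             # SAME_LOWER: extra at the beginning

# Even total padding splits equally: length 5, kernel 3, stride 2
print(same_pad_1d(5, 3, stride=2))                        # (1, 1)
# Odd total padding: SAME_UPPER puts the extra pixel at the end
print(same_pad_1d(5, 2, stride=2))                        # (0, 1)
print(same_pad_1d(5, 2, stride=2, mode="SAME_LOWER"))     # (1, 0)
```

In both modes the output length is ceil(5 / 2) = 3; only the placement of the odd leftover pixel differs.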
**Inputs**

Between 2 and 3 inputs.

* **X** (heterogeneous) - **T**:
  Input data tensor from the previous layer; has size (N x C x H x W),
  where N is the batch size, C is the number of channels, and H and W
  are the height and width. Note that this is for the 2D image.
  Otherwise the size is (N x C x D1 x D2 ... x Dn). Optionally, if
  dimension denotation is in effect, the operation expects the input
  data tensor to arrive with the dimension denotation of [DATA_BATCH,
  DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
* **W** (heterogeneous) - **T**:
  The weight tensor that will be used in the convolutions; has size (M
  x C/group x kH x kW), where C is the number of channels, kH and kW
  are the height and width of the kernel, and M is the number of
  feature maps. For more than 2 dimensions, the kernel shape will be
  (M x C/group x k1 x k2 x ... x kn), where (k1 x k2 x ... kn) is the
  dimension of the kernel. Optionally, if dimension denotation is in
  effect, the operation expects the weight tensor to arrive with the
  dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL,
  FILTER_SPATIAL, FILTER_SPATIAL ...]. Assuming zero-based indices for
  the shape array, X.shape[1] == (W.shape[1] * group) == C and
  W.shape[0] mod G == 0. In other words, FILTER_IN_CHANNEL multiplied
  by the number of groups should be equal to DATA_CHANNEL, and the
  number of feature maps M should be a multiple of the number of
  groups G.
* **B** (optional, heterogeneous) - **T**:
  Optional 1D bias to be added to the convolution; has size of M.
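The shape relations between X, W, and group stated above can be verified mechanically. A minimal sketch (the function name `check_conv_shapes` is hypothetical, not part of any ONNX API):

```python
def check_conv_shapes(x_shape, w_shape, group=1):
    """Validate the stated constraints:
    X.shape[1] == W.shape[1] * group == C, and W.shape[0] mod group == 0."""
    c = x_shape[1]                          # DATA_CHANNEL
    m = w_shape[0]                          # number of feature maps M
    if c != w_shape[1] * group:
        raise ValueError(
            f"expected C == W.shape[1] * group, got {c} != {w_shape[1]} * {group}")
    if m % group != 0:
        raise ValueError(f"M ({m}) must be a multiple of group ({group})")

# Grouped conv: C=4 input channels, group=2, weight (M=6, C/group=2, kH=3, kW=3)
check_conv_shapes((1, 4, 8, 8), (6, 2, 3, 3), group=2)   # passes silently
```

With group=1 the same shapes would fail, since C=4 but W.shape[1] * group = 2.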
**Outputs**

* **Y** (heterogeneous) - **T**:
  Output data tensor that contains the result of the convolution. The
  output dimensions are functions of the kernel size, stride size, and
  pad lengths.
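For explicit pads (auto_pad NOTSET), the dependence of the output dimensions on kernel size, strides, pads, and dilations is the standard convolution formula, out[i] = floor((in[i] + pad_begin[i] + pad_end[i] - ((kernel[i] - 1) * dilation[i] + 1)) / stride[i]) + 1. A sketch consistent with the attributes above (not quoted from the spec text):

```python
def conv_out_dim(in_size, kernel, pad_begin=0, pad_end=0, stride=1, dilation=1):
    """One spatial axis of Y for explicit padding."""
    eff_kernel = (kernel - 1) * dilation + 1      # dilated kernel extent
    return (in_size + pad_begin + pad_end - eff_kernel) // stride + 1

# 32-wide input, 3-wide kernel, pads=1 on both sides, stride 1 -> size preserved
print(conv_out_dim(32, 3, pad_begin=1, pad_end=1))            # 32
# Same settings with stride 2 halve the axis
print(conv_out_dim(32, 3, pad_begin=1, pad_end=1, stride=2))  # 16
```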
**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.