BatchNormalization - 7 vs 15

The next section compares an older version of the operator with a newer one after both definitions are converted into markdown text. Lines prefixed with + (green in the rendered page) are additions in the newer version; lines prefixed with - (red) are deletions. Anything else is unchanged.

BatchNormalization7 → BatchNormalization15 RENAMED
@@ -1 +1 @@
  Carries out batch normalization as described in the paper
  https://arxiv.org/abs/1502.03167. Depending on the mode it is being run,
- There are five required inputs 'X', 'scale', 'B', 'input_mean' and
- 'input_var'.
- Note that 'input_mean' and 'input_var' are expected to be the estimated
- statistics in inference mode (training_mode=False, default),
- and the running statistics in training mode (training_mode=True).
- There are multiple cases for the number of outputs, which we list below:
+ there are multiple cases for the number of outputs, which we list below:
+ Output case #1: Y, mean, var, saved_mean, saved_var (training mode)
+ Output case #2: Y (test mode)
- Output case #1: Y, running_mean, running_var (training_mode=True)
- Output case #2: Y (training_mode=False)
-
- When training_mode=False, extra outputs are invalid.
- The outputs are updated as follows when training_mode=True:
- ::
-
-     running_mean = input_mean * momentum + current_mean * (1 - momentum)
-     running_var = input_var * momentum + current_var * (1 - momentum)
-
-     Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B
-
- where:
-
-     current_mean = ReduceMean(X, axis=all_except_channel_index)
-     current_var = ReduceVar(X, axis=all_except_channel_index)
-
- Notice that ReduceVar refers to the population variance, and it equals
- sum(sqrd(x_i - x_avg)) / N
- where N is the population size (this formula does not use sample size N - 1).
-
- The computation of ReduceMean and ReduceVar uses float to avoid overflow for float16 inputs.
-
- When training_mode=False:
- ::
-
-     Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
-
- For previous (deprecated) non-spatial cases, implementors are advised
- to flatten the input shape to (N x C * D1 * D2 * ... * Dn) before a BatchNormalization Op.
- This operator has **optional** inputs/outputs. See ONNX <https://github.com/onnx/onnx/blob/master/docs/IR.md> for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
+ This operator has **optional** inputs/outputs. See ONNX <https://github.com/onnx/onnx/blob/master/docs/IR.md> for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
  **Attributes**
  * **epsilon**:
    The epsilon value to use to avoid division by zero.
  * **momentum**:
    Factor used in computing the running mean and variance, e.g.,
    running_mean = running_mean * momentum + mean * (1 - momentum).
- * **training_mode**:
-   If set to true, it indicates BatchNormalization is being used for
-   training, and outputs 1, 2, 3, and 4 would be populated.
+ * **spatial**:
+   If true, compute the mean and variance per channel across all
+   activations. If false, compute the mean and variance per activation
+   over each mini-batch.
  **Inputs**
  * **X** (heterogeneous) - **T**:
-   Input data tensor from the previous operator; dimensions are in the
-   form of (N x C x D1 x D2 ... Dn), where N is the batch size, C is
-   the number of channels. Statistics are computed for every channel of
-   C over N and D1 to Dn dimensions. For image data, input dimensions
-   become (N x C x H x W). The op also accepts single dimension input
-   of size N in which case C is assumed to be 1
+   Input data tensor from the previous operator; dimensions for the image
+   case are (N x C x H x W), where N is the batch size, C is the number
+   of channels, and H and W are the height and the width of the data.
+   For the non-image case, the dimensions are in the form of (N x C x D1 x
+   D2 ... Dn), where N is the batch size.
- * **scale** (heterogeneous) - **T1**:
-   Scale tensor of shape (C).
+ * **scale** (heterogeneous) - **T**:
+   If spatial is true, the dimension of scale is (C). If spatial is
+   false, the dimensions of scale are (C x D1 x ... x Dn).
- * **B** (heterogeneous) - **T1**:
-   Bias tensor of shape (C).
+ * **B** (heterogeneous) - **T**:
+   If spatial is true, the dimension of bias is (C). If spatial is
+   false, the dimensions of bias are (C x D1 x ... x Dn).
- * **input_mean** (heterogeneous) - **T2**:
-   running (training) or estimated (testing) mean tensor of shape (C).
+ * **mean** (heterogeneous) - **T**:
+   If spatial is true, the dimension of the running mean (training) or
+   the estimated mean (testing) is (C). If spatial is false, the
+   dimensions of the running mean (training) or the estimated mean
+   (testing) are (C x D1 x ... x Dn).
- * **input_var** (heterogeneous) - **T2**:
-   running (training) or estimated (testing) variance tensor of shape
-   (C).
+ * **var** (heterogeneous) - **T**:
+   If spatial is true, the dimension of the running variance (training)
+   or the estimated variance (testing) is (C). If spatial is false, the
+   dimensions of the running variance (training) or the estimated
+   variance (testing) are (C x D1 x ... x Dn).
  **Outputs**
- Between 1 and 3 outputs.
+ Between 1 and 5 outputs.
  * **Y** (heterogeneous) - **T**:
    The output tensor of the same shape as X
- * **running_mean** (optional, heterogeneous) - **T2**:
+ * **mean** (optional, heterogeneous) - **T**:
    The running mean after the BatchNormalization operator.
- * **running_var** (optional, heterogeneous) - **T2**:
-   The running variance after the BatchNormalization operator. This op
-   uses the population size (N) for calculating variance, and not the
-   sample size N-1.
+ * **var** (optional, heterogeneous) - **T**:
+   The running variance after the BatchNormalization operator.
+ * **saved_mean** (optional, heterogeneous) - **T**:
+   Saved mean used during training to speed up gradient computation.
+ * **saved_var** (optional, heterogeneous) - **T**:
+   Saved variance used during training to speed up gradient
+   computation.
  **Type Constraints**
77
54
  * **T** in (
78
- tensor(bfloat16),
79
55
  tensor(double),
80
56
  tensor(float),
81
57
  tensor(float16)
82
58
  ):
83
- Constrain input and output types to float tensors.
59
+ Constrain input and output types to float tensors.- * **T1** in (
84
- tensor(bfloat16),
85
- tensor(double),
86
- tensor(float),
87
- tensor(float16)
88
- ):
89
- Constrain scale and bias types to float tensors.
90
- * **T2** in (
91
- tensor(bfloat16),
92
- tensor(double),
93
- tensor(float),
94
- tensor(float16)
95
- ):
96
- Constrain mean and variance types to float tensors.
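
To make the two behaviors above concrete, here is a minimal NumPy sketch of the formulas quoted in the diff. It is an illustration, not a reference implementation: the function names are made up, the `epsilon=1e-5` and `momentum=0.9` defaults are assumptions (the diff does not state them), and the v15 helper assumes a rank >= 2 input with channels on axis 1.

```python
import numpy as np

def batch_norm_v15(X, scale, B, input_mean, input_var,
                   epsilon=1e-5, momentum=0.9, training_mode=False):
    """Sketch of the BatchNormalization-15 formulas quoted in the diff."""
    # Per-channel statistics: reduce over every axis except the channel
    # axis (1), matching ReduceMean/ReduceVar(X, axis=all_except_channel_index).
    axes = tuple(i for i in range(X.ndim) if i != 1)
    shape = [1] * X.ndim
    shape[1] = X.shape[1]  # broadcast the (C,) parameters over N, D1..Dn

    def normalize(mean, var):
        return ((X - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + epsilon)
                * scale.reshape(shape) + B.reshape(shape))

    if not training_mode:
        # Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
        return normalize(input_mean, input_var)

    # ReduceVar is the population variance, sum((x_i - x_avg)^2) / N;
    # NumPy's default ddof=0 matches. (The spec also computes these in
    # float to avoid float16 overflow, which this sketch does not model.)
    current_mean = X.mean(axis=axes)
    current_var = X.var(axis=axes)
    running_mean = input_mean * momentum + current_mean * (1 - momentum)
    running_var = input_var * momentum + current_var * (1 - momentum)
    return normalize(current_mean, current_var), running_mean, running_var

def batch_norm_v7_nonspatial(X, scale, B, mean, var, epsilon=1e-5):
    """Sketch of the v7 spatial=0 test-mode case: parameters have shape
    (C x D1 x ... x Dn), so they broadcast over the batch axis only."""
    return (X - mean) / np.sqrt(var + epsilon) * scale + B

# Example: training-mode v15 call on an (N x C x H x W) tensor.
X = np.random.randn(2, 3, 4, 4).astype(np.float32)
ones, zeros = np.ones(3, np.float32), np.zeros(3, np.float32)
Y, new_mean, new_var = batch_norm_v15(X, ones, zeros, zeros, ones,
                                      training_mode=True)
```

Note the structural difference visible in the diff: under v15 the running statistics are inputs (`input_mean`, `input_var`) and, in training mode, updated copies come back as outputs, whereas the v7 five-output training case additionally returns `saved_mean` and `saved_var` to speed up gradient computation.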