InstanceNormalization - 1 vs 6
The next section compares an older and a newer version of the same operator after both definitions are converted into markdown text. Green means an addition to the newer version, red means a deletion; anything else is unchanged.
InstanceNormalization1 → InstanceNormalization6
@@ -1 +1 @@
 Carries out instance normalization as described in the paper
 https://arxiv.org/abs/1607.08022.
 y = scale * (x - mean) / sqrt(variance + epsilon) + B,
 where mean and variance are computed per instance per channel.
 **Attributes**
+* **consumed_inputs**:
+  legacy optimization attribute.
 * **epsilon**:
-  The epsilon value to use to avoid division by zero
+  The epsilon value to use to avoid division by zero, default is
+  1e-5f.
 **Inputs**
 * **input** (heterogeneous) - **T**:
+  The input 4-dimensional tensor of shape NCHW.
-  Input data tensor from the previous operator; dimensions for image
-  case are (N x C x H x W), where N is the batch size, C is the number
-  of channels, and H and W are the height and the width of the data.
-  For non image case, the dimensions are in the form of (N x C x D1 x
-  D2 ... Dn), where N is the batch size.
 * **scale** (heterogeneous) - **T**:
   The input 1-dimensional scale tensor of size C.
 * **B** (heterogeneous) - **T**:
   The input 1-dimensional bias tensor of size C.
 **Outputs**
 * **output** (heterogeneous) - **T**:
-  The output tensor of the same shape as input.
+  The output 4-dimensional tensor of the same shape as input.
 **Type Constraints**
 * **T** in (
   tensor(double),
   tensor(float),
   tensor(float16)
 ):
 Constrain input and output types to float tensors.
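The formula in the diff above can be illustrated with a small NumPy sketch (the `instance_norm` helper below is hypothetical, written for this example; it is not part of the ONNX API). Mean and variance are computed per instance and per channel, i.e. over the spatial axes only:

```python
import numpy as np

def instance_norm(x, scale, B, epsilon=1e-5):
    """Reference instance normalization for an N x C x ... tensor.

    Mean and variance are reduced over the spatial axes (everything
    after batch and channel), separately for each (n, c) pair.
    """
    axes = tuple(range(2, x.ndim))          # reduce over H, W (and any D1..Dn)
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    # Broadcast the 1-D per-channel scale and bias across N and the spatial axes.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + B.reshape(shape)

# Example: N=2 instances, C=3 channels, 4x4 spatial grid (the NCHW case).
x = np.random.randn(2, 3, 4, 4).astype(np.float32)
scale = np.ones(3, dtype=np.float32)   # size C
B = np.zeros(3, dtype=np.float32)      # size C
y = instance_norm(x, scale, B)
```

With unit scale and zero bias, each `y[n, c]` slice is normalized to roughly zero mean and unit variance, which is the behavior both operator versions share; the versions differ only in the `consumed_inputs` attribute, the documented epsilon default, and whether inputs beyond 4 dimensions are described.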