InstanceNormalization - 1 vs 6¶
InstanceNormalization1 → InstanceNormalization6
RENAMED
```diff
@@ -1,27 +1,28 @@
 Carries out instance normalization as described in the paper
 https://arxiv.org/abs/1607.08022.
 y = scale * (x - mean) / sqrt(variance + epsilon) + B,
 where mean and variance are computed per instance per channel.
 **Attributes**
-* **consumed_inputs**:
-  legacy optimization attribute.
 * **epsilon**:
-  The epsilon value to use to avoid division by zero
-  1e-5f.
+  The epsilon value to use to avoid division by zero.
 **Inputs**
 * **input** (heterogeneous) - **T**:
-
+  Input data tensor from the previous operator; dimensions for image
+  case are (N x C x H x W), where N is the batch size, C is the number
+  of channels, and H and W are the height and the width of the data.
+  For non image case, the dimensions are in the form of (N x C x D1 x
+  D2 ... Dn), where N is the batch size.
 * **scale** (heterogeneous) - **T**:
   The input 1-dimensional scale tensor of size C.
 * **B** (heterogeneous) - **T**:
   The input 1-dimensional bias tensor of size C.
 **Outputs**
 * **output** (heterogeneous) - **T**:
-  The output
+  The output tensor of the same shape as input.
 **Type Constraints**
 * **T** in (
   tensor(double),
   tensor(float),
   tensor(float16)
 ):
   Constrain input and output types to float tensors.
```
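The formula in the diff, `y = scale * (x - mean) / sqrt(variance + epsilon) + B` with mean and variance computed per instance per channel, can be sketched in NumPy. This is an illustrative reference implementation of the opset-6 semantics, not the ONNX runtime code; the function name is our own:

```python
import numpy as np

def instance_normalization(x, scale, bias, epsilon=1e-5):
    """Illustrative sketch of InstanceNormalization (opset 6 semantics).

    x has shape (N, C, D1, ..., Dn); scale and bias have shape (C,).
    Mean and variance are computed per instance, per channel, over the
    spatial axes D1..Dn.
    """
    axes = tuple(range(2, x.ndim))              # spatial axes only
    mean = x.mean(axis=axes, keepdims=True)     # one mean per (N, C)
    var = x.var(axis=axes, keepdims=True)       # one variance per (N, C)
    # Reshape (C,) parameters to (1, C, 1, ..., 1) so they broadcast.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + bias.reshape(shape)

# Usage: an image-case batch of shape (N x C x H x W) = (2, 3, 4, 4).
x = np.random.randn(2, 3, 4, 4).astype(np.float32)
y = instance_normalization(x, np.ones(3, np.float32), np.zeros(3, np.float32))
print(y.shape)  # same shape as input, per the opset-6 output description
```

With `scale = 1` and `B = 0`, each (instance, channel) slice of the output has mean approximately 0 and variance approximately 1, which is a quick way to sanity-check the per-instance, per-channel reduction axes.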