LRN
LRN - 13
Version
name: LRN (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2 x … x Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk] ) ^ beta
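As a concrete check of the window arithmetic: with size = 3, floor((size - 1) / 2) = 1 and ceil((size - 1) / 2) = 1, so the region for channel c is {max(0, c - 1), …, min(C - 1, c + 1)}. In a tensor with C = 5 channels, channel 0 is therefore normalized over channels {0, 1} (clipped at the lower edge), while channel 2 is normalized over channels {1, 2, 3}.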
Attributes
alpha: Scaling parameter. Default value is 9.999999747378752e-05.
beta: The exponent. Default value is 0.75.
bias: Default value is 1.0.
size (required): The number of channels to sum over.
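The attributes correspond one-to-one to keyword arguments of onnx.helper.make_node. As a minimal sketch (the tensor names 'x' and 'y' are illustrative), a node that spells out every attribute instead of relying on the defaults:

```python
import onnx

# Hypothetical LRN node with every attribute set explicitly; alpha,
# beta and bias are shown at their documented defaults, while size is
# required and has no default.
node = onnx.helper.make_node(
    'LRN',
    inputs=['x'],
    outputs=['y'],
    alpha=0.0001,  # scaling parameter
    beta=0.75,     # exponent
    bias=1.0,
    size=3,        # number of channels to sum over
)
```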
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output tensor, which has the same shape and type as the input tensor.
Type Constraints
T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Examples
default
```python
import math

import numpy as np
import onnx

# expect is the helper from ONNX's backend test suite
# (onnx.backend.test.case.node); it checks the node against the
# reference outputs computed below.
from onnx.backend.test.case.node import expect

alpha = 0.0001
beta = 0.75
bias = 1.0
nsize = 3
node = onnx.helper.make_node(
    'LRN',
    inputs=['x'],
    outputs=['y'],
    size=3
)
x = np.random.randn(5, 5, 5, 5).astype(np.float32)
square_sum = np.zeros((5, 5, 5, 5)).astype(np.float32)
for n, c, h, w in np.ndindex(x.shape):
    # Sum of squares over the channel window around c, clipped to [0, C).
    square_sum[n, c, h, w] = sum(
        x[n,
          max(0, c - int(math.floor((nsize - 1) / 2))):
              min(5, c + int(math.ceil((nsize - 1) / 2)) + 1),
          h,
          w] ** 2)
y = x / ((bias + (alpha / nsize) * square_sum) ** beta)
expect(node, inputs=[x], outputs=[y],
       name='test_lrn_default')
```
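As a sanity check beyond the reference computation, the node can be wrapped in a one-node model and run through an ONNX runtime. This is a sketch rather than part of the official example; it assumes onnxruntime is installed and reuses node, x and y from above (the graph and model names are illustrative):

```python
import numpy as np
import onnx
import onnxruntime

# Wrap the single LRN node in a graph and model.
graph = onnx.helper.make_graph(
    [node],
    'lrn_check',
    [onnx.helper.make_tensor_value_info('x', onnx.TensorProto.FLOAT, [5, 5, 5, 5])],
    [onnx.helper.make_tensor_value_info('y', onnx.TensorProto.FLOAT, [5, 5, 5, 5])],
)
model = onnx.helper.make_model(
    graph, opset_imports=[onnx.helper.make_opsetid('', 13)])
onnx.checker.check_model(model)

sess = onnxruntime.InferenceSession(
    model.SerializeToString(), providers=['CPUExecutionProvider'])
(y_run,) = sess.run(None, {'x': x})
# The runtime's LRN should match the formula-based reference output.
np.testing.assert_allclose(y_run, y, rtol=1e-5)
```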
Differences
The two columns of the original side-by-side diff are identical for the Summary, Attributes, Inputs, and Outputs sections. The only change in LRN - 13 relative to LRN - 1 is in the Type Constraints, where tensor(bfloat16) is added to the allowed types for T:

```diff
 * **T** in (
+  tensor(bfloat16),
   tensor(double),
   tensor(float),
   tensor(float16)
   ):
   Constrain input and output types to float tensors.
```
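To check which LRN schema version an installed onnx build resolves to for a given opset import, the operator registry can be queried; a small sketch, assuming the onnx.defs.get_schema overload that takes a maximum opset version:

```python
import onnx.defs

# Latest registered LRN schema in this build.
print(onnx.defs.get_schema('LRN').since_version)

# Schema resolved for a model importing opset 12: falls back to LRN - 1.
print(onnx.defs.get_schema('LRN', max_inclusive_version=12).since_version)  # 1
```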
LRN - 1
Version
name: LRN (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2 x … x Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk] ) ^ beta
Attributes
alpha: Scaling parameter. Default value is 9.999999747378752e-05.
beta: The exponent. Default value is 0.75.
bias: Default value is 1.0.
size (required): The number of channels to sum over.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output tensor, which has the same shape and type as the input tensor.
Type Constraints
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.