LRN
LRN - 13
Version
name: LRN
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2 x … x Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk]) ^ beta
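To make the channel windowing concrete, the short sketch below (illustrative only, not part of the specification; C and size are arbitrary sample values) prints the inclusive channel range summed for each output channel:

import math

C, size = 4, 3  # sample values for illustration
for c in range(C):
    lo = max(0, c - math.floor((size - 1) / 2))
    hi = min(C - 1, c + math.ceil((size - 1) / 2))
    print(f"c={c}: sums channels {lo}..{hi}")
# c=0: sums channels 0..1
# c=1: sums channels 0..2
# c=2: sums channels 1..3
# c=3: sums channels 2..3

Note how the window is clipped at the channel boundaries, so edge channels sum over fewer than size neighbors.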
Attributes
alpha: Scaling parameter. Defaults to 0.0001.
beta: The exponent. Defaults to 0.75.
bias: Additive offset in the normalization denominator (see the formula above). Defaults to 1.0.
size (required): The number of channels to sum over.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions are of the form (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output tensor, which has the same shape and type as the input tensor
Type Constraints
T in (tensor(bfloat16), tensor(double), tensor(float), tensor(float16)): Constrain input and output types to float tensors.
Examples
default
import math

import numpy as np
import onnx
alpha = 0.0002
beta = 0.5
bias = 2.0
nsize = 3
node = onnx.helper.make_node(
    "LRN",
    inputs=["x"],
    outputs=["y"],
    alpha=alpha,
    beta=beta,
    bias=bias,
    size=nsize,
)
x = np.random.randn(5, 5, 5, 5).astype(np.float32)
square_sum = np.zeros((5, 5, 5, 5)).astype(np.float32)
for n, c, h, w in np.ndindex(x.shape):
    # Sum of squares over the channel window around c, clipped to [0, C-1] (C = 5 here).
    square_sum[n, c, h, w] = sum(
        x[
            n,
            max(0, c - int(math.floor((nsize - 1) / 2))) : min(
                5, c + int(math.ceil((nsize - 1) / 2)) + 1
            ),
            h,
            w,
        ]
        ** 2
    )
y = x / ((bias + (alpha / nsize) * square_sum) ** beta)
expect(node, inputs=[x], outputs=[y], name="test_lrn")  # expect() is the ONNX node-test helper
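Beyond the expect() comparison, the node can be wrapped in a complete model and run through an inference backend against the NumPy reference above. A minimal sketch, assuming onnxruntime is installed (the graph name and tolerance values are illustrative choices, not part of the spec):

import numpy as np
import onnx
from onnx import TensorProto, helper

# Wrap the LRN node from above in a full model.
graph = helper.make_graph(
    [node],
    "lrn_check",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [5, 5, 5, 5])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [5, 5, 5, 5])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

import onnxruntime as ort

sess = ort.InferenceSession(model.SerializeToString(), providers=["CPUExecutionProvider"])
(out,) = sess.run(None, {"x": x})
np.testing.assert_allclose(out, y, rtol=1e-4, atol=1e-5)  # matches the NumPy reference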
_default
import math

import numpy as np
import onnx
alpha = 0.0001
beta = 0.75
bias = 1.0
nsize = 3
node = onnx.helper.make_node("LRN", inputs=["x"], outputs=["y"], size=3)
x = np.random.randn(5, 5, 5, 5).astype(np.float32)
square_sum = np.zeros((5, 5, 5, 5)).astype(np.float32)
for n, c, h, w in np.ndindex(x.shape):
    # Sum of squares over the channel window around c, clipped to [0, C-1] (C = 5 here).
    square_sum[n, c, h, w] = sum(
        x[
            n,
            max(0, c - int(math.floor((nsize - 1) / 2))) : min(
                5, c + int(math.ceil((nsize - 1) / 2)) + 1
            ),
            h,
            w,
        ]
        ** 2
    )
y = x / ((bias + (alpha / nsize) * square_sum) ** beta)
expect(node, inputs=[x], outputs=[y], name="test_lrn_default")
LRN - 1
Version
name: LRN
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 1.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2 x … x Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk]) ^ beta
Attributes
alpha: Scaling parameter. Defaults to 0.0001.
beta: The exponent. Defaults to 0.75.
bias: Additive offset in the normalization denominator (see the formula above). Defaults to 1.0.
size (required): The number of channels to sum over.
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions are of the form (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output tensor, which has the same shape and type as the input tensor
Type Constraints
T in (tensor(double), tensor(float), tensor(float16)): Constrain input and output types to float tensors.