LRN - 1 vs 13

The next section compares an older to a newer version of the same operator after both definitions are converted into markdown text. Green means an addition to the newer version, red means a deletion. Anything else is unchanged.

Files changed (1)
  1. LRN1 → LRN13 +0 -1
LRN1 → LRN13 RENAMED
@@ -1 +1 @@
  Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).
  It normalizes over local input regions.
  The local region is defined across the channels. For an element X[n, c, d1, ..., dk] in a tensor
  of shape (N x C x D1 x D2, ..., Dk), its region is
  {X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
  square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2),
  where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
  Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk] ) ^ beta
  **Attributes**
  * **alpha**:
  Scaling parameter.
  * **beta**:
  The exponent.
  * **bias**:
  * **size** (required):
  The number of channels to sum over
  **Inputs**
  * **X** (heterogeneous) - **T**:
  Input data tensor from the previous operator; dimensions for image
  case are (N x C x H x W), where N is the batch size, C is the number
  of channels, and H and W are the height and the width of the data.
  For non image case, the dimensions are in the form of (N x C x D1 x
  D2 ... Dn), where N is the batch size. Optionally, if dimension
  denotation is in effect, the operation expects the input data tensor
  to arrive with the dimension denotation of [DATA_BATCH,
  DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].
  **Outputs**
  * **Y** (heterogeneous) - **T**:
  Output tensor, which has the shape and type as input tensor
  **Type Constraints**
  * **T** in (
- tensor(bfloat16),
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.
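
For readers who want to check the square_sum and Y formulas quoted in the diff above, here is a minimal NumPy sketch of the computation, assuming an input of shape (N, C, D1, ..., Dk) with the channel axis at position 1. The function name `lrn_reference` and the example attribute values are illustrative and not taken from the operator definition.

```python
import math
import numpy as np


def lrn_reference(X, size, alpha, beta, bias):
    """Illustrative sketch of the LRN formula quoted above.

    Assumes X has shape (N, C, D1, ..., Dk); the channel axis is axis 1.
    """
    C = X.shape[1]
    square_sum = np.zeros_like(X)
    for c in range(C):
        lo = max(0, c - (size - 1) // 2)               # floor((size - 1) / 2)
        hi = min(C - 1, c + math.ceil((size - 1) / 2))  # ceil((size - 1) / 2)
        # Sum of squares over the local channel window around channel c.
        square_sum[:, c] = (X[:, lo:hi + 1] ** 2).sum(axis=1)
    return X / (bias + alpha / size * square_sum) ** beta


# Example call with illustrative attribute values.
X = np.random.randn(1, 5, 4, 4).astype(np.float32)
Y = lrn_reference(X, size=3, alpha=0.0001, beta=0.75, bias=1.0)
```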
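
The attributes listed above are set on the node itself; below is a minimal sketch of constructing an LRN node with `onnx.helper.make_node`, assuming the onnx Python package is available. The attribute values are illustrative, not defaults taken from this page (only **size** is required).

```python
from onnx import helper

# Minimal sketch of an LRN node with illustrative attribute values.
node = helper.make_node(
    "LRN",
    inputs=["X"],
    outputs=["Y"],
    size=3,
    alpha=0.0001,
    beta=0.75,
    bias=1.0,
)
```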