ScatterND - 16 vs 18

The next section compares an older version of the operator with a newer one, after both definitions are converted into markdown text. Green means an addition in the newer version, red means a deletion. Anything else is unchanged.

Files changed (1)
  1. ScatterND16 → ScatterND18 +10 -20
ScatterND16 → ScatterND18 RENAMED
@@ -1 +1 @@
  ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
  and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
  is produced by creating a copy of the input data, and then updating its value to values
  specified by updates at specific index positions specified by indices. Its output shape
  is the same as the shape of data. Note that indices should not have duplicate entries.
  That is, two or more updates for the same index-location is not supported.
-
  indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
  indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
  Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
  update to a single element of the tensor. When k is less than rank(data) each update entry specifies an
  update to a slice of the tensor.
-
  updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
  first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
  The remaining dimensions of updates correspond to the dimensions of the
  replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
  corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
  must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
  of shapes.
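
The shape constraint above can be illustrated with a few lines of NumPy. This is a sketch for intuition only, not part of either spec version; the variable names are mine. In NumPy slice notation the trailing dimensions of data are `data.shape[k:]`.

```python
import numpy as np

# With r = data.ndim, q = indices.ndim and k = indices.shape[-1], the spec
# requires updates.shape to be the first (q-1) dims of indices concatenated
# with the trailing (r-k) dims of data.
data = np.zeros((4, 4, 4))       # r = 3
indices = np.array([[0], [2]])   # q = 2, k = 1
k = indices.shape[-1]

expected_updates_shape = indices.shape[:-1] + data.shape[k:]
print(expected_updates_shape)    # (2, 4, 4): each update is an (r-k)-d slice
```

Here each of the two index tuples selects a whole 4x4 slice of data, so updates must supply two 4x4 replacement slices.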
-
  The output is calculated via the following equation:
-
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
      output[indices[idx]] = updates[idx]
-
  The order of iteration in the above loop is not specified.
  In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
  This ensures that the output value does not depend on the iteration order.
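
The pseudocode above is nearly runnable NumPy already; a minimal self-contained transcription (a sketch, not an official reference implementation) only needs the index tuple converted from an array to a Python tuple so that it indexes a single position rather than triggering fancy indexing:

```python
import numpy as np

def scatter_nd(data, indices, updates):
    # Direct transcription of the spec's pseudocode: copy data, then write
    # each update at the position named by the matching k-tuple in indices.
    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        # tuple(...) makes indices[idx] address one element/slice of output.
        output[tuple(indices[idx])] = updates[idx]
    return output

# k = 2 = rank(data): each update targets a single element.
result = scatter_nd(np.zeros((2, 2), dtype=np.int64),
                    np.array([[0, 1], [1, 0]]),
                    np.array([5, 7]))
print(result)  # [[0 5]
               #  [7 0]]
```

When k is less than the rank of data, `updates[idx]` is itself an (r-k)-dimensional array and the assignment replaces a whole slice, matching the slice-update case described above.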
-
  reduction allows specification of an optional reduction operation, which is applied to all values in updates
  tensor into output at the specified indices.
  In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
  then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
- When reduction is set to some reduction function f, output is calculated as follows:
+ When reduction is set to "add", output is calculated as follows:
-
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
-     output[indices[idx]] = f(output[indices[idx]], updates[idx])
+     output[indices[idx]] += updates[idx]
-
- where the f is +/*/max/min as specified.
+ When reduction is set to "mul", output is calculated as follows:
-
+ output = np.copy(data)
+ update_indices = indices.shape[:-1]
+ for idx in np.ndindex(update_indices):
+     output[indices[idx]] *= updates[idx]
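
The reduction variants can be sketched in one helper. This is an illustrative transcription of the loops above, not an official implementation; note that with a reduction, duplicate index entries are meaningful: their contributions accumulate instead of overwriting each other in an unspecified order.

```python
import numpy as np

def scatter_nd_reduce(data, indices, updates, reduction="none"):
    # Sketch of the "add"/"mul" reduction semantics described above.
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        target = tuple(indices[idx])
        if reduction == "add":
            output[target] += updates[idx]
        elif reduction == "mul":
            output[target] *= updates[idx]
        else:
            output[target] = updates[idx]
    return output

data = np.array([1.0, 2.0, 3.0, 4.0])
indices = np.array([[1], [1]])  # duplicate index: fine with a reduction
updates = np.array([10.0, 100.0])
# With "add", position 1 accumulates 2 + 10 + 100 = 112, regardless of
# iteration order.
print(scatter_nd_reduce(data, indices, updates, "add"))
```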
  This operator is the inverse of GatherND.
-
- (Opset 18 change): Adds max/min to the set of allowed reduction ops.
-
42
  Example 1:
51
43
  ::
52
44
  data = [1, 2, 3, 4, 5, 6, 7, 8]
53
45
  indices = [[4], [3], [1], [7]]
54
46
  updates = [9, 10, 11, 12]
55
47
  output = [1, 11, 3, 10, 9, 6, 7, 12]
56
48
  Example 2:
57
49
  ::
58
50
  data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
59
51
  [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
60
52
  [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
61
53
  [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
62
54
  indices = [[0], [2]]
63
55
  updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
64
56
  [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
65
57
  output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
66
58
  [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
67
59
  [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
68
60
  [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
69
61
  **Attributes**
70
62
  * **reduction**:
71
- Type of reduction to apply: none (default), add, mul, max, min.
63
+ Type of reduction to apply: none (default), add, mul. 'none': no
72
- 'none': no reduction applied. 'add': reduction using the addition
64
+ reduction applied. 'add': reduction using the addition operation.
65
+ 'mul': reduction using the multiplication operation.
73
- operation. 'mul': reduction using the addition operation. 'max':
74
- reduction using the maximum operation.'min': reduction using the
75
- minimum operation.
76
66
  **Inputs**
77
67
  * **data** (heterogeneous) - **T**:
78
68
  Tensor of rank r >= 1.
79
69
  * **indices** (heterogeneous) - **tensor(int64)**:
80
70
  Tensor of rank q >= 1.
81
71
  * **updates** (heterogeneous) - **T**:
82
72
  Tensor of rank q + r - indices_shape[-1] - 1.
83
73
  **Outputs**
84
74
  * **output** (heterogeneous) - **T**:
85
75
  Tensor of rank r >= 1.
86
76
  **Type Constraints**
87
77
  * **T** in (
88
78
  tensor(bfloat16),
89
79
  tensor(bool),
90
80
  tensor(complex128),
91
81
  tensor(complex64),
92
82
  tensor(double),
93
83
  tensor(float),
94
84
  tensor(float16),
95
85
  tensor(int16),
96
86
  tensor(int32),
97
87
  tensor(int64),
98
88
  tensor(int8),
99
89
  tensor(string),
100
90
  tensor(uint16),
101
91
  tensor(uint32),
102
92
  tensor(uint64),
103
93
  tensor(uint8)
104
94
  ):
105
95
  Constrain input and output types to any tensor type.