ScatterND - 11 vs 18#

The next section compares an older version of the operator with a newer one after both definitions are converted to markdown text. Green marks additions in the newer version, red marks deletions. Everything else is unchanged.

Files changed (1)
  1. ScatterND11 → ScatterND18 +0 -25
ScatterND11 → ScatterND18 RENAMED
@@ -1 +1 @@
  ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
  and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
  is produced by creating a copy of the input data, and then updating its value to values
  specified by updates at specific index positions specified by indices. Its output shape
  is the same as the shape of data. Note that indices should not have duplicate entries.
  That is, two or more updates for the same index-location is not supported.
  indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
  indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
  Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
  update to a single element of the tensor. When k is less than rank(data) each update entry specifies an
  update to a slice of the tensor.
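The element-vs-slice distinction above can be illustrated with a small NumPy sketch (illustrative only, not part of the spec):

```python
import numpy as np

# Illustrative sketch (not part of the spec): how k = indices.shape[-1]
# selects single elements vs. slices of data.
data = np.zeros((4, 3))                 # r = rank(data) = 2

# k == rank(data): each 2-tuple addresses one element.
full_index = (1, 2)
data[full_index] = 9.0                  # scalar update

# k < rank(data): each 1-tuple addresses a whole row,
# i.e. a slice of rank r - k = 1.
partial_index = (3,)
data[partial_index] = np.array([7.0, 7.0, 7.0])   # slice update
```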
  updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
  first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
  The remaining dimensions of updates correspond to the dimensions of the
  replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
  corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
  must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
  of shapes.
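A minimal sketch of this shape constraint, using a hypothetical helper name; with Python's half-open slicing, the trailing (r-k) dimensions of data are simply `data.shape[k:]`, and tuple concatenation plays the role of ++:

```python
import numpy as np

# Hypothetical helper (not from the spec) spelling out the constraint:
# updates.shape == indices.shape[:-1] ++ trailing (r - k) dims of data.
def expected_updates_shape(data, indices):
    k = indices.shape[-1]
    return indices.shape[:-1] + data.shape[k:]  # tuple concat stands in for ++

data = np.zeros((4, 4, 4))                   # r = 3
indices = np.zeros((2, 1), dtype=np.int64)   # q = 2, k = 1
# Expected updates shape: (2,) ++ (4, 4) = (2, 4, 4)
```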
  The output is calculated via the following equation:
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
      output[indices[idx]] = updates[idx]
  The order of iteration in the above loop is not specified.
  In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
  This ensures that the output value does not depend on the iteration order.
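The pseudocode above can be made directly runnable; `scatter_nd` is a hypothetical helper name, and the `tuple()` conversion (left implicit in the spec's pseudocode) turns each k-tuple into a NumPy multi-axis index:

```python
import numpy as np

def scatter_nd(data, indices, updates):
    # Runnable version of the spec's update loop. Assumes indices has no
    # duplicate entries, so the (unspecified) iteration order is irrelevant.
    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[tuple(indices[idx])] = updates[idx]
    return output
```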
- reduction allows specification of an optional reduction operation, which is applied to all values in updates
- tensor into output at the specified indices.
- In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
- then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
- When reduction is set to some reduction function f, output is calculated as follows:
-
- output = np.copy(data)
- update_indices = indices.shape[:-1]
- for idx in np.ndindex(update_indices):
-     output[indices[idx]] = f(output[indices[idx]], updates[idx])
-
- where the f is +/*/max/min as specified.
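The reduction semantics described above can be sketched as a runnable variant (hypothetical helper name and dispatch table; `np.maximum`/`np.minimum` stand in for max/min so slice updates also work):

```python
import numpy as np

def scatter_nd_reduce(data, indices, updates, reduction="none"):
    # Sketch of the reduction variant (hypothetical helper, not the spec's API).
    f = {"none": lambda old, new: new, "add": np.add,
         "mul": np.multiply, "max": np.maximum, "min": np.minimum}[reduction]
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        loc = tuple(indices[idx])
        output[loc] = f(output[loc], updates[idx])
    return output
```

Note that with a reduction such as "add", duplicate index entries accumulate deterministically rather than leaving the result iteration-order dependent.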
-
  This operator is the inverse of GatherND.
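One sense of this inverse relationship can be checked with a small sketch: gathering with the same indices after a scatter recovers the updates (both loops are written by hand here, not library calls):

```python
import numpy as np

data = np.zeros(5)
indices = np.array([[0], [2]])
updates = np.array([1.5, 2.5])

# Hand-written ScatterND update loop:
scattered = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    scattered[tuple(indices[idx])] = updates[idx]

# Hand-written GatherND over the same indices recovers the updates:
gathered = np.array([scattered[tuple(i)] for i in indices])
```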
-
- (Opset 18 change): Adds max/min to the set of allowed reduction ops.
  Example 1:
  ::
  data = [1, 2, 3, 4, 5, 6, 7, 8]
  indices = [[4], [3], [1], [7]]
  updates = [9, 10, 11, 12]
  output = [1, 11, 3, 10, 9, 6, 7, 12]
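Example 1 can be reproduced with the spec's update loop (the `tuple()` conversion is an implementation detail of NumPy indexing):

```python
import numpy as np

# Example 1: k == rank(data) == 1, so each index updates a single element.
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

output = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    output[tuple(indices[idx])] = updates[idx]
```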
  Example 2:
  ::
  data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
          [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
          [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
          [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
  indices = [[0], [2]]
  updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
             [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
  output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
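Example 2 exercises the slice case (k = 1 < r = 3, so each index replaces a whole (4, 4) slice); the same loop reproduces it:

```python
import numpy as np

# Example 2: k = 1 < r = 3, so each 1-tuple index selects a (4, 4) slice.
data = np.array([[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                 [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
                 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]])
indices = np.array([[0], [2]])
updates = np.array([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                    [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]])

output = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    output[tuple(indices[idx])] = updates[idx]
# Slices 0 and 2 are replaced; slices 1 and 3 are untouched.
```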
- **Attributes**
-
- * **reduction**:
- Type of reduction to apply: none (default), add, mul, max, min.
- 'none': no reduction applied. 'add': reduction using the addition
- operation. 'mul': reduction using the multiplication operation. 'max':
- reduction using the maximum operation. 'min': reduction using the
- minimum operation.
  **Inputs**
  * **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
  * **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
  * **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.
  **Outputs**
  * **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
  **Type Constraints**
  * **T** in (
- tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.