ScatterND - 16 vs 18

Files changed (1):
1. ScatterND16 → ScatterND18 (+20 -10) RENAMED
@@ -1 +1 @@
  ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
  and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
  is produced by creating a copy of the input data, and then updating its value to values
  specified by updates at specific index positions specified by indices. Its output shape
  is the same as the shape of data. Note that indices should not have duplicate entries.
  That is, two or more updates for the same index-location is not supported.
+
  indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
  indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
  Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
  update to a single element of the tensor. When k is less than rank(data) each update entry specifies an
  update to a slice of the tensor.
+
  updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
  first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
  The remaining dimensions of updates correspond to the dimensions of the
  replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
  corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
  must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
  of shapes.
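The shape constraint above can be checked mechanically. A minimal sketch in Python (the helper name `expected_updates_shape` is mine, not part of the spec; it uses ordinary exclusive-end Python slicing for what the spec writes as `indices.shape[0:q-1] ++ data.shape[k:r-1]`):

```python
# Compute the required updates shape for ScatterND inputs:
# updates.shape must be indices.shape[:-1] concatenated with
# data.shape[k:], where k = indices.shape[-1].
def expected_updates_shape(data_shape, indices_shape):
    k = indices_shape[-1]
    return tuple(indices_shape[:-1]) + tuple(data_shape[k:])

# Example 2 below: data is 4x4x4, indices is 2x1, so k = 1 and each
# update entry is a whole 4x4 slice.
print(expected_updates_shape((4, 4, 4), (2, 1)))  # → (2, 4, 4)
```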
+
  The output is calculated via the following equation:
+
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
      output[indices[idx]] = updates[idx]
+
  The order of iteration in the above loop is not specified.
  In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
  This ensures that the output value does not depend on the iteration order.
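The pseudocode above runs almost verbatim in NumPy. A self-contained sketch (the function name `scatter_nd` is mine; `tuple(...)` turns each k-tuple row of indices into a multi-dimensional index):

```python
import numpy as np

# Direct transcription of the spec's update loop.
def scatter_nd(data, indices, updates):
    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[tuple(indices[idx])] = updates[idx]
    return output

# Example 1 from this document:
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])
print(scatter_nd(data, indices, updates).tolist())  # → [1, 11, 3, 10, 9, 6, 7, 12]
```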
+
  reduction allows specification of an optional reduction operation, which is applied to all values in updates
  tensor into output at the specified indices.
  In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
  then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
- When reduction is set to "add", output is calculated as follows:
+ When reduction is set to some reduction function f, output is calculated as follows:
+
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
-     output[indices[idx]] += updates[idx]
+     output[indices[idx]] = f(output[indices[idx]], updates[idx])
+
+ where f is +, *, max, or min as specified.
+
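The generic reduction form above can be run as-is by passing a NumPy ufunc for f. A sketch (the name `scatter_nd_reduce` is mine; note that with a reduction, duplicate indices are well-defined because f is applied cumulatively):

```python
import numpy as np

# Reduction variant of the spec's update loop: instead of overwriting,
# combine the existing output value with the update via f.
def scatter_nd_reduce(data, indices, updates, f):
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        i = tuple(indices[idx])
        output[i] = f(output[i], updates[idx])
    return output

data = np.array([1.0, 2.0, 3.0])
indices = np.array([[0], [0], [2]])   # duplicate index 0 is fine here
updates = np.array([5.0, 4.0, 1.0])

print(scatter_nd_reduce(data, indices, updates, np.add).tolist())      # → [10.0, 2.0, 4.0]
print(scatter_nd_reduce(data, indices, updates, np.maximum).tolist())  # → [5.0, 2.0, 3.0]
```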
- When reduction is set to "mul", output is calculated as follows:
- output = np.copy(data)
- update_indices = indices.shape[:-1]
- for idx in np.ndindex(update_indices):
-     output[indices[idx]] *= updates[idx]
  This operator is the inverse of GatherND.
+
+ (Opset 18 change): Adds max/min to the set of allowed reduction ops.
+
  Example 1:
  ::
      data = [1, 2, 3, 4, 5, 6, 7, 8]
      indices = [[4], [3], [1], [7]]
      updates = [9, 10, 11, 12]
      output = [1, 11, 3, 10, 9, 6, 7, 12]
  Example 2:
  ::
      data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
      indices = [[0], [2]]
      updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                 [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
      output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
                [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
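Example 2 exercises the k < rank(data) case: k = 1 and r = 3, so each update entry replaces an entire 4x4 slice data[i]. A self-contained check of the same slice semantics (it repeats the pseudocode transcription so it runs on its own; simpler constant slices stand in for the example's data):

```python
import numpy as np

# Same transcription of the spec's update loop as before.
def scatter_nd(data, indices, updates):
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        output[tuple(indices[idx])] = updates[idx]
    return output

# k = 1 with rank-3 data: each update entry is a whole 4x4 slice.
data = np.zeros((4, 4, 4), dtype=np.int64)
updates = np.stack([np.full((4, 4), 5), np.full((4, 4), 1)])
out = scatter_nd(data, np.array([[0], [2]]), updates)
print(out[0, 0, 0], out[2, 0, 0], out[1, 0, 0])  # → 5 1 0
```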
  **Attributes**
  * **reduction**:
- Type of reduction to apply: none (default), add, mul. 'none': no
- reduction applied. 'add': reduction using the addition operation.
- 'mul': reduction using the multiplication operation.
+ Type of reduction to apply: none (default), add, mul, max, min.
+ 'none': no reduction applied. 'add': reduction using the addition
+ operation. 'mul': reduction using the multiplication operation. 'max':
+ reduction using the maximum operation. 'min': reduction using the
+ minimum operation.
  **Inputs**
  * **data** (heterogeneous) - **T**:
    Tensor of rank r >= 1.
  * **indices** (heterogeneous) - **tensor(int64)**:
    Tensor of rank q >= 1.
  * **updates** (heterogeneous) - **T**:
    Tensor of rank q + r - indices_shape[-1] - 1.
  **Outputs**
  * **output** (heterogeneous) - **T**:
    Tensor of rank r >= 1.
  **Type Constraints**
  * **T** in (
    tensor(bfloat16),
    tensor(bool),
    tensor(complex128),
    tensor(complex64),
    tensor(double),
    tensor(float),
    tensor(float16),
    tensor(int16),
    tensor(int32),
    tensor(int64),
    tensor(int8),
    tensor(string),
    tensor(uint16),
    tensor(uint32),
    tensor(uint64),
    tensor(uint8)
    ):
    Constrain input and output types to any tensor type.