ScatterND - 13 vs 16

Files changed (1)
  1. ScatterND13 → ScatterND16 +21 -7
ScatterND13 → ScatterND16 RENAMED
@@ -1 +1 @@
  ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
  and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
  is produced by creating a copy of the input data, and then updating its value to values
  specified by updates at specific index positions specified by indices. Its output shape
  is the same as the shape of data. Note that indices should not have duplicate entries.
  That is, two or more updates for the same index-location is not supported.
-
  indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
  indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
  Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
  update to a single element of the tensor. When k is less than rank(data) each update entry specifies an
  update to a slice of the tensor.
-
  updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
  first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
  The remaining dimensions of updates correspond to the dimensions of the
  replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
  corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
  must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
  of shapes.
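
As an editorial aside (not part of the spec text above), this shape constraint can be checked directly in numpy; the sketch below, with the made-up helper name check_scatternd_shapes, reads the trailing (r-k) dimensions of data as data.shape[k:] in Python slicing:

::

    import numpy as np

    def check_scatternd_shapes(data, indices, updates):
        # k = indices.shape[-1] is the length of each partial index into data.
        k = indices.shape[-1]
        assert k <= data.ndim, "indices.shape[-1] must not exceed rank(data)"
        # Expected: the first (q-1) dims of indices followed by the trailing (r-k) dims of data.
        expected = indices.shape[:-1] + data.shape[k:]
        return updates.shape == expected

    data = np.zeros((4, 4, 4))                        # r = 3
    indices = np.array([[0], [2]], dtype=np.int64)    # q = 2, k = 1
    updates = np.ones((2, 4, 4))                      # (q-1) dims ++ trailing (r-k) dims
    assert check_scatternd_shapes(data, indices, updates)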
-
  The output is calculated via the following equation:
-
  output = np.copy(data)
  update_indices = indices.shape[:-1]
  for idx in np.ndindex(update_indices):
      output[indices[idx]] = updates[idx]
-
  The order of iteration in the above loop is not specified.
  In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
  This ensures that the output value does not depend on the iteration order.
-
+ reduction allows specification of an optional reduction operation, which is applied to all values in updates
+ tensor into output at the specified indices.
+ In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
+ then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
+ When reduction is set to "add", output is calculated as follows:
+ output = np.copy(data)
+ update_indices = indices.shape[:-1]
+ for idx in np.ndindex(update_indices):
+     output[indices[idx]] += updates[idx]
+ When reduction is set to "mul", output is calculated as follows:
+ output = np.copy(data)
+ update_indices = indices.shape[:-1]
+ for idx in np.ndindex(update_indices):
+     output[indices[idx]] *= updates[idx]
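
Putting the three cases together, a minimal numpy sketch of these semantics (an editorial illustration, not spec text; scatter_nd_reference is a made-up name, and indices[idx] is converted to a tuple so that numpy treats it as a multi-dimensional index rather than fancy indexing):

::

    import numpy as np

    def scatter_nd_reference(data, indices, updates, reduction="none"):
        # Follow the pseudocode above: copy data, then walk the (q-1)-dimensional
        # grid of k-tuples in indices and write or accumulate the matching update slice.
        output = np.copy(data)
        update_indices = indices.shape[:-1]
        for idx in np.ndindex(update_indices):
            target = tuple(indices[idx])      # the k-tuple used as a partial index into data
            if reduction == "add":
                output[target] += updates[idx]
            elif reduction == "mul":
                output[target] *= updates[idx]
            else:                             # "none": plain assignment
                output[target] = updates[idx]
        return output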
  This operator is the inverse of GatherND.
-
  Example 1:
  ::
      data = [1, 2, 3, 4, 5, 6, 7, 8]
      indices = [[4], [3], [1], [7]]
      updates = [9, 10, 11, 12]
      output = [1, 11, 3, 10, 9, 6, 7, 12]
  Example 2:
  ::
      data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
      indices = [[0], [2]]
      updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                 [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
      output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
                [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
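
For instance, Example 1 can be reproduced with plain numpy following the pseudocode above (editorial sketch, not spec text):

::

    import numpy as np

    data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    indices = np.array([[4], [3], [1], [7]], dtype=np.int64)
    updates = np.array([9, 10, 11, 12])

    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        output[tuple(indices[idx])] = updates[idx]

    print(output)   # [ 1 11  3 10  9  6  7 12]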
+
+ **Attributes**
+
+ * **reduction**:
+   Type of reduction to apply: none (default), add, mul. 'none': no
+   reduction applied. 'add': reduction using the addition operation.
+   'mul': reduction using the multiplication operation.
  **Inputs**
  * **data** (heterogeneous) - **T**:
    Tensor of rank r >= 1.
  * **indices** (heterogeneous) - **tensor(int64)**:
    Tensor of rank q >= 1.
  * **updates** (heterogeneous) - **T**:
    Tensor of rank q + r - indices_shape[-1] - 1.
  **Outputs**
  * **output** (heterogeneous) - **T**:
    Tensor of rank r >= 1.
  **Type Constraints**
  * **T** in (
    tensor(bfloat16),
    tensor(bool),
    tensor(complex128),
    tensor(complex64),
    tensor(double),
    tensor(float),
    tensor(float16),
    tensor(int16),
    tensor(int32),
    tensor(int64),
    tensor(int8),
    tensor(string),
    tensor(uint16),
    tensor(uint32),
    tensor(uint64),
    tensor(uint8)
    ):
    Constrain input and output types to any tensor type.
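
As a usage note (not part of the changelog itself), a ScatterND node targeting opset 16 can be built with onnx.helper, passing the new reduction attribute as a keyword argument; the tensor names below are arbitrary:

::

    from onnx import helper

    # reduction is new in opset 16 and defaults to "none" when omitted,
    # which matches the opset-13 behaviour.
    node = helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["output"],
        reduction="add",
    )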