ScatterND - 11 vs 16

The next section compares an older version of the operator with a newer one, after both definitions have been converted to markdown. Green means an addition to the newer version, red a deletion; everything else is unchanged.

Files changed (1)
  1. ScatterND11 → ScatterND16 +7 -22
ScatterND11 → ScatterND16 RENAMED
```diff
@@ -1 +1 @@
 ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
 and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
 is produced by creating a copy of the input data, and then updating its value to values
 specified by updates at specific index positions specified by indices. Its output shape
 is the same as the shape of data. Note that indices should not have duplicate entries.
 That is, two or more updates for the same index-location is not supported.
+
 indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
 indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
 Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
 update to a single element of the tensor. When k is less than rank(data) each update entry specifies an
 update to a slice of the tensor.
+
 updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
 first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
 The remaining dimensions of updates correspond to the dimensions of the
 replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
 corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
 must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
 of shapes.
+
 The output is calculated via the following equation:
+
     output = np.copy(data)
     update_indices = indices.shape[:-1]
     for idx in np.ndindex(update_indices):
         output[indices[idx]] = updates[idx]
+
 The order of iteration in the above loop is not specified.
 In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
 This ensures that the output value does not depend on the iteration order.
+
-reduction allows specification of an optional reduction operation, which is applied to all values in updates
-tensor into output at the specified indices.
-In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
-then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
-When reduction is set to "add", output is calculated as follows:
-    output = np.copy(data)
-    update_indices = indices.shape[:-1]
-    for idx in np.ndindex(update_indices):
-        output[indices[idx]] += updates[idx]
-When reduction is set to "mul", output is calculated as follows:
-    output = np.copy(data)
-    update_indices = indices.shape[:-1]
-    for idx in np.ndindex(update_indices):
-        output[indices[idx]] *= updates[idx]
 This operator is the inverse of GatherND.
+
```
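The overwrite pseudocode above, together with the `add`/`mul` reduction variants that show up in this diff, can be sketched in NumPy. `scatter_nd` below is a hypothetical reference helper written for illustration, not an API from ONNX or NumPy:

```python
import numpy as np

def scatter_nd(data, indices, updates, reduction="none"):
    # Hypothetical reference implementation of ScatterND semantics.
    # Each k-tuple in indices (k = indices.shape[-1]) selects an element
    # of output when k == rank(data), or a slice when k < rank(data).
    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        target = tuple(indices[idx])
        if reduction == "add":      # reduction variant from the diff
            output[target] += updates[idx]
        elif reduction == "mul":    # reduction variant from the diff
            output[target] *= updates[idx]
        else:                       # "none": plain overwrite; duplicate
            output[target] = updates[idx]  # indices are then unsupported
    return output

# Data from Example 1 below (k == rank(data) == 1: element updates).
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])
print(scatter_nd(data, indices, updates).tolist())
# -> [1, 11, 3, 10, 9, 6, 7, 12]
```

With `reduction="add"` on the same inputs, the updates accumulate into the copied data instead of replacing it, which is why duplicate indices become well-defined for the reduction modes.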
```diff
 Example 1:
 ::
     data    = [1, 2, 3, 4, 5, 6, 7, 8]
     indices = [[4], [3], [1], [7]]
     updates = [9, 10, 11, 12]
     output  = [1, 11, 3, 10, 9, 6, 7, 12]
 Example 2:
 ::
     data    = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
                [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
     indices = [[0], [2]]
     updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
     output  = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
                [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
```
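Example 2 exercises the k < rank(data) case: each 1-tuple in indices names an entire (4, 4) slice of data. A minimal NumPy sketch of that whole-slice replacement, including the updates-shape rule from the description:

```python
import numpy as np

# Inputs from Example 2: k = 1, rank(data) = 3, so each index picks
# a whole (4, 4) slice of data to replace.
data = np.array([[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                 [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
                 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
                 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]])
indices = np.array([[0], [2]])
updates = np.array([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                    [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]])

# Shape rule: updates.shape == indices.shape[:-1] + data.shape[k:]
#           -> (2,) + (4, 4) here.
assert updates.shape == indices.shape[:-1] + data.shape[1:]

output = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    output[tuple(indices[idx])] = updates[idx]  # replaces an entire slice

print(output[0].tolist())
# -> [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]
```

Slices 1 and 3 of data are not named by any index, so they pass through to the output unchanged, matching Example 2's expected result.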
```diff
-**Attributes**
-
-* **reduction**:
-  Type of reduction to apply: none (default), add, mul. 'none': no
-  reduction applied. 'add': reduction using the addition operation.
-  'mul': reduction using the multiplication operation.
-
 **Inputs**
 * **data** (heterogeneous) - **T**:
   Tensor of rank r >= 1.
 * **indices** (heterogeneous) - **tensor(int64)**:
   Tensor of rank q >= 1.
 * **updates** (heterogeneous) - **T**:
   Tensor of rank q + r - indices_shape[-1] - 1.
 **Outputs**
 * **output** (heterogeneous) - **T**:
   Tensor of rank r >= 1.
 **Type Constraints**
 * **T** in (
-  tensor(bfloat16),
   tensor(bool),
   tensor(complex128),
   tensor(complex64),
   tensor(double),
   tensor(float),
   tensor(float16),
   tensor(int16),
   tensor(int32),
   tensor(int64),
   tensor(int8),
   tensor(string),
   tensor(uint16),
   tensor(uint32),
   tensor(uint64),
   tensor(uint8)
   ):
   Constrain input and output types to any tensor type.
```