ScatterND
ScatterND - 16
Version
name: ScatterND (GitHub)
domain: main
since_version: 16
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 16.
Summary
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it at the index positions specified by indices with the values specified by updates. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; that is, two or more updates to the same index location are not supported.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data) each update entry specifies an update to a slice of the tensor.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.
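The shape rule above can be checked mechanically. The helper below is a small sketch (the function name expected_updates_shape is ours, not part of the spec); it encodes that updates must have shape indices.shape[:-1] followed by the trailing (r-k) dimensions of data:

```python
def expected_updates_shape(data_shape, indices_shape):
    # k is the last dimension of indices; updates must have shape
    # indices.shape[:-1] ++ data.shape[k:] (concatenation of shapes).
    k = indices_shape[-1]
    return tuple(indices_shape[:-1]) + tuple(data_shape[k:])

# In Example 2 below, data has shape (4, 4, 4) and indices has shape (2, 1),
# so k = 1 and updates must have shape (2, 4, 4).
assert expected_updates_shape((4, 4, 4), (2, 1)) == (2, 4, 4)
```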
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
reduction specifies an optional reduction operation that is applied when the values in updates are written into output at the specified indices. In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order. When reduction is set to "add", output is calculated as follows:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] += updates[idx]
When reduction is set to "mul", output is calculated as follows:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] *= updates[idx]
This operator is the inverse of GatherND.
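Taken together, the loops above amount to a single reference routine. The sketch below (the name scatter_nd_ref and the NumPy formulation are ours, not part of the spec) reproduces Example 1 and both reduction modes:

```python
import numpy as np

def scatter_nd_ref(data, indices, updates, reduction="none"):
    # Copy data, then apply each update at the k-tuple index it names.
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        target = tuple(indices[idx])  # k-tuple: a partial index into data
        if reduction == "add":
            output[target] += updates[idx]
        elif reduction == "mul":
            output[target] *= updates[idx]
        else:  # "none": plain assignment; duplicate indices are unsupported
            output[target] = updates[idx]
    return output

# Example 1 from above:
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])
assert scatter_nd_ref(data, indices, updates).tolist() == [1, 11, 3, 10, 9, 6, 7, 12]
```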
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Attributes
reduction: Type of reduction to apply: none (default), add, mul. 'none': no reduction applied. 'add': reduction using the addition operation. 'mul': reduction using the multiplication operation. Default value is 'none'.
Inputs
data (heterogeneous) - T: Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.
updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.
Outputs
output (heterogeneous) - T: Tensor of rank r >= 1.
Type Constraints
T in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.
Examples
scatternd
node = onnx.helper.make_node(
'ScatterND',
inputs=['data', 'indices', 'updates'],
outputs=['y'],
)
data = np.array(
[[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [2]], dtype=np.int64)
updates = np.array(
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
# [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
# [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
# [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
# [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates)
expect(node, inputs=[data, indices, updates], outputs=[output],
name='test_scatternd')
scatternd_add
node = onnx.helper.make_node(
'ScatterND',
inputs=['data', 'indices', 'updates'],
outputs=['y'],
reduction='add',
)
data = np.array(
[[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [0]], dtype=np.int64)
updates = np.array(
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
# [[[7, 8, 9, 10], [13, 14, 15, 16], [18, 17, 16, 15], [16, 15, 14, 13]],
# [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
# [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
# [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates, reduction='add')
expect(node, inputs=[data, indices, updates], outputs=[output],
name='test_scatternd_add')
scatternd_multiply
node = onnx.helper.make_node(
'ScatterND',
inputs=['data', 'indices', 'updates'],
outputs=['y'],
reduction='mul',
)
data = np.array(
[[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [0]], dtype=np.int64)
updates = np.array(
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
# [[[5, 10, 15, 20], [60, 72, 84, 96], [168, 147, 126, 105], [128, 96, 64, 32]],
# [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
# [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
# [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates, reduction='mul')
expect(node, inputs=[data, indices, updates], outputs=[output],
name='test_scatternd_multiply')
Differences
ScatterND - 16 differs from ScatterND - 13 only by the new optional reduction attribute (none, add, mul) and the corresponding add/mul update pseudocode in the summary; the inputs, outputs, and type constraints are unchanged.
ScatterND - 13
Version
name: ScatterND (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it at the index positions specified by indices with the values specified by updates. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; that is, two or more updates to the same index location are not supported.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data) each update entry specifies an update to a slice of the tensor.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
This operator is the inverse of GatherND.
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Inputs
data (heterogeneous) - T: Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.
updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.
Outputs
output (heterogeneous) - T: Tensor of rank r >= 1.
Type Constraints
T in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.
Differences
ScatterND - 13 differs from ScatterND - 11 only by the addition of tensor(bfloat16) to the type constraint T; the summary, inputs, and outputs are unchanged.
ScatterND - 11
Version
name: ScatterND (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it at the index positions specified by indices with the values specified by updates. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; that is, two or more updates to the same index location are not supported.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data) each update entry specifies an update to a slice of the tensor.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
This operator is the inverse of GatherND.
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Inputs
data (heterogeneous) - T: Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.
updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.
Outputs
output (heterogeneous) - T: Tensor of rank r >= 1.
Type Constraints
T in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.