ScatterND#

ScatterND - 16#

Version

  • name: ScatterND (GitHub)

  • domain: main

  • since_version: 16

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 16.

Summary

ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it to the values specified by updates at the index positions specified by indices. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; two or more updates for the same index location are not supported.

indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.

indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.

Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor.

updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.
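The shape relation above can be sketched as a small helper (hypothetical, for illustration only; not part of the operator):

```python
def expected_updates_shape(data_shape, indices_shape):
    """Shape that updates must have: indices.shape[:-1] ++ data.shape[k:]."""
    k = indices_shape[-1]  # length of each partial-index tuple
    return tuple(indices_shape[:-1]) + tuple(data_shape[k:])
```

For Example 2 below, a data shape of (4, 4, 4) and an indices shape of (2, 1) give an updates shape of (2, 4, 4).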

The output is calculated via the following equation:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.

reduction allows an optional reduction operation to be specified, which combines each value in the updates tensor with the existing value in output at the specified indices. When reduction is set to “none”, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order. When reduction is set to “add”, output is calculated as follows:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] += updates[idx]

When reduction is set to “mul”, output is calculated as follows:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] *= updates[idx]

This operator is the inverse of GatherND.

Example 1:

data    = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output  = [1, 11, 3, 10, 9, 6, 7, 12]
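Example 1 can be checked directly with NumPy, following the pseudocode above:

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

output = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    # Each indices[idx] is a k-tuple (here k = 1) indexing into data.
    output[tuple(indices[idx])] = updates[idx]
# output → [1, 11, 3, 10, 9, 6, 7, 12]
```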

Example 2:

data    = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output  = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

Attributes

  • reduction: Type of reduction to apply: none (default), add, mul. ‘none’: no reduction applied. ‘add’: reduction using the addition operation. ‘mul’: reduction using the multiplication operation. Default value is 'none'.

Inputs

  • data (heterogeneous) - T: Tensor of rank r >= 1.

  • indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.

  • updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

Outputs

  • output (heterogeneous) - T: Tensor of rank r >= 1.

Type Constraints

  • T in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.

Examples
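The snippets below call a `scatter_nd_impl` helper from the ONNX test utilities; a minimal sketch of such a helper (signature inferred from the calls below, not the actual implementation) could be:

```python
import numpy as np

def scatter_nd_impl(data, indices, updates, reduction='none'):
    # Work on a copy so the input tensor is left untouched.
    output = np.copy(data)
    # Iterate over the first q-1 dimensions of indices; each entry
    # indices[idx] is a k-tuple selecting an element or slice of data.
    for idx in np.ndindex(indices.shape[:-1]):
        target = tuple(indices[idx])
        if reduction == 'add':
            output[target] += updates[idx]
        elif reduction == 'mul':
            output[target] *= updates[idx]
        else:  # 'none': plain overwrite
            output[target] = updates[idx]
    return output
```

With reduction='add' or 'mul' and duplicate indices, updates accumulate instead of overwriting, which is what the scatternd_add and scatternd_multiply cases below exercise.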

scatternd

node = onnx.helper.make_node(
    'ScatterND',
    inputs=['data', 'indices', 'updates'],
    outputs=['y'],
)
data = np.array(
    [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [2]], dtype=np.int64)
updates = np.array(
    [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
#    [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
#     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
#     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
#     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates)
expect(node, inputs=[data, indices, updates], outputs=[output],
       name='test_scatternd')

scatternd_add

node = onnx.helper.make_node(
    'ScatterND',
    inputs=['data', 'indices', 'updates'],
    outputs=['y'],
    reduction='add',
)
data = np.array(
    [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [0]], dtype=np.int64)
updates = np.array(
    [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
#    [[[7, 8, 9, 10], [13, 14, 15, 16], [18, 17, 16, 15], [16, 15, 14, 13]],
#     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
#     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
#     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates, reduction='add')
expect(node, inputs=[data, indices, updates], outputs=[output],
       name='test_scatternd_add')

scatternd_multiply

node = onnx.helper.make_node(
    'ScatterND',
    inputs=['data', 'indices', 'updates'],
    outputs=['y'],
    reduction='mul',
)
data = np.array(
    [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
indices = np.array([[0], [0]], dtype=np.int64)
updates = np.array(
    [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]], dtype=np.float32)
# Expecting output as np.array(
#    [[[5, 10, 15, 20], [60, 72, 84, 96], [168, 147, 126, 105], [128, 96, 64, 32]],
#     [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
#     [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
#     [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
output = scatter_nd_impl(data, indices, updates, reduction='mul')
expect(node, inputs=[data, indices, updates], outputs=[output],
       name='test_scatternd_multiply')

Differences

Compared with ScatterND - 13, version 16 adds the optional reduction attribute (none, add, mul) and the corresponding pseudocode describing the add and mul update rules in the summary. The inputs, outputs, and type constraints are unchanged.

ScatterND - 13#

Version

  • name: ScatterND (GitHub)

  • domain: main

  • since_version: 13

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 13.

Summary

ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it to the values specified by updates at the index positions specified by indices. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; two or more updates for the same index location are not supported.

indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.

indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.

Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor.

updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.

The output is calculated via the following equation:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.

This operator is the inverse of GatherND.

Example 1:

data    = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output  = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

data    = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output  = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

Inputs

  • data (heterogeneous) - T: Tensor of rank r >= 1.

  • indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.

  • updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

Outputs

  • output (heterogeneous) - T: Tensor of rank r >= 1.

Type Constraints

  • T in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.

Differences

Compared with ScatterND - 11, version 13 adds tensor(bfloat16) to the T type constraint. The summary, inputs, and outputs are unchanged.

ScatterND - 11#

Version

  • name: ScatterND (GitHub)

  • domain: main

  • since_version: 11

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 11.

Summary

ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating it to the values specified by updates at the index positions specified by indices. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; two or more updates for the same index location are not supported.

indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.

indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.

Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor.

updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r], where ++ denotes the concatenation of shapes.

The output is calculated via the following equation:

output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.

This operator is the inverse of GatherND.

Example 1:

data    = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output  = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

data    = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output  = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
           [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
           [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
           [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

Inputs

  • data (heterogeneous) - T: Tensor of rank r >= 1.

  • indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1.

  • updates (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

Outputs

  • output (heterogeneous) - T: Tensor of rank r >= 1.

Type Constraints

  • T in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.