# GatherND - 11 vs 12
The next section compares an older version of the operator with a newer one, after both definitions are converted to markdown text. Green (`+`) marks an addition in the newer version, red (`-`) marks a deletion. Anything else is unchanged.
GatherND11 → GatherND12 (+13, -44, file renamed)
```diff
@@ -1 +1 @@
-Given data tensor of rank r >= 1, indices tensor of rank q >= 1,
+Given data tensor of rank r >= 1, and indices tensor of rank q >= 1, this operator gathers
-slices of data into an output tensor of rank q + r - indices_shape[-1] - 1
+slices of data into an output tensor of rank q + r - indices_shape[-1] - 1.
 indices is an q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data,
 where each element defines a slice of data
-
-batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e the leading b number of dimensions of
-data tensor and indices are representing the batches, and the gather starts from the b+1 dimension.
 Some salient points about the inputs' rank and shape:
 1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
-2) The
+2) The indices_shape[-1] should have a value between 1 (inclusive) and rank r (inclusive)
-3) b < min(q, r) is to be honored.
-
-4) The indices_shape[-1] should have a value between 1 (inclusive) and rank r-b (inclusive)
-
-
+3) All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[...,i] <= data_shape[i] - 1.
 It is an error if any of the index values are out of bounds.
 The output is computed as follows:
 The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.
-1) If indices_shape[-1] > r
+1) If indices_shape[-1] > r => error condition
-2) If indices_shape[-1] == r
+2) If indices_shape[-1] == r, since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor
-containing 1-D tensors of dimension r
+containing 1-D tensors of dimension r. Let us think of each such r ranked tensor as indices_slice.
-
+Each *scalar value* corresponding to data[indices_slice] is filled into the corresponding location of the (q-1)-dimensional tensor
-
+to form the output tensor (Example 1 below)
-3) If indices_shape[-1] < r
+3) If indices_shape[-1] < r, since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor
-containing 1-D tensors of dimension < r
+containing 1-D tensors of dimension < r. Let us think of each such tensors as indices_slice.
-to data[
+Each *tensor slice* corresponding to data[indices_slice , :] is filled into the corresponding location of the (q-1)-dimensional tensor
-to form the output tensor (Examples 2, 3,
+to form the output tensor (Examples 2, 3, and 4 below)
 This operator is the inverse of ScatterND.
 Example 1
-
-batch_dims = 0
 data = [[0,1],[2,3]] # data_shape = [2, 2]
 indices = [[0,0],[1,1]] # indices_shape = [2, 2]
 output = [0,3] # output_shape = [2]
 Example 2
-batch_dims = 0
-
 data = [[0,1],[2,3]] # data_shape = [2, 2]
 indices = [[1],[0]] # indices_shape = [2, 1]
 output = [[2,3],[0,1]] # output_shape = [2, 2]
 Example 3
-
-batch_dims = 0
 data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
 indices = [[0,1],[1,0]] # indices_shape = [2, 2]
 output = [[2,3],[4,5]] # output_shape = [2, 2]
 Example 4
-batch_dims = 0
-
 data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
 indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
 output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
-
-Example 5
-
-batch_dims = 1
-
-data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
-indices = [[1],[0]] # indices_shape = [2, 1]
-
-output = [[2,3],[4,5]] # output_shape = [2, 2]
-
-**Attributes**
-
-* **batch_dims**:
-The number of batch dimensions. The gather of indexing starts from
-dimension of data[batch_dims:]
 **Inputs**
 * **data** (heterogeneous) - **T**:
 Tensor of rank r >= 1.
 * **indices** (heterogeneous) - **tensor(int64)**:
 Tensor of rank q >= 1. All index values are expected to be within
 bounds [-s, s-1] along axis of size s. It is an error if any of the
 index values are out of bounds.
 **Outputs**
 * **output** (heterogeneous) - **T**:
 Tensor of rank q + r - indices_shape[-1] - 1.
 **Type Constraints**
 * **T** in (
 tensor(bool),
 tensor(complex128),
 tensor(complex64),
 tensor(double),
 tensor(float),
 tensor(float16),
 tensor(int16),
 tensor(int32),
 tensor(int64),
 tensor(int8),
 tensor(string),
 tensor(uint16),
 tensor(uint32),
 tensor(uint64),
 tensor(uint8)
 ):
 Constrain input and output types to any tensor type.
```