GatherND - 11 vs 12

GatherND11 → GatherND12 RENAMED (+44 -13)
@@ -1 +1 @@
- Given data tensor of rank r >= 1, and indices tensor of rank q >= 1, this operator gathers
+ Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers
- slices of data into an output tensor of rank q + r - indices_shape[-1] - 1.
+ slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.
  indices is an q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data,
  where each element defines a slice of data
+
+ batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e the leading b number of dimensions of
+ data tensor and indices are representing the batches, and the gather starts from the b+1 dimension.
  Some salient points about the inputs' rank and shape:
  1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
+ 2) The first b dimensions of the shape of indices tensor and data tensor must be equal.
+ 3) b < min(q, r) is to be honored.
+
- 2) The indices_shape[-1] should have a value between 1 (inclusive) and rank r (inclusive)
+ 4) The indices_shape[-1] should have a value between 1 (inclusive) and rank r-b (inclusive)
+
- 3) All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[...,i] <= data_shape[i] - 1.
+ 5) All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[...,i] <= data_shape[i] - 1.
  It is an error if any of the index values are out of bounds.
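The shape constraints added in opset 12 translate directly into code. The following sketch validates input shapes against rules 1) through 4); the helper name `check_gathernd_inputs` is our own illustration, not part of the ONNX API:

```python
def check_gathernd_inputs(data_shape, indices_shape, batch_dims=0):
    # Hypothetical helper: enforce the GatherND-12 input constraints listed above.
    r, q, b = len(data_shape), len(indices_shape), batch_dims
    if r < 1 or q < 1:
        raise ValueError("r >= 1 and q >= 1 are to be honored")
    if b >= min(q, r):
        raise ValueError("b < min(q, r) is to be honored")
    if tuple(data_shape[:b]) != tuple(indices_shape[:b]):
        raise ValueError("first b dims of data and indices must be equal")
    if not 1 <= indices_shape[-1] <= r - b:
        raise ValueError("indices_shape[-1] must be in [1, r-b]")

# Shapes from Example 5 below (r=3, q=2, b=1): valid, no exception raised.
check_gathernd_inputs((2, 2, 2), (2, 1), batch_dims=1)
```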
  The output is computed as follows:
  The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.
- 1) If indices_shape[-1] > r => error condition
+ 1) If indices_shape[-1] > r-b => error condition
- 2) If indices_shape[-1] == r, since the rank of indices is q, indices can be thought of as
+ 2) If indices_shape[-1] == r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors
- containing 1-D tensors of dimension r
+ containing 1-D tensors of dimension r-b, where N is an integer equals to the product of 1 and all the elements in the batch dimensions
- Each *scalar value* corresponding to data[indices_slice]
+ of the indices_shape. Let us think of each such r-b ranked tensor as indices_slice. Each *scalar value* corresponding to data[0:b-1,indices_slice]
- to form the output tensor (Example 1 below)
+ is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below)
- 3) If indices_shape[-1] < r, since the rank of indices is q, indices can be thought of as
+ 3) If indices_shape[-1] < r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensor
- containing 1-D tensors of dimension < r. Let us think of each such tensors as indices_slice.
+ containing 1-D tensors of dimension < r-b. Let us think of each such tensors as indices_slice. Each *tensor slice* corresponding
- Each *tensor slice* corresponding to data[indices_slice , :] is filled into the corresponding location of the (q-1)-dimensional tensor
+ to data[0:b-1, indices_slice , :] is filled into the corresponding location of the (q-b-1)-dimensional tensor
- to form the output tensor (Examples 2, 3, and 4 below)
+ to form the output tensor (Examples 2, 3, 4 and 5 below)
  This operator is the inverse of ScatterND.
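The gather rule just described can be sketched in a few lines of numpy. This is an illustrative reading of the opset-12 semantics, not the normative implementation; the function `gather_nd` is our own sketch, checked here against Examples 1 and 5 from the text:

```python
import numpy as np

def gather_nd(data, indices, batch_dims=0):
    # Sketch of GatherND-12: recurse through the b batch dimensions, then map
    # each index-tuple of length indices_shape[-1] to a slice of data.
    data, indices = np.asarray(data), np.asarray(indices)
    if batch_dims == 0:
        # Each row of the reshaped indices is one index-tuple into data.
        tuples = indices.reshape(-1, indices.shape[-1])
        slices = [data[tuple(t)] for t in tuples]
        return np.array(slices).reshape(
            indices.shape[:-1] + data.shape[indices.shape[-1]:])
    # batch_dims > 0: leading dims are batches; gather within each batch element.
    out = [gather_nd(data[i], indices[i], batch_dims - 1)
           for i in range(data.shape[0])]
    return np.stack(out)

# Example 1 (batch_dims = 0): index-tuples of length r select scalars.
assert gather_nd([[0, 1], [2, 3]], [[0, 0], [1, 1]]).tolist() == [0, 3]
# Example 5 (batch_dims = 1): gather is performed per batch element.
assert gather_nd([[[0, 1], [2, 3]], [[4, 5], [6, 7]]],
                 [[1], [0]], batch_dims=1).tolist() == [[2, 3], [4, 5]]
```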
  Example 1
+
+ batch_dims = 0
  data = [[0,1],[2,3]] # data_shape = [2, 2]
  indices = [[0,0],[1,1]] # indices_shape = [2, 2]
  output = [0,3] # output_shape = [2]
  Example 2
+ batch_dims = 0
+
  data = [[0,1],[2,3]] # data_shape = [2, 2]
  indices = [[1],[0]] # indices_shape = [2, 1]
  output = [[2,3],[0,1]] # output_shape = [2, 2]
  Example 3
+
+ batch_dims = 0
  data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
  indices = [[0,1],[1,0]] # indices_shape = [2, 2]
  output = [[2,3],[4,5]] # output_shape = [2, 2]
  Example 4
+ batch_dims = 0
+
  data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
  indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
  output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+
+ Example 5
+
+ batch_dims = 1
+
+ data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+
+ indices = [[1],[0]] # indices_shape = [2, 1]
+
+ output = [[2,3],[4,5]] # output_shape = [2, 2]
+
+ **Attributes**
+
+ * **batch_dims**:
+   The number of batch dimensions. The gather of indexing starts from
+   dimension of data[batch_dims:]
  **Inputs**
  * **data** (heterogeneous) - **T**:
    Tensor of rank r >= 1.
  * **indices** (heterogeneous) - **tensor(int64)**:
    Tensor of rank q >= 1. All index values are expected to be within
    bounds [-s, s-1] along axis of size s. It is an error if any of the
    index values are out of bounds.
  **Outputs**
  * **output** (heterogeneous) - **T**:
    Tensor of rank q + r - indices_shape[-1] - 1.
  **Type Constraints**
  * **T** in (
    tensor(bool),
    tensor(complex128),
    tensor(complex64),
    tensor(double),
    tensor(float),
    tensor(float16),
    tensor(int16),
    tensor(int32),
    tensor(int64),
    tensor(int8),
    tensor(string),
    tensor(uint16),
    tensor(uint32),
    tensor(uint64),
    tensor(uint8)
    ):
    Constrain input and output types to any tensor type.
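The output shape follows from the rank formula in the opset-12 description, q + r - indices_shape[-1] - 1 - b: the output keeps the indices dimensions except the last, followed by the data dimensions not consumed by the batch dims or the index-tuples. A small sketch (the helper name `gathernd_output_shape` is ours, not part of ONNX):

```python
def gathernd_output_shape(data_shape, indices_shape, batch_dims=0):
    # Output shape per GatherND-12: indices dims minus the last (which includes
    # the b shared batch dims), then the remaining un-indexed data dims.
    b, k = batch_dims, indices_shape[-1]
    return indices_shape[:-1] + data_shape[b + k:]

# Example 1: rank q + r - k - 1 - b = 2 + 2 - 2 - 1 - 0 = 1
assert gathernd_output_shape((2, 2), (2, 2)) == (2,)
# Example 5: rank 2 + 3 - 1 - 1 - 1 = 2
assert gathernd_output_shape((2, 2, 2), (2, 1), batch_dims=1) == (2, 2)
```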