Commit b64414c

Run Java API Import for TF2.6.0

1 parent 03556e0 · commit b64414c

22 files changed: +173 -336 lines

CONTRIBUTING.md (+58 -1)
@@ -21,7 +21,7 @@ the `dev` profile in your Maven command to use those artifacts instead of buildi
 Modifying the native op generation code (not the annotation processor) or the JavaCPP configuration (not the abstract Pointers) will require a
 complete build to reflect the changes; otherwise `-Pdev` should be fine.

-## JDK 16+
+### JDK 16+

 If you're using JDK 16+, you need to add some exports for the formatter plugin:
@@ -98,6 +98,63 @@ Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.0:test
 This is because the native code crashed (i.e. because of a segfault), and it should have created a dump file somewhere in the project that you can use
 to tell what caused the issue.

## Upgrading TensorFlow Version

To upgrade the version of TensorFlow that is embedded within TensorFlow Java, please follow these steps carefully.

### Upgrading TensorFlow Runtime Library

You can upgrade the version of the TensorFlow library by updating the archive downloaded in the Bazel
[workspace](https://github.com/tensorflow/java/blob/master/tensorflow-core/tensorflow-core-api/WORKSPACE#L19) at build time. Make sure to
update the `urls`, `sha256` and `strip_prefix` fields of the `org_tensorflow` archive rule to reflect the values for the new version, as in the sketch below.
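As an illustration only, here is a minimal sketch of what the updated `org_tensorflow` rule could look like for TF 2.6.0. The URL layout and `strip_prefix` follow the usual GitHub release archive conventions, and the checksum is a placeholder; take the real values from the release you are targeting:

```
# Sketch of the org_tensorflow archive rule in tensorflow-core-api/WORKSPACE.
# The sha256 value below is a placeholder: compute it from the actual archive
# (e.g. with `sha256sum`) before committing the change.
http_archive(
    name = "org_tensorflow",
    urls = [
        "https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.6.0.tar.gz",
    ],
    sha256 = "<sha256 of the v2.6.0 source archive>",
    strip_prefix = "tensorflow-2.6.0",
)
```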
### Ops Classification

After building with the new version of TensorFlow, you might notice that a lot of new operations appeared in the `org.tensorflow.ops.core`
package of the [generated sources](https://github.com/tensorflow/java/tree/master/tensorflow-core/tensorflow-core-api/src/gen/java/org/tensorflow/op/core) of
the `tensorflow-core-api` module. Many of these ops must be reclassified manually after running this initial build.

The actual classification process is a bit arbitrary and based on the good judgement of the developer. The reason is that most ops in Python
are wrapped by a higher-level API and are therefore left unclassified, while in Java they are exposed and can be used directly by
the users.

To classify an op, an `api_def` proto must be added to the `tensorflow-core-api` [folder](https://github.com/tensorflow/java/tree/master/tensorflow-core/tensorflow-core-api/src/bazel/api_def)
reserved for this purpose, optionally redefining the endpoints or the visibility of the op.
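For instance, a hypothetical `api_def` proto exposing a new op `MyNewOp` under the `math` group would look like this (the op name and group are made up; the protos added by this commit, shown further down, are real examples):

```
op {
  graph_op_name: "MyNewOp"
  endpoint {
    name: "math.MyNewOp"
  }
}
```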
Writing these protos and trying to guess the right location for each new operation can become a tedious job, so a utility program called `java_api_import`
has been created to help you with this task. This utility is available under the `bazel-bin` folder of `tensorflow-core-api` after the
initial build. Here is how to invoke it:

```
cd tensorflow-core/tensorflow-core-api
./bazel-bin/java_api_import \
  --java_api_dir=src/bazel/api_def \
  --tf_src_dir=bazel-tensorflow-core-api/external/org_tensorflow \
  --tf_lib_path=bazel-bin/external/org_tensorflow/tensorflow/libtensorflow_cc.<version>.<ext>
```
For each new operation detected (i.e. any operation that does not yet have a valid `api_def` proto), the utility will suggest some possible
package names that could be a good match for its classification (unless a "perfect match" has been found in the Python code, in which case the utility
will classify the op automatically). It is also possible to enter the name of the package manually, and the package can have multiple levels (e.g. `linalg.sparse`). The utility
will then take care of writing the `api_def` proto for each classified operation.

Make sure to completely erase the generated source folder of the `tensorflow-core-api` module before rerunning the build so you can check
whether your ops have been classified properly.
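Assuming the generated sources live under `src/gen` as in the current repository layout (verify the paths in your own checkout before deleting anything), erasing them could look like this:

```
# Remove the generated Java sources so the next build regenerates them from scratch.
cd tensorflow-core/tensorflow-core-api
rm -rf src/gen/java src/gen/annotations
```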
#### Ops Kernel Upgrade

Some operations might just be an upgrade of another existing operation. For instance, there are many versions of the `BatchMatMul` kernel (V1, V2, V3, ...).
When you see that a new op is just an upgrade of another one, make sure that the latest version has a valid endpoint and that all
previous versions of this operation are marked with `visibility: SKIP`.
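This commit itself is an example: the `api_def` of `BatchMatMulV2` loses its endpoint and becomes hidden, while `BatchMatMulV3` takes over the `train.BatchMatMul` endpoint (see the api_def diffs below):

```
# BatchMatMulV2: superseded, hidden from the public API
op {
  graph_op_name: "BatchMatMulV2"
  visibility: SKIP
}

# BatchMatMulV3: now owns the endpoint
op {
  graph_op_name: "BatchMatMulV3"
  endpoint {
    name: "train.BatchMatMul"
  }
}
```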
### Java Protos Classification

TensorFlow Java distributes a large number of proto definitions found in the TensorFlow runtime library as Java classes. Again, new protos might not
be classified properly since they may be lacking the `option java_*` statements at the beginning of their definition. If you notice in the
[generated protos](https://github.com/tensorflow/java/tree/master/tensorflow-core/tensorflow-core-api/src/gen/java/org/tensorflow/proto) of the `tensorflow-core-api`
module that some new proto classes seem to be in the wrong package, create a Bazel patch to that effect to add the missing options.
See [existing patches](https://github.com/tensorflow/java/blob/master/tensorflow-core/tensorflow-core-api/external/tensorflow-proto.patch) for examples.
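As a sketch, a patch hunk adding the standard Java options to a hypothetical `foo.proto` could look like the following (the file name, package and option values are made up for illustration; mirror the style of the existing patches):

```
--- a/tensorflow/core/protobuf/foo.proto
+++ b/tensorflow/core/protobuf/foo.proto
@@ -1,3 +1,6 @@
 syntax = "proto3";

 package tensorflow;
+option java_outer_classname = "FooProtos";
+option java_multiple_files = true;
+option java_package = "org.tensorflow.proto.foo";
```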
## Contributing

### Formatting
@@ -1,6 +1,4 @@
 op {
   graph_op_name: "BatchMatMulV2"
-  endpoint {
-    name: "train.BatchMatMul"
-  }
+  visibility: SKIP
 }

@@ -0,0 +1,6 @@
+op {
+  graph_op_name: "BatchMatMulV3"
+  endpoint {
+    name: "train.BatchMatMul"
+  }
+}

@@ -0,0 +1,3 @@
+op {
+  graph_op_name: "CompositeTensorVariantFromComponents"
+}

@@ -0,0 +1,3 @@
+op {
+  graph_op_name: "CompositeTensorVariantToComponents"
+}

@@ -0,0 +1,6 @@
+op {
+  graph_op_name: "SnapshotDatasetReader"
+  endpoint {
+    name: "data.SnapshotDatasetReader"
+  }
+}

@@ -0,0 +1,6 @@
+op {
+  graph_op_name: "SnapshotNestedDatasetReader"
+  endpoint {
+    name: "data.SnapshotNestedDatasetReader"
+  }
+}

@@ -0,0 +1,6 @@
+op {
+  graph_op_name: "SparseSegmentSumGrad"
+  endpoint {
+    name: "sparse.SparseSegmentSumGrad"
+  }
+}

@@ -0,0 +1,3 @@
+op {
+  graph_op_name: "Window"
+}

@@ -0,0 +1,6 @@
+op {
+  graph_op_name: "XlaRemoveDynamicDimensionSize"
+  endpoint {
+    name: "xla.RemoveDynamicDimensionSize"
+  }
+}

tensorflow-core/tensorflow-core-api/src/gen/annotations/org/tensorflow/op/Ops.java (-76)
@@ -59,7 +59,6 @@
 import org.tensorflow.op.core.BarrierTakeMany;
 import org.tensorflow.op.core.Batch;
 import org.tensorflow.op.core.BatchFunction;
-import org.tensorflow.op.core.BatchMatMulV3;
 import org.tensorflow.op.core.BatchToSpace;
 import org.tensorflow.op.core.BatchToSpaceNd;
 import org.tensorflow.op.core.Bitcast;

@@ -213,7 +212,6 @@
 import org.tensorflow.op.core.Slice;
 import org.tensorflow.op.core.Snapshot;
 import org.tensorflow.op.core.SpaceToBatchNd;
-import org.tensorflow.op.core.SparseSegmentSumGrad;
 import org.tensorflow.op.core.Split;
 import org.tensorflow.op.core.SplitV;
 import org.tensorflow.op.core.Squeeze;

@@ -296,7 +294,6 @@
 import org.tensorflow.op.core.VariableShape;
 import org.tensorflow.op.core.Where;
 import org.tensorflow.op.core.While;
-import org.tensorflow.op.core.XlaRemoveDynamicDimensionSize;
 import org.tensorflow.op.core.Zeros;
 import org.tensorflow.op.core.ZerosLike;
 import org.tensorflow.types.TBool;
@@ -842,42 +839,6 @@ public BatchFunction batchFunction(Iterable<Operand<?>> inTensors,
     return BatchFunction.create(scope, inTensors, capturedTensors, f, numBatchThreads, maxBatchSize, batchTimeoutMicros, Tout, options);
   }

-  /**
-   * Multiplies slices of two tensors in batches.
-   * Multiplies all slices of {@code Tensor} {@code x} and {@code y} (each slice can be
-   * viewed as an element of a batch), and arranges the individual results
-   * in a single output tensor of the same batch size. Each of the
-   * individual slices can optionally be adjointed (to adjoint a matrix
-   * means to transpose and conjugate it) before multiplication by setting
-   * the {@code adj_x} or {@code adj_y} flag to {@code True}, which are by default {@code False}.
-   * <p>The input tensors {@code x} and {@code y} are 2-D or higher with shape {@code [..., r_x, c_x]}
-   * and {@code [..., r_y, c_y]}.
-   * <p>The output tensor is 2-D or higher with shape {@code [..., r_o, c_o]}, where:
-   * <pre>
-   * r_o = c_x if adj_x else r_x
-   * c_o = r_y if adj_y else c_y
-   * </pre>
-   * <p>It is computed as:
-   * <pre>
-   * output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
-   * </pre>
-   * <p><em>NOTE</em>: {@code BatchMatMulV3} supports broadcasting in the batch dimensions. More
-   * about broadcasting
-   * <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a> .
-   *
-   * @param <V> data type for {@code output} output
-   * @param x 2-D or higher with shape {@code [..., r_x, c_x]}.
-   * @param y 2-D or higher with shape {@code [..., r_y, c_y]}.
-   * @param Tout If not spcified, Tout is the same type to input type.
-   * @param options carries optional attribute values
-   * @param <V> data type for {@code BatchMatMulV3} output and operands
-   * @return a new instance of BatchMatMulV3
-   */
-  public <V extends TType> BatchMatMulV3<V> batchMatMulV3(Operand<? extends TType> x,
-      Operand<? extends TType> y, Class<V> Tout, BatchMatMulV3.Options... options) {
-    return BatchMatMulV3.create(scope, x, y, Tout, options);
-  }
-
   /**
    * BatchToSpace for 4-D tensors of type T.
    * This is a legacy version of the more general BatchToSpaceND.
@@ -5856,25 +5817,6 @@ public <T extends TType> SpaceToBatchNd<T> spaceToBatchNd(Operand<T> input,
     return SpaceToBatchNd.create(scope, input, blockShape, paddings);
   }

-  /**
-   * Computes gradients for SparseSegmentSum.
-   * Returns tensor &quot;output&quot; with same shape as grad, except for dimension 0 whose
-   * value is output_dim0.
-   *
-   * @param <T> data type for {@code output} output
-   * @param grad gradient propagated to the SparseSegmentSum op.
-   * @param indices indices passed to the corresponding SparseSegmentSum op.
-   * @param segmentIds segment_ids passed to the corresponding SparseSegmentSum op.
-   * @param outputDim0 dimension 0 of &quot;data&quot; passed to SparseSegmentSum op.
-   * @param <T> data type for {@code SparseSegmentSumGrad} output and operands
-   * @return a new instance of SparseSegmentSumGrad
-   */
-  public <T extends TNumber> SparseSegmentSumGrad<T> sparseSegmentSumGrad(Operand<T> grad,
-      Operand<? extends TNumber> indices, Operand<? extends TNumber> segmentIds,
-      Operand<TInt32> outputDim0) {
-    return SparseSegmentSumGrad.create(scope, grad, indices, segmentIds, outputDim0);
-  }
-
   /**
    * Splits a tensor into {@code num_split} tensors along one dimension.
    *
@@ -8099,24 +8041,6 @@ public While whileOp(Iterable<Operand<?>> input, ConcreteFunction cond, Concrete
     return While.create(scope, input, cond, body, options);
   }

-  /**
-   * Inverse of XlaSetDynamicDimensionSize. Make an xla bounded
-   * <pre>
-   * dynamic dimension into a static dimension. The bound of the size of
-   * dimension `dim_index` becomes the static dimension size.
-   * </pre>
-   *
-   * @param <T> data type for {@code output} output
-   * @param input the input value
-   * @param dimIndex the dimIndex value
-   * @param <T> data type for {@code XlaRemoveDynamicDimensionSize} output and operands
-   * @return a new instance of XlaRemoveDynamicDimensionSize
-   */
-  public <T extends TType> XlaRemoveDynamicDimensionSize<T> xlaRemoveDynamicDimensionSize(
-      Operand<T> input, Operand<TInt32> dimIndex) {
-    return XlaRemoveDynamicDimensionSize.create(scope, input, dimIndex);
-  }
-
   /**
    * Creates a zeroed tensor given its type and shape.
    *

tensorflow-core/tensorflow-core-api/src/gen/annotations/org/tensorflow/op/SparseOps.java (+20)

@@ -52,6 +52,7 @@
 import org.tensorflow.op.sparse.SparseSegmentSqrtNGrad;
 import org.tensorflow.op.sparse.SparseSegmentSqrtNWithNumSegments;
 import org.tensorflow.op.sparse.SparseSegmentSum;
+import org.tensorflow.op.sparse.SparseSegmentSumGrad;
 import org.tensorflow.op.sparse.SparseSegmentSumWithNumSegments;
 import org.tensorflow.op.sparse.SparseSlice;
 import org.tensorflow.op.sparse.SparseSliceGrad;

@@ -1053,6 +1054,25 @@ public <T extends TNumber> SparseSegmentSum<T> sparseSegmentSum(Operand<T> data,
     return SparseSegmentSum.create(scope, data, indices, segmentIds);
   }

+  /**
+   * Computes gradients for SparseSegmentSum.
+   * Returns tensor &quot;output&quot; with same shape as grad, except for dimension 0 whose
+   * value is output_dim0.
+   *
+   * @param <T> data type for {@code output} output
+   * @param grad gradient propagated to the SparseSegmentSum op.
+   * @param indices indices passed to the corresponding SparseSegmentSum op.
+   * @param segmentIds segment_ids passed to the corresponding SparseSegmentSum op.
+   * @param outputDim0 dimension 0 of &quot;data&quot; passed to SparseSegmentSum op.
+   * @param <T> data type for {@code SparseSegmentSumGrad} output and operands
+   * @return a new instance of SparseSegmentSumGrad
+   */
+  public <T extends TNumber> SparseSegmentSumGrad<T> sparseSegmentSumGrad(Operand<T> grad,
+      Operand<? extends TNumber> indices, Operand<? extends TNumber> segmentIds,
+      Operand<TInt32> outputDim0) {
+    return SparseSegmentSumGrad.create(scope, grad, indices, segmentIds, outputDim0);
+  }
+
   /**
    * Computes the sum along sparse segments of a tensor.
    * Like {@code SparseSegmentSum}, but allows missing ids in {@code segment_ids}. If an id is

tensorflow-core/tensorflow-core-api/src/gen/annotations/org/tensorflow/op/TrainOps.java (+6 -5)

@@ -510,16 +510,17 @@ public <T extends TType> ApplyRmsProp<T> applyRmsProp(Operand<T> var, Operand<T>
    * about broadcasting
    * <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a> .
    *
-   * @param <T> data type for {@code output} output
+   * @param <V> data type for {@code output} output
    * @param x 2-D or higher with shape {@code [..., r_x, c_x]}.
    * @param y 2-D or higher with shape {@code [..., r_y, c_y]}.
+   * @param Tout If not spcified, Tout is the same type to input type.
    * @param options carries optional attribute values
-   * @param <T> data type for {@code BatchMatMulV2} output and operands
+   * @param <V> data type for {@code BatchMatMulV3} output and operands
    * @return a new instance of BatchMatMul
    */
-  public <T extends TType> BatchMatMul<T> batchMatMul(Operand<T> x, Operand<T> y,
-      BatchMatMul.Options... options) {
-    return BatchMatMul.create(scope, x, y, options);
+  public <V extends TType> BatchMatMul<V> batchMatMul(Operand<? extends TType> x,
+      Operand<? extends TType> y, Class<V> Tout, BatchMatMul.Options... options) {
+    return BatchMatMul.create(scope, x, y, Tout, options);
   }

   /**
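Note the breaking change above: the generated `train.batchMatMul` wrapper now requires an explicit output type argument. A hypothetical call under the new signature (operand values and names are illustrative, not part of this commit):

```
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.train.BatchMatMul;
import org.tensorflow.types.TFloat32;

public class BatchMatMulExample {
  public static void main(String[] args) {
    Ops tf = Ops.create(); // eager execution environment
    // Two 2x2 matrices, each in a batch of size 1.
    Operand<TFloat32> x = tf.constant(new float[][][] {{{1f, 2f}, {3f, 4f}}});
    Operand<TFloat32> y = tf.constant(new float[][][] {{{5f, 6f}, {7f, 8f}}});
    // The Tout class argument is new with BatchMatMulV3: it selects the output tensor type.
    BatchMatMul<TFloat32> product = tf.train.batchMatMul(x, y, TFloat32.class);
    System.out.println(product.output().shape()); // (1, 2, 2)
  }
}
```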

tensorflow-core/tensorflow-core-api/src/gen/annotations/org/tensorflow/op/XlaOps.java (+19)

@@ -36,6 +36,7 @@
 import org.tensorflow.op.xla.Recv;
 import org.tensorflow.op.xla.Reduce;
 import org.tensorflow.op.xla.ReduceWindow;
+import org.tensorflow.op.xla.RemoveDynamicDimensionSize;
 import org.tensorflow.op.xla.ReplicaId;
 import org.tensorflow.op.xla.Scatter;
 import org.tensorflow.op.xla.SelectAndScatter;

@@ -369,6 +370,24 @@ public <T extends TType, U extends TNumber> ReduceWindow<T> reduceWindow(Operand
     return ReduceWindow.create(scope, input, initValue, windowDimensions, windowStrides, baseDilations, windowDilations, padding, computation);
   }

+  /**
+   * Inverse of XlaSetDynamicDimensionSize. Make an xla bounded
+   * <pre>
+   * dynamic dimension into a static dimension. The bound of the size of
+   * dimension `dim_index` becomes the static dimension size.
+   * </pre>
+   *
+   * @param <T> data type for {@code output} output
+   * @param input the input value
+   * @param dimIndex the dimIndex value
+   * @param <T> data type for {@code XlaRemoveDynamicDimensionSize} output and operands
+   * @return a new instance of RemoveDynamicDimensionSize
+   */
+  public <T extends TType> RemoveDynamicDimensionSize<T> removeDynamicDimensionSize(
+      Operand<T> input, Operand<TInt32> dimIndex) {
+    return RemoveDynamicDimensionSize.create(scope, input, dimIndex);
+  }
+
   /**
    * Replica ID.
    *
