# tf2onnx - Convert TensorFlow models to ONNX.
| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |
## Supported Versions
### ONNX
tensorflow-onnx will use the ONNX version installed on your system, or install the latest ONNX version if none is found.
We support opset 6 to 11. By default we use opset 8 for the resulting ONNX graph since most runtimes will support opset 8.
Support for future opsets is added as they are released.
If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 11```.
### TensorFlow
We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.
There is now ```experimental support for tf-2.x```. Basic unit tests are passing as well as control flow.
All unit tests are running in eager mode; after execution we take the python function, convert it to a graph and then convert that graph to ONNX.
If running under tf-2.x we use the tensorflow V2 control flow.
You can install tf2onnx on top of tf-1.x or tf-2.x and convert tf-1.x or tf-2.x models.
### Python
We support Python ```3.6```, ```3.7``` and ```3.8```. tf2onnx-1.5.4 was the last release to support Python 3.5.
## Status
We support many TensorFlow models. Support for Fully Connected, Convolutional and dynamic LSTM networks is mature.
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml).
You can find a list of supported Tensorflow ops and their mapping to ONNX [here](support_status.md).
Tensorflow has broad functionality and occasionally mapping it to ONNX creates issues.
We try to document the common issues we run into in the [Troubleshooting Guide](Troubleshooting.md).
## Prerequisites
### TensorFlow
If you don't have tensorflow installed already, install the desired tensorflow build, for example:
```
pip install tensorflow
or
pip install tensorflow-gpu
```
### (Optional) Runtime
If you want to run tests, install a runtime that can run ONNX models. For example:
ONNX Runtime (available for Linux, Windows, and Mac):
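
```
pip install onnxruntime
```
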
For pytorch/caffe2, follow the instructions here:
```https://pytorch.org/```
We tested with pytorch/caffe2 and onnxruntime and unit tests are passing for those.
Once dependencies are installed, from the tensorflow-onnx folder call:
```python setup.py install```
or
```python setup.py develop```
tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.
To create a distribution:
```python setup.py bdist_wheel```
## Getting started
To get started with the `tensorflow-onnx` converter, provide the name of your TensorFlow model directory (where the model is in `saved model` format) and a name for the ONNX output file to the `tf2onnx.convert` command, for example (`tensorflow-model-path` stands in for your model directory):
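
```
python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx
```
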
The above command uses a default of `7` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset, you can provide the `--opset` argument on the command line, for example:
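
```
python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 11 --output model.onnx
```
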
If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.
For `graphdef` format:
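
```
python -m tf2onnx.convert --graphdef tensorflow-model-graphdef-file --output model.onnx --inputs input0:0 --outputs output0:0
```
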
If your model is not in `saved model` format and you do not know the input and output nodes of the model, you can use the [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) TensorFlow utility. The `summarize_graph` tool does need to be downloaded and built from source. If you have the option of going to your model provider and obtaining the model in `saved model` format, then we recommend doing so.
You can find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).
## CLI reference
```
python -m tf2onnx.convert
    [--input SOURCE_GRAPHDEF_PB]
    [--graphdef SOURCE_GRAPHDEF_PB]
    [--checkpoint SOURCE_CHECKPOINT]
    [--saved-model SOURCE_SAVED_MODEL]
    [--output TARGET_ONNX_MODEL]
    [--inputs GRAPH_INPUTS]
    [--outputs GRAPH_OUTPUTS]
    [--inputs-as-nchw INPUTS_AS_NCHW]
    [--opset OPSET]
    [--target TARGET]
    [--custom-ops CUSTOM_OPS]
    [--fold_const]
    [--verbose]
```
### Parameters
#### --saved-model
TensorFlow model as saved_model. We expect the path to the saved_model directory. tf2onnx will try to freeze the graph.
#### --input or --graphdef
TensorFlow model as graphdef file. If not already frozen we'll try to freeze the model.
More information about freezing can be found here: [freeze graph tool](#freeze_graph).
#### --checkpoint
TensorFlow model as checkpoint. We expect the path to the .meta file. tf2onnx will try to freeze the graph.
#### --output
The target onnx file path.
#### --inputs, --outputs
Tensorflow model's input/output names, which can be found with the [summarize graph tool](#summarize_graph). Those names typically end with ```:0```, for example ```--inputs input0:0,input1:0```. Inputs and outputs are ***not*** needed for models in saved-model format.
#### --inputs-as-nchw
By default we preserve the image format of inputs (nchw or nhwc) as given in the TensorFlow model. If your host's native format is nchw (for example Windows) while the model was written for nhwc, pass ```--inputs-as-nchw``` and tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
#### --opset
By default we use opset 7 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 5``` would create an onnx graph that uses only ops available in opset 5. Because older opsets have in most cases fewer ops, some models might not convert with an older opset.
#### --target
Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
#### --custom-ops
The runtime may support custom ops that are not defined in onnx. A user can ask the converter to map to custom ops by listing them with the ```--custom-ops``` option. Tensorflow ops listed here will be mapped to a custom op with the same name as the tensorflow op, but in the onnx domain ai.onnx.converters.tensorflow. For example: ```--custom-ops Print``` will insert an op ```Print``` in the onnx domain ```ai.onnx.converters.tensorflow``` into the graph. We also support a python api for custom ops, documented later in this readme.
#### --fold_const
When set, the TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option depending on the model.
Usage example (run the following commands in the tensorflow-onnx root directory; the model path below is illustrative):
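
```
python -m tf2onnx.convert\
    --input tests/models/fc-layers/frozen.pb\
    --inputs X:0\
    --outputs output:0\
    --output tests/models/fc-layers/model.onnx\
    --verbose
```
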
Some models specify placeholders with unknown ranks and dims which cannot be mapped to onnx.
In those cases one can add the shape behind the input name in ```[]```, for example ```--inputs X:0[1,28,28,3]```.
### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs
The model developer will usually know the inputs and outputs of the TensorFlow graph; if not, you can consult TensorFlow's [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) tool, for example (the tool must first be built from source with bazel; the graph path below is illustrative):
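
```
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
```
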
#### <a name="save_pretrained_model"></a>Tool to save pre-trained model
We provide a [utility](tools/save_pretrained_model.py) to save a pre-trained model along with its config.
Put `save_pretrained_model(sess, outputs, feed_inputs, save_dir, model_name)` in your last testing epoch and the pre-trained model and config will be saved under `save_dir/to_onnx`.
Please refer to the example in [tools/save_pretrained_model.py](tools/save_pretrained_model.py) for more information.
Note the minimum required Tensorflow version is r1.6.
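
For illustration, a minimal sketch of how this might look in a test script (the graph, feed values and output directory are placeholders; `save_pretrained_model` is the function defined in [tools/save_pretrained_model.py](tools/save_pretrained_model.py)):

```
import tensorflow as tf
# assumes tools/save_pretrained_model.py is importable, e.g. copied next to
# this script or with tools/ added to PYTHONPATH
from save_pretrained_model import save_pretrained_model

with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    y = tf.add(x, x, name="output")
    feed_inputs = {x: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]}
    sess.run(y, feed_dict=feed_inputs)
    # saves the pre-trained model and its config under ./pretrained/to_onnx
    save_pretrained_model(sess, [y], feed_inputs, "./pretrained", "my_model")
```
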
## Python API Reference
In some cases it will be useful to convert the models from TensorFlow to ONNX from a python script. You can use the following API:
```
import tensorflow as tf
import tf2onnx

with tf.Session() as sess:
    # build a simple graph: output = input + input
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    x_ = tf.add(x, x)
    _ = tf.identity(x_, name="output")
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(sess.graph,
            input_names=["input:0"], output_names=["output:0"])
    model_proto = onnx_graph.make_model("test")
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```
### Creating custom op mappings from python
For complex custom ops that require graph rewrites or input/attribute rewrites, using the python interface to insert a custom op will be the easiest way to accomplish the task.
A dictionary of name->custom_op_handler can be passed to tf2onnx.tfonnx.process_tf_graph. If the op name is found in the graph, the handler will have access to all internal structures and can rewrite what is needed. For example [examples/custom_op_via_python.py](examples/custom_op_via_python.py):
```
import tensorflow as tf
import tf2onnx
from onnx import helper

_TENSORFLOW_DOMAIN = "ai.onnx.converters.tensorflow"

def print_handler(ctx, node, name, args):
    # replace tf.Print() with Identity: keep the first input, drop the others
    node.type = "Identity"
    node.domain = _TENSORFLOW_DOMAIN
    del node.input[1:]
    return node

with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    x_ = tf.Print(x, [x], "hello")
    _ = tf.identity(x_, name="output")
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(sess.graph,
            custom_op_handlers={"Print": print_handler},
            extra_opset=[helper.make_opsetid(_TENSORFLOW_DOMAIN, 1)],
            input_names=["input:0"], output_names=["output:0"])
    model_proto = onnx_graph.make_model("test")
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```
## How tf2onnx works
The converter needs to take care of a few things:
1. Convert the protobuf format. Since the format is similar this step is straightforward.
2. TensorFlow types need to be mapped to their ONNX equivalent.
3. In many cases TensorFlow passes parameters like shapes as inputs where ONNX wants to see them as attributes. Because those values need to be known at conversion time, the converter works on a frozen graph where it can read them as constants.
4. TensorFlow composes some ops out of multiple simpler ops; the converter needs to identify such subgraphs and replace them with their ONNX equivalent.
5. TensorFlow and ONNX differ in the semantics of some ops, for example broadcasting or the nhwc vs nchw data layout, and the converter needs to bridge those differences.
6. There are some ops, like relu6, that are not supported in ONNX but can be composed out of other ONNX ops (see the sketch after this list).
7. ONNX backends are new and their implementations are not complete yet. For some ops the converter generates ops that deal with issues in existing backends.
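
To make item 6 concrete, here is a small standalone sketch (not the converter's actual code) showing how relu6 can be composed from ops that do exist in ONNX, relu6(x) = Min(Relu(x), 6), using the plain onnx helper API:

```
import onnx
from onnx import helper, TensorProto

# relu6 does not exist in ONNX, so compose it: relu6(x) = Min(Relu(x), 6)
six = helper.make_tensor("six", TensorProto.FLOAT, [1], [6.0])
relu = helper.make_node("Relu", ["x"], ["relu_x"])
clip = helper.make_node("Min", ["relu_x", "six"], ["y"])
graph = helper.make_graph(
    [relu, clip], "relu6_composed",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 3])],
    initializer=[six])
model = helper.make_model(graph)
onnx.checker.check_model(model)
```
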
#### Step 1 - start with a frozen graph
tf2onnx starts with a frozen graph. This is because of item 3 above.
#### Step 2 - 1:1 conversion of the protobuf from tensorflow to onnx
tf2onnx first does a simple conversion from the TensorFlow protobuf format to the ONNX protobuf format without looking at individual ops.
We do this so we can use the ONNX graph as internal representation and write helper functions around it.
The code that does the conversion is in tensorflow_to_onnx(). tensorflow_to_onnx() will return the ONNX graph and a dictionary with shape information from TensorFlow. The shape information is helpful in some cases when processing individual ops.
The ONNX graph is wrapped in a Graph object and nodes in the graph are wrapped in a Node object to allow easier graph manipulations on the graph. All code that deals with nodes and graphs is in graph.py.
#### Step 3 - rewrite subgraphs
In the next step we apply graph matching code on the graph to re-write subgraphs for ops like transpose and lstm. For an example, look at rewrite_transpose().
#### Step 4 - process individual ops
In the fourth step we look at individual ops that need attention. The dictionary _OPS_MAPPING will map tensorflow op types to a method that is used to process the op. The simplest case is direct_op() where the op can be taken as is. Whenever possible we try to group ops into common processing, for example all ops that require dealing with broadcasting are mapped to broadcast_op(). For an op that composes the tensorflow op from multiple onnx ops, see relu6_op().
#### Step 5 - final processing
Once all ops are converted, we need to do a topological sort since ONNX requires it. process_tf_graph() is the method that takes care of all of the above steps.
## Extending tf2onnx
If you'd like to contribute and add new conversions to tf2onnx, the process is something like:
1. See if the op fits into one of the existing mappings. If so, adding it to _OPS_MAPPING is all that is needed.
2. If the new op needs extra processing, start a new mapping function.
3. If the tensorflow op is composed of multiple ops, consider using a graph re-write. While this might be a little harder initially, it works better for complex patterns.
4. Add a unit test in tests/test_backend.py. The unit tests mostly create the tensorflow graph, run it and capture the output, then convert to onnx, run against an onnx backend and compare tensorflow and onnx results.
5. If there are pre-trained models that use the new op, consider adding those to test/run_pretrained_models.py.