Commit a633585

Add getting started section to README

1 parent d91315c commit a633585

1 file changed: README.md (+91, -56 lines)
# tf2onnx - Convert TensorFlow models to ONNX.

| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |

## Supported Versions

### ONNX

tensorflow-onnx will use the ONNX version installed on your system and installs the latest ONNX version if none is found.

We support opset 6 to 11. By default we use opset 8 for the resulting ONNX graph since most runtimes will support opset 8.
Support for future opsets is added as they are released.

If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 11```.

### TensorFlow

We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

There is now ```experimental support for tf-2.x```. Basic unit tests are passing as well as control flow.
All unit tests are running in eager mode and after execution we take the python […]
If running under tf-2.x we are using the tensorflow V2 control flow.

You can install tf2onnx on top of tf-1.x or tf-2.x and convert tf-1.x or tf-2.x models.

### Python

We support Python ```3.6```, ```3.7``` and ```3.8```. tf2onnx-1.5.4 was the last release that supports Python 3.5.

## Status

We support many TensorFlow models. Support for Fully Connected, Convolutional and dynamic LSTM networks is mature.
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml).

You find a list of supported Tensorflow ops and their mapping to ONNX [here](sup…
Tensorflow has broad functionality and occasionally mapping it to ONNX creates issues.
The common issues we run into are documented in the [Troubleshooting Guide](Troubleshooting.md).

## Prerequisites

### TensorFlow

If you don't have tensorflow installed already, install the desired tensorflow build, for example:

```
pip install tensorflow
or
pip install tensorflow-gpu
```

### (Optional) Runtime

If you want to run tests, install a runtime that can run ONNX models. For example:

ONNX Runtime (available for Linux, Windows, and Mac):
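```pip install onnxruntime```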

For pytorch/caffe2, follow the instructions here:

```https://pytorch.org/```

We tested with pytorch/caffe2 and onnxruntime and unit tests are passing for those.

## Installation

### From pypi

```pip install -U tf2onnx```

### Latest from GitHub

```pip install git+https://github.com/onnx/tensorflow-onnx```

### From source

```git clone https://github.com/onnx/tensorflow-onnx```

Once dependencies are installed, from the tensorflow-onnx folder call:

```python setup.py install```

or

```python setup.py develop```

tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.

To create a distribution:

```python setup.py bdist_wheel```

## Getting started

To get started with the `tensorflow-onnx` converter, pass the name of your TensorFlow model directory (where the model is in `saved model` format) and a name for the ONNX output file to the `tf2onnx.convert` command:

```python -m tf2onnx.convert --saved-model tensorflow-model-directory --output model.onnx```

The above command uses a default of `7` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset, you can provide the `--opset` argument to the command.

```python -m tf2onnx.convert --saved-model tensorflow-model-directory --opset 10 --output model.onnx```

If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.

For `graphdef` format:

```python -m tf2onnx.convert --graphdef tensorflow-model-graphdef-file --output model.onnx --inputs input0:0,input1:0 --outputs output0:0```

For `checkpoint` format:

```python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file --output model.onnx --inputs input0:0,input1:0 --outputs output0:0```

If your model is not in `saved model` format and you do not know the input and output nodes of the model, you can use the [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) TensorFlow utility. The `summarize_graph` tool does need to be downloaded and built from source. If you have the option of going to your model provider and obtaining the model in `saved model` format, then we recommend doing so.
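
As a rough guide, building and running the tool from a TensorFlow source checkout looks something like this (the bazel target comes from the graph_transforms documentation; the model path is illustrative):

```
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=my-model.pb
```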

You find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).

## CLI reference

```
python -m tf2onnx.convert
    ...
    [--verbose]
```

### Parameters
#### --saved-model
TensorFlow model as saved_model. We expect the path to the saved_model directory. tf2onnx will try to freeze the graph.
#### --input or --graphdef
TensorFlow model as graphdef file. If not already frozen we'll try to freeze the model.
More information about freezing can be found here: [freeze graph tool](#freeze_graph).
#### --checkpoint
TensorFlow model as checkpoint. We expect the path to the .meta file. tf2onnx will try to freeze the graph.
#### --output
the target onnx file path.
#### --inputs, --outputs
Tensorflow model's input/output names, which can be found with the [summarize graph tool](#summarize_graph). Those names typically end with ```:0```, for example ```--inputs input0:0,input1:0```. inputs and outputs are ***not*** needed for models in saved-model format.
#### --inputs-as-nchw
By default we preserve the image format of inputs (nchw or nhwc) as given in the TensorFlow model. If your host's native format is nchw (for example on Windows) and the model is written for nhwc, with ```--inputs-as-nchw``` tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
#### --opset
By default we use opset 7 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 5``` would create an onnx graph that uses only ops available in opset 5. Because older opsets have in most cases fewer ops, some models might not convert on an older opset.
#### --target
Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
#### --custom-ops
the runtime may support custom ops that are not defined in onnx. A user can ask the converter to map to custom ops by listing them with the --custom-ops option. Tensorflow ops listed here will be mapped to a custom op with the same name as the tensorflow op but in the onnx domain ai.onnx.converters.tensorflow. For example ```--custom-ops Print``` will insert an op ```Print``` in the onnx domain ```ai.onnx.converters.tensorflow``` into the graph. We also support a python api for custom ops, documented later in this readme.
#### --fold_const
when set, the TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option depending on the model.

Usage example (run the following commands in the tensorflow-onnx root directory):

```python -m tf2onnx.convert ...```

Some models specify placeholders with unknown ranks and dims which cannot be mapped to onnx.
In those cases one can add the shape behind the input name in ```[]```, for example ```--inputs X:0[1,28,28,3]```.

### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs

The model developer usually knows the inputs and outputs of the TensorFlow graph; if not, you can find them with TensorFlow's [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) tool, for example:
```
summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
```
### <a name="freeze_graph"></a>Tool to Freeze Graph

The TensorFlow tool to freeze the graph is [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py).

```
python -m tensorflow.python.tools.freeze_graph \
    ...
    --output_graph=tests/models/fc-layers/frozen.pb
```
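
The elided flags above typically name the input graph, checkpoint, and output nodes. A fuller sketch (flag names are those of TensorFlow's freeze_graph tool; paths are illustrative):

```
python -m tensorflow.python.tools.freeze_graph \
    --input_graph=my_model/graph.pb \
    --input_binary=true \
    --input_checkpoint=my_model/model.ckpt \
    --output_node_names=output \
    --output_graph=frozen.pb
```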

## Testing
There are 2 types of tests.

### Unit test
```
python setup.py test
```

### Validate pre-trained TensorFlow models
```
python tests/run_pretrained_models.py
usage: run_pretrained_models.py [-h] [--cache CACHE] [--tests TESTS] [--backend BACKEND] [--verbose] [--debug] [--config yaml-config]
...
You call it for example with:
python tests/run_pretrained_models.py --backend onnxruntime --config tests/run_pretrained_models.yaml --perf perf.csv
```

#### <a name="save_pretrained_model"></a>Tool to save pre-trained model

We provide a [utility](tools/save_pretrained_model.py) to save a pre-trained model along with its config.
Put `save_pretrained_model(sess, outputs, feed_inputs, save_dir, model_name)` in your last testing epoch and the pre-trained model and config will be saved under `save_dir/to_onnx`.
Please refer to the example in [tools/save_pretrained_model.py](tools/save_pretrained_model.py) for more information.
Note the minimum required Tensorflow version is r1.6.
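
As a sketch, the call at the end of your last testing epoch could look like this (the import path and the tensor/feed names are illustrative; the signature is the one quoted above):

```
import tensorflow as tf
from tools.save_pretrained_model import save_pretrained_model

with tf.Session() as sess:
    # ... run your final test epoch ...
    outputs = [sess.graph.get_tensor_by_name("output:0")]
    feed_inputs = {"input:0": test_batch}  # test_batch: your test data (illustrative)
    save_pretrained_model(sess, outputs, feed_inputs, "./save_dir", "my_model")
```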

## Python API Reference

In some cases it will be useful to convert the models from TensorFlow to ONNX from a python script. You can use the following API:

```
import tf2onnx
...
with tf.Session() as sess:
    ...
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```
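
Since the snippet above is abbreviated, here is a fuller sketch of the same flow. It assumes the `tf2onnx.tfonnx.process_tf_graph` entry point described in the next section and a `make_model` helper on the returned graph; the tiny graph and tensor names are illustrative:

```
import tensorflow as tf
import tf2onnx

with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    x_ = tf.add(x, x)
    _ = tf.identity(x_, name="output")
    # convert the TensorFlow graph to an ONNX graph, then serialize it
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(
        sess.graph, input_names=["input:0"], output_names=["output:0"])
    model_proto = onnx_graph.make_model("example")
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```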

### Creating custom op mappings from python
For complex custom ops that require graph rewrites or input / attribute rewrites, using the python interface to insert a custom op will be the easiest way to accomplish the task.
A dictionary of name->custom_op_handler can be passed to tf2onnx.tfonnx.process_tf_graph. If the op name is found in the graph the handler will have access to all internal structures and can rewrite what is needed. For example [examples/custom_op_via_python.py](examples/custom_op_via_python.py):
```
with tf.Session() as sess:
    ...
    f.write(model_proto.SerializeToString())
```
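
A sketch of the shape such a handler can take, loosely following the pattern of examples/custom_op_via_python.py (the handler body, domain registration, and tf.Print rewrite are illustrative, not the exact file contents):

```
import tensorflow as tf
import tf2onnx
from onnx import helper

_TENSORFLOW_DOMAIN = "ai.onnx.converters.tensorflow"

def print_handler(ctx, node, name, args):
    # illustrative: map tf.Print onto Identity, dropping the inputs
    # that only feed the print side effect
    node.type = "Identity"
    node.domain = _TENSORFLOW_DOMAIN
    del node.input[1:]
    return node

with tf.Session() as sess:
    # ... build a graph that uses tf.Print ...
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(
        sess.graph,
        custom_op_handlers={"Print": print_handler},
        extra_opset=[helper.make_opsetid(_TENSORFLOW_DOMAIN, 1)],
        output_names=["output:0"])
```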

## How tf2onnx works
The converter needs to take care of a few things:
1. Convert the protobuf format. Since the format is similar this step is straightforward.
2. TensorFlow types need to be mapped to their ONNX equivalent.
…
6. There are some ops like relu6 that are not supported in ONNX but can be composed out of other ONNX ops.
7. ONNX backends are new and their implementations are not complete yet. For some ops the converter generates ops that deal with issues in existing backends.

#### Step 1 - start with a frozen graph.
tf2onnx starts with a frozen graph. This is because of item 3 above.

#### Step 2 - 1:1 conversion of the protobuf from tensorflow to onnx
tf2onnx first does a simple conversion from the TensorFlow protobuf format to the ONNX protobuf format without looking at individual ops.
We do this so we can use the ONNX graph as internal representation and write helper functions around it.
The code that does the conversion is in tensorflow_to_onnx(). tensorflow_to_onnx() will return the ONNX graph and a dictionary with shape information from TensorFlow. The shape information is helpful in some cases when processing individual ops.
The ONNX graph is wrapped in a Graph object and nodes in the graph are wrapped in a Node object to allow easier graph manipulations on the graph. All code that deals with nodes and graphs is in graph.py.

#### Step 3 - rewrite subgraphs
In the next step we apply graph matching code on the graph to re-write subgraphs for ops like transpose and lstm. For an example, look at rewrite_transpose().
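
For a feel of the pattern-matching style these rewriters use, here is a sketch (the OpTypePattern/GraphMatcher classes follow tf2onnx's graph_matcher module; the pattern itself is illustrative):

```
from tf2onnx.graph_matcher import OpTypePattern, GraphMatcher

# match a Transpose whose permutation comes from a constant
pattern = OpTypePattern("Transpose", inputs=[
    OpTypePattern("*"),
    OpTypePattern("Const"),
])
matcher = GraphMatcher(pattern)
for match in matcher.match_ops(g.get_nodes()):  # g is the wrapped Graph
    transpose_node = match.get_op(pattern)
    # ... rewrite the matched subgraph here ...
```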

#### Step 4 - process individual ops
In the fourth step we look at individual ops that need attention. The dictionary _OPS_MAPPING will map tensorflow op types to a method that is used to process the op. The simplest case is direct_op() where the op can be taken as is. Whenever possible we try to group ops into common processing, for example all ops that require dealing with broadcasting are mapped to broadcast_op(). For an op that composes the tensorflow op from multiple onnx ops, see relu6_op().
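
The relu6 case illustrates the idea: relu6(x) = min(max(x, 0), 6), so it can be composed from ONNX Relu plus a clamp at 6. A sketch of such a mapping (make_const/make_node follow the Graph wrapper described in step 2; rewiring of downstream consumers is elided, so treat this as illustrative rather than the actual relu6_op()):

```
import numpy as np

def relu6_op(ctx, node, name, args):
    # reuse the node as Relu, then clamp its output at 6 with Min
    node.type = "Relu"
    six = ctx.make_const(name + "_six", np.array([6.0], dtype=np.float32))
    min_node = ctx.make_node("Min", [node.output[0], six.output[0]])
    return min_node  # consumers must be rewired to min_node's output
```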

#### Step 5 - final processing
Once all ops are converted, we need to do a topological sort since ONNX requires it. process_tf_graph() is the method that takes care of all the above steps.

## Extending tf2onnx
If you'd like to contribute and add new conversions to tf2onnx, the process is something like:
1. See if the op fits into one of the existing mappings. If so, adding it to _OPS_MAPPING is all that is needed.
2. If the new op needs extra processing, start a new mapping function.
3. If the tensorflow op is composed of multiple ops, consider using a graph re-write. While this might be a little harder initially, it works better for complex patterns.
4. Add a unit test in tests/test_backend.py (see the sketch below). The unit tests mostly create the tensorflow graph, run it and capture the output, then convert to onnx, run against an onnx backend and compare tensorflow and onnx results.
5. If there are pre-trained models that use the new op, consider adding those to test/run_pretrained_models.py.
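
For item 4, such a test might look like the following sketch of a method on the test class (the `_run_test_case` helper name is an assumption based on the existing tests; see tests/test_backend.py for the real fixtures):

```
import numpy as np
import tensorflow as tf

def test_relu6(self):
    # build a tiny graph exercising the op, then let the helper run
    # tensorflow, convert to onnx, run the backend and compare results
    x_val = np.array([-1.0, 0.5, 7.0], dtype=np.float32)
    x = tf.placeholder(tf.float32, [3], name="input")
    _ = tf.nn.relu6(x, name="output")
    self._run_test_case(["output:0"], {"input:0": x_val})
```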

## License

[MIT License](LICENSE)