Commit 1942e49

committed: +readme and model gen
1 parent ea7404d commit 1942e49

File tree

2 files changed: 126 additions, 18 deletions

README.md

Lines changed: 27 additions & 18 deletions

@@ -33,15 +33,15 @@ onnxruntime (only available on linux):

```pip install onnxruntime```

-For caffe2, follow the instructions here:
+For pytorch/caffe2, follow the instructions here:

-```https://caffe2.ai/```
+```https://pytorch.org/```


-We tested with caffe2 and onnxruntime and unit tests are passing for those.
+We tested with pytorch/caffe2 and onnxruntime, and unit tests are passing for those.

## Supported Tensorflow and Python Versions
-We tested with tensorflow 1.5-1.11 and anaconda **3.5,3.6**.
+We tested with tensorflow 1.5-1.13 and anaconda **3.5,3.6**.

# Installation
## From Pypi

@@ -64,13 +64,17 @@ python setup.py bdist_wheel

# Usage

-To convert a TensorFlow model, tf2onnx expects a ```frozen TensorFlow graph``` and the user needs to specify inputs and outputs for the graph by passing the input and output
+To convert a TensorFlow model, tf2onnx prefers a ```frozen TensorFlow graph``` and the user needs to specify inputs and outputs for the graph by passing the input and output
names with ```--inputs INPUTS``` and ```--outputs OUTPUTS```.

```
-python -m tf2onnx.convert --input SOURCE_FROZEN_GRAPH_PB
-    --inputs SOURCE_GRAPH_INPUTS
-    --outputs SOURCE_GRAPH_OUTPUTS
+python -m tf2onnx.convert
+    --input SOURCE_GRAPHDEF_PB
+    --graphdef SOURCE_GRAPHDEF_PB
+    --checkpoint SOURCE_CHECKPOINT
+    --saved-model SOURCE_SAVED_MODEL
+    [--inputs GRAPH_INPUTS]
+    [--outputs GRAPH_OUTPUTS]
    [--inputs-as-nchw inputs_provided_as_nchw]
    [--target TARGET]
    [--output TARGET_ONNX_GRAPH]

@@ -83,21 +87,26 @@ python -m tf2onnx.convert --input SOURCE_FROZEN_GRAPH_PB
```

## Parameters
-### input
-frozen TensorFlow graph, which can be created with the [freeze graph tool](#freeze_graph).
-### output
+### --input or --graphdef
+TensorFlow model as a graphdef file. If not already frozen, we'll try to freeze the model.
+More information about freezing can be found here: [freeze graph tool](#freeze_graph).
+### --checkpoint
+TensorFlow model as a checkpoint. We expect the path to the .meta file. tf2onnx will try to freeze the graph.
+### --saved-model
+TensorFlow model as a saved_model. We expect the path to the saved_model directory. tf2onnx will try to freeze the graph.
+### --output
the target onnx file path.
-### inputs, outputs
-Tensorflow graph's input/output names, which can be found with [summarize graph tool](#summarize_graph). Those names typically end on ```:0```, for example ```--inputs input0:0,input1:0```
-### inputs-as-nchw
+### --inputs, --outputs
+Tensorflow model's input/output names, which can be found with the [summarize graph tool](#summarize_graph). Those names typically end with ```:0```, for example ```--inputs input0:0,input1:0```. inputs and outputs are ***not*** needed for models in saved-model format.
+### --inputs-as-nchw
By default we preserve the image format of inputs (nchw or nhwc) as given in the TensorFlow model. If your host's native format is nchw (for example windows) and the model is written for nhwc, ```--inputs-as-nchw``` tells tensorflow-onnx to transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
-### target
+### --target
Some runtimes need workarounds, for example because they don't support all types given in the onnx spec. In some cases we work around this by generating a different graph. Those workarounds are activated with ```--target TARGET```.
-### opset
+### --opset
By default we use the newest opset 7 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 5``` would create an onnx graph that uses only ops available in opset 5. Because older opsets in most cases have fewer ops, some models might not convert on an older opset.
-### custom-ops
+### --custom-ops
The runtime may support custom ops that are not defined in onnx. A user can ask the converter to map to custom ops by listing them with the --custom-ops option. Tensorflow ops listed here will be mapped to a custom op with the same name as the tensorflow op, but in the onnx domain ai.onnx.converters.tensorflow. For example: ```--custom-ops Print``` will insert an op ```Print``` in the onnx domain ```ai.onnx.converters.tensorflow``` into the graph. We also support a python api for custom ops, documented later in this readme.
-### fold_const
+### --fold_const
When set, the TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option, depending on the model.
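For illustration, a typical invocation using the flags described above could look like the following. This is a sketch, not part of the commit: ```graph.pb```, ```input0:0```, ```output0:0``` and ```graph.onnx``` are placeholder names.

```
# hypothetical frozen graphdef with one image input and one output
python -m tf2onnx.convert --input graph.pb \
    --inputs input0:0 --outputs output0:0 \
    --inputs-as-nchw input0:0 \
    --opset 7 --fold_const \
    --output graph.onnx
```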

tests/make_models.py

Lines changed: 99 additions & 0 deletions

@@ -0,0 +1,99 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT license.

"""Make a simple test model in all tensorflow formats."""

from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import os

import numpy as np
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants

# pylint: disable=missing-docstring

# Parameters
learning_rate = 0.02
training_epochs = 100

# Training Data
train_X = np.array(
    [3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = np.array(
    [1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
test_X = np.array([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
test_Y = np.array([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])


def freeze_session(sess, keep_var_names=None, output_names=None, clear_devices=True):
    """Freezes the state of a session into a pruned computation graph."""
    output_names = [i.replace(":0", "") for i in output_names]
    graph = sess.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(sess, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph


def train(model_path):
    n_samples = train_X.shape[0]

    # tf Graph Input
    X = tf.placeholder(tf.float32, name="X")
    Y = tf.placeholder(tf.float32, name="Y")

    # Set model weights
    W = tf.Variable(np.random.randn(), name="W")
    b = tf.Variable(np.random.randn(), name="b")

    # simple linear regression: pred = X * W + b
    pred = tf.add(tf.multiply(X, W), b)
    pred = tf.identity(pred, name="pred")
    cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    saver = tf.train.Saver()

    # Launch the graph
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Fit all training data
        for _ in range(training_epochs):
            for (x, y) in zip(train_X, train_Y):
                sess.run(optimizer, feed_dict={X: x, Y: y})
        training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
        testing_cost = sess.run(cost, feed_dict={X: test_X, Y: test_Y})
        print("train_cost={}, test_cost={}, diff={}"
              .format(training_cost, testing_cost, abs(training_cost - testing_cost)))

        # save as checkpoint (model.meta, model.index, ...)
        p = os.path.abspath(os.path.join(model_path, "checkpoint"))
        os.makedirs(p, exist_ok=True)
        p = saver.save(sess, os.path.join(p, "model"))

        # save as frozen graphdef
        frozen_graph = freeze_session(sess, output_names=["pred:0"])
        p = os.path.abspath(os.path.join(model_path, "graphdef"))
        tf.train.write_graph(frozen_graph, p, "frozen.pb", as_text=False)

        # save as saved_model
        p = os.path.abspath(os.path.join(model_path, "saved_model"))
        tf.saved_model.simple_save(sess, p, inputs={"X": X}, outputs={"pred": pred})


train("models/regression")
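
Assuming the script above has been run from the repository root, the models it writes under ```models/regression``` (input ```X:0```, output ```pred:0```) could then be fed to the converter in each of the three formats. A sketch; the .onnx output names are placeholders:

```
# frozen graphdef written by make_models.py
python -m tf2onnx.convert --graphdef models/regression/graphdef/frozen.pb \
    --inputs X:0 --outputs pred:0 --output regression_graphdef.onnx

# checkpoint: pass the path to the .meta file
python -m tf2onnx.convert --checkpoint models/regression/checkpoint/model.meta \
    --inputs X:0 --outputs pred:0 --output regression_checkpoint.onnx

# saved_model: --inputs/--outputs are not needed for this format
python -m tf2onnx.convert --saved-model models/regression/saved_model \
    --output regression_saved_model.onnx
```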

0 commit comments
