Contributing models to modelhub
This wiki page aims to help contributors standardize file formats before contributing models. Contributions should use the ONNX binary protobuf format (.proto), which contains both the network architecture and the parameters. Pick the original platform you used below and follow the instructions. We found the official ONNX Docker image to be the most convenient environment for these conversions. Here is a summary of the platforms supported so far:
| platform | effort |
|---|---|
| torch | export .t7 to PyTorch first, then to ONNX protobuf |
| pytorch | export to ONNX protobuf directly |
| caffe | export .caffemodel to Caffe2 first, then to ONNX protobuf |
| caffe2 | export to ONNX protobuf directly |
| CNTK | export to ONNX protobuf directly |
| tensorflow/keras | in the works, experimental |
| theano/lasagne | in the works |
ONNX is still in early development, so some operations in your model may not be supported yet (for example, 3D operations). Please consult the ONNX repository for the latest operator coverage.
Having trouble converting torch->pytorch or caffe->caffe2? In that case, we suggest rebuilding your architecture in pytorch/caffe2 and then loading in the weights and biases.
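If you go the rebuild route, the weight transfer can be sketched as follows. This is a minimal PyTorch illustration; `RebuiltNet` and `legacy_weights` are hypothetical stand-ins for your actual architecture and for the tensors recovered from the original checkpoint:

```python
import torch
import torch.nn as nn

# Hypothetical rebuilt architecture -- replace with your actual layers.
class RebuiltNet(nn.Module):
    def __init__(self):
        super(RebuiltNet, self).__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = RebuiltNet()

# Suppose `legacy_weights` holds the tensors recovered from the original
# .t7 / .caffemodel file, keyed by the rebuilt module's parameter names.
legacy_weights = {
    "fc.weight": torch.randn(2, 4),
    "fc.bias": torch.randn(2),
}

# Copy the recovered weights into the rebuilt model.
model.load_state_dict(legacy_weights)
```

The key step is matching the recovered tensors to the parameter names of the rebuilt module; `load_state_dict` will complain about any missing or mismatched keys, which is a useful sanity check in itself.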
Contributors are responsible for ensuring model integrity stays intact when converting from one platform to another or to the ONNX protobuf - including accuracy and loss rates.
- When converting from one platform to another, rerun inference on your data and verify that performance matches that of your original model.
- When converting to the ONNX protobuf, run the following checks, then load the protobuf into a platform of your choice and rerun inference to confirm performance is maintained.
```python
import onnx

# Load the exported model and validate its structure.
model = onnx.load("onnx_model.proto")
onnx.checker.check_model(model)

# Print a human-readable summary of the graph.
print(onnx.helper.printable_graph(model.graph))
```
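One simple way to carry out the inference comparison described above is to collect per-sample outputs from the original and converted models and check that they agree within a numerical tolerance. A minimal NumPy-only sketch (the `original`/`converted` arrays below are dummy stand-ins for real inference results):

```python
import numpy as np

def outputs_match(original_outputs, converted_outputs, rtol=1e-4, atol=1e-5):
    """Return True if every converted output agrees with the original
    within the given relative/absolute tolerances."""
    return all(
        np.allclose(o, c, rtol=rtol, atol=atol)
        for o, c in zip(original_outputs, converted_outputs)
    )

# Dummy predictions standing in for real inference results.
original = [np.array([0.1, 0.9]), np.array([0.7, 0.3])]
converted = [np.array([0.1000001, 0.8999999]), np.array([0.7, 0.3])]
print(outputs_match(original, converted))  # prints True
```

Exact bit-for-bit equality is usually too strict across frameworks, so comparing within a small tolerance (and reporting the worst-case deviation) is the more practical check.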
Torch: convert to a PyTorch model first, then jump to the PyTorch section. Use one of the following methods:

Method A: use load_lua with PyTorch. source
```python
import torch
from torch.utils.serialization import load_lua

# Load the legacy Torch7 checkpoint as a PyTorch object.
pytorch_model = load_lua('torch_model.t7')
```
Method B: use convert_torch_to_pytorch. source

PyTorch: export the model using torch.onnx.export. You must provide dummy input data with the same shape as a real input. source
```python
import torch
import torch.onnx
from torch.autograd import Variable

# Dummy input with the same shape as a real input batch.
dummy_input = Variable(torch.randn(<tensor size>))
torch.onnx.export(your_model, dummy_input, "onnx_model.proto", verbose=True)
```
Caffe: convert to a Caffe2 model first, then jump to the Caffe2 section, using the translators provided by Caffe2. source

Caffe2: use the onnx_caffe2 library, which provides a Caffe2 frontend for ONNX. source
```python
import onnx_caffe2.frontend as c2_onnx
from caffe2.proto import caffe2_pb2

# Read the serialized Caffe2 network definition.
c2_net = caffe2_pb2.NetDef()
with open(c2_model_file, 'rb') as f:
    c2_net.ParseFromString(f.read())

# Translate the Caffe2 net into an ONNX graph.
onnx_graph = c2_onnx.caffe2_net_to_onnx_graph(c2_net)
```
CNTK: save the model directly in the ONNX format. After saving, the model should be reloadable with C.Function.load using the same ModelFormat.ONNX flag, which makes a quick round-trip check possible. source
```python
import cntk as C

x = C.input_variable(<input shape>)
z = create_model(x)

# Serialize the trained function directly to an ONNX protobuf.
z.save("onnx_model.proto", format=C.ModelFormat.ONNX)
```