
Commit 3c3b298

Raising of the recursion limit for ONNX model loading. (Issue #5585) (#5796)

* Raised the recursion limit used when creating the CodedInputStream in the OnnxTransformer from 10 to 100 (the default value in Google.Protobuf). Otherwise some models, e.g. a TF2 EfficientDet export, cannot be loaded. A minimal sketch of this loading pattern follows the commit message below.

* Updated arcade to the latest version (#5783)

* updated arcade to the latest version

* updated eng/common correctly

* Fixed benchmark test.

* Use dotnet certificate (#5794)

* Use dotnet certificate

* Update 3.1 SDK

Co-authored-by: Prashanth Govindarajan <prgovi@microsoft.com>
Co-authored-by: Michael Sharp <51342856+michaelgsharp@users.noreply.github.com>

* Arm build changes (#5789)

* arm testing

* initial commit with build working on arm64

* windows changes

* build fixes for arm/arm64 with cross compilation

* cross build instructions added

* renamed arm to Arm. Changed TargetArchitecture to default to OS architecture

* fixed some formatting

* fixed capitalization

* fixed Arm capitalization

* Fix cross-compilation if statement

* building on apple silicon

* removed non build related files

* Changes from PR comments. Removal of FastTreeNative flag.

* Changes from pr comments.

* Fixes from PR comments.

* Changed how we are excluding files.

* Onnx load model (#5782)

* fixed onnx temp model deleting

* random file path fixed

* updates from pr

* Changes from PR comments.

* Changed how auto ml caches.

* PR fixes.

* Update src/Microsoft.ML.AutoML/API/ExperimentSettings.cs

Co-authored-by: Eric Erhardt <eric.erhardt@microsoft.com>

* Tensorflow fixes from PR comments

* fixed filepath issues

Co-authored-by: Eric Erhardt <eric.erhardt@microsoft.com>

Co-authored-by: Michael Sharp <51342856+michaelgsharp@users.noreply.github.com>
Co-authored-by: Matt Mitchell <mmitche@microsoft.com>
Co-authored-by: Prashanth Govindarajan <prgovi@microsoft.com>
Co-authored-by: Eric Erhardt <eric.erhardt@microsoft.com>
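A rough sketch of the pattern this commit changes (not the exact OnnxTransformer code): parsing an ONNX ModelProto through a CodedInputStream whose recursion limit is raised from the previous hard-coded 10 to protobuf's default of 100. The file path is a placeholder, and `Onnx.ModelProto` assumes C# bindings generated from onnx.proto; ML.NET itself goes through its internal `OnnxCSharpToProtoWrapper`.

```csharp
using System;
using System.IO;
using Google.Protobuf;
using Onnx; // assumed: ModelProto bindings generated from onnx.proto

// Keep the main file stream's sharing semantics and let the CodedInputStream wrap it.
// sizeLimit = Int32.MaxValue (large models), recursionLimit = 100 (protobuf's default)
// instead of the old hard-coded 10, which deeply nested graphs such as a TF2
// EfficientDet export exceed.
using (var modelStream = new FileStream("model.onnx", FileMode.Open, FileAccess.Read,
                                        FileShare.Delete | FileShare.Read))
using (var codedStream = CodedInputStream.CreateWithLimits(modelStream, int.MaxValue, 100))
{
    var model = ModelProto.Parser.ParseFrom(codedStream);
    Console.WriteLine($"Parsed ONNX model, IR version {model.IrVersion}");
}
```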
5 people authored May 27, 2021
1 parent 7fafbf3 commit 3c3b298
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion src/Microsoft.ML.OnnxTransformer/OnnxUtils.cs
@@ -204,7 +204,7 @@ public OnnxModel(string modelFile, int? gpuDeviceId = null, bool fallbackToCpu =

// The CodedInputStream auto closes the stream, and we need to make sure that our main stream stays open, so creating a new one here.
using (var modelStream = new FileStream(modelFile, FileMode.Open, FileAccess.Read, FileShare.Delete | FileShare.Read))
-using (var codedStream = Google.Protobuf.CodedInputStream.CreateWithLimits(modelStream, Int32.MaxValue, 10))
+using (var codedStream = Google.Protobuf.CodedInputStream.CreateWithLimits(modelStream, Int32.MaxValue, 100))
model = OnnxCSharpToProtoWrapper.ModelProto.Parser.ParseFrom(codedStream);

// Parse actual input and output types stored in the loaded ONNX model to get their DataViewType's.
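For context rather than as part of the diff: a typical ML.NET entry point that exercises this code path is ApplyOnnxModel, which parses the model file through the CodedInputStream shown above. The model file and column names below are hypothetical; with the limit at 100, deeply nested models such as a TF2 EfficientDet export should now load.

```csharp
using Microsoft.ML;

var mlContext = new MLContext();

// Hypothetical file and column names; ApplyOnnxModel loads the ONNX file through
// OnnxUtils.cs, so it benefits from the raised recursion limit.
var pipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnNames: new[] { "detection_boxes" },
    inputColumnNames: new[] { "images" },
    modelFile: "efficientdet_d0.onnx");
```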
