Describe the issue
ApplyOnnxModel throws when the model has a sequence<map<int64,float32>> output.
If I use InferenceSession.Run directly instead, it works.
I expect ApplyOnnxModel to support this output type because of
#156
https://blog.hompus.nl/2020/09/25/get-all-prediction-scores-from-your-onnx-model-with-ml-net/
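For reference, a minimal sketch (assuming the simple_model.onnx produced by the export script below) that dumps the model's output metadata with the onnxruntime C# API; for this model, output_label should be reported as a tensor while output_probability should be reported as a sequence (of maps), i.e. the type that ApplyOnnxModel rejects below:

using System;
using Microsoft.ML.OnnxRuntime;

// Sketch: print each model output and its ONNX value type.
// "output_label" is expected to be a tensor, "output_probability" a sequence of maps.
using var session = new InferenceSession("simple_model.onnx");
foreach (var kv in session.OutputMetadata)
{
    Console.WriteLine($"{kv.Key}: {kv.Value.OnnxValueType}");
}

The model export script (Python) and the full C# repro follow.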
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
# Generate a simple binary classification dataset
X, y = make_classification(n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train a logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)
# Convert the model to ONNX format
initial_type = [('float_input', FloatTensorType([None, 4]))] # Adjust the shape [None, 4] as necessary
onnx_model = convert_sklearn(model, initial_types=initial_type)
# Save the model to disk
with open("simple_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

using System;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
using Microsoft.ML.Transforms.Onnx;

namespace mm;

class Program
{
    static void Main(string[] args)
    {
        var mlContext = new MLContext();
        var dataView = mlContext.Data.LoadFromEnumerable(Array.Empty<InputData>());

        // Works fine
        using var session = new InferenceSession("simple_model.onnx");
        var container = new List<NamedOnnxValue>();
        var tensorIn = new DenseTensor<float>(new[] { 0.1f, 0.2f, 0.3f, 0.4f }, new int[] { 1, 4 });
        var nov = NamedOnnxValue.CreateFromTensor("float_input", tensorIn);
        container.Add(nov);
        using var outputs = session.Run(container);
        var outNode1 = outputs.ElementAt(0);
        var ten = outNode1.AsTensor<long>();
        var outNode2 = outputs.ElementAt(1);
        var seq = outNode2.AsEnumerable<NamedOnnxValue>();
        var map = seq.First().AsDictionary<long, float>();
        Console.WriteLine($"Predicted value: {map[ten[0]]}");

        // Should work but throws
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            modelFile: "simple_model.onnx",
            outputColumnNames: ["output_label", "output_probability"],
            inputColumnNames: ["float_input"]);
        var model = pipeline.Fit(dataView);
        var predictionEngine = mlContext.Model.CreatePredictionEngine<InputData, OutputData>(model);
        var prediction = predictionEngine.Predict(new InputData { FloatInput = [0.1f, 0.2f, 0.3f, 0.4f] });
        Console.WriteLine($"Predicted value: {prediction.Probabilities.First()[prediction.Labels[0]]}");
    }

    public class InputData
    {
        [VectorType(4)] // Match the number of features used by the model
        public float[] FloatInput { get; set; }
    }

    public class OutputData
    {
        [ColumnName("output_label")]
        public long[] Labels { get; set; }

        [ColumnName("output_probability")]
        [OnnxSequenceType(typeof(IDictionary<long, float>))]
        public IEnumerable<IDictionary<long, float>> Probabilities { get; set; }
    }
}

Urgency
No response
Target platform
Any
Build script
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <PublishAot>false</PublishAot>
    <InvariantGlobalization>true</InvariantGlobalization>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.ML" Version="3.0.1" />
    <PackageReference Include="Microsoft.ML.OnnxRuntime" Version="1.17.0" />
    <PackageReference Include="Microsoft.ML.OnnxTransformer" Version="3.0.1" />
  </ItemGroup>

</Project>

Error / output
Exception has occurred: CLR/System.InvalidOperationException
An unhandled exception of type 'System.InvalidOperationException' occurred in Microsoft.ML.OnnxTransformer.dll: 'Error initializing model :Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:Fail] OnnxValueType must either be a tensor or sparse tensor
at Microsoft.ML.OnnxRuntime.NodeMetadata.CheckTensor()
at Microsoft.ML.OnnxRuntime.NodeMetadata.get_Dimensions()
at Microsoft.ML.Transforms.Onnx.OnnxModel.GetOnnxVariablesFromMetadata(IReadOnlyDictionary`2 nodeMetadata, IDictionary`2 shapeDictionary, Dictionary`2 typePool, Dictionary`2 casterPool)
at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)'
Inner exceptions found, see $exception in variables window for more details.
Innermost exception Microsoft.ML.OnnxRuntime.OnnxRuntimeException : [ErrorCode:Fail] OnnxValueType must either be a tensor or sparse tensor
at Microsoft.ML.OnnxRuntime.NodeMetadata.CheckTensor()
at Microsoft.ML.OnnxRuntime.NodeMetadata.get_Dimensions()
at Microsoft.ML.Transforms.Onnx.OnnxModel.GetOnnxVariablesFromMetadata(IReadOnlyDictionary`2 nodeMetadata, IDictionary`2 shapeDictionary, Dictionary`2 typePool, Dictionary`2 casterPool)
at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)
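Based on the stack trace, OnnxTransformer appears to read NodeMetadata.Dimensions for every model output while initializing, and with the packages above that getter throws for the non-tensor output_probability output. A minimal sketch (same model and packages assumed) that reproduces the failing call outside ML.NET:

using System;
using Microsoft.ML.OnnxRuntime;

// Sketch: reading Dimensions on the metadata of the sequence<map<int64,float32>>
// output is expected to throw the same "OnnxValueType must either be a tensor or
// sparse tensor" error that OnnxTransformer surfaces during Fit.
using var session = new InferenceSession("simple_model.onnx");
var probMeta = session.OutputMetadata["output_probability"];
try
{
    var dims = probMeta.Dimensions;
    Console.WriteLine($"dims: [{string.Join(", ", dims)}]");
}
catch (OnnxRuntimeException ex)
{
    Console.WriteLine(ex.Message);
}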
Visual Studio Version
No response
GCC / Compiler Version
No response