ML.NET rolling and fixed version CI legs #988
Conversation
…er time (and make it a correct name)
# Conflicts:
#	azure-pipelines.yml
#	src/benchmarks/real-world/Microsoft.ML.Benchmarks/HashBench.cs
#	src/benchmarks/real-world/Microsoft.ML.Benchmarks/Microsoft.ML.Benchmarks.csproj
@DrewScoggins could you please take a look?
@@ -92,9 +93,9 @@ jobs:
    ${{ each framework in parameters.frameworks }}:
      ${{ framework }}:
        _Framework: ${{ framework }}
-       _Configs: CompilationMode=Tiered RunKind="${{ parameters.kind }}"
+       _Configs: CompilationMode=Tiered RunKind="${{ parameters.kind }}${{ parameters.libraryVersion }}"
I don't know how the new uploader works, so I've just extended the RunKind string with fixed|rolling. This parameter is optional and null by default, so the "id" should change only for ML.NET runs.
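A minimal sketch of how the run identifier composes, assuming the template simply concatenates parameters.kind and parameters.libraryVersion as in the diff above (run_kind and the sample kind values are hypothetical names, not part of the pipeline):

```python
def run_kind(kind: str, library_version: str = "") -> str:
    """Compose the RunKind identifier by concatenating the kind with the
    optional library version suffix, mirroring the .yml template's
    ${{ parameters.kind }}${{ parameters.libraryVersion }} expression."""
    return f"{kind}{library_version}"

# Legs that pass no libraryVersion keep their existing identifier:
print(run_kind("micro"))             # micro
# ML.NET legs get a distinct identifier per config:
print(run_kind("mlnet", "fixed"))    # mlnetfixed
print(run_kind("mlnet", "rolling"))  # mlnetrolling
```

Because the default is the empty string, every non-ML.NET leg produces the same identifier as before the change.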
This seems like the right way to go about this.
@@ -213,6 +220,10 @@ def __main(args: list) -> int:
        target_framework_monikers
    )

+   # set the product version using Env Vars handled by the MSBuild project file
+   if args.lib_version:
+       os.environ['PERFLAB_LIBRARY_VERSION'] = args.lib_version
Can we instead pass it in through /p on the command line?
BDN also performs dotnet restore; I would have to extend BDN to be able to parse /p parameters. With env vars it's simpler: we just set an ugly env var, and the child processes get the same settings out of the box.
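To illustrate the point being made here: an environment variable set in the parent process is inherited by every child process automatically, which is why no argument plumbing through BDN is needed. A small self-contained sketch (the child Python process stands in for BDN's dotnet restore; the version string is just a sample value):

```python
import os
import subprocess
import sys

# Set the variable once in the parent process, as the patched script does.
os.environ['PERFLAB_LIBRARY_VERSION'] = '1.5.0-preview'

# Any child process spawned afterwards inherits the parent's environment,
# so it sees the variable without any command-line argument parsing.
child = subprocess.run(
    [sys.executable, '-c',
     "import os; print(os.environ['PERFLAB_LIBRARY_VERSION'])"],
    capture_output=True, text=True, check=True)
print(child.stdout.strip())  # 1.5.0-preview
```

MSBuild reads environment variables as properties by default, so a project file can consume PERFLAB_LIBRARY_VERSION without any extra wiring.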
So if I understand what is going on here, we still use core-sdk as the catalyst for triggering runs, even on runs where .NET Core is not moving. So we will not really have any way to link back to the version of ML.NET that was running. Maybe we can add the version of ML.NET that we are testing as a config flag for the run? Otherwise I wonder how actionable this data will be.
@DrewScoggins - after today's discussion, should we re-evaluate this? Also, is it possible to update the ML.NET benchmarks to the latest https://github.com/dotnet/machinelearning/tree/main/test/Microsoft.ML.PerformanceTests?
I am happy to update to the latest and run them. Is there anything other than that I need to do? I'm not super familiar with this issue.
@adamsitnik can chime in, but I want to make sure that we remove all the project references, add back NuGet references, and then make sure that all the benchmarks run. If you can get to that, I can verify whether they run on windows/arm64 as well.
The existing ML.NET benchmarks in this repo use a hardcoded version of ML.NET in order to catch regressions in .NET itself. The purpose of this PR was to add another CI leg that uses the latest ML.NET version to detect regressions in ML.NET itself. @kunalspathak I assume you would be interested in updating the benchmarks and the ML.NET version, and in using the benchmarks as part of our automated regression detection suite?
That's right. It is part of our ongoing goal of having real-world benchmarks to measure .NET performance. I am also working on adding ImageSharp to this suite in #2149. So help me understand: with the current state, if we just copy the latest code from https://github.com/dotnet/machinelearning/tree/main/test/Microsoft.ML.PerformanceTests and use NuGet references to the latest ML.NET, we should be good, and this PR is not blocking it, yes? Of course, we certainly want to track ML.NET performance, and in order to do that, we need this PR.
Yes, that is correct.
In #896 we agreed that it would be good to have two different configs for ML.NET: a fixed version (to catch regressions in .NET itself) and a rolling latest version (to catch regressions in ML.NET itself).
This PR tries to implement that by doing the following:
- extends the .yml template with an optional libraryVersion parameter

I could not read the latest ML.NET version from Maestro because ML.NET is not using Arcade, so I've used an asterisk to obtain the latest preview version.
This solution is simple, but requires a version bump after ML.NET increases its major|minor version number (a few times a year) due to NuGet/Home#912.