
ML.NET rolling and fixed version CI legs #988

Closed · wants to merge 14 commits

Conversation

@adamsitnik (Member) commented Oct 31, 2019

In #896 we agreed that it would be good to have two different configs for ML.NET:

  • one that uses latest .NET Core and fixed ML.NET to detect regressions in .NET Core
  • one that uses latest version of ML.NET and LTS .NET Core to detect regressions in ML.NET

This PR tries to implement that by doing the following:

  • extending the benchmarks .yml template with an optional libraryVersion parameter
  • defining two jobs for ML.NET (see the sketch after this list):
    • fixed ML.NET, latest .NET Core
    • latest ML.NET, LTS .NET Core
  • extending the Python script used by CI to set an env var that controls the version
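
As a rough sketch of what the two legs could look like in the pipeline yml (the template path, kind value, and job layout below are illustrative placeholders, not the actual contents of this repo):

    # hypothetical pipeline snippet; names and paths are placeholders
    - template: /eng/performance/benchmark_jobs.yml
      parameters:
        kind: mlnet
        libraryVersion: fixed      # pinned ML.NET against the latest .NET Core
    - template: /eng/performance/benchmark_jobs.yml
      parameters:
        kind: mlnet
        libraryVersion: rolling    # latest ML.NET against an LTS .NET Core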

I could not read the latest ML.NET version from Maestro because ML.NET does not use Arcade, so I've used an asterisk wildcard to obtain the latest preview version:

<LibraryVersion Condition="'$(PERFLAB_LIBRARY_VERSION)' == 'rolling'">1.5.0-*</LibraryVersion>

This solution is simple, but due to NuGet/Home#912 it requires a manual version bump whenever ML.NET increases its major or minor version number (a few times a year).
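
For context, a fuller sketch of how the fixed and rolling conditions could pair up in the project file (the pinned version number 1.4.0 below is an assumption for illustration, not the value actually checked in):

    <!-- hypothetical sketch of the fixed/rolling switch; 1.4.0 is a placeholder -->
    <PropertyGroup>
      <!-- fixed leg (default): pinned ML.NET, exercised against the latest .NET Core -->
      <LibraryVersion Condition="'$(PERFLAB_LIBRARY_VERSION)' != 'rolling'">1.4.0</LibraryVersion>
      <!-- rolling leg: the wildcard floats to the newest 1.5.0 preview at restore time -->
      <LibraryVersion Condition="'$(PERFLAB_LIBRARY_VERSION)' == 'rolling'">1.5.0-*</LibraryVersion>
    </PropertyGroup>
    <ItemGroup>
      <PackageReference Include="Microsoft.ML" Version="$(LibraryVersion)" />
    </ItemGroup>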

@adamsitnik adamsitnik changed the title [WIP] Ml.NET rolling and fixed version CI legs [WIP] ML.NET rolling and fixed version CI legs Nov 4, 2019
@adamsitnik adamsitnik changed the title [WIP] ML.NET rolling and fixed version CI legs ML.NET rolling and fixed version CI legs Nov 5, 2019
@adamsitnik (Member, Author)

@DrewScoggins could you please take a look?

@adamsitnik adamsitnik marked this pull request as ready for review November 5, 2019 11:47
@@ -92,9 +93,9 @@ jobs:
   ${{ each framework in parameters.frameworks }}:
     ${{ framework }}:
       _Framework: ${{ framework }}
-      _Configs: CompilationMode=Tiered RunKind="${{ parameters.kind }}"
+      _Configs: CompilationMode=Tiered RunKind="${{ parameters.kind }}${{ parameters.libraryVersion }}"
@adamsitnik (Member, Author):

I don't know how the new uploader works, so I've just extended the RunKind string with fixed|rolling. This parameter is optional and null by default, so the "id" should change only for ML.NET runs.

(Member):

This seems like the right way to go about this.

@@ -213,6 +220,10 @@ def __main(args: list) -> int:
         target_framework_monikers
     )

+    # set the product version using Env Vars handled by the MSBuild project file
+    if args.lib_version:
+        os.environ['PERFLAB_LIBRARY_VERSION'] = args.lib_version
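
For completeness, the script also has to expose the new option; a minimal sketch of what that argument definition could look like (only args.lib_version is confirmed by the diff above — the flag spelling and help text are assumptions):

    # hypothetical argparse wiring for the new option
    parser.add_argument(
        '--library-version',
        dest='lib_version',
        required=False,
        default=None,
        type=str,
        help='Sets PERFLAB_LIBRARY_VERSION to pick the fixed or rolling library version.')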
(Member):

Can we instead pass it in through /p on the command line?

@adamsitnik (Member, Author):

BDN also performs dotnet restore; I would have to extend BDN to be able to parse /p parameters. With env vars it's simpler: we just set one (admittedly ugly) env var, and the child processes get the same setting out of the box.
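
A minimal sketch of the mechanism being relied on here: environment variables set in the driver process are inherited by every child process it spawns, so the restore that BDN performs sees the value without any argument plumbing (the dotnet command line below is illustrative):

    import os
    import subprocess

    # set once in the CI driver...
    os.environ['PERFLAB_LIBRARY_VERSION'] = 'rolling'

    # ...and every child process (the dotnet restore/build/run launched by BDN)
    # inherits it automatically, with no /p: forwarding required
    subprocess.run(['dotnet', 'run', '-c', 'Release'], check=True)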

@DrewScoggins (Member) left a comment:

So, if I understand what is going on here, we still use core-sdk as the catalyst for triggering runs, even on runs where .NET Core is not moving. So we will not really have any way to link back to which version of ML.NET was running. Maybe we can add the version of ML.NET that we are testing as a config flag for the run? Otherwise I wonder how actionable this data will be.

@adamsitnik adamsitnik mentioned this pull request Nov 7, 2019
@adamsitnik adamsitnik closed this Apr 24, 2020
@kunalspathak (Member)

@DrewScoggins - after our discussion today, should we re-evaluate this?

Also, is it possible to update the ML.NET benchmarks to the latest https://github.com/dotnet/machinelearning/tree/main/test/Microsoft.ML.PerformanceTests?

@danmoseley (Member)

cc @michaelgsharp

@michaelgsharp (Member)

I am happy to update to the latest and run them. Is there anything other than that I need to do? I'm not super familiar with this issue.

@kunalspathak (Member)

@adamsitnik can chime in, but I want to make sure that we remove all the project references, add back NuGet references, and then make sure that all the benchmarks run. If you can get to that, I can verify whether they run on windows/arm64 as well.

@adamsitnik (Member, Author)

The existing ML.NET benchmarks in this repo use a hardcoded version of ML.NET in order to catch regressions in .NET itself.

The purpose of this PR was to add another CI leg that would use the latest ML.NET version to detect regressions in ML.NET itself.

@kunalspathak I assume you would be interested in updating the benchmarks and the ML.NET version, and using the benchmarks as part of our automated regression detection suite?

@kunalspathak (Member) commented Dec 2, 2021

@kunalspathak I assume you would be interested in updating the benchmarks and the ML.NET version, and using the benchmarks as part of our automated regression detection suite?

That's right. It is part of our ongoing goal of having real-world benchmarks to measure .NET performance. I am also working on adding ImageSharp to this suite in #2149. So help me understand: in the current state, if we just copy the latest code from https://github.com/dotnet/machinelearning/tree/main/test/Microsoft.ML.PerformanceTests and use NuGet references to the latest ML.NET, we should be good, and this PR is not blocking it, yes?

Of course, we certainly want to track ML.NET performance and in order to do that, we need this PR.

@adamsitnik (Member, Author)

we should be good and this PR is not blocking it, yes?

Yes, that is correct.
