
Fixed X64Fact tests #5057

Merged 4 commits into dotnet:master on Apr 25, 2020

Conversation


@mstfbl mstfbl commented Apr 23, 2020

Fixed X64Fact tests by adding x86-specific baselines.

Currently obtaining the baselines from x86 CI builds.

@mstfbl mstfbl marked this pull request as ready for review April 24, 2020 03:34
@mstfbl mstfbl requested a review from a team as a code owner April 24, 2020 03:34

@mstfbl mstfbl left a comment


There are 4 more X64Fact tests in TestPredictors.cs that are not fully enabled, in relation to issue #1216:

```csharp
[X64Fact("x86 fails. Associated GitHubIssue: https://github.com/dotnet/machinelearning/issues/1216")]
public void TestEnsembleCombiner()
{
    var dataPath = GetDataPath("breast-cancer.txt");
    var dataView = ML.Data.LoadFromTextFile(dataPath);
    var predictors = new PredictorModel[]
    {
        FastTree.TrainBinary(ML, new FastTreeBinaryTrainer.Options
        {
            FeatureColumnName = "Features",
            NumberOfTrees = 5,
            NumberOfLeaves = 4,
            LabelColumnName = DefaultColumnNames.Label,
            TrainingData = dataView
        }).PredictorModel,
        AveragedPerceptronTrainer.TrainBinary(ML, new AveragedPerceptronTrainer.Options()
        {
            FeatureColumnName = "Features",
            LabelColumnName = DefaultColumnNames.Label,
            NumberOfIterations = 2,
            TrainingData = dataView,
            NormalizeFeatures = NormalizeOption.No
        }).PredictorModel,
        LbfgsLogisticRegressionBinaryTrainer.TrainBinary(ML, new LbfgsLogisticRegressionBinaryTrainer.Options()
        {
            FeatureColumnName = "Features",
            LabelColumnName = DefaultColumnNames.Label,
            OptimizationTolerance = 10e-4F,
            TrainingData = dataView,
            NormalizeFeatures = NormalizeOption.No
        }).PredictorModel,
        LbfgsLogisticRegressionBinaryTrainer.TrainBinary(ML, new LbfgsLogisticRegressionBinaryTrainer.Options()
        {
            FeatureColumnName = "Features",
            LabelColumnName = DefaultColumnNames.Label,
            OptimizationTolerance = 10e-3F,
            TrainingData = dataView,
            NormalizeFeatures = NormalizeOption.No
        }).PredictorModel
    };
    CombineAndTestEnsembles(dataView, "pe", "oc=average", PredictionKind.BinaryClassification, predictors);
    Done();
}

[X64Fact("x86 fails. Associated GitHubIssue: https://github.com/dotnet/machinelearning/issues/1216")]
public void TestMulticlassEnsembleCombiner()
{
    var dataPath = GetDataPath("breast-cancer.txt");
    var dataView = ML.Data.LoadFromTextFile(dataPath);
    var predictors = new PredictorModel[]
    {
        LightGbm.TrainMulticlass(Env, new LightGbmMulticlassTrainer.Options
        {
            FeatureColumnName = "Features",
            NumberOfIterations = 5,
            NumberOfLeaves = 4,
            LabelColumnName = DefaultColumnNames.Label,
            TrainingData = dataView
        }).PredictorModel,
        LbfgsMaximumEntropyMulticlassTrainer.TrainMulticlass(Env, new LbfgsMaximumEntropyMulticlassTrainer.Options()
        {
            FeatureColumnName = "Features",
            LabelColumnName = DefaultColumnNames.Label,
            OptimizationTolerance = 10e-4F,
            TrainingData = dataView,
            NormalizeFeatures = NormalizeOption.No
        }).PredictorModel,
        LbfgsMaximumEntropyMulticlassTrainer.TrainMulticlass(Env, new LbfgsMaximumEntropyMulticlassTrainer.Options()
        {
            FeatureColumnName = "Features",
            LabelColumnName = DefaultColumnNames.Label,
            OptimizationTolerance = 10e-3F,
            TrainingData = dataView,
            NormalizeFeatures = NormalizeOption.No
        }).PredictorModel
    };
    CombineAndTestEnsembles(dataView, "weightedensemblemulticlass", "oc=multiaverage", PredictionKind.MulticlassClassification, predictors);
}
```

These tests cause crashes during runs; they should be explored further in the near future and re-enabled in a separate PR.

@harishsk

Yes, I noticed that too when I was debugging earlier.

Can you please do me a favor?

Now that we have moved to different baselines, I think it would be good to be able to easily track which tests are using configuration-specific baselines. Can you please add some code in BaseTestBaseline.cs to log whenever we use a configuration-specific baseline? I just realized that this is something we should do.



@harishsk

You can do it as a separate PR if you wish.


@frank-dong-ms-zz frank-dong-ms-zz left a comment


:shipit:

@mstfbl mstfbl commented Apr 24, 2020

I will make a separate PR for logging when a configuration-specific baseline is utilized.
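As a rough illustration of the logging harishsk requested, the follow-up change might look something like the sketch below. This is a hypothetical standalone helper, not the actual ML.NET test infrastructure: the class name `BaselineResolver`, the directory layout, and the use of `Console.WriteLine` are all assumptions made for the example.

```csharp
using System;
using System.IO;

// Hypothetical sketch: resolve a baseline file for a test, preferring a
// configuration-specific (e.g. x86-only) baseline when one exists, and log
// whenever the configuration-specific path is chosen so such tests are easy
// to spot in CI output.
public static class BaselineResolver
{
    public static string GetBaselinePath(string baselineDir, string testName, string architecture)
    {
        var configSpecificPath = Path.Combine(baselineDir, architecture, testName + ".txt");
        if (File.Exists(configSpecificPath))
        {
            // The log line makes configuration-specific baseline usage trackable.
            Console.WriteLine($"[Baseline] Test '{testName}' is using a {architecture}-specific baseline: {configSpecificPath}");
            return configSpecificPath;
        }
        // Fall back to the shared baseline when no configuration-specific one exists.
        return Path.Combine(baselineDir, testName + ".txt");
    }
}
```

In a real change this logging would go through the test output writer in BaseTestBaseline.cs rather than the console, so the message lands in the per-test logs.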

@mstfbl mstfbl merged commit 2b5b9cf into dotnet:master Apr 25, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Mar 18, 2022