
Enabling Ranking Cross Validation #5263


Merged: 12 commits, merged on Jul 10, 2020
Changes from 5 commits
1 change: 1 addition & 0 deletions src/Microsoft.ML.AutoML/API/ColumnInference.cs
@@ -60,6 +60,7 @@ public sealed class ColumnInformation

/// <summary>
/// The dataset column to use as a group ID for computation.
/// If a SamplingKeyColumnName is provided, then it should be the same as this column.
/// </summary>
public string GroupIdColumnName { get; set; }

36 changes: 29 additions & 7 deletions src/Microsoft.ML.AutoML/API/ExperimentBase.cs
@@ -67,11 +67,23 @@ internal ExperimentBase(MLContext context,
public ExperimentResult<TMetrics> Execute(IDataView trainData, string labelColumnName = DefaultColumnNames.Label,
string samplingKeyColumn = null, IEstimator<ITransformer> preFeaturizer = null, IProgress<RunDetail<TMetrics>> progressHandler = null)
{
-            var columnInformation = new ColumnInformation()
+            ColumnInformation columnInformation;
+            if (_task == TaskKind.Ranking)
             {
-                LabelColumnName = labelColumnName,
-                SamplingKeyColumnName = samplingKeyColumn
-            };
+                columnInformation = new ColumnInformation()
+                {
+                    LabelColumnName = labelColumnName,
+                    GroupIdColumnName = samplingKeyColumn ?? DefaultColumnNames.GroupId
+                };
Member: Suggestion: As per the feedback we got from @justinormont today, I think it would be better to set both GroupIdColumnName and SamplingKeyColumnName in here. Something like:

columnInformation = new ColumnInformation()
{
    LabelColumnName = labelColumnName,
    SamplingKeyColumnName = samplingKeyColumn ?? DefaultColumnNames.GroupId,
    GroupIdColumnName = samplingKeyColumn ?? DefaultColumnNames.GroupId // For ranking, we want to enforce having the same column as samplingKeyColumn and GroupIdColumn
}

With your current implementation it won't make any difference to do this, but I do think this might be clearer for future AutoML.NET developers.

A similar change would need to take place in the other overload that receives a samplingKeyColumnName but no columnInformation.

Contributor Author: I would lean towards deferring this to the next update. I will take a quick look, but having a column with two column infos seems to be causing issues.

Member: It's just a two-line change (adding the line in here, and in the other overload), and it's just to make it clear in the columnInformation object that we'll be using the samplingKeyColumn provided by the user both as SamplingKeyColumnName and GroupIdColumnName (which is actually what we're doing). So I think it's clearer this way. But whatever you decide is fine 😉

Contributor Author: Just to be clear, mapping a groupId column to both SamplingKeyColumnName and GroupIdColumnName doesn't work with the current implementation. The current implementation uses GroupIdColumnName as the SamplingKeyColumnName, so if the user provides a SamplingKeyColumnName, we throw an error (unless they are both the same).

Member: Yeah, I know the current implementation throws if they're not the same. That's why I suggested using the samplingKeyColumn to set both SamplingKeyColumnName and GroupIdColumnName. In general, if the user provides a ColumnInformation object containing SamplingKeyColumnName and GroupIdColumnName, then we should accept it if both are the same (and in the current implementation this is doable). So I'm just not sure what the problem is here.
+            }
+            else
+            {
+                columnInformation = new ColumnInformation()
+                {
+                    LabelColumnName = labelColumnName,
+                    SamplingKeyColumnName = samplingKeyColumn
+                };
+            }
return Execute(trainData, columnInformation, preFeaturizer, progressHandler);
}

@@ -102,19 +114,28 @@ public ExperimentResult<TMetrics> Execute(IDataView trainData, ColumnInformation
const int crossValRowCountThreshold = 15000;

var rowCount = DatasetDimensionsUtil.CountRows(trainData, crossValRowCountThreshold);
+            var samplingKeyColumnName = GetSamplingKey(columnInformation?.GroupIdColumnName, columnInformation?.SamplingKeyColumnName);
if (rowCount < crossValRowCountThreshold)
{
const int numCrossValFolds = 10;
-                var splitResult = SplitUtil.CrossValSplit(Context, trainData, numCrossValFolds, columnInformation?.SamplingKeyColumnName);
+                var splitResult = SplitUtil.CrossValSplit(Context, trainData, numCrossValFolds, samplingKeyColumnName);
return ExecuteCrossValSummary(splitResult.trainDatasets, columnInformation, splitResult.validationDatasets, preFeaturizer, progressHandler);
}
else
{
-                var splitResult = SplitUtil.TrainValidateSplit(Context, trainData, columnInformation?.SamplingKeyColumnName);
+                var splitResult = SplitUtil.TrainValidateSplit(Context, trainData, samplingKeyColumnName);
return ExecuteTrainValidate(splitResult.trainData, columnInformation, splitResult.validationData, preFeaturizer, progressHandler);
}
}
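The overload above picks its split strategy from dataset size: fewer than 15,000 rows gets 10-fold cross-validation, anything larger a single train/validate split. A minimal standalone sketch of that decision rule (the `SplitStrategy.Choose` helper is illustrative, not ML.NET API):

```csharp
using System;

public static class SplitStrategy
{
    // Same constants as ExperimentBase.Execute above.
    public const int CrossValRowCountThreshold = 15000;
    public const int NumCrossValFolds = 10;

    // Returns a description of the strategy AutoML would pick for this row count.
    public static string Choose(long rowCount) =>
        rowCount < CrossValRowCountThreshold
            ? $"cross-validation ({NumCrossValFolds} folds)"
            : "train/validate split";

    public static void Main()
    {
        Console.WriteLine(Choose(1_000));   // cross-validation (10 folds)
        Console.WriteLine(Choose(50_000));  // train/validate split
    }
}
```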

private string GetSamplingKey(string groupIdColumnName, string samplingKeyColumnName)
{
UserInputValidationUtil.ValidateSamplingKey(samplingKeyColumnName, groupIdColumnName, _task);
if (_task == TaskKind.Ranking)
return groupIdColumnName ?? DefaultColumnNames.GroupId;
return samplingKeyColumnName;
}

/// <summary>
/// Executes an AutoML experiment.
/// </summary>
@@ -194,7 +215,8 @@ public CrossValidationExperimentResult<TMetrics> Execute(IDataView trainData, ui
IProgress<CrossValidationRunDetail<TMetrics>> progressHandler = null)
{
UserInputValidationUtil.ValidateNumberOfCVFoldsArg(numberOfCVFolds);
-            var splitResult = SplitUtil.CrossValSplit(Context, trainData, numberOfCVFolds, columnInformation?.SamplingKeyColumnName);
+            var samplingKeyColumnName = GetSamplingKey(columnInformation?.GroupIdColumnName, columnInformation?.SamplingKeyColumnName);
+            var splitResult = SplitUtil.CrossValSplit(Context, trainData, numberOfCVFolds, samplingKeyColumnName);
return ExecuteCrossVal(splitResult.trainDatasets, columnInformation, splitResult.validationDatasets, preFeaturizer, progressHandler);
}

6 changes: 3 additions & 3 deletions src/Microsoft.ML.AutoML/API/RankingExperiment.cs
@@ -34,7 +34,7 @@ public sealed class RankingExperimentSettings : ExperimentSettings
public ICollection<RankingTrainer> Trainers { get; }
public RankingExperimentSettings()
{
-            GroupIdColumnName = "GroupId";
+            GroupIdColumnName = DefaultColumnNames.GroupId;
OptimizingMetric = RankingMetric.Ndcg;
Trainers = Enum.GetValues(typeof(RankingTrainer)).OfType<RankingTrainer>().ToList();
}
@@ -77,7 +77,7 @@ public static class RankingExperimentResultExtensions
/// <param name="metric">Metric to consider when selecting the best run.</param>
/// <param name="groupIdColumnName">Name for the GroupId column.</param>
/// <returns>The best experiment run.</returns>
-        public static RunDetail<RankingMetrics> Best(this IEnumerable<RunDetail<RankingMetrics>> results, RankingMetric metric = RankingMetric.Ndcg, string groupIdColumnName = "GroupId")
+        public static RunDetail<RankingMetrics> Best(this IEnumerable<RunDetail<RankingMetrics>> results, RankingMetric metric = RankingMetric.Ndcg, string groupIdColumnName = DefaultColumnNames.GroupId)
{
var metricsAgent = new RankingMetricsAgent(null, metric, groupIdColumnName);
var isMetricMaximizing = new OptimizingMetricInfo(metric).IsMaximizing;
@@ -91,7 +91,7 @@ public static RunDetail<RankingMetrics> Best(this IEnumerable<RunDetail<RankingM
/// <param name="metric">Metric to consider when selecting the best run.</param>
/// <param name="groupIdColumnName">Name for the GroupId column.</param>
/// <returns>The best experiment run.</returns>
-        public static CrossValidationRunDetail<RankingMetrics> Best(this IEnumerable<CrossValidationRunDetail<RankingMetrics>> results, RankingMetric metric = RankingMetric.Ndcg, string groupIdColumnName = "GroupId")
+        public static CrossValidationRunDetail<RankingMetrics> Best(this IEnumerable<CrossValidationRunDetail<RankingMetrics>> results, RankingMetric metric = RankingMetric.Ndcg, string groupIdColumnName = DefaultColumnNames.GroupId)
{
var metricsAgent = new RankingMetricsAgent(null, metric, groupIdColumnName);
var isMetricMaximizing = new OptimizingMetricInfo(metric).IsMaximizing;
8 changes: 8 additions & 0 deletions src/Microsoft.ML.AutoML/Utils/UserInputValidationUtil.cs
@@ -57,6 +57,14 @@ public static void ValidateNumberOfCVFoldsArg(uint numberOfCVFolds)
}
}

public static void ValidateSamplingKey(string samplingKeyColumnName, string groupIdColumnName, TaskKind task)
{
if (task == TaskKind.Ranking && samplingKeyColumnName != null && samplingKeyColumnName != groupIdColumnName)
{
throw new ArgumentException($"If provided, {nameof(samplingKeyColumnName)} must be the same as {nameof(groupIdColumnName)} for ranking experiments.", nameof(samplingKeyColumnName));
}
}
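Together with GetSamplingKey in ExperimentBase, this validation pins down a single rule: for ranking, an explicit sampling key must equal the group id column, and the group id (defaulting to "GroupId") becomes the sampling key; every other task uses the caller's sampling key as-is. A standalone sketch of the combined rule (the `Resolve` helper and its bool flag are illustrative, not ML.NET API):

```csharp
using System;

public static class SamplingKeyRules
{
    // Combines ValidateSamplingKey and GetSamplingKey: returns the column
    // name that the train/test or cross-validation splits will group rows by.
    public static string Resolve(bool isRanking, string samplingKey, string groupId)
    {
        // Ranking rejects a sampling key that disagrees with the group id.
        if (isRanking && samplingKey != null && samplingKey != groupId)
            throw new ArgumentException(
                "For ranking experiments the sampling key column, if provided, must equal the group id column.");

        // Ranking always groups by the group id (default "GroupId");
        // other tasks use whatever sampling key the caller gave (possibly null).
        return isRanking ? (groupId ?? "GroupId") : samplingKey;
    }
}
```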

private static void ValidateTrainData(IDataView trainData, ColumnInformation columnInformation)
{
if (trainData == null)
@@ -417,7 +417,7 @@ public static void PrintRegressionFoldsAverageMetrics(IEnumerable<TrainCatalogBa

public static void PrintRankingFoldsAverageMetrics(IEnumerable<TrainCatalogBase.CrossValidationResult<RankingMetrics>> crossValidationResults)
{
-            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? metrics.NormalizedDiscountedCumulativeGains.Count-1 : 9;
+            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count-1 : 9;
var NDCG = crossValidationResults.Select(r => r.Metrics.NormalizedDiscountedCumulativeGains[max]);
var DCG = crossValidationResults.Select(r => r.Metrics.DiscountedCumulativeGains[max]);
Console.WriteLine($""*************************************************************************************************************"");
@@ -336,7 +336,7 @@ else{#>

public static void PrintRankingFoldsAverageMetrics(IEnumerable<TrainCatalogBase.CrossValidationResult<RankingMetrics>> crossValidationResults)
{
-            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? metrics.NormalizedDiscountedCumulativeGains.Count-1 : 9;
+            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count-1 : 9;
var NDCG = crossValidationResults.Select(r => r.Metrics.NormalizedDiscountedCumulativeGains[max]);
var DCG = crossValidationResults.Select(r => r.Metrics.DiscountedCumulativeGains[max]);
Console.WriteLine($"*************************************************************************************************************");
2 changes: 2 additions & 0 deletions src/Microsoft.ML.Data/DataLoadSave/DataOperationsCatalog.cs
@@ -398,6 +398,7 @@ public IDataView TakeRows(IDataView input, long count)
/// <param name="testFraction">The fraction of data to go into the test set.</param>
/// <param name="samplingKeyColumnName">Name of a column to use for grouping rows. If two examples share the same value of the <paramref name="samplingKeyColumnName"/>,
/// they are guaranteed to appear in the same subset (train or test). This can be used to ensure no label leakage from the train to the test set.
/// Note that when performing a Ranking Experiment, the <paramref name="samplingKeyColumnName"/> must be the GroupId column.
/// If <see langword="null"/> no row grouping will be performed.</param>
/// <param name="seed">Seed for the random number generator used to select rows for the train-test split.</param>
/// <example>
@@ -444,6 +445,7 @@ public TrainTestData TrainTestSplit(IDataView data, double testFraction = 0.1, s
/// <param name="numberOfFolds">Number of cross-validation folds.</param>
/// <param name="samplingKeyColumnName">Name of a column to use for grouping rows. If two examples share the same value of the <paramref name="samplingKeyColumnName"/>,
/// they are guaranteed to appear in the same subset (train or test). This can be used to ensure no label leakage from the train to the test set.
/// Note that when performing a Ranking Experiment, the <paramref name="samplingKeyColumnName"/> must be the GroupId column.
/// If <see langword="null"/> no row grouping will be performed.</param>
/// <param name="seed">Seed for the random number generator used to select rows for cross-validation folds.</param>
/// <example>
2 changes: 1 addition & 1 deletion src/Microsoft.ML.Data/Evaluators/RankingEvaluator.cs
@@ -63,7 +63,7 @@ internal sealed class RankingEvaluator : EvaluatorBase<RankingEvaluator.Aggregat
/// </value>
public const string GroupSummary = "GroupSummary";

-        private const string GroupId = "GroupId";
+        private const string GroupId = DefaultColumnNames.GroupId;

private readonly int _truncationLevel;
private readonly bool _groupSummary;
25 changes: 25 additions & 0 deletions src/Microsoft.ML.Data/TrainCatalog.cs
@@ -674,6 +674,31 @@ public RankingMetrics Evaluate(IDataView data,
var eval = new RankingEvaluator(Environment, options ?? new RankingEvaluatorOptions() { });
return eval.Evaluate(data, labelColumnName, rowGroupColumnName, scoreColumnName);
}

/// <summary>
/// Run cross-validation over <paramref name="numberOfFolds"/> folds of <paramref name="data"/>, by fitting <paramref name="estimator"/>,
/// and respecting <paramref name="rowGroupColumnName"/> if provided.
/// Then evaluate each sub-model against <paramref name="labelColumnName"/> and return metrics.
/// </summary>
/// <param name="data">The data to run cross-validation on.</param>
/// <param name="estimator">The estimator to fit.</param>
/// <param name="numberOfFolds">Number of cross-validation folds.</param>
/// <param name="labelColumnName">The label column (for evaluation).</param>
/// <param name="rowGroupColumnName">The name of the group id column in <paramref name="data"/>, which is used to group rows.
/// For other cross-validation methods this column is called the sampling key column; for ranking, the group id
/// column must serve as the sampling key.
/// If <see langword="null"/> no row grouping will be performed.</param>
/// <param name="seed">Seed for the random number generator used to select rows for cross-validation folds.</param>
/// <returns>Per-fold results: metrics, models, scored datasets.</returns>
public IReadOnlyList<CrossValidationResult<RankingMetrics>> CrossValidate(
IDataView data, IEstimator<ITransformer> estimator, int numberOfFolds = 5, string labelColumnName = DefaultColumnNames.Label,
string rowGroupColumnName = DefaultColumnNames.GroupId, int? seed = null)
{
Environment.CheckNonEmpty(labelColumnName, nameof(labelColumnName));
var result = CrossValidateTrain(data, estimator, numberOfFolds, rowGroupColumnName, seed);
return result.Select(x => new CrossValidationResult<RankingMetrics>(x.Model,
Evaluate(x.Scores, labelColumnName, rowGroupColumnName), x.Scores, x.Fold)).ToArray();
}
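A possible usage sketch of this new ranking CrossValidate overload; the file name, row schema, and pipeline here are illustrative assumptions, not part of the PR:

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical row schema for a tab-separated ranking dataset.
public class RankingRow
{
    [LoadColumn(0)] public float Label;
    [LoadColumn(1)] public string GroupId;
    [LoadColumn(2, 10), VectorType(9)] public float[] Features;
}

public static class Program
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 1);
        // "ranking.tsv" is a hypothetical dataset path.
        var data = mlContext.Data.LoadFromTextFile<RankingRow>("ranking.tsv", hasHeader: true);

        var pipeline = mlContext.Transforms.Conversion.Hash("GroupId", "GroupId")
            .Append(mlContext.Ranking.Trainers.LightGbm(
                labelColumnName: "Label", featureColumnName: "Features", rowGroupColumnName: "GroupId"));

        // Folds are grouped by GroupId, so rows from one query never
        // end up on both sides of a train/test boundary.
        var folds = mlContext.Ranking.CrossValidate(
            data, pipeline, numberOfFolds: 5, rowGroupColumnName: "GroupId");

        foreach (var fold in folds)
            Console.WriteLine($"Fold {fold.Fold}: NDCG@1 = {fold.Metrics.NormalizedDiscountedCumulativeGains[0]:F4}");
    }
}
```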
}

/// <summary>
8 changes: 4 additions & 4 deletions src/Microsoft.ML.LightGbm/LightGbmRankingTrainer.cs
@@ -179,7 +179,7 @@ internal LightGbmRankingTrainer(IHostEnvironment env, Options options)
/// <param name="env">The private instance of <see cref="IHostEnvironment"/>.</param>
/// <param name="labelColumnName">The name of the label column.</param>
/// <param name="featureColumnName">The name of the feature column.</param>
-    /// <param name="rowGroupdColumnName">The name of the column containing the group ID. </param>
+    /// <param name="rowGroupIdColumnName">The name of the column containing the group ID.</param>
/// <param name="weightsColumnName">The name of the optional column containing the initial weights.</param>
/// <param name="numberOfLeaves">The number of leaves to use.</param>
/// <param name="learningRate">The learning rate.</param>
@@ -188,7 +188,7 @@ internal LightGbmRankingTrainer(IHostEnvironment env, Options options)
internal LightGbmRankingTrainer(IHostEnvironment env,
string labelColumnName = DefaultColumnNames.Label,
string featureColumnName = DefaultColumnNames.Features,
-            string rowGroupdColumnName = DefaultColumnNames.GroupId,
+            string rowGroupIdColumnName = DefaultColumnNames.GroupId,
string weightsColumnName = null,
int? numberOfLeaves = null,
int? minimumExampleCountPerLeaf = null,
@@ -200,14 +200,14 @@ internal LightGbmRankingTrainer(IHostEnvironment env,
LabelColumnName = labelColumnName,
FeatureColumnName = featureColumnName,
ExampleWeightColumnName = weightsColumnName,
-                RowGroupColumnName = rowGroupdColumnName,
+                RowGroupColumnName = rowGroupIdColumnName,
NumberOfLeaves = numberOfLeaves,
MinimumExampleCountPerLeaf = minimumExampleCountPerLeaf,
LearningRate = learningRate,
NumberOfIterations = numberOfIterations
})
{
-            Host.CheckNonEmpty(rowGroupdColumnName, nameof(rowGroupdColumnName));
+            Host.CheckNonEmpty(rowGroupIdColumnName, nameof(rowGroupIdColumnName));
}

private protected override void CheckDataValid(IChannel ch, RoleMappedData data)
35 changes: 35 additions & 0 deletions test/Microsoft.ML.AutoML.Tests/AutoFitTests.cs
@@ -7,6 +7,7 @@
using Microsoft.ML.TestFramework;
using Microsoft.ML.TestFramework.Attributes;
using Microsoft.ML.TestFrameworkCommon;
using Microsoft.ML.Trainers.LightGbm;
using Xunit;
using Xunit.Abstractions;
using static Microsoft.ML.DataOperationsCatalog;
@@ -156,6 +157,40 @@ public void AutoFitRankingTest()
Assert.True(col.Name == expectedOutputNames[col.Index]);
}

[LightGBMFact]
public void AutoFitRankingCVTest()
Contributor Author (@Lynx1820, Jun 26, 2020): This is the way experiments are used within codegen.
Review: Should I add cross validation tests to all other experiments?

Contributor: > Should I add cross validation tests to all other experiments?

If I recall it correctly, if your dataset has fewer than 15,000 rows of data, AutoML will run cross-validation automatically; if you have more than 15,000 rows, it will use a train-test split instead. So the rest of the tests in AutoFitTests should all be CV runs, considering that the dataset they use is really small. (@justinormont correct me if I'm wrong)

Tests starting with AutoFit should test the AutoML ranking experiment API, so you shouldn't have to create your pipeline from scratch in this test. If you just want to test Ranking.CrossValidation, consider renaming it more specifically.

Member (@antoniovs1029, Jul 10, 2020): It's the other way around. If it has fewer than 15,000 rows it runs a train-test split automatically, and if it has more it runs CV. This only happens on one overload, but I believe Keren isn't using that overload in her tests.

Contributor Author: I have added CV testing for ranking only. I think it would be good to add testing for the other tasks as well in the future.
{
string labelColumnName = "Label";
string groupIdColumnName = "GroupIdCustom";
string featuresColumnVectorNameA = "FeatureVectorA";
string featuresColumnVectorNameB = "FeatureVectorB";
uint numFolds = 3;

var mlContext = new MLContext(1);
var reader = new TextLoader(mlContext, GetLoaderArgsRank(labelColumnName, groupIdColumnName,
featuresColumnVectorNameA, featuresColumnVectorNameB));
var trainDataView = reader.Load(new MultiFileSource(DatasetUtil.GetMLSRDataset()));

CrossValidationExperimentResult<RankingMetrics> experimentResult = mlContext.Auto()
.CreateRankingExperiment(new RankingExperimentSettings() { GroupIdColumnName = groupIdColumnName, MaxExperimentTimeInSeconds = 5 })
.Execute(trainDataView, numFolds,
new ColumnInformation()
{
LabelColumnName = labelColumnName,
GroupIdColumnName = groupIdColumnName
});

CrossValidationRunDetail<RankingMetrics> bestRun = experimentResult.BestRun;
Assert.True(experimentResult.RunDetails.Count() > 0);
var enumerator = bestRun.Results.GetEnumerator();
while (enumerator.MoveNext())
{
var model = enumerator.Current;
Assert.True(model.ValidationMetrics.NormalizedDiscountedCumulativeGains.Max() > .4);
Assert.True(model.ValidationMetrics.DiscountedCumulativeGains.Max() > 19);
}
}

[Fact]
public void AutoFitRecommendationTest()
{
@@ -11,7 +11,6 @@ using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
using TestNamespace.Model;
-using Microsoft.ML.Trainers.LightGbm;

namespace TestNamespace.ConsoleApp
{
@@ -58,7 +57,7 @@ namespace TestNamespace.ConsoleApp
// Data process configuration with pipeline data transformations
var dataProcessPipeline = mlContext.Transforms.Conversion.Hash("GroupId", "GroupId");
// Set the training algorithm
-            var trainer = mlContext.Ranking.Trainers.LightGbm(new LightGbmRankingTrainer.Options() { rowGroupColumnName = "GroupId", LabelColumnName = "Label", FeatureColumnName = "Features" });
+            var trainer = mlContext.Ranking.Trainers.LightGbm(rowGroupColumnName: "GroupId", labelColumnName: "Label", featureColumnName: "Features");

var trainingPipeline = dataProcessPipeline.Append(trainer);

@@ -115,7 +114,7 @@ namespace TestNamespace.ConsoleApp

public static void PrintRankingFoldsAverageMetrics(IEnumerable<TrainCatalogBase.CrossValidationResult<RankingMetrics>> crossValidationResults)
{
-            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? metrics.NormalizedDiscountedCumulativeGains.Count - 1 : 9;
+            var max = (crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count < 10) ? crossValidationResults.First().Metrics.NormalizedDiscountedCumulativeGains.Count - 1 : 9;
var NDCG = crossValidationResults.Select(r => r.Metrics.NormalizedDiscountedCumulativeGains[max]);
var DCG = crossValidationResults.Select(r => r.Metrics.DiscountedCumulativeGains[max]);
Console.WriteLine($"*************************************************************************************************************");