
Commit 22224ec

sfilipi authored and codemzs committed
Fixes dotnet#591: typos, adding the type attribute to lists, and moving the name attribute for some examples (dotnet#592)
* Fixes issue 591: typos, adding the type to lists, and fixing the name attribute in OGD and Poisson
* getting just the content under the member node, not the member itself.
* merging from master
1 parent c3730a0 · commit 22224ec

15 files changed, +101 −103 lines changed

src/Microsoft.ML.Data/Transforms/doc.xml

Lines changed: 47 additions & 1 deletion
@@ -28,7 +28,7 @@
 </summary>
 <remarks>
 The TextToKeyConverter transform builds up term vocabularies (dictionaries).
-The TextToKey Converter and the <see cref="T:Microsoft.ML.Transforms.HashConverter"/> are the two one primary mechanisms by which raw input is transformed into keys.
+The TextToKeyConverter and the <see cref="T:Microsoft.ML.Transforms.HashConverter"/> are the two one primary mechanisms by which raw input is transformed into keys.
 If multiple columns are used, each column builds/uses exactly one vocabulary.
 The output columns are KeyType-valued.
 The Key value is the one-based index of the item in the dictionary.
@@ -49,6 +49,52 @@
 </code>
 </example>
 </example>
+
+<member name="NAHandle">
+<summary>
+Handle missing values by replacing them with either the default value or the indicated value.
+</summary>
+<remarks>
+This transform handles missing values in the input columns. For each input column, it creates an output column
+where the missing values are replaced by one of these specified values:
+<list type='bullet'>
+<item>
+<description>The default value of the appropriate type.</description>
+</item>
+<item>
+<description>The mean value of the appropriate type.</description>
+</item>
+<item>
+<description>The max value of the appropriate type.</description>
+</item>
+<item>
+<description>The min value of the appropriate type.</description>
+</item>
+</list>
+<para>The last three work only for numeric/TimeSpan/DateTime kind columns.</para>
+<para>
+The output column can also optionally include an indicator vector for which slots were missing in the input column.
+This can be done only when the indicator vector type can be converted to the input column type, i.e. only for numeric columns.
+</para>
+<para>
+When computing the mean/max/min value, there is also an option to compute it over the whole column instead of per slot.
+This option has a default value of true for variable length vectors, and false for known length vectors.
+It can be changed to true for known length vectors, but it results in an error if changed to false for variable length vectors.
+</para>
+</remarks>
+<seealso cref=" Microsoft.ML.Runtime.Data.MetadataUtils.Kinds.HasMissingValues"/>
+<seealso cref="T:Microsoft.ML.Data.DataKind"/>
+</member>
+<example name="NAHandle">
+<example>
+<code language="csharp">
+pipeline.Add(new MissingValueHandler(&quot;FeatureCol&quot;, &quot;CleanFeatureCol&quot;)
+{
+ReplaceWith = NAHandleTransformReplacementKind.Mean
+});
+</code>
+</example>
+</example>
 
 </members>
 </doc>
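
The per-slot versus whole-column distinction in the new NAHandle remarks is easiest to see in code. Below is a minimal, hypothetical C# sketch of per-slot mean imputation over fixed-length float vectors, not the transform's actual implementation; `float.NaN` stands in for the missing-value marker, and all names are illustrative.

    using System;
    using System.Linq;

    static class MeanImputeSketch
    {
        // Replace NaNs with the mean of the observed values in the same slot
        // (vector position). This is the per-slot mode; the whole-column mode
        // would pool every slot of every row into a single mean.
        public static float[][] ImputePerSlot(float[][] rows)
        {
            int slots = rows[0].Length;
            var means = new float[slots];
            for (int s = 0; s < slots; s++)
            {
                var observed = rows.Select(r => r[s]).Where(v => !float.IsNaN(v)).ToArray();
                means[s] = observed.Length > 0 ? observed.Average() : 0f; // fall back to default
            }
            return rows.Select(r =>
                r.Select((v, s) => float.IsNaN(v) ? means[s] : v).ToArray()).ToArray();
        }
    }

For variable-length vectors the slot positions are not aligned across rows, which is why, per the remarks, the whole-column mode is the default there and switching it off is an error.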

src/Microsoft.ML.FastTree/TreeEnsembleFeaturizer.cs

Lines changed: 1 addition & 1 deletion
@@ -807,7 +807,7 @@ public static partial class TreeFeaturize
 Desc = TreeEnsembleFeaturizerTransform.TreeEnsembleSummary,
 UserName = TreeEnsembleFeaturizerTransform.UserName,
 ShortName = TreeEnsembleFeaturizerBindableMapper.LoadNameShort,
-XmlInclude = new[] { @"<include file='../Microsoft.ML.FastTree/doc.xml' path='doc/members/member[@name=""TreeEnsembleFeaturizerTransform""]'/>" })]
+XmlInclude = new[] { @"<include file='../Microsoft.ML.FastTree/doc.xml' path='doc/members/member[@name=""TreeEnsembleFeaturizerTransform""]/*'/>" })]
 public static CommonOutputs.TransformOutput Featurizer(IHostEnvironment env, TreeEnsembleFeaturizerTransform.ArgumentsForEntryPoint input)
 {
 Contracts.CheckValue(env, nameof(env));
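
The `/*` suffix added here is the substance of the commit's "getting just the content under the member node, not the member itself": without it the XPath selects the `<member>` element; with it, the selection is the member's children, which is what the include expects. A minimal sketch using the standard `System.Xml.XPath` extensions (the document contents are illustrative):

    using System;
    using System.Linq;
    using System.Xml.Linq;
    using System.Xml.XPath;

    class XPathIncludeDemo
    {
        static void Main()
        {
            var doc = XDocument.Parse(
                @"<doc><members>
                    <member name=""TreeEnsembleFeaturizerTransform"">
                      <summary>Featurizes with a tree ensemble.</summary>
                    </member>
                  </members></doc>");

            // Without /* : one result, the <member> element itself.
            var member = doc.XPathSelectElements(
                "doc/members/member[@name='TreeEnsembleFeaturizerTransform']");

            // With /* : the content under the member node -- here, <summary>.
            var content = doc.XPathSelectElements(
                "doc/members/member[@name='TreeEnsembleFeaturizerTransform']/*");

            Console.WriteLine(member.Single().Name);  // prints: member
            Console.WriteLine(content.Single().Name); // prints: summary
        }
    }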

src/Microsoft.ML.FastTree/doc.xml

Lines changed: 16 additions & 16 deletions
@@ -95,7 +95,7 @@
 <para>Generally, ensemble models provide better coverage and accuracy than single decision trees.
 Each tree in a decision forest outputs a Gaussian distribution.</para>
 <para>For more see: </para>
-<list>
+<list type='bullet'>
 <item><description><a href='http://en.wikipedia.org/wiki/Random_forest'>Wikipedia: Random forest</a></description></item>
 <item><description><a href='http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf'>Quantile regression forest</a></description></item>
 <item><description><a href='https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/'>From Stumps to Trees to Forests</a></description></item>
@@ -146,7 +146,7 @@
 <summary>
 Trains a tree ensemble, or loads it from a file, then maps a numeric feature vector
 to three outputs:
-<list>
+<list type='number'>
 <item><description>A vector containing the individual tree outputs of the tree ensemble.</description></item>
 <item><description>A vector indicating the leaves that the feature vector falls on in the tree ensemble.</description></item>
 <item><description>A vector indicating the paths that the feature vector falls on in the tree ensemble.</description></item>
@@ -157,28 +157,28 @@
 </summary>
 <remarks>
 In machine learning it is a pretty common and powerful approach to utilize the already trained model in the process of defining features.
-<para>One such example would be the use of model's scores as features to downstream models. For example, we might run clustering on the original features,
+<para>One such example would be the use of model&apos;s scores as features to downstream models. For example, we might run clustering on the original features,
 and use the cluster distances as the new feature set.
-Instead of consuming the model's output, we could go deeper, and extract the 'intermediate outputs' that are used to produce the final score. </para>
+Instead of consuming the model&apos;s output, we could go deeper, and extract the &apos;intermediate outputs&apos; that are used to produce the final score. </para>
 There are a number of famous or popular examples of this technique:
-<list>
-<item><description>A deep neural net trained on the ImageNet dataset, with the last layer removed, is commonly used to compute the 'projection' of the image into the 'semantic feature space'.
-It is observed that the Euclidean distance in this space often correlates with the 'semantic similarity': that is, all pictures of pizza are located close together,
+<list type='bullet'>
+<item><description>A deep neural net trained on the ImageNet dataset, with the last layer removed, is commonly used to compute the &apos;projection&apos; of the image into the &apos;semantic feature space&apos;.
+It is observed that the Euclidean distance in this space often correlates with the &apos;semantic similarity&apos;: that is, all pictures of pizza are located close together,
 and far away from pictures of kittens. </description></item>
-<item><description>A matrix factorization and/or LDA model is also often used to extract the 'latent topics' or 'latent features' associated with users and items.</description></item>
-<item><description>The weights of the linear model are often used as a crude indicator of 'feature importance'. At the very minimum, the 0-weight features are not needed by the model,
-and there's no reason to compute them. </description></item>
+<item><description>A matrix factorization and/or LDA model is also often used to extract the &apos;latent topics&apos; or &apos;latent features&apos; associated with users and items.</description></item>
+<item><description>The weights of the linear model are often used as a crude indicator of &apos;feature importance&apos;. At the very minimum, the 0-weight features are not needed by the model,
+and there&apos;s no reason to compute them. </description></item>
 </list>
 <para>Tree featurizer uses the decision tree ensembles for feature engineering in the same fashion as above.</para>
-<para>Let's assume that we've built a tree ensemble of 100 trees with 100 leaves each (it doesn't matter whether boosting was used or not in training).
+<para>Let&apos;s assume that we&apos;ve built a tree ensemble of 100 trees with 100 leaves each (it doesn&apos;t matter whether boosting was used or not in training).
 If we associate each leaf of each tree with a sequential integer, we can, for every incoming example x,
-produce an indicator vector L(x), where Li(x) = 1 if the example x 'falls' into the leaf #i, and 0 otherwise.</para>
+produce an indicator vector L(x), where Li(x) = 1 if the example x &apos;falls&apos; into the leaf #i, and 0 otherwise.</para>
 <para>Thus, for every example x, we produce a 10000-valued vector L, with exactly 100 1s and the rest zeroes.
-This 'leaf indicator' vector can be considered the ensemble-induced 'footprint' of the example.</para>
-<para>The 'distance' between two examples in the L-space is actually a Hamming distance, and is equal to the number of trees that do not distinguish the two examples.</para>
+This &apos;leaf indicator&apos; vector can be considered the ensemble-induced &apos;footprint&apos; of the example.</para>
+<para>The &apos;distance&apos; between two examples in the L-space is actually a Hamming distance, and is equal to the number of trees that do not distinguish the two examples.</para>
 <para>We could repeat the same thought process for the non-leaf, or internal, nodes of the trees (we know that each tree has exactly 99 of them in our 100-leaf example),
-and produce another indicator vector, N (size 9900), for each example, indicating the 'trajectory' of each example through each of the trees.</para>
-<para>The distance in the combined 19900-dimensional LN-space will be equal to the number of 'decisions' in all trees that 'agree' on the given pair of examples.</para>
+and produce another indicator vector, N (size 9900), for each example, indicating the &apos;trajectory&apos; of each example through each of the trees.</para>
+<para>The distance in the combined 19900-dimensional LN-space will be equal to the number of &apos;decisions&apos; in all trees that &apos;agree&apos; on the given pair of examples.</para>
 <para>The TreeLeafFeaturizer is also producing the third vector, T, which is defined as Ti(x) = output of tree #i on example x.</para>
 </remarks>
 <example>
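
To make the L(x) construction in these remarks concrete, here is a hedged C# sketch that builds the leaf-indicator vector from per-tree leaf assignments; `getLeaf` is a hypothetical stand-in for the real tree evaluation, and the sizes match the 100-trees/100-leaves example above.

    using System;

    static class LeafIndicatorSketch
    {
        // 100 trees x 100 leaves => a 10000-valued vector with exactly 100 ones.
        public static float[] LeafIndicator(Func<int, float[], int> getLeaf, float[] x,
                                            int treeCount = 100, int leavesPerTree = 100)
        {
            var L = new float[treeCount * leavesPerTree];
            for (int t = 0; t < treeCount; t++)
            {
                int leaf = getLeaf(t, x);          // leaf index of x in tree t, 0..leavesPerTree-1
                L[t * leavesPerTree + leaf] = 1f;  // sequential numbering across all trees
            }
            return L;
        }
    }

Between two such vectors, each tree that routes the examples to different leaves contributes exactly two differing positions, which is the Hamming-distance observation in the remarks.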

src/Microsoft.ML.KMeansClustering/doc.xml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 YYK-Means observes that there is a lot of redundancy across iterations in the KMeans algorithms and most points do not change their clusters during an iteration.
 It uses various bounding techniques to identify this redundancy and eliminate many distance computations and optimize centroid computations.
 <para>For more information on K-means, and K-means++ see:</para>
-<list>
+<list type='bullet'>
 <item><description><a href='https://en.wikipedia.org/wiki/K-means_clustering'>K-means</a></description></item>
 <item><description><a href='https://en.wikipedia.org/wiki/K-means%2b%2b'>K-means++</a></description></item>
 </list>
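
As a rough illustration of the bounding idea, here is a simplified, Hamerly-style check (a hedged sketch, not the actual YYK-Means implementation): keep an upper bound on each point's distance to its assigned centroid and one lower bound on its distance to every other centroid, adjust both by how far the centroids moved, and skip the distance computations whenever the bounds already decide the assignment.

    using System.Linq;

    static class KMeansBoundSketch
    {
        // upper[i]: upper bound on dist(point i, its assigned centroid)
        // lower[i]: lower bound on dist(point i, any other centroid)
        // shift[c]: how far centroid c moved in the last centroid update
        public static bool CanSkipReassignment(int i, int assigned,
                                               double[] upper, double[] lower, double[] shift)
        {
            upper[i] += shift[assigned]; // assigned centroid may have moved away (triangle inequality)
            lower[i] -= shift.Max();     // any other centroid may have moved closer
            // If the worst case still keeps the assigned centroid closest,
            // point i provably keeps its cluster this iteration.
            return upper[i] <= lower[i];
        }
    }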

src/Microsoft.ML.PCA/doc.xml

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 Its training is done using the technique described in the paper: <a href='https://arxiv.org/pdf/1310.6304v2.pdf'>Combining Structured and Unstructured Randomness in Large Scale PCA</a>,
 and the paper <a href='https://arxiv.org/pdf/0909.4061v2.pdf'>Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions</a>
 <para>For more information, see also:</para>
-<list>
+<list type='bullet'>
 <item><description>
 <a href='http://web.stanford.edu/group/mmds/slides2010/Martinsson.pdf'>Randomized Methods for Computing the Singular Value Decomposition (SVD) of very large matrices</a>
 </description></item>
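
The randomized technique in the cited papers reduces the large m×n problem to a small k-dimensional one. A hedged sketch of the core range-finding step follows (plain Gram-Schmidt, uniform rather than Gaussian randomness, and the small final factorization omitted; an approximation of the papers' method, not ML.NET's code):

    using System;

    static class RandomizedRangeFinder
    {
        // Y = A * Omega with a random n x k Omega, then orthonormalize Y's
        // columns. The result approximately spans A's top-k range; PCA/SVD
        // then proceeds on the small k x n matrix Q^T * A.
        public static double[][] RangeBasis(double[][] A, int k, Random rng)
        {
            int m = A.Length, n = A[0].Length;
            var omega = new double[n][];
            for (int j = 0; j < n; j++)
            {
                omega[j] = new double[k];
                for (int c = 0; c < k; c++)
                    omega[j][c] = rng.NextDouble() * 2 - 1; // crude stand-in for Gaussian draws
            }
            var Y = new double[m][];
            for (int i = 0; i < m; i++)
            {
                Y[i] = new double[k];
                for (int j = 0; j < n; j++)
                    for (int c = 0; c < k; c++)
                        Y[i][c] += A[i][j] * omega[j][c];
            }
            for (int c = 0; c < k; c++) // modified Gram-Schmidt over the k columns
            {
                double norm = 0;
                for (int i = 0; i < m; i++) norm += Y[i][c] * Y[i][c];
                norm = Math.Sqrt(norm);
                for (int i = 0; i < m; i++) Y[i][c] /= norm;
                for (int c2 = c + 1; c2 < k; c2++)
                {
                    double dot = 0;
                    for (int i = 0; i < m; i++) dot += Y[i][c] * Y[i][c2];
                    for (int i = 0; i < m; i++) Y[i][c2] -= dot * Y[i][c];
                }
            }
            return Y; // columns are now orthonormal (Q)
        }
    }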

src/Microsoft.ML.StandardLearners/FactorizationMachine/doc.xml

Lines changed: 4 additions & 4 deletions
@@ -15,14 +15,14 @@
 <para>See references below for more details.
 This trainer is essentially faster the one introduced in [2] because of some implemtation tricks[3].
 </para>
-<list >
+<list type='bullet'>
 <item>
-[1] <description><a href='http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf'>Field-aware Factorization Machines for CTR Prediction</a></description></item>
+<description><a href='http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf'>Field-aware Factorization Machines for CTR Prediction</a></description></item>
 <item>
-[2] <description><a href='http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf'>Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</a></description>
+<description><a href='http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf'>Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</a></description>
 </item>
 <item>
-[3] <description><a href='https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf'>An Improved Stochastic Gradient Method for Training Large-scale Field-aware Factorization Machine.</a></description>
+<description><a href='https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf'>An Improved Stochastic Gradient Method for Training Large-scale Field-aware Factorization Machine.</a></description>
 </item>
 </list>
 </remarks>
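
For context on what this trainer fits: a field-aware factorization machine scores an example by pairing, for every two active features, the latent vector of each feature that is specific to the other feature's field (the first reference in the list above). A naive hedged sketch of that scoring rule, with an illustrative data layout:

    static class FfmScoreSketch
    {
        // phi(x) = bias + sum_j w[j]*x[j]
        //        + sum_{j1<j2} dot(v[j1][field(j2)], v[j2][field(j1)]) * x[j1]*x[j2]
        public static double Score(
            (int feature, int field, double value)[] x, // active features only
            double bias, double[] w, double[][][] v)    // v[feature][field]: latent vector
        {
            double score = bias;
            foreach (var (j, _, xj) in x)
                score += w[j] * xj;
            for (int a = 0; a < x.Length; a++)
                for (int b = a + 1; b < x.Length; b++)
                {
                    var va = v[x[a].feature][x[b].field]; // a's vector w.r.t. b's field
                    var vb = v[x[b].feature][x[a].field]; // b's vector w.r.t. a's field
                    double dot = 0;
                    for (int d = 0; d < va.Length; d++) dot += va[d] * vb[d];
                    score += dot * x[a].value * x[b].value;
                }
            return score;
        }
    }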

src/Microsoft.ML.StandardLearners/Standard/MultiClass/MultiClassNaiveBayesTrainer.cs

Lines changed: 2 additions & 2 deletions
@@ -123,8 +123,8 @@ public override MultiClassNaiveBayesPredictor Train(TrainContext context)
 Desc = "Train a MultiClassNaiveBayesTrainer.",
 UserName = UserName,
 ShortName = ShortName,
-XmlInclude = new[] { @"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/member[@name=""MultiClassNaiveBayesTrainer""]'/>",
-@"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/example[@name=""MultiClassNaiveBayesTrainer""]'/>" })]
+XmlInclude = new[] { @"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/member[@name=""MultiClassNaiveBayesTrainer""]/*'/>",
+@"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/example[@name=""MultiClassNaiveBayesTrainer""]/*'/>" })]
 public static CommonOutputs.MulticlassClassificationOutput TrainMultiClassNaiveBayesTrainer(IHostEnvironment env, Arguments input)
 {
 Contracts.CheckValue(env, nameof(env));

src/Microsoft.ML.StandardLearners/Standard/Online/doc.xml

Lines changed: 2 additions & 2 deletions
@@ -13,8 +13,8 @@
 and an option to update the weight vector using the average of the vectors seen over time (averaged argument is set to True by default).
 </remarks>
 </member>
-<example>
-<example name="OGD">
+<example name="OGD">
+<example>
 <code language="csharp">
 new OnlineGradientDescentRegressor
 {
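
The averaging option called out in these remarks (on by default) keeps the running mean of all per-step weight vectors and serves that as the final model. A hedged sketch for squared loss, not the trainer's actual code:

    static class AveragedOgdSketch
    {
        public static double[] Train((double[] x, double y)[] data, double learningRate)
        {
            int d = data[0].x.Length;
            var w = new double[d];      // current weights
            var wAvg = new double[d];   // running average served as the model
            for (int t = 0; t < data.Length; t++)
            {
                var (x, y) = data[t];
                double pred = 0;
                for (int i = 0; i < d; i++) pred += w[i] * x[i];
                double grad = pred - y; // gradient of 0.5*(pred - y)^2 w.r.t. pred
                for (int i = 0; i < d; i++)
                {
                    w[i] -= learningRate * grad * x[i];    // plain OGD step
                    wAvg[i] += (w[i] - wAvg[i]) / (t + 1); // incremental average
                }
            }
            return wAvg; // with averaged = false the trainer would serve w instead
        }
    }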

src/Microsoft.ML.StandardLearners/Standard/PoissonRegression/doc.xml

Lines changed: 2 additions & 2 deletions
@@ -12,8 +12,8 @@
 Assuming that the dependent variable follows a Poisson distribution, the parameters of the regressor can be estimated by maximizing the likelihood of the obtained observations.
 </remarks>
 </member>
-<example>
-<example name="PoissonRegression">
+<example name="PoissonRegression">
+<example>
 <code language="csharp">
 new PoissonRegressor
 {
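
Concretely, the maximum-likelihood estimation mentioned in these remarks has a simple gradient: with rate exp(w·x), the per-example log-likelihood y·(w·x) − exp(w·x) − log(y!) has gradient (y − exp(w·x))·x in w. A hedged single-step sketch:

    using System;

    static class PoissonStepSketch
    {
        // One gradient-ascent step on the Poisson log-likelihood of a single example.
        public static void Step(double[] w, double[] x, double y, double learningRate)
        {
            double dot = 0;
            for (int i = 0; i < w.Length; i++) dot += w[i] * x[i];
            double rate = Math.Exp(dot); // predicted Poisson rate
            for (int i = 0; i < w.Length; i++)
                w[i] += learningRate * (y - rate) * x[i]; // (y - rate) * x is the gradient
        }
    }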

src/Microsoft.ML.StandardLearners/Standard/doc.xml

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@
 In general, the larger the 'L2Const', the faster SDCA converges.
 </para>
 <para>For more information, see also:</para>
-<list>
+<list type='bullet'>
 <item><description>
 <a href='https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf'>Scaling Up Stochastic Dual Coordinate Ascent</a>.
 </description></item>
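
The L2Const remark reflects how SDCA steps are sized: the regularization constant appears in the denominator of each closed-form dual update. A hedged sketch of one coordinate update for squared loss, following the Shalev-Shwartz & Zhang formulation rather than ML.NET's implementation:

    static class SdcaStepSketch
    {
        // One dual coordinate update for loss 0.5*(w.x - y)^2 with L2 strength
        // lambda over n examples; maintains w = (1/(lambda*n)) * sum_i alpha[i]*x_i.
        public static void Step(double[] w, double[] alpha, double[] x, double y,
                                int i, double lambda, int n)
        {
            double dot = 0, norm2 = 0;
            for (int j = 0; j < w.Length; j++)
            {
                dot += w[j] * x[j];
                norm2 += x[j] * x[j];
            }
            double delta = (y - dot - alpha[i]) / (1 + norm2 / (lambda * n));
            alpha[i] += delta;
            for (int j = 0; j < w.Length; j++)
                w[j] += delta * x[j] / (lambda * n);
        }
    }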

src/Microsoft.ML.Transforms/EntryPoints/SelectFeatures.cs

Lines changed: 4 additions & 4 deletions
@@ -14,8 +14,8 @@ public static class SelectFeatures
 [TlcModule.EntryPoint(Name = "Transforms.FeatureSelectorByCount",
 Desc = CountFeatureSelectionTransform.Summary,
 UserName = CountFeatureSelectionTransform.UserName,
-XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""CountFeatureSelection""]'/>",
-@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""CountFeatureSelection""]'/>"})]
+XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""CountFeatureSelection""]/*'/>",
+@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""CountFeatureSelection""]/*'/>"})]
 public static CommonOutputs.TransformOutput CountSelect(IHostEnvironment env, CountFeatureSelectionTransform.Arguments input)
 {
 Contracts.CheckValue(env, nameof(env));
@@ -31,8 +31,8 @@ public static CommonOutputs.TransformOutput CountSelect(IHostEnvironment env, Co
 Desc = MutualInformationFeatureSelectionTransform.Summary,
 UserName = MutualInformationFeatureSelectionTransform.UserName,
 ShortName = MutualInformationFeatureSelectionTransform.ShortName,
-XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""MutualInformationFeatureSelection""]'/>",
-@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""MutualInformationFeatureSelection""]'/>"})]
+XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""MutualInformationFeatureSelection""]/*'/>",
+@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""MutualInformationFeatureSelection""]/*'/>"})]
 public static CommonOutputs.TransformOutput MutualInformationSelect(IHostEnvironment env, MutualInformationFeatureSelectionTransform.Arguments input)
 {
 Contracts.CheckValue(env, nameof(env));

src/Microsoft.ML.Transforms/MutualInformationFeatureSelection.cs

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@
 
 namespace Microsoft.ML.Runtime.Data
 {
-/// <include file='doc.xml' path='doc/members/member[@name="MutualInformationFeatureSelection"]' />
+/// <include file='doc.xml' path='doc/members/member[@name="MutualInformationFeatureSelection"]/*' />
 public static class MutualInformationFeatureSelectionTransform
 {
 public const string Summary =

0 commit comments