This repository was archived by the owner on Nov 16, 2023. It is now read-only.

More doc fixes #228

Merged 2 commits on Aug 15, 2019
16 changes: 8 additions & 8 deletions src/python/docs/docstrings/EnsembleClassifier.txt
@@ -30,14 +30,14 @@
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``ClassifierAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -51,9 +51,9 @@
or ``"LogLossReduction"``.


:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for classification:

* ``ClassifierAverage``: computes the average of the scores produced by
the trained models.
@@ -92,7 +92,7 @@
and ``0 <= b <= 1`` and ``b - a = 1``. This normalizer preserves
sparsity by mapping zero to zero.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
16 changes: 8 additions & 8 deletions src/python/docs/docstrings/EnsembleRegressor.txt
@@ -30,14 +30,14 @@
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``RegressorAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -51,9 +51,9 @@
``"RSquared"``.


:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for regression:

* ``RegressorAverage``: computes the average of the scores produced by
the trained models.
@@ -86,7 +86,7 @@
and ``0 <= b <= 1`` and ``b - a = 1``. This normalizer preserves
sparsity by mapping zero to zero.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
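The regression combiners named in the docstring above reduce the per-model predictions to a single value. A minimal sketch of two of them, using made-up predictions (not output from trained nimbusml models):

```python
# Sketch of two regression output combiners described above: given the
# predictions of several trained models, RegressorAverage takes their mean
# and RegressorMedian takes their median. The predictions are hypothetical.
import statistics

def regressor_average(predictions):
    # Mean of the scores produced by the trained models.
    return sum(predictions) / len(predictions)

def regressor_median(predictions):
    # Median of the scores; robust to a single outlier model.
    return statistics.median(predictions)

preds = [2.0, 2.5, 3.0, 10.0]  # one model is an outlier
avg = regressor_average(preds)  # 4.375
med = regressor_median(preds)   # 2.75
```

The outlier pulls the average to 4.375 while the median stays at 2.75, which is why a median combiner can be preferable when some sampled subsets yield poor models.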
10 changes: 4 additions & 6 deletions src/python/docs/docstrings/LinearSvmBinaryClassifier.txt
@@ -5,12 +5,10 @@
.. remarks::
Linear SVM implements an algorithm that finds a hyperplane in the
feature space for binary classification, by solving an SVM problem.
For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
prediction is given by determining what side of the hyperplane the
point falls into. That is the same as the sign of the feautures'
weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
algorithm, and *b* is the bias computed by the algorithm.
For instance, for a given feature vector, the prediction is given by
determining which side of the hyperplane the point falls on. That is
the same as the sign of the features' weighted sum plus the bias, where
both the weights and the bias are computed by the algorithm.

The algorithm implemented is the PEGASOS method, which alternates
between stochastic gradient descent steps and projection steps,
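The prediction rule in the rewritten remarks above can be sketched in a few lines: the predicted class is the sign of the features' weighted sum plus the bias. The weights and bias below are made-up values, not ones learned by PEGASOS:

```python
# Sketch of the linear SVM prediction rule described above: the class is
# the sign of the weighted sum of the features plus the bias. The weight
# vector and bias here are hypothetical, chosen only for illustration.
def predict(weights, bias, features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

w, b = [0.5, -1.0], 0.25
predict(w, b, [2.0, 0.5])  # 0.5*2.0 - 1.0*0.5 + 0.25 = 0.75 -> class 1
predict(w, b, [0.0, 1.0])  # 0.0 - 1.0 + 0.25 = -0.75 -> class -1
```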
16 changes: 8 additions & 8 deletions src/python/nimbusml/ensemble/ensembleclassifier.py
@@ -57,14 +57,14 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``ClassifierAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
``"AccuracyMicro"``, ``"AccuracyMacro"``, ``"LogLoss"``,
or ``"LogLossReduction"``.

:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for classification:

* ``ClassifierAverage``: computes the average of the scores produced by
the trained models.
@@ -123,7 +123,7 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
:param train_parallel: All the base learners will run asynchronously if the
value is true.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
16 changes: 8 additions & 8 deletions src/python/nimbusml/ensemble/ensembleregressor.py
@@ -57,14 +57,14 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``RegressorAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
can be ``"L1"``, ``"L2"``, ``"Rms"``, or ``"Loss"``, or
``"RSquared"``.

:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for regression:

* ``RegressorAverage``: computes the average of the scores produced by
the trained models.
@@ -117,7 +117,7 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
:param train_parallel: All the base learners will run asynchronously if the
value is true.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
16 changes: 8 additions & 8 deletions src/python/nimbusml/internal/core/ensemble/ensembleclassifier.py
@@ -57,14 +57,14 @@ class EnsembleClassifier(
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``ClassifierAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleClassifier(
``"AccuracyMicro"``, ``"AccuracyMacro"``, ``"LogLoss"``,
or ``"LogLossReduction"``.

:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for classification:

* ``ClassifierAverage``: computes the average of the scores produced by
the trained models.
@@ -123,7 +123,7 @@ class EnsembleClassifier(
:param train_parallel: All the base learners will run asynchronously if the
value is true.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
16 changes: 8 additions & 8 deletions src/python/nimbusml/internal/core/ensemble/ensembleregressor.py
@@ -55,14 +55,14 @@ class EnsembleRegressor(
* ``RandomFeatureSelector``: selects a random subset of the features
for each model.

:param num_models: indicates the number models to train, i.e. the number of
:param num_models: Indicates the number of models to train, i.e. the number of
subsets of the training set to sample. The default value is 50. If
batches are used then this indicates the number of models per batch.

:param sub_model_selector_type: Determines the efficient set of models the
``output_combiner`` uses, and removes the least significant models. This is
used to improve the accuracy and reduce the model size. This is also called
pruning.
``output_combiner`` uses, and removes the least significant models.
This is used to improve the accuracy and reduce the model size. This is
also called pruning.

* ``RegressorAllSelector``: does not perform any pruning and selects
all models in the ensemble to combine to create the output. This is
@@ -75,9 +75,9 @@ class EnsembleRegressor(
can be ``"L1"``, ``"L2"``, ``"Rms"``, or ``"Loss"``, or
``"RSquared"``.

:param output_combiner: indicates how to combine the predictions of the different
models into a single prediction. There are five available output
combiners for clasification:
:param output_combiner: Indicates how to combine the predictions of the
different models into a single prediction. There are five available
output combiners for regression:

* ``RegressorAverage``: computes the average of the scores produced by
the trained models.
@@ -115,7 +115,7 @@ class EnsembleRegressor(
:param train_parallel: All the base learners will run asynchronously if the
value is true.

:param batch_size: train the models iteratively on subsets of the training
:param batch_size: Train the models iteratively on subsets of the training
set of this size. When using this option, it is assumed that the
training set is randomized enough so that every batch is a random
sample of instances. The default value is -1, indicating using the
@@ -26,12 +26,10 @@ class LinearSvmBinaryClassifier(
.. remarks::
Linear SVM implements an algorithm that finds a hyperplane in the
feature space for binary classification, by solving an SVM problem.
For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
prediction is given by determining what side of the hyperplane the
point falls into. That is the same as the sign of the feautures'
weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
algorithm, and *b* is the bias computed by the algorithm.
For instance, for a given feature vector, the prediction is given by
determining which side of the hyperplane the point falls on. That is
the same as the sign of the features' weighted sum plus the bias, where
both the weights and the bias are computed by the algorithm.

The algorithm implemented is the PEGASOS method, which alternates
between stochastic gradient descent steps and projection steps,
10 changes: 4 additions & 6 deletions src/python/nimbusml/linear_model/linearsvmbinaryclassifier.py
@@ -29,12 +29,10 @@ class LinearSvmBinaryClassifier(
.. remarks::
Linear SVM implements an algorithm that finds a hyperplane in the
feature space for binary classification, by solving an SVM problem.
For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
prediction is given by determining what side of the hyperplane the
point falls into. That is the same as the sign of the feautures'
weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
algorithm, and *b* is the bias computed by the algorithm.
For instance, for a given feature vector, the prediction is given by
determining which side of the hyperplane the point falls on. That is
the same as the sign of the features' weighted sum plus the bias, where
both the weights and the bias are computed by the algorithm.

The algorithm implemented is the PEGASOS method, which alternates
between stochastic gradient descent steps and projection steps,