Commit 6972d6c

DOC minor doc fixes for sphinx. (scikit-learn#7357)

amueller authored and jnothman committed Sep 8, 2016
1 parent 680ab51 commit 6972d6c
Showing 11 changed files with 23 additions and 21 deletions.
4 changes: 2 additions & 2 deletions doc/modules/linear_model.rst
@@ -754,7 +754,7 @@ For large dataset, you may also consider using :class:`SGDClassifier` with 'log'

* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_path.py`

-  * :ref:`example_linear_model_plot_logistic_multinomial.py`
+  * :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_multinomial.py`

.. _liblinear_differences:

@@ -1118,7 +1118,7 @@ in the following ways.

.. topic:: Examples:

-  * :ref:`example_linear_model_plot_huber_vs_ridge.py`
+  * :ref:`sphx_glr_auto_examples_linear_model_plot_huber_vs_ridge.py`

.. topic:: References:

10 changes: 5 additions & 5 deletions doc/modules/mixture.rst
@@ -175,7 +175,7 @@ points.

.. topic:: Examples:

-  * See :ref:`plot_bayesian_gaussian_mixture.py` for a comparaison of
+  * See :ref:`sphx_glr_auto_examples_plot_bayesian_gaussian_mixture.py` for a comparaison of
the results of the ``BayesianGaussianMixture`` for different values
of the parameter ``dirichlet_concentration_prior``.

@@ -190,10 +190,10 @@ Pros
expectation-maximization solutions.

:Automatic selection: when `dirichlet_concentration_prior` is small enough and
-  `n_components` is larger than what is found necessary by the model, the
-  Variational Bayesian mixture model has a natural tendency to set some mixture
-  weights values close to zero. This makes it possible to let the model choose a
-  suitable number of effective components automatically.
+  `n_components` is larger than what is found necessary by the model, the
+  Variational Bayesian mixture model has a natural tendency to set some mixture
+  weights values close to zero. This makes it possible to let the model choose a
+  suitable number of effective components automatically.
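As context for the hunk above: the automatic-selection behavior it describes can be sketched on synthetic data. This sketch is not part of the commit, and note that released scikit-learn names the parameter ``weight_concentration_prior``, not the pre-release ``dirichlet_concentration_prior`` used in this diff:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated 1-D clusters, with n_components deliberately set too high.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-5, 1, (200, 1)), rng.normal(5, 1, (200, 1))])

# A small concentration prior pushes superfluous mixture weights toward zero,
# letting the model pick an effective number of components on its own.
bgm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=0.01,
                              max_iter=500, random_state=0).fit(X)

# Most of the five weights end up negligible; roughly two carry the mass.
effective = (bgm.weights_ > 0.1).sum()
```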

Cons
.....
2 changes: 1 addition & 1 deletion doc/modules/model_evaluation.rst
@@ -1083,7 +1083,7 @@ Here is a small example of usage of this function:::

.. topic:: Example:

-  * See :ref:`example_calibration_plot_calibration.py`
+  * See :ref:`sphx_glr_calibration_plot_calibration.py`
for an example of Brier score loss usage to perform probability
calibration of classifiers.
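The metric the relabeled example link points to is easy to exercise directly. A minimal sketch (not part of this commit; the probabilities are made up for illustration):

```python
from sklearn.metrics import brier_score_loss

y_true = [0, 1, 1, 0]
y_prob = [0.1, 0.9, 0.8, 0.3]

# Brier score is the mean squared difference between the predicted
# probability and the actual binary outcome; lower is better.
loss = brier_score_loss(y_true, y_prob)  # (0.01 + 0.01 + 0.04 + 0.09) / 4 = 0.0375
```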

4 changes: 3 additions & 1 deletion doc/testimonials/testimonials.rst
@@ -292,7 +292,9 @@ Greg Lamp, Co-founder Yhat
.. raw:: html

</span>
------------------------------------------

`Rangespan <https://www.rangespan.com>_`
----------------------------------------

.. raw:: html

3 changes: 0 additions & 3 deletions doc/tutorial/statistical_inference/finding_help.rst
@@ -19,9 +19,6 @@ Q&A communities with Machine Learning practitioners
also features some interesting discussions:
https://www.quora.com/topic/Machine-Learning

-Have a look at the best questions section, eg: `What are some
-good resources for learning about machine learning`_.

:Stack Exchange:

The Stack Exchange family of sites hosts `multiple subdomains for Machine Learning questions`_.
6 changes: 4 additions & 2 deletions doc/whats_new.rst
@@ -290,7 +290,7 @@ Enhancements
- Added support for substituting or disabling :class:`pipeline.Pipeline`
and :class:`pipeline.FeatureUnion` components using the ``set_params``
interface that powers :mod:`sklearn.grid_search`.
-  See :ref:`example_plot_compare_reduction.py`. By `Joel Nothman`_ and
+  See :ref:`sphx_glr_plot_compare_reduction.py`. By `Joel Nothman`_ and
`Robert McGibbon`_.

- Simplification of the ``clone`` function, deprecate support for estimators
@@ -395,7 +395,7 @@ Bug fixes
Oliveira <https://github.com/caioaao>`_.

- Fix :class:`linear_model.ElasticNet` sparse decision function to match
-  output with dense in the multioutput case.
+  output with dense in the multioutput case.

API changes summary
-------------------
@@ -4468,3 +4468,5 @@ David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.
.. _Mads Jensen: https://github.com/indianajensen

.. _Sebastián Vanrell: https://github.com/srvanrell
+
+.. _Robert McGibbon: https://github.com/rmcgibbo
1 change: 1 addition & 0 deletions sklearn/datasets/descr/breast_cancer.rst
@@ -30,6 +30,7 @@ Data Set Characteristics:
- WDBC-Benign

:Summary Statistics:
+
===================================== ====== ======
Min Max
===================================== ====== ======
Expand Down
2 changes: 1 addition & 1 deletion sklearn/decomposition/kernel_pca.py
@@ -100,7 +100,7 @@ class KernelPCA(BaseEstimator, TransformerMixin):
dual_coef_ : array, (n_samples, n_features)
Inverse transform matrix. If `fit_inverse_transform=False`,
-        dual_coef_ is not present.
+        ``dual_coef_`` is not present.
X_transformed_fit_ : array, (n_samples, n_components)
Projection of the fitted data on the kernel principal components.
4 changes: 2 additions & 2 deletions sklearn/decomposition/pca.py
@@ -183,7 +183,7 @@ class PCA(_BasePCA):
components_ : array, [n_components, n_features]
Principal axes in feature space, representing the directions of
maximum variance in the data. The components are sorted by
-        explained_variance_.
+        ``explained_variance_``.
explained_variance_ : array, [n_components]
The amount of variance explained by each of the selected components.
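The ordering this docstring describes is observable directly. A small sketch on synthetic anisotropic data (not part of this commit):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Columns with deliberately different spreads, so the principal
# components have clearly different variances.
X = rng.randn(100, 4) * np.array([10.0, 3.0, 1.0, 0.3])

pca = PCA(n_components=4).fit(X)
# explained_variance_ comes back sorted in decreasing order, and
# components_ rows follow the same ordering.
```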
@@ -514,7 +514,7 @@ def score(self, X, y=None):

@deprecated("RandomizedPCA was deprecated in 0.18 and will be removed in 0.20. "
"Use PCA(svd_solver='randomized') instead. The new implementation "
-            "DOES NOT store whiten components_. Apply transform to get them.")
+            "DOES NOT store whiten ``components_``. Apply transform to get them.")
class RandomizedPCA(BaseEstimator, TransformerMixin):
"""Principal component analysis (PCA) using randomized SVD
4 changes: 2 additions & 2 deletions sklearn/multioutput.py
@@ -147,8 +147,8 @@ def score(self, X, y, sample_weight=None):
predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
-        Note
-        ----
+        Notes
+        -----
R^2 is calculated by weighting all the targets equally using
`multioutput='uniform_average'`.
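The weighting this note describes can be checked against ``r2_score`` directly. A minimal sketch with made-up targets (not part of this commit):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
y_pred = np.array([[1.0, 12.0], [2.0, 18.0], [3.0, 33.0]])

per_target = r2_score(y_true, y_pred, multioutput='raw_values')
uniform = r2_score(y_true, y_pred, multioutput='uniform_average')
# 'uniform_average' is the plain mean of the per-target R^2 scores.
```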
4 changes: 2 additions & 2 deletions sklearn/preprocessing/data.py
@@ -933,7 +933,7 @@ class RobustScaler(BaseEstimator, TransformerMixin):
quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR
-        Quantile range used to calculate scale_
+        Quantile range used to calculate ``scale_``.
.. versionadded:: 0.18
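The relationship between ``quantile_range`` and ``scale_`` that this docstring line documents can be sketched as follows (synthetic data, not part of this commit):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # one gross outlier

# With the default (25.0, 75.0) range, scale_ is the interquartile range,
# so the outlier has no effect on the scaling.
scaler = RobustScaler(quantile_range=(25.0, 75.0)).fit(X)
iqr = np.percentile(X[:, 0], 75) - np.percentile(X[:, 0], 25)
```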
@@ -1101,7 +1101,7 @@ def robust_scale(X, axis=0, with_centering=True, with_scaling=True,
quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR
-        Quantile range used to calculate scale_
+        Quantile range used to calculate ``scale_``.
.. versionadded:: 0.18
