
Commit ee3bf0e

Fix typos (#839)
1 parent 6625d0c commit ee3bf0e

13 files changed, +17 −17 lines changed

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -39,7 +39,7 @@ $ make notebooks/02_numerical_pipeline_scaling.ipynb
 - when saving the notebook inside Jupyter it will actually write to the `.py` file
 
 In our experience, this workflow is less convenient (Visual Studio Code is a
-nicer developping environment) and also it tends to add some not very important
+nicer developing environment) and also it tends to add some not very important
 (and different on everyone's machine) metadata changes in the `.py` file, for
 example about jupytext version, Jupyter kernel, Python version, etc ...
 
@@ -101,7 +101,7 @@ make full-index
 ## JupyterBook
 
 JupyterBook is the tool we use to generate our .github.io website from our
-`.py` and `.md` files (note than `.ipynb` files are not used in our JupyterBook
+`.py` and `.md` files (note that `.ipynb` files are not used in our JupyterBook
 setup).
 
 ```
````

notebooks/01_tabular_data_exploration.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -102,7 +102,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"An alternative is to omit the `head` method. This would output the intial and\n",
+"An alternative is to omit the `head` method. This would output the initial and\n",
 "final rows and columns, but everything in between is not shown by default. It\n",
 "also provides the dataframe's dimensions at the bottom in the format `n_rows`\n",
 "x `n_columns`."
```

notebooks/linear_models_sol_03.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -98,7 +98,7 @@
 "Since we scaled the features, the coefficients of the linear model can be\n",
 "meaningful compared directly. `\"capital-gain\"` is the most impacting feature.\n",
 "Just be aware not to draw conclusions on the causal effect provided the impact\n",
-"of a feature. Interested readers are refered to the [example on Common\n",
+"of a feature. Interested readers are referred to the [example on Common\n",
 "pitfalls in the interpretation of coefficients of linear\n",
 "models](https://scikit-learn.org/stable/auto_examples/inspection/plot_linear_model_coefficient_interpretation.html)\n",
 "or the [example on Failure of Machine Learning to infer causal\n",
```

notebooks/logistic_regression.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -390,7 +390,7 @@
 "function](https://en.wikipedia.org/wiki/Softmax_function) to make predictions.\n",
 "Giving more details on that scenario is beyond the scope of this MOOC.\n",
 "\n",
-"In any case, interested users are refered to the [scikit-learn user guide](\n",
+"In any case, interested users are referred to the [scikit-learn user guide](\n",
 "https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\n",
 "for a more mathematical description of the `predict_proba` method of the\n",
 "`LogisticRegression` and the respective normalization functions."
```

notebooks/parameter_tuning_grid_search.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -451,7 +451,7 @@
 "In this notebook we have seen:\n",
 "\n",
 "* how to optimize the hyperparameters of a predictive model via a grid-search;\n",
-"* that searching for more than two hyperparamters is too costly;\n",
+"* that searching for more than two hyperparameters is too costly;\n",
 "* that a grid-search does not necessarily find an optimal solution."
 ]
 }
```
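The summary bullet about grid-search cost reflects that the number of fits grows multiplicatively with each hyperparameter added to the grid. A minimal sketch (the model, dataset, and parameter values are illustrative, not taken from this commit):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# The grid size is the product of the value lists, so each extra
# hyperparameter multiplies the work: 3 * 3 grid points * 5 CV folds
# already means 45 fits.
param_grid = {
    "learning_rate": [0.01, 0.1, 1.0],
    "max_leaf_nodes": [3, 10, 30],
}
search = GridSearchCV(HistGradientBoostingClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```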

notebooks/trees_classification.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,7 +9,7 @@
 "In this notebook we illustrate decision trees in a multiclass classification\n",
 "problem by using the penguins dataset with 2 features and 3 classes.\n",
 "\n",
-"For the sake of simplicity, we focus the discussion on the hyperparamter\n",
+"For the sake of simplicity, we focus the discussion on the hyperparameter\n",
 "`max_depth`, which controls the maximal depth of the decision tree."
 ]
 },
```
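The fixed sentence concerns the `max_depth` hyperparameter of a decision tree. A minimal sketch of its effect (iris with 2 features stands in for the penguins data used in the notebook):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for the penguins setup: 3 classes, 2 features.
X, y = load_iris(return_X_y=True)
X = X[:, :2]

# max_depth bounds how many successive splits the tree may make; deeper
# trees fit more complex boundaries but are more prone to overfitting.
for depth in [1, 2, 5, None]:
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5)
    print(f"max_depth={depth}: mean accuracy {scores.mean():.3f}")
```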

plan.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -54,7 +54,7 @@ Features and samples
 A few words about the style and scope of this course: it is centered
 around code, though we strive to keep it simple
 
-## Quizz:
+## Quiz:
 Given a case study (e.g. pricing apartments based on a real estate website database) and sample toy dataset: say whether it’s an application of supervised vs unsupervised, classification vs regression, what are the features, what is the target variable, what is a record.
 
 Propose a hand engineer decision rule that can be used as a baseline
@@ -92,7 +92,7 @@ Simple exploratory data analysis with pandas and matplotlib
 
 ### Content
 
-Prepare a train / test split 
+Prepare a train / test split
 
 Basic model on numerical features only
 
@@ -113,7 +113,7 @@ Model fitting and performance evaluation with cross-validation
 ## Notebook module #3: basic parameter tuning and final test score evaluation
 
 ### Learning objectives:
-- Learn to no trust blindly the default parameters of scikit-learn estimators
+- Learn not to trust blindly the default parameters of scikit-learn estimators
 
 ### Content
 Parameter tuning with Grid and Random hyperparameter search
```
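The plan's "Prepare a train / test split" item maps to scikit-learn's `train_test_split`; a minimal sketch on a bundled dataset (the dataset choice is illustrative, not from the course plan):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 25% of the samples for the final evaluation; the model is
# fit on the training part only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
```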

python_scripts/01_tabular_data_exploration.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -71,7 +71,7 @@
 adult_census.head()
 
 # %% [markdown]
-# An alternative is to omit the `head` method. This would output the intial and
+# An alternative is to omit the `head` method. This would output the initial and
 # final rows and columns, but everything in between is not shown by default. It
 # also provides the dataframe's dimensions at the bottom in the format `n_rows`
 # x `n_columns`.
```

python_scripts/linear_models_sol_03.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -66,7 +66,7 @@
 # Since we scaled the features, the coefficients of the linear model can be
 # meaningful compared directly. `"capital-gain"` is the most impacting feature.
 # Just be aware not to draw conclusions on the causal effect provided the impact
-# of a feature. Interested readers are refered to the [example on Common
+# of a feature. Interested readers are referred to the [example on Common
 # pitfalls in the interpretation of coefficients of linear
 # models](https://scikit-learn.org/stable/auto_examples/inspection/plot_linear_model_coefficient_interpretation.html)
 # or the [example on Failure of Machine Learning to infer causal
```

python_scripts/logistic_regression.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -266,7 +266,7 @@
 # function](https://en.wikipedia.org/wiki/Softmax_function) to make predictions.
 # Giving more details on that scenario is beyond the scope of this MOOC.
 #
-# In any case, interested users are refered to the [scikit-learn user guide](
+# In any case, interested users are referred to the [scikit-learn user guide](
 # https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)
 # for a more mathematical description of the `predict_proba` method of the
 # `LogisticRegression` and the respective normalization functions.
```
