Commit d9193da: Fix typos discovered by codespell (#225)
1 parent 13cc84d, commit d9193da
File tree: 5 files changed (+9, -9 lines)

.github/workflows/ci-repo2docker.yaml
Lines changed: 3 additions & 3 deletions

@@ -17,9 +17,9 @@ jobs:
       - name: Install repo2docker
         run: |
           python -m pip install --upgrade pip
-          # Explicity adding `six` as a workaround for https://github.com/docker/docker-py/pull/2844
-          # Explicity adding `chardet` as a workaround for https://github.com/jupyterhub/repo2docker/issues/1065
+          # Explicitly adding `six` as a workaround for https://github.com/docker/docker-py/pull/2844
+          # Explicitly adding `chardet` as a workaround for https://github.com/jupyterhub/repo2docker/issues/1065
           python -m pip install jupyter-repo2docker six chardet

       - name: Build dask-tutorial Docker image
-        run: jupyter-repo2docker --no-run --debug .
+        run: jupyter-repo2docker --no-run --debug .

03_array.ipynb
Lines changed: 2 additions & 2 deletions

@@ -509,7 +509,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "There is 2GB of somewhat artifical weather data in HDF5 files in `data/weather-big/*.hdf5`. We'll use the `h5py` library to interact with this data and `dask.array` to compute on it.\n",
+ "There is 2GB of somewhat artificial weather data in HDF5 files in `data/weather-big/*.hdf5`. We'll use the `h5py` library to interact with this data and `dask.array` to compute on it.\n",
  "\n",
  "Our goal is to visualize the average temperature on the surface of the Earth for this month. This will require a mean over all of this data. We'll do this in the following steps\n",
  "\n",
@@ -814,7 +814,7 @@
  " mat = (diff*diff).sum(-1)\n",
  " return mat\n",
  "\n",
- "# the lj function is evaluated over the upper traingle\n",
+ "# the lj function is evaluated over the upper triangle\n",
  "# after removing distances near zero\n",
  "def potential(cluster):\n",
  " d2 = distances(cluster)\n",
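The `03_array.ipynb` text above describes taking a mean over more weather data than fits in memory. `dask.array` does this by reducing each chunk separately and combining the partial results; a minimal pure-Python sketch of that two-step reduction (the chunk values here are invented, standing in for the HDF5 datasets):

```python
# Each chunk is reduced to a (sum, count) pair, then the pairs are combined;
# no chunk ever has to be held alongside the others.
chunks = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]

partials = [(sum(c), len(c)) for c in chunks]   # per-chunk reduction
total = sum(s for s, _ in partials)
count = sum(n for _, n in partials)
mean = total / count
print(mean)  # 3.5
```

This is why a mean parallelizes cleanly: the combine step only sees one small pair per chunk.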

04_dataframe.ipynb
Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@
  "\n",
  "# Dask DataFrames\n",
  "\n",
- "We finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.\n",
+ "We finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.\n",
  "\n",
  "In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `\"data/nycflights/*.csv\"` and build parallel computations on all of our data at once.\n",
  "\n",
@@ -81,7 +81,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "We create artifical data."
+ "We create artificial data."
  ]
 },
 {
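The `04_dataframe.ipynb` text above notes that `dask.dataframe.read_csv` accepts a globstring like `"data/nycflights/*.csv"` and treats all matching files as one table. A stdlib sketch of that expand-and-concatenate idea (the directory, filenames, and rows below are invented for illustration):

```python
import csv
import glob
import os
import tempfile

# Write two small CSV parts into a temporary directory (made-up data).
tmpdir = tempfile.mkdtemp()
for i, rows in enumerate([[("a", 1)], [("b", 2)]]):
    with open(os.path.join(tmpdir, f"part-{i}.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "value"])
        writer.writerows(rows)

def read_csv_glob(pattern):
    """Expand the globstring and read every matching CSV into one list of dict rows."""
    records = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            records.extend(csv.DictReader(f))
    return records

table = read_csv_glob(os.path.join(tmpdir, "*.csv"))
print(len(table))  # 2
```

Dask goes further than this sketch: each file becomes a lazily-read partition rather than being loaded eagerly.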

06_distributed_advanced.ipynb
Lines changed: 1 addition & 1 deletion

@@ -474,7 +474,7 @@
  "from dask.distributed import as_completed\n",
  "from random import uniform\n",
  "\n",
- "scale = 5 # Intial random perturbation scale\n",
+ "scale = 5 # Initial random perturbation scale\n",
  "best_point = (0, 0) # Initial guess\n",
  "best_score = float('inf') # Best score so far\n",
  "startx = [uniform(-scale, scale) for _ in range(10)]\n",
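The `06_distributed_advanced.ipynb` cell above sets up a random search driven by `dask.distributed`'s `as_completed`. The stdlib `concurrent.futures` exposes the same consume-as-they-finish pattern locally; this sketch uses a toy objective function, not the notebook's:

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def score(point):
    """Toy objective: squared distance from the origin (lower is better)."""
    x, y = point
    return x * x + y * y

scale = 5                      # initial random perturbation scale
best_score = float("inf")      # best score so far

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [
        pool.submit(score, (random.uniform(-scale, scale),
                            random.uniform(-scale, scale)))
        for _ in range(10)
    ]
    # as_completed yields each future as soon as it finishes, in completion
    # order, so the loop can react to early results without waiting for all.
    for fut in as_completed(futures):
        best_score = min(best_score, fut.result())

print(best_score <= 2 * scale * scale)  # True: every sample is within bounds
```

The dask version is structurally identical but runs `score` on cluster workers instead of local threads.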

Homework.ipynb
Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@
  "**Try the following:**\n",
  "\n",
  "* Use `dask.bag` to inspect the data\n",
- "* Combine `dask.bag` with `nltk` or `gensim` to perform textual analyis on the data\n",
+ "* Combine `dask.bag` with `nltk` or `gensim` to perform textual analysis on the data\n",
  "* Reproduce the work of [Daniel Rodriguez](https://extrapolations.dev/blog/2015/07/reproduceit-reddit-word-count-dask/) and see if you can improve upon his speeds when analyzing this data."
  ]
 },
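The `Homework.ipynb` suggestion above pairs `dask.bag` with text analysis such as word counting. The core bag pattern is mapping a tokenizer over records and folding the token frequencies; a stdlib sketch over invented toy documents:

```python
from collections import Counter

# Two made-up "documents" standing in for the Reddit comment records.
docs = ["to be or not to be", "to be is to do"]

counts = Counter()
for doc in docs:                 # the bag.map(str.split) step ...
    counts.update(doc.split())   # ... folded like bag.frequencies()
print(counts["to"])  # 4
```

With `dask.bag`, the same map and fold run per-partition in parallel, and the per-partition `Counter`s are merged at the end.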
