Merge pull request cgpotts#121 from ryanprince/Minor-spelling-correction-

Minor spelling correction.
cgpotts authored Mar 29, 2023
2 parents b9925de + 5da7e8a commit 81f217a
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -34,7 +34,7 @@ Reference implementations for the `torch_*.py` models, designed to reveal more a

## `vsm_*` and `hw_wordrelatedness.ipynb`

- A until on vector space models of meaning, covering traditional methods like PMI and LSA as well as newer methods like Autoencoders and GloVe. `vsm.py` provides a lot of the core functionality, and `torch_glove.py` and `torch_autoencoder.py` are the learned models that we cover. `vsm_03_retroffiting.ipynb` is an extension that uses `retrofitting.py`, and `vsm_04_contextualreps.ipynb` explores methods for deriving static representations from contextual models.
+ A unit on vector space models of meaning, covering traditional methods like PMI and LSA as well as newer methods like Autoencoders and GloVe. `vsm.py` provides a lot of the core functionality, and `torch_glove.py` and `torch_autoencoder.py` are the learned models that we cover. `vsm_03_retroffiting.ipynb` is an extension that uses `retrofitting.py`, and `vsm_04_contextualreps.ipynb` explores methods for deriving static representations from contextual models.


## `sst_*` and `hw_sst.ipynb`
4 changes: 2 additions & 2 deletions hw_sentiment.ipynb
@@ -428,7 +428,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Here we merge the sotmax and RNN experiments into a single DataFrame:"
+ "Here we merge the softmax and RNN experiments into a single DataFrame:"
]
},
{
@@ -578,7 +578,7 @@
"\n",
"1. Take as inputs (a) a model training wrapper like `fit_softmax_classifier` and (b) an integer `bakeoff_train_size` specifying the number of examples from `bakeoff_dev` that should be included in the train set.\n",
"1. Split `bakeoff_dev` so that the first `bakeoff_train_size` examples are in the train set and the rest are used for evaluation.\n",
- "1. Use `sst.experiment` with the user-supplied model training wrapper, `unigram_phi` as defined above, and a train set that consists of SST-3 train and the train portion of `bakeoff_dev` as defined in step 2. The value of `assess_dataframes` should be a list consisting of the SST-3 dev set and the evaluation portion of `bakeoff_dev` as defined in step 2.\n",
+ "1. Use `sst.experiment` with the user-supplied model training wrapper, `unigrams_phi` as defined above, and a train set that consists of SST-3 train and the train portion of `bakeoff_dev` as defined in step 2. The value of `assess_dataframes` should be a list consisting of the SST-3 dev set and the evaluation portion of `bakeoff_dev` as defined in step 2.\n",
"1. Return the return value of `sst.experiment`.\n",
"\n",
"The function `test_run_mixed_training_experiment` will help you iterate to the required design."
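The four numbered steps in this notebook cell can be sketched as follows. This is a minimal illustration of the split-and-merge logic in steps 2 and 3 only, assuming plain lists stand in for the dataframes; `sst.experiment`, `fit_softmax_classifier`, and `unigrams_phi` live in the course repository and are not reproduced here, and the helper names `split_bakeoff_dev` and `build_mixed_datasets` are hypothetical.

```python
def split_bakeoff_dev(bakeoff_dev, bakeoff_train_size):
    # Step 2: the first `bakeoff_train_size` examples go into the
    # train portion; the remainder is held out for evaluation.
    bakeoff_train = bakeoff_dev[:bakeoff_train_size]
    bakeoff_assess = bakeoff_dev[bakeoff_train_size:]
    return bakeoff_train, bakeoff_assess


def build_mixed_datasets(sst3_train, sst3_dev, bakeoff_dev, bakeoff_train_size):
    # Step 3: the train set is SST-3 train plus the train portion of
    # `bakeoff_dev`; the assessment sets are the SST-3 dev set and the
    # held-out portion of `bakeoff_dev`, in that order.
    bakeoff_train, bakeoff_assess = split_bakeoff_dev(
        bakeoff_dev, bakeoff_train_size)
    train = sst3_train + bakeoff_train
    assess_dataframes = [sst3_dev, bakeoff_assess]
    return train, assess_dataframes
```

In the actual solution, `train` and `assess_dataframes` would be passed to `sst.experiment` along with the user-supplied training wrapper and featurizer, and its return value returned unchanged (step 4).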
