24    24       "id": "rjZMYxFoznj9"
25    25     },
26    26     "source": [
27         -   "This is a tutorial on using Ignite to train neural network models, setup experiments and validate models.\n",
      27   +   "This is a tutorial on using Ignite to train neural network models, set up experiments and validate models.\n",
28    28     "\n",
29    29     "In this experiment, we'll be replicating [\n",
30    30     "Convolutional Neural Networks for Sentence Classification by Yoon Kim](https://arxiv.org/abs/1408.5882)! This paper uses CNN for text classification, a task typically reserved for RNNs, Logistic Regression, Naive Bayes.\n",
132   132    "id": "Q22BGKi8znkF"
133   133    },
134   134    "source": [
135        -   "`Ignite` is a High-level library to help with training neural networks in PyTorch. It comes with an `Engine` to setup a training loop, various metrics, handlers and a helpful contrib section! \n",
      135  +   "`Ignite` is a high-level library to help with training neural networks in PyTorch. It comes with an `Engine` to set up a training loop, various metrics, handlers and a helpful contrib section! \n",
136   136    "\n",
137   137    "Below we import the following:\n",
138   138    "* **Engine**: Runs a given process_function over each batch of a dataset, emitting events as it goes.\n",
284   284    "Let's explore a data sample to see what it looks like.\n",
285   285    "Each data sample is a tuple of the format `(label, text)`.\n",
286   286    "\n",
287        -   "The value of label can is either 'pos' or 'neg'.\n"
      287  +   "The value of label is either 'pos' or 'neg'.\n"
288   288    ]
289   289    },
290   290    {
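A sample in the `(label, text)` format described above might look like this (the example values are made up):

```python
# Hypothetical sample in the (label, text) format described above.
sample = ("pos", ["what", "an", "amazing", "movie"])
label, text = sample

assert label in ("pos", "neg")
print(label)           # pos
print(" ".join(text))  # what an amazing movie
```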
465   465    "id": "d2azJGL6znkM"
466   466    },
467   467    "source": [
468        -   "Let's actually explore what the output of the iterator is, this way we'll know what the input of the model is, how to compare the label to the output and how to setup are process_functions for Ignite's `Engine`.\n",
      468  +   "Let's actually explore what the output of the iterator is, this way we'll know what the input of the model is, how to compare the label to the output and how to set up our process_functions for Ignite's `Engine`.\n",
469   469    "* `batch[0][0]` is the label of a single example. We can see that `vocab.stoi` was used to map the label that was originally text into a float.\n",
470   470    "* `batch[1][0]` is the text of a single example. Similar to label, `vocab.stoi` was used to convert each token of the example's text into indices.\n",
471   471    "\n",
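The `vocab.stoi` lookups described in those bullets can be illustrated with a toy vocabulary (the tokens and indices here are invented for the sketch, not taken from the real dataset):

```python
# Toy stand-ins for torchtext's vocab.stoi lookup tables.
stoi = {"<unk>": 0, "the": 1, "movie": 2, "was": 3, "great": 4}
label_stoi = {"neg": 0, "pos": 1}

tokens = "the movie was great".split()
# Map each token to its index, falling back to <unk> for unseen words.
token_ids = [stoi.get(tok, stoi["<unk>"]) for tok in tokens]
# Map the text label to a float, as the batch exploration shows.
label = float(label_stoi["pos"])

print(token_ids)  # [1, 2, 3, 4]
print(label)      # 1.0
```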
604   604    "id": "D7nH55oXznkP"
605   605    },
606   606    "source": [
607        -   "Below we create an instance of the TextCNN model and load embeddings in **static** mode. The model is placed on a device and then a loss function of Binary Cross Entropy and Adam optimizer are setup. "
      607  +   "Below we create an instance of the TextCNN model and load embeddings in **static** mode. The model is placed on a device and then a loss function of Binary Cross Entropy and Adam optimizer are set up. "
608   608    ]
609   609    },
610   610    {
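A minimal sketch of that setup, with a toy stand-in for the TextCNN (all layer sizes here are hypothetical; "static" mode is shown by freezing the embedding weights):

```python
import torch
import torch.nn as nn
import torch.optim as optim

class TinyTextCNN(nn.Module):
    """Toy stand-in for the tutorial's TextCNN (hypothetical sizes)."""
    def __init__(self, vocab_size=100, embed_dim=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.embedding.weight.requires_grad = False  # "static" mode: embeddings frozen
        self.conv = nn.Conv1d(embed_dim, 4, kernel_size=3)
        self.fc = nn.Linear(4, 1)

    def forward(self, x):                            # x: (batch, seq_len) token indices
        e = self.embedding(x).transpose(1, 2)        # (batch, embed_dim, seq_len)
        c = torch.relu(self.conv(e)).max(dim=2).values
        return torch.sigmoid(self.fc(c)).squeeze(1)  # (batch,) probabilities

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyTextCNN().to(device)
criterion = nn.BCELoss()
# Only pass trainable parameters to Adam, since the embeddings are frozen.
optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```

Since the model ends in a sigmoid, `BCELoss` can compare its output directly against the 0/1 float labels.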
692   692    "source": [
693   693    "### Evaluator Engine - process_function\n",
694   694    "\n",
695        -   "Similar to the training process function, we setup a function to evaluate a single batch. Here is what the eval_function does:\n",
      695  +   "Similar to the training process function, we set up a function to evaluate a single batch. Here is what the eval_function does:\n",
696   696    "\n",
697   697    "* Sets model in eval mode.\n",
698        -   "* Generates x and y from batch.\n",
699   698    "* With torch.no_grad(), no gradients are calculated for any succeeding steps.\n",
      699  +   "* Generates x and y from batch.\n",
700   700    "* Performs a forward pass on the model to calculate y_pred based on model and x.\n",
701   701    "* Returns y_pred and y.\n",
702   702    "\n",
857   857    "source": [
858   858    "### EarlyStopping - Tracking Validation Loss\n",
859   859    "\n",
860        -   "Now we'll setup a Early Stopping handler for this training process. EarlyStopping requires a score_function that allows the user to define whatever criteria to stop trainig. In this case, if the loss of the validation set does not decrease in 5 epochs, the training process will stop early. "
      860  +   "Now we'll set up an Early Stopping handler for this training process. EarlyStopping requires a score_function that allows the user to define whatever criteria to stop training. In this case, if the loss of the validation set does not decrease in 5 epochs, the training process will stop early. "
861   861    ]
862   862    },
863   863    {