# [WIP] Intermediate Tutorial #30
**Draft** · romesco wants to merge 10 commits into `main` from `examples/mnist_01`
## Commits
All 10 commits are by romesco:

- `8f52a85` WIP intermediate example
- `c5c8613` experimenting with incomplete model config
- `36b53ab` possible model config strategy
- `18e5455` hot swappable optimizer/scheduler
- `5c78f34` configure dataloaders
- `a7838f7` finish dataset/dataloader/transforms
- `95fd318` example updates
- `890dcd9` pull master
- `3d6f3ea` Reduce ambiguity, mnistconf -> toplvlconf
- `07757ae` remove unused imports
## Files changed
### File 1 (modified)

```diff
@@ -1,5 +1,4 @@
 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# flake8: noqa
 from __future__ import print_function
 import torch
 import torch.nn as nn
@@ -8,18 +7,18 @@
 from torch.optim import Adadelta
 from torch.optim.lr_scheduler import StepLR
 
-###### HYDRA BLOCK ######
+###### HYDRA BLOCK ######  # noqa: E266
 import hydra
 from hydra.core.config_store import ConfigStore
 from dataclasses import dataclass
 
-# hydra-torch structured config imports
+# hydra-torch structured config imports:
 from hydra_configs.torch.optim import AdadeltaConf
 from hydra_configs.torch.optim.lr_scheduler import StepLRConf
 
 
 @dataclass
-class MNISTConf:
+class TopLvlConf:
     batch_size: int = 64
     test_batch_size: int = 1000
     epochs: int = 14
@@ -32,13 +31,13 @@ class MNISTConf:
     adadelta: AdadeltaConf = AdadeltaConf()
     steplr: StepLRConf = StepLRConf(
         step_size=1
-    )  # we pass a default for step_size since it is required, but missing a default in PyTorch (and consequently in hydra-torch)
+    )  # we pass a default for step_size since it is required, but missing a default in PyTorch (and consequently in hydra-torch)  # noqa: E501
 
 
 cs = ConfigStore.instance()
-cs.store(name="mnistconf", node=MNISTConf)
+cs.store(name="toplvlconf", node=TopLvlConf)
 
-###### / HYDRA BLOCK ######
+###### / HYDRA BLOCK ######  # noqa: E266
 
 
 class Net(nn.Module):
@@ -118,9 +117,9 @@ def test(model, device, test_loader):
     )
 
 
-@hydra.main(config_name="mnistconf")
+@hydra.main(config_name="toplvlconf")  # DIFF
 def main(cfg):  # DIFF
-    print(cfg.pretty())
+    print(cfg.pretty())  # DIFF
     use_cuda = not cfg.no_cuda and torch.cuda.is_available()  # DIFF
     torch.manual_seed(cfg.seed)  # DIFF
     device = torch.device("cuda" if use_cuda else "cpu")
```

Review comment on the `# noqa: E501` line: *Just break the comment into multiple lines?*
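One way to act on that suggestion would be to wrap the explanation across comment lines instead of suppressing E501. A sketch (not part of the PR):

```python
from dataclasses import dataclass

from hydra_configs.torch.optim.lr_scheduler import StepLRConf


@dataclass
class TopLvlConf:
    # We pass a default for step_size since it is required,
    # but missing a default in PyTorch (and consequently in hydra-torch).
    steplr: StepLRConf = StepLRConf(step_size=1)
```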
### File 2 (new file: tutorial notes, 12 lines)
## Leftovers

Our Hydra-specific imports fall into two areas. First, since we define configs in this file, we need access to the following:

- typing from both `typing` and `omegaconf`

**[OmegaConf]** is an external library that Hydra is built around. Every config object is a data structure defined by OmegaConf. For our purposes, we use it to specify typing and special constants such as [`MISSING`] when no value is specified.
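A minimal sketch of how `MISSING` behaves in a structured config (class and field names here are illustrative, not from `hydra-torch`):

```python
from dataclasses import dataclass

from omegaconf import MISSING, OmegaConf


@dataclass
class SchedulerSketchConf:
    # step_size is required by the underlying class, so the schema carries
    # no default; accessing it before it is set raises an error.
    step_size: int = MISSING
    gamma: float = 0.1  # a "good default", mirrored from PyTorch


conf = OmegaConf.structured(SchedulerSketchConf)
print(OmegaConf.is_missing(conf, "step_size"))  # True
conf.step_size = 1  # must be provided before the value is read
```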
#### Config Schema
*our config templates, providing type checking and good defaults*

Second, we import two [config schema] from `hydra-torch`. Think of config schema as recommended templates for commonly used configurations. `hydra-torch` provides config schema for a large subset of common PyTorch classes. In the basic tutorial, we only consider the schema for two PyTorch classes:
- `Adadelta`, which resides in `torch.optim`
- `StepLR`, which resides in `torch.optim.lr_scheduler`
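As a rough picture of what such a schema provides, here is an abbreviated sketch; the field sets are illustrative (see the `hydra_configs` source for the real definitions), and the `_target_` convention is what Hydra's `instantiate()` uses to locate the class to build:

```python
from dataclasses import dataclass

from omegaconf import MISSING


@dataclass
class AdadeltaSketchConf:
    _target_: str = "torch.optim.Adadelta"  # class that instantiate() builds
    lr: float = 1.0  # PyTorch's own defaults are mirrored here
    rho: float = 0.9
    eps: float = 1e-6
    weight_decay: float = 0.0


@dataclass
class StepLRSketchConf:
    _target_: str = "torch.optim.lr_scheduler.StepLR"
    step_size: int = MISSING  # required by StepLR, so no default is provided
    gamma: float = 0.1
```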
### File 3 (new file: the intermediate example script, 208 lines)
```python
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

###### HYDRA BLOCK ######  # noqa: E266
import hydra
from hydra.utils import instantiate
from hydra.core.config_store import ConfigStore
from typing import Any
from dataclasses import dataclass

# hydra-torch structured config imports:
from hydra_configs.torch.optim import AdadeltaConf
from hydra_configs.torch.optim.lr_scheduler import StepLRConf
from hydra_configs.torch.utils.data.dataloader import DataLoaderConf

from hydra_configs.torchvision.datasets.mnist import MNISTConf

# NOTE:^ Above still uses hydra_configs namespace, but comes from .torchvision package


@dataclass
class MNISTNetConf:
    conv1_out_channels: int = 32
    conv2_out_channels: int = 64
    maxpool1_kernel_size: int = 2
    dropout1_prob: float = 0.25
    dropout2_prob: float = 0.5
    fc_hidden_features: int = 128


@dataclass
class TopLvlConf:
    epochs: int = 14
    no_cuda: bool = False
    dry_run: bool = False
    seed: int = 1
    log_interval: int = 10
    save_model: bool = False
    checkpoint_name: str = "unnamed.pt"
    train_dataloader: DataLoaderConf = DataLoaderConf(
        batch_size=64, shuffle=True, num_workers=1, pin_memory=False
    )
    test_dataloader: DataLoaderConf = DataLoaderConf(
        batch_size=1000, shuffle=False, num_workers=1
    )
    train_dataset: MNISTConf = MNISTConf(root="../data", train=True, download=True)
    test_dataset: MNISTConf = MNISTConf(root="../data", train=False, download=True)
    model: MNISTNetConf = MNISTNetConf()
    optim: Any = AdadeltaConf()
    scheduler: Any = StepLRConf(step_size=1)


cs = ConfigStore.instance()
cs.store(name="toplvlconf", node=TopLvlConf)

###### / HYDRA BLOCK ######  # noqa: E266


class Net(nn.Module):
    # DIFF: new model definition with configurable params
    def __init__(self, input_shape, output_shape, cfg):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, cfg.model.conv1_out_channels, 3, 1)
        self.conv2 = nn.Conv2d(
            cfg.model.conv1_out_channels, cfg.model.conv2_out_channels, 3, 1
        )
        self.dropout1 = nn.Dropout2d(cfg.model.dropout1_prob)
        self.dropout2 = nn.Dropout2d(cfg.model.dropout2_prob)
        self.maxpool1 = nn.MaxPool2d(cfg.model.maxpool1_kernel_size)

        conv_out_shape = self._compute_conv_out_shape(input_shape)
        linear_in_shape = conv_out_shape.numel()

        self.fc1 = nn.Linear(linear_in_shape, cfg.model.fc_hidden_features)
        self.fc2 = nn.Linear(cfg.model.fc_hidden_features, output_shape[1])

    # /DIFF

    # DIFF: new utility method (incidental, not critical)
    def _compute_conv_out_shape(self, input_shape):
        dummy_input = torch.zeros(input_shape).unsqueeze(0)
        with torch.no_grad():
            x = self.conv1(dummy_input)
            x = self.conv2(x)
            dummy_output = self.maxpool1(x)
        return dummy_output.shape

    # /DIFF

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.maxpool1(x)  # DIFF
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print(
                "Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(
                    epoch,
                    batch_idx * len(data),
                    len(train_loader.dataset),
                    100.0 * batch_idx / len(train_loader),
                    loss.item(),
                )
            )
            if args.dry_run:
                break


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(
                output, target, reduction="sum"
            ).item()  # sum up batch loss
            pred = output.argmax(
                dim=1, keepdim=True
            )  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print(
        "\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n".format(
            test_loss,
            correct,
            len(test_loader.dataset),
            100.0 * correct / len(test_loader.dataset),
        )
    )


@hydra.main(config_name="toplvlconf")
def main(cfg):
    print(cfg.pretty())
    use_cuda = not cfg.no_cuda and torch.cuda.is_available()
    torch.manual_seed(cfg.seed)
    device = torch.device("cuda" if use_cuda else "cpu")

    # DIFF: the following are removed as they are now in DataloaderConf
    # train_kwargs = {"batch_size": cfg.batch_size}
    # test_kwargs = {"batch_size": cfg.test_batch_size}
    # if use_cuda:
    #     cuda_kwargs = {"num_workers": 1, "pin_memory": True, "shuffle": True}
    #     train_kwargs.update(cuda_kwargs)
    #     test_kwargs.update(cuda_kwargs)
    # /DIFF

    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
    )
    # DIFF: hotswap enabled datasets with fixed transforms
    train_dataset = instantiate(cfg.train_dataset, transform=transform)
    test_dataset = instantiate(cfg.test_dataset, transform=transform)
    train_loader = instantiate(cfg.train_dataloader, dataset=train_dataset)
    test_loader = instantiate(cfg.test_dataloader, dataset=test_dataset)
    # /DIFF

    # DIFF: explicit I/O, configurable model
    input_shape = (1, 28, 28)
    output_shape = (1, 10)
    model = Net(input_shape, output_shape, cfg).to(device)
    # /DIFF

    # DIFF: hotswap enabled optimizer/scheduler
    optimizer = instantiate(cfg.optim, params=model.parameters())
    scheduler = instantiate(cfg.scheduler, optimizer=optimizer)
    # /DIFF

    for epoch in range(1, cfg.epochs + 1):
        train(cfg, model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
        scheduler.step()

    if cfg.save_model:
        torch.save(model.state_dict(), cfg.checkpoint_name)


if __name__ == "__main__":
    main()
```
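Since every knob now lives in the config, behavior can be changed from the command line without editing the code. A few hypothetical invocations (assuming the script is saved as `mnist_01.py`, per the branch name):

```bash
# override scalar defaults, including nested fields of the config schema
python mnist_01.py epochs=5 dry_run=true
python mnist_01.py optim.lr=0.5 scheduler.gamma=0.7
python mnist_01.py train_dataloader.batch_size=128 train_dataloader.num_workers=4
```

And because `optim` and `scheduler` are typed as `Any` and built with `instantiate()`, swapping the default node for another schema (say, `SGDConf` from `hydra_configs.torch.optim`) is enough to change the optimizer, which is the "hot swappable optimizer/scheduler" idea from the commit history.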
## Review discussion

A reviewer commented:

> I don't think this HYDRA BLOCK is a good idea. Imports should be sorted normally. We can call out in the text what things are specific to Hydra. I wouldn't like to see people copying this HYDRA BLOCK as if it's somehow needed.

romesco replied:

> Hmm ok. I was thinking it's nice to fence off the code so it is clear what the diffs are. Maybe I can instead annotate it as a 'DIFF' block.
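If the fencing does move to 'DIFF' annotations, it would presumably follow the convention the new example file already uses; a sketch (illustrative):

```python
# DIFF: hydra-specific imports, sorted normally with the rest
import hydra
from hydra.core.config_store import ConfigStore
# /DIFF
```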