CI: split tests-examples #990

Merged · 38 commits · Mar 25, 2020

Changes from all commits (38 commits):
77872f8  CI: split tests-examples (Borda, Mar 1, 2020)
482699c  tests without template (Borda, Mar 1, 2020)
be6001b  comment depends (Borda, Mar 1, 2020)
06a43aa  CircleCI typo (Borda, Mar 1, 2020)
888cb3f  add doctest (Borda, Mar 1, 2020)
92841a1  update test req. (Borda, Mar 1, 2020)
756ad4c  CI tests (Borda, Mar 1, 2020)
a7b5aef  setup macOS (Borda, Mar 1, 2020)
7cc3775  longer train (Borda, Mar 2, 2020)
2800e28  lover pred acc (Borda, Mar 2, 2020)
e6b54e7  fix model (Borda, Mar 13, 2020)
58d435e  rename default model (Borda, Mar 13, 2020)
1eb63b4  lower tests acc (Borda, Mar 13, 2020)
6e19ed8  typo (Borda, Mar 14, 2020)
ae11861  imports (Borda, Mar 15, 2020)
0e355be  fix test optimizer (Borda, Mar 16, 2020)
72abd2e  update calls (Borda, Mar 16, 2020)
66e4710  fix Win (Borda, Mar 16, 2020)
faecdd5  lower Drone image (Borda, Mar 16, 2020)
c281d3c  fix call (Borda, Mar 16, 2020)
ce0c699  pytorch image (Borda, Mar 19, 2020)
a5def6b  fix test (Borda, Mar 19, 2020)
7d0e28c  add dev image (Borda, Mar 19, 2020)
da8aa35  add dev image (Borda, Mar 19, 2020)
8520263  update image (Borda, Mar 19, 2020)
d846049  drone volume (Borda, Mar 19, 2020)
d940ce4  lint (Borda, Mar 19, 2020)
181f591  update test notes (Borda, Mar 19, 2020)
044191c  rename tests/models >> tests/base (Borda, Mar 19, 2020)
bc6fa27  group models (Borda, Mar 19, 2020)
fe315f0  conftest (Borda, Mar 19, 2020)
3ff54d0  optim imports (Borda, Mar 19, 2020)
4707e80  typos (Borda, Mar 19, 2020)
45e51c8  fix import (Borda, Mar 19, 2020)
21b6e31  fix tests (Borda, Mar 20, 2020)
16d9b4b  install AMP (Borda, Mar 20, 2020)
860aa4e  tests (Borda, Mar 20, 2020)
a2303d7  fix import (Borda, Mar 24, 2020)
25 changes: 21 additions & 4 deletions .circleci/config.yml
@@ -11,7 +11,6 @@ references:
name: Install Dependences
command: |
pip install "$TORCH_VERSION" --user
-# this is temporal fix til test-tube is not merged and released
pip install -r requirements.txt --user
sudo pip install pytest pytest-cov pytest-flake8
pip install -r ./tests/requirements.txt --user
@@ -21,7 +20,16 @@
name: Testing
command: |
python --version ; pip --version ; pip list
-py.test pytorch_lightning tests pl_examples -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
+py.test pytorch_lightning tests -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
no_output_timeout: 15m

examples: &examples
run:
name: PL Examples
command: |
pip install -r ./pl_examples/requirements.txt --user
python --version ; pip --version ; pip list
py.test pl_examples -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
no_output_timeout: 15m

install_pkg: &install_pkg
@@ -84,10 +92,8 @@ jobs:
- TORCH_VERSION: "torch"
steps: &steps
- checkout

- *install_deps
- *tests

- store_test_results:
path: test-reports
- store_artifacts:
@@ -121,6 +127,16 @@ jobs:
- TORCH_VERSION: "torch>=1.4, <1.5"
steps: *steps

Examples:
docker:
- image: circleci/python:3.7
environment:
- TORCH_VERSION: "torch"
steps:
- checkout
- *install_deps
- *examples

Install-pkg:
docker:
- image: circleci/python:3.7
Expand All @@ -141,3 +157,4 @@ workflows:
- PyTorch-v1.3
- PyTorch-v1.4
- Install-pkg
- Examples
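With examples split into their own CircleCI job, the `--doctest-modules` flag makes docstring examples (such as the one added to the template below) run as tests. As a rough illustration of what that collection amounts to, here is a sketch using the stdlib `doctest` module; the module path is assumed from the repo layout and the snippet is illustrative, not part of this PR:

```python
import doctest

# module path assumed from this repository's layout; illustrative only
from pl_examples.basic_examples import lightning_module_template

# roughly what `py.test --doctest-modules` does for each collected module
results = doctest.testmod(lightning_module_template, verbose=False)
print(f"{results.attempted} doctest(s) run, {results.failed} failure(s)")
```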
6 changes: 3 additions & 3 deletions .drone.yml
@@ -6,7 +6,7 @@ name: torch-GPU

steps:
- name: testing
-image: nvcr.io/nvidia/pytorch:20.02-py3
+image: pytorch/pytorch:1.4-cuda10.1-cudnn7-runtime
environment:
SLURM_LOCALID: 0
CODECOV_TOKEN:
@@ -16,12 +16,12 @@ steps:
- pip install pip -U
- pip --version
- nvidia-smi
-#- pip install torch==1.3
+- bash ./tests/install_AMP.sh
- pip install -r requirements.txt --user
- pip install coverage pytest pytest-cov pytest-flake8 codecov
- pip install -r ./tests/requirements.txt --user
- pip list
- python -c "import torch ; print(' & '.join([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]) if torch.cuda.is_available() else 'only CPU')"
-- coverage run --source pytorch_lightning -m py.test pytorch_lightning tests pl_examples -v --doctest-modules # --flake8
+- coverage run --source pytorch_lightning -m py.test pytorch_lightning tests -v --doctest-modules # --flake8
- coverage report
- codecov --token $CODECOV_TOKEN # --pr $DRONE_PULL_REQUEST --build $DRONE_BUILD_NUMBER --branch $DRONE_BRANCH --commit $DRONE_COMMIT --tag $DRONE_TAG
10 changes: 8 additions & 2 deletions .github/workflows/ci-testing.yml
@@ -23,7 +23,7 @@ jobs:
python-version: [3.6, 3.7]
requires: ['minimal', 'latest']

-# https://stackoverflow.com/a/59076067/4521646
+# Timeout: https://stackoverflow.com/a/59076067/4521646
timeout-minutes: 20
steps:
- uses: actions/checkout@v2
@@ -32,6 +32,12 @@
with:
python-version: ${{ matrix.python-version }}

# Github Actions: Run step on specific OS: https://stackoverflow.com/a/57948488/4521646
- name: Setup macOS
if: runner.os == 'macOS'
run: |
brew install libomp # https://github.com/pytorch/pytorch/issues/20030

- name: Set min. dependencies
if: matrix.requires == 'minimal'
run: |
@@ -71,7 +77,7 @@ jobs:
run: |
# tox --sitepackages
# flake8 .
-coverage run --source pytorch_lightning -m py.test pytorch_lightning tests pl_examples -v --doctest-modules --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}.xml
+coverage run --source pytorch_lightning -m py.test pytorch_lightning tests -v --doctest-modules --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}.xml
coverage report

- name: Upload pytest test results
2 changes: 0 additions & 2 deletions .markdownlint.yml

This file was deleted.

4 changes: 2 additions & 2 deletions .run_local_tests.sh
@@ -12,5 +12,5 @@ rm -rf ./tests/cometruns*
rm -rf ./tests/wandb*
rm -rf ./tests/tests/*
rm -rf ./lightning_logs
-coverage run --source pytorch_lightning -m py.test pytorch_lightning tests pl_examples -v --doctest-modules --flake8
-coverage report -m
+python -m coverage run --source pytorch_lightning -m py.test pytorch_lightning tests pl_examples -v --doctest-modules --flake8
+python -m coverage report -m
2 changes: 2 additions & 0 deletions environment.yml
@@ -1,4 +1,6 @@
# This is Conda environment file
# Usage: `conda env update -f environment.yml`

channels:
- conda-forge
- pytorch
Expand Down
19 changes: 18 additions & 1 deletion pl_examples/basic_examples/lightning_module_template.py
@@ -19,7 +19,24 @@

class LightningTemplateModel(LightningModule):
"""
-Sample model to show how to define a template
+Sample model to show how to define a template.

Example:

>>> # define simple Net for MNIST dataset
>>> params = dict(
... drop_prob=0.2,
... batch_size=2,
... in_features=28 * 28,
... learning_rate=0.001 * 8,
... optimizer_name='adam',
... data_root='./datasets',
... out_features=10,
... hidden_dim=1000,
... )
>>> from argparse import Namespace
>>> hparams = Namespace(**params)
>>> model = LightningTemplateModel(hparams)
[Review comment, Contributor]
you can just pass in a dict now... no need to cast as Namespace

[Reply, @Borda (Member, Author), Mar 24, 2020]
Here it is used in `__init__` to read some values, so it has to be a `Namespace`.
I will prepare a follow-up PR that adds an `hparams` setter/getter accepting a dict as well.
"""

def __init__(self, hparams):
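To make the dict-vs-Namespace point above concrete: the template reads hyperparameters as attributes (e.g. `self.hparams.in_features`) inside `__init__`, which a plain dict does not support. Below is a minimal sketch of the setter/getter idea mentioned for the follow-up PR; this is hypothetical code, not anything in this change:

```python
from argparse import Namespace


class HparamsContainer:
    """Hypothetical sketch: an hparams setter that accepts a dict or a Namespace."""

    @property
    def hparams(self):
        return self._hparams

    @hparams.setter
    def hparams(self, hp):
        if isinstance(hp, dict):  # normalize a plain dict to attribute access
            hp = Namespace(**hp)
        self._hparams = hp


holder = HparamsContainer()
holder.hparams = {'learning_rate': 0.001 * 8, 'batch_size': 2}
assert holder.hparams.learning_rate == 0.008  # attribute access preserved
```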
@@ -9,9 +9,9 @@ class UNet(nn.Module):
Link - https://arxiv.org/abs/1505.04597

Parameters:
-num_classes (int) - Number of output classes required (default 19 for KITTI dataset)
-bilinear (bool) - Whether to use bilinear interpolation or transposed
-convolutions for upsampling.
+num_classes (int) - Number of output classes required (default 19 for KITTI dataset)
+bilinear (bool) - Whether to use bilinear interpolation or transposed
+convolutions for upsampling.
'''

def __init__(self, num_classes=19, bilinear=False):
2 changes: 1 addition & 1 deletion pytorch_lightning/core/lightning.py
@@ -7,8 +7,8 @@
from typing import Any, Callable, Dict, List, Optional, Tuple, Union

import torch
-from torch import Tensor
import torch.distributed as torch_distrib
+from torch import Tensor
from torch.nn.parallel import DistributedDataParallel
from torch.optim import Adam
from torch.optim.optimizer import Optimizer
2 changes: 2 additions & 0 deletions requirements-extra.txt
@@ -1,3 +1,5 @@
# extended list of package dependencies to reach full functionality

neptune-client>=0.4.4
comet-ml>=1.0.56
mlflow>=1.0.0
2 changes: 2 additions & 0 deletions requirements.txt
@@ -1,3 +1,5 @@
# the default package dependencies

tqdm>=4.41.0
numpy>=1.16.4
torch>=1.1
10 changes: 8 additions & 2 deletions setup.cfg
@@ -5,7 +5,7 @@ norecursedirs =
build
python_files =
test_*.py
-doctest_plus = disabled
+# doctest_plus = disabled
addopts = --strict
markers =
slow
@@ -41,7 +41,7 @@ ignore =
# setup.cfg or tox.ini
[check-manifest]
ignore =
-.travis.yml
+*.yml
tox.ini
.github
.github/*
@@ -51,3 +51,9 @@ ignore =
license_file = LICENSE
# long_description = file:README.md
# long_description_content_type = text/markdown

[pydocstyle]
convention = pep257
# D104, D107: Ignore missing docstrings in __init__ files and methods.
# D202: Ignore a blank line after docstring (collision with Python Black in decorators)
add-ignore = D104, D107, D202
7 changes: 7 additions & 0 deletions tests/Dockerfile
@@ -0,0 +1,7 @@
ARG TORCH_VERSION=1.4
ARG CUDA_VERSION=10.1

FROM pytorch/pytorch:${TORCH_VERSION}-cuda${CUDA_VERSION}-cudnn7-runtime

# Install AMP
RUN bash ./tests/install_AMP.sh
8 changes: 3 additions & 5 deletions tests/README.md
@@ -13,8 +13,8 @@ To run all tests do the following:
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

-# install module locally
-pip install -e .
+# install AMP support
+bash tests/install_AMP.sh

# install dev deps
pip install -r tests/requirements.txt
@@ -36,15 +36,13 @@ Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex.
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under tests/requirements.txt)
-pip install coverage
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v --doctest-modules

# print coverage stats
coverage report -m

-# exporting resulys
+# exporting results
coverage xml
codecov -t 17327163-8cca-4a5d-86c8-ca5f2ef700bc -v
```


57 changes: 57 additions & 0 deletions tests/base/__init__.py
@@ -0,0 +1,57 @@
"""Models for testing."""

import torch

from tests.base.models import TestModelBase, DictHparamsModel
from tests.base.mixins import (
LightEmptyTestStep,
LightValidationStepMixin,
LightValidationMixin,
LightValidationStepMultipleDataloadersMixin,
LightValidationMultipleDataloadersMixin,
LightTestStepMixin,
LightTestMixin,
LightTestStepMultipleDataloadersMixin,
LightTestMultipleDataloadersMixin,
LightTestFitSingleTestDataloadersMixin,
LightTestFitMultipleTestDataloadersMixin,
LightValStepFitSingleDataloaderMixin,
LightValStepFitMultipleDataloadersMixin,
LightTrainDataloader,
LightTestDataloader,
LightInfTrainDataloader,
LightInfValDataloader,
LightInfTestDataloader,
LightTestOptimizerWithSchedulingMixin,
LightTestMultipleOptimizersWithSchedulingMixin,
LightTestOptimizersWithMixedSchedulingMixin,
LightTestReduceLROnPlateauMixin
)


class LightningTestModel(LightTrainDataloader,
LightValidationMixin,
LightTestMixin,
TestModelBase):
"""Most common test case. Validation and test dataloaders."""

def on_training_metrics(self, logs):
logs['some_tensor_to_test'] = torch.rand(1)


class LightningTestModelWithoutHyperparametersArg(LightningTestModel):
""" without hparams argument in constructor """

def __init__(self):
import tests.base.utils as tutils

# the user loads the hparams in some other way
hparams = tutils.get_default_hparams()
super().__init__(hparams)


class LightningTestModelWithUnusedHyperparametersArg(LightningTestModelWithoutHyperparametersArg):
""" has hparams argument in constructor but is not used """

def __init__(self, hparams):
super().__init__()
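The mixins above are meant to be composed freely with `TestModelBase`, in the same way `LightningTestModel` is built. As a hypothetical example using only names already imported in this module, a variant exercising multiple validation dataloaders could be declared like this:

```python
class LightningTestModelMultipleValDataloaders(LightTrainDataloader,
                                               LightValidationMultipleDataloadersMixin,
                                               TestModelBase):
    """Hypothetical variant: train dataloader plus multiple validation dataloaders."""
```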
2 changes: 1 addition & 1 deletion tests/models/debug.py → tests/base/debug.py
@@ -7,7 +7,7 @@


# from test_models import assert_ok_test_acc, load_model, \
-# clear_save_dir, get_test_tube_logger, get_hparams, init_save_dir, \
+# clear_save_dir, get_default_testtube_logger, get_default_hparams, init_save_dir, \
# init_checkpoint_callback, reset_seed, set_random_master_port


File renamed without changes.
40 changes: 3 additions & 37 deletions tests/models/base.py → tests/base/models.py
@@ -1,5 +1,6 @@
import os
from collections import OrderedDict
+from typing import Dict

import torch
import torch.nn as nn
@@ -8,7 +9,6 @@
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
-from typing import Dict

try:
from test_tube import HyperOptArgumentParser
@@ -174,9 +174,8 @@ def configure_optimizers(self):
optimizer = optim.LBFGS(self.parameters(), lr=self.hparams.learning_rate)
else:
optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)

-# test returning only 1 list instead of 2
-return optimizer
+scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
+return [optimizer], [scheduler]

def prepare_data(self):
transform = transforms.Compose([transforms.ToTensor(),
@@ -201,36 +200,3 @@ def _dataloader(self, train):
)

return loader

@staticmethod
def add_model_specific_args(parent_parser, root_dir): # pragma: no-cover
"""
Parameters you define here will be available to your model through self.hparams
:param parent_parser:
:param root_dir:
:return:
"""
parser = HyperOptArgumentParser(strategy=parent_parser.strategy, parents=[parent_parser])

# param overwrites
# parser.set_defaults(gradient_clip_val=5.0)

# network params
parser.opt_list('--drop_prob', default=0.2, options=[0.2, 0.5], type=float, tunable=False)
parser.add_argument('--in_features', default=28 * 28, type=int)
parser.add_argument('--out_features', default=10, type=int)
# use 500 for CPU, 50000 for GPU to see speed difference
parser.add_argument('--hidden_dim', default=50000, type=int)
# data
parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)
# training params (opt)
parser.opt_list('--learning_rate', default=0.001 * 8, type=float,
options=[0.0001, 0.0005, 0.001, 0.005], tunable=False)
parser.opt_list('--optimizer_name', default='adam', type=str,
options=['adam'], tunable=False)
# if using 2 nodes with 4 gpus each the batch size here
# (256) will be 256 / (2*8) = 16 per gpu
parser.opt_list('--batch_size', default=256 * 8, type=int,
options=[32, 64, 128, 256], tunable=False,
help='batch size will be divided over all GPUs being used across all nodes')
return parser
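The `configure_optimizers` change above switches the base test model from returning a bare optimizer to the two-list form (optimizers, schedulers) that Lightning also accepts. A self-contained sketch of that contract, with an illustrative stand-in module that is not part of this PR:

```python
import torch.nn as nn
import torch.optim as optim


class TinyModel(nn.Module):
    """Illustrative stand-in for a LightningModule."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=0.001)
        scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
        # one list per optimizer, one per scheduler
        return [optimizer], [scheduler]


optimizers, schedulers = TinyModel().configure_optimizers()
assert len(optimizers) == 1 and len(schedulers) == 1
```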