moving monai-minimal prototype code to the main repo #20


Merged 1 commit on Jan 10, 2020
13 changes: 13 additions & 0 deletions .flake8
@@ -0,0 +1,13 @@
[flake8]
select = B,C,E,F,N,P,T4,W,B9
max-line-length = 120
# C408 ignored because we like the dict keyword argument syntax
# E501 is not flexible enough, we're using B950 instead
ignore =
E203,E305,E402,E501,E721,E741,F403,F405,F821,F841,F999,W503,W504,C408,E302,W291,E303,
# these ignores are from flake8-bugbear; please fix!
B007,B008,
# these ignores are from flake8-comprehensions; please fix!
C400,C401,C402,C403,C404,C405,C407,C411,C413,C414,C415,C416
per-file-ignores = __init__.py: F401
exclude = docs/src,*.pyi,.git
31 changes: 31 additions & 0 deletions .github/workflows/pythonapp.yml
@@ -0,0 +1,31 @@
name: Python application

on: [push]

jobs:
build:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v1
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Lint with flake8
run: |
pip install flake8
pip install pep8-naming
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings.
flake8 . --count --statistics
# - name: Test with pytest
# run: |
# pip install pytest
# pytest
105 changes: 105 additions & 0 deletions .gitignore
@@ -0,0 +1,105 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
examples/scd_lvsegs.npz
295 changes: 295 additions & 0 deletions examples/cardiac_segmentation.ipynb

Large diffs are not rendered by default.

14 changes: 14 additions & 0 deletions monai/README.md
@@ -0,0 +1,14 @@

# monai

The framework is divided into a few large modules:

* **application**: Contains the `NetworkManager` type, monitors, and utilities

* **data**: Contains the data stream definition types and example datasets

* **networks**: Contains network definitions, component definitions, and PyTorch-specific utilities

* **test**: Contains the unit tests for the framework

* **utils**: Contains utility modules for namespace aliasing, automatic module loading, and other facilities
13 changes: 13 additions & 0 deletions monai/__init__.py
@@ -0,0 +1,13 @@
import os, sys
from .utils.moduleutils import loadSubmodules


__copyright__ = "(c) 2019 MONAI Consortium"
__version__tuple__ = (0, 0, 1)
__version__ = "%i.%i.%i" % (__version__tuple__)

__basedir__ = os.path.dirname(__file__)


loadSubmodules(sys.modules[__name__], False) # load directory modules only, skip loading individual files
loadSubmodules(sys.modules[__name__], True) # load all modules, this will trigger all export decorations
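The version string above is built by applying the version tuple to a format string, which fills each `%i` slot in order. A standalone sketch of the same idea (names here are illustrative, not the module's API):

```python
# A three-element tuple applied with % fills the three %i slots in order
version_tuple = (0, 0, 1)
version = "%i.%i.%i" % version_tuple
print(version)  # → 0.0.1
```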
13 changes: 13 additions & 0 deletions monai/application/README.md
@@ -0,0 +1,13 @@

# Application

Contains the code for managing the training process. This includes the `NetworkManager` and its subclasses, monitors,
and other facilities.

* **config**: This has a few facilities for configuration and diagnostic output.

* **metrics**: Defines metric tracking types.

* **handlers**: Defines handlers for implementing functionality at various stages in the training process.

* **engine**: Will eventually contain Engine-derived classes for extending Ignite behaviour.
2 changes: 2 additions & 0 deletions monai/application/__init__.py
@@ -0,0 +1,2 @@


2 changes: 2 additions & 0 deletions monai/application/config/__init__.py
@@ -0,0 +1,2 @@


37 changes: 37 additions & 0 deletions monai/application/config/deviceconfig.py
@@ -0,0 +1,37 @@
import os, sys
from collections import OrderedDict
import monai
import numpy as np
import torch

try:
import ignite
ignite_version=ignite.__version__
except ImportError:
ignite_version='NOT INSTALLED'

export = monai.utils.export("monai.application.config")


@export
def getConfigValues():
output = OrderedDict()

output["MONAI version"] = monai.__version__
output["Python version"] = sys.version.replace("\n", " ")
output["Numpy version"] = np.version.full_version
output["Pytorch version"] = torch.__version__
output["Ignite version"] = ignite_version

return output


@export
def printConfig(file=sys.stdout):
for kv in getConfigValues().items():
print("%s: %s" % kv, file=file, flush=True)


@export
def setVisibleDevices(*devInds):
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, devInds))
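The `setVisibleDevices` helper only sets an environment variable that CUDA reads at initialization. A self-contained sketch of the same idea (the snake_case name is illustrative, not the module's API):

```python
import os

def set_visible_devices(*dev_inds):
    # Restrict CUDA to the listed GPU indices by setting the
    # comma-separated CUDA_VISIBLE_DEVICES environment variable
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, dev_inds))

set_visible_devices(0, 2)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0,2
```

Note this must run before the first CUDA call in the process; once the driver is initialized, changing the variable has no effect.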
2 changes: 2 additions & 0 deletions monai/application/engine/__init__.py
@@ -0,0 +1,2 @@


Empty file.
29 changes: 29 additions & 0 deletions monai/application/handlers/metric_logger.py
@@ -0,0 +1,29 @@
from collections import defaultdict

import monai


@monai.utils.export("monai.application.handlers")
@monai.utils.alias("metriclogger")
class MetricLogger:
def __init__(self,loss_transform=lambda x:x, metric_transform=lambda x:x):
self.loss_transform=loss_transform
self.metric_transform=metric_transform
self.loss=[]
self.metrics=defaultdict(list)

def attach(self,engine):
return engine.add_event_handler(monai.application.engine.Events.ITERATION_COMPLETED,self)

def __call__(self,engine):
self.loss.append(self.loss_transform(engine.state.output))

for m,v in engine.state.metrics.items():
v=self.metric_transform(v)
# # metrics may not be added on the first timestep, pad the list if this is the case
# # so that each metric list is the same length as self.loss
# if len(self.metrics[m])==0:
# self.metrics[m].append([v[0]]*len(self.loss))

self.metrics[m].append(v)

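The handler above relies on Ignite's event machinery, but its accumulation logic can be sketched standalone with a fake engine state (the `SimpleNamespace` stand-in is an assumption for illustration, not Ignite's actual `Engine`):

```python
from collections import defaultdict
from types import SimpleNamespace

class MetricLogger:
    """Standalone sketch of the handler above, without the Ignite dependency."""
    def __init__(self, loss_transform=lambda x: x, metric_transform=lambda x: x):
        self.loss_transform = loss_transform
        self.metric_transform = metric_transform
        self.loss = []
        self.metrics = defaultdict(list)

    def __call__(self, engine):
        # Invoked once per completed iteration: record the (transformed)
        # loss output and every named metric for this step
        self.loss.append(self.loss_transform(engine.state.output))
        for m, v in engine.state.metrics.items():
            self.metrics[m].append(self.metric_transform(v))

# Simulate two completed iterations with fake engine states
logger = MetricLogger()
for step, loss in enumerate([0.9, 0.5], start=1):
    fake_engine = SimpleNamespace(
        state=SimpleNamespace(output=loss, metrics={"acc": 0.25 * step}))
    logger(fake_engine)

print(logger.loss)           # → [0.9, 0.5]
print(dict(logger.metrics))  # → {'acc': [0.25, 0.5]}
```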
Empty file.
50 changes: 50 additions & 0 deletions monai/data/README.md
@@ -0,0 +1,50 @@

# Data

This implements the data stream classes and contains a few example datasets. Data streams are iterables which produce
single data items, or batches thereof, usually from source iterables. Chaining these together is how data pipelines are
implemented in the framework. Data augmentation routines are also provided here, which can be applied to data items as
they pass through the stream, either singly or in parallel.

For example, the following stream reads image/segmentation pairs from `imSrc` (any iterable), applies augmentations
that convert the array format and perform simple transforms (rotation, transposing, flipping, shifting) using multiple
threads, and wraps the whole stream in a buffering thread stream:

```python
def normalizeImg(im,seg):
im=utils.arrayutils.rescaleArray(im)
im=im[None].astype(np.float32)
seg=seg[None].astype(np.int32)
return im, seg

augs=[
normalizeImg,
augments.rot90,
augments.transpose,
augments.flip,
partial(augments.shift,dimFract=5,order=0,nonzeroIndex=1),
]

src=data.augments.augmentstream.ThreadAugmentStream(imSrc,200,augments=augs)
src=data.streams.ThreadBufferStream(src)
```

In this code, `src` will yield batches of 200 images, produced in a separate thread, when iterated over. This can be
fed directly into a `NetworkManager` class as its `src` parameter.

Module breakdown:

* **augments**: Contains definitions and stream types for doing data augmentation. An augment is simply a callable which
accepts one or more NumPy arrays and returns the augmented result. The provided decorators add probability and other
facilities to a function.

* **readers**: Subclasses of `DataStream` for reading data from arrays and various file formats.

* **streams**: Contains the definitions of the stream classes which implement a number of operations on streams. The
root of the stream classes is `DataStream` which provides a very simple iterable facility. It iterates over its `src`
member, passes each item into its `generate()` generator method and yields each resulting value. This allows subclasses
to implement `generate` to modify data as it moves through the stream. The `streamgen` decorator is provided to simplify
this by being applied to a generator function to fill this role in a new object. Other subclasses implement buffering,
batching, merging from multiple sources, cycling between sources, prefetching, and fetching data from the source in a
separate thread.
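The `DataStream`/`generate()` pattern described above can be sketched minimally (a simplified illustration under assumptions, not the framework's actual implementation):

```python
class DataStream:
    """Minimal sketch of the stream pattern: iterate over src, pass each
    item through the generate() generator, and yield every result."""
    def __init__(self, src):
        self.src = src

    def generate(self, item):
        # Base behaviour is pass-through; subclasses override this
        # generator to transform or expand each item
        yield item

    def __iter__(self):
        for item in self.src:
            yield from self.generate(item)

class DoublingStream(DataStream):
    """Example subclass: modify each item as it moves through the stream."""
    def generate(self, item):
        yield item * 2

# Chaining streams forms a pipeline, as described above
stream = DoublingStream(DataStream([1, 2, 3]))
print(list(stream))  # → [2, 4, 6]
```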

2 changes: 2 additions & 0 deletions monai/data/__init__.py
@@ -0,0 +1,2 @@


2 changes: 2 additions & 0 deletions monai/data/augments/__init__.py
@@ -0,0 +1,2 @@

