
Commit 1a396f6

Merge branch 'code_owners' of github.com:ericspod/MONAI into code_owners

2 parents: cda27bd + 5d947fe

File tree: 12 files changed, +222 −52 lines

.github/CODEOWNERS

Lines changed: 15 additions & 2 deletions
@@ -1,4 +1,17 @@
 /monai/ @KumoLiu @ericspod @Nic-Ma
-/docs/ @Project-MONAI/core-reviewers
-/tests/ @Project-MONAI/core-reviewers
+/docs/ @KumoLiu @ericspod @Nic-Ma
+/tests/ @KumoLiu @ericspod @Nic-Ma
 /.github/ @KumoLiu
+/monai/networks/schedulers/ @virginiafdez
+/monai/inferers/inferer.py @virginiafdez
+/monai/losses/adversarial_loss.py @virginiafdez
+/monai/losses/perceptual.py @virginiafdez
+/monai/networks/blocks/spade_norm.py @virginiafdez
+/monai/networks/nets/autoencoderkl.py @virginiafdez
+/monai/networks/nets/controlnet.py @virginiafdez
+/monai/networks/nets/diffusion_model_unet.py @virginiafdez
+/monai/networks/nets/patchgan_discriminator.py @virginiafdez
+/monai/networks/nets/spade_autoencoderkl.py @virginiafdez
+/monai/networks/nets/spade_diffusion_model_unet.py @virginiafdez
+/monai/networks/nets/spade_network.py @virginiafdez
+/monai/networks/nets/vqvae.py @virginiafdez
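
Note (not part of the commit): GitHub resolves CODEOWNERS by taking the last matching pattern for each changed path, so the specific @virginiafdez entries listed after the broad `/monai/` rule take effect for those files while everything else under `/monai/` keeps the core reviewers. A minimal Python sketch of that precedence, using a simplified prefix-only matcher:

```python
# Illustrative sketch only: simplified CODEOWNERS resolution where the LAST
# matching pattern wins, mirroring the order of the entries in the diff above.
rules = [
    ("/monai/", ["@KumoLiu", "@ericspod", "@Nic-Ma"]),
    ("/monai/networks/nets/vqvae.py", ["@virginiafdez"]),
]

def owners_for(path: str) -> list[str]:
    owners: list[str] = []
    for pattern, names in rules:  # later rules override earlier ones
        if path == pattern or path.startswith(pattern.rstrip("/")):
            owners = names
    return owners

print(owners_for("/monai/networks/nets/vqvae.py"))  # ['@virginiafdez']
print(owners_for("/monai/transforms/compose.py"))   # ['@KumoLiu', '@ericspod', '@Nic-Ma']
```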

.github/codecov.yml

Lines changed: 7 additions & 2 deletions
@@ -24,8 +24,13 @@ coverage:
         flags: null
         paths: null
 
-# Disable comments on PR
-comment: false
+comment:  # enable code coverage comment on PR
+  layout: "diff, flags, files"
+  behavior: default
+  require_changes: false
+  require_base: false
+  require_head: true
+  hide_project_coverage: true
 
 ignore:
   - "versioneer.py"

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -105,9 +105,9 @@ MONAI tests are located under `tests/`.
 A bash script (`runtests.sh`) is provided to run all tests locally.
 Please run ``./runtests.sh -h`` to see all options.
 
-To run a particular test, for example `tests/test_dice_loss.py`:
+To run a particular test, for example `tests/losses/test_dice_loss.py`:
 ```
-python -m tests.test_dice_loss
+python -m tests.losses.test_dice_loss
 ```
 
 Before submitting a pull request, we recommend that all linting and unit tests
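
Aside (not part of this commit): the relocated module path can also be run through the standard `unittest` loader; a minimal sketch, assuming `tests.losses.test_dice_loss` is importable from the repository root:

```python
# Load and run a single test module programmatically with the standard
# unittest runner, equivalent to `python -m tests.losses.test_dice_loss`.
import unittest

suite = unittest.defaultTestLoader.loadTestsFromName("tests.losses.test_dice_loss")
unittest.TextTestRunner(verbosity=2).run(suite)
```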

README.md

Lines changed: 9 additions & 0 deletions
@@ -33,6 +33,15 @@ Its ambitions are as follows:
 - customizable design for varying user expertise;
 - multi-GPU multi-node data parallelism support.
 
+## Requirements
+
+MONAI works with the [currently supported versions of Python](https://devguide.python.org/versions), and depends directly on NumPy and PyTorch with many optional dependencies.
+
+* Major releases of MONAI will have dependency versions stated for them. The current state of the `dev` branch in this repository is the unreleased development version of MONAI which typically will support current versions of dependencies and include updates and bug fixes to do so.
+* PyTorch support covers [the current version](https://github.com/pytorch/pytorch/releases) plus three previous minor versions. If compatibility issues with a PyTorch version and other dependencies arise, support for a version may be delayed until a major release.
+* Our support policy for other dependencies adheres for the most part to [SPEC0](https://scientific-python.org/specs/spec-0000), where dependency versions are supported where possible for up to two years. Discovered vulnerabilities or defects may require certain versions to be explicitly not supported.
+* See the `requirements*.txt` files for dependency version information.
+
 ## Installation
 
 To install [the current release](https://pypi.org/project/monai/), you can simply run:
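
Aside (not part of this commit): a quick way to check an installed environment against the dependency policy above; `monai.config.print_config()` is assumed to be available, as in current MONAI releases:

```python
# Print the versions of MONAI and its direct dependencies, then MONAI's own
# report of required and optional dependency versions.
import monai
import numpy
import torch

print("MONAI:", monai.__version__)
print("PyTorch:", torch.__version__)
print("NumPy:", numpy.__version__)

monai.config.print_config()  # assumed available; detailed dependency report
```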

docs/source/installation.md

Lines changed: 5 additions & 5 deletions
@@ -58,10 +58,10 @@ pip install monai-weekly
 ```
 
 The weekly build is released to PyPI every Sunday with a pre-release build number `dev[%y%U]`.
-To report any issues on the weekly preview, please include the version and commit information:
+To report any issues on the weekly preview, please include the version information:
 
 ```bash
-python -c "import monai; print(monai.__version__); print(monai.__commit_id__)"
+python -c "import monai; print(monai.__version__)"
 ```
 
 Coexistence of package `monai` and `monai-weekly` in a system may cause namespace conflicts
@@ -101,20 +101,20 @@ for the latest features:
 ### Option 1 (as a part of your system-wide module):
 
 ```bash
-pip install git+https://github.com/Project-MONAI/MONAI#egg=monai
+pip install git+https://github.com/Project-MONAI/MONAI
 ```
 
 or, to build with MONAI C++/CUDA extensions:
 
 ```bash
-BUILD_MONAI=1 pip install git+https://github.com/Project-MONAI/MONAI#egg=monai
+BUILD_MONAI=1 pip install git+https://github.com/Project-MONAI/MONAI
 ```
 
 To build the extensions, if the system environment already has a version of Pytorch installed,
 `--no-build-isolation` might be preferred:
 
 ```bash
-BUILD_MONAI=1 pip install --no-build-isolation git+https://github.com/Project-MONAI/MONAI#egg=monai
+BUILD_MONAI=1 pip install --no-build-isolation git+https://github.com/Project-MONAI/MONAI
 ```
 
 this command will download and install the current `dev` branch of [MONAI from

monai/apps/auto3dseg/auto_runner.py

Lines changed: 1 addition & 1 deletion
@@ -570,7 +570,7 @@ def set_device_info(
             self.device_setting["CUDA_VISIBLE_DEVICES"] = ",".join([str(x) for x in cuda_visible_devices])
             self.device_setting["n_devices"] = len(cuda_visible_devices)
         else:
-            logger.warn(f"Wrong format of cuda_visible_devices {cuda_visible_devices}, devices not set")
+            logger.warning(f"Wrong format of cuda_visible_devices {cuda_visible_devices}, devices not set")
 
         if num_nodes is None:
            num_nodes = int(os.environ.get("NUM_NODES", 1))
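
Aside (not part of this commit): `logging.Logger.warn` is a deprecated alias of `logging.Logger.warning`, so this change (and the matching ones in `monai/bundle/workflows.py` below) produces the same log output while avoiding the deprecated call, which may emit a `DeprecationWarning` on recent Python versions. A minimal sketch:

```python
# warning() and the deprecated warn() log the same WARNING-level record;
# only the preferred method name changes.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("demo")
logger.warning("Wrong format of cuda_visible_devices %s, devices not set", [0, "x"])
```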

monai/bundle/workflows.py

Lines changed: 7 additions & 7 deletions
@@ -226,7 +226,7 @@ def add_property(self, name: str, required: str, desc: str | None = None) -> None
         if self.properties is None:
             self.properties = {}
         if name in self.properties:
-            logger.warn(f"property '{name}' already exists in the properties list, overriding it.")
+            logger.warning(f"property '{name}' already exists in the properties list, overriding it.")
         self.properties[name] = {BundleProperty.DESC: desc, BundleProperty.REQUIRED: required}
 
     def check_properties(self) -> list[str] | None:
@@ -421,7 +421,7 @@ def __init__(
             for _config_file in _config_files:
                 _config_file = Path(_config_file)
                 if _config_file.parent != config_root_path:
-                    logger.warn(
+                    logger.warning(
                         f"Not all config files are in {config_root_path}. If logging_file and meta_file are"
                         f"not specified, {config_root_path} will be used as the default config root directory."
                     )
@@ -434,11 +434,11 @@ def __init__(
         self.config_root_path = config_root_path
         logging_file = str(self.config_root_path / "logging.conf") if logging_file is None else logging_file
         if logging_file is False:
-            logger.warn(f"Logging file is set to {logging_file}, skipping logging.")
+            logger.warning(f"Logging file is set to {logging_file}, skipping logging.")
         else:
             if not os.path.isfile(logging_file):
                 if logging_file == str(self.config_root_path / "logging.conf"):
-                    logger.warn(f"Default logging file in {logging_file} does not exist, skipping logging.")
+                    logger.warning(f"Default logging file in {logging_file} does not exist, skipping logging.")
                 else:
                     raise FileNotFoundError(f"Cannot find the logging config file: {logging_file}.")
             else:
@@ -503,17 +503,17 @@ def check_properties(self) -> list[str] | None:
         """
         ret = super().check_properties()
         if self.properties is None:
-            logger.warn("No available properties had been set, skipping check.")
+            logger.warning("No available properties had been set, skipping check.")
             return None
         if ret:
-            logger.warn(f"Loaded bundle does not contain the following required properties: {ret}")
+            logger.warning(f"Loaded bundle does not contain the following required properties: {ret}")
         # also check whether the optional properties use correct ID name if existing
         wrong_props = []
         for n, p in self.properties.items():
             if not p.get(BundleProperty.REQUIRED, False) and not self._check_optional_id(name=n, property=p):
                 wrong_props.append(n)
         if wrong_props:
-            logger.warn(f"Loaded bundle defines the following optional properties with wrong ID: {wrong_props}")
+            logger.warning(f"Loaded bundle defines the following optional properties with wrong ID: {wrong_props}")
             if ret is not None:
                 ret.extend(wrong_props)
         return ret

monai/inferers/inferer.py

Lines changed: 60 additions & 9 deletions
@@ -839,6 +839,7 @@ def sample(
         mode: str = "crossattn",
         verbose: bool = True,
         seg: torch.Tensor | None = None,
+        cfg: float | None = None,
     ) -> torch.Tensor | tuple[torch.Tensor, list[torch.Tensor]]:
         """
         Args:
@@ -851,6 +852,7 @@
             mode: Conditioning mode for the network.
             verbose: if true, prints the progression bar of the sampling process.
             seg: if diffusion model is instance of SPADEDiffusionModel, segmentation must be provided.
+            cfg: classifier-free-guidance scale, which indicates the level of strengthening on the conditioning.
         """
         if mode not in ["crossattn", "concat"]:
             raise NotImplementedError(f"{mode} condition is not supported")
@@ -877,15 +879,31 @@
                 if isinstance(diffusion_model, SPADEDiffusionModelUNet)
                 else diffusion_model
             )
-            if mode == "concat" and conditioning is not None:
-                model_input = torch.cat([image, conditioning], dim=1)
+            if (
+                cfg is not None
+            ):  # if classifier-free guidance is used, a conditioned and unconditioned bit is generated.
+                model_input = torch.cat([image] * 2, dim=0)
+                if conditioning is not None:
+                    uncondition = torch.ones_like(conditioning)
+                    uncondition.fill_(-1)
+                    conditioning_input = torch.cat([uncondition, conditioning], dim=0)
+                else:
+                    conditioning_input = None
+            else:
+                model_input = image
+                conditioning_input = conditioning
+            if mode == "concat" and conditioning_input is not None:
+                model_input = torch.cat([model_input, conditioning_input], dim=1)
                 model_output = diffusion_model(
                     model_input, timesteps=torch.Tensor((t,)).to(input_noise.device), context=None
                 )
             else:
                 model_output = diffusion_model(
-                    image, timesteps=torch.Tensor((t,)).to(input_noise.device), context=conditioning
+                    model_input, timesteps=torch.Tensor((t,)).to(input_noise.device), context=conditioning_input
                 )
+            if cfg is not None:
+                model_output_uncond, model_output_cond = model_output.chunk(2)
+                model_output = model_output_uncond + cfg * (model_output_cond - model_output_uncond)
 
             # 2. compute previous image: x_t -> x_t-1
             if not isinstance(scheduler, RFlowScheduler):
@@ -1166,6 +1184,7 @@ def sample( # type: ignore[override]
         mode: str = "crossattn",
         verbose: bool = True,
         seg: torch.Tensor | None = None,
+        cfg: float | None = None,
     ) -> torch.Tensor | tuple[torch.Tensor, list[torch.Tensor]]:
         """
         Args:
@@ -1180,6 +1199,7 @@ def sample( # type: ignore[override]
             verbose: if true, prints the progression bar of the sampling process.
             seg: if diffusion model is instance of SPADEDiffusionModel, or autoencoder_model
                 is instance of SPADEAutoencoderKL, segmentation must be provided.
+            cfg: classifier-free-guidance scale, which indicates the level of strengthening on the conditioning.
         """
 
         if (
@@ -1203,6 +1223,7 @@ def sample( # type: ignore[override]
             mode=mode,
             verbose=verbose,
             seg=seg,
+            cfg=cfg,
         )
 
         if save_intermediates:
@@ -1381,6 +1402,7 @@ def sample( # type: ignore[override]
         mode: str = "crossattn",
         verbose: bool = True,
         seg: torch.Tensor | None = None,
+        cfg: float | None = None,
     ) -> torch.Tensor | tuple[torch.Tensor, list[torch.Tensor]]:
         """
         Args:
@@ -1395,6 +1417,7 @@ def sample( # type: ignore[override]
             mode: Conditioning mode for the network.
             verbose: if true, prints the progression bar of the sampling process.
             seg: if diffusion model is instance of SPADEDiffusionModel, segmentation must be provided.
+            cfg: classifier-free-guidance scale, which indicates the level of strengthening on the conditioning.
         """
         if mode not in ["crossattn", "concat"]:
             raise NotImplementedError(f"{mode} condition is not supported")
@@ -1413,14 +1436,31 @@ def sample( # type: ignore[override]
             progress_bar = iter(zip(scheduler.timesteps, all_next_timesteps))
         intermediates = []
 
+        if cfg is not None:
+            cn_cond = torch.cat([cn_cond] * 2, dim=0)
+
         for t, next_t in progress_bar:
+            # Controlnet prediction
+            if cfg is not None:
+                model_input = torch.cat([image] * 2, dim=0)
+                if conditioning is not None:
+                    uncondition = torch.ones_like(conditioning)
+                    uncondition.fill_(-1)
+                    conditioning_input = torch.cat([uncondition, conditioning], dim=0)
+                else:
+                    conditioning_input = None
+            else:
+                model_input = image
+                conditioning_input = conditioning
+
+            # Diffusion model prediction
             diffuse = diffusion_model
             if isinstance(diffusion_model, SPADEDiffusionModelUNet):
                 diffuse = partial(diffusion_model, seg=seg)
 
-            if mode == "concat" and conditioning is not None:
+            if mode == "concat" and conditioning_input is not None:
                 # 1. Conditioning
-                model_input = torch.cat([image, conditioning], dim=1)
+                model_input = torch.cat([model_input, conditioning_input], dim=1)
                 # 2. ControlNet forward
                 down_block_res_samples, mid_block_res_sample = controlnet(
                     x=model_input,
@@ -1437,20 +1477,28 @@ def sample( # type: ignore[override]
                     mid_block_additional_residual=mid_block_res_sample,
                 )
             else:
+                # 1. Controlnet forward
                 down_block_res_samples, mid_block_res_sample = controlnet(
-                    x=image,
+                    x=model_input,
                     timesteps=torch.Tensor((t,)).to(input_noise.device),
                     controlnet_cond=cn_cond,
-                    context=conditioning,
+                    context=conditioning_input,
                 )
+                # 2. predict noise model_output
                 model_output = diffuse(
-                    image,
+                    model_input,
                     timesteps=torch.Tensor((t,)).to(input_noise.device),
-                    context=conditioning,
+                    context=conditioning_input,
                     down_block_additional_residuals=down_block_res_samples,
                     mid_block_additional_residual=mid_block_res_sample,
                 )
 
+            # If classifier-free guidance isn't None, we split and compute the weighting between
+            # conditioned and unconditioned output.
+            if cfg is not None:
+                model_output_uncond, model_output_cond = model_output.chunk(2)
+                model_output = model_output_uncond + cfg * (model_output_cond - model_output_uncond)
+
             # 3. compute previous image: x_t -> x_t-1
             if not isinstance(scheduler, RFlowScheduler):
                 image, _ = scheduler.step(model_output, t, image)  # type: ignore
@@ -1714,6 +1762,7 @@ def sample( # type: ignore[override]
         mode: str = "crossattn",
         verbose: bool = True,
         seg: torch.Tensor | None = None,
+        cfg: float | None = None,
     ) -> torch.Tensor | tuple[torch.Tensor, list[torch.Tensor]]:
         """
         Args:
@@ -1730,6 +1779,7 @@ def sample( # type: ignore[override]
             verbose: if true, prints the progression bar of the sampling process.
             seg: if diffusion model is instance of SPADEDiffusionModel, or autoencoder_model
                 is instance of SPADEAutoencoderKL, segmentation must be provided.
+            cfg: classifier-free-guidance scale, which indicates the level of strengthening on the conditioning.
         """
 
         if (
@@ -1757,6 +1807,7 @@ def sample( # type: ignore[override]
             mode=mode,
             verbose=verbose,
             seg=seg,
+            cfg=cfg,
         )
 
         if save_intermediates:
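
Aside (not part of this commit): the `cfg` changes above implement classifier-free guidance by doubling the batch (the unconditioned half uses a conditioning tensor filled with -1), running the model once, and blending the two halves as `uncond + cfg * (cond - uncond)`. A self-contained sketch of that combination with a toy stand-in model:

```python
# Standalone sketch (not MONAI code) of the classifier-free guidance blend.
import torch

def cfg_step(model, image, conditioning, cfg):
    # Double the batch: first half unconditioned, second half conditioned.
    model_input = torch.cat([image] * 2, dim=0)
    uncondition = torch.full_like(conditioning, -1)  # "null" conditioning, as in the diff
    conditioning_input = torch.cat([uncondition, conditioning], dim=0)
    model_output = model(model_input, conditioning_input)
    uncond, cond = model_output.chunk(2)
    # cfg == 1 reproduces the conditioned prediction; cfg > 1 strengthens the conditioning.
    return uncond + cfg * (cond - uncond)

# Toy usage with a stand-in "model" that just adds a reduction of the context.
toy_model = lambda x, ctx: x + ctx.mean(dim=1, keepdim=True)
image = torch.randn(2, 1, 8, 8)
context = torch.randn(2, 1, 8, 8)
guided = cfg_step(toy_model, image, context, cfg=5.0)
print(guided.shape)  # torch.Size([2, 1, 8, 8])
```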

tests/bundle/test_bundle_download.py

Lines changed: 0 additions & 14 deletions
@@ -316,20 +316,6 @@ def test_load_weights(self, bundle_files, bundle_name, repo, device, model_file)
         output_2 = model_2.forward(input_tensor)
         assert_allclose(output_2, expected_output, atol=1e-4, rtol=1e-4, type_test=False)
 
-        model_3 = load(
-            name=bundle_name,
-            model_file=model_file,
-            bundle_dir=tempdir,
-            progress=False,
-            device=device,
-            net_name=model_name,
-            source="github",
-            **net_args,
-        )
-        model_3.eval()
-        output_3 = model_3.forward(input_tensor)
-        assert_allclose(output_3, expected_output, atol=1e-4, rtol=1e-4, type_test=False)
-
     @parameterized.expand([TEST_CASE_8])
     @skip_if_quick
     @skipUnless(has_huggingface_hub, "Requires `huggingface_hub`.")

tests/inferers/test_controlnet_inferers.py

Lines changed: 14 additions & 10 deletions
@@ -482,16 +482,20 @@ def test_sample_intermediates(self, model_params, controlnet_params, input_shape
         scheduler = DDPMScheduler(num_train_timesteps=10)
         inferer = ControlNetDiffusionInferer(scheduler=scheduler)
         scheduler.set_timesteps(num_inference_steps=10)
-        sample, intermediates = inferer.sample(
-            input_noise=noise,
-            diffusion_model=model,
-            scheduler=scheduler,
-            controlnet=controlnet,
-            cn_cond=mask,
-            save_intermediates=True,
-            intermediate_steps=1,
-        )
-        self.assertEqual(len(intermediates), 10)
+
+        for cfg in [5, None]:
+            sample, intermediates = inferer.sample(
+                input_noise=noise,
+                diffusion_model=model,
+                scheduler=scheduler,
+                controlnet=controlnet,
+                cn_cond=mask,
+                save_intermediates=True,
+                intermediate_steps=1,
+                cfg=cfg,
+            )
+
+            self.assertEqual(len(intermediates), 10)
 
     @parameterized.expand(CNDM_TEST_CASES)
     @skipUnless(has_einops, "Requires einops")
