This repository was archived by the owner on Nov 16, 2023. It is now read-only.

Commit a34c613

maxkazmsft and yalaudah authored
Docker README corrections and pretrained model checking (#398)
* added better instructions to Docker readme; removed HRNet references
* added checking of pre-trained models on startup
* Update docker/README.md
* added more README changes and a video link with overview
* readme tweaks

Co-authored-by: yalaudah <yazeed.alaudah@microsoft.com>
1 parent 6fedb0b commit a34c613

File tree

11 files changed: +52, -20 lines changed

README.md

Lines changed: 3 additions & 0 deletions
@@ -11,6 +11,9 @@ The repository provides sample notebooks, data loaders for seismic data, utiliti

DeepSeismic currently focuses on Seismic Interpretation (mainly facies classification) with experimental code provided around Seismic Imaging in the contrib folder.

+For a quick overview you can watch the video showcasing the latest 0.2 release of this repository by clicking on the GIF below:
+[![watch the video](./assets/ds.gif)](https://www.screencast.com/t/BRIad19jv)
+
### Quick Start

Our repo is Docker-enabled and we provide a Docker file which you can use to quickly demo our codebase. If you are in a hurry and just can't wait to run our code, follow the [Docker README](https://github.com/microsoft/seismic-deeplearning/blob/master/docker/README.md) to build and run our repo from [Dockerfile](https://github.com/microsoft/seismic-deeplearning/blob/master/docker/Dockerfile).

assets/ds.gif

4.36 MB

cv_lib/cv_lib/segmentation/models/patch_deconvnet.py

Lines changed: 1 addition & 1 deletion
@@ -305,7 +305,7 @@ def get_seg_model(cfg, **kwargs):
    ), f"Patch deconvnet is not implemented to accept {cfg.MODEL.IN_CHANNELS} channels. Please only pass 1 for cfg.MODEL.IN_CHANNELS"
    model = patch_deconvnet(n_classes=cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)
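The guarded loading pattern above is repeated in each of the model factories below. As a minimal, self-contained sketch of what the new check does (the standalone helper name `load_pretrained_weights` is illustrative; the repository inlines this logic in each `get_seg_model`):

```python
import os

import torch


def load_pretrained_weights(model, cfg):
    """Sketch of the guarded checkpoint loading introduced by this commit.

    Weights are loaded only when cfg.MODEL.PRETRAINED is set and points to an
    existing file; otherwise the model keeps its default (random) initialization.
    """
    if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        # checkpoints saved from torch.nn.DataParallel prefix every key with "module."
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)
    return model
```

Since `os.path.isfile` already returns `False` for paths that do not exist, the extra `os.path.exists` call is redundant but harmless.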

cv_lib/cv_lib/segmentation/models/patch_deconvnet_skip.py

Lines changed: 1 addition & 1 deletion
@@ -305,7 +305,7 @@ def get_seg_model(cfg, **kwargs):
    ), f"Patch deconvnet is not implemented to accept {cfg.MODEL.IN_CHANNELS} channels. Please only pass 1 for cfg.MODEL.IN_CHANNELS"
    model = patch_deconvnet_skip(n_classes=cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)

cv_lib/cv_lib/segmentation/models/resnet_unet.py

Lines changed: 1 addition & 1 deletion
@@ -368,7 +368,7 @@ def get_seg_model(cfg, **kwargs):
    ), f"SEResnet Unet deconvnet is not implemented to accept {cfg.MODEL.IN_CHANNELS} channels. Please only pass 3 for cfg.MODEL.IN_CHANNELS"
    model = Res34Unetv4(n_classes=cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)

cv_lib/cv_lib/segmentation/models/section_deconvnet.py

Lines changed: 1 addition & 1 deletion
@@ -305,7 +305,7 @@ def get_seg_model(cfg, **kwargs):
    ), f"Section deconvnet is not implemented to accept {cfg.MODEL.IN_CHANNELS} channels. Please only pass 1 for cfg.MODEL.IN_CHANNELS"
    model = section_deconvnet(n_classes=cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)

cv_lib/cv_lib/segmentation/models/section_deconvnet_skip.py

Lines changed: 1 addition & 1 deletion
@@ -305,7 +305,7 @@ def get_seg_model(cfg, **kwargs):
    ), f"Section deconvnet is not implemented to accept {cfg.MODEL.IN_CHANNELS} channels. Please only pass 1 for cfg.MODEL.IN_CHANNELS"
    model = section_deconvnet_skip(n_classes=cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)

cv_lib/cv_lib/segmentation/models/seg_hrnet.py

Lines changed: 1 addition & 1 deletion
@@ -444,6 +444,6 @@ def init_weights(

def get_seg_model(cfg, **kwargs):
    model = HighResolutionNet(cfg, **kwargs)
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        model.init_weights(cfg.MODEL.PRETRAINED)
    return model

cv_lib/cv_lib/segmentation/models/unet.py

Lines changed: 1 addition & 1 deletion
@@ -114,7 +114,7 @@ def forward(self, x):
def get_seg_model(cfg, **kwargs):
    model = UNet(cfg.MODEL.IN_CHANNELS, cfg.DATASET.NUM_CLASSES)
    # load the pre-trained model
-   if "PRETRAINED" in cfg.MODEL.keys():
+   if "PRETRAINED" in cfg.MODEL.keys() and os.path.exists(cfg.MODEL.PRETRAINED) and os.path.isfile(cfg.MODEL.PRETRAINED):
        trained_model = torch.load(cfg.MODEL.PRETRAINED)
        trained_model = {k.replace("module.", ""): v for (k, v) in trained_model.items()}
        model.load_state_dict(trained_model, strict=True)
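For illustration, a hedged usage sketch of the new behavior, assuming a yacs-style `CfgNode` (which the `cfg.MODEL.keys()` calls above suggest); the checkpoint path and class count are example values, and the import path follows the file tree of this commit:

```python
from yacs.config import CfgNode as CN

from cv_lib.segmentation.models.patch_deconvnet import get_seg_model

cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.IN_CHANNELS = 1                           # patch_deconvnet only accepts 1 channel
cfg.MODEL.PRETRAINED = "/tmp/patch_deconvnet.pth"   # hypothetical checkpoint path
cfg.DATASET = CN()
cfg.DATASET.NUM_CLASSES = 6                         # example value

# Before this commit, a missing checkpoint file made torch.load raise inside
# get_seg_model; with the added os.path checks, the factory skips the load and
# returns a model with its default (random) initialization instead.
model = get_seg_model(cfg)
```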

docker/README.md

Lines changed: 41 additions & 12 deletions
@@ -1,15 +1,5 @@
This Docker image allows the user to run the notebooks in this repository on any Unix-based operating system without having to set up the environment or install anything other than the Docker engine. We recommend using [Azure Data Science Virtual Machine (DSVM) for Linux (Ubuntu)](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro) as outlined [here](../README.md#compute-environment). For instructions on how to install the Docker engine, click [here](https://www.docker.com/get-started).

-# Download the HRNet model:
-
-To run the [`Dutch_F3_patch_model_training_and_evaluation.ipynb`](https://github.com/microsoft/seismic-deeplearning/blob/master/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb), you will need to manually download the [HRNet-W48-C](https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk) pretrained model. You can follow the instructions [here](../README.md#pretrained-models).
-
-If you are using an Azure Virtual Machine to run this code, you can download the model to your local machine, and then copy it to your Azure VM through the command below. Please make sure you update the `<azureuser>` and `<azurehost>` fields.
-```bash
-scp hrnetv2_w48_imagenet_pretrained.pth <azureuser>@<azurehost>:/home/<azureuser>/seismic-deeplearning/docker/hrnetv2_w48_imagenet_pretrained.pth
-```
-Once you have the model downloaded (ideally under the `docker` directory), you can proceed to build the Docker image.
-
# Build the Docker image:

In the `docker` directory, run the following command to build the Docker image and tag it as `seismic-deeplearning`:
@@ -22,13 +12,52 @@ This process will take a few minutes to complete.
# Run the Docker image:
Once the Docker image is built, you can run it anytime using the following command:
```bash
-sudo docker run --rm -it -p 9000:9000 -p 9001:9001 --gpus=all --shm-size 11G --mount type=bind,source=$PWD/hrnetv2_w48_imagenet_pretrained.pth,target=/home/username/seismic-deeplearning/docker/hrnetv2_w48_imagenet_pretrained.pth seismic-deeplearning
+sudo docker run --rm -it -p 9000:9000 -p 9001:9001 --gpus=all --shm-size 11G seismic-deeplearning
```
-If you have saved the pretrained model in a different directory, make sure you replace `$PWD/hrnetv2_w48_imagenet_pretrained.pth` with the **absolute** path to the pretrained HRNet model. The command above will run a Jupyter Lab instance that you can access by clicking on the link in your terminal. You can then navigate to the notebook or script that you would like to run.
+The command above will run a [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) instance that you can access by clicking on the link in your terminal. You can then navigate to the notebook or script that you would like to run.
+
+We recommend using the [Google Chrome](https://www.google.com/chrome/) web browser for any visualizations shown in the notebook.
+
+You can alternatively use a [Jupyter](https://jupyter.org/) notebook instead of JupyterLab by changing the last line in the Dockerfile from
+```bash
+jupyter lab --allow-root --ip 0.0.0.0 --port 9000
+```
+to
+```bash
+jupyter notebook --allow-root --ip 0.0.0.0 --port 9000
+```
+and rebuilding the Docker image.

# Run TensorBoard:
To run TensorBoard to visualize the logged metrics and results, open a terminal in JupyterLab, navigate to the parent of the `output` directory of your model, and run the following command:
```bash
tensorboard --logdir output/ --port 9001 --bind_all
```
Make sure your VM has port 9001 allowed in the networking rules; you can then open TensorBoard by navigating to `http://<vm_ip_address>:9001/` in your browser, where `<vm_ip_address>` is your public VM IP address (or private VM IP address if you are using a VPN).
+
+# Experimental
+
+We also offer the ability to use a semantic segmentation [HRNet](https://github.com/HRNet/HRNet-Semantic-Segmentation) model with the repository from
+[Microsoft Research](https://www.microsoft.com/en-us/research/). Its use is currently experimental.
+
+## Download the HRNet model:
+
+To run the [`Dutch_F3_patch_model_training_and_evaluation.ipynb`](https://github.com/microsoft/seismic-deeplearning/blob/master/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb), you will need to manually download the [HRNet-W48-C](https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk) pretrained model. You can follow the instructions [here](../README.md#pretrained-models).
+
+If you are using an Azure Virtual Machine to run this code, you can download the HRNet model to your local machine, and then copy it to your Azure VM through the command below. Please make sure you update the `<azureuser>` and `<azurehost>` fields.
+```bash
+scp hrnetv2_w48_imagenet_pretrained.pth <azureuser>@<azurehost>:/home/<azureuser>/seismic-deeplearning/docker/hrnetv2_w48_imagenet_pretrained.pth
+```
+
+## Run the Docker image:
+
+Once you have the model downloaded (ideally under the `docker` directory), you can proceed to build the Docker image: go to the [Build the Docker image](#build-the-docker-image) section above to do so.
+
+Once the Docker image is built, you can run it anytime using the following command:
+```bash
+sudo docker run --rm -it -p 9000:9000 -p 9001:9001 --gpus=all --shm-size 11G --mount type=bind,source=$PWD/hrnetv2_w48_imagenet_pretrained.pth,target=/home/username/seismic-deeplearning/docker/hrnetv2_w48_imagenet_pretrained.pth seismic-deeplearning
+```
+
+If you have saved the pretrained model in a different directory, make sure you replace `$PWD/hrnetv2_w48_imagenet_pretrained.pth` with the **absolute** path to the pretrained HRNet model.
+The command above will run a JupyterLab instance that you can access by clicking on the link in your terminal. You can then navigate to the notebook or script that you would like to run.
