MoveHack Global Mobility Hackathon 2018
Original Image | Mask generated by the model | Mask overlaid on original |
---|---|---|
![]() | ![]() | ![]() |
In this work, we implement the U-Net segmentation architecture on the Massachusetts Roads Dataset (Mnih et al.) for the task of road network extraction. The trained model achieves a mask accuracy of 96% on the test set. The model was trained on an AWS P2.x instance. Inference time is 1.5 seconds on a single Tesla K80 GPU (AWS P2.x instance) and 6 seconds on CPU.
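The mask accuracy reported above is a per-pixel metric. A minimal sketch of how such a score can be computed with NumPy (binary masks assumed; the `threshold` of 0.5 is an illustrative choice, not the repo's exact code):

```python
import numpy as np

def mask_accuracy(pred, target, threshold=0.5):
    """Fraction of pixels where the thresholded prediction matches the
    ground-truth binary mask."""
    pred_bin = (pred >= threshold).astype(np.uint8)
    target_bin = (target >= threshold).astype(np.uint8)
    return float((pred_bin == target_bin).mean())

# Toy example: a 4x4 prediction that gets 14 of 16 pixels right.
target = np.zeros((4, 4))
target[:2, :] = 1.0
pred = target.copy()
pred[3, 0] = 0.9  # false positive
pred[3, 1] = 0.8  # false positive
print(mask_accuracy(pred, target))  # 0.875
```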
The Massachusetts Roads Dataset by Mnih et al. is freely available here. There is also a torrent link available here.
Description | Size | Files |
---|---|---|
mass_roads/train | 9.5GB | 1108 |
mass_roads/valid | 150MB | 14 |
mass_roads/test | 450MB | 49 |
The following installation has been tested on macOS 10.13.6 and Ubuntu 16.04.
- Clone the repo. (NOTE: This repo requires Python 3.6)
git clone https://github.com/akshaybhatia10/RoadNetworkExtraction-MoveHack.git
- The project requires the fastai library. To install it, simply run the setup.sh script. (OPTIONAL: the default installation is CPU-only. To install for GPU, change line 5 of setup.sh, i.e. `conda env update -f environment-cpu.yml`, to `conda env update`.)
chmod +x setup.sh
./setup.sh
To train the model, proceed with step 3 and then Quick start step 1. To test the pretrained model, simply skip to Quick start step 2.
- Download the dataset from here or here. It is recommended to download the dataset using the torrent link since it downloads the files in the appropriate directories.
You have two options:
- Train: To train the model, make sure to download the dataset and arrange the repo structure as follows:
RoadNetworkExtraction-MoveHack
|_ mass_roads/
| |_ train/
| | |_sat/
| | | |img1.tiff
| | | |img2.tiff
| | | |......
| | |_map/
| | | |img1.tif
| | | |img2.tif
| | | |......
| |_ valid/
| | |_sat/
| | | |img1.tiff
| | | |img2.tiff
| | | |......
| | |_map/
| | | |img1.tif
| | | |img2.tif
| | | |......
| |_ test/
| | |_sat/
| | | |img1.tiff
| | | |img2.tiff
| | | |......
| | |_map/
| | | |img1.tif
| | | |img2.tif
| | | |......
|_ fastai
|_ dataset.py
|_ model.py
|_ models/
|_ ....
|_ (other files)
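Before training, the layout above can be sanity-checked with a small stdlib-only sketch (the split and subfolder names follow the tree above; `root` is wherever you cloned the repo — this is an illustrative helper, not part of the repo):

```python
import tempfile
from pathlib import Path

SPLITS = ("train", "valid", "test")
SUBDIRS = ("sat", "map")

def missing_dirs(root):
    """Return the expected mass_roads subdirectories that do not exist under root."""
    root = Path(root)
    expected = [root / "mass_roads" / s / d for s in SPLITS for d in SUBDIRS]
    return [str(p) for p in expected if not p.is_dir()]

# Demo against a temporary skeleton instead of a real download:
with tempfile.TemporaryDirectory() as tmp:
    for split in SPLITS:
        (Path(tmp) / "mass_roads" / split / "sat").mkdir(parents=True)
    print(len(missing_dirs(tmp)))  # 3 -- the three map/ folders are absent
```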
Now, start training with the following command. (NOTE: This will first set up the necessary folders, convert the .tiff files to .png, and save them. It will then train the U-Net model for num_epoch epochs (default 1) with cycle_len (default 4). The trained model is saved to the models/ directory and achieves a mask accuracy of 96% on the test set.)
python main.py --mode train
usage: main.py [--data_dir] [--learning_rate] [--mode]
[--model_dir] [--num_epoch] [--cycle_len]
[--test_img]
Arguments:
--data_dir Path to the dataset (REQUIRED if mode is train, DEFAULT mass_roads/)
--mode One of train or test (DEFAULT test)
--learning_rate Learning rate (OPTIONAL, DEFAULT 0.1)
--model_dir Path to save model files (OPTIONAL, DEFAULT models)
--test_img Path to test image for inference (OPTIONAL, DEFAULT test_images/10378780_15.png)
--num_epoch Number of epochs (OPTIONAL, DEFAULT 1)
--cycle_len Cycle length (OPTIONAL, DEFAULT 4)
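The flags above map onto a standard `argparse` setup. A hedged sketch of what the parser in main.py plausibly looks like, with defaults copied from the table above (illustrative, not the repo's exact code):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Road network extraction with U-Net")
    parser.add_argument("--data_dir", default="mass_roads/",
                        help="path to the dataset (required when --mode train)")
    parser.add_argument("--mode", choices=("train", "test"), default="test")
    parser.add_argument("--learning_rate", type=float, default=0.1)
    parser.add_argument("--model_dir", default="models")
    parser.add_argument("--test_img", default="test_images/10378780_15.png")
    parser.add_argument("--num_epoch", type=int, default=1)
    parser.add_argument("--cycle_len", type=int, default=4)
    return parser

args = build_parser().parse_args(["--mode", "train", "--num_epoch", "3"])
print(args.mode, args.num_epoch)  # train 3
```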
- Test: To test the pretrained model available in the models directory (mask accuracy 96%), run the following:
python main.py --mode test
This will save three images in the current folder: a 1024x1024 version of the original image, the 1024x1024 generated mask (the model's output), and the 1024x1024 mask overlaid on the original image.
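The overlay step can be reproduced with plain NumPy. A minimal sketch that blends a binary mask onto an RGB image (the 0.5 alpha and red highlight colour are illustrative choices, not necessarily what the repo uses):

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Blend `color` into `image` wherever `mask` is nonzero.

    image: (H, W, 3) uint8 array; mask: (H, W) array of 0/1.
    """
    out = image.astype(np.float32)
    color_arr = np.array(color, dtype=np.float32)
    sel = mask.astype(bool)
    out[sel] = (1 - alpha) * out[sel] + alpha * color_arr
    return out.astype(np.uint8)

# Toy 2x2 grey image; the mask covers only the top-left pixel.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]])
print(overlay_mask(img, mask)[0, 0])  # [177  50  50]
```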