
Road Network Extraction using Satellite Imagery

MoveHack Global Mobility Hackathon 2018

Results of the implemented system on a Mnih et al. test-set image: the original image, the mask generated by the model, and the mask overlaid on the original.

Table of Contents

  1. About
  2. Dataset Summary
  3. Installation
  4. Quick start

About

In this work, we implement the U-Net segmentation architecture on the Mnih et al. Massachusetts Roads Dataset for the task of road network extraction. The trained model achieves a mask accuracy of 96% on the test set. The model was trained on an AWS P2 instance.
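
The reported figure is a per-pixel mask accuracy. As a rough illustration of what that metric measures (an assumption about its definition, not the repository's metric code; the 0.5 threshold and function name are made up for the example):

# Rough sketch of per-pixel mask accuracy; the 0.5 binarisation threshold is an assumption.
import numpy as np

def mask_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels where the binarised prediction matches the ground-truth mask."""
    pred_bin = (pred > 0.5).astype(np.uint8)
    target_bin = (target > 0.5).astype(np.uint8)
    return float((pred_bin == target_bin).mean())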

Massachusetts Roads Dataset Summary

The Massachusetts Roads Dataset by Mnih et al. is freely available here. A torrent is also available here; it is recommended to download the dataset from this link.

Description        Size    Files
mass_roads/train   9.5 GB  1108
mass_roads/valid   150 MB  14
mass_roads/test    450 MB  49

Installation

The following installation has been tested on macOS 10.13.6 and Ubuntu 16.04.

  1. Clone the repo. (NOTE: This repo requires Python 3.6)
git clone https://github.com/akshaybhatia10/RoadNetworkExtraction-MoveHack.git
  2. The project requires the fastai library. To install it, simply run the setup.sh script. (OPTIONAL: The default installation is CPU-only. To install for GPU, change line 5 of setup.sh, i.e. conda env update -f environment-cpu.yml, to conda env update.) An optional import check is sketched after this list.
chmod 777 setup.sh
./setup.sh
  3. To train the road network extraction model, download the dataset from here or here. To run the pretrained model instead, skip this step.
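
As an optional sanity check (not part of this repository), you can confirm from within the conda environment created by setup.sh that PyTorch and fastai import correctly and whether a GPU is visible:

# Optional environment check; run inside the conda environment created by setup.sh.
import torch   # installed as a fastai dependency
import fastai  # the library installed by setup.sh

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())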

Quick start

You have two options:

  1. Train: To train the model on the dataset, make sure you have downloaded the dataset and arranged the repository as follows:

RoadNetworkExtraction-MoveHack
|_ mass_roads/
|  |_ train/
|  |  |_sat/
|  |  |  |img1.tiff
|  |  |  |img2.tiff
|  |  |  |......
|  |  |_map/
|  |  |  |img1.tif
|  |  |  |img2.tif
|  |  |  |......
|  |_ valid/
|  |  |_sat/
|  |  |  |img1.tiff
|  |  |  |img2.tiff
|  |  |  |......
|  |  |_map/
|  |  |  |img1.tif
|  |  |  |img2.tif
|  |  |  |......
|  |_ test/
|  |  |_sat/
|  |  |  |img1.tiff
|  |  |  |img2.tiff
|  |  |  |......
|  |  |_map/
|  |  |  |img1.tif
|  |  |  |img2.tif
|  |  |  |......
|_ fastai
|_ dataset.py
|_ model.py
|_ models/
|_ ....
|_ (other files)

Now, start training with the following command. (NOTE: This will first set up the necessary folders, convert the .tiff files to .png, and save them; a sketch of this conversion step follows the argument list below. It will then train the U-Net model for num_epochs (default 1) with cycle_len (default 4). The trained model is saved to the models/ directory and achieves a mask accuracy of 96% on the test set.)

python main.py --mode train

usage: main.py  [--data_dir] [--learning_rate] [--mode]
                [--model_dir] [--num_epoch] [--cycle_len]
                [--test_img]

Arguments:
  --data_dir        Path to the dataset (DEFAULT mass_roads/)
  --mode            One of train or test (DEFAULT test)
  --learning_rate   Learning rate (DEFAULT 0.1)
  --model_dir       Model file directory (DEFAULT models)
  --test_img        Test image for inference (DEFAULT test_images/10378780_15.png)
  --num_epoch       Number of epochs (DEFAULT 1)
  --cycle_len       Cycle length (DEFAULT 4)
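
For reference, the .tiff-to-.png conversion mentioned in the note above can be sketched as follows. This is a hypothetical stand-alone illustration using Pillow, not the repository's dataset.py; the mass_roads_png/ output directory name is an assumption.

# Hypothetical sketch of the .tiff -> .png conversion described above,
# assuming the mass_roads/ layout shown earlier; not the repository's actual code.
from pathlib import Path
from PIL import Image

def convert_split(src_dir: Path, out_dir: Path) -> None:
    """Convert every .tif/.tiff image in src_dir to a .png in out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(src_dir.glob("*.tif*")):
        Image.open(src).save(out_dir / (src.stem + ".png"))

for split in ("train", "valid", "test"):
    for kind in ("sat", "map"):
        convert_split(Path("mass_roads") / split / kind,
                      Path("mass_roads_png") / split / kind)  # output dir name is illustrative
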
  2. Test: To test the pretrained model available in the models/ directory (with a mask accuracy of 96%), run the following:
python main.py --mode test --test_img test_images/

This will save three images to the current folder: a 1024x1024 version of the original image, the 1024x1024 mask generated by the model (the model output), and the 1024x1024 mask overlaid on the original image.
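
As a rough illustration of the overlay step (assuming the predicted mask has already been written to a file; mask.png, the red tint, and the output filenames are placeholders, not the repository's actual output names):

# Hypothetical illustration of overlaying a predicted road mask on the original image;
# mask.png and the output filenames below are placeholder names.
import numpy as np
from PIL import Image

original = Image.open("test_images/10378780_15.png").convert("RGB").resize((1024, 1024))
mask = Image.open("mask.png").convert("L").resize((1024, 1024))

overlay = np.array(original, dtype=np.float32)
road = np.array(mask, dtype=np.float32) / 255.0
overlay[..., 0] = np.clip(overlay[..., 0] + 255.0 * road, 0, 255)  # tint road pixels red

original.save("original_1024.png")
mask.save("mask_1024.png")
Image.fromarray(overlay.astype(np.uint8)).save("overlay_1024.png")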
