An end-to-end Computer Vision project focused on Image Segmentation, specifically Semantic Segmentation. Although the project has primarily been built with the LandCover.ai dataset, the template can be applied to train a model on any semantic segmentation dataset and to extract inference outputs from the model in a promptable fashion. This is not close to true promptable AI, but the term is used here because of a specific piece of functionality integrated into the inference pipeline.
The model can be trained on any or all of the classes present in the semantic segmentation dataset. At inference time, the user passes a prompt (the config variable 'test_classes') listing the classes that should appear in the masks predicted by the trained model.
For example, suppose the model has been trained on all 30 classes of the CityScapes dataset, but for a specific use case the user only wants the class 'parking' to appear in the predicted mask. Setting 'test_classes = ['parking']' in the config file yields exactly that output.
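The idea behind this selective-class inference can be sketched in a few lines. The snippet below is an illustration only, not the project's actual implementation: it assumes the model outputs a 2-D array of class indices and simply remaps every class that was not requested in 'test_classes' back to the background index.

import numpy as np

# Sketch of the selective-class ("promptable") idea; not the project's actual code.
def filter_mask(pred_mask, train_classes, test_classes, background_idx=0):
    # Keep only the requested classes; everything else becomes background.
    keep = {train_classes.index(c) for c in test_classes}
    out = np.full_like(pred_mask, background_idx)
    for idx in keep:
        out[pred_mask == idx] = idx
    return out

pred_mask = np.random.randint(0, 4, size=(512, 512))  # dummy prediction for illustration
filtered = filter_mask(pred_mask,
                       train_classes=['background', 'building', 'woodland', 'water'],
                       test_classes=['background', 'building', 'water'])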
1. Training the model on LandCover.ai dataset with 'train_classes': ['background', 'building', 'woodland', 'water']...
2. Testing the trained model for all the classes used to train the model, i.e. 'test_classes': ['background', 'building', 'woodland', 'water']...
3. Testing the trained model for selective classes as per user input, i.e. 'test_classes': ['background', 'building', 'water']...
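As an illustration only, the class prompts used in the scenarios above boil down to the following config values (shown here as a Python dict; the actual config file's format and layout in this project may differ):

# Hypothetical config layout illustrating the scenarios above.
config = {
    'train_classes': ['background', 'building', 'woodland', 'water'],  # scenario 1
    'test_classes': ['background', 'building', 'water'],               # scenario 3: selective prompt
}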
- Dataset prerequisite for training:
Before training a model, download the dataset from LandCover.ai or from Kaggle (LandCover.ai), and copy or move the downloaded 'images' and 'masks' directories into the 'train' directory of the project.
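A quick way to confirm that the data landed in the right place is a small sanity check like the one below (it assumes the layout described above, i.e. 'train/images' and 'train/masks' under the project root; adjust the paths if your setup differs):

from pathlib import Path

# Sanity check for the expected dataset layout (run from the project root).
images = sorted(Path('train/images').glob('*'))
masks = sorted(Path('train/masks').glob('*'))
print(f'{len(images)} images, {len(masks)} masks found')
assert images and len(images) == len(masks), 'images/masks missing or mismatched'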
- Programming environment prerequisite to run the project:
If using the conda package manager (Anaconda or Miniconda), create and activate the conda environment as follows:
conda create --name <environment-name> python=3.9
conda activate <environment-name>
If using a plain Python installation, create and activate the virtual environment as follows:
python -m venv <environment-name>
On Windows: <environment-name>\Scripts\activate
On Linux/macOS: source <environment-name>/bin/activate
- Clone the repository:
git clone https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch.git
- Change to the project directory:
cd Land-Cover-Semantic-Segmentation-PyTorch
- Install the dependencies:
pip install -r requirements.txt
Run the model training and testing/inferencing scripts from the project directory. Training the model first is not mandatory: a simple pre-trained model is provided, so the test script can be run and its outputs inspected before attempting to fine-tune the model.
- Run the model training script:
cd src
python train.py
- Run the model testing/inferencing script:
cd src
python test.py
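Once test.py has run, the predicted masks can be inspected visually. The snippet below is only a sketch; the input image and predicted mask paths are placeholders that should be pointed at the actual files test.py reads and writes:

import matplotlib.pyplot as plt
from PIL import Image

# Placeholders: replace with an actual input image and the mask test.py produced for it.
image = Image.open('train/images/<some-image>.tif')
mask = Image.open('<path-to-predicted-mask>.png')

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(image); axes[0].set_title('Image'); axes[0].axis('off')
axes[1].imshow(mask); axes[1].set_title('Predicted mask'); axes[1].axis('off')
plt.show()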
@misc{Souvik2023,
Author = {Souvik Majumder},
Title = {Land Cover Semantic Segmentation PyTorch},
Year = {2023},
Publisher = {GitHub},
Journal = {GitHub repository},
Howpublished = {\url{https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch}}
}
This project is distributed under the MIT License.
@misc{Iakubovskii:2019,
Author = {Pavel Iakubovskii},
Title = {Segmentation Models Pytorch},
Year = {2019},
Publisher = {GitHub},
Journal = {GitHub repository},
Howpublished = {\url{https://github.com/qubvel/segmentation_models.pytorch}}
}