# Lama-with-MaskDINO

*(demo)*

This project was inspired by Auto-LaMa. It differs from Auto-LaMa in two ways:

  1. It uses the instance segmentation model MaskDINO instead of the object detection model DETR.
  2. It uses LaMa with a refiner for better inpainting results.
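The overall flow is: predict instance masks, grow them slightly so object boundaries are covered, then inpaint the masked region. A minimal NumPy sketch of that flow, using a trivial mean-fill as a stand-in for LaMa (the real pipeline uses MaskDINO for the masks and LaMa with refiner for the inpainting):

```python
import numpy as np

def dilate_mask(mask: np.ndarray, iters: int = 1) -> np.ndarray:
    """Grow the instance mask so the inpainter also covers object edges."""
    m = mask.astype(bool)
    for _ in range(iters):
        p = np.pad(m, 1)
        # union of the mask with its 4-neighborhood shifts
        m = p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:] | p[1:-1, 1:-1]
    return m

def inpaint_mean(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Trivial stand-in for LaMa: fill masked pixels with the mean of the rest."""
    out = image.astype(float).copy()
    out[mask] = image[~mask].mean()
    return out

# toy example: erase one pixel (plus its dilated border) from a 4x4 image
img = np.arange(16.0).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
result = inpaint_mean(img, dilate_mask(mask))
```

Dilating before inpainting matters in practice: instance masks tend to hug the object tightly, and leftover boundary pixels produce visible halos in the inpainted result.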

## Simple demo with Gradio

*(webui screenshot)*
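The web UI is a thin Gradio wrapper around the inpainting pipeline. A minimal wiring sketch, with a placeholder handler (`inpaint_handler` and `build_demo` are hypothetical names; see `demo.py` for the actual implementation):

```python
import numpy as np

def inpaint_handler(image: np.ndarray) -> np.ndarray:
    """Placeholder: the real handler would run MaskDINO, then LaMa with refiner.
    Here it just returns the input unchanged."""
    return image

def build_demo():
    import gradio as gr  # imported lazily so the handler stays testable without gradio
    return gr.Interface(fn=inpaint_handler, inputs=gr.Image(), outputs=gr.Image())

# build_demo().launch()  # serves the web UI at http://127.0.0.1:7860
```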

## Environment setup

A GPU with at least 12 GB of memory is required.

  1. Download the pre-trained weights for MaskDINO and LaMa.
  2. Arrange the directory as follows:

```
.root
├─demo.py
├─ckpt
│  ├──maskdino_swinl_50ep_300q_hid2048_3sd1_instance_maskenhanced_mask52.3ap_box59.0ap.pth
│  └─models
│      ├──config.yaml
│      └─models
│          └─best.ckpt
└─images
     ├──buildings.png
     ├──cat.png
     └──park.png
```
  3. Set up the conda environment and install dependencies:

```shell
conda create --name maskdino python=3.8 -y
conda activate maskdino
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install -U opencv-python

mkdir repo
git clone git@github.com:facebookresearch/detectron2.git
cd detectron2
pip install -e .
pip install git+https://github.com/cocodataset/panopticapi.git

cd ..
git clone -b quickfix/infer_demo --single-branch https://github.com/MeAmarP/MaskDINO.git
cd MaskDINO
pip install -r requirements.txt
cd maskdino/modeling/pixel_decoder/ops
python setup.py build install
cd ../../../../..

git clone https://github.com/geomagical/lama-with-refiner.git
cd lama-with-refiner
pip install -r requirements.txt
pip install --upgrade numpy==1.23.0
cd ../..
pip install gradio
```
  4. Run the demo:

```shell
# served locally at http://127.0.0.1:7860
python demo.py
```
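Before launching, the layout from step 2 can be verified with a small script (a hypothetical helper, not part of the repo), which lists any expected file that is missing under the repository root:

```python
from pathlib import Path
from typing import List

# Files the demo expects, relative to the repository root (from the tree in step 2).
EXPECTED = [
    "demo.py",
    "ckpt/maskdino_swinl_50ep_300q_hid2048_3sd1_instance_maskenhanced_mask52.3ap_box59.0ap.pth",
    "ckpt/models/config.yaml",
    "ckpt/models/models/best.ckpt",
    "images/buildings.png",
    "images/cat.png",
    "images/park.png",
]

def missing_files(root: str) -> List[str]:
    """Return the expected paths that are absent under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_file()]
```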

## Acknowledgments

Many thanks to these excellent open-source projects.