
HivisionIDPhoto

English / 中文 / 日本語 / 한국어



Related Projects

  • SwanLab: used for analysis and monitoring throughout the training of the portrait matting model, as well as for collaboration with lab colleagues, which significantly improved training efficiency.

🤩 Recent Updates

  • Online Experience: SwanHub DemoSpaces

  • 2024.09.24: API interface adds base64 image input option | Gradio Demo adds Layout Photo Cropping Lines feature

  • 2024.09.22: Gradio Demo adds Beast Mode and DPI parameter

  • 2024.09.18: Gradio Demo adds Share Template Photos feature and American Style background option

  • 2024.09.17: Gradio Demo adds Custom Background Color-HEX Input feature | (Community Contribution) C++ Version - HivisionIDPhotos-cpp contributed by zjkhahah

  • 2024.09.16: Gradio Demo adds Face Rotation Alignment feature, custom size input supports millimeters

  • 2024.09.14: Gradio Demo adds Custom DPI feature, adds Japanese and Korean support, adds Adjust Brightness, Contrast, Sharpness feature

  • 2024.09.12: Gradio Demo adds Whitening feature | API interface adds Watermark, Set Photo KB Size, ID Photo Cropping

  • 2024.09.11: Added transparent image display and download feature to Gradio Demo.


Project Overview

🚀 Thank you for your interest in our work. You may also want to check out our other work in the field of image processing. Feel free to reach out: zeyi.lin@swanhub.co.

HivisionIDPhoto aims to develop a practical and systematic intelligent algorithm for producing ID photos.

It utilizes a comprehensive AI model workflow to recognize various user photo-taking scenarios, perform matting, and generate ID photos.

HivisionIDPhoto can achieve:

  1. Lightweight matting (purely offline, fast inference with CPU only)
  2. Generate standard ID photos and six-inch layout photos based on different size specifications
  3. Support pure offline or edge-cloud inference
  4. Beauty effects (planned)
  5. Intelligent formal wear replacement (planned)

If HivisionIDPhoto helps you, please star this repo or recommend it to friends who urgently need to produce an ID photo!


🏠 Community

We have shared some interesting applications and extensions of HivisionIDPhotos built by the community:

ComfyUI workflow

HivisionIDPhotos-wechat-weapp

HivisionIDPhotos-uniapp


🔧 Preparation

Environment installation and dependencies:

  • Python >= 3.7 (project primarily tested on Python 3.10)
  • OS: Linux, Windows, macOS

1. Clone the Project

git clone https://github.com/Zeyi-Lin/HivisionIDPhotos.git
cd HivisionIDPhotos

2. Install Dependency Environment

It is recommended to create a Python 3.10 virtual environment with conda before installing the dependencies.
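A minimal sketch, assuming conda is installed (the environment name hivision is arbitrary):

conda create -n hivision python=3.10 -y
conda activate hivision

With the environment activated, install the dependencies: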

pip install -r requirements.txt
pip install -r requirements-app.txt

3. Download Weight Files

Method 1: Script Download

python scripts/download_model.py --models all

Method 2: Direct Download

Download the following files and place them in the project's hivision/creator/weights directory:

  • modnet_photographic_portrait_matting.onnx (24.7MB): Official weights of MODNet, download
  • hivision_modnet.onnx (24.7MB): Matting model with better adaptability for pure color background replacement, download
  • rmbg-1.4.onnx (176.2MB): Open-source matting model from BRIA AI, download and rename to rmbg-1.4.onnx
  • birefnet-v1-lite.onnx (224MB): Open-source matting model from ZhengPeng7, download and rename to birefnet-v1-lite.onnx

4. Face Detection Model Configuration (Optional)

| Extended Face Detection Model | Description | Documentation |
| --- | --- | --- |
| MTCNN | Offline face detection model, high-performance CPU inference, default model, lower detection accuracy | Use it directly after cloning this project |
| RetinaFace | Offline face detection model, moderate CPU inference speed (on the order of seconds), higher detection accuracy | Download and place it in the hivision/creator/retinaface/weights directory |
| Face++ | Online face detection API launched by Megvii, higher detection accuracy, official documentation | Usage Documentation |

5. Performance Reference

Test environment: Mac M1 Max, 64GB RAM, no GPU acceleration; test image resolutions: 512×715 (1) and 764×1146 (2).

| Model Combination | Memory Occupation | Inference Time (1) | Inference Time (2) |
| --- | --- | --- | --- |
| MODNet + mtcnn | 410MB | 0.207s | 0.246s |
| MODNet + retinaface | 405MB | 0.571s | 0.971s |
| birefnet-v1-lite + retinaface | 6.20GB | 7.063s | 7.128s |

6. GPU Inference Acceleration (Optional)

In the current version, the only model that supports NVIDIA GPU acceleration is birefnet-v1-lite; please ensure you have around 16GB of VRAM.

If you want to use NVIDIA GPU acceleration for inference, first make sure CUDA and cuDNN are installed, then install the matching onnxruntime-gpu version according to the onnxruntime-gpu documentation and the matching PyTorch version according to the PyTorch official website.

# For a machine with CUDA 12.x and cuDNN 8
# Installing torch is optional: if you cannot set up cuDNN yourself, installing torch (its CUDA wheels bundle cuDNN) may work around this
pip install onnxruntime-gpu==1.18.0
pip install torch --index-url https://download.pytorch.org/whl/cu121

After completing the installation, select the birefnet-v1-lite model to run inference with GPU acceleration.

TIP: CUDA is backward compatible. For example, if your machine has CUDA 12.6 installed but the highest version torch currently targets is 12.4, the cu124 build will still run on your machine.
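To verify that onnxruntime can actually see the GPU, you can list the available execution providers; CUDAExecutionProvider should appear in the output when the setup is correct:

python -c "import onnxruntime; print(onnxruntime.get_available_providers())"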


🚀 Run Gradio Demo

python app.py

Running the program will start a local web page where you can create and adjust ID photos interactively.


🚀 Python Inference

Core parameters:

  • -i: Input image path
  • -o: Output image path
  • -t: Inference type, options are idphoto, human_matting, add_background, generate_layout_photos
  • --matting_model: Portrait matting model weight selection
  • --face_detect_model: Face detection model selection

More parameters can be viewed by running python inference.py --help

1. ID Photo Creation

Input one photo to obtain one standard ID photo and one high-definition ID photo, both as 4-channel transparent PNGs.

python inference.py -i demo/images/test0.jpg -o ./idphoto.png --height 413 --width 295

2. Portrait Matting

Input one photo to obtain one 4-channel transparent PNG.

python inference.py -t human_matting -i demo/images/test0.jpg -o ./idphoto_matting.png --matting_model hivision_modnet

3. Add Background Color to Transparent Image

Input one 4-channel transparent PNG to obtain one 3-channel image with the background color added.

python inference.py -t add_background -i ./idphoto.png -o ./idphoto_ab.jpg -c 4f83ce -k 30 -r 1

4. Generate Six-Inch Layout Photo

Input one 3-channel photo to obtain one six-inch layout photo.

python inference.py -t generate_layout_photos -i ./idphoto_ab.jpg -o ./idphoto_layout.jpg --height 413 --width 295 -k 200

5. ID Photo Cropping

Input one 4-channel photo (the matted image) to obtain one standard ID photo and one high-definition ID photo, both as 4-channel transparent PNGs.

python inference.py -t idphoto_crop -i ./idphoto_matting.png -o ./idphoto_crop.png --height 413 --width 295

⚡️ Deploy API Service

Start Backend

python deploy_api.py

Request API Service

For detailed request methods and request examples, please refer to the API Documentation.
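For a quick smoke test with the backend running locally on port 8080, a request along these lines can be used; note that the /idphoto route and the field names below are illustrative assumptions, so check the API Documentation for the exact endpoints and parameters:

curl -X POST http://127.0.0.1:8080/idphoto \
    -F "input_image=@demo/images/test0.jpg" \
    -F "height=413" \
    -F "width=295"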


🐳 Docker Deployment

1. Pull or Build Image

Choose one of the following methods:

Method 1: Pull the latest image:

docker pull linzeyi/hivision_idphotos

Method 2: Directly build the image from Dockerfile:

After ensuring that at least one matting model weight file is placed in the hivision/creator/weights directory, execute the following in the project root directory:

docker build -t linzeyi/hivision_idphotos .

Method 3: Build using Docker Compose:

After ensuring that at least one matting model weight file is placed in the hivision/creator/weights directory, execute the following in the project root directory:

docker compose build

2. Run Services

Start Gradio Demo Service

Run the following command, and you can access it locally at http://127.0.0.1:7860.

docker run -d -p 7860:7860 linzeyi/hivision_idphotos

Start API Backend Service

docker run -d -p 8080:8080 linzeyi/hivision_idphotos python3 deploy_api.py

Start Both Services Simultaneously

docker compose up -d

Environment Variables

This project provides some additional configuration options, which can be set using environment variables:

| Environment Variable | Type | Description | Example |
| --- | --- | --- | --- |
| FACE_PLUS_API_KEY | Optional | Your API key obtained from the Face++ console | 7-fZStDJ···· |
| FACE_PLUS_API_SECRET | Optional | The secret corresponding to the Face++ API key | VTee824E···· |
| RUN_MODE | Optional | Running mode; the available option is beast (beast mode). In beast mode, the face detection and matting models do not release memory, so subsequent inferences are faster. At least 16GB of RAM is recommended. | beast |

Example of using environment variables in Docker:

docker run -d -p 7860:7860 \
    -e FACE_PLUS_API_KEY=7-fZStDJ···· \
    -e FACE_PLUS_API_SECRET=VTee824E···· \
    -e RUN_MODE=beast \
    linzeyi/hivision_idphotos

📖 Cite Projects

  1. MTCNN:
@software{ipazc_mtcnn_2021,
    author = {ipazc},
    title = {{MTCNN}},
    url = {https://github.com/ipazc/mtcnn},
    year = {2021},
    publisher = {GitHub}
}
  2. ModNet:
@software{zhkkke_modnet_2021,
    author = {ZHKKKe},
    title = {{ModNet}},
    url = {https://github.com/ZHKKKe/MODNet},
    year = {2021},
    publisher = {GitHub}
}

Q&A

1. How to modify preset sizes and colors?

  • Size: After modifying size_list_EN.csv, run app.py again. The first column is the size name, the second column is the height, and the third column is the width.
  • Color: After modifying color_list_EN.csv, run app.py again. The first column is the color name, and the second column is the Hex value.

2. How to Change the Watermark Font?

  1. Place the font file in the hivision/plugin/font folder.
  2. Change the font_file parameter value in hivision/plugin/watermark.py to the name of the font file.

3. How to Add Social Media Template Photos?

  1. Place the template image in the hivision/plugin/template/assets folder. The template image should be a 4-channel transparent PNG.
  2. Add the new template's information to the hivision/plugin/template/assets/template_config.json file, where width is the template image width (px), height is the template image height (px), anchor_points are the pixel coordinates of the four corners of the transparent area in the template, and rotation is the rotation angle of the transparent area relative to the vertical direction (>0 is counterclockwise, <0 is clockwise).
  3. Add the name of the latest template to the TEMPLATE_NAME_LIST variable in the _generate_image_template function of demo/processor.py.

4. How to Modify the Top Navigation Bar of the Gradio Demo?

  • Modify the demo/assets/title.md file.

📧 Contact Us

If you have any questions, please email zeyi.lin@swanhub.co


Contributors

Zeyi-Lin, SAKURA-CAT, Feudalman, swpfY, Kaikaikaifang, ShaohonChen, KashiwaByte


Thanks for support

Stargazers repo roster for @Zeyi-Lin/HivisionIDPhotos

Forkers repo roster for @Zeyi-Lin/HivisionIDPhotos

Star History Chart

License

This repository is licensed under the Apache-2.0 License.