Homography-based perspective video transition using LoFTR feature matching and DeepLabV3 person segmentation for seamless foreground-aware stitching between clips.
This project generates a perspective-aligned transition between two videos so that a hard cut appears as a continuous camera movement.
It works by:
- Matching features between the last frame of Video A and a reference frame of Video B using LoFTR
- Estimating a homography with RANSAC
- Warping Video B into Video A’s perspective
- Segmenting people using DeepLabV3
- Blending foreground and background with temporal fade control
- Rendering a final stitched output
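The matching, homography estimation, and warping steps correspond roughly to the sketch below, which assumes kornia's LoFTR wrapper and OpenCV's RANSAC homography; the frame file names and the reprojection threshold are illustrative placeholders, not the project's exact code.

```python
# Minimal sketch: match the last frame of Video A against a reference frame of
# Video B with LoFTR, estimate a homography with RANSAC, and warp B into A's view.
import cv2
import numpy as np
import torch
import kornia.feature as KF

device = "cuda" if torch.cuda.is_available() else "cpu"
matcher = KF.LoFTR(pretrained="indoor_new").to(device).eval()

def to_gray_tensor(img_bgr: np.ndarray) -> torch.Tensor:
    """Convert a BGR frame to the (1, 1, H, W) float tensor LoFTR expects."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return torch.from_numpy(gray)[None, None].to(device)

# Hypothetical file names for illustration; ideally resize so both dimensions
# are multiples of 8 before matching.
frame_a = cv2.imread("last_frame_a.png")
frame_b = cv2.imread("ref_frame_b.png")

with torch.no_grad():
    out = matcher({"image0": to_gray_tensor(frame_a), "image1": to_gray_tensor(frame_b)})

pts_a = out["keypoints0"].cpu().numpy()
pts_b = out["keypoints1"].cpu().numpy()

# Homography mapping B's points onto A's points, robust to outliers via RANSAC.
H, inliers = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, ransacReprojThreshold=3.0)

# Warp B into A's perspective so the cut reads as one continuous camera move.
h, w = frame_a.shape[:2]
warped_b = cv2.warpPerspective(frame_b, H, (w, h))
```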
Requirements:

- Python 3.10+
- torch>=2.0
- torchvision>=0.15
- opencv-python>=4.8
- kornia>=0.7
- numpy>=1.23

Install the dependencies with `pip install -r requirements.txt`.

For GPU support, install PyTorch with CUDA:

`pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121`
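To verify that the CUDA build is active, a quick check using standard PyTorch calls (not project-specific code):

```python
# Sanity check: confirm PyTorch was installed with CUDA support and a GPU is visible.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```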
Example usage with explicit options:

`python video_stitcher.py -a path/to/videoA.mp4 -b path/to/videoB.mp4 -o output.mp4 --overlap 42 --loftr-max-dim 1152 --fade-in 10 --fade-out 10`

Or simply run with the defaults:

`python video_stitcher.py -a path/to/videoA.mp4 -b path/to/videoB.mp4 -o output.mp4`
| Argument | Short | Type | Default | Description |
|---|---|---|---|---|
| --video-a | -a | str | pre.mp4 | Path to the first video clip (Outgoing) |
| --video-b | -b | str | post.mp4 | Path to the second video clip (Incoming) |
| --output | -o | str | transition.mp4 | Output video path |
| --overlap | | int | 40 | Number of frames to overlap/transition between the videos |
| --loftr-max-dim | | int | 1152 | Maximum image dimension used for LoFTR feature matching |
| --fade-in | | int | 10 | Number of frames to fade in the foreground at the start |
| --fade-out | | int | 10 | Number of frames to fade out the pre-clip at the end |
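The fade arguments control how the incoming foreground and the outgoing pre-clip are weighted across the overlap window. The sketch below is an illustrative guess at linear alpha ramps, not the script's exact blending code; the function name and formulas are assumptions.

```python
# Illustrative sketch (assumed, not the project's exact code): linear alpha ramps
# for --fade-in / --fade-out applied across an --overlap window of N frames.
import numpy as np

def fade_weights(overlap: int, fade_in: int, fade_out: int) -> tuple[np.ndarray, np.ndarray]:
    """Per-frame alphas: foreground ramps up at the start, pre-clip ramps down at the end."""
    t = np.arange(overlap)
    fg_alpha = np.clip(t / max(fade_in, 1), 0.0, 1.0)                    # foreground fade-in
    pre_alpha = np.clip((overlap - 1 - t) / max(fade_out, 1), 0.0, 1.0)  # pre-clip fade-out
    return fg_alpha, pre_alpha

fg, pre = fade_weights(overlap=40, fade_in=10, fade_out=10)
# A per-frame blend could then look like: pre[i] * frame_a + (1 - pre[i]) * warped_b,
# with the segmented person composited on top at weight fg[i].
```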
Use video_stitcher_layers.py to export separate, synced output layers for compositing.
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)>
If you encounter this error while downloading `loftr_indoor_ds_new.ckpt`, download the checkpoint manually instead (the automatic download fails because SSL certificate verification fails).

Steps:
- Download `indoor_ds_new.ckpt` from this drive link: Google Drive
- Rename the file from `indoor_ds_new.ckpt` to `loftr_indoor_ds_new.ckpt`
- Paste the file in `C:\Users\user\.cache\torch\hub\checkpoints`

That's it, this resolves the error. (This issue does not affect the other models.)
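If you are not on Windows, or your cache directory differs, the snippet below prints the correct checkpoints folder using the standard torch.hub location (generic PyTorch behaviour, not project-specific code):

```python
# Locate the torch hub checkpoints directory so the renamed checkpoint can be
# copied there on any OS, instead of hard-coding the Windows path above.
import os
import torch

checkpoints_dir = os.path.join(torch.hub.get_dir(), "checkpoints")
os.makedirs(checkpoints_dir, exist_ok=True)
print("Place loftr_indoor_ds_new.ckpt in:", checkpoints_dir)
```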
| PRE CLIP | POST CLIP | STITCHED CLIP |
|---|---|---|
Tip: You can change the segmentation model to generate better masks. Check Here
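For reference, a minimal sketch of how a torchvision segmentation model can be swapped in and used to extract a person mask; deeplabv3_resnet101 and the resizing details here are illustrative assumptions, not necessarily what video_stitcher.py uses.

```python
# Sketch: load a torchvision DeepLabV3 variant and extract a soft person mask.
import cv2
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet101, DeepLabV3_ResNet101_Weights

weights = DeepLabV3_ResNet101_Weights.DEFAULT
model = deeplabv3_resnet101(weights=weights).eval()
preprocess = weights.transforms()

def person_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a float mask in [0, 1] marking pixels classified as 'person' (VOC class 15)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))     # (3, H', W') normalized
    with torch.no_grad():
        logits = model(tensor.unsqueeze(0))["out"][0]               # (21, H', W') class scores
    labels = logits.argmax(0).byte().cpu().numpy()                  # per-pixel class ids
    mask = (labels == 15).astype(np.float32)                        # 15 = person in VOC labels
    return cv2.resize(mask, (frame_bgr.shape[1], frame_bgr.shape[0]))
```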
Copyright (c) 2026 Akash Bora
Get more video effects at www.akascape.com 👈





