AI-Gait-Visualizer is an interactive application that extracts human pose keypoints and gait metrics from short walking videos.
It uses YOLOv8-Pose for keypoint detection and a lightweight gait-analysis pipeline to estimate stride length proxies and cadence.
This application is built with Ultralytics YOLOv8, OpenCV, and Gradio. It allows you to upload a walking video and instantly visualize gait characteristics. You do not need to install anything or write a single line of code. Just open it, upload your video, and explore how AI interprets human motion.
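Under the hood, keypoint extraction is a single Ultralytics call. Here is a minimal sketch, assuming the small `yolov8n-pose.pt` checkpoint (the Space may use different weights):

```python
from ultralytics import YOLO

# Load a YOLOv8 pose checkpoint (downloaded automatically on first use).
model = YOLO("yolov8n-pose.pt")

# Run pose estimation on a single image or video frame.
result = model("walking_frame.jpg", verbose=False)[0]

# Pixel-space keypoints: shape (num_people, 17, 2), in COCO joint order.
print(result.keypoints.xy.shape)
```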
🟢 Try it directly here:
👉 https://huggingface.co/spaces/rmehmood786/AI-Gait-Visualizer
Upload a walking video → click Run Analysis → view the annotated video and download CSV metrics.
- 🎯 Pose Estimation: Real-time body keypoints using YOLOv8-Pose.
- 🦶 Gait Metrics: Calculates a stride proxy, step duration, and approximate cadence (see the sketch after this list).
- 📊 Downloadable Results: Exports a CSV file with per-frame gait data.
- 💡 Interactive Interface: Built with Gradio for smooth use.
- ☁️ Cloud-Ready: Runs directly on Hugging Face Spaces.
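How might the cadence number be derived? Below is a minimal, hypothetical sketch using SciPy's `find_peaks` on the horizontal ankle separation; the helper `cadence_from_ankles` and its `min_step_s` threshold are illustrative names, not the app's actual code:

```python
import numpy as np
from scipy.signal import find_peaks

def cadence_from_ankles(ankle_dx, fps, min_step_s=0.3):
    """Estimate cadence (steps/min) and a stride proxy from the
    per-frame horizontal distance between the two ankle keypoints."""
    sig = np.asarray(ankle_dx, dtype=float)
    sig -= sig.mean()  # center the signal so each step shows up as a peak
    # Peaks in ankle separation roughly mark individual steps;
    # `distance` enforces a minimum plausible step duration.
    peaks, _ = find_peaks(sig, distance=max(1, int(min_step_s * fps)))
    if len(peaks) < 2:
        return 0.0, 0.0
    step_durations = np.diff(peaks) / fps        # seconds per step
    cadence = 60.0 / step_durations.mean()       # steps per minute
    stride_proxy = float(sig.max() - sig.min())  # pixel-space proxy
    return cadence, stride_proxy
```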
| Component | Description |
|---|---|
| Model | YOLOv8-Pose (Ultralytics) |
| Frameworks | Gradio, OpenCV, NumPy, SciPy, Pandas |
| Deployment | Hugging Face Spaces |
| Language | Python 3.10+ |
Project structure:

```text
AI-Gait-Visualizer/
│
├── app/
│   └── app.py           # Gradio app entry point
│
├── src/
│   └── gait_viz.py      # Gait extraction + pose processing
│
├── requirements.txt     # Dependencies
├── app_file             # Tells Hugging Face to run app/app.py
└── README.md
```
How it works:

- The uploaded video is read frame by frame with OpenCV.
- YOLOv8-Pose detects body keypoints in each frame.
- Gait parameters (stride proxy, step duration, cadence) are computed from the keypoint motion.
- Processed frames are written back into an annotated video.
- The app displays the results and offers a downloadable CSV. (A condensed sketch of this loop follows.)
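The sketch below is illustrative only: the real logic lives in `src/gait_viz.py`, and the file names and `yolov8n-pose.pt` checkpoint are placeholder assumptions.

```python
import cv2
import pandas as pd
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("walk.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("annotated.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

rows, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = model(frame, verbose=False)[0]
    out.write(res.plot())                     # keypoint-annotated frame
    if res.keypoints is not None and len(res.keypoints.xy) > 0:
        xy = res.keypoints.xy[0]              # first detected person
        # COCO keypoint indices: 15 = left ankle, 16 = right ankle
        rows.append({"frame": frame_idx,
                     "ankle_dx": float(abs(xy[15, 0] - xy[16, 0]))})
    frame_idx += 1

cap.release()
out.release()
pd.DataFrame(rows).to_csv("gait_metrics.csv", index=False)
```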
Run locally:

```bash
git clone https://github.com/rmehmood786/AI-Gait-Visualizer.git
cd AI-Gait-Visualizer
pip install -r requirements.txt
python app/app.py
```

Then open the local Gradio link in your browser.
requirements.txt:

```text
gradio>=4.0.0
ultralytics>=8.2.0
opencv-python>=4.10.0
numpy>=1.24
pandas>=2.1
scipy>=1.11
matplotlib>=3.8
tqdm
rich
```
| Input | Output |
|---|---|
| *(demo walking video)* | *(annotated pose video)* |
- Integrate pose estimation with gait recognition for identity prediction.
- Visualize motion trajectories in 3D using Open3D or Matplotlib.
- Add real-time gait visualization using webcam input.
- Try comparing different YOLO variants (v8, v10) for accuracy vs. speed.
- Use temporal smoothing or Kalman filtering for stable keypoints (a simple smoothing sketch follows this list).
- Build a web-based dashboard for visualizing stride metrics interactively.
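For the smoothing idea above, even a plain exponential moving average goes a long way before reaching for a full Kalman filter. A hypothetical sketch (the function name and `alpha` default are illustrative, not part of this project):

```python
import numpy as np

def ema_smooth(keypoints, alpha=0.3):
    """Exponentially smooth a (frames, joints, 2) keypoint array.

    alpha in (0, 1]: smaller values smooth more but lag more.
    """
    kp = np.asarray(keypoints, dtype=float)
    smoothed = kp.copy()
    for t in range(1, len(kp)):
        # Blend the current detection with the previous smoothed pose.
        smoothed[t] = alpha * kp[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed
```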
Rashid Mehmood
AI Researcher — Computer Vision & Gait Analysis
Released under the MIT License.
You are free to use, modify, and share with proper attribution.
Built using Ultralytics YOLOv8, OpenCV, Gradio, and Hugging Face Spaces.