[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance.
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
Video classification tools using 3D ResNet
[ICLR 2022] Official implementation of UniFormer
Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
[CVPR 2021] TDN: Temporal Difference Networks for Efficient Action Recognition
Video classification using a two-stream CNN
Classify videos into various classes using the Keras library with TensorFlow as the back end.
deep learning sex position classifier
[ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
[ICLR 2022] TAda! Temporally-Adaptive Convolutions for Video Understanding. This codebase provides solutions for video classification, video representation learning and temporal detection.
Tutorial on 3D convolutional networks
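For readers new to the approach behind the 3D-CNN entries above, the following is a minimal sketch (not code from any listed repository) of a video classifier built from Conv3D layers in Keras; the clip shape and class count are illustrative assumptions.

```python
# Minimal 3D-CNN video classifier sketch in Keras (TensorFlow backend).
# Clip shape and class count are assumptions, not taken from any repo above.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 101               # e.g. UCF101; adjust to your dataset
CLIP_SHAPE = (16, 112, 112, 3)  # 16 RGB frames per clip (assumed)

model = models.Sequential([
    layers.Input(shape=CLIP_SHAPE),
    layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),   # pool only spatially at first
    layers.Conv3D(128, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=2),           # pool space and time
    layers.Conv3D(256, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```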
Explore Action Recognition
Appearance-and-Relation Networks
Exploration of different solutions to action recognition in video, using neural networks implemented in PyTorch.
Classify UCF101 videos one frame at a time with a CNN (InceptionV3)
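The single-frame baseline described above can be sketched as follows: run a pretrained InceptionV3 on each frame and average the per-frame predictions. This is a hedged illustration, not the repository's code; the frame size, label count, and `predict_video` helper are assumptions.

```python
# Single-frame video classification sketch: per-frame InceptionV3 + score averaging.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

NUM_CLASSES = 101  # e.g. UCF101 (assumed)

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # fine-tune only the classification head

frame_model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

def predict_video(frames: np.ndarray) -> int:
    """frames: (num_frames, 299, 299, 3) uint8 array of sampled video frames.
    Returns the class index obtained by averaging per-frame softmax scores."""
    x = preprocess_input(frames.astype("float32"))
    probs = frame_model.predict(x, verbose=0)   # (num_frames, NUM_CLASSES)
    return int(probs.mean(axis=0).argmax())
```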
The easiest way to fine-tune Hugging Face video classification models
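As a rough sketch of that workflow (not the linked repository's code), a Hugging Face video classification checkpoint can be loaded with a fresh classification head and run on a clip; the checkpoint name, label set, and dummy clip below are placeholder assumptions.

```python
# Hedged sketch: load a video classification checkpoint for fine-tuning.
import numpy as np
import torch
from transformers import AutoImageProcessor, AutoModelForVideoClassification

checkpoint = "MCG-NJU/videomae-base"            # assumed example checkpoint
labels = ["archery", "bowling", "surfing"]      # placeholder label set

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForVideoClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
    ignore_mismatched_sizes=True,  # attach a new head sized for our labels
)

# One dummy clip: 16 RGB frames of 224x224, as a list of HxWxC arrays.
clip = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
        for _ in range(16)]
inputs = processor(clip, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits             # shape: (1, num_labels)
print(labels[int(logits.argmax(-1))])
```

From here, the model would normally be fine-tuned on labeled clips (e.g. with the `Trainer` API or a plain PyTorch loop) before being used for prediction.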
Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Surveillance Perspective Human Action Recognition Dataset: 7759 videos from 14 action classes, aggregated from multiple sources, all cropped spatio-temporally and filmed from a surveillance-camera-like position.
Official repository for "Self-Supervised Video Transformer" (CVPR'22)