- I'm currently working on multi-modal transformers and multi-task learning
- I'm currently learning to play table tennis
- How to reach me: muhammad.maaz@mbzuai.ac.ae
- Abu Dhabi, UAE
- https://www.muhammadmaaz.com
Pinned Repositories
- facebookresearch/perception_models: State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
- mbzuai-oryx/Video-ChatGPT: [ACL 2024] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted fo…
- mbzuai-oryx/groundingLMM: [CVPR 2024] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
- mbzuai-oryx/VideoGPT-plus: Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding"
- mbzuai-oryx/LLaVA-pp: LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)