The Hong Kong University of Science and Technology
Lists (14)
- 3⃣️ 3D generation&reconstruction
- 🚀 Adversarial Attack: Adversarial attack resources.
- 🧗 Embodied AI: A list for Embodied AI.
- 🌟 Federated Learning: This is a repository list for federated learning algorithms.
- 👀 General deep learning: A general deep learning list that includes GANs, knowledge distillation, computer vision, NLP, etc.
- 🧐🧐🧐 General research and writing: This is a list of general research methods, writing skills, and information helpers!
- 🤩 Interesting computer works: A repository for some interesting computer works, such as obtaining information from websites, API usage (ChatGPT, etc.), and secret computer techniques.
- job job job
- 💥💥💥 LLMs
- 🔥🔥🔥 Multi-modal and diffusion: A repository for multi-modal and diffusion models.
- 🌛 Privacy attack and defense: Learning resources for privacy attacks and defenses, such as MIA and gradient inversion.
- 🤔 Reinforcement learning: This is a list of reinforcement learning resources.
- 🧠 Thinking and working: This is a list of findings in computer science, math, reading, and work.
- 🛸🛸🛸 World Model
Starred repositories
HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo
Wan: Open and Advanced Large-Scale Video Generative Models
Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers"
Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
Code for the project "MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos"
Official PyTorch implementation for "Large Language Diffusion Models"
[Three Years of Interviews, Five Years of Practice] An interview handbook for AI algorithm engineers. Covers AI-industry interview and written-exam experience and practical knowledge across AIGC, traditional deep learning, autonomous driving, machine learning, computer vision, natural language processing, reinforcement learning, embodied AI, the metaverse, AGI, and more.
🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes.
Collect some World Models for Autonomous Driving (and Robotic) papers.
A comprehensive list of papers about Robot Manipulation, including papers, codes, and related websites.
Official PyTorch Implementation of "History-Guided Video Diffusion"
Video Generation Foundation Models: https://saiyan-world.github.io/goku/
Official implementation of Continuous 3D Perception Model with Persistent State
Clean, minimal, accessible reproduction of DeepSeek R1-Zero
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Train transformer language models with reinforcement learning.
A fork to add multimodal model training to open-r1
Fully open reproduction of DeepSeek-R1
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
An open-source library for GPU-accelerated robot learning and sim-to-real transfer.
openvla/openvla (forked from TRI-ML/prismatic-vlms)
OpenVLA: An open-source vision-language-action model for robotic manipulation.
[ARXIV'25] GameFactory: Creating New Games with Generative Interactive Videos