3⃣️ 3D generation & reconstruction
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
CUDA-accelerated rasterization of Gaussian splatting (see the conceptual compositing sketch after this list)
Single Image to 3D using Cross-Domain Diffusion for 3D Generation
Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
Generative Models by Stability AI
Lifting ControlNet for Generalized Depth Conditioning
[ECCV 2024 Oral] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation.
[CVPR 2024 Highlight] RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D. Live Demo: https://modelscope.cn/studios/Damo_XR_Lab/3D_AIGC
[CVPR 2024 Highlight] PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics
[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
[ECCV 2024] Single Image to 3D Textured Mesh in 10 seconds with Convolutional Reconstruction Model.
V3D: Video Diffusion Models are Effective 3D Generators
Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
[ECCV 2024] Implementation of latentSplat: Autoencoding Variational Gaussians for Fast Generalizable 3D Reconstruction
[SIGGRAPH'24] 2D Gaussian Splatting for Geometrically Accurate Radiance Fields
[CVPR 2024 Highlight] Official repository for the paper "3DGStream: On-the-fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos".
CraftsMan: High-fidelity Mesh Generation with 3D Native Diffusion and Interactive Geometry Refiner
DIAMOND (DIffusion As a Model Of eNvironment Dreams) is a reinforcement learning agent trained in a diffusion world model. NeurIPS 2024 Spotlight.
Code implementation of CVPR 2024 highlight paper "PhyScene: Physically Interactable 3D Scene Synthesis for Embodied AI"
[TPAMI 2025, NeurIPS 2024] Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels
"Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models", Hanwen Liang*, Yuyang Yin*, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N. Plataniotis, Yao Zhao, …
Code for paper: Freeplane: Unlocking Free Lunch in Triplane-Based Sparse-View Reconstruction Models
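The splatting-based entries above (the CUDA rasterizer, 2D/3D Gaussian Splatting, 3DGStream, and the Gaussian reconstruction models) share one core rendering operation: Gaussians are projected into screen space and alpha-composited front to back per pixel. Below is a minimal NumPy sketch of that compositing step, kept deliberately simple; the function name, argument shapes, and thresholds are illustrative assumptions, not the API of any repository listed here, and the real implementations are tile-based CUDA rasterizers rather than per-pixel Python loops.

```python
import numpy as np

def splat_pixel(pixel_xy, means2d, covs2d, opacities, colors):
    """Front-to-back alpha compositing of depth-sorted 2D Gaussians at one pixel.

    Illustrative sketch only (names/shapes assumed, not any repo's API):
    pixel_xy:  (2,)      pixel coordinate
    means2d:   (N, 2)    projected Gaussian centers, sorted near-to-far
    covs2d:    (N, 2, 2) projected 2D covariance matrices
    opacities: (N,)      per-Gaussian opacity in [0, 1]
    colors:    (N, 3)    per-Gaussian RGB
    """
    color = np.zeros(3)
    transmittance = 1.0
    for mu, cov, opac, rgb in zip(means2d, covs2d, opacities, colors):
        d = pixel_xy - mu
        # Evaluate the Gaussian falloff at this pixel
        power = -0.5 * d @ np.linalg.inv(cov) @ d
        alpha = min(0.99, opac * np.exp(power))
        if alpha < 1e-4:
            continue                    # negligible contribution, skip
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:        # early termination once nearly opaque
            break
    return color
```

The front-to-back ordering is what allows the early-termination check: once accumulated opacity saturates, the remaining (farther) Gaussians cannot affect the pixel, which is the main source of efficiency the CUDA rasterizers exploit at tile granularity.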