Hanzhi Chang*1,
Ruijie Zhu*1,2,
Wenjie Chang1,
Mulin Yu2,
Yanzhe Liang1,
Jiahao Lu1,
Zhuoyuan Li1,
Tianzhu Zhang1
1 USTC
2 Shanghai AI Lab
AAAI 2026
Overview of MeshSplat. Taking a pair of images as input, MeshSplat first applies a multi-view backbone to extract feature maps for each view. It then constructs per-view cost volumes via plane sweeping. These cost volumes produce coarse depth maps, which are unprojected into 3D point clouds supervised by our proposed Weighted Chamfer Distance Loss. Next, the cost volumes and feature maps are fed into our Gaussian prediction network, which consists of a depth refinement network and a normal prediction network, to obtain pixel-aligned 2DGS. Finally, we perform novel view synthesis and reconstruct the scene mesh from these 2DGS.
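To illustrate the point-cloud supervision step, here is a minimal sketch of a weighted Chamfer distance between an unprojected predicted point cloud and a reference one. The per-point weighting scheme shown here (a user-supplied weight per predicted point) is an assumption for illustration; the exact weighting used in MeshSplat is defined in the paper.

```python
import numpy as np

def weighted_chamfer_distance(pred, gt, weights=None):
    """Sketch of a weighted Chamfer distance (illustrative, not the paper's exact loss).

    pred:    (N, 3) predicted 3D points (e.g., unprojected from a coarse depth map)
    gt:      (M, 3) reference 3D points
    weights: (N,) optional per-predicted-point weights
    """
    if weights is None:
        weights = np.ones(len(pred))
    # Pairwise squared distances, shape (N, M).
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(axis=-1)
    # pred -> gt term: nearest-neighbor distance per predicted point, weighted.
    term_pred = (weights * d2.min(axis=1)).sum() / weights.sum()
    # gt -> pred term: nearest-neighbor distance per reference point, unweighted.
    term_gt = d2.min(axis=0).mean()
    return term_pred + term_gt
```

For identical point clouds the loss is zero; down-weighting unreliable points (e.g., low cost-volume confidence) reduces their influence on the pred-to-gt term.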
TODO: The full code might be released in several months. Stay tuned!
If you find our work useful, please cite:
@article{chang2025meshsplat,
title={MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting},
author={Hanzhi Chang and Ruijie Zhu and Wenjie Chang and Mulin Yu and Yanzhe Liang and Jiahao Lu and Zhuoyuan Li and Tianzhu Zhang},
journal={arXiv preprint arXiv:2508.17811},
year={2025}
}

Our code is based on MVSplat and 2DGS. We thank the authors for their excellent work!
