Neural Fields for Robotics Resources

A repo collating papers and other material related to neural radiance fields (NeRFs), neural scene representations, and associated works, with a focus on applications in robotics.

This repo is maintained by the Robotic Imaging Research Group at the University of Sydney. We are embedded within the Australian Centre for Robotics in the Faculty of Engineering.

To contribute, please see the how_to_add.md file.

Contents

Review Papers

[1] A. Tewari et al., “State of the Art on Neural Rendering,” Computer Graphics Forum, Jul. 2020, Accessed: Apr. 04, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14022

[2] Y. Xie et al., “Neural Fields in Visual Computing and Beyond,” Computer Graphics Forum, May 2022, Accessed: Apr. 04, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14505

[3] A. Tewari et al., “Advances in Neural Rendering,” arXiv:2111.05849 [cs], Nov. 2021, Accessed: Nov. 27, 2021. [Online]. Available: http://arxiv.org/abs/2111.05849

[4] M. Toschi, R. De Matteo, R. Spezialetti, D. De Gregorio, L. Di Stefano, and S. Salti, “ReLight my NeRF: A dataset for novel view synthesis and relighting of real world objects.” 2023. Available: https://arxiv.org/abs/2304.10448

[5] M. Tancik et al., “Nerfstudio: A modular framework for neural radiance field development,” arXiv preprint arXiv:2302.04264, 2023.

[6] M. Wallingford et al., “Neural radiance field codebooks,” arXiv preprint arXiv:2301.04101, 2023.

NeRF + Architecture Improvements

[1] M. Niemeyer, J. T. Barron, B. Mildenhall, M. S. M. Sajjadi, A. Geiger, and N. Radwan, “RegNeRF: Regularizing neural radiance fields for view synthesis from sparse inputs,” in Proc. IEEE conf. On computer vision and pattern recognition (CVPR), 2022. Available: https://arxiv.org/abs/2112.00724

[2] Z. Kuang, K. Olszewski, M. Chai, Z. Huang, P. Achlioptas, and S. Tulyakov, “NeROIC: Neural object capture and rendering from online image collections,” Computing Research Repository (CoRR), vol. abs/2201.02533, 2022.

[3] F. Wimbauer, S. Wu, and C. Rupprecht, “De-rendering 3D Objects in the Wild,” arXiv:2201.02279 [cs], Jan. 2022, Accessed: Jan. 23, 2022. [Online]. Available: http://arxiv.org/abs/2201.02279

[4] M. Kim, S. Seo, and B. Han, “InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering,” arXiv:2112.15399 [cs, eess], Dec. 2021, Accessed: Jan. 23, 2022. [Online]. Available: http://arxiv.org/abs/2112.15399

[5] Y. Jeong, S. Ahn, C. Choy, A. Anandkumar, M. Cho, and J. Park, “Self-Calibrating Neural Radiance Fields,” in ICCV, 2021.

[6] Y. Xiangli et al., “CityNeRF: Building NeRF at City Scale,” arXiv preprint arXiv:2112.05504, 2021.

[7] M. Tancik et al., “Block-NeRF: Scalable Large Scene Neural View Synthesis,” arXiv, 2022.

[8] K. Rematas, R. Martin-Brualla, and V. Ferrari, “ShaRF: Shape-conditioned Radiance Fields from a Single View.” 2021.

[9] B. Kaya, S. Kumar, F. Sarno, V. Ferrari, and L. Van Gool, “Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo.” 2021.

[10] Q. Xu et al., “Point-NeRF: Point-based Neural Radiance Fields,” arXiv preprint arXiv:2201.08845, 2022.

[11] C. Xie, K. Park, R. Martin-Brualla, and M. Brown, “FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling,” arXiv:2104.08418 [cs], Apr. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2104.08418

[12] A. Yu, R. Li, M. Tancik, H. Li, R. Ng, and A. Kanazawa, “PlenOctrees for Real-time Rendering of Neural Radiance Fields,” arXiv:2103.14024 [cs], Aug. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2103.14024

[13] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis,” in Computer Vision – ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds., in Lecture Notes in Computer Science. Cham: Springer International Publishing, 2020, pp. 405–421. doi: 10.1007/978-3-030-58452-8_24.

[14] A. Yu, V. Ye, M. Tancik, and A. Kanazawa, “pixelNeRF: Neural Radiance Fields From One or Few Images,” 2021, pp. 4578–4587. Accessed: Sep. 25, 2021. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2021/html/Yu_pixelNeRF_Neural_Radiance_Fields_From_One_or_Few_Images_CVPR_2021_paper.html

[15] R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi, J. T. Barron, A. Dosovitskiy, and D. Duckworth, “NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections,” 2021, pp. 7210–7219. Accessed: Sep. 25, 2021. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2021/html/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.html

[16] L. Yen-Chen, P. Florence, J. T. Barron, A. Rodriguez, P. Isola, and T.-Y. Lin, “INeRF: Inverting Neural Radiance Fields for Pose Estimation,” arXiv:2012.05877 [cs], Aug. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2012.05877

[17] C. Gao, Y. Shih, W.-S. Lai, C.-K. Liang, and J.-B. Huang, “Portrait Neural Radiance Fields from a Single Image,” arXiv:2012.05903 [cs], Apr. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2012.05903

[18] C.-H. Lin, W.-C. Ma, A. Torralba, and S. Lucey, “BARF: Bundle-Adjusting Neural Radiance Fields,” arXiv:2104.06405 [cs], Aug. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2104.06405

[19] K. Zhang, G. Riegler, N. Snavely, and V. Koltun, “NeRF++: Analyzing and Improving Neural Radiance Fields,” arXiv:2010.07492 [cs], Oct. 2020, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2010.07492

[20] C. Reiser, S. Peng, Y. Liao, and A. Geiger, “KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs,” arXiv:2103.13744 [cs], Aug. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2103.13744

[21] D. Rebain, W. Jiang, S. Yazdani, K. Li, K. M. Yi, and A. Tagliasacchi, “DeRF: Decomposed Radiance Fields,” 2021, pp. 14153–14161. Accessed: Sep. 25, 2021. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2021/html/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.html

[22] J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan, “Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields,” arXiv:2103.13415 [cs], Aug. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2103.13415

[23] P. Hedman, P. P. Srinivasan, B. Mildenhall, J. T. Barron, and P. Debevec, “Baking Neural Radiance Fields for Real-Time View Synthesis,” arXiv:2103.14645 [cs], Mar. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2103.14645

[24] Z. Wang, S. Wu, W. Xie, M. Chen, and V. A. Prisacariu, “NeRF--: Neural Radiance Fields Without Known Camera Parameters,” arXiv:2102.07064 [cs], Feb. 2021, Accessed: Sep. 25, 2021. [Online]. Available: http://arxiv.org/abs/2102.07064

[25] J. Li, Z. Feng, Q. She, H. Ding, C. Wang, and G. H. Lee, “MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis,” arXiv:2103.14910 [cs], Jul. 2021, Accessed: Oct. 11, 2021. [Online]. Available: http://arxiv.org/abs/2103.14910

Light Fields + Plenoxels

[1] J. Ost, I. Laradji, A. Newell, Y. Bahat, and F. Heide, “Neural Point Light Fields,” CoRR, vol. abs/2112.01473, 2021, Available: https://arxiv.org/abs/2112.01473

[2] M. Suhail, C. Esteves, L. Sigal, and A. Makadia, “Light field neural rendering.” 2021. Available: https://arxiv.org/abs/2112.09687

[3] A. Yu, S. Fridovich-Keil, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa, “Plenoxels: Radiance fields without neural networks.” 2021. Available: https://arxiv.org/abs/2112.05131

[4] V. Sitzmann, S. Rezchikov, W. T. Freeman, J. B. Tenenbaum, and F. Durand, “Light field networks: Neural scene representations with single-evaluation rendering,” in Proc. NeurIPS, 2021.

Dynamic Scenes + Rendering

[1] A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer, “D-NeRF: Neural Radiance Fields for Dynamic Scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2021, pp. 10318–10327.

[2] E. Tretschk, A. Tewari, V. Golyanik, M. Zollhöfer, C. Lassner, and C. Theobalt, “Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2021, pp. 12959–12970.

[3] Z. Li, S. Niklaus, N. Snavely, and O. Wang, “Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2021, pp. 6498–6508.

[4] K. Park et al., “Nerfies: Deformable Neural Radiance Fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2021, pp. 5865–5874.

[5] C.-Y. Weng, B. Curless, P. P. Srinivasan, J. T. Barron, and I. Kemelmacher-Shlizerman, “HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video,” arXiv, 2022.

[6] K. Park et al., “HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields,” ACM Trans. Graph., vol. 40, no. 6, Dec. 2021.

[7] G. Yang, M. Vo, N. Neverova, D. Ramanan, A. Vedaldi, and H. Joo, “BANMo: Building Animatable 3D Neural Models from Many Casual Videos,” arXiv preprint arXiv:2112.12761, 2021.

[8] S. Peng et al., “Animatable neural radiance fields for modeling dynamic human bodies,” in Proceedings of the IEEE/CVF international conference on computer vision (ICCV), 2021, pp. 14314–14323.

Speed Improvements

[1] T. Müller, A. Evans, C. Schied, and A. Keller, “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding,” arXiv:2201.05989, Jan. 2022.

[2] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan, “Depth-supervised NeRF: Fewer views and faster training for free,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), 2022.

[3] L. Li, Z. Shen, Z. Wang, L. Shen, and L. Bo, “Compressing volumetric radiance fields to 1 MB,” arXiv preprint arXiv:2211.16386, 2022.

[4] J. E. Johnson, R. Lguensat, R. Fablet, E. Cosme, and J. L. Sommer, “Neural fields for fast and scalable interpolation of geophysical ocean variables,” arXiv preprint arXiv:2211.10444, 2022.

[5] F. Wang, S. Tan, X. Li, Z. Tian, and H. Liu, “Mixed neural voxels for fast multi-view video synthesis,” arXiv preprint arXiv:2212.00190, 2022.

[6] P. Wang et al., “F2-NeRF: Fast neural radiance field training with free camera trajectories,” CVPR, 2023.

[7] S. Lee, G. Park, H. Son, J. Ryu, and H. J. Chae, “FastSurf: Fast neural RGB-D surface reconstruction using per-frame intrinsic refinement and TSDF fusion prior learning,” arXiv preprint arXiv:2303.04508, 2023.

[8] Y. Wang, Q. Han, M. Habermann, K. Daniilidis, C. Theobalt, and L. Liu, “NeuS2: Fast learning of neural implicit surfaces for multi-view reconstruction.” arXiv, 2022. doi: 10.48550/ARXIV.2212.05231.

Robotics Applications

[1] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint, “Learning Multi-Object Dynamics with Compositional Neural Radiance Fields,” arXiv preprint arXiv:2202.11855, 2022.

[2] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y. Lin, A. Rodriguez, and P. Isola, “NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields,” in IEEE International Conference on Robotics and Automation (ICRA), 2022.

[3] Z. Zhu et al., “NICE-SLAM: Neural Implicit Scalable Encoding for SLAM,” arXiv, 2021.

[4] M. Adamkiewicz et al., “Vision-Only Robot Navigation in a Neural Radiance World,” arXiv:2110.00168 [cs], Sep. 2021, Accessed: Oct. 11, 2021. [Online]. Available: http://arxiv.org/abs/2110.00168

[5] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison, “iMAP: Implicit Mapping and Positioning in Real-Time,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2021, pp. 6229–6238.