Question about the render feature #7
Comments
As far as I understand, in the rasterization process they use shared memory to accumulate the collected features/colors and to compute the gradients. Shared memory is limited by the specific GPU. In this paper, they instead dynamically allocate a CUDA array in global memory as a cache for the collected features, which avoids the shared-memory limit (of course, this is a tradeoff between the required feature dimension and the shared-memory constraint). You can see the implementation here: feature-3dgs/submodules/diff-gaussian-rasterization/cuda_rasterizer/rasterizer_impl.cu Line 398 in 9e714ff
If I have misunderstood, please correct me.
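To see why shared memory caps the feature dimension in the first place, here is a back-of-the-envelope sketch. The numbers are assumptions based on typical values (48 KB of shared memory per block, a 16x16 tile collecting one Gaussian per thread per batch, and roughly 28 bytes of per-Gaussian bookkeeping as in the original diff-gaussian-rasterization kernels); the exact figures in the repo may differ:

```python
# Rough estimate of the maximum feature dimension that fits in
# per-block shared memory for a 3DGS-style tile-based rasterizer.
# All constants below are assumed typical values, not read from the repo.

SHARED_MEM_BYTES = 48 * 1024    # common per-block shared-memory limit
BLOCK_SIZE = 16 * 16            # Gaussians collected per batch (one per thread)
BOOKKEEPING_BYTES = 4 + 8 + 16  # per Gaussian: id (int) + xy (float2) + conic/opacity (float4)
FLOAT_BYTES = 4                 # one float per feature channel

def max_feature_dim(shared_bytes=SHARED_MEM_BYTES,
                    block=BLOCK_SIZE,
                    bookkeeping=BOOKKEEPING_BYTES):
    """Largest feature dimension whose per-batch cache still fits in shared memory."""
    remaining = shared_bytes - block * bookkeeping
    return remaining // (block * FLOAT_BYTES)

print(max_feature_dim())  # prints 41 under these assumptions, close to the ~40 limit reported below
```

Caching the collected features in a dynamically allocated global-memory array removes this per-block budget entirely, which is why 256 dimensions become feasible (at the cost of global-memory bandwidth).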
graphdeco-inria/gaussian-splatting#41 (comment) You can try this: adding the "-Xcompiler -fno-gnu-unique" option in submodules/diff-gaussian-rasterization/setup.py (line 29) resolves the illegal memory access error during training.
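For reference, a sketch of where that flag would go in the extension's setup.py. This is an illustrative fragment, not the actual file: the package name, source list, and surrounding arguments are placeholders, and the exact line may differ in your checkout; only the two added `nvcc` arguments are the point.

```python
# Hypothetical excerpt of submodules/diff-gaussian-rasterization/setup.py.
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name="diff_gaussian_rasterization",
    ext_modules=[
        CUDAExtension(
            name="diff_gaussian_rasterization._C",
            sources=[...],  # existing .cu/.cpp sources, unchanged
            extra_compile_args={
                "nvcc": [
                    # workaround for the illegal-memory-access crash:
                    "-Xcompiler", "-fno-gnu-unique",
                ],
            },
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

After editing, rebuild the extension (e.g. `pip install ./submodules/diff-gaussian-rasterization`) so the flag takes effect.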
Thanks very very very much.
Thanks
Hello,
I previously attempted to render features with 256 dimensions, but CUDA reported insufficient shared memory, allowing a maximum of only 40 dimensions to be rendered. May I ask what changes you made to enable rendering 256 dimensions?