I am using the NeRF synthetic dataset with the LEGO scene. After running convert.py on a total of 300 images from the train and test folders of the LEGO dataset, only 15 images were saved in the images folder. I searched the GitHub issues and found this issue (#806), which suggested modifying the code. After making those changes, all 300 images were saved in the images folder.
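To double-check how many views actually made it through convert.py, a simple file count between the raw inputs and the undistorted outputs works. This is only a sketch: the folder names input and images match my layout, and data/lego is a placeholder path.

```python
import os

# Placeholder path to the scene folder that convert.py was run on.
scene = "data/lego"

def count_images(folder):
    # Count common image files in a folder.
    exts = (".png", ".jpg", ".jpeg")
    return sum(1 for f in os.listdir(folder) if f.lower().endswith(exts))

n_input = count_images(os.path.join(scene, "input"))    # raw images given to convert.py
n_output = count_images(os.path.join(scene, "images"))  # undistorted images for registered cameras
print(f"input images: {n_input}, images kept after convert.py: {n_output}")
```

Before the code change this reported 300 inputs but only 15 outputs; afterwards both counts were 300.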
I then ran train.py and, after training, render.py. However, some of the images were rendered incorrectly, as shown in the attached images.
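To see exactly which views fail, a per-image PSNR against the ground truth can be computed. This sketch assumes the renders/ and gt/ subfolders that render.py writes under the model output directory; output/lego/test/ours_30000 is a placeholder for my run.

```python
import os
import numpy as np
from PIL import Image

# Placeholder path: render.py output for the test split of my model.
render_dir = "output/lego/test/ours_30000"

def psnr(pred, gt):
    # Peak signal-to-noise ratio for images scaled to [0, 1].
    mse = np.mean((pred - gt) ** 2)
    return float("inf") if mse == 0 else -10.0 * np.log10(mse)

for name in sorted(os.listdir(os.path.join(render_dir, "renders"))):
    # Keep only RGB channels in case the ground truth has an alpha channel.
    pred = np.asarray(Image.open(os.path.join(render_dir, "renders", name)), dtype=np.float32)[..., :3] / 255.0
    gt = np.asarray(Image.open(os.path.join(render_dir, "gt", name)), dtype=np.float32)[..., :3] / 255.0
    print(f"{name}: {psnr(pred, gt):.2f} dB")
```

The broken views stand out with much lower PSNR than the rest.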
After training, I visualized the point clouds and noticed the results shown below. The first image is the input point cloud, and the second is the point cloud after 30,000 iterations of training.
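The two clouds can be inspected side by side with something like the snippet below. The paths are assumptions about my directory layout (the COLMAP-initialized points and the trained Gaussian point cloud), and reading the trained .ply this way only recovers the xyz positions, not the other Gaussian attributes.

```python
import open3d as o3d

# Placeholder paths: COLMAP-initialized points vs. Gaussian centers after training.
input_ply = "data/lego/sparse/0/points3D.ply"
trained_ply = "output/lego/point_cloud/iteration_30000/point_cloud.ply"

for label, path in [("input", input_ply), ("iteration 30000", trained_ply)]:
    pcd = o3d.io.read_point_cloud(path)
    # Print basic statistics to spot a degenerate or badly scaled reconstruction.
    print(f"{label}: {len(pcd.points)} points, "
          f"bounds {pcd.get_min_bound()} to {pcd.get_max_bound()}")
    o3d.visualization.draw_geometries([pcd], window_name=label)
```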
Before I modified convert.py, only 15 images were saved in the images folder. When I trained the model using only those 15 images, I observed that the point cloud was well aligned with the object. Could it be that the initial point cloud from COLMAP is highly inaccurate?
However, I don’t think that’s the case, because when I trained on the Blender dataset with random initialization (without running convert.py), training worked well.
I need help understanding why this is happening.
Thank you.