# Speedup by merging repeated computation (#58)


Merged 7 commits on Jun 10, 2024.
README.md (4 additions, 1 deletion)

@@ -21,9 +21,12 @@
</p>

<p align="center">
-Gaussian Opacity Fields (GOF) enables geometry extraction with 3D Gaussians directly by identifying its level set. Our regularization improves surface reconstruction and we utilize Marching Tetrahedra for compact and adaptive mesh extraction.</p>
+Gaussian Opacity Fields (GOF) enables geometry extraction with 3D Gaussians directly by identifying its level set. Our regularization improves surface reconstruction and we utilize Marching Tetrahedra for adaptive and compact mesh extraction.</p>
<br>

# Updates

* **[2024.06.10]**: 🔥 Improve the training speed by 2x with [merged operations](https://github.com/autonomousvision/gaussian-opacity-fields/pull/58). 6 scenes in TNT dataset can be trained in ~24 mins and the bicycle scene in the Mip-NeRF 360 dataset can be trained in ~45 mins. Please pull the latest code and reinstall with `pip install submodules/diff-gaussian-rasterization` to use it.

# Installation
Clone the repository and create an anaconda environment using
evaluate_dtu_mesh.py (6 additions, 6 deletions)

@@ -130,13 +130,13 @@ def cull_mesh(cameras, mesh):
mesh.update_faces(face_mask)

# Taking the biggest connected component
-print("Taking the biggest connected component")
-components = mesh.split(only_watertight=False)
-areas = np.array([c.area for c in components], dtype=np.float32)
-mesh_clean = components[areas.argmax()]
-
-return mesh_clean
+# print("Taking the biggest connected component")
+# components = mesh.split(only_watertight=False)
+# areas = np.array([c.area for c in components], dtype=np.float32)
+# mesh_clean = components[areas.argmax()]
+
+# return mesh_clean
+return mesh

def evaluate_mesh(dataset : ModelParams, iteration : int, DTU_PATH : str):

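The culling step disabled in this diff kept only the biggest connected component, splitting the mesh with trimesh and keeping the piece with the largest surface area. The grouping idea can be sketched in pure Python with union-find over faces that share a vertex; `largest_component` is a hypothetical helper, not part of the repository, and it uses face count as a rough stand-in for trimesh's area criterion:

```python
# Sketch of "take the biggest connected component" without trimesh.
# Faces are (i, j, k) vertex-index triples; faces sharing a vertex are
# merged into one component via union-find.
def largest_component(faces):
    parent = {}

    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i, j, k in faces:
        parent[find(i)] = find(j)
        parent[find(j)] = find(k)

    # group face indices by the root of any of their vertices
    groups = {}
    for idx, face in enumerate(faces):
        groups.setdefault(find(face[0]), []).append(idx)

    # largest by face count (trimesh picks the largest *area* instead)
    return max(groups.values(), key=len)

# two triangles share an edge; the third triangle is isolated
faces = [(0, 1, 2), (1, 2, 3), (4, 5, 6)]
print(sorted(largest_component(faces)))  # → [0, 1]
```

The PR disables this step so evaluation scores the full extracted mesh rather than a single component.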
scene/gaussian_model.py (14 additions, 1 deletion)

@@ -196,7 +196,20 @@ def get_view2gaussian(self, viewmatrix):

# transpose view2gaussian to match glm in CUDA code
V2G = V2G.transpose(2, 1).contiguous()
-return V2G

+# precompute results to reduce computation and IO
+scales = self.get_scaling_with_3D_filter
+S_inv_square = 1.0 / (scales ** 2)
+R = V2G[:, :3, :3].transpose(1, 2)
+t2 = V2G[:, 3:, :3]
+
+C = torch.sum((t2 ** 2) * S_inv_square[:, None, :], dim=2)
+S_inv_square_R = S_inv_square[:, :, None] * R
+B = t2 @ S_inv_square_R
+Sigma = R.transpose(1, 2) @ S_inv_square_R
+merged = torch.cat([Sigma[:, :, 0], Sigma[:, 1:, 1], Sigma[:, 2:, 2], B.squeeze(), C], dim=1)
+
+return merged

@torch.no_grad()
def compute_3D_filter(self, cameras):
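The packed tensor holds 10 floats per Gaussian: the 6 unique entries of the symmetric Sigma = R^T S^{-2} R, the 3 entries of B = t^T S^{-2} R, and the scalar C = t^T S^{-2} t. That is exactly what is needed to evaluate the quadratic (R x + t)^T S^{-2} (R x + t) = x^T Sigma x + 2 B x + C without re-deriving R, t, and the scales per pixel, which is where the IO saving comes from. A NumPy sketch of that identity for a single Gaussian (all values and variable names here are illustrative, not the repository's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# One Gaussian: rotation R (from a random unit quaternion),
# translation t, and per-axis scales s -- all hypothetical values.
q = rng.normal(size=4)
q /= np.linalg.norm(q)
w, x, y, z = q
R = np.array([
    [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
    [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
    [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
])
t = rng.normal(size=3)
s = rng.uniform(0.5, 2.0, size=3)

S_inv_sq = np.diag(1.0 / s**2)

# Precomputed (merged) terms, mirroring the PR
Sigma = R.T @ S_inv_sq @ R          # 3x3 symmetric -> 6 unique entries
B = t @ S_inv_sq @ R                # 3 entries
C = t @ S_inv_sq @ t                # 1 entry
merged = np.concatenate([Sigma[:, 0], Sigma[1:, 1], Sigma[2:, 2], B, [C]])
assert merged.shape == (10,)        # 10 floats instead of R, t, and s

# The packed quadratic matches the direct evaluation at any point p
p = rng.normal(size=3)
direct = (R @ p + t) @ S_inv_sq @ (R @ p + t)
packed = p @ Sigma @ p + 2 * B @ p + C
assert np.isclose(direct, packed)
```

Caching the three products once per Gaussian instead of recomputing them per ray-Gaussian intersection is what the PR credits for the roughly 2x training speedup.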
scripts/show_nerfsynthetic.py (33 additions, new file)

@@ -0,0 +1,33 @@
import os
import json
import numpy as np
import trimesh

scenes = ["chair", "drums", "ficus", "hotdog", "lego", "materials", "mic", "ship"]

output_dirs = ["exp_nerf_synthetic/release"]

results = []
for scene in scenes:
    print(scene)
    for output in output_dirs:
        json_file = f"{output}/{scene}/results.json"
        with open(json_file) as f:
            data = json.load(f)
        # prefer the 30K-iteration metrics, fall back to 7K
        data = data['ours_30000'] if 'ours_30000' in data else data['ours_7000']

        iteration = "30K iter: "
        point_cloud_file = f"{output}/{scene}/point_cloud/iteration_30000/point_cloud.ply"
        if not os.path.exists(point_cloud_file):
            point_cloud_file = f"{output}/{scene}/point_cloud/iteration_7000/point_cloud.ply"
            iteration = "7K iter: "
        print(iteration, data.values(), trimesh.load(point_cloud_file).vertices.shape)
        results.append(data['PSNR'])

# one row per scene, one column per output directory
results = np.array(results).reshape(8, -1)

print("===================")
print("PSNR:")
print(results)
print("mean:")
print(results.mean(axis=0))
