Broken Offline NeRFCapture #59

Open
guaMass opened this issue Jan 17, 2024 · 5 comments
Labels: enhancement (New feature or request), question (Further information is requested)

Comments

guaMass commented Jan 17, 2024

First of all, thanks for sharing your paper and code; they look really cool.

My approach

I tried to run the code on my own computer, but I don't know anything about cyclonedds at all, so I could only try to run SplaTAM by manually placing the rgb, depth, and transforms.json files captured by NeRFCapture on my computer. Here is what I did:

  1. Transfer the files captured by NeRFCapture to your computer and unzip them at <SplaTAM>/experiments/iPhone_Captures/offline_demo.
  2. Create new rgb and depth folders; the file structure should now look like:
offline_demo
├───depth
├───rgb
├───images
└───transforms.json
  3. Go to the images folder and run the following script; it moves the image files into the depth and rgb folders respectively:
import os
import shutil

# get current path
current_dir = os.getcwd()

# paths to the target folders created in step 2
depth_dir = os.path.join(current_dir, '..', 'depth')
rgb_dir = os.path.join(current_dir, '..', 'rgb')

# iterate over all files in the current folder
for filename in os.listdir(current_dir):
    if filename.endswith('.png'):
        if '.depth' in filename:
            # move it to depth folder
            shutil.move(os.path.join(current_dir, filename), depth_dir)
        else:
            # move it to rgb folder
            shutil.move(os.path.join(current_dir, filename), rgb_dir)
  4. Go to the depth folder and convert each three-channel depth map to a single channel (see the sanity checks after step 6):
import os
import cv2

folder_path = os.path.dirname(os.path.abspath(__file__))

for filename in os.listdir(folder_path):
    if filename.endswith(".jpg") or filename.endswith(".png"):
        image_path = os.path.join(folder_path, filename)
        image = cv2.imread(image_path)
        # collapse the three channels into a single grayscale channel
        gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        print(f"Image: {filename}, Dimensions: {gray_img.ndim}")
        # overwrite the original file with the single-channel image
        cv2.imwrite(image_path, gray_img)
  5. Rename the depth files from <num>.depth.png to <num>.png.
  6. Return to the offline_demo folder and run the following to update the image paths in transforms.json (a couple of sanity checks follow this script):
import json

with open('transforms.json', 'r') as f:
    data = json.load(f)
for frame in data['frames']:
    # print(frame)
    if '.png' not in frame['file_path']:
        frame['file_path'] += '.png'
        frame['file_path'] = frame['file_path'].replace('images', 'rgb')
        print(frame['file_path'])
    if '.depth.png' in frame['depth_path']:
        frame['depth_path'] = frame['depth_path'].replace('.depth.png', '.png')
        frame['depth_path'] = frame['depth_path'].replace('images', 'depth')
        print(frame['depth_path'])
with open('transforms.json', 'w') as f:
    json.dump(data, f, indent=4)
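
Before running SplaTAM on the result, two quick sanity checks can catch the most common problems (this is a minimal sketch added for illustration, not one of the original steps; it assumes OpenCV is installed and that it is run from the offline_demo folder): every path referenced in transforms.json should exist on disk, and the depth PNGs should still carry usable values. Note that cv2.cvtColor in step 4 produces an 8-bit grayscale image with values in 0-255, so if SplaTAM expects 16-bit metric depth, that conversion is a likely source of the near-zero depth described below.

import json
import os

import cv2

# run from the offline_demo folder, where transforms.json lives
with open('transforms.json', 'r') as f:
    data = json.load(f)

# 1) every referenced file should exist on disk
missing = [p for frame in data['frames']
           for p in (frame['file_path'], frame['depth_path'])
           if not os.path.exists(p)]
print(f"{len(missing)} missing paths")

# 2) inspect the first depth file: dtype, channel count, and value range
depth = cv2.imread(data['frames'][0]['depth_path'], cv2.IMREAD_UNCHANGED)
print("dtype:", depth.dtype)   # uint8 after the grayscale conversion in step 4
print("shape:", depth.shape)   # (H, W) means single-channel
print("range:", depth.min(), "-", depth.max())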

Question

The code runs without any problems, but unfortunately the results are very poor (even though I have GT Poses for Tracking turned on).
[screenshots attached]
Without GT poses turned on, the generated Gaussian clusters are all squished together, and both Ground Truth Depth and Rasterized Depth in the eval show 0. I'm wondering what I'm doing wrong. One possibility I can think of is that I'm producing the wrong depth maps, because the same code behaves fine when running on the Replica data.
[screenshot attached]
Do you have any suggestions for adjusting the NeRFCapture depth maps, or any other way to get depth from images/videos?
Appreciate it.

Nik-V9 (Contributor) commented Jan 21, 2024

Hi, thanks for trying out our code!

As suggested by this comment, the offline mode in NeRFCapture is broken and tends to give erroneous depth maps: #7 (comment).

I believe that this is the problem you are currently facing. We don't have an alternative for the NeRFCapture setup yet. We hope to release an updated variant of the demo soon.

Another possibility is to use apps such as Record3D (however, the input format might have to be looked into).

Nik-V9 changed the title from "Poor performance in using offline NeRFCapture data on Win11 with RTX4080" to "Broken Offline NeRFCapture" on Jan 21, 2024
Nik-V9 added the labels question (Further information is requested) and enhancement (New feature or request) on Jan 21, 2024
Zhangyangrui916 commented Feb 26, 2024

I tried to fix the offline mode of NeRFCapture (https://github.com/Zhangyangrui916/NeRFCapture).
It outputs depth as a raw binary buffer.
This is the script I ran on my PC to process it:

import json
import os

import cv2
import numpy as np

def readDepthImage(file_path):
    # each .depth file is a raw float32 buffer, row-major
    with open(file_path, 'rb') as f:
        data = f.read()
    arr = np.frombuffer(data, dtype=np.float32)
    arr = arr.reshape(transformsJson['depth_map_height'], transformsJson['depth_map_width'])
    return arr  # 1 unit = 1 meter

directory = 'Z:/240223175130/'

# read the transforms.json file
with open(directory + 'transforms.json') as f:
    transformsJson = json.load(f)

os.mkdir(directory + 'rgb/')
os.mkdir(directory + 'depth/')
paths = os.listdir(directory + 'images/')
for path in paths:
    if path.endswith('.png'):
        os.rename(directory + 'images/' + path, directory + 'rgb/' + path)
    elif path.endswith('.depth'):
        depthMap = readDepthImage(directory + 'images/' + path)
        # scale metres into the 16-bit PNG range (10 m maps to 65535)
        save_depth = (depthMap*65535/float(10)).astype(np.uint16)
        cv2.imwrite(directory + 'depth/' + path.replace('.depth', '.png'), save_depth)

for frame in transformsJson['frames']:
    frame['file_path'] = frame['file_path'].replace('images', 'rgb')
    frame['file_path'] += '.png'
    frame['depth_path'] = frame['depth_path'].replace('.depth.png', '.png')
    frame['depth_path'] = frame['depth_path'].replace('images', 'depth')

with open(directory + 'transforms.json', 'w') as f:
    json.dump(transformsJson, f)

This is what I get: [image attached]
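
To double-check the converted PNGs (a small sketch added here for illustration, reusing the directory and the 65535/10 scaling from the script above; the frame name is hypothetical), read one back and map it to metres:

import cv2
import numpy as np

directory = 'Z:/240223175130/'
# '0.png' is a hypothetical frame name; use any file written to depth/ above
png = cv2.imread(directory + 'depth/0.png', cv2.IMREAD_UNCHANGED)
depth_m = png.astype(np.float32) * 10.0 / 65535.0  # invert the save scaling (1 unit = 1 m)
print("depth range in metres:", depth_m.min(), "-", depth_m.max())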

guaMass (Author) commented Mar 14, 2024

For anyone still struggling with offline data, please use https://spectacularai.github.io/docs/sdk/tools/nerf.html instead of NeRFCapture.

guaMass closed this as completed on Mar 14, 2024
Nik-V9 (Contributor) commented Mar 15, 2024

Thanks for sharing this, it looks very cool! We will update our README to reflect this.

Yiiii19 commented Sep 22, 2024

For anyone still struggling with offline data, please use https://spectacularai.github.io/docs/sdk/tools/nerf.html instead of NeRFCapture.

Hi, I also noticed that the NeRFCapture offline mode is broken. I tested the Spectacular AI app with Nerfstudio and it works.
For SplaTAM, do you need to modify the dataloader or anything else in SplaTAM to run the data recorded with Spectacular AI? Thanks!
