
How to use captured depth images? #10

Open
jgnooo opened this issue Jul 11, 2023 · 8 comments

@jgnooo

jgnooo commented Jul 11, 2023

Hello,
Thank you for the great work!!

I'm using the NeRFCapture app to obtain RGB-D images and poses.
However, when I generate a point cloud from these data, the result looks wrong.

[screenshot: distorted point cloud]

I think this is because of the depth image scale.
How can I resolve it, and what is the scale of the depth map from NeRFCapture?

Thank you 🥹
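
For context, generating a point cloud from an RGB-D frame is a standard pinhole back-projection. Below is a minimal sketch, assuming the depth map is metric (ARKit reports depth in meters), has already been resized to the color resolution, and that fx, fy, cx, cy are the intrinsics saved with the capture:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop zero/invalid depth pixels
```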

@jc211
Owner

jc211 commented Jul 11, 2023

The depth image is upscaled quite a bit; the native resolution is something on the order of 250x120 (I'm not sure of the exact number). I think what you are seeing is mostly the interpolation.
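
One way to work around that on the processing side is to discard pixels that sit on large depth discontinuities, where upscaling invents intermediate values. A rough sketch (the 5 cm threshold is an arbitrary assumption, not anything NeRFCapture specifies):

```python
import numpy as np

def mask_depth_edges(depth, max_jump=0.05):
    """Zero out pixels whose depth jumps by more than max_jump (meters)
    from a neighbour -- these are mostly values invented by upscaling."""
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    out = depth.copy()
    out[(dx > max_jump) | (dy > max_jump)] = 0.0  # mark as invalid
    return out
```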

@garylidd

Hi, I have a problem converting the captured depth image to real depth.
The depth map is three times larger than the color image, and I don't know how to extract the true depth correctly. I have attached the relevant images for reference; can you help me with that?
ref.zip

@jc211
Owner

jc211 commented Aug 31, 2023

Looks like the offline depth is not working properly. I'll try and fix it soon. Sorry about that.

@oseiskar

oseiskar commented Dec 7, 2023

As a workaround, we (Spectacular AI) have released an alternative offline-focused iOS data collection app that can record both depth and high-resolution video at high FPS. Its outputs can be used for InstantNGP, Nerfstudio (step-by-step tutorial), and probably even SplaTAM (see here).

@pablovela5620

I would also recommend looking into the Polycam app; they have a developer mode that lets you get RGB + depth + camera poses/intrinsic parameters.

https://github.com/PolyCam/polyform

This may remove the need for a network connection.

@nitthilan

Any update on the fix for the offline version?

@jc211
Owner

jc211 commented Jan 12, 2024

Hey, sorry, I have not had a chance to look at this yet. However, if anyone is compiling from source and would like to have a go at fixing it, these are the lines that need changing:

https://github.com/jc211/NeRFCapture/blob/312cb01efd5bd84e30cf9dea32a3a12a70abbc5b/NeRFCapture/DatasetWriter.swift#L173C1-L176C18

The problem is that depthBuffer!.pngData() does not work; it should be replaced by a manual operation that converts the depth buffer into 16-bit values that can be written to a PNG.
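
To illustrate what the reading side of such a fix might look like: if the Float32 depth (meters) were stored as 16-bit millimeters, a common convention for depth PNGs, the round trip would look roughly like this. This is a sketch of one possible convention, not the app's current format:

```python
import numpy as np
import imageio.v3 as iio  # pip install imageio

# Hypothetical convention: metric depth in meters stored as uint16
# millimeters, which a 16-bit grayscale PNG can hold losslessly.
def write_depth_png(path, depth_m):
    mm = np.clip(depth_m * 1000.0, 0, 65535).astype(np.uint16)
    iio.imwrite(path, mm)

def read_depth_png(path):
    return iio.imread(path).astype(np.float32) / 1000.0  # back to meters
```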

@Zhangyangrui916

I'm not familiar with processing pixels on iOS, so I save the depth as binary and process it on a PC with Python.
Zhangyangrui916@3707209
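
For anyone taking the same route, reading such a raw dump back is straightforward. A sketch assuming a headerless little-endian Float32 buffer at ARKit's 256x192 depth resolution; the file name is made up, and the actual dtype/shape depend on how the writer in that commit serializes the buffer:

```python
import numpy as np

# Assumed layout: headerless, little-endian float32, row-major,
# 192 rows x 256 columns (ARKit's LiDAR depth resolution).
H, W = 192, 256
depth = np.fromfile("frame_00000.depth.bin", dtype="<f4").reshape(H, W)
print(depth.min(), depth.max())  # metric depth in meters, if unscaled
```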
