When will the AzureKinect branch be finalized? #52
This branch seems to be in an incomplete state. It compiles properly, but the server-client pair does not produce any RGBD data in the viewer, and upon saving point clouds it streams thousands of 1 KB PLY files to the out directory; it is impossible to stop this save process by clicking "Stop Saving". The program becomes unresponsive and must be shut down manually. Without having looked through the code, it appears the libraries for the Azure Kinect SDK v2.0 (specifically k4a and k4arecord) are missing.

Comments
Hi, thanks for reaching out. Because I am working from home, I unfortunately do not have access to a Kinect, so it's hard for me to make fixes to this branch. That said, last time I checked, the app worked fine. Let's try to debug this. Here are some questions: which version of the app are you running, and is the client reading any data from the Kinect at all?
Apologies for the confusion. I am working with multiple SDKs here and mistakenly wrote 2.0, but yes, I am using v1.4.1 direct from the repo. Also, the client is not reading data from the Kinect at all; I see a "capture device failed to initialize" error.
I see, so it looks like the capture device fails to initialize. Can you try setting breakpoints on the initialization lines and seeing which one fails? Depending on the line, there are a few possible causes.
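For reference, here is a minimal standalone check, using the official Microsoft.Azure.Kinect.Sensor C# wrapper rather than the app's own AzureKinectCapture code, that the Azure Kinect Sensor SDK can open and start the device. If this fails, the problem lies with the SDK installation or the device rather than with LiveScan3D:

```csharp
// Standalone sanity check with the Microsoft.Azure.Kinect.Sensor
// NuGet package. This is an illustration, not LiveScan3D code.
using System;
using Microsoft.Azure.Kinect.Sensor;

class DeviceCheck
{
    static void Main()
    {
        // If this prints 0, the SDK cannot see the device at all
        // (a driver/USB issue rather than an app issue).
        Console.WriteLine($"Installed devices: {Device.GetInstalledCount()}");

        using (Device device = Device.Open(0))
        {
            device.StartCameras(new DeviceConfiguration
            {
                ColorFormat = ImageFormat.ColorBGRA32,
                ColorResolution = ColorResolution.R720p,
                DepthMode = DepthMode.NFOV_Unbinned,
                CameraFPS = FPS.FPS30,
                SynchronizedImagesOnly = true
            });

            // Grab a single capture to confirm frames are flowing.
            using (Capture capture = device.GetCapture())
            {
                Console.WriteLine($"Depth: {capture.Depth.WidthPixels}x{capture.Depth.HeightPixels}");
            }

            device.StopCameras();
        }
    }
}
```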
Thank you for the additional guidance. Looking through the code, I noticed that there was no AzureKinectCapture class, and it suddenly became apparent that I had not switched from the master branch to the AzureKinect branch. Once I made that switch, the app was able to capture and record frames. Thank you for taking the time to respond, and sorry for the distraction!
I would like to ask one last question regarding point clouds. Is there a reason they are rendered upside down? Could you provide a tip as to where in the code a 180-degree rotation about the Z axis could be applied? The live view also displays the point cloud upside down.
Hi, this is due to the Azure Kinect's coordinate system being different from the Kinect v2's. The simplest way to change it is to perform calibration using the markers described in the docs section.
Marek
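If a quick local workaround is preferred over recalibrating, a 180-degree rotation about the Z axis simply negates the X and Y coordinates of every point. A minimal sketch, assuming the vertices are stored as a flat array of (x, y, z) triplets as in the streamed PLY data; RotateCloud180AboutZ is a hypothetical helper name, not a function in the codebase:

```csharp
// Hypothetical helper: rotate a point cloud 180 degrees about the
// Z axis by negating X and Y. Assumes a flat float array of
// (x, y, z) triplets; adapt to wherever the app stores its vertices.
static void RotateCloud180AboutZ(float[] vertices)
{
    for (int i = 0; i < vertices.Length; i += 3)
    {
        vertices[i] = -vertices[i];         // x -> -x
        vertices[i + 1] = -vertices[i + 1]; // y -> -y
        // z is unchanged by a rotation about the Z axis
    }
}
```

Applying this before rendering would flip the live view; applying it before saving would also flip the exported PLYs.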
This is not specifically related to the original issue, but I want to keep it in the AzureKinect branch category. I am attempting to capture timestamps for all frames and am storing them in an array inside the KinectServer class's GetStoredFrame method. Unfortunately, when these are streamed to file along with the PLYs, the timestamps appear to start after the recording is stopped. Is there a more appropriate place to capture a timestamp for each frame? I noticed you are storing the current time in FPSUpdateTimer inside OpenGLWindow.cs; however, I would have presumed this happens after the frame is received and GetStoredFrame is invoked. If you have a moment, please let me know what I have missed, as I would very much like to be able to move the depth camera around and know, to the microsecond, when each frame is being captured.
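To illustrate what is being attempted here (a sketch only; FrameRecord and OnFrameReceived are illustrative names, not LiveScan3D's actual API), the idea is to stamp each frame the moment it arrives from the client rather than when the stored frames are later written out:

```csharp
// Hypothetical sketch: timestamp each frame on arrival, not at save
// time. These type and method names are illustrative, not the app's.
using System;
using System.Collections.Concurrent;

class FrameRecord
{
    public byte[] Payload;
    public DateTime ReceivedUtc;
}

class TimestampedFrameStore
{
    private readonly ConcurrentQueue<FrameRecord> frames = new ConcurrentQueue<FrameRecord>();

    // Call this from the network receive path, before the frame is
    // queued for GetStoredFrame, so the timestamp reflects arrival time.
    public void OnFrameReceived(byte[] payload)
    {
        frames.Enqueue(new FrameRecord
        {
            Payload = payload,
            ReceivedUtc = DateTime.UtcNow
        });
    }
}
```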
I don't know if this helps you, but in PR #49 I changed the timestamp to be taken directly from the Kinect device rather than the PC it runs on. That could be a bit more accurate. But please note that this timestamp is not a synchronized global time (e.g. 11:12 AM) but rather a timer that starts when the device starts its capture (e.g. 5.7192 seconds after device start).
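For context, the C# Sensor SDK exposes that device-relative timer as Image.DeviceTimestamp, a TimeSpan counted from capture start. Continuing from the device setup in the earlier sketch:

```csharp
// DeviceTimestamp is relative to when the device began capturing,
// not wall-clock time. Assumes `device` was opened and started as
// in the earlier sketch.
using (Capture capture = device.GetCapture())
{
    TimeSpan depthTime = capture.Depth.DeviceTimestamp;
    Console.WriteLine($"Depth frame at {depthTime.TotalSeconds:F4} s after capture start");
}
```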
Yes, I was aware of this caveat, and therefore did not try to work with the device's time. Our workflow uses multiple capture devices (audio, video, LiDAR, depth, DSLR), and everything is synced to LTC/SMPTE timecode. Even if I were to capture system time and then increment it by the Kinect's timer, I am quite certain it would not be in sync with any other device capturing at the same time. System time is about as close as we have, especially if the devices are on a single system. It sounds to me like the answer here is to feed an LTC time sync into the Kinect's audio stream. Do you implement such a stream anywhere in your code? I did not see a feature to capture audio, but the Kinect has 360-degree spatial capability, and it might be quite novel to use it, should multiple "clusters" of Kinects be implemented for parallel volumetric capture. I will be happy to move this into the feature request section, should you feel it is something worth pursuing. Most of our code is written in Python, so if you have any C++/C# snippets lying around that implement such a feature, please let me know. Thank you for your time!
Unfortunately, the app does not read the audio stream anywhere in the code, and I don't think I have any snippets of such code lying around.
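As a rough illustration of the "system time plus device timer" idea mentioned above (not code from this project, and, as the discussion notes, nowhere near LTC-grade synchronization), one could note the wall-clock time once when the cameras start and offset each frame's device timestamp from it:

```csharp
// Rough sketch: anchor the device-relative timer to wall-clock time.
// Only as accurate as the host clock; not a substitute for LTC/SMPTE.
// Assumes `device` was opened and started as in the earlier sketch.
DateTime captureStartUtc = DateTime.UtcNow; // record right after StartCameras()

using (Capture capture = device.GetCapture())
{
    DateTime approxWallClock = captureStartUtc + capture.Depth.DeviceTimestamp;
    Console.WriteLine($"Approximate capture time: {approxWallClock:O}");
}
```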