How to know the orientation angle of a RealSense depth camera?

How can I know the orientation angle of a RealSense depth camera, i.e. whether it is held horizontally or vertically? Please help me.

There are several RealSense cameras. At the bottom of the following link there is a description of the SR300 and the D400 series:
https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#d400
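If your model has a built-in IMU (e.g. the D435i or D455; models like the SR300 and D415 do not), you can read the accelerometer through pyrealsense2 and infer the orientation from the gravity vector. A minimal sketch, assuming an IMU-equipped camera; the exact axis signs should be verified against Intel's IMU documentation for your model:

```python
# A sketch of reading the accelerometer on an IMU-equipped RealSense
# (e.g. D435i/D455); models without an IMU expose no accel stream.
import math
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.accel)   # gravity shows up in the accel data
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    accel = frames.first_or_default(rs.stream.accel).as_motion_frame().get_motion_data()
    # Tilt of the camera body relative to gravity; axis conventions are
    # an assumption here and should be checked for your specific model.
    tilt = math.degrees(math.atan2(accel.x, -accel.y))
    print(f"accel = ({accel.x:.2f}, {accel.y:.2f}, {accel.z:.2f}), tilt ~ {tilt:.1f} deg")
finally:
    pipeline.stop()
```

Roughly, a tilt near 0 degrees means the camera is upright (horizontal) and near ±90 degrees means it is rotated onto its side (vertical). For cameras without an IMU you would have to track the mounting orientation externally.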

Related

ArUco markers, why does the pose change when I change image resolution?

I used this reference https://automaticaddison.com/how-to-perform-pose-estimation-using-an-aruco-marker/ to estimate the pose of a marker.
When I obtained the camera matrix and distortion coefficients, I used the full camera resolution.
However, when I change the resolution (image size) before pose estimation, I get different results. I am not sure why, or which resolution is the correct one to use.
Should we always use the same resolution as the one used for camera calibration?
I expected the pose to be largely independent of image size, apart from minor changes. Any thoughts?
Yes, always use the same resolution.
One could rescale the camera matrix and distortion coefficients to fit a different resolution, but that is a hassle and requires some knowledge of how the camera produced those pictures (binning, cropping). Unless you understand the math behind it, just stick with the same resolution.
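For completeness, a minimal sketch of that rescaling, assuming the new resolution is a pure uniform scale of the calibrated one (no cropping or binning changes); in that case the distortion coefficients, which are defined in normalized coordinates, can be reused as-is. The intrinsics below are made up:

```python
# A sketch of rescaling an OpenCV camera matrix for a resized image.
# Only valid when the new resolution is a pure scale of the calibrated
# one (no cropping or different binning).
import numpy as np

def scale_camera_matrix(K, old_size, new_size):
    sx = new_size[0] / old_size[0]      # width ratio
    sy = new_size[1] / old_size[1]      # height ratio
    K = K.astype(float).copy()
    K[0, 0] *= sx                       # fx
    K[1, 1] *= sy                       # fy
    K[0, 2] *= sx                       # cx
    K[1, 2] *= sy                       # cy
    return K

# Made-up intrinsics calibrated at 1920x1080, rescaled to 640x360.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
print(scale_camera_matrix(K, (1920, 1080), (640, 360)))
```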

3D image reconstruction from 3 Fixed Camera?

I have seen some 3D facial devices that use 3 cameras to produce a 3D picture of a face.
Is there any specific angle at which these cameras should be fixed for this calculation?
Is there any SDK or tool in this domain that could simplify producing a 3D image from these fixed cameras?
The smaller the angle (and baseline) between the cameras, the less depth information you will get from them. So the angle matters, but I cannot say it must be exactly x degrees.
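To make that concrete: for a roughly parallel stereo pair, a common rule of thumb is that depth uncertainty grows as Z²/(f·B), so a wider baseline B directly improves depth precision. A small sketch with made-up numbers:

```python
# A sketch of the usual stereo depth-uncertainty rule of thumb,
# dZ ~ Z**2 / (f * B) * d_disp, with f the focal length in pixels,
# B the baseline in metres and d_disp the matching error in pixels.
# It illustrates why a wider baseline/angle improves depth.
def depth_error(Z, f_px, baseline_m, disp_err_px=0.5):
    return Z**2 / (f_px * baseline_m) * disp_err_px

for B in (0.05, 0.10, 0.20):            # hypothetical baselines in metres
    dZ = depth_error(Z=0.5, f_px=900, baseline_m=B)
    print(f"B = {B:.2f} m -> depth error ~ {dZ * 1000:.1f} mm at 0.5 m")
```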

Stereo Calibration with Nikon D3400

I am trying to calibrate a camera-projector 3D system. First I used a Logitech C920 webcam and got acceptable results in terms of calibration accuracy (0.8 px reprojection error). However, the colors and resolution were bad.
Now I have a professional camera (Nikon D3400 with an 18-55 lens). I have not managed to get a calibration result better than 5.5! I did the calibration using exactly the same projector, the same pattern and the same algorithm.
All settings on my camera are fixed, including focus, ISO, aperture, optical zoom and shutter speed.
What did I miss? What are the possible causes of this problem?
I know that my question is a bit broad, but it seems there is some silly mistake I have made, so any clue is appreciated.
I do not think it matters, but I am using Brown University's 3D scanning software, which uses OpenCV 2.4.9.
First, your reprojection error is in pixels. What are the resolutions of your webcam and your Nikon? I am guessing the Nikon's resolution is much higher, so its pixels are much smaller. That alone would make the error in pixels larger, although 5.5 pixels still seems far too high.
The next thing I would worry about is lens distortion. What does the undistorted Nikon image look like? It may be that you do not have enough calibration points close to the edges of the image, which would mean the distortion coefficients are not being estimated accurately. Or it may be that you have a wide-angle lens and the distortion is simply too strong for this camera model to handle.
So look at the undistorted Nikon image. If it looks strangely warped, try capturing more calibration images with the pattern close to the edges of the frame.
I am also confused by what you wrote about the colors and resolution being bad. Are you talking about undistorted or rectified images? Why would the colors be bad?
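A minimal sketch of that undistortion check with OpenCV's Python API (the post uses OpenCV 2.4.9, where these calls also exist); the intrinsics, distortion coefficients and filename below are hypothetical stand-ins for your actual calibration output:

```python
# A sketch of inspecting lens distortion after calibration. K, dist and
# the filename are hypothetical placeholders for real calibration data.
import cv2
import numpy as np

K = np.array([[6000.0, 0.0, 3000.0],             # assumed intrinsics
              [0.0, 6000.0, 2000.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])    # k1, k2, p1, p2, k3

img = cv2.imread("nikon_calib_01.jpg")           # hypothetical filename
h, w = img.shape[:2]
new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1)
undistorted = cv2.undistort(img, K, dist, None, new_K)
cv2.imwrite("nikon_calib_01_undistorted.jpg", undistorted)
# Straight edges (door frames, the pattern's rows) should come out
# straight; if they do not, the distortion model is fitting poorly.
```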

How to create 3D perspective views of an image using OpenCV?

I have an image on a wall and I'd like to create 3D perspective views of it myself. Suppose the points on the image, the camera location and the camera orientation are given; how do I obtain the 3D perspective matrix to apply to the original image?
I understand I can use the camera orientation to compute the 3D rotation matrix, but I have no idea how to compute the subsequent projection matrix...
I've come across this link (see the section on perspective projection), but I don't understand what happens after the projection. And what is the difference between the camera position and the viewer's position?
Thanks a lot.
Use OpenGL and its open examples to solve your problem.
The link below has good samples for understanding the 3D transformation pipeline:
http://www.songho.ca/opengl/gl_transform.html
Hope this helps.
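If you'd rather stay in OpenCV: for a planar scene (an image on a wall) the camera pose induces a plain homography, so you can project the image corners with P = K[R|t] and warp. A minimal sketch; K, R, t and the filenames are made-up values, not anything from the question:

```python
# A sketch of generating a perspective view of a planar (wall) image
# with OpenCV. The wall is modelled as the z = 0 plane.
import cv2
import numpy as np

img = cv2.imread("wall.jpg")                     # hypothetical input
h, w = img.shape[:2]

K = np.array([[800.0, 0.0, w / 2],               # assumed intrinsics
              [0.0, 800.0, h / 2],
              [0.0, 0.0, 1.0]])
R, _ = cv2.Rodrigues(np.array([0.0, 0.15, 0.0])) # slight yaw of the camera
t = np.array([[-w / 2], [-h / 2], [800.0]])      # camera set back from wall

# The four wall corners in 3D (z = 0), in the same pixel units as the image.
corners3d = np.array([[0, 0, 0], [w, 0, 0], [w, h, 0], [0, h, 0]], float)
proj = (K @ (R @ corners3d.T + t)).T             # project with P = K[R|t]
proj = proj[:, :2] / proj[:, 2:]                 # perspective divide

src = np.array([[0, 0], [w, 0], [w, h], [0, h]], np.float32)
H = cv2.getPerspectiveTransform(src, proj.astype(np.float32))
cv2.imwrite("wall_view.jpg", cv2.warpPerspective(img, H, (w, h)))
```

Here the "camera position" lives in t, and the "viewer" in articles like the one linked is the same virtual camera, so varying R and t generates the different perspective views. Note that strong rotations can push corners outside the output canvas.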

Two images with camera position and angle to 3D data?

Suppose I've got two images taken by the same camera. I know the 3D position of the camera and the 3D angle of the camera when each picture was taken. I want to extract some 3D data from the portion of the images that overlaps. It seems that OpenCV could help me solve this problem, but I can't find where my camera position and angle would be used in its API. Help? Is there some other C library that would be more helpful? I don't even know what keywords to search for on the web. What's the technical term for overlapping image content?
You need to learn a little more about camera geometry and stereo rig geometry. Unless your camera was mounted on a special rig, it is rather doubtful that its pose at each image can be specified with just an angle and a point; you would need three angles (e.g. roll, pitch, yaw). Also, if you want your reconstruction to be metrically accurate, you need to accurately calibrate the focal length of the camera (at a minimum).
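As for keywords: finding the overlapping content is called "stereo correspondence" or "stereo matching", and the overall task is "two-view triangulation" or "structure from motion". Assuming you also have the intrinsics K, a minimal OpenCV sketch of the pipeline; the poses and filenames below are made-up placeholders for your known rig data:

```python
# A sketch of two-view triangulation with OpenCV. K, the two poses and
# the filenames are made-up placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros((3, 1))               # first camera pose
R2, _ = cv2.Rodrigues(np.array([0.0, -0.1, 0.0]))  # second pose: rotation...
t2 = np.array([[-0.2], [0.0], [0.0]])              # ...and translation

# Find correspondences in the overlapping region.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2xN
pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T

P1 = K @ np.hstack([R1, t1])                     # 3x4 projection matrices:
P2 = K @ np.hstack([R2, t2])                     # this is where pose enters
X = cv2.triangulatePoints(P1, P2, pts1, pts2)    # 4xN homogeneous points
X = (X[:3] / X[3]).T                             # Nx3 3D points
print(X[:5])
```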
