Finding intrinsic parameters of a digital camera - OpenCV

What is the best way to find the intrinsic parameters of a Digital Camera?
I have tried using the OpenCV camera calibration tool, but it gives me different values after every calibration, so I'm not sure whether the results are correct. The same method works perfectly fine for a USB camera.
I am not sure whether the process for obtaining intrinsic parameters of a digital camera is slightly different, or whether I'm missing something.
If someone has any clue regarding this... please help!

Changes in ISO and white balance happen "behind" the lens+sensor combo, so they cannot directly affect the calibrated geometry model of the camera.
However, they may affect the results of a calibration procedure by biasing the locations of the detected features on whatever rig you are using ("checkerboard" or similar) - anything that affects the appearance of grayscale gradients can potentially do that.
The general rule is: "Find the lens and exposure settings that work for your application (not the calibration), then lock them down and disable any and all auto-gizmo features that may modify them". If those settings don't work for the particular calibration setup you are using, change it (for example, dim the lights or switch the target pattern to gray/black from white/black if the application lighting causes the target to be overexposed/saturated).
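For reference, a minimal intrinsic-calibration sketch using OpenCV's Python bindings is shown below; the board size, square size, and image folder are placeholders for your own setup. cv::calibrateCamera is deterministic for a fixed set of detections, so if repeated calibrations disagree, the variation comes from the images (coverage, blur, exposure) rather than from the solver:

    import glob
    import cv2
    import numpy as np

    # Placeholders - adjust to your own target and images.
    BOARD_SIZE = (9, 6)      # inner corners per chessboard row/column
    SQUARE_SIZE = 0.025      # square edge length (any consistent unit)

    # 3D coordinates of the board corners in the board's own frame (z = 0 plane).
    objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

    obj_points, img_points, image_size = [], [], None

    for path in glob.glob("calib_images/*.jpg"):     # hypothetical folder
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
        if not found:
            continue
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Camera matrix:\n", camera_matrix)

An RMS reprojection error well below one pixel, stable across runs with the same image set, is usually a good sign; large or unstable values point back at exposure, focus, or board coverage.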

Related

Why do we use fixed focus for AR tracking in ARCore?

I am using ARCore to track an image. Based on the following reference, the FOCUSMODE of the camera should be set to FIXED for better AR tracking performance.
Since we can get the camera's intrinsic focal length for each frame, why do we need to use a fixed focus?
With a fixed camera focus, ARCore can calculate parallax more reliably (neither near nor distant real-world objects should be out of focus), so your camera tracking will be reliable and accurate. At the tracking stage, your device should be able to clearly distinguish all the textures of surrounding objects and the feature points needed to build a correct 3D scene.
The scene-understanding stage requires fixed focus as well (to correctly detect planes, estimate lighting intensity and direction, etc.). That's what you expect from ARCore, isn't it?
Fixed focus also guarantees that your "in-focus" rendered 3D model will be placed in the scene beside real-world objects that are "in-focus" too. However, if you're using the Depth API, you can defocus both real-world and virtual objects.
P.S.
In the future ARCore engineers may change the aforementioned behaviour of camera focus.

Multi-camera calibration with opencv: Two cameras facing each other

I want to compute the extrinsic calibration of two cameras with respect to each other and am using the cv::stereoCalibrate() function to do this. However, the result does not correspond to reality. What could be wrong?
Setup: Two cameras mounted 7 meters high, facing each other while looking downwards. Their fields of view overlap considerably, and I captured checkerboard images that I used in the calibration.
I am not flipping any of the images.
Do I need to flip the images, or do I need to do something else to tell the function that the cameras are actually facing each other?
Note: The same function perfectly calibrates cameras that are next to each other facing in the same direction (like any typical stereo camera).
Thanks
In order to "tell that the cameras are actually facing each other" you have to specify imagePoints1 and imagePoints2 correctly, such that points with matching indices correspond to a same physical point.
If in your case function works perfectly when the cameras are oriented in the same direction and doesn't work with your configuration - discrepancy between point indexing might be a probable reason (most likely points are flipped both vertically and horizontally).
One way to debug this is to either draw indices near the points on each of the frames, or color-code them and make sure they match between the images.
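A quick way to do that with the Python bindings, assuming a standard chessboard target, is to detect the corners in both views and write each corner's index next to it (board size and file names below are placeholders):

    import cv2

    BOARD_SIZE = (9, 6)  # inner corners; placeholder for your board

    def draw_indexed_corners(image_path, out_path):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
        if not found:
            raise RuntimeError("no chessboard found in " + image_path)
        for i, (x, y) in enumerate(corners.reshape(-1, 2)):
            cv2.putText(img, str(i), (int(x) + 3, int(y) - 3),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1)
        cv2.imwrite(out_path, img)

    draw_indexed_corners("cam1_view.png", "cam1_indexed.png")
    draw_indexed_corners("cam2_view.png", "cam2_indexed.png")

If index 0 ends up at opposite corners of the board in the two outputs, the correspondences are flipped and one set of points needs to be reordered before calling cv::stereoCalibrate().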
One question though: why do you use cv::stereoCalibrate()? The setup you described doesn't seem to be a good use case for it. If you want to estimate the extrinsic parameters of the cameras, you can use cv::calibrateCamera(). The only downside is that it assumes the intrinsic parameters are the same for all provided views (i.e. all images were taken with the same, or very similar, cameras). If that is not the case, cv::stereoCalibrate() is indeed the better fit (but the manual suggests that you still estimate each camera's intrinsic parameters individually using cv::calibrateCamera()).
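Assuming both cameras see the board in every view and the corner lists are ordered consistently, the per-camera-then-stereo workflow suggested by the manual looks roughly like this (variable names are placeholders; filling the point lists is the usual corner-detection loop, as in the index-drawing sketch above):

    import cv2

    # obj_points: list of (N, 3) float32 board-corner arrays, one per view
    # img_points1 / img_points2: matching (N, 1, 2) corner arrays from camera 1 / 2
    # image_size: (width, height) of the images

    _, K1, d1, _, _ = cv2.calibrateCamera(obj_points, img_points1, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_points, img_points2, image_size, None, None)

    # With CALIB_FIX_INTRINSIC, stereoCalibrate only solves for R and T between the cameras.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points1, img_points2,
        K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo RMS:", rms)
    print("R:\n", R, "\nT:\n", T)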

AR With External Tracking - Alignment is wrong, values are right

I recently managed to get my augmented reality application up and running close to what is expected. However, I'm having an issue where, even though the values are correct, the augmentation is still off by some translation! It would be wonderful to get this solved as I'm so close to having this done.
The system utilizes an external tracking system (Polaris Spectra stereo optical tracker) with IR-reflective markers to establish global and reference frames. I have a LEGO structure with a marker attached which is the target of the augmentation, a 3D model of the LEGO structure created using CAD with the exact specs of its real-world counterpart, a tracked pointer tool, and a camera with a world reference marker attached to it. The virtual space was registered to the real world using a toolset in 3D Slicer, a medical imaging software which is the environment I'm developing in. Below are a couple of photos just to clarify exactly the system I'm dealing with (May or may not be relevant to the issue).
So a brief overview of exactly what each marker/component does (Markers are the black crosses with four silver balls):
The world marker (1st image on right) is the reference frame for all other marker's transformations. It is fixed to the LEGO model so that a single registration can be done for the LEGO's virtual equivalent.
The camera marker (1st image, attached to camera) tracks the camera. The camera is registered to this marker by an extrinsic calibration performed using cv::solvePnP().
The checkerboard is used to acquire data for extrinsic calibration using a tracked pointer (unshown) and cv::findChessboardCorners().
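(For illustration only, not the asker's actual code: a minimal version of such a chessboard-based extrinsic step with OpenCV's Python bindings might look like the sketch below, where the board geometry, intrinsics, and image name are placeholders, and the 3D corner coordinates would come from however you measure the board, e.g. with the tracked pointer.)

    import cv2
    import numpy as np

    BOARD_SIZE = (9, 6)        # inner corners, placeholder
    SQUARE_SIZE = 0.025        # placeholder edge length

    # Known intrinsics from a previous calibration (placeholder values).
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)

    # 3D board corners expressed in the frame you want the camera pose relative to
    # (here the board's own frame; with a tracked pointer these could instead be
    # measured directly in the tracker's reference frame).
    obj_pts = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
    obj_pts[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

    img = cv2.imread("extrinsic_view.png")                 # hypothetical image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    assert found, "chessboard not detected"

    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)   # rotation: board/world frame -> camera frame
    print("R:\n", R, "\nt:\n", tvec.ravel())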
Up until now I've been smashing my face against the mathematics behind the system until everything finally lined up. When I move my estimate of the camera origin to the reference origin, the translation vector between the two is approximately [0; 0; 0], so all of the registration appears to work correctly. However, when I run my application, I get the following results:
As you can see, there's a strange offset in the augmentation. I've tried removing distortion correction on the image (currently done with cv::undistort()), but it just makes the issue worse. The rotations are all correct and, as I said before, the translations all seem fine. I'm at a loss for what could be causing this. Of course, there's so much that can go wrong during implementation of the rendering pipeline, so I'm mostly posting this here under the hope that someone has experienced a similar issue. I already performed this project using a webcam-based tracking method and experienced no issues like this even though I used the same rendering process.
I've been purposefully a little ambiguous in this post to avoid bogging down readers with the minutia of the situation as there are so many different details I could include. If any more information is needed I can provide it. Any advice or insight would be massively appreciated. Thanks!
Here are a few tests that you could do to validate that each module works well.
First verify your extrinsic and intrinsic calibrations:
Check that the position of the virtual scene-marker with respect to the virtual lego scene accurately corresponds to the position of the real scene-marker with respect to the real lego scene (e.g. the real scene-marker may have moved since you last measured its position).
Same for the camera-marker, which may have moved since you last calibrated its position with respect to the camera optical center.
Check that the calibration of the camera is still accurate. For such a camera, prefer a camera matrix of the form [fx,0,cx;0,fy,cy;0,0,1] (i.e. with a skew fixed to zero) and estimate the camera distortion coefficients (NB: OpenCV's undistort functions do not support camera matrices with non-zero skews; using such matrices may not raise any exception but will result in erroneous undistortions).
Check that the marker tracker does not need to be recalibrated.
Then verify the rendering pipeline, e.g. by checking that the scene-marker reprojects correctly into the camera image when moving the camera around.
If it does not reproject correctly, there is probably an error with the way you map the OpenCV camera matrix into the OpenGL projection matrix, or with the way you map the OpenCV camera pose into the OpenGL model view matrix. Try to determine which one is wrong using toy examples with simple 3D points and simple projection and modelview matrices.
If it reprojects correctly, then there probably is a calibration problem (see above).
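One concrete way to run that reprojection check, assuming you already have the intrinsics, distortion coefficients, and the marker pose in camera coordinates, is to project the marker's known 3D corners with cv::projectPoints and overlay them on the live frame (all values below are placeholders):

    import cv2
    import numpy as np

    # Placeholders: intrinsics from calibration, marker pose from the tracker chain.
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)
    rvec = np.zeros(3)                      # marker rotation in camera frame (Rodrigues)
    tvec = np.array([0.0, 0.0, 0.5])        # marker translation in camera frame

    # Known 3D corners of the scene-marker in its own frame (placeholder geometry).
    marker_corners = np.float32([[0, 0, 0], [0.05, 0, 0], [0.05, 0.05, 0], [0, 0.05, 0]])

    frame = cv2.imread("camera_frame.png")  # hypothetical frame
    proj, _ = cv2.projectPoints(marker_corners, rvec, tvec, camera_matrix, dist_coeffs)
    for x, y in proj.reshape(-1, 2):
        cv2.circle(frame, (int(round(x)), int(round(y))), 4, (0, 255, 0), -1)
    cv2.imwrite("reprojection_check.png", frame)

If the projected dots stay on the physical marker as the camera moves, the calibration chain up to the pose is fine and the offset is on the rendering side; if they drift, the problem is upstream of OpenGL.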
Beyond that, it is hard to guess what could be wrong without directly interacting with the system. If I were you and I still had no idea where the problem could be after doing the tests above, I would try to start back from scratch and validate each intermediate step using toy examples.
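For the OpenCV-to-OpenGL mapping mentioned above, one commonly used construction of the projection matrix from pinhole intrinsics is sketched below; the exact signs depend on your image-origin and viewport conventions, so treat it as a starting point to verify with a toy example rather than a definitive formula:

    import numpy as np

    def opengl_projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
        """4x4 OpenGL-style projection matrix built from pinhole intrinsics.

        Assumes the image origin is at the top-left and the camera looks down -Z;
        flip the sign of the second row if your conventions differ.
        """
        return np.array([
            [2 * fx / width, 0.0,              1 - 2 * cx / width,           0.0],
            [0.0,            2 * fy / height,  2 * cy / height - 1,          0.0],
            [0.0,            0.0,             -(far + near) / (far - near), -2 * far * near / (far - near)],
            [0.0,            0.0,             -1.0,                          0.0],
        ])

    P = opengl_projection_from_intrinsics(800.0, 800.0, 320.0, 240.0, 640, 480, 0.1, 100.0)
    print(P)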

Structure from Motion (SfM) in a tunnel-like structure?

I have a very specific application in which I would like to try structure from motion to get a 3D representation. So far, all the software/code samples I have found for structure from motion work like this: "a fixed object is photographed from all angles to create the 3D model". This is not my case.
In my case, the camera moves down the middle of a corridor, looking forward. Sometimes the camera can look in other directions (left, right, up, down). The camera never goes back or looks back; it always moves forward. Since the corridor is small, almost everything is visible (no hidden spots). The corridor can sometimes be very long.
I have tried this software and it doesn't work in my particular case (but it's fantastic for normal use). Can anybody suggest a library/software/tool/paper that targets my specific needs? Or have you ever needed to implement something like this? Any help is welcome!
Thanks!
What kind of corridors are you talking about and what kind of precision are you aiming for?
A priori, I don't see why your corridor would not be a fixed object photographed from different angles. The quality of your reconstruction might suffer if you only look forward and you can't get many different views of the scene, but standard methods should still work. Are you sure that the programs you used aren't failing because of your picture quality, arrangement or other reasons?
If you have to do the reconstruction yourself, I would start by
1) Calibrating your camera
2) Undistorting your images
3) Matching feature points in subsequent image pairs
4) Extracting a 3D point cloud for each image pair
You can then orient the point clouds with respect to one another, for example via ICP between two subsequent clouds. More sophisticated methods might not yield much difference if you don't have any closed loops in your dataset (as your camera is only moving forward).
OpenCV and the Point Cloud Library should be everything you need for these steps. Visualization might be more of a hassle, but the pretty pictures are what you pay for in commercial software after all.
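To make steps 1) to 4) concrete, a minimal two-view sketch with OpenCV's Python bindings could look like the following, assuming the frames were already undistorted in step 2 (the intrinsics, frame names, and the ORB/essential-matrix choices are illustrative, not the only option):

    import cv2
    import numpy as np

    # Placeholder intrinsics from step 1 (camera calibration).
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Step 3: match feature points between the two views.
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Step 4: relative pose from the essential matrix, then triangulate a cloud.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    cloud = (pts4d[:3] / pts4d[3]).T          # N x 3 point cloud, up to scale
    print(cloud.shape[0], "points triangulated")

Each consecutive frame pair gives such a cloud (up to a global scale), which you can then align with ICP as described above.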
Edit (2017/8): I haven't worked on this in the meantime, but I feel like this answer is missing some pieces. If I had to answer it today, I would definitely suggest looking into the keyword monocular SLAM, which has recently seen a lot of activity, not least because of drones with cameras. Notably, LSD-SLAM is open source and may not be as vulnerable to feature-deprived views, as it operates directly on the intensity. There even seem to be approaches combining inertial/odometry sensors with the image matching algorithms.
Good luck!
FvD is right in the sense that your corridor is a static object. Your scenario is the same as moving around an object and taking images from multiple views; your views are just not arranged to provide a 360-degree view of the object.
I see you mentioned in a previous comment that the data is coming from a video? In that case, the problem could very well be the camera calibration. A camera calibration tells the SfM algorithm about the internal parameters of the camera (focal length, principal point, lens distortion, etc.). In the absence of knowledge about these, the bundler in VSfM uses information from the EXIF data of the image. However, I don't think video stores any EXIF information (not 100% sure). As a result, I think the entire algorithm is running with bad focal length information and cannot solve for the orientation.
Can you extract a few frames from the video and see if there is any EXIF information?
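As a quick test, you could extract a frame and look for EXIF focal-length tags, e.g. with Pillow as sketched below (frames written straight out of a video will normally carry no EXIF at all, which is exactly the point of the check; the video path is a placeholder):

    import cv2
    from PIL import Image
    from PIL.ExifTags import TAGS

    # Grab one frame from the video (hypothetical path) and save it as JPEG.
    cap = cv2.VideoCapture("corridor.mp4")
    ok, frame = cap.read()
    cap.release()
    assert ok, "could not read a frame"
    cv2.imwrite("frame_000.jpg", frame)

    # Inspect whatever EXIF data the file carries (usually none for video frames).
    exif = Image.open("frame_000.jpg").getexif()
    for tag_id, value in exif.items():
        print(TAGS.get(tag_id, tag_id), "=", value)
    if not exif:
        print("No EXIF data - the focal length has to be guessed or supplied manually.")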

Map image model to data

I'm working on a piece of software that checks whether some laser-cut parts were cut correctly, using AutoCAD data as the reference. I have parsed the DXF files, converted them to a BMP (and to an XML file that gives me all the information), and now I want to compare this to the real, acquired data.
I have applied enough preprocessing to get a reasonably thresholded, binary picture. This is, however, distorted (unfortunately, telecentric lenses are expensive, and the user places the object into a device, causing some translation, some scaling and a tiny amount of rotation, on the order of 1-2 degrees).
I have considered Hough transform, but memory is an issue. I have played around with bounding box transformation, but the unknown shape makes this hard. I've read about TILT (no symmetry) and registration algorithms, but I'd like to get another opinion.
I'm looking for some papers, some ideas, some pointers on how to go on.
Thanks.
The first step is to undistort the image (see camera calibration; ignore the 3D part).
Then think about the shape matching. Depending on how small an error you are trying to find, this could be very easy or very difficult, but those links should get you started.
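Assuming you have (or can obtain) the camera matrix and distortion coefficients from a standard chessboard calibration, the undistortion step itself is short; the values below are placeholders:

    import cv2
    import numpy as np

    # Placeholder intrinsics and distortion coefficients from a prior calibration.
    camera_matrix = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
    dist_coeffs = np.array([-0.21, 0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

    img = cv2.imread("part_binary.png")                    # hypothetical acquired image
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    cv2.imwrite("part_undistorted.png", undistorted)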
You may want to look at features that can discriminate the two. Are there simple features that can accurately distinguish a properly cut piece vs. an incorrectly cut piece? If so, you can use the same idea as the Hough transform/template matching, but reducing the template to certain distinguishing features (edges, corners, etc.) to reduce the memory required.
You may want to look at the SIFT/SURF features that aim to match images by a certain set of features while being invariant to the rotation and scale of the objects within the image. There are libraries out there that implement these features (shown on the SURF page).
This, however, won't help with the distortion. If you're using the same camera for all images, then you should be able to de-skew them accordingly.
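Since the residual misalignment is essentially translation plus scale plus a small rotation, one way to combine these ideas (an illustrative approach, not something prescribed by the answers above) is to match features between the reference rendering and the undistorted photo, fit a 4-degree-of-freedom similarity transform with cv::estimateAffinePartial2D, and then warp and difference the two images. ORB is used here simply because it ships with OpenCV; SIFT/SURF would work the same way:

    import cv2
    import numpy as np

    ref = cv2.imread("reference_from_dxf.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    obs = cv2.imread("part_undistorted.png", cv2.IMREAD_GRAYSCALE)

    # Match features between the reference rendering and the acquired image.
    orb = cv2.ORB_create(2000)
    kp_r, des_r = orb.detectAndCompute(ref, None)
    kp_o, des_o = orb.detectAndCompute(obs, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_r, des_o)

    src = np.float32([kp_o[m.trainIdx].pt for m in matches])   # points in the photo
    dst = np.float32([kp_r[m.queryIdx].pt for m in matches])   # points in the reference

    # Robustly fit translation + rotation + uniform scale (4 DoF).
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    # Warp the observation onto the reference and inspect the residual difference.
    aligned = cv2.warpAffine(obs, M, (ref.shape[1], ref.shape[0]))
    diff = cv2.absdiff(ref, aligned)
    cv2.imwrite("alignment_residual.png", diff)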
