I need to understand how calibration works for cameras; my grasp of the basics is not good enough. Can someone please help me understand how to apply calibration data to an image taken with a camera for which I already have the calibration results, such as the camera matrix, the rotation and translation matrices, and the distortion coefficients? I obtained all of these using OpenCV, but I really do not understand how they work. Reading some tutorials helped, but not enough. Please help!
Thanks in advance,
Sumit
It depends on what you are trying to do. You can use the calibration parameters to remove lens distortion from an image. With a single calibrated camera you can measure planar objects, or you can do 3-D reconstruction from multiple images, assuming you know the extrinsics. With a calibrated stereo pair of cameras you can do 3-D reconstruction more easily.
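For the first of those uses, applying the calibration data to remove lens distortion comes down to passing the camera matrix and distortion coefficients to cv2.undistort. A minimal Python sketch, where the file names and the numeric calibration values are placeholders for whatever cv2.calibrateCamera() gave you:

```python
import cv2
import numpy as np

# Placeholder values; use the camera matrix and distortion coefficients
# that cv2.calibrateCamera() returned for your camera.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.25, 0.12, 0.001, -0.0005, 0.0])  # k1 k2 p1 p2 k3

img = cv2.imread("input.jpg")                 # placeholder image file
h, w = img.shape[:2]

# Optionally compute a refined camera matrix for the undistorted view;
# roi is the region of valid pixels after undistortion.
new_K, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs,
                                           (w, h), 1, (w, h))

# Remap the image so the lens distortion is removed.
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_K)
cv2.imwrite("undistorted.jpg", undistorted)
```

Note that the rotation and translation vectors are not needed for this step: they describe where the calibration board sat relative to the camera, and only come into play for measurement or 3-D reconstruction.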
I am not sure what you mean by "manually". Generally, this is a rather broad topic, which cannot be explained in a few paragraphs. I would recommend you start with the "Learning OpenCV" book by Bradski and Kaehler.
In the OpenCV implementation, the intrinsic parameters of the camera are used to correct geometric distortion.
So camera calibration is performed to obtain the intrinsic parameters using multiple chessboard images.
Recently I learned that geometric distortion can be corrected using only one chessboard image.
I am trying to figure out how this is done, but still can't find a way to do it.
http://www.imatest.com/docs/distortion-methods-and-modules/
https://www.edmundoptics.com/resources/application-notes/imaging/distortion/
I found the two links above. They describe radial distortion. However, we can't guarantee that the camera is parallel to the chessboard when capturing it. I can detect the corners of the chessboard, but some corners are displaced by the distortion, so I can't simply fit straight lines to them: line fitting can only absorb small noise, not a systematic distortion.
Any help is appreciated.
Please take a look at this paper and this paper. Moreover, this paper shows that you can correct distortion from a single image without a calibration target, based on identifying straight lines in the image, such as the edges of buildings.
I don't know whether this functionality is implemented in OpenCV, but the math in those papers should be relatively easy to implement using OpenCV.
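If you do want to try the single-chessboard route with OpenCV itself, cv2.calibrateCamera will accept a single view, though the result is much less constrained (and can be poorly conditioned) compared with a proper multi-view calibration. A rough sketch, assuming a hypothetical image name and a 9x6 board, with some parameters fixed because one view cannot determine them reliably:

```python
import cv2
import numpy as np

img = cv2.imread("chessboard.jpg")            # hypothetical single view
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
pattern_size = (9, 6)                         # inner corners per row/column

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if not found:
    raise RuntimeError("chessboard not found")
corners = cv2.cornerSubPix(
    gray, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# Planar 3-D points of the board in its own coordinate frame.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

# With only one view, fix the principal point and higher-order terms.
flags = (cv2.CALIB_FIX_PRINCIPAL_POINT | cv2.CALIB_ZERO_TANGENT_DIST |
         cv2.CALIB_FIX_K3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [objp], [corners], gray.shape[::-1], None, None, flags=flags)

undistorted = cv2.undistort(img, K, dist)
```

A view where the board is tilted relative to the sensor tends to work better here; a perfectly fronto-parallel view is a degenerate case for estimating the focal length from a single planar target.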
I am trying to reconstruct the real-world coordinates of 3D points from two images taken from the same camera. The camera is not calibrated, but the movement (translation and rotation) is known. In short:
Requirement:
No calibration
Extra constraints other than image point correspondences:
Known camera translation and rotation
Same camera used in all views
I understand that, from image point correspondences alone, a scene can be reconstructed only up to a projective transformation. With more constraints, an affine or similarity reconstruction may be done. In my case, I need a similarity reconstruction.
Given the above constraints, is a similarity reconstruction possible? If possible, how should I go about doing it?
I have tried to attack the problem from a few angles. Since I am not mathematically fluent, I am trying to use OpenCV as much as possible.
Call findFundamentalMat() on point correspondences from the two images, hopefully extract the two camera matrices somehow, then call triangulatePoints(). As you might have guessed, I got stuck in the middle, unable to obtain the camera matrices from the fundamental matrix.
The textbook "Multiple View Geometry in Computer Vision" (by Hartley and Zisserman) gives an expression (p.256, Result 9.14) that expresses the camera matrices in terms of fundamental matrix and one of the epipoles. However, without knowing the camera's intrinsic parameters (requirement: no calibration), I don't see how I can get the epipole.
I have also tried to treat my problem as a stereo system and use OpenCV's stereo*** functions. But they all seem to require a calibration step, which violates my requirement.
So that's why I am asking the question here today. The key question remains: given those extra constraints, is a similarity reconstruction possible? I am not able to digest the wealth of knowledge out there, nor to come up with my own solution. Any help is appreciated.
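Edit: to make the attempt concrete, here is roughly how far I can get in code, if I read Result 9.14 correctly that the epipole e' can be taken as the null vector of F-transpose (I am not sure that is valid without calibration). The point arrays below are placeholders for my actual matches. As far as I understand, this still only gives a projective reconstruction, and I don't see how to fold the known rotation and translation in to upgrade it to a similarity one:

```python
import cv2
import numpy as np

# Placeholders for matched points between the two views (N x 2 arrays).
pts1 = np.loadtxt("matches_view1.txt")
pts2 = np.loadtxt("matches_view2.txt")

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# Epipole e' in the second image: the null vector of F^T (e'^T F = 0).
_, _, Vt = np.linalg.svd(F.T)
e2 = Vt[-1]

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Canonical camera pair from Hartley & Zisserman, Result 9.14.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])

inliers = mask.ravel() == 1
X_h = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
X = (X_h[:3] / X_h[3]).T     # projective reconstruction only
```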
I am trying to write a program that stitches images using the SURF detector, and I would like to know the difference between the two homography estimators.
I understand that findHomography uses RANSAC; does HomographyBasedEstimator use RANSAC too?
If it doesn't, could someone point me to the paper HomographyBasedEstimator is based on?
Thanks in advance
The main difference between the two is that findHomography, as the name says, is used to find a homography, while HomographyBasedEstimator uses already-existing homographies to calculate the rotations of the cameras.
In other words, HomographyBasedEstimator doesn't find the homographies; it uses them to compute the camera motion and the other camera parameters, such as the focal lengths and optical centers.
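For the findHomography side, a typical stitching-style call looks roughly like the following. This is only a sketch: the image names are placeholders, and ORB stands in for SURF, which lives in the opencv_contrib build:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder images
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# ORB stands in for SURF here; SURF requires the opencv_contrib build.
detector = cv2.ORB_create(2000)
kp1, des1 = detector.detectAndCompute(img1, None)
kp2, des2 = detector.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# findHomography with RANSAC: H maps points in img1 to img2;
# mask flags the inlier matches that survived RANSAC.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inliers:", int(mask.sum()), "of", len(matches))
```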
I hope this helps.
Actually, findHomography is called inside BestOf2NearestMatcher.
The documentation doesn't seem to say, but it suggests that HomographyBasedEstimator estimates a rotation matrix, which corresponds to a special case of the homography and requires the focal length. If you're doing stitching, HomographyBasedEstimator is probably the way to go. (My guess is that it's doing RANSAC internally.)
I am a beginner when it comes to computer vision, so I apologize in advance. Basically, the idea I am trying to code is this: given two cameras that simulate a multiple-baseline stereo system, I am trying to estimate the pose of one camera given the other.
With both cameras looking at the same scene, I would add some noise to the pose of the second camera; given the clean image from camera 1, the slightly distorted/skewed image from camera 2, and the known baseline between the cameras, I would like to estimate the pose of camera 2. I have been reading up on homography matrices and the related OpenCV implementation, but I am just trying to get some suggestions about possible approaches. Most applications of the homography matrix that I have seen are about stitching or overlaying images, whereas here I am looking to recover the six-degrees-of-freedom attitude of the camera.
It would be great if someone could shed some light on these questions too: can such an approach be extended to more than two cameras? And is it also possible for both cameras to have some 'noise' in their pose and yet recover the 6-dof attitude at every instant?
Let's clear up your question first. I guess you are looking for the pose of one camera relative to the other camera's location. This is described by a homography only for pure camera rotation; for general motion that includes translation it is described by rotation and translation matrices. If the fields of view of the cameras overlap, the task can be solved with structure from motion, which still estimates only 5 dof, meaning the translation is recovered only up to scale. If there is a chessboard with known dimensions in the cameras' field of view, you can easily solve for 6 dof by running a PnP algorithm; of course, the cameras should be calibrated first. Finally, in 2008 Marc Pollefeys came up with an idea for estimating 6 dof from two moving cameras with non-overlapping fields of view without using any chessboards. To give you more detail, please say a bit more about the intended application.
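To illustrate the chessboard-plus-PnP route, a minimal sketch might look like this; the image name, intrinsics, board size, and square size below are all placeholders:

```python
import cv2
import numpy as np

# Placeholder inputs: a frame from the camera whose pose we want,
# plus its intrinsics K and distortion coefficients from calibration.
frame = cv2.imread("frame_cam2.jpg")
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

pattern_size = (9, 6)        # inner corners of the chessboard
square_size = 0.025          # metres per square (the known dimensions)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if not found:
    raise RuntimeError("chessboard not visible")

# 3-D board points in the board's own coordinate frame.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

# PnP gives the full 6-dof pose (rotation + metric translation)
# of the board relative to this camera.
ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("R =\n", R, "\nt =", tvec.ravel())
```

Running this for each camera against the same board and composing the two board-to-camera poses gives the relative 6-dof pose between the cameras, and the same procedure extends to any number of cameras that can see the board.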
I am using the OpenCV Haar cascade detector to track the head and overlay an image on it.
What I am doing is saving the frames captured by the camera and overlaying the image on each frame.
Time is not a constraint, as I am not doing it in real time.
My code works fine for roughly 45 degrees of head rotation to the left or right.
But I need something that will track up to 90 degrees of rotation.
I have found many references to OpenCV functions and links about estimating head pose,
so please point me to some references; code examples would be great.
Thanks in advance
You can use an algorithm like SURF (there are samples in the OpenCV package): run it over a reference picture of the face and then over each image, and then use the SURF descriptors to match the points and estimate the 3D position of the face in the image.
You can use the same code as the "find_obj" sample, just replacing the object image with the face picture you want to track.
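Along the lines of that sample, the matching-and-locating step might look roughly like this; ORB stands in for SURF here (SURF needs the opencv_contrib build), and the file names are placeholders:

```python
import cv2
import numpy as np

face_ref = cv2.imread("face_reference.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)              # placeholder

detector = cv2.ORB_create(1500)          # stand-in for SURF
kp1, des1 = detector.detectAndCompute(face_ref, None)
kp2, des2 = detector.detectAndCompute(frame, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the reference image's corners into the frame to see where
# (and with what in-plane rotation/skew) the face appears.
h, w = face_ref.shape
ref_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
face_outline = cv2.perspectiveTransform(ref_corners, H)
print(face_outline.reshape(-1, 2))
```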
Hope this helps.
There is a function in OpenCV called POSIT that permits estimating the pose of a 3D object in a single image. It implements the POSIT algorithm. Try having a look at it.
You could check the EHCI project at http://code.google.com/p/ehci/ as it gives a nice overview of POSIT and Lucas-Kanade.