Draw epipolar lines for spherical images with known pose - opencv

I have a couple of spherical images, given in equirectangular projection, looking at the same object from different positions. I know the absolute pose of each image, e.g. position in geographical coordinates and roll/pitch/yaw angles. Given the pixel coordinates of a point in one image, I would like to find a way to draw the epipolar line (on which the corresponding point lies) in the other one.
I tried to work with the essential/fundamental matrix in Python using OpenCV, but I didn't figure out how to achieve this.
Any help is really appreciated.
Thanks
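
For reference, here is a minimal sketch of one way to approach this with NumPy, under stated assumptions: a world-to-camera pose convention (x_cam = R·x_world + t), one common equirectangular axis convention, and placeholder image size, pixel coordinates and poses. A pixel in image 1 is lifted to a unit ray on the sphere; the essential matrix E = [t]x·R then gives the normal of the epipolar great circle on the second sphere, which is sampled and mapped back to pixel coordinates:

    import numpy as np

    W, H = 4096, 2048   # equirectangular image size (placeholder)

    def pix_to_ray(u, v):
        # Assumed convention: longitude spans [-pi, pi) across the width,
        # latitude spans [pi/2, -pi/2] down the height.
        lon = (u / W - 0.5) * 2.0 * np.pi
        lat = (0.5 - v / H) * np.pi
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

    def ray_to_pix(d):
        d = d / np.linalg.norm(d)
        lon = np.arctan2(d[1], d[0])
        lat = np.arcsin(d[2])
        return ((lon / (2.0 * np.pi) + 0.5) * W, (0.5 - lat / np.pi) * H)

    def skew(t):
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def epipolar_curve(u, v, E, n_samples=512):
        d1 = pix_to_ray(u, v)
        n = E @ d1                          # normal of the epipolar great circle
        n = n / np.linalg.norm(n)
        a = np.cross(n, [0.0, 0.0, 1.0])    # two unit vectors spanning the
        if np.linalg.norm(a) < 1e-8:        # plane perpendicular to n
            a = np.cross(n, [1.0, 0.0, 0.0])
        a = a / np.linalg.norm(a)
        b = np.cross(n, a)
        ts = np.linspace(0.0, 2.0 * np.pi, n_samples)
        return [ray_to_pix(np.cos(t) * a + np.sin(t) * b) for t in ts]

    # Placeholder poses, world-to-camera: x_cam = R x_world + t.
    R1, t1 = np.eye(3), np.zeros(3)
    R2, t2 = np.eye(3), np.array([1.0, 0.0, 0.0])
    R = R2 @ R1.T                        # relative pose of camera 2 w.r.t. camera 1
    t = t2 - R @ t1
    E = skew(t) @ R                      # essential matrix, x2^T E x1 = 0

    curve = epipolar_curve(1000, 800, E)   # epipolar curve in image 2 for pixel (1000, 800)

When drawing the sampled points as a polyline, split it wherever consecutive samples wrap around the left/right image border.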

Related

Undistort single point after calibration

I have a question regarding the undistortion of a single point using either Scaramuzza's or Mei's model in OpenCV.
I have done the calibration on a dataset and extracted the camera matrix and distortion coefficients (for Mei) and the necessary parameters for Scaramuzza. After getting mapx (map1) and mapy (map2), I want to apply the undistortion to a single point.
for mei:
We have the position of a point (an intersection on a chessboard) in a fisheye image; I was able to find it using findChessboardCorners (I know this is normally used for calibration, but here I just want the position of a well-known point in the image). Now I have the undistorted image and I want to know the position of that point after the distortion correction.
I have read many links suggesting the undistortPoints method or the remap method, and links describing that dst(x,y) = src(mapx(x,y), mapy(x,y)). I applied them all, but when I drew the resulting point it wasn't on the same intersection of the chessboard; it was even outside the board, closer to its position in the fisheye image.
for Scaramuzza:
I tried to understand the world2cam and cam2world methods, but I still can't get them right.
So: is there a method to find the position of a single point after distortion correction, given its position before the correction? Also, can someone explain mapx and mapy in depth? I have read examples of how they can be used, but whenever I tried to implement the mapping between the distorted point and the undistorted one I got confused. For example, mapx and mapy should have the size of the src (in my case a single point), so how can I use the remap method here? Or should I derive them from the camera matrix and distortion coefficients and use dst(x,y) = src(map1(x,y), map2(x,y))?
Note: I have applied estimateNewCameraMatrixForUndistortRectify, initUndistortRectifyMap and remap successfully on images (for Mei's model), and I have also applied the undistortion method implemented by Scaramuzza on images, with a very satisfying result (better than Mei's).
I was able to solve it with OpenCV's undistortPoints function. The problem was that I was using the original undistortPoints instead of fisheye::undistortPoints. The surrounding points are still not exactly in their right positions, but the result is acceptable.
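
For anyone hitting the same issue, a minimal sketch of the fisheye call; the intrinsics and the point below are placeholders standing in for the output of an assumed cv2.fisheye.calibrate run:

    import cv2
    import numpy as np

    # Placeholder intrinsics; use the K and D from cv2.fisheye.calibrate.
    K = np.array([[520.0, 0.0, 640.0],
                  [0.0, 520.0, 360.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([0.1, -0.05, 0.01, -0.002])   # k1..k4 fisheye coefficients

    # A detected chessboard corner in the distorted image, shape (1, N, 2).
    pts = np.array([[[421.3, 287.9]]], dtype=np.float64)

    # P re-projects the result into pixel coordinates of the undistorted
    # image; without it the result is in normalized camera coordinates.
    undistorted = cv2.fisheye.undistortPoints(pts, K, D, P=K)
    print(undistorted[0, 0])

If the image itself was undistorted with a new camera matrix from estimateNewCameraMatrixForUndistortRectify, pass that matrix as P instead of K so the point lands in the same pixel grid as the undistorted image.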

Calculating transformation of an object in an image using OpenCV

I have two images.
Say one is a 10x10 chessboard image, which we call trainImage, and the other is a queryImage, which is the same chessboard photographed with a phone camera. Now I have to find the position of the camera in (x, y, z) coordinates. Using OpenCV and feature detection I have been able to identify the chessboard in the photographed image, but how do I go about calculating the transformation of the chessboard so that I can eventually calculate the position of the camera? Any pointers on where to start looking would be really appreciated. Thanks.
Edit:
Reframing the problem statement: I have two images, trainImage and queryImage. I need to find the position of the camera, i.e. (x, y, z), for queryImage if we assume that trainImage is at (0, 0, 0). From some reading, I found that I need rvec (rotation vector) and tvec (translation vector).
When I use the findHomography() function on the two images I get a 3x3 homography matrix, with which I can find pixel points (x, y) in queryImage by multiplying pixel points (x, y) in trainImage. How can I use this homography matrix to calculate tvec and rvec?
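
If the camera intrinsics K are known from calibration, OpenCV can decompose a plane-induced homography directly into candidate rotations and translations; a minimal sketch, where K and H below are placeholders:

    import cv2
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],   # placeholder intrinsics from calibration
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    H = np.eye(3)                        # replace with cv2.findHomography(trainPts, queryPts)[0]

    # Up to four candidate (R, t, n) solutions; keep the physically plausible
    # one, e.g. the one whose plane normal n points towards the camera.
    num, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, K)
    for R, t, n in zip(Rs, Ts, Ns):
        rvec, _ = cv2.Rodrigues(R)       # rotation matrix -> rotation vector
        print(rvec.ravel(), t.ravel(), n.ravel())

Note that the translation from a homography decomposition is only known up to the (unknown) distance of the plane. Since the chessboard's geometry is known, detecting its corners and calling cv2.solvePnP with the board's real 3D corner coordinates is the more direct route to a metric rvec and tvec.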

openCV method or standard practice to get size of a rectangle in 3d space

I need to find the size or coordinates of a rectangle that appears as a quadrilateral in an image of a 3D scene. The quadrilateral lies on a plane that lines up with the 3D world's vanishing points. To clarify, the quadrilateral IS a rectangle in the 3D world, and that's the rectangle whose size I want.
I do not need to get all the textures and make a new image. I also do not know the coordinates of the target rectangle as required by the homography (perspective transformation) solutions I've seen, because I don't know the aspect ratio it's supposed to have.
I've read through this thread: proportions of a perspective-deformed rectangle, and the guy seems to have found an algorithm that works. However, I've read other research papers that claim to calculate a homography, yet they don't say how they did it. Also, it seems such a basic function that there should be something in the existing OpenCV library.
Thanks.
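
One way to implement the approach from that thread, sketched under strong assumptions: square pixels, zero skew, principal point at the image centre, and finite vanishing points (i.e. neither pair of opposite sides is parallel in the image). All coordinates below are placeholders. Two orthogonal vanishing points give the focal length; back-projecting the corners onto the rectangle's plane then gives the aspect ratio:

    import numpy as np

    def intersect(p1, p2, p3, p4):
        # Intersection of line p1-p2 with line p3-p4, homogeneous coordinates.
        l1 = np.cross(np.append(p1, 1.0), np.append(p2, 1.0))
        l2 = np.cross(np.append(p3, 1.0), np.append(p4, 1.0))
        v = np.cross(l1, l2)
        return v[:2] / v[2]   # breaks down if the sides are parallel in the image

    # Image corners of the quadrilateral, ordered TL, TR, BR, BL (placeholders).
    c = np.array([[252.0, 110.0], [840.0, 150.0], [800.0, 520.0], [180.0, 470.0]])
    cx, cy = 512.0, 384.0     # principal point, assumed at the image centre

    v1 = intersect(c[0], c[1], c[3], c[2])   # vanishing point of the "horizontal" sides
    v2 = intersect(c[0], c[3], c[1], c[2])   # vanishing point of the "vertical" sides

    # Orthogonal vanishing points constrain the focal length:
    # (v1 - c) . (v2 - c) = -f^2  for square pixels and zero skew.
    f = np.sqrt(-((v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)))

    Kinv = np.linalg.inv(np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]))
    rays = [Kinv @ np.array([x, y, 1.0]) for x, y in c]
    # The plane normal is the cross product of the two vanishing directions.
    n = np.cross(Kinv @ np.append(v1, 1.0), Kinv @ np.append(v2, 1.0))

    # Intersect each corner ray with the plane through the first corner at unit depth.
    P0 = rays[0] / rays[0][2]
    pts = [r * (n @ P0) / (n @ r) for r in rays]

    width = np.linalg.norm(pts[1] - pts[0])
    height = np.linalg.norm(pts[3] - pts[0])
    print("aspect ratio (width/height):", width / height)

The absolute size is unrecoverable from a single image without a known distance or reference length; this sketch yields the aspect ratio, and 3D coordinates up to a global scale.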

centroid ellipse MSER OPENCV

I am working on an image registration method and I would like to work with region-based feature detectors. As a representative, and because it is already implemented in OpenCV, I thought of MSER.
I know how to detect the MSER regions. The MSER detector returns each region as a vector of points, i.e. a contour, and I would like to retrieve the centroid of these contours. I could fit an ellipse to each of them, but then I don't know how to retrieve the centroid of those ellipses either.
Does someone know if there is an already implemented function that could take care of this task, or do I have to develop an algorithm?
The reason is that I would like to perform the point correspondence using these centroids as interest points.
Thanks
Iván
The centroid of the region can be computed by calculating the mean of all the x values and the mean of all the y values. The resulting (meanX, meanY) point is the region's centroid.
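
A minimal sketch in Python (the filename is a placeholder): each region returned by detectRegions is an Nx2 array of pixel coordinates, so the centroid is just the column-wise mean, and the centre returned by fitEllipse can serve the same purpose:

    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(img)

    # Column-wise mean of each region's points -> (meanX, meanY) centroid.
    centroids = [r.mean(axis=0) for r in regions]

    # Alternatively, fit an ellipse (needs at least 5 points); its returned
    # centre, e[0] in ((cx, cy), (axis1, axis2), angle), is the interest point.
    centres = [cv2.fitEllipse(r)[0] for r in regions if len(r) >= 5]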

Calculating object position from image processing

I am looking for an efficient way to calculate the position of an object on a surface, based on an image taken from a certain perspective.
Let me explain a little further.
There is an object on a rectangular flat surface.
I have a picture taken of this setup with the camera positioned at one of the corners of the surface area at a rather low angle.
On the picture I will thus see a somewhat distorted, diamond-shaped view of the surface area and somewhere on it the object.
Through some image processing I do have the coordinates of the object on the picture but now have to calculate the actual position of the object on the surface.
So I know that the center of the object is at pixel coordinates (x, y) in the picture, and I know the coordinates of the 4 reference points that represent the corners of the area.
How can I now calculate the "real world" position of the object most efficiently (x and y coordinates on the surface)?
Any input is highly appreciated since I have worked so hard on this I can't even think straight anymore.
Best regards,
Tom
You have to find a perspective transformation.
Here you may find an explanation and code in Matlab
HTH!
How good is your linear algebra? A perspective transformation can be described by a homography matrix. You can estimate that matrix using the four corner points, invert it, and then calculate the world coordinates of every pixel in your image.
Or you can just let OpenCV do that for you.
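
Concretely, with OpenCV this is a four-point perspective transform; a minimal sketch, where the corner coordinates and surface dimensions are placeholders:

    import cv2
    import numpy as np

    # The four surface corners in the picture and their known real-world
    # positions on the surface (placeholder values, e.g. metres).
    img_corners = np.float32([[110, 300], [520, 290], [600, 450], [40, 470]])
    world_corners = np.float32([[0, 0], [2.0, 0], [2.0, 1.0], [0, 1.0]])

    Hmat = cv2.getPerspectiveTransform(img_corners, world_corners)

    # Map the detected object centre from pixel to surface coordinates;
    # perspectiveTransform expects points with shape (N, 1, 2).
    obj_px = np.float32([[[340, 380]]])
    obj_world = cv2.perspectiveTransform(obj_px, Hmat)
    print("object position on the surface:", obj_world[0, 0])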
