I am working on an image registration method and I would like to work with region-based feature detectors. As a representative example, and because it is already implemented in OpenCV, I thought of MSER.
I know how to detect the MSER regions. The MSER detector returns each region as a vector of points, i.e. a contour. I would like to retrieve the centroids of these contours. I could fit an ellipse to them, but then I don't know either how I could retrieve the centroids of those ellipses.
Does someone know if there is an already implemented function that could take care of this task, or do I have to develop an algorithm myself?
The reason is that I would like to perform the point correspondence using these centroids as interest points.
Thanks
Iván
The centroid of the region can be computed by calculating the mean of all the x values and the mean of all the y values. The resulting (meanX, meanY) point is the region's centroid.
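A minimal sketch of that in Python, assuming the regions come from cv2.MSER_create().detectRegions(), where each region is an N x 2 array of (x, y) points; the input filename is a placeholder:

```python
import cv2

# Placeholder input; any grayscale image will do
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)  # each region: N x 2 array of (x, y) points

# The centroid of a region is the mean of its x values and the mean of its y values
centroids = [region.mean(axis=0) for region in regions]  # list of (meanX, meanY)
```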
I have a couple of spherical images, given in equirectangular projection, looking at the same object from different positions. I know the absolute pose of each image, e.g. the position in geographic coordinates and the roll/pitch/yaw angles. Given the pixel coordinates of a point in one image, I would like to find a way to draw the epipolar line (on which the corresponding point lies) in the other one.
I tried to work with the essential/fundamental matrix in Python using OpenCV, but I didn't figure out how to achieve this.
Any help is really appreciated.
Thanks
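Not a tested solution, but one possible sketch: with known poses, the relative pose gives an essential matrix E = [t]x R directly, and for spherical images the epipolar constraint d2^T E d1 = 0 holds on unit bearing vectors, so the epipolar "line" is a great circle that projects to a curve in equirectangular coordinates. The pose convention (world-to-camera rotations R1, R2 and camera centers C1, C2, already converted from geographic coordinates and roll/pitch/yaw) and the equirectangular mapping below are assumptions to check against your data:

```python
import numpy as np

def pixel_to_bearing(u, v, W, H):
    # Equirectangular pixel -> unit direction (one common convention; verify yours)
    lon = (u / W) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / H) * np.pi
    return np.array([np.cos(lat) * np.sin(lon), -np.sin(lat), np.cos(lat) * np.cos(lon)])

def bearing_to_pixel(d, W, H):
    # Inverse of the mapping above
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(-d[1], -1.0, 1.0))
    return ((lon + np.pi) / (2.0 * np.pi) * W, (np.pi / 2.0 - lat) / np.pi * H)

def epipolar_curve(u1, v1, R1, C1, R2, C2, W, H, n_samples=360):
    # Relative pose mapping camera-1 coordinates into camera-2 coordinates
    R = R2 @ R1.T
    t = R2 @ (C1 - C2)
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E = tx @ R                       # essential matrix: d2.T @ E @ d1 == 0
    n = E @ pixel_to_bearing(u1, v1, W, H)
    n /= np.linalg.norm(n)           # normal of the epipolar plane in camera 2
    # Build an orthonormal basis (a, b) of that plane and sample its great circle
    a = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(a) < 1e-8:
        a = np.cross(n, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(n, a)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_samples)
    return [bearing_to_pixel(np.cos(th) * a + np.sin(th) * b, W, H) for th in thetas]
```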
I have two images.
Say one is a 10x10 chessboard, which we call trainImage, and then there is another, queryImage, which is the same chessboard photographed using a phone camera. Now I have to find the position of the camera in (x, y, z) coordinates. Using OpenCV and feature detection I have been able to identify the chessboard in the photographed image, but how do I go ahead with calculating the transformation of the chessboard so that I can eventually calculate the position of the camera? Any pointers on where to start looking would also be really appreciated. Thanks.
Edit:
Reframing the problem statement: I have two images, trainImage and queryImage. I need to find the position of the camera, i.e. (x, y, z), for queryImage, assuming that trainImage is at (0, 0, 0). From some reading I found that for this I need rvec (the rotation vector) and tvec (the translation vector).
When I use the findHomography() function on the two images, I get a 3x3 homography matrix with which I can find the pixel points (x, y) in queryImage by multiplying the pixel points (x, y) in trainImage. How can I use this homography matrix to calculate tvec and rvec?
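If the camera intrinsics K are known, one option is cv2.decomposeHomographyMat, which splits H into candidate rotations and translations. A hedged sketch; the intrinsics and the matched point arrays train_pts/query_pts below are placeholders:

```python
import cv2
import numpy as np

# Placeholder intrinsics; use your calibrated camera matrix instead
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# train_pts, query_pts: matched N x 2 point arrays (assumed available)
H, mask = cv2.findHomography(train_pts, query_pts, cv2.RANSAC)

# decomposeHomographyMat returns up to four (R, t, n) candidates; the physically
# valid one must still be selected (e.g. points must lie in front of the camera)
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
rvec, _ = cv2.Rodrigues(rotations[0])  # rotation vector for the first candidate
tvec = translations[0]
```

Since the object is a chessboard with known geometry, cv2.solvePnP on the detected corner coordinates is usually the more direct route to rvec and tvec.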
How do you get the centroid of an irregular shape using OpenCV?
I'd recommend looking at the cv::Moments (C++) or cvMoments (C) function.
This StackOverflow thread gives some example code for a problem very similar to yours.
This post goes into some of the theory related to finding object center-points.
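For reference, a minimal sketch of the moments approach in Python (the C++ API is analogous); contour is assumed to come from cv2.findContours:

```python
import cv2

# contour: one contour as returned by cv2.findContours (assumed available)
m = cv2.moments(contour)
if m["m00"] != 0:  # m00 is the area; zero for degenerate contours
    cx = m["m10"] / m["m00"]  # centroid x
    cy = m["m01"] / m["m00"]  # centroid y
```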
What do you mean by centroid?
If it's the center of mass, you can compute the average of the coordinates of the points that are inside your shape. But the center of mass can be outside the shape, for "irregular" (non-convex) shapes.
If you want the point inside the shape that is farthest away from any contour point, you can have a look at the distanceTransform function.
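A hedged sketch of that idea: rasterize the shape into a mask, run cv2.distanceTransform, and take the pixel farthest from the background (height and width are placeholders for the source image size):

```python
import cv2
import numpy as np

# contour: a single contour; height/width: source image size (assumed known)
mask = np.zeros((height, width), dtype=np.uint8)
cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)

# Distance of every foreground pixel to the nearest background pixel
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
y, x = np.unravel_index(np.argmax(dist), dist.shape)  # most interior point (x, y)
```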
I have a small cube with n (you can assume that n = 4) distinguished points on its surface. These points are numbered (1-n) and form a coordinate space, where point #1 is the origin.
Now I'm using a tracking camera to get the coordinates of those points, relative to the camera's coordinate space. That means that I now have n vectors p_i pointing from the origin of the camera to the cube's surface.
With that information, I'm trying to compute the affine transformation matrix (rotation + translation) that represents the transformation between those two coordinate spaces. The translation part is fairly trivial, but I'm struggling with the computation of the rotation matrix.
Is there any built-in functionality in OpenCV that might help me solve this problem?
Sounds like cvGetPerspectiveTransform is what you're looking for; cvFindHomography might also be helpful.
solvePnP should give you the rotation matrix and the translation vector. Try it with CV_EPNP or CV_ITERATIVE.
Edit: Or perhaps you're looking for RQ decomposition.
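A sketch of the solvePnP route, assuming you also have the 2D projections of the markers in a calibrated camera image (the Python flag names correspond to the C constants mentioned above); if your tracking camera outputs 3D coordinates directly, see the rigid-alignment sketch after the next answer:

```python
import cv2

# object_points: n x 3 marker positions in the cube's coordinate space
# image_points:  n x 2 projections of those markers in the camera image
# K, dist:       intrinsics and distortion coefficients from calibration
# (all of the above are assumed to be available)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector
```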
Look at the Stereo Camera tutorial for OpenCV. OpenCV uses a planar chessboard for all the computation and sets its Z-dimension to 0 to build its list of 3D points. You already have 3D points so change the code in the tutorial to reflect your list of 3D points. Then you can compute the transformation.
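Since the tracking camera already gives 3D coordinates, another option (not mentioned in the answers above) is to align the two 3D point sets directly with the Kabsch/Procrustes method; a sketch:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch algorithm: find R, t minimizing ||R @ p_i + t - q_i||.
    P: n x 3 points in the cube's frame; Q: the same points in the camera's frame."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

OpenCV's cv2.estimateAffine3D solves a related problem with RANSAC, but it fits a general affine transform rather than a rigid one.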
I found contours of the same object in two images, and I want to find the displacement and rotation of this object. I've tried using the rotated bounding boxes of these contours and then their angles and center points, but the bounding-box rotations don't describe the contour rotation correctly, because the box is the same for angles a+0, a+90, a+180, etc. degrees.
Is there any other good way to find the rotation and displacement of contours? Maybe some use of the convex hull or convexity defects? I've read about matching contours in Learning OpenCV, but it hasn't helped. Could someone give an example?
//edit:
Maybe there is some way to use something similar to Freeman chain codes for this? But I can't figure out the algorithm at the moment. Building a chain of the angles between consecutive points and then checking for a sequence match isn't working well...
If the object has convexity defects, then you could choose one defect and make a vector from the centroid of the first contour to the centroid of this defect.
Then you could check the defects in the second contour and match the one that you used before. Again, make a vector from the centroid of the contour to the centroid of the matched defect.
From this you get two segments (vectors), from which you can obtain the displacement and the rotation.
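A hedged sketch of that idea in Python, simplified to use the far point of the deepest convexity defect as the "defect centroid" (this assumes the deepest defect corresponds to the same physical feature in both contours; the matching step described above is more robust):

```python
import cv2
import numpy as np

def centroid(cnt):
    m = cv2.moments(cnt)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def anchor_vector(cnt):
    # Vector from the contour centroid to the far point of the deepest defect
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)  # rows: (start, end, far, depth)
    far_idx = defects[defects[:, 0, 3].argmax(), 0, 2]
    return cnt[far_idx, 0] - centroid(cnt)

# cnt1, cnt2: the object's contour in each image (assumed available)
v1, v2 = anchor_vector(cnt1), anchor_vector(cnt2)
displacement = centroid(cnt2) - centroid(cnt1)
rotation = np.degrees(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))
```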