Now, I have a stitching project. I need to find the best stitching seam, but I don't understand the function below. I have already looked at the illustration in the OpenCV documentation, but I find it unclear.
seam_finder = new detail::GraphCutSeamFinder(GraphCutSeamFinderBase::COST_COLOR);
seam_finder->find(images_warped_f, corners, masks_warped);
Can someone help me and tell me the meaning of images_warped_f and corners? Thank you so much!
Every image has a defined type; you can check here for more information. images_warped_f is the vector of images converted to type 5 (CV_32F, i.e. 32-bit float).
corners is the vector of top-left corner points of the images you are trying to stitch. If you use the SphericalWarper to warp your image, the warp function performs a spherical projection and returns the top-left corner of the resulting image. You can check here for reference.
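For illustration, here is how those two arguments are typically produced, condensed from OpenCV's stitching_detailed.cpp sample; images, cameras and warped_image_scale are assumed to exist from the earlier pipeline stages:
int num_images = static_cast<int>(images.size());
std::vector<cv::Point> corners(num_images);
std::vector<cv::Mat> images_warped(num_images), masks_warped(num_images);
std::vector<cv::Mat> masks(num_images);
for (int i = 0; i < num_images; ++i) {          // plain white masks
    masks[i].create(images[i].size(), CV_8U);
    masks[i].setTo(cv::Scalar::all(255));
}
cv::Ptr<cv::WarperCreator> warper_creator = new cv::SphericalWarper();
cv::Ptr<cv::detail::RotationWarper> warper =
    warper_creator->create(static_cast<float>(warped_image_scale));
for (int i = 0; i < num_images; ++i) {
    cv::Mat K;
    cameras[i].K().convertTo(K, CV_32F);
    // warp() returns the top-left corner of the warped image in the common
    // panorama coordinate frame -- this is what goes into `corners`
    corners[i] = warper->warp(images[i], K, cameras[i].R,
                              cv::INTER_LINEAR, cv::BORDER_REFLECT, images_warped[i]);
    warper->warp(masks[i], K, cameras[i].R,
                 cv::INTER_NEAREST, cv::BORDER_CONSTANT, masks_warped[i]);
}
// images_warped_f: the warped images converted to 32-bit float (type 5)
std::vector<cv::Mat> images_warped_f(num_images);
for (int i = 0; i < num_images; ++i)
    images_warped[i].convertTo(images_warped_f[i], CV_32F);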
I have a question regarding undistortion of a single point using either Scaramuzza's or Mei's model in OpenCV.
I have done the calibration on a dataset and extracted the camera matrix and distortion coefficients (for Mei) and the necessary parameters for Scaramuzza. After getting mapx (map1) and mapy (map2), I want to apply the undistortion to a single point.
For Mei:
We have the position of a point (an intersection on a chessboard) in a fisheye image; I was able to find it using findChessboardCorners (I know this is normally used for calibration, but here I just want the position of a well-known point in the image). Now I have the undistorted image, and I want to know the position of that point after the distortion correction.
I have read many links suggesting the undistortPoints method or the remap method, and links describing that dst(x,y) = src(mapx(x,y), mapy(x,y)). I applied them all, but when I draw the resulting point it is not on the same intersection of the chessboard; it is even outside the board, closer to its position in the fisheye image.
For Scaramuzza:
I tried to understand the world2cam and cam2world methods, but I still can't get them right.
So: is there a method to get the position of a single point after distortion correction, given its position before correction? Also, can someone explain mapx and mapy in depth? I have read examples of how they are used, but whenever I try to implement the mapping between a distorted point and its undistorted counterpart I get confused. For example, mapx and mapy should have the size of the source (in my case a single point), so how can I use the remap method here? Or should I derive them from the camera matrix and distortion coefficients and use dst(x,y) = src(map1(x,y), map2(x,y))?
Note:
I have applied estimateNewCameraMatrixForUndistortRectify, initUndistortRectifyMap and remap successfully on whole images (for Mei's model), and I have also applied the undistortion method implemented by Scaramuzza on images, with a very satisfying result (better than Mei's).
I was able to solve it with OpenCV's undistortPoints function. The problem was that I was not using fisheye::undistortPoints but the original one. The surrounding points are still not exactly in the right position, but the result is acceptable.
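For reference, a minimal sketch of the working call; K, D and newK are assumed to come from cv::fisheye::calibrate and cv::fisheye::estimateNewCameraMatrixForUndistortRectify. Note that the maps used by remap are indexed by destination pixel (remap computes dst(x,y) = src(mapx(x,y), mapy(x,y)), i.e. they go from the undistorted image back to the distorted one), which is why a distorted point cannot simply be looked up in them; for single points, undistortPoints is the right tool.
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: K and D are the fisheye intrinsics and distortion
// (k1..k4), newK the new camera matrix of the undistorted image.
static cv::Point2f undistortSinglePoint(const cv::Point2f& pt, const cv::Mat& K,
                                        const cv::Mat& D, const cv::Mat& newK) {
    std::vector<cv::Point2f> in(1, pt), out;
    // The fisheye variant, not the plain cv::undistortPoints. Passing newK
    // as P yields pixel coordinates in the undistorted image; without P the
    // result would be in normalized coordinates.
    cv::fisheye::undistortPoints(in, out, K, D, cv::noArray(), newK);
    return out[0];
}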
I am learning JavaCV and want to extract parts of images dynamically based on color.
As identification, I outline the region I need to extract with a color. Is there any way I can extract an ROI based on a color outline? Any help is appreciated.
Here is the Sample Image
It is quite simple. Since your figure has 4 corners, you ought to follow these steps:
1. Identify the orientation of the image and store the points in a MatOfPoint2f in a specific order (clockwise or anticlockwise). For the ordering you can use Math.atan2(p(y) - center(y), p(x) - center(x)) and sort the points by the result of that expression; find the center point by averaging all the x and y coordinates, or use any method you prefer.
2. Create a MatOfPoint2f containing the corner coordinates of the size you want the cropped result image to have.
3. Use Imgproc.getPerspectiveTransform() to compute the transform matrix.
4. Finally, use Imgproc.warpPerspective() to obtain the desired output.
And for detecting the border of the ROI, the best way to go is to threshold the image using a specific color range, so as to extract only the part of the spectrum that is required, as in the sketch below.
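Here is a minimal C++ sketch of those four steps plus the thresholding (JavaCV mirrors this API almost one-to-one); the file names, the HSV range for the outline color, and the 400x300 output size are illustrative assumptions:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main() {
    cv::Mat src = cv::imread("input.jpg");

    // Threshold on the outline color to isolate the border; this HSV range
    // is an assumption for a red outline -- adjust it to your marker color.
    cv::Mat hsv, mask;
    cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);

    // Find the largest contour and approximate it down to 4 corners.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    std::vector<cv::Point> approx;
    cv::approxPolyDP(*largest, approx, 0.02 * cv::arcLength(*largest, true), true);
    // Step 1: `approx` should now hold 4 points; sort them clockwise around
    // their centroid with atan2 as described above (omitted for brevity).
    std::vector<cv::Point2f> corners(approx.begin(), approx.end());

    // Step 2: corner coordinates of the desired output size.
    std::vector<cv::Point2f> dst = { {0, 0}, {400, 0}, {400, 300}, {0, 300} };

    // Steps 3 and 4: compute the transform and warp to crop.
    cv::Mat M = cv::getPerspectiveTransform(corners, dst);
    cv::Mat cropped;
    cv::warpPerspective(src, cropped, M, cv::Size(400, 300));
    cv::imwrite("cropped.jpg", cropped);
    return 0;
}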
I would like to measure the displacement of an object between two images. The displacement can be anything in the image plane. The result should give the displacement, if possible with sub-pixel accuracy.
There are some assumptions that should make it easier, but they haven't helped me so far:
- the camera objective is virtually distortion-free (telecentric) and oriented perpendicular to the object plane
- the object plane never changes
- the flat marker object (could be a known image, e.g. a playing card) is always in the object plane, so it isn't scaled or warped -> only rotation and translation change
My first approach was to take the feature recognition example from EmguCV: find the object in the first image, take the relevant piece of that picture, use it as a template, and search for it in the second image. This did work, but it was a little unsatisfactory. There was scaling and warping in the homography matrix (probably because some points were matched incorrectly), and the placement accuracy was quite bad.
I tried this once with the demo of the commercial image processing software Halcon, and it worked like a charm with sub-pixel accuracy. There you can do a sort of least-squares fit of a template to the image you are searching in. The result is an affine transform matrix, and it is very precise.
Is there something comparable in EmguCV/OpenCV?
Thank you in advance!
Edit:
Found the solution in EmguCV in the function
CameraCalibration.EstimateRigidTransform(PointF[] src, PointF[] dest, bool fullAffine);
with fullAffine set to false. My problem before was that I was using
Features2DToolbox.GetHomographyMatrixFromMatchedFeatures();
from the matching example.
The only problem left was the small scaling still produced by EstimateRigidTransform, but I was able to calculate it out of the result.
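For reference, a minimal C++ sketch of the same approach (EmguCV's CameraCalibration.EstimateRigidTransform wraps OpenCV's estimateRigidTransform; newer OpenCV versions call it estimateAffinePartial2D). The decomposition shows how the residual scale can be calculated out of the result:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Hypothetical helper: fit a partial affine (rotation + translation +
// uniform scale) to matched points and decompose it.
static void fitRigid(const std::vector<cv::Point2f>& src,
                     const std::vector<cv::Point2f>& dst) {
    // With fullAffine=false the 2x3 result has the form
    //   [ s*cos(t)  -s*sin(t)  tx ]
    //   [ s*sin(t)   s*cos(t)  ty ]
    cv::Mat M = cv::estimateRigidTransform(src, dst, /*fullAffine=*/false);
    double a = M.at<double>(0, 0), b = M.at<double>(1, 0);
    double scale = std::sqrt(a * a + b * b);  // the small residual scaling
    double angle = std::atan2(b, a);          // rotation in radians
    double tx = M.at<double>(0, 2), ty = M.at<double>(1, 2);
    // Dividing a and b by `scale` removes the scaling from the transform.
}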
Does someone have an idea how to get the size and the position of an object? The object is detected in a binary image as white pixels:
For example: Detected / Original
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/segmentation/2_sal/0_12_12171.jpg
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/comparison/orig/0_12_12171.jpg
I know about the cvMoments method, but I don't know how to use it in this case.
By the way: how can I make my mask cleaner?
Simple algorithm:
1. Delete small areas of white pixels using morphological operations (erosion).
2. Use findContours to find all contours.
3. Use countNonZero or contourArea to find the area of each contour.
4. Cycle through all points of each contour and find their mean. This will be the center of the contour.
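A minimal sketch of these four steps; the file name and kernel size are illustrative choices:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // 1. Erode (then dilate back) to delete small areas of white pixels.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::erode(mask, mask, kernel);
    cv::dilate(mask, mask, kernel);

    // 2. Find all external contours.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const std::vector<cv::Point>& c : contours) {
        // 3. Area of the contour.
        double area = cv::contourArea(c);

        // 4. Mean of the contour points as the center.
        cv::Point2f center(0.f, 0.f);
        for (const cv::Point& p : c) { center.x += p.x; center.y += p.y; }
        center.x /= c.size();
        center.y /= c.size();

        // boundingRect gives the size and position in one call.
        cv::Rect box = cv::boundingRect(c);
    }
    return 0;
}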
If the object is the tree, you should delete the small areas using morphology, as Astor wrote.
An alternative for finding the mass and the mass center is to use moments:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
m00, as the documentation says, is the mass.
There are also formulas there for the mass center: it is (m10/m00, m01/m00).
This approach works when only your object remains in the image after segmentation.
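A minimal sketch of this moments-based variant, with an assumed mask file name:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    double area = m.m00;                                 // the "mass"
    cv::Point2d center(m.m10 / m.m00, m.m01 / m.m00);    // the mass center
    return 0;
}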
I am looking for the right set of algorithms to solve this image processing problem:
I have a distorted binary image containing a distorted rectangle
I need to find a good approximation of the 4 corner points of this rectangle
I can calculate the contour using OpenCV, but since the image is distorted, it will often contain more than 4 corner points.
Is there a good approximation algorithm (preferably using OpenCV operations) to find the rectangle corner points using the binary image or the contour description?
The image looks like this:
Thanks!
Dennis
Use the cvApproxPoly function to reduce the number of nodes in your contour, then filter out contours that have too many nodes or whose angles differ too much from 90 degrees; a sketch of this filter follows. See also this similar answer.
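A hedged sketch of that filter; the epsilon factor and the angle tolerance are illustrative values, and the cosine trick is the same one used in OpenCV's squares.cpp sample:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Cosine of the angle at vertex p1 between the rays to p0 and p2.
static double angleCos(cv::Point p0, cv::Point p1, cv::Point p2) {
    double dx1 = p0.x - p1.x, dy1 = p0.y - p1.y;
    double dx2 = p2.x - p1.x, dy2 = p2.y - p1.y;
    return (dx1 * dx2 + dy1 * dy2) /
           std::sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}

static bool looksLikeRectangle(const std::vector<cv::Point>& contour) {
    std::vector<cv::Point> approx;
    cv::approxPolyDP(contour, approx, 0.02 * cv::arcLength(contour, true), true);
    if (approx.size() != 4) return false;   // too many (or too few) nodes
    for (int i = 0; i < 4; ++i) {
        double c = std::fabs(angleCos(approx[(i + 3) % 4], approx[i], approx[(i + 1) % 4]));
        if (c > 0.3) return false;          // angle differs too much from 90 degrees
    }
    return true;
}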
A slightly different answer; see:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
Look at the OpenCV function ApproxPoly. It approximates a polygon from a contour.
Try the Harris corner detector. There is an example in the OpenCV package; you will need to play with the parameters for your image.
And see the other OpenCV algorithms: http://www.comp.leeds.ac.uk/vision/opencv/opencvref_cv.html#cv_imgproc_features
I would try the generalised Hough transform; it is a bit slow, but it deals well with distorted/incomplete shapes.
http://en.wikipedia.org/wiki/Hough_transform
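Newer OpenCV versions ship a Ballard implementation of it; a minimal sketch, where the file names and the votes threshold are assumptions to tune:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat templ = cv::imread("rect_template.png", cv::IMREAD_GRAYSCALE);
    cv::Mat image = cv::imread("binary_input.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::GeneralizedHoughBallard> gh = cv::createGeneralizedHoughBallard();
    gh->setTemplate(templ);
    gh->setVotesThreshold(50);        // assumption: tune for your image

    // Each entry is (x, y, scale, angle); the Ballard variant assumes fixed
    // scale and rotation -- use GeneralizedHoughGuil if those vary.
    std::vector<cv::Vec4f> positions;
    gh->detect(image, positions);
    return 0;
}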
This will work even if you start with some defects, e.g. your approxPolyDP call returns pentagons/hexagons. It will reduce any contour (transContours in the example) to a quad, or whatever polygon you wish.
std::vector<cv::Point> cardPoly; // quad storage
double simplicity = 0.5; // epsilon increment: lower values are more precise, higher values cycle faster
// Keep simplifying until the contour has been reduced to 4 points.
// Using > 4 (rather than != 4) stops the loop even if the point count
// skips past 4, since a growing epsilon never adds points back.
do
{
    cv::approxPolyDP(transContours, cardPoly, simplicity, true);
    simplicity += 0.5;
}
while (cardPoly.size() > 4);