I am trying to create a panorama and I am stuck on the part where I have two separate warped images in two cv::Mats, and now I need to align them and create one single cv::Mat. I also need to average the pixel color values where the pixels in the images overlap, to do elementary blending. Is there a built-in function in OpenCV that can do this for me? I have been following the Basic Stitching Pipeline. I'm not sure how I can align and blend the images. I looked up a solution that does feature matching between the images, computes the homography, and then uses just the translation vector to align the images. Is this what I should be using?
Here are the warped images:
Image 1:
Image 2:
Generating a panorama from a set of images is usually done using homographies. The reason for this is explained very well here.
You can refer to the code given by Eduardo here. It is also based on feature matching though.
You are right: you need to start by finding descriptors for features in the images (the BRIEF descriptor might be a good idea) and then do feature matching. Once you have the correspondences, you use them to estimate the homography. The homography lets you warp one image with respect to the other. After this, you can simply blend the two images together (by adding them, or by taking the maximum value at each pixel of the two images).
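To make that concrete, here is a minimal OpenCV C++ sketch of the whole pipeline, assuming two overlapping images on disk (the file names "left.jpg"/"right.jpg", the use of ORB with a brute-force matcher, and the double-width output canvas are my assumptions, not part of the original pipeline); it averages pixel values in the overlap, as the question asks, rather than taking the maximum:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("left.jpg");   // reference image
    cv::Mat img2 = cv::imread("right.jpg");  // image to be warped onto img1

    // 1. Detect keypoints and compute descriptors (ORB here; BRIEF/SIFT also work).
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // 2. Match descriptors (Hamming norm for binary descriptors, cross-check on).
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // 3. Estimate the homography that maps img2 into img1's frame (RANSAC).
    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // 4. Warp img2 onto a canvas wide enough for both images.
    cv::Mat pano;
    cv::warpPerspective(img2, pano, H, cv::Size(img1.cols * 2, img1.rows));

    // 5. Blend: average where both images have content, otherwise copy img1.
    for (int y = 0; y < img1.rows; ++y) {
        for (int x = 0; x < img1.cols; ++x) {
            cv::Vec3b p1 = img1.at<cv::Vec3b>(y, x);
            cv::Vec3b p2 = pano.at<cv::Vec3b>(y, x);
            if (p2[0] == 0 && p2[1] == 0 && p2[2] == 0) {
                pano.at<cv::Vec3b>(y, x) = p1;                    // no overlap here
            } else {
                cv::Vec3b avg;
                for (int c = 0; c < 3; ++c)                        // overlap: average
                    avg[c] = static_cast<uchar>((int(p1[c]) + int(p2[c])) / 2);
                pano.at<cv::Vec3b>(y, x) = avg;
            }
        }
    }
    cv::imwrite("panorama.jpg", pano);
    return 0;
}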
I'm working on an algorithm that counts patterns (bars) in a specific image. It seemed very simple at first glance, but I quickly realized the complexity.
I have tried simple thresholding, template matching (small sliding windows), edge detection...
I have just a few images like this one, so I think a machine learning algorithm can't give better results, but I still need suggestions.
I think you have enough data from your images. You need to crop only the bars from your images; that would give you several dozen small patches per image. After that you can resize all the patches to some predefined size (for example 24x24 pixels), use a descriptor like HOG, and train an SVM on it. For the negative examples, just use any other areas from your images.
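As a rough OpenCV C++ sketch of that idea (the folder names, the 24x24 patch size and the HOG/SVM parameters are my guesses; positives are cropped bar patches, negatives are any other areas):
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <vector>

// HOG descriptor of a patch resized to 24x24 pixels.
static std::vector<float> hogFor(const cv::Mat& patch) {
    static cv::HOGDescriptor hog(cv::Size(24, 24), cv::Size(12, 12),
                                 cv::Size(6, 6), cv::Size(6, 6), 9);
    cv::Mat resized, gray;
    cv::resize(patch, resized, cv::Size(24, 24));
    cv::cvtColor(resized, gray, cv::COLOR_BGR2GRAY);
    std::vector<float> desc;
    hog.compute(gray, desc);
    return desc;
}

int main() {
    // Hypothetical folders of cropped patches: bars (positives), background (negatives).
    std::vector<cv::String> posFiles, negFiles;
    cv::glob("bars/*.png", posFiles);
    cv::glob("background/*.png", negFiles);

    cv::Mat samples;
    std::vector<int> labelVec;
    auto addSamples = [&](const std::vector<cv::String>& files, int label) {
        for (const cv::String& f : files) {
            cv::Mat img = cv::imread(f);
            if (img.empty()) continue;
            std::vector<float> d = hogFor(img);
            samples.push_back(cv::Mat(d, true).reshape(1, 1)); // one row per patch
            labelVec.push_back(label);
        }
    };
    addSamples(posFiles, +1);
    addSamples(negFiles, -1);
    cv::Mat labels(labelVec, true);

    // Train a linear SVM on the HOG rows; detection would then slide a 24x24
    // window over a new image and classify each window.
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    svm->save("bar_svm.yml");
    return 0;
}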
This may not work in all cases, but since these are round bars, you can also try circle detection. Both MATLAB (imfindcircles) and OpenCV (HoughCircles) support the Hough circle transform. One issue is that you have to play with the parameters a bit (MATLAB's interface is a little more simplistic than OpenCV's), but that is true of almost any method.
These methods work better with larger images, so I resized yours. You also need to know the radius of the circles to look for; if your camera position is constant, this shouldn't change much. This code is adapted from the MATLAB documentation page I linked. It doesn't find all the circles, but some tuning may help:
im = imread('http://i.stack.imgur.com/NRwUq.jpg');
%find circles doesn't work well on small images, I made the image
%three times larger, if you have larger images you should use those for
%better results
bim = imresize(im, 3);
%find and display circles
[centers, radii] = imfindcircles(bim,[8 20],'ObjectPolarity','bright',...
'Sensitivity',0.9);
imshow(bim);
h = viscircles(centers,radii);
number_of_bars = size(centers, 1) % centers is N-by-2, so count its rows
I added green dots to the circles the detector missed and blue X's over incorrect detections. I did these by hand, but the red circles were located by MATLAB.
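For completeness, a roughly equivalent OpenCV C++ version of the same idea (the file name, the 3x resize, the radius range and the Hough thresholds are placeholders that need the same kind of tuning as the MATLAB version):
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat im = cv::imread("NRwUq.jpg");           // the bar image (path assumed)
    cv::Mat big, gray;
    cv::resize(im, big, cv::Size(), 3, 3);          // enlarge, as in the MATLAB example
    cv::cvtColor(big, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, gray, 5);                  // smooth to reduce false circles

    std::vector<cv::Vec3f> circles;
    // dp=1, minDist=16, Canny high threshold=100, accumulator threshold=20,
    // radius range roughly matching [8 20] from the MATLAB call
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, 16, 100, 20, 8, 20);

    for (const cv::Vec3f& c : circles)
        cv::circle(big, cv::Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]),
                   cv::Scalar(0, 0, 255), 2);

    std::cout << "number_of_bars = " << circles.size() << std::endl;
    cv::imwrite("bars_detected.jpg", big);
    return 0;
}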
I have two binary images of a hand which are almost the same. How should I compare them to know whether they represent almost the same shape or not? I have tried finding the Euclidean distance between the two images, but it doesn't give the correct answer if the image is slightly changed, moved left or right, or slightly decreased in size. I have also tried HOG descriptors in OpenCV, but I am still unable to get the correct answer when I compare more than one image. What is the best way to compare two binary images based on shape, or on any other feature, to find nearly matching images regardless of the size of the image? Links to the images are http://postimg.org/image/w20tuuzmv/ and http://postimg.org/image/jndr4br9x/
I think that the Generalized Hough transform might be a good solution for you. Here is a tutorial about it.
Alternatively, you can try to cut the hand from one image (just use the contour bounding rect), then use it as a template and search for it in the second image using a template matching technique - you can read more about it here. When you find the point with the highest correlation value, you need to decide whether it is high enough - you have to find the threshold on your own.
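A small sketch of that template matching route (the file names are placeholders, and the 0.7 acceptance threshold is an arbitrary value you would have to tune yourself):
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Binary hand images (paths assumed).
    cv::Mat img1 = cv::imread("hand1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("hand2.png", cv::IMREAD_GRAYSCALE);

    // Cut the hand out of the first image via its largest contour's bounding rect.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(img1, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;
    int largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = static_cast<int>(i);
    cv::Mat templ = img1(cv::boundingRect(contours[largest]));

    // Slide the template over the second image and take the best correlation.
    cv::Mat result;
    cv::matchTemplate(img2, templ, result, cv::TM_CCOEFF_NORMED);
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);

    // You have to pick the acceptance threshold yourself; 0.7 is only a placeholder.
    bool similar = maxVal > 0.7;
    return similar ? 0 : 1;
}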
Are the images just rotated, translated and scaled? If so, you could compute the principal components of the images using PCA, then rotate the images so that the first component points in a fixed direction (e.g. always vertical). You could then compute the centroids of the images and translate them so they always sit in the same position (e.g. the center of the image). To use the same scale, you could resize the images so that the sum of the distances between each white pixel and the centroid is the same in both images. Now it's easy to compare the images, for example score = np.sum(A == B).
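The same idea sketched in OpenCV C++ rather than numpy (the file names are placeholders, and the scale-normalization step is left out to keep it short): the white-pixel coordinates are fed to PCA, the image is rotated so that the first principal axis is vertical, and the centroid is moved to the image center before comparing.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>
#include <iostream>

// Rotate so the first principal axis is vertical and move the centroid to the
// image center. Scale normalization is omitted for brevity.
static cv::Mat normalizeShape(const cv::Mat& bin) {
    std::vector<cv::Point> pts;
    cv::findNonZero(bin, pts);
    if (pts.empty()) return bin.clone();

    // PCA on the white-pixel coordinates.
    cv::Mat data(static_cast<int>(pts.size()), 2, CV_32F);
    for (int i = 0; i < data.rows; ++i) {
        data.at<float>(i, 0) = static_cast<float>(pts[i].x);
        data.at<float>(i, 1) = static_cast<float>(pts[i].y);
    }
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
    cv::Point2f centroid(pca.mean.at<float>(0, 0), pca.mean.at<float>(0, 1));
    float vx = pca.eigenvectors.at<float>(0, 0);
    float vy = pca.eigenvectors.at<float>(0, 1);
    double angle = std::atan2(vy, vx) * 180.0 / CV_PI - 90.0; // make the axis vertical

    // Rotate about the centroid, then shift the centroid to the canvas center.
    cv::Mat R = cv::getRotationMatrix2D(centroid, angle, 1.0);
    R.at<double>(0, 2) += bin.cols / 2.0 - centroid.x;
    R.at<double>(1, 2) += bin.rows / 2.0 - centroid.y;
    cv::Mat out;
    cv::warpAffine(bin, out, R, bin.size(), cv::INTER_NEAREST);
    return out;
}

int main() {
    cv::Mat A = normalizeShape(cv::imread("hand1.png", cv::IMREAD_GRAYSCALE));
    cv::Mat B = normalizeShape(cv::imread("hand2.png", cv::IMREAD_GRAYSCALE));
    // Same idea as score = np.sum(A == B): count pixels that agree.
    int score = cv::countNonZero(A == B);
    std::cout << "score = " << score << std::endl;
    return 0;
}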
I have a processed binary image of dimension 300x300. This processed image contains a few objects (a person or a vehicle).
I also have another RGB image of the same scene, of dimension 640x480. It is taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the 2nd image. I went through the OpenCV functions
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All these functions require corresponding points (coordinates) in the two images.
In the 1st (binary) image, I only have the information that an object is present; it does not have features exactly similar to the second (RGB) image.
I thought conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is not feasible, because I think I cannot detect and match features between a binary and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?
The solution I tried was more of a manual marking to estimate the transformation parameters (please correct me if I am wrong):
Note: there is no movement of either camera.
Manually marked rectangles around objects in the processed (binary) image
Noted down the coordinates of the rectangles
Manually marked rectangles around objects in the 2nd RGB image
Noted down the coordinates of the rectangles
Repeated the above steps for different samples of the 1st binary and 2nd RGB images
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the 1st image, I:
draw a bounding box around it,
transform the coordinates of its vertices using the transformation found above,
and finally draw a box in the 2nd RGB image with the transformed coordinates as vertices.
But this doesn't mark the box in the 2nd RGB image exactly over the person/object; instead it is drawn somewhere else. Though I took several sample images (binary and RGB) and used several corresponding points to estimate the transformation parameters, they do not seem to be accurate enough.
What do the CV_RANSAC and CV_LMEDS options and ransacReprojThreshold mean, and how should I use them?
Is my approach good? What should I modify/do to make the registration accurate?
Is there any alternative approach I should use?
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features? (or maybe you have this already?)
Identify features in the 2nd image
Match the features using OpenCV FlannBasedMatcher or similar
Highlight matching features in 2nd image or whatever you want to do.
I'd hope that because all your features in the first image should be positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
It might be a little late to answer this and the asker might not see this, but if the 1st image is originally a grayscale then this could be done:
1.) Convert the 2nd image to grayscale ----> gray2ndimg
2.) Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.
I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitch the images, but the result is not as consistent as I would hope. So my question is whether it's possible to "balance" or rather "equalize" the contrast and brightness of color pictures after the stitching has taken place?
You want to determine the histogram equalization function not from the entire images, but from the zone where they will touch or overlap. You obviously want to have identical histograms in the overlap area, so this is where you calculate the functions. You then apply the equalization functions that accomplish this to the entire images. If you have more than two stitches, you still want to do a global equalization beforehand, and then use a weighted application of the overlap-equalizing functions that decreases their impact as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
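To make that concrete for a single gray channel, here is a small sketch (the overlap rectangles near the seam are placeholders): it computes the cumulative histograms of the two overlap zones, builds a lookup table that maps the right image's intensities onto the left image's via the CDFs, and applies that mapping to the whole right image.
#include <opencv2/opencv.hpp>
#include <vector>

// Cumulative distribution of an 8-bit single-channel image region.
static std::vector<float> cdfOf(const cv::Mat& roi) {
    cv::Mat hist;
    int histSize = 256;
    float range[] = {0, 256};
    const float* ranges[] = {range};
    cv::calcHist(&roi, 1, nullptr, cv::Mat(), hist, 1, &histSize, ranges);
    std::vector<float> cdf(256, 0.f);
    float total = static_cast<float>(roi.total());
    float acc = 0.f;
    for (int i = 0; i < 256; ++i) {
        acc += hist.at<float>(i);
        cdf[i] = acc / total;
    }
    return cdf;
}

int main() {
    cv::Mat left  = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // Overlap regions near the seam (these rectangles are placeholders).
    cv::Rect ovLeft (left.cols - 100, 0, 100, left.rows);
    cv::Rect ovRight(0, 0, 100, right.rows);

    std::vector<float> cdfL = cdfOf(left(ovLeft));
    std::vector<float> cdfR = cdfOf(right(ovRight));

    // LUT: for each right-image intensity, find the left-image intensity
    // with the closest CDF value.
    cv::Mat lut(1, 256, CV_8U);
    for (int v = 0; v < 256; ++v) {
        int j = 0;
        while (j < 255 && cdfL[j] < cdfR[v]) ++j;
        lut.at<uchar>(v) = static_cast<uchar>(j);
    }

    // Apply the overlap-derived mapping to the entire right image.
    cv::Mat rightMatched;
    cv::LUT(right, lut, rightMatched);
    cv::imwrite("right_matched.jpg", rightMatched);
    return 0;
}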
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
You input all of your images along with the top left corners of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image; a minimal end-to-end sketch of the three steps follows below.
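Putting the three steps together, a sketch might look like this (the image files, the corner positions and the choice of the simple GAIN compensator are assumptions on my part):
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>
#include <string>

int main() {
    using namespace cv;
    using namespace cv::detail;

    // Two already-warped images and their top-left corners in the panorama (assumed).
    std::vector<UMat> images(2), masks(2);
    imread("warped1.jpg").copyTo(images[0]);
    imread("warped2.jpg").copyTo(images[1]);
    std::vector<Point> corners = { Point(0, 0), Point(400, 10) };

    // Fully white masks: compensate using the whole of each image.
    for (int i = 0; i < 2; ++i)
        masks[i] = UMat(images[i].size(), CV_8U, Scalar(255));

    // 1. Create the compensator (simple per-image gain here).
    Ptr<ExposureCompensator> compensator =
        ExposureCompensator::createDefault(ExposureCompensator::GAIN);

    // 2. Feed it all images, corners and masks so it can analyse the overlaps.
    compensator->feed(corners, images, masks);

    // 3. Compensate each image individually.
    for (int i = 0; i < 2; ++i) {
        Mat img  = images[i].getMat(ACCESS_READ).clone();
        Mat mask = masks[i].getMat(ACCESS_READ).clone();
        compensator->apply(i, corners[i], img, mask);
        imwrite("compensated" + std::to_string(i) + ".jpg", img);
    }
    return 0;
}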
I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross eye method, the best 3D effect will be achieved. The left image will be the reference image, the right image will be modified for corrections. I believe OpenCV will be the best software for these purposes. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order it should happen in. What I'm asking is: does that seem right? Is there anything I've missed, or anything in the wrong order? Also, which specific functions of OpenCV would I need to use for all the necessary steps to complete this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter in:
And all the sample code for this in the book ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching features in each view (stereo block matching) and create a disparity map
Reproject the disparity map into a 3D model (a short code sketch of these last two steps follows below)
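A compact sketch of those last two steps, assuming the images are already rectified and that the Q matrix would normally come from cv::stereoRectify (the file names, the block-matching parameters and the identity Q used here are placeholders):
#include <opencv2/opencv.hpp>

int main() {
    // Rectified left/right grayscale images (rectification assumed already done).
    cv::Mat left  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);

    // Stereo block matching: number of disparities and block size need tuning.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disp16;
    bm->compute(left, right, disp16);            // fixed-point disparity (CV_16S, x16)

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // back to real disparity values

    // Q would normally come from cv::stereoRectify; identity here is a placeholder.
    cv::Mat Q = cv::Mat::eye(4, 4, CV_64F);
    cv::Mat xyz;
    cv::reprojectImageTo3D(disp, xyz, Q, true);  // 3-channel map of 3D points

    cv::imwrite("disparity.png", disp16);
    return 0;
}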