How to stitch images with differing orientations - opencv

I have a set of images collected from a drone that I now want to stitch together. The approach I started with was to rotate all the images to the proper orientation and then try to stitch them together. However, since the rotated images are no longer rectangular, I end up with large empty areas containing no image data. As expected, these dark areas have caused poor stitching results.
I have seen the API for stitch:
Status stitch(InputArrayOfArrays images, const std::vector<std::vector<Rect>> &rois, OutputArray pano)
I like that there is an ROI parameter; however, its type is Rect rather than something like RotatedRect or a mask, so this looks like it won't work either.
The only other approach I can think of is to further crop the image to remove the no-data areas of the image (which would require more images to make up for the lost data).
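To make the mask idea concrete, this is roughly what I had in mind (a rough Python sketch, not working code from my project; the file name and rotation angle are placeholders): rotate a frame, build a mask of the valid pixels with the same warp, and restrict feature detection to that mask.
import cv2
import numpy as np

img = cv2.imread("frame_000.jpg")            # placeholder file name
h, w = img.shape[:2]

# Rotate about the image centre; applying the same warp to an all-white
# image gives a mask of the pixels that actually contain data.
M = cv2.getRotationMatrix2D((w / 2, h / 2), 37.0, 1.0)   # placeholder angle
rotated = cv2.warpAffine(img, M, (w, h))
mask = cv2.warpAffine(np.full((h, w), 255, np.uint8), M, (w, h))

# Restrict feature detection to the valid region so the empty corners
# don't produce spurious matches.
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(rotated, mask)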
I am not an expert in OpenCV nor in image stitching, so I'm looking for some awesome ideas. Is there a better way to approach this?

Related

Is there Inter_area texture minFilter in three.js like in opencv?

I have a three.js canvas to which I uploaded a 2730x4096 resolution image. I have given options to download different resolutions of the image from the canvas. When I download a lower-resolution image - say 960x1440 - I get a very blurry image with jagged edges.
I tried to increase the sharpness by setting anisotropy to the maximum (16 in my case) and also tried using map.minFilter = THREE.LinearFilter, which sharpened the image further, but the edges are still jagged.
I tried to run it through an FXAA anti-aliasing composer, but the anti-aliasing is not great. Maybe I'm not giving it the correct parameters.
All the while, antialiasing from the renderer is active (renderer.antialias = true).
When I try to do the same in OpenCV, using cv2.INTER_AREA interpolation to downsize the 2730x4096 image, I get a very sharp image with absolutely no jagged edges.
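For reference, this is roughly the OpenCV call I mean (the file names here are just placeholders):
import cv2

# Downscale with area interpolation: INTER_AREA averages over all source
# pixels covered by each destination pixel, which keeps edges clean.
img = cv2.imread("original_2730x4096.jpg")
small = cv2.resize(img, (960, 1440), interpolation=cv2.INTER_AREA)
cv2.imwrite("downsized_960x1440.jpg", small)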
So I was wondering whether implementing INTER_AREA interpolation for the minFilter, instead of THREE.LinearFilter, might yield better results. Is there something existing in three.js that I'm not utilizing, or, if I have to implement this new interpolation method, how do I go about it?
Illustration:
PFA two files - one is downloaded directly from the three.js canvas at 960x1440 resolution (bottom), and the other is an image downsized from 2730x4096 to 960x1440 using OpenCV (top). In the OpenCV-downsized image, the details are sharper and the edges are cleaner than in the three.js image. I'm starting to believe this is because of the INTER_AREA interpolation used for downsizing in OpenCV. Is that replicable in three.js?
The original high resolution image can be downloaded from here
Not sure what your setup is, but by enabling THREE.LinearFilter you're no longer using mipmapping, which is very effective at downsampling large textures. The default is LinearMipMapLinearFilter, but you have several other options, as outlined in the texture constants docs page. I've found that using anything but the default gives you crunchy textures.
Also, make sure you're enabling antialiasing with renderer.antialias = true; if you need it. Not sure if you're already doing this step, given the limited scope of the code in your question.

counting patterns in image

I'm working on an algorithm that counts patterns (bars) in a specific image. It seemed very simple at first glance, but I quickly realized the complexity.
I have tried simple thresholding, template matching (small sliding windows), edge detection...
I have just a few images like this one, so I think a machine learning algorithm can't give better results, but I still need suggestions.
I think you have enough data from your images. You need to crop only the bars from your images; that would give you several dozen small patches per image. After that you can resize all the patches to some predefined size (for example 24x24 pixels), use a descriptor like HOG, and train an SVM. For the negative samples, just use any other areas from your images.
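A minimal Python sketch of that pipeline, assuming you have already cropped positive (bar) and negative patches; the file names, window size and descriptor parameters below are placeholders to adjust:
import cv2
import numpy as np

# Hypothetical lists of cropped patches: positives contain one bar each,
# negatives are taken from any other areas of the images.
positives = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["bar_01.png", "bar_02.png"]]
negatives = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["bg_01.png", "bg_02.png"]]

# HOG over a 24x24 window (block 12x12, stride 6x6, cell 6x6, 9 bins).
hog = cv2.HOGDescriptor((24, 24), (12, 12), (6, 6), (6, 6), 9)

def describe(patches):
    # Resize each patch to the window size and compute its HOG vector.
    return np.array([hog.compute(cv2.resize(p, (24, 24))).ravel() for p in patches],
                    dtype=np.float32)

samples = np.vstack([describe(positives), describe(negatives)])
labels = np.array([1] * len(positives) + [0] * len(negatives), dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
# svm.predict(...) can then be run on patches from a sliding window over a
# new image, and the positive detections counted.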
This may not work in all cases, but since these are round bars, you can also try circle detection. Both MATLAB (imfindcircles) and OpenCV (HoughCircles) implement the Hough circle transform. One issue is that you have to play with the parameters a bit (MATLAB's interface is simpler than OpenCV's), but that is true of almost any method.
These methods work better with larger images, so I resized yours. You also need to know the radius range of the circles to look for; if your camera position is constant, this shouldn't change much. The code below is adapted from the MATLAB documentation page I linked. It doesn't find all the circles, but some tuning may help.
im = imread('http://i.stack.imgur.com/NRwUq.jpg');
% imfindcircles doesn't work well on small images, so make the image
% three times larger; if you have larger originals, use those for
% better results
bim = imresize(im, 3);
% find and display circles
[centers, radii] = imfindcircles(bim, [8 20], 'ObjectPolarity', 'bright', ...
    'Sensitivity', 0.9);
imshow(bim);
h = viscircles(centers, radii);
number_of_bars = size(centers, 1)
I added green dots to circles the detector missed and blue X's over incorrect detections. I did those by hand, but the red circles were located by MATLAB.
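For an OpenCV version of the same idea, cv2.HoughCircles does the equivalent of imfindcircles; here is a rough sketch in which the blur kernel, minDist, param2 and the radius range are guesses that would need tuning per image:
import cv2
import numpy as np

img = cv2.imread("NRwUq.jpg")                        # the bar-stock photo
big = cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
gray = cv2.cvtColor(big, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                       # suppress noise before Hough

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                           param1=100, param2=20, minRadius=8, maxRadius=20)

number_of_bars = 0 if circles is None else circles.shape[1]
print(number_of_bars)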

Balancing contrast and brightness between stitched images

I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitch the images, but the result is still not as consistent as I would hope. So my question is whether it's possible to "balance", or rather "equalize", the contrast and brightness of color pictures after the stitching has taken place.
You want to determine the histogram equalization function not from the entire images, but from the zone where they touch or overlap. You obviously want identical histograms in the overlap area, so this is where you calculate the equalization functions. You then apply those functions to the entire images. If you have more than two stitches, you still want global equalization beforehand, and then a weighted application of the overlap-equalizing functions whose impact decreases as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
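As a minimal sketch of that idea for a simple left/right pair: build a lookup table that matches the histogram of the right image's overlap strip to the left image's, then apply it to the whole right image. The file names and the overlap width are assumptions:
import cv2
import numpy as np

def match_channel(src_overlap, ref_overlap, src_full):
    # Map the source overlap's CDF onto the reference overlap's CDF,
    # then apply the resulting lookup table to the whole source channel.
    src_hist = np.bincount(src_overlap.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(ref_overlap.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return cv2.LUT(src_full, lut)

left = cv2.imread("left.jpg")      # placeholder file names
right = cv2.imread("right.jpg")
ov = 200                           # assumed overlap width in pixels

balanced = right.copy()
for c in range(3):
    balanced[:, :, c] = match_channel(right[:, :ov, c], left[:, -ov:, c],
                                      right[:, :, c])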
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
You input all of your images along with the top left corners of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image.

Is it possible to detect blur, exposure, orientation of an image programmatically?

I need to sort a huge number of photos, remove the blurry images (due to camera shake) and the over- or under-exposed ones, and detect whether each image was shot in landscape or portrait orientation. Can these things be done on an image using an image processing library, or are they still beyond the realm of an algorithmic solution?
Let's look at your question as three separate questions.
Can I find blurry images?
There are some methods for finding blurry images, for example (a rough sketch of the first one follows the list):
Sharpening an image and comparing it to the original
Using wavelets to detect blurring (Link1)
Hough transform (Link)
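A rough sketch of the first idea, assuming plain unsharp masking as the sharpening step; the kernel size and the decision threshold are arbitrary and would need tuning:
import cv2
import numpy as np

def blur_score(path):
    # Measure the high-frequency energy left after subtracting the original
    # from a sharpened copy; blurry photos have very little of it.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (9, 9), 0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)   # unsharp mask
    return float(np.mean(np.abs(sharpened - gray)))

# Images scoring below some empirically chosen threshold are candidates
# for removal; the threshold depends on the camera and the scene.
print(blur_score("photo_0001.jpg"))    # placeholder file name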
Can I find images that are under or over exposed?
The only way I can think of is that the overall brightness is either really high or really low. The problem is that you would have to know whether the picture was taken at night or during the day. You could create a histogram of your image and see if it is heavily skewed one way or the other; that might be some indication of over- or under-exposure.
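A small sketch of that histogram check; the 32-level tails and the 25% cutoff are arbitrary assumptions, not established values:
import cv2

def exposure_check(path, clip_fraction=0.25):
    # Flag images whose grey-level histogram piles up at either extreme.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist /= hist.sum()
    if hist[:32].sum() > clip_fraction:        # share of very dark pixels
        return "possibly underexposed"
    if hist[-32:].sum() > clip_fraction:       # share of very bright pixels
        return "possibly overexposed"
    return "looks ok"

print(exposure_check("photo_0001.jpg"))        # placeholder file name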
Can I determine the orientation of the image?
There are techniques that have been used for this, such as SVMs, color moments, edge direction histograms, and Bayesian frameworks using image cues.
Can I find images that are under or over exposed?
Here, a histogram-based check is recommended.

Using OpenCV to correct stereo images

I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross eye method, the best 3D effect will be achieved. The left image will be the reference image, the right image will be modified for corrections. I believe OpenCV will be the best software for these purposes. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order it should happen in. What I'm asking is: does that seem right? Is there anything I've missed, or anything in the wrong order? Also, which specific OpenCV functions would I need to use for all the necessary steps of this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching points in each view (stereo block matching) to create a disparity map
Reproject the disparity map into a 3D model (a rough sketch of these last two steps follows below)
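A rough Python sketch of the last two steps, assuming the pair has already been remapped and rectified; the file names, the matcher parameters and the reprojection matrix Q are placeholders (a calibrated rig gets Q from cv2.stereoRectify):
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Stereo block matching to build a disparity map (values are stored as
# 16ths of a pixel, hence the division).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Reproject the disparity map into 3D; here Q is a stand-in identity matrix.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)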