How to detect perspective distortion from a single image in OpenCV?

I am writing a program that detects horizontally/vertically straight lines in an image file and turns them into line data for further processing.
However, I ran into a problem: when I take a picture from a diagonal angle (sideways or up/down), the lines in that picture are no longer horizontally/vertically straight, so I cannot use it.
So I need an image pre-processing step that inverts the perspective warping. To do that, I first have to estimate the current projection of the image.
Unfortunately I couldn't find a way to do this with OpenCV, short of adding a camera-matrix pre-calibration step before taking the picture.
I can assume that most lines in the input images should be horizontally/vertically straight. Is there any method in OpenCV that solves my problem?
For example:
This image is perspectively warped. I want to turn it into an image like this:
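For illustration, here is a minimal sketch of what inverting the warp could look like once you know where four scene points should end up; the coordinates below are placeholders, not measured from the actual image, and finding them automatically (for example from intersections of detected lines) is the hard part the question is really about.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("warped.jpg");

    // Hypothetical: four image points that should form an axis-aligned
    // rectangle once the perspective is removed (placeholder values).
    std::vector<cv::Point2f> src = { {120, 80}, {520, 60}, {560, 400}, {100, 430} };
    // Where those points should land if lines were truly horizontal/vertical.
    std::vector<cv::Point2f> dst = { {100, 80}, {540, 80}, {540, 420}, {100, 420} };

    // Homography that "inverts" the perspective warp for that plane.
    cv::Mat H = cv::getPerspectiveTransform(src, dst);

    cv::Mat rectified;
    cv::warpPerspective(img, rectified, H, img.size());
    cv::imwrite("rectified.jpg", rectified);
    return 0;
}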

Related

How to stretch/modify an image to a given template?

I have an image that contains an outline of a triangle.
I also have a piece of paper with the same triangle which a researcher allows a chimpanzee to color in. Then I take a photo of that piece of paper.
I want to process that photo and manipulate it so that the triangle in the photo is now just like the triangle in the reference image file even if the photo has to be stretched, rotated, etc.
I found OpenCV's template matching, which seems like it might handle the first bit: identifying the reference template in the photo. But now I need a method to modify the photo to fit the template.
Can anyone point me to a good place to get started?
What you are looking for is the affine transform between the two images. Once you have found the transformation, you apply it to the photo.
To find the affine transform you need a set of 3 corresponding points between the two images. In your case a good choice is simply the 3 vertices of the triangle. To compute the transform in OpenCV, use getAffineTransform.
After that, apply the transform to the photo using OpenCV's warpAffine.
You can find a good tutorial on this at
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
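As a minimal sketch of that recipe (the file names and triangle vertex coordinates below are placeholders; in practice the vertices would come from detecting the triangle corners in each image):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat photo     = cv::imread("photo.jpg");     // photographed paper
    cv::Mat reference = cv::imread("template.png");  // reference triangle image

    // Hypothetical triangle vertices, listed in the same order in both images.
    std::vector<cv::Point2f> photoTri = { {150, 90}, {420, 130}, {260, 380} };
    std::vector<cv::Point2f> refTri   = { {100, 50}, {400, 50},  {250, 350} };

    // Affine transform that maps the photographed triangle onto the reference one.
    cv::Mat M = cv::getAffineTransform(photoTri, refTri);

    // Warp the photo so its triangle lines up with the template.
    cv::Mat aligned;
    cv::warpAffine(photo, aligned, M, reference.size());
    cv::imwrite("aligned.jpg", aligned);
    return 0;
}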

opencv: Correcting these distorted images

What would be the procedure to correct the following distorted images? It looks like the images are bulging out from the center. They are all of the same QR code, so a combination of such images could be used to arrive at a single correct, straight image.
Please advise.
The distortion you are experiencing is called "barrel distortion"; more technically, it is a combination of radial and tangential distortion.
The solution to your problem is OpenCV's camera calibration module. Just google it and you will find documentation in the OpenCV wiki. Moreover, OpenCV already ships source code examples of how to calibrate a camera.
Basically, you print an image of a chessboard, take a few pictures of it, run the calibration module (a built-in method) and get the camera matrix and distortion coefficients as output. You then apply them to each video frame (I think the method is called cvUndistort()) and it will straighten the curved lines in the image.
Note: it will not work if you change the zoom or focal length of the camera.
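A rough sketch of that workflow (the chessboard size, number of shots and file names here are assumptions, not taken from the question):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Inner-corner count of the printed chessboard (assumed 9x6 here).
    cv::Size board(9, 6);
    std::vector<cv::Point3f> objCorners;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            objCorners.push_back(cv::Point3f((float)x, (float)y, 0.f));

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // Hypothetical file names for the calibration shots.
    for (int i = 0; i < 10; ++i) {
        cv::Mat gray = cv::imread(cv::format("chess_%02d.jpg", i), cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, board, corners)) {
            objectPoints.push_back(objCorners);
            imagePoints.push_back(corners);
        }
    }

    // Calibration yields the camera matrix and distortion coefficients.
    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);

    // Undo the barrel distortion on any later frame from the same camera/zoom.
    cv::Mat frame = cv::imread("qr_frame.jpg");
    cv::Mat undistorted;
    cv::undistort(frame, undistorted, K, dist);
    cv::imwrite("undistorted.jpg", undistorted);
    return 0;
}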
If the camera is not available to you and not under your control, then your problem is much more serious. There is a way to undo the distortion, but I don't know whether OpenCV has built-in modules for it. I am afraid you will need to write a lot of code.
Basically, you need to detect as many long lines as possible. Then, from those (vertical and horizontal) lines, you build a grid of intersection points. Finally you feed that grid of points to OpenCV's calibration module.
If you have enough intersection points (say 20 or more) you will be able to calculate the distortion matrix and undistort the image.
You will not be able to fully calibrate the camera. In other words, you will not be able to run a one-time process that calculates the expected distortion. Rather, in each and every video frame you will estimate the distortion directly, invert it and undistort the image.
If you are not familiar with image processing techniques and cannot find reliable open source code that directly solves your problem, then I am afraid you will not be able to remove the distortion. Sorry.

Image Registration by Manual marking of corresponding points using OpenCV

I have a processed binary image of dimensions 300x300. This processed image contains a few objects (person or vehicle).
I also have another RGB image of the same scene, of dimensions 640x480. It is taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the 2nd image. I went through the OpenCV functions:
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All these functions require corresponding points (coordinates) in the two images.
In the 1st (binary) image I only have the information that an object is present; it does not have features exactly matching those of the 2nd (RGB) image.
I thought conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is not feasible, because I think I cannot detect and match features between a binary image and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points and estimating the transformation parameters?
The solution I tried was more of a manual marking to estimate the transformation parameters (please correct me if I am wrong):
Note: there is no movement of either camera.
Manually marked rectangles around objects in the processed (binary) image
Noted down the coordinates of the rectangles
Manually marked rectangles around the same objects in the 2nd (RGB) image
Noted down the coordinates of the rectangles
Repeated the above steps for different samples of the 1st binary and 2nd RGB images
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the 1st image,
I draw a bounding box around it,
transform the coordinates of its vertices using the transformation found above,
and finally draw a box in the 2nd RGB image with the transformed coordinates as vertices.
But this doesn't place the box in the 2nd RGB image exactly over the person/object; it is drawn somewhere else instead. Even though I took several sample images of binary and RGB and used many corresponding points to estimate the transformation parameters, it seems they are not accurate enough.
What is the meaning of the CV_RANSAC and CV_LMEDS options and of ransacReprojThreshold, and how do I use them?
Is my approach good? What should I modify/do to make the registration accurate?
Is there an alternative approach I should use?
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features? (or maybe you have this already?)
Identify features in the 2nd image
Match the features using OpenCV FlannBasedMatcher or similar
Highlight matching features in 2nd image or whatever you want to do.
I'd hope that because all your features in the first image should be positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
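A minimal sketch of that pipeline might look like the following; ORB plus a brute-force Hamming matcher is used here as a stand-in for "FlannBasedMatcher or similar", and whether ORB finds useful keypoints on a binary mask is an open question, so treat this as a starting point only.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("binary_view.png", cv::IMREAD_GRAYSCALE);  // 300x300 binary
    cv::Mat img2 = cv::imread("rgb_view.jpg",    cv::IMREAD_GRAYSCALE);  // 640x480, converted

    // Detect and describe keypoints in both views.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Brute-force Hamming matching with cross-check.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Visualize which features were matched between the two views.
    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, matches, vis);
    cv::imwrite("matches.jpg", vis);
    return 0;
}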
It might be a little late to answer this and the asker might not see it, but if the 1st image is originally grayscale, then this could be done:
1.) Convert the 2nd image to grayscale --> gray2ndimg
2.) Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.
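On the CV_RANSAC / CV_LMEDS part of the question, here is a hedged sketch of how the homography step could look with the robust estimator enabled; the point values are placeholders standing in for the manually marked correspondences (or for feature matches obtained as above).

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Corresponding points between the 300x300 binary view and the 640x480 RGB
    // view. These values are placeholders; at least 4 pairs are required.
    std::vector<cv::Point2f> srcPts = { {40, 50}, {250, 60}, {260, 240}, {60, 250}, {150, 150} };
    std::vector<cv::Point2f> dstPts = { {90, 80}, {530, 100}, {540, 400}, {110, 410}, {320, 250} };

    // cv::RANSAC (CV_RANSAC in the old C API) makes the estimate robust to bad
    // correspondences; ransacReprojThreshold (3.0 here, in pixels) is the maximum
    // reprojection error for a pair to be counted as an inlier. cv::LMEDS
    // (CV_LMEDS) is an alternative robust method that needs no threshold but
    // requires more than half of the pairs to be correct.
    cv::Mat inlierMask;
    cv::Mat H = cv::findHomography(srcPts, dstPts, cv::RANSAC, 3.0, inlierMask);

    // Map a detected bounding box from the binary image into the RGB image.
    std::vector<cv::Point2f> box = { {50, 60}, {120, 60}, {120, 180}, {50, 180} };
    std::vector<cv::Point2f> boxInRgb;
    cv::perspectiveTransform(box, boxInRgb, H);
    return 0;
}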

How to remove distortion due to motion, from an image

I am trying to track motion of a toy car. I have recorded few videos and now trying to calculate rotation.
My problem is that extracting features from the object's surface is quite challenging due to motion blur. The image below shows a cropped region of a video frame. The distortion appears as horizontal lines and only happens while the object is moving; when the object is not moving there is no distortion.
The image shows the distorted car while it is moving forward along a diagonal path across the frame.
I tried a Wiener filter based on the local median and variance, but it didn't improve things much; it only gave me a smoothed image, as if a Gaussian blur had been applied.
What type of enhancements should I do to get a better image?
Video: 720 x 576 frames, 25 fps.
From the picture provided, it looks like you need to de-interlace the video rather than just trying to filter what's there; I remember doing this by taking every other scan line and then resizing to restore the original aspect ratio.
I found a pretty cool site that talks about deinterlacing, in case you'd like to see whether you have other possibilities:
http://www.100fps.com/
(Oh, and I have not inspected the image very closely, so it's possible there is some other interlacing scheme going on than just every other line, in which case my first answer wouldn't work properly. It also implies that you will lose some resolution, but that's just the nature of interlaced video...)
Given that your camera outputs interlaced video, you are better off using one field of the video. Either only use the even lines of the image or only the odd lines. The image will be squashed but you won't be mixing two images together.
Yep, that image needs to be de-interlaced. Correcting "distortion" due to linear movement is a different matter: you need to apply a linear directional filter that depends on the speed of the vehicle, the distance to the camera and the shutter speed.
You first have to calculate the impulse response for a given set of conditions (the ones above, which determine how far the same point travels between the beginning of the exposure and the end of it), and then apply inverse filtering. You may need a filtering or image processing toolkit; if you are using Matlab it is going to be easy.
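As a rough illustration of that inverse-filtering idea, here is a simple Wiener-style deconvolution with a guessed horizontal motion PSF; the blur length and the noise-to-signal ratio are assumptions that would have to be tuned (or estimated from the speed/distance/shutter conditions mentioned above).

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    // Guessed horizontal motion PSF: blur length in pixels is an assumption.
    int len = 15;
    cv::Mat psf = cv::Mat::zeros(img.size(), CV_32F);
    for (int i = 0; i < len; ++i)
        psf.at<float>(0, i) = 1.0f / len;

    // Transform image and PSF to the frequency domain.
    cv::Mat IMG, H;
    cv::dft(img, IMG, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf, H, cv::DFT_COMPLEX_OUTPUT);

    // Wiener filter: restored = IMG * conj(H) / (|H|^2 + NSR).
    float nsr = 0.01f;                      // noise-to-signal ratio, hand tuned
    cv::Mat planes[2];
    cv::split(H, planes);
    cv::Mat denom = planes[0].mul(planes[0]) + planes[1].mul(planes[1]);
    denom += nsr;

    cv::Mat num;
    cv::mulSpectrums(IMG, H, num, 0, true); // IMG * conj(H)
    cv::Mat numPlanes[2];
    cv::split(num, numPlanes);
    cv::divide(numPlanes[0], denom, numPlanes[0]);
    cv::divide(numPlanes[1], denom, numPlanes[1]);
    cv::merge(numPlanes, 2, num);

    // Back to the spatial domain (result is circularly shifted by the PSF offset).
    cv::Mat restored;
    cv::idft(num, restored, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    cv::normalize(restored, restored, 0, 255, cv::NORM_MINMAX);
    restored.convertTo(restored, CV_8U);
    cv::imwrite("restored.png", restored);
    return 0;
}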
Did you try:
deconvblind
Follow the deconvblind example on the MathWorks site. It might work well on your example image.
Another example - Image Restoration
The following algorithm is a very simple de-interlacing method:
#include <opencv2/opencv.hpp>

cv::Mat input = cv::imread("img.jpg");
// Reinterpret each pair of rows as one double-width row (assumes continuous data)...
cv::Mat tmp(input.rows/2, input.cols*2, input.type(), input.data);
// ...and keep only the left half of each, i.e. one field (every other scan line).
tmp = tmp.colRange(0, input.cols);
cv::Mat output;
// Stretch the remaining field back to the original height.
cv::resize(tmp, output, cv::Size(), 1, 2);

Using OpenCV to correct stereo images

I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross eye method, the best 3D effect will be achieved. The left image will be the reference image, the right image will be modified for corrections. I believe OpenCV will be the best software for these purposes. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
I imagine doing so will leave irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order it should happen in. What I'm asking is: does that seem right? Is there anything I've missed, anything in the wrong order, etc.? Also, which specific OpenCV functions would I need for all the necessary steps of this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching objects in each view (stereoblock matching) create disparity map
Reproject disparity map into 3D model
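One possible sketch of those steps for an uncalibrated, single hand-held camera (stereoRectifyUncalibrated is my assumption here, since no calibration data is mentioned; file names and parameter values are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat left  = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // 1. Match features between the two views.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    orb->detectAndCompute(left,  cv::noArray(), kpL, descL);
    orb->detectAndCompute(right, cv::noArray(), kpR, descR);
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(descL, descR, matches);

    std::vector<cv::Point2f> ptsL, ptsR;
    for (const auto& m : matches) {
        ptsL.push_back(kpL[m.queryIdx].pt);
        ptsR.push_back(kpR[m.trainIdx].pt);
    }

    // 2. Fundamental matrix + uncalibrated rectification: removes the relative
    //    rotation and vertical shift so that epipolar lines become horizontal.
    cv::Mat inliers;
    cv::Mat F = cv::findFundamentalMat(ptsL, ptsR, cv::FM_RANSAC, 3.0, 0.99, inliers);
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(ptsL, ptsR, F, left.size(), H1, H2);

    cv::Mat rectL, rectR;
    cv::warpPerspective(left,  rectL, H1, left.size());
    cv::warpPerspective(right, rectR, H2, right.size());

    // 3. Disparity map from the rectified pair (stereo block matching).
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    bm->compute(rectL, rectR, disparity);

    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16));
    cv::imwrite("disparity.png", disp8);
    return 0;
}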
