How to remove distortion due to motion from an image - image-processing

I am trying to track the motion of a toy car. I have recorded a few videos and am now trying to calculate rotation.
My problem is that extracting features from the object's surface is quite challenging due to motion blur. The image below shows a cropped region from a video frame. The distortion appears as horizontal lines, and it only occurs while the object is moving; when the object is stationary there is no distortion.
The image shows the distorted car as it moves forward along a diagonal path across the frame.
I tried a Wiener filter based on local median and variance, but it didn't improve things much; it only gave me a smoothed image, as if a Gaussian blur had been applied.
What type of enhancement should I apply to get a better image?
video - 720 x 576 frames - 25fps

From the picture provided, it looks like you need to de-interlace the video rather than just filter what's there; I remember doing this by simply taking every other scan line and then resizing to restore the aspect ratio.
I found a pretty good site that covers deinterlacing, in case you'd like to see whether you have other possibilities:
http://www.100fps.com/
(Oh, and I have not inspected the image very closely, so it's possible there is some interlacing scheme going on other than every-other-line, in which case my first answer wouldn't work properly. It also implies that you will lose some resolution, but that's just the nature of interlaced video...)

Given that your camera outputs interlaced video, you are better off using one field of the video. Either only use the even lines of the image or only the odd lines. The image will be squashed but you won't be mixing two images together.

Yep, that image needs to be de-interlaced. Correcting "distortion" due to linear movement is a different thing: you need to apply linear directional filtering that depends on the speed of the vehicle, the distance to the camera, and the shutter speed.
You first have to calculate the impulse response for a given set of conditions (those above, which determine the displacement of a point between the beginning of the exposure and the end of it), and then apply inverse filtering. You may need a filtering or image-processing toolkit; if you use MATLAB it's going to be easy.
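If you're not in MATLAB, the same idea can be sketched with OpenCV. Below is a minimal, hedged example of Wiener-style inverse filtering with an assumed 15-pixel horizontal motion PSF; the blur length, the noise constant K, and the file names are all placeholders you would tune from the conditions above, not values from this answer:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    // Assumed PSF: 15-pixel horizontal motion blur. In practice, estimate the
    // length and direction from vehicle speed, distance, and shutter speed.
    const int len = 15;
    cv::Mat psf = cv::Mat::zeros(img.size(), CV_32F);
    psf(cv::Rect(0, 0, len, 1)) = 1.0f / len;

    // Transform the image and the PSF to the frequency domain.
    cv::Mat G, H;
    cv::dft(img, G, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf, H, cv::DFT_COMPLEX_OUTPUT);

    // Wiener inverse filter: F = conj(H)*G / (|H|^2 + K), K = noise-to-signal ratio.
    cv::Mat numer, HH;
    cv::mulSpectrums(G, H, numer, 0, true);   // G * conj(H)
    cv::mulSpectrums(H, H, HH, 0, true);      // |H|^2 in the real channel

    std::vector<cv::Mat> np(2), hp(2);
    cv::split(numer, np);
    cv::split(HH, hp);
    cv::Mat denom = hp[0] + 1e-3f;            // K = 1e-3 is a guess; tune it
    cv::divide(np[0], denom, np[0]);
    cv::divide(np[1], denom, np[1]);

    cv::Mat F, restored;
    cv::merge(np, F);
    cv::idft(F, restored, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);

    cv::Mat out8;
    restored.convertTo(out8, CV_8U, 255.0);
    cv::imwrite("restored.png", out8);        // output is shifted by ~len/2 pixels
    return 0;
}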

Did you try deconvblind?
Follow the example in the MathWorks deconvblind documentation. It might work well on your example image.
Another example: Image Restoration

The following is a very simple de-interlacing method:
#include <opencv2/opencv.hpp>

cv::Mat input = cv::imread("img.jpg");
// View each pair of rows as a single double-width row: the left half of every
// such row is the even scan line, the right half is the odd one.
cv::Mat tmp(input.rows / 2, input.cols * 2, input.type(), input.data);
// Keep only the left half, i.e. the even field.
tmp = tmp.colRange(0, input.cols);
cv::Mat output;
// Stretch the field back to the original height.
cv::resize(tmp, output, cv::Size(), 1, 2);

Related

OpenCV - align stack of images - different cameras

We have this camera array arranged in an arc around a person (red dot). Think The Matrix - each camera fires at the same time and then we create an animated gif from the output. The problem is that it is nearly impossible to align the cameras exactly, so I am looking for a way in OpenCV to align the images better and make the animation smoother.
Looking for general steps. I'm unsure of the order I would do it in. If I start with image 1 and match 2 to it, then 2 is further from 3 than it was at the start, so matching 3 to 2 would involve more change... and the error would propagate. I have seen similar alignments done, though. Any help much appreciated.
Here's a thought. How about performing a quick and very simple "calibration" of the imaging system by using a single reference point?
The best thing about this is you can try it out pretty quickly and even if results are too bad for you, they can give you some more insight into the problem. But the bad thing is it may just not be good enough because it's hard to think of anything "less advanced" than this. Here's the description:
Remove the object from the scene
Place a small object (let's call it a "dot") at a position that roughly corresponds to the center of mass of the object you are about to record (the center of the area denoted by the red circle).
Record a single image with each camera
Use some simple algorithm to find the position of the dot on every image
Compute distances from dot positions to image centers on every image
Shift images by (-x, -y), where (x, y) is the above mentioned distance; after that, the dot should be located in the center of every image.
When recording an actual object, use these precomputed distances to shift all images. After you translate the images, they will be roughly aligned. But since you are shooting an object that is three-dimensional and has considerable size, I am not sure whether the alignment will be very convincing ... I wonder what results you'd get, actually.
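For what it's worth, a minimal OpenCV sketch of this procedure might look like the following. The thresholding inside findDotCenter assumes the dot is the darkest blob in the calibration shot, and the threshold value 60 is just a placeholder:

#include <opencv2/opencv.hpp>

cv::Point2f findDotCenter(const cv::Mat& img) {
    cv::Mat gray, mask;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 60, 255, cv::THRESH_BINARY_INV);  // dark dot -> white
    cv::Moments m = cv::moments(mask, true);
    return cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
}

cv::Mat alignByDot(const cv::Mat& calibShot, const cv::Mat& frame) {
    // Distance from the dot to the image center, computed once per camera.
    cv::Point2f dot = findDotCenter(calibShot);
    cv::Point2f center(calibShot.cols / 2.0f, calibShot.rows / 2.0f);
    cv::Point2f shift = center - dot;

    // Shift the live frame by that precomputed offset.
    cv::Mat T = (cv::Mat_<double>(2, 3) << 1, 0, shift.x, 0, 1, shift.y);
    cv::Mat aligned;
    cv::warpAffine(frame, aligned, T, frame.size());
    return aligned;
}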
If I understand the application correctly, you should be able to obtain the relative pose of each camera in your array using homographies:
https://docs.opencv.org/3.4.0/d9/dab/tutorial_homography.html
From here, the next step would be to correct for alignment issues by estimating the transform between each camera's actual position and their 'ideal' position in the array. These ideal positions could be computed relative to a single camera, or relative to the focus point of the array (which may help simplify calculation). For each image, applying this corrective transform will result in an image that 'looks like' it was taken from the 'ideal' position.
Note that you may need to estimate relative camera pose in 3-4 array 'sections', as it looks like you have a full 180deg array (e.g. estimate homographies for 4-5 cameras at a time). As long as you have some overlap between sections it should work out.
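As a rough illustration (not the exact pipeline described above), estimating a pairwise homography from feature matches and warping one view toward its neighbour could look like this in OpenCV; the ORB detector, match strategy, and RANSAC threshold are my assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat warpToNeighbour(const cv::Mat& src, const cv::Mat& ref) {
    // Detect and describe features in both views.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kSrc, kRef;
    cv::Mat dSrc, dRef;
    orb->detectAndCompute(src, cv::noArray(), kSrc, dSrc);
    orb->detectAndCompute(ref, cv::noArray(), kRef, dRef);

    // Cross-checked brute-force matching to reduce outliers.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(dSrc, dRef, matches);

    std::vector<cv::Point2f> pSrc, pRef;
    for (const cv::DMatch& m : matches) {
        pSrc.push_back(kSrc[m.queryIdx].pt);
        pRef.push_back(kRef[m.trainIdx].pt);
    }

    // RANSAC discards matches that disagree with the dominant transform
    // (needs at least 4 correspondences).
    cv::Mat H = cv::findHomography(pSrc, pRef, cv::RANSAC, 3.0);
    cv::Mat out;
    cv::warpPerspective(src, out, H, ref.size());
    return out;
}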
Most of my experience with this sort of thing comes from using MATLAB's stereo camera calibrator app and related functions. Their help page gives a good overview of how to get started estimating camera pose. OpenCV has similar functionality.
https://www.mathworks.com/help/vision/ug/stereo-camera-calibrator-app.html
The cited paper by Zhang gives a great description of the mathematics of pose estimation from correspondence, if you're interested.

opencv: undistort part of the image

I am trying to understand how to apply the cv2.undistort function to only a subset of the image.
Camera calibration was done through cv2.findChessboardCorners and seems to be working fine. I find that undistortion, however, is very slow, averaging around 9 fps on a 1080x1920 image. For the purposes of the project, I am interested only in a fixed subset of the image, usually something like img[100:400].
What is a good way to approach this problem? It seems wasteful to undistort the entire image when only a stripe of 100 pixels is needed.
From the docs:
The function is simply a combination of cv::initUndistortRectifyMap (with unity R) and cv::remap (with bilinear interpolation). See the former function for details of the transformation being performed.
So by calling undistort in a loop you are recomputing the un-distortion maps over and over - there is no caching, and their computation is expensive, since it involves solving a polynomial equation for every pixel. IIUC your calibration is fixed, so you should compute them only once using initUndistortRectifyMap(), and then pass the maps to remap() in your main loop.
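Here's a minimal sketch of that split, in C++ for brevity (the corresponding cv2 calls have the same names); the camera matrix values and the capture source are placeholders for your calibration results:

#include <opencv2/opencv.hpp>

int main() {
    // Placeholders: load these from your calibration results.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 1000, 0, 960,
                                                      0, 1000, 540,
                                                      0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    // Compute the undistortion maps once (the expensive step).
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, cv::Size(1920, 1080),
                                CV_16SC2, map1, map2);

    cv::VideoCapture cap(0);
    cv::Mat frame, undistorted;
    while (cap.read(frame)) {
        // Cheap per-frame step: just the table lookup.
        cv::remap(frame, undistorted, map1, map2, cv::INTER_LINEAR);
        cv::Mat stripe = undistorted.rowRange(100, 400);  // the region of interest
    }
    return 0;
}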
The kind of "cropping" you describe is doable but may take a little experimentation. The undistortion maps used by OpenCV are in 1:1 correspondence with the un-distorted image, and each map entry stores the absolute source-pixel coordinates to sample from (dst(x, y) = src(map_x(x, y), map_y(x, y))). This means you can crop out the map portions corresponding to the rectangle of the image you care about and pass them directly to remap(), with a destination image the size of the cropped rectangle; for a pure crop, no editing of the map values should be needed.
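Continuing the sketch above, the crop itself could be as simple as this (the row range 100:400 is taken from the question):

#include <opencv2/opencv.hpp>

cv::Mat undistortStripe(const cv::Mat& frame,
                        const cv::Mat& map1, const cv::Mat& map2) {
    // Each map entry holds the absolute source coordinate for a destination
    // pixel, so slicing out the ROI rows is enough for a crop.
    cv::Mat roiMap1 = map1.rowRange(100, 400);
    cv::Mat roiMap2 = map2.rowRange(100, 400);
    cv::Mat stripe;
    cv::remap(frame, stripe, roiMap1, roiMap2, cv::INTER_LINEAR);
    return stripe;  // 300 rows tall
}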
All in all, I'd first try the first recommendation (don't call undistort; separate map generation from remapping), and only try the second step if you really can't keep up with the frame rate.

locating a moved object without using keypoints

I am trying to determine the movement and rotation of an object (can be plain-colored, but does not have to be) on a not completely constant background. Here is an example:
Using keypoints to find the transformation as in the tutorials does not work because the objects I am dealing with do not necessarily provide enough edges for this.
Building the difference image and doing a segmentation there also often fails, because of the changed background. In this example it is not that bad, but there could be changed reflections or slight deformations.
Any ideas on how to find the transformation matrix (affine, with only four degrees of freedom) that maps the object (in this example the blue thing) from one image to the other?
Use a binary threshold on the result of image subtraction (as described here - Foreground Extraction); it should remove small changes (which result from changes in lighting). Before that, you may try using a filter to blur edges; a median filter may be a good option (but try different filters too). Apply this technique to both input images and to the result of the subtraction.
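A quick sketch of that subtraction-plus-threshold step; the median kernel size (5) and threshold value (30) are guesses you would tune for your scenes:

#include <opencv2/opencv.hpp>

cv::Mat foregroundMask(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Mat g1, g2;
    cv::cvtColor(img1, g1, cv::COLOR_BGR2GRAY);
    cv::cvtColor(img2, g2, cv::COLOR_BGR2GRAY);
    // Median filter softens edges before subtraction, as suggested above.
    cv::medianBlur(g1, g1, 5);
    cv::medianBlur(g2, g2, 5);
    cv::Mat diff, mask;
    cv::absdiff(g1, g2, diff);
    // Binarize so small lighting changes drop out.
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);
    return mask;
}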
//edit:
For determine the transformation you may try to use SURF - http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
If you don't need to calculate rotation, try an optical flow technique - http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf - or a much simpler method:
1. Calculate the geometric center (just add the positions of all points and divide by the number of points) of the marker contour in the first and second image (call them contour1 and contour2). Alternatively you may calculate the center of mass of the filled contour - it's up to you.
2. Your transformation is: movementVector = centerOfContour2 - centerOfContour1
If the results are not accurate enough, try finding the biggest contour in the image and drawing it on an empty image (so you won't have any noise, artifacts, etc.). Perform all operations on that new image, as in the sketch below.
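Here is a hedged sketch of steps 1-2 plus the biggest-contour trick, assuming the inputs are already binary masks produced by the thresholded difference:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Point2f largestContourCenter(const cv::Mat& binary) {
    // Clone because findContours modified its input in older OpenCV versions.
    cv::Mat tmp = binary.clone();
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(tmp, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return cv::Point2f();

    // Pick the biggest blob to ignore noise and small artifacts.
    auto it = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::Moments m = cv::moments(*it);
    return cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
}

// movementVector = centerOfContour2 - centerOfContour1
cv::Point2f movement(const cv::Mat& mask1, const cv::Mat& mask2) {
    return largestContourCenter(mask2) - largestContourCenter(mask1);
}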

Image Rectification for Shake Correction on OpenCV

I have two pictures of the same scene from an uncalibrated camera. The pictures are from slightly different angles and scales (zoom), and I'd like to superimpose them, rejecting any kind of shake. In other words, I should transform them so the shake becomes imperceptible, i.e. perform motion compensation.
I've already tried using a simple SURF (feature) detector along with a homography, but sometimes the result isn't satisfactory. So I am thinking about trying image rectification to compensate for the motion.
- Would it work with slight changes, such as user shake?
- Would it really work to reject shake for these 2 frames? And for a bigger buffer of pictures (10 maybe)?
- Does anyone know if it would fix scale disparity (different zoom in the images)?
- What does the algorithm really do? Will it transform both pictures into a third orientation?
If there is a better solution, I would be glad to know =)
EDIT
I don't aim to compensate for motion blur but for the displacement itself. For example, in this file the author compensates the angle difference between two cameras by image rectification. How does it actually work? Does it always create an intermediate picture orientation, or can I specify that one of the pictures shall remain still?
Also, would I be able to apply this to many frames, or would it always find an intermediate orientation for each pair of frames I put in?
Cheers,
I'm not sure how well superimposing the images would work. Another way to remove blur (including motion blur, which should dominate in handheld camera devices) from an image is blind deconvolution. It is basically a method of finding the inverse of the blur filter that was physically applied (camera shake) to the real image. There are plenty of techniques on the web. I've specifically had good results using a modified version of the algorithm in this paper: http://www.cse.cuhk.edu.hk/~leojia/all_final_papers/motion_deblur_cvpr07.pdf
It also comes with an executable file somewhere around the web so you can see if it's fit for your purpose.
Good luck out there!

Using OpenCV to correct stereo images

I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross-eye method, the best 3D effect is achieved. The left image will be the reference image; the right image will be modified for corrections. I believe OpenCV will be the best software for these purposes. So far I think the processing will go something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will I imagine result in irregular black borders above and below the right image so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order it happens in. What I'm asking is: does that seem right? Is there anything I've missed, or anything in the wrong order? Also, which specific OpenCV functions would I need to use for all the necessary steps to complete this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the OpenCV book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching objects in each view (stereo block matching) to create a disparity map
Reproject disparity map into 3D model
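A rough end-to-end sketch of those steps in OpenCV, assuming the intrinsics/extrinsics (K1, D1, K2, D2, R, T) from cv::stereoCalibrate were saved to a hypothetical stereo_calib.yml; the block-matching parameters are placeholders to tune:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat K1, D1, K2, D2, R, T;
    cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::READ);
    fs["K1"] >> K1; fs["D1"] >> D1; fs["K2"] >> K2; fs["D2"] >> D2;
    fs["R"] >> R;   fs["T"] >> T;

    cv::Mat left = cv::imread("left.png");
    cv::Mat right = cv::imread("right.png");

    // Rotate/translate the views so epipolar lines become horizontal.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

    // Remap each image: removes lens distortion and applies the rectification.
    cv::Mat m1x, m1y, m2x, m2y, lRect, rRect;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(), CV_32FC1, m1x, m1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, left.size(), CV_32FC1, m2x, m2y);
    cv::remap(left, lRect, m1x, m1y, cv::INTER_LINEAR);
    cv::remap(right, rRect, m2x, m2y, cv::INTER_LINEAR);

    // Block matching on the rectified pair produces the disparity map.
    cv::Mat lGray, rGray, disp16;
    cv::cvtColor(lRect, lGray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(rRect, rGray, cv::COLOR_BGR2GRAY);
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    bm->compute(lGray, rGray, disp16);

    // StereoBM returns fixed-point disparities scaled by 16.
    cv::Mat disp, points3d;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);
    cv::reprojectImageTo3D(disp, points3d, Q);
    return 0;
}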
