OpenCV Image Matching

I have two images of the same scene from a stereo camera, taken from slightly different perspectives (imgLeft and imgRight).
Now, I want to find the ROI (red rectangle in the image below) of the right image in the left one. I need to do this very fast, because I'm doing this on video. How can I do this? I do not have the nonfree module of OpenCV, but I do have CUDA installed.
imgRight: (image)
imgLeft: (image)

This should be your friend: http://docs.opencv.org/2.4/modules/video/doc/motion_analysis_and_object_tracking.html#calcopticalflowpyrlk
All you need to do is find feature points inside this rectangle and pass them to cv::calcOpticalFlowPyrLK to get their counterparts in the second image. You may need to filter the points to make sure the tracking succeeded, for example by passing them to cv::findHomography with the CV_RANSAC flag and checking the mask output.
The operation is fast and runs in real time. There is also a CUDA version of this method.
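A minimal sketch of that pipeline, assuming a modern OpenCV C++ API (3.x or later, where the RANSAC flag is cv::RANSAC) and that the ROI contains enough texture to yield at least four tracked points:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Track an ROI from imgRight into imgLeft with sparse optical flow.
cv::Rect trackROI(const cv::Mat& imgRight, const cv::Mat& imgLeft, const cv::Rect& roi)
{
    cv::Mat grayR, grayL;
    cv::cvtColor(imgRight, grayR, cv::COLOR_BGR2GRAY);
    cv::cvtColor(imgLeft,  grayL, cv::COLOR_BGR2GRAY);

    // Feature points inside the rectangle only.
    std::vector<cv::Point2f> ptsR;
    cv::goodFeaturesToTrack(grayR(roi), ptsR, 200, 0.01, 5);
    for (auto& p : ptsR) p += cv::Point2f((float)roi.x, (float)roi.y); // back to full-image coordinates

    // Pyramidal Lucas-Kanade: find the counterparts in the other image.
    std::vector<cv::Point2f> ptsL;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(grayR, grayL, ptsR, ptsL, status, err);

    std::vector<cv::Point2f> src, dst;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { src.push_back(ptsR[i]); dst.push_back(ptsL[i]); }

    // RANSAC homography rejects badly tracked points via the inlier mask.
    std::vector<uchar> inliers;
    cv::findHomography(src, dst, cv::RANSAC, 3.0, inliers);

    std::vector<cv::Point2f> good;
    for (size_t i = 0; i < inliers.size(); ++i)
        if (inliers[i]) good.push_back(dst[i]);

    return cv::boundingRect(good); // location of the ROI in imgLeft
}
```

If you want the GPU path, the cudaoptflow module provides a sparse pyramidal LK implementation (cv::cuda::SparsePyrLKOpticalFlow in OpenCV 3+).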

Related

opencv: Correcting these distorted images

What would be the procedure to correct the following distorted images? It looks like the images are bulging out from the center. These are of the same QR code, so a combination of such images could be used to arrive at a single correct, straight image.
Please advise.
The distortion you are experiencing is called "barrel distortion"; the technical description is a combination of radial and tangential distortion.
The solution to your problem is OpenCV's camera calibration module. Just google it and you will find the documentation in the OpenCV wiki. Moreover, OpenCV already ships with source code examples of how to calibrate a camera.
Basically, you need to print an image of a chessboard, take a few pictures of it, run the calibration module (a built-in method), and get the camera matrix and distortion coefficients as output. For each video frame you apply these (I think the method is called cv::undistort()) and it will straighten the curved lines in the image.
Note: It will not work if you change the zoom or focal length of the camera.
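A minimal sketch of that calibration flow, assuming OpenCV's C++ API, a printed chessboard with 9x6 inner corners, and hypothetical image filenames calib1.jpg through calib10.jpg:

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);          // inner corners of the printed chessboard (an assumption)
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // Ideal board corners in the board's own plane (unit square size).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board.emplace_back((float)x, (float)y, 0.0f);

    cv::Size imageSize;
    for (int i = 1; i <= 10; ++i) {          // hypothetical filenames calib1.jpg .. calib10.jpg
        cv::Mat img = cv::imread("calib" + std::to_string(i) + ".jpg", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }

    // Calibrate: outputs the camera matrix and distortion coefficients.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);

    // Apply to each frame; this straightens the curved lines.
    cv::Mat frame = cv::imread("frame.jpg"), undistorted;
    cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
    cv::imwrite("undistorted.jpg", undistorted);
    return 0;
}
```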
If the camera details are unavailable and uncontrollable, then your problem is serious. There is a way to solve the distortion, but I don't know whether OpenCV has built-in modules for it. I am afraid you will need to write a lot of code.
Basically, you need to detect as many long lines as possible. Then from those lines (vertical and horizontal) you build a grid of intersection points. Finally you fit the grid of those points to the OpenCV calibration module.
If you have enough intersection points (say 20 or more) you will be able to calculate the distortion matrix and un-distort the image.
You will not be able to fully calibrate the camera. In other words, you will not be able to run a one-time process that calculates the expected distortion. Rather, in each and every video frame you will calculate the distortion matrix directly, invert it, and un-distort the image.
If you are not familiar with image processing techniques, or unable to find a reliable open source project that directly solves your problem, then I am afraid you will not be able to remove the distortion, sorry.

OpenCV Histogram Backprojection Alternative

I start with creating an initial mask of an object in an image. Using this mask, a histogram is created which is then used to process subsequent images.
I use the calcBackProject function to find pixels in the image that belong to the histogram. The problem I am having is that too much of the image is being accepted because certain objects are similar to the color of the initial object. Is there any alternative to calcBackProject? In my application, I can't afford to get objects that do not belong. All of this assumes that I have a perfect initial mask.
There are many ways to track an object, and it can be very difficult. Within OpenCV you may want to try the meanshift/camshift trackers to see if these are any better (a sketch follows the links below). If not, then you may have to stray out of the OpenCV world and try tracking-learning-detection frameworks.
Meanshift/Camshift/etc in OpenCV
http://docs.opencv.org/modules/video/doc/video.html
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_meanshift/py_meanshift.html
Tracking-Learning-Detection in C++:
STRUCK: http://www.samhare.net/research/struck (uses opencv)
Tracking-Learning-Detection in Matlab:
Predator: http://personal.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html
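For the meanshift/camshift option, here is a minimal C++ sketch that builds a hue histogram from an initial object region and then lets CamShift adapt the search window each frame; the webcam index, the initial box, and the histogram settings are all assumptions:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // assumed webcam source
    cv::Mat frame, hsv, backproj;
    if (!cap.read(frame)) return 1;

    cv::Rect track(100, 100, 80, 80);        // hypothetical initial box (e.g. your mask's bounding box)
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Hue histogram of the initial object.
    int histSize = 30;
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    int channels[] = {0};
    cv::Mat roi = hsv(track), hist;
    cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::calcBackProject(&hsv, 1, channels, hist, backproj, ranges);

        // CamShift shifts AND resizes/rotates the window to follow the object.
        cv::RotatedRect box = cv::CamShift(backproj, track,
            cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));

        cv::ellipse(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::imshow("camshift", frame);
        if (cv::waitKey(30) == 27) break;    // Esc to quit
    }
    return 0;
}
```

Note that this still relies on back-projection internally, so it shares the color-ambiguity problem; the gain is that the adaptive window keeps the tracker locked to one blob instead of accepting every similarly colored pixel.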

How to extract and locate the specific region of image by using OpenCV?

I am a newbie to OpenCV. I would like to work on a small project for tracking the rotation speed of a gear (using a webcam). However, until now, I have had no idea how to approach this.
The posted image shows a machine which contains two 'big' gears. What I am interested in is only the gear on the left-hand side (the red line I highlighted).
Link
My plan is:
Extract the gear region of interest.
Mask all unrelated regions, so the masked image shows the left gear only (the ROI).
.....
The problem is, how can I locate/extract/mask the ROI? I have gone through some examples of cvMatchTemplate(), but it doesn't support rotation and scaling. Because a webcam is used, the captured image may be scaled or rotated. cvFindContours() will extract all contours in the image rather than just the ROI.
If you know the gear beforehand, you can use a picture of it to extract keypoints with SIFT, SURF, FAST or any corner detection algorithm. Then do as follows (a sketch appears after this list):
1- Apply FAST on every frame to detect keypoints.
2- Extract SIFT descriptors from those keypoints.
3- Match the detected points in the scene with the points you previously extracted from the reference image. You can use the FLANN matcher for this.
4- Those matches will define a region in the scene containing the gear you are looking for.
This is not trivial, so you will need to look through the OpenCV documentation for all of these functions.
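A minimal sketch of those four steps, assuming OpenCV 4.4+ (where SIFT lives in the main features2d module) and hypothetical filenames gear.jpg (reference picture) and frame.jpg (scene); the 0.7 ratio-test threshold is a common but arbitrary choice:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gearTemplate = cv::imread("gear.jpg", cv::IMREAD_GRAYSCALE);  // hypothetical reference picture
    cv::Mat scene        = cv::imread("frame.jpg", cv::IMREAD_GRAYSCALE);

    // 1- detect keypoints (FAST) and 2- extract SIFT descriptors.
    auto fast = cv::FastFeatureDetector::create();
    auto sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpT, kpS;
    cv::Mat descT, descS;
    fast->detect(gearTemplate, kpT);  sift->compute(gearTemplate, kpT, descT);
    fast->detect(scene, kpS);         sift->compute(scene, kpS, descS);

    // 3- match with FLANN, keeping only distinctive matches (Lowe's ratio test).
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descT, descS, knn, 2);
    std::vector<cv::Point2f> matched;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
            matched.push_back(kpS[m[0].trainIdx].pt);

    // 4- the surviving matches outline the gear's region in the scene.
    if (!matched.empty()) {
        cv::Rect region = cv::boundingRect(matched);
        std::cout << "gear region: " << region << std::endl;
    }
    return 0;
}
```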

Using OpenCV to correct stereo images

I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross-eyed method, the best 3D effect is achieved. The left image will be the reference image; the right image will be modified for corrections. I believe OpenCV will be the best software for this purpose. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order it occurs in. What I'm asking is: does that seem right? Is there anything I've missed or anything in the wrong order? Also, which specific OpenCV functions would I need for all the necessary steps to complete this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching features in each view (stereo block matching) to create a disparity map.
Reproject the disparity map into a 3D model.
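A minimal sketch of the disparity steps, assuming the pair has already been rectified (cv::stereoRectify handles that, and also produces the Q matrix used below) and using hypothetical filenames; the block-matching parameters are arbitrary starting values:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Stereo block matching: 80 disparity levels (must be a multiple of 16), 21x21 block.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(80, 21);
    cv::Mat disp16;
    bm->compute(left, right, disp16);        // fixed-point disparities, scaled by 16

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);

    // Reproject into 3D with the 4x4 Q matrix from cv::stereoRectify.
    cv::Mat Q = cv::Mat::eye(4, 4, CV_64F);  // placeholder; use the real Q from rectification
    cv::Mat xyz;
    cv::reprojectImageTo3D(disp, xyz, Q);

    // Save a viewable disparity image.
    cv::Mat vis;
    cv::normalize(disp, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    return 0;
}
```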

How do I detect small blobs using EmguCV?

I'm trying to track the position of a robot from an overhead webcam. However, as I don't have much access to the robot or the environment, I have been working with snapshots from the webcam.
The robot has 5 bright LEDs positioned strategically, which are a different enough color from the robot and the environment to be easy to isolate.
I have been able to do just that using EmguCV, resulting in a binary image like the one below. My question now is, how do I get the positions of the five blobs and use those positions to determine the position and orientation of the robot?
I have been experimenting with the Emgu.CV.VideoSurveillance.BlobTrackerAuto class, but it stubbornly refuses to detect the blobs in the above image. Being a bit of a newbie when it comes to any of this, I'm not sure what I could be doing wrong.
So what would be the best method of obtaining the positions of the blobs in the above image?
I can't tell you how to do it with EmguCV in particular; you'd need to translate the calls from OpenCV to EmguCV. You'd use cv::findContours to get the blobs and cv::moments to get the positions of the blobs (the formula for the center of a blob is in the documentation of cv::moments). Then you'd use cv::estimateRigidTransform to get the position and orientation of the robot.
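In OpenCV C++ terms (translating to EmguCV is left as an exercise), a minimal sketch of the contours-plus-moments part, assuming a hypothetical binary input leds.png like the one described:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat binary = cv::imread("leds.png", cv::IMREAD_GRAYSCALE);

    // Each bright LED becomes one external contour.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centers;
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 > 0)                             // skip degenerate blobs
            centers.emplace_back(m.m10 / m.m00,    // centroid x = m10 / m00
                                 m.m01 / m.m00);   // centroid y = m01 / m00
    }

    for (const auto& p : centers)
        std::cout << "blob at " << p << std::endl;
    return 0;
}
```

Note that in OpenCV 4 cv::estimateRigidTransform was dropped in favor of cv::estimateAffinePartial2D, which serves the same purpose here.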
I use the cvBlob library to work with blobs. Yesterday I used it to detect small blobs and it works fine.
I wrote a python module to do this very thing.
http://letsmakerobots.com/node/38883#comments
