OpenCV capture blurred

I am using opencv to capture from an IP camera and can capture the feed fine.
The feed is of a door entrance, and I am capturing people coming in the door.
However, when someone moves too quickly, they appear slightly blurred in the frame due to the motion.
Does anyone know how to capture a frame differently or how to run an algorithm to fix the image?
Here is a sample image:
I have hidden the face of the image but you should get the idea.
As you can see the gate, which is stationary, is in focus.
Here is the key part of the frame capture code (obviously there is more)
this->_cvCap = cvCaptureFromCAM(-1);              // open the default camera
IplImage * image = cvQueryFrame(this->_cvCap);    // grab and decode one frame (buffer owned by the capture)
cvSaveImage(filename, image);                     // write the frame to disk

The blurring is likely due to a slow shutter speed, i.e. a long integration/exposure time.
You can theoretically set this from OpenCV with the cvSetCaptureProperty function.
Be aware, though, that many cameras do not support this.
Here is a related question on SO: Setting Camera Parameters in OpenCV/Python
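For example, a minimal sketch using the same legacy C API as above; whether CV_CAP_PROP_EXPOSURE is actually honoured, and what units the value is in, depends entirely on the camera and capture backend, so treat the -6 as an assumption to tune.
CvCapture * cap = cvCaptureFromCAM(-1);
// On DirectShow-style backends the exposure value is roughly log2 of the
// exposure time in seconds, so more negative means a shorter exposure.
cvSetCaptureProperty(cap, CV_CAP_PROP_EXPOSURE, -6);
IplImage * image = cvQueryFrame(cap);
if (image)
    cvSaveImage("frame.jpg", image);
cvReleaseCapture(&cap);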

Related

People detection using HOG not finding anyone

I have a video of a soccer match in which the players are relatively far away from the camera and thus occupy small portions of the image. I'm using background subtraction to detect the players and the results are fine, but I have been asked to try detection with HOG.
I tried detectMultiScale with the default descriptors provided by OpenCV, but I can't get any detections. I don't really understand how to make it work in this case, because on other sequences where the people are closer to the camera the detector works fine.
Here is a sample image link
Thanks.
The descriptor you use with HOG determines the minimum size of person you can detect: with the DefaultPeopleDetector the detection window is 128 pixels high x 64 wide, so you can detect people around 90px high. With the Daimler descriptor the size you can detect is a bit smaller.
Your pedestrians are still too small for this, so you may need to magnify the whole image, or just the parts which show up as foreground using background segmentation.
Have a look at the function definition for detectMultiScale: http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#cascadeclassifier-detectmultiscale
It might be that you need to reduce the value of minSize to detect smaller people, or the people might just be too far away.
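To make that concrete, here is a hedged sketch of the magnify-then-detect idea; the 2x factor, the file names and the detectMultiScale parameters are illustrative assumptions, not values from the question.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("soccer_frame.jpg");

    // Magnify the frame so distant players reach the ~90 px the 64x128 window needs.
    cv::Mat big;
    cv::resize(frame, big, cv::Size(), 2.0, 2.0);

    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> found;
    hog.detectMultiScale(big, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

    for (cv::Rect r : found) {
        // Map each detection back to the original image coordinates.
        r.x /= 2; r.y /= 2; r.width /= 2; r.height /= 2;
        cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("detections.jpg", frame);
    return 0;
}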

Take photo during video-input

I'm currently trying to take an image in the best quality while capturing video at a lower quality. The problem is that I'm using the video stream to check whether faces are in front of the cam, and this needs lots of resources, so I'm using a lower-quality video stream; if any faces are detected I want to take a photo in high quality.
Best regards, and thanks for your help!
You cannot have multiple capture sessions, so at some point you will need to switch to a higher resolution. You say that face detection takes too many resources when using high-res snapshots, so why not simply down-sample the image and keep using high resolution all the time (send the down-sampled frame to face detection, display the high-res one)?
I would start with the most common route, Apple's graphics context, and try to downscale the frame there. If that takes too much CPU you could do the same on the GPU (find a library that does it, or write a simple program yourself), or you could even just drop the odd rows and columns of the raw image data. In any of these cases, note that you probably do not need face detection on the same thread as display, and you most likely don't need a high frame rate for detection either (display the camera at full FPS but update face recognition at 10 FPS, for instance).
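As a rough illustration of the down-sample-for-detection idea in OpenCV terms (on iOS the capture side would go through AVFoundation instead; the 0.25 factor and the Haar cascade file name are assumptions):
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");

    const double scale = 0.25;                  // assumed down-sampling factor
    cv::Mat frame, smallFrame, gray;

    while (cap.read(frame)) {
        // Run detection on a cheap, down-scaled copy of the full-quality frame.
        cv::resize(frame, smallFrame, cv::Size(), scale, scale);
        cv::cvtColor(smallFrame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces);

        if (!faces.empty()) {
            // Map the first detection back to full resolution and keep the high-quality frame.
            cv::Rect r(faces[0].x / scale, faces[0].y / scale,
                       faces[0].width / scale, faces[0].height / scale);
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
            cv::imwrite("face_highres.jpg", frame);
        }
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27) break;        // Esc to quit
    }
    return 0;
}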
Another option is to run the whole thing in low res, then when you need to take the image, stop the session, start a high-res session, take the snapshot, and switch back to low res for face detection.

iOS Camera Color Recognition in Real Time: Tracking a Ball

I have been looking around for a bit and know that people are able to track faces with Core Image and OpenGL. However, I am not sure where to start tracking a colored ball with the iOS camera.
Once I have a lead on tracking the ball, I hope to build something that detects when the ball changes direction.
Sorry I don't have source code, but I am unsure where to even start.
The key point is image preprocessing and filtering. Use the camera APIs to get the video stream, take a snapshot from it, and then:
- apply a Gaussian blur (spatial smoothing);
- apply a luminance average threshold filter to get a black-and-white image;
- run morphological operators (opening, closing) to suppress small noise;
- run an edge detection algorithm (for example a Prewitt operator), so that only edges remain and, if the recording environment is reasonable, the ball shows up as a circle;
- use a Hough transform to find the center of the ball.
Record the ball's position so that in the next frame only a small region around it needs to be processed.
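A rough OpenCV sketch of this pipeline is below; the question targets iOS, so treat it as pseudocode for whichever framework you end up using. Every numeric parameter is an assumption to tune, and cv::HoughCircles runs its own edge detection internally, so the explicit Prewitt step is folded into it.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, bw;
    cv::Point lastCenter(-1, -1);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);                       // spatial smoothing

        cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);  // black-and-white image
        cv::morphologyEx(bw, bw, cv::MORPH_OPEN,                               // suppress small noise
                         cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

        // Hough transform for circles (Canny edge detection happens inside).
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(bw, circles, cv::HOUGH_GRADIENT, 1, bw.rows / 4, 100, 20, 10, 80);

        if (!circles.empty()) {
            lastCenter = cv::Point(cvRound(circles[0][0]), cvRound(circles[0][1]));
            cv::circle(frame, lastCenter, cvRound(circles[0][2]), cv::Scalar(0, 255, 0), 2);
            // A real tracker would restrict the next search to a region around lastCenter.
        }
        cv::imshow("ball", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}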
Another keyword to search for is blob detection.
A fast library for image processing on the GPU with OpenGL is Brad Larson's GPUImage library: https://github.com/BradLarson/GPUImage
It implements all the needed filters (except the Hough transform).
The tracking process can be defined as follows:
- Start from the initial coordinates and dimensions of an object with given visual characteristics (image features).
- In the next video frame, find the same visual characteristics near the coordinates from the last frame.
"Near" means considering basic transformations relative to the last frame:
- translation in each direction;
- scale;
- rotation.
The variation of these transformations is strictly tied to the frame rate: the higher the frame rate, the nearer the position will be in the next frame.
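A minimal sketch of the translation-only version of this search, using plain template matching inside a window around the last position; the margin is an assumption tied to the expected frame-to-frame motion, and handling scale or rotation would need a richer model.
#include <opencv2/opencv.hpp>

cv::Rect trackInNextFrame(const cv::Mat& prevFrame, const cv::Mat& nextFrame,
                          const cv::Rect& lastBox, int margin = 40) {
    // Search window: the last box grown by the margin, clipped to the frame.
    cv::Rect search(lastBox.x - margin, lastBox.y - margin,
                    lastBox.width + 2 * margin, lastBox.height + 2 * margin);
    search &= cv::Rect(0, 0, nextFrame.cols, nextFrame.rows);

    // Appearance of the object taken from the previous frame.
    cv::Mat templ = prevFrame(lastBox);

    cv::Mat result;
    cv::matchTemplate(nextFrame(search), templ, result, cv::TM_CCOEFF_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);

    // Best match, translated back into full-frame coordinates.
    return cv::Rect(search.x + maxLoc.x, search.y + maxLoc.y,
                    lastBox.width, lastBox.height);
}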
Marvin Framework provides plug-ins and examples to perform this task. It's not compatible with iOS yet; however, it is open source and I think you could port the source code fairly easily.
This video demonstrates some tracking features, starting at 1:10.

Image Rectification for Shake Correction on OpenCV

I have 2 pictures of the same scene from an uncalibrated camera. The pictures are from a slightly different angle and scale (zoom) and I'd like to superimpose them, rejecting any kind of shake. In other words, I need to transform them so the shake becomes imperceptible, i.e. do motion compensation.
I've already tried a simple SURF (feature) detector together with a homography, but sometimes the result isn't satisfactory. So I am thinking about trying image rectification to compensate for the motion.
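For reference, the alignment I tried looks roughly like this (a minimal sketch; ORB stands in for SURF here, since SURF lives in opencv_contrib, and the parameters are illustrative):
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat im1 = cv::imread("frame1.png"), im2 = cv::imread("frame2.png");

    // Detect and describe features in both frames.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> k1, k2;
    cv::Mat d1, d2;
    orb->detectAndCompute(im1, cv::noArray(), k1, d1);
    orb->detectAndCompute(im2, cv::noArray(), k2, d2);

    // Match descriptors (cross-checked brute force) and collect correspondences.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> p1, p2;
    for (const auto& m : matches) {
        p1.push_back(k1[m.queryIdx].pt);
        p2.push_back(k2[m.trainIdx].pt);
    }

    // Homography from frame 2 to frame 1 with RANSAC outlier rejection,
    // then warp frame 2 into frame 1's coordinates so the two can be superimposed.
    cv::Mat H = cv::findHomography(p2, p1, cv::RANSAC, 3.0);
    cv::Mat aligned;
    cv::warpPerspective(im2, aligned, H, im1.size());
    cv::imwrite("aligned.png", aligned);
    return 0;
}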
- Would it work with slight changes, such as user shake?
- Would it really work to reject shake for these 2 frames? And for a bigger buffer of pictures (10 maybe)?
- Does anyone know if it would fix the scale disparity (different zoom in the images)?
- What does the algorithm really do? Will it transform both pictures into a third orientation?
If there is a better solution, I would be glad to know =)
EDIT
I don't aim to compensate for motion blur but for the displacement itself. For example, in this file the author compensates for the angle difference between two cameras by image rectification. How does it actually work? Does it always create an intermediate picture orientation, or can I specify that one of the pictures shall remain still?
Also, would I be able to apply this to many frames, or would it always find an intermediate orientation for each pair of frames I put in?
Cheers,
I'm not sure how well superimposing the images would work. Another way to remove blur (including motion blur, which should dominate in handheld camera devices) from an image is blind deconvolution. It is basically a method of finding the inverse of the blur filter that was physically applied (camera shake) to the real image. There are plenty of techniques on the web. I've specifically had good results using a modified version of the algorithm in this paper: http://www.cse.cuhk.edu.hk/~leojia/all_final_papers/motion_deblur_cvpr07.pdf
It also comes with an executable file somewhere around the web so you can see if it's fit for your purpose.
Good luck out there!

How to remove distortion due to motion from an image

I am trying to track the motion of a toy car. I have recorded a few videos and am now trying to calculate rotation.
My problem is that extracting features from the object's surface is quite challenging due to motion blur. The image below shows a crop from a video frame. The distortion appears as horizontal lines and only happens when the object is moving; when the object is not moving there is no distortion.
The image shows the distorted car as it moves forward along a diagonal path across the frame.
I tried a Wiener filter based on the local median and variance, but it didn't give much improvement. It only produced a smoothed image, as if a Gaussian blur had been applied.
What type of enhancements should I do to get a better image?
Video: 720 x 576 frames, 25 fps.
From the picture provided, it looks like you need to de-interlace the video rather than just filter what's there; I remember doing this by taking every other scan line and then resizing to restore the aspect ratio.
I found a pretty good site that talks about deinterlacing, in case you'd like to look at other possibilities:
http://www.100fps.com/
(I have not inspected the image very closely, so it's possible some interlacing scheme other than every-other-line is going on, in which case my first suggestion wouldn't work properly. It also implies that you will lose some resolution, but that's just the nature of interlaced video.)
Given that your camera outputs interlaced video, you are better off using one field of the video. Either only use the even lines of the image or only the odd lines. The image will be squashed but you won't be mixing two images together.
Yep, that image needs to be de-interlaced. Correcting "distortion" due to linear movement is a different thing: you need to apply a linear directional filter whose parameters depend on the speed of the vehicle, the distance to the camera, and the shutter speed.
You first have to calculate the impulse response for a given set of conditions (those above, which determine the deviation, i.e. the distance between the same point at the beginning and the end of the exposure), and then apply inverse filtering. You may need a filtering or image-processing toolkit; if you are using Matlab it's going to be easy.
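If you end up doing it in OpenCV instead, a minimal sketch of the idea might look like this: a horizontal motion impulse response plus Wiener-style inverse filtering, where the blur length len and noise-to-signal ratio nsr are assumptions you would estimate from the conditions above.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("blurred_frame.png", cv::IMREAD_GRAYSCALE);
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    const int len = 15;        // assumed blur length in pixels
    const double nsr = 0.01;   // assumed noise-to-signal ratio

    // Impulse response: a 1 x len horizontal line, normalised to sum to 1.
    cv::Mat psf = cv::Mat::zeros(img.size(), CV_32F);
    for (int i = 0; i < len; ++i)
        psf.at<float>(0, i) = 1.0f / len;

    // Wiener filter in the frequency domain: F = conj(H) * G / (|H|^2 + NSR).
    cv::Mat G, H;
    cv::dft(img, G, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf, H, cv::DFT_COMPLEX_OUTPUT);

    cv::Mat planes[2];
    cv::split(H, planes);
    cv::Mat denom = planes[0].mul(planes[0]) + planes[1].mul(planes[1]) + nsr;

    cv::Mat numer;
    cv::mulSpectrums(G, H, numer, 0, true);   // G * conj(H)
    cv::split(numer, planes);
    planes[0] /= denom;
    planes[1] /= denom;
    cv::merge(planes, 2, numer);

    cv::Mat restored;
    cv::dft(numer, restored, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    // The PSF starts at the image origin, so the result is circularly shifted
    // by roughly len/2 pixels and shows some border ringing.
    cv::normalize(restored, restored, 0, 255, cv::NORM_MINMAX);
    restored.convertTo(restored, CV_8U);
    cv::imwrite("restored.png", restored);
    return 0;
}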
Did you try deconvblind?
Follow the example on the deconvblind MathWorks page; it might work well on your example image.
Another example: Image Restoration.
The following is a very simple de-interlacing method:
cv::Mat input = cv::imread("img.jpg");
// Reinterpret the (continuous) buffer as half the rows and twice the columns,
// so each row of tmp holds one even scan line followed by one odd scan line.
cv::Mat tmp(input.rows / 2, input.cols * 2, input.type(), input.data);
// Keep only the first half of each row, i.e. the even field.
tmp = tmp.colRange(0, input.cols);
cv::Mat output;
// Stretch the field back to the original height.
cv::resize(tmp, output, cv::Size(), 1, 2);
