Foreground detection without an image sequence

I need to detect the foreground object in an image and cut it out of that image. There are lots of background/foreground subtraction and object recognition algorithms, but they work on videos or image sequences. I have only a single image (it could be a picture of a man in front of a white wall) as input. Do you know any useful approaches that are applicable to a single image file instead of a video or image sequence?

What you are looking for is a figure-ground segmentation algorithm. Does it have to be fully automatic? If you can draw an initial contour of the object by hand, you can use a class of algorithms called "active contours" (snakes). If you need this to be fully automatic, you can use an algorithm called normalized cuts (N-cuts).
If you are using MATLAB, and you are ok with semi-automatic segmentation, try the Image Segmenter App in the Image Processing Toolbox.
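If you'd rather stay in Python, here is a minimal active-contour (snake) sketch using scikit-image (OpenCV has no built-in snake); the file name and the initial circle's center and radius are placeholders standing in for a hand-drawn contour:

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.filters import gaussian
    from skimage.io import imread
    from skimage.segmentation import active_contour

    img = rgb2gray(imread("portrait.jpg"))      # hypothetical input image

    # initial contour: a circle roughly enclosing the subject, in (row, col)
    s = np.linspace(0, 2 * np.pi, 400)
    init = np.array([200 + 150 * np.sin(s),     # rows; center/radius assumed
                     250 + 150 * np.cos(s)]).T  # cols

    # the snake deforms from the initial circle to lock onto image edges
    snake = active_contour(gaussian(img, 3), init,
                           alpha=0.015, beta=10, gamma=0.001)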

Search for "saliency detection", for example "Global contrast based salient region detection":
http://mmcheng.net/salobj/
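That particular method isn't in stock OpenCV, but the opencv-contrib saliency module has a simpler spectral-residual detector you could try first; a sketch only (the file name is a placeholder, and results will differ from the paper above):

    import cv2

    img = cv2.imread("input.jpg")

    # spectral-residual static saliency from opencv-contrib
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(img)

    # threshold the 0..1 saliency map into a rough foreground mask
    mask = (saliency_map * 255).astype("uint8")
    _, mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)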

Related

Background Subtraction in OpenCV

I am trying to subtract two images using the absdiff function to extract a moving object. It works well, but sometimes the background appears in front of the foreground.
This happens when the background and foreground colors are similar. Is there any solution to overcome this problem?
The description above may not be enough, so I have attached images at the following link.
Thanks.
You can use some pre-processing techniques like edge detection and a contrast stretching algorithm, which will give you extra information for the subtraction. Even when the colors are the same, a new object should have texture features such as edges; if the edges are preserved properly, the image subtraction will pick up the object.
Process flow (see the sketch below):
1. Run an edge detection algorithm.
2. Apply a contrast stretching algorithm (like histogram stretching).
3. Overlay the detected edges on top of the contrast-stretched image.
4. Now use the image subtraction algorithm from OpenCV.
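A minimal sketch of that flow in Python/OpenCV, assuming two grayscale images on disk; the file names, Canny thresholds, and final binary threshold are placeholders to tune:

    import cv2

    # hypothetical file names
    bg = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread("current.jpg", cv2.IMREAD_GRAYSCALE)

    def enhance(img):
        stretched = cv2.equalizeHist(img)   # equalization as contrast stretching
        edges = cv2.Canny(img, 50, 150)     # edge detection
        return cv2.max(stretched, edges)    # edges on top of the stretched image

    # subtract the enhanced images instead of the raw pixels
    diff = cv2.absdiff(enhance(cur), enhance(bg))
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)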
There isn't enough information to formulate a complete solution to your problem, but there are some tips I can offer:
- First, prefilter the input and background images using a strong median (or Gaussian) filter. This will make your results much more robust to image noise and confusion from minor, non-essential detail (like the horizontal lines of your background image). Unless you want to detect a single moving strand of hair, you don't need to process the raw pixels.
- Next, take the advice offered in the comments to test all 3 color channels as opposed to going straight to grayscale.
- Then create a grayscale image from the max of the 3 absdiffs done on each channel.
- Then perform your closing and opening procedure (sketched below).
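Roughly, those steps in Python/OpenCV; the kernel sizes and the threshold value are assumptions to tune for your scene:

    import cv2
    import numpy as np

    # hypothetical file names; strong median prefilter on both images
    bg = cv2.medianBlur(cv2.imread("background.jpg"), 9)
    cur = cv2.medianBlur(cv2.imread("current.jpg"), 9)

    # absdiff on all 3 channels, then per-pixel max across channels
    diff = cv2.absdiff(cur, bg)
    gray = np.max(diff, axis=2).astype(np.uint8)

    # threshold, then close and open to clean the mask
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)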
I don't know your requirements, so I can't take them into account. If accuracy is of the utmost importance, I'd use the median filter on the input image over Gaussian. If speed is an issue, I'd scale down the input images for processing by at least half, then scale the result up again. If the camera is in a fixed position and you have a pre-calibrated background, then the current naive difference method should work. If the system has to detect movement in a real-world environment over an extended period of time (moving shadows, plants, vehicles, weather, etc.), then a rolling average (or Gaussian) background model will work better. If the camera is moving, you will need to do a lot more processing, probably some optical flow and/or Fourier transform tests. All of these things need to be considered to provide the best solution for the application.
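For the rolling-average background model, OpenCV's accumulateWeighted is one option; a sketch only, with the capture source and the alpha value as assumptions:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)            # hypothetical camera source
    ret, frame = cap.read()
    avg = np.float32(frame)              # floating-point accumulator

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # rolling average; alpha controls how fast the background adapts
        cv2.accumulateWeighted(frame, avg, 0.05)
        background = cv2.convertScaleAbs(avg)
        diff = cv2.absdiff(frame, background)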

Proper approach to feature detection with opencv

My goal is to find known logos in static images and videos. I want to achieve that by using feature detection with KAZE or AKAZE and RANSAC.
I am aiming for a similar result to: https://www.youtube.com/watch?v=nzrqH...
While experimenting with the detection example from the docs (which is great, btw), I ran into several issues:
- Object resolution: differences in size between the known object and the resolution of the scene where the object should be located sometimes break the detection algorithm; the object won't be recognized in images with a low resolution, although the image quality is still all right for a human eye.
- Color contrast with the background: it seems that the detection can easily be thrown off by different background contrasts (e.g., the known object is a black logo on a white background, while the logo in the scene is white on a black background). How can I make the detection more robust against different illumination and background contrasts?
- Preprocessing: should any kind of preprocessing be done on the object / scene? For example, enlarging the scene up to a specific size? Is there any guideline on how to approach the feature detection in several steps to get the best results?
I think your issue is more complicated than a feature-descriptor-matching-homography process. It is more likely a pattern recognition or classification problem.
You can check this extended review of shape matching:
http://www.staff.science.uu.nl/~kreve101/asci/vir2001.pdf
Firstly, the resolution of the images is very important, because the matching operation usually performs a pixel-intensity cross-correlation between your sample image (logo) and your processed image, so you will get the best cross-correlated area. In the same way, the background color intensity is very important, because background illumination can severely affect your final result.
Feature-based methods are widely researched:
http://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html
http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html
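For what it's worth, a bare-bones AKAZE + RANSAC pipeline along the lines the question describes might look like this (file names, the 0.75 ratio test, and the RANSAC reprojection threshold are assumptions):

    import cv2
    import numpy as np

    logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(logo, None)
    kp2, des2 = akaze.detectAndCompute(scene, None)

    # AKAZE descriptors are binary, so match with Hamming distance
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    if len(good) >= 4:
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects outlier matches while fitting the homography
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)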
So, for example, you can try alternative methods such as:
- HOG descriptors (Histogram of Oriented Gradients):
https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
- Pattern matching or template matching:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
I think the last one (template matching) is the easiest way to check your algorithm.
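A quick template-matching check can be as short as this (file names are placeholders; TM_CCOEFF_NORMED is one of several available score functions):

    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)

    # slide the template over the scene and score each position
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    # best match is at max_loc for correlation-based methods
    h, w = template.shape
    cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)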
Hope these references help.
Cheers.
Unai.

After Effect's Rotoscoping brush algorithms

I don't think I'm going to get any replies, but here goes: I'm developing an iOS app that performs image segmentation functions. I'm trying to implement the easiest way to crop out a subject from an image without the need for a green screen/keying. Most automated solutions, like using OpenCV, just aren't cutting it.
I've found the rotoscope brush tool in After Effects to be effective at giving hints on where the app should be cutting out. Anyone know what kind of algorithms the rotoscope brush tool is using?
Check out this page, which contains a couple of video presentations from SIGGRAPH (a computer graphics conference) about the Roto Brush tool. Also take a look at Jue Wang's paper on Video SnapCut. As Damien guessed, object extraction relies on some pretty intense image processing algorithms. You might be able to implement something similar in OpenCV depending on how clever/masochistic you're feeling.
The algorithm is a graph-cut based segmentation algorithm where Gaussian Mixture Models (GMM) are trained using color pixels in "local" regions as well as "globally", together with some sort of shape prior.
OpenCV has a "cheap hack" implementation of the "GrabCut" paper where the user specifies a bounding box around the object he wish to segment. Typically, using just the bounding box will not give good results. You will need the user to specify the "foreground" and "background" pixels (as is done in Adobe's Rotoscoping tool) to help the algorithm build foreground and background color models (in this case GMMs) so that it will know what are the typical colors in the foreground object you wish to segment, and those for the background that you want to leave out.
A basic graph-cut implementation can be found on this blog. You can probably start from there and experiment with different ways to compute the cost terms to get better results.
Lastly, the "soften" the edges, a cheap hack is to blur the binary mask to obtain a mask with values between 0 and 1. Then recomposite your image using the mask i.e. c[i][j] = mask[i][j] * fgd[i][j] + (1 - mask[i][j]) * bgd[i][j], where you are blending the foreground you segmented (fgd), with a new background image (bgd) using the mask values as blending weights.

OpenCV intrusion detection

For a project of mine, I'm required to process image differences with OpenCV. The goal is to detect an intrusion in a zone.
To be a little more clear, here are the inputs and outputs:
Inputs:
- A reference image
- A second image from approximately the same point of view (an error margin is allowed)
Outputs:
- Detection of new objects in the scene
Bonus:
- Recognition of those objects
For me, the most difficult part is filtering out small differences (luminosity, camera-position error margin, movement of trees...).
I have already read a lot about OpenCV image processing (subtraction, erosion, threshold, SIFT, SURF...) and have had some good results.
What I would like is a list of the steps you think are best for good detection (humans, cars...), and the algorithms for each step.
Many thanks for your help.
Track-by-detect, human tracker (the first two steps are sketched below):
1. Apply the HOG detector to detect humans.
2. Draw a corresponding rectangle as a foreground area on the foreground mask.
3. Pass this mask to the OpenCV Video Surveillance / Blob Tracker Facility.
4. You can now group the passing humans based on their blob {x, y} values into public/restricted areas.
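A sketch of steps 1 and 2 (the frame source is a placeholder, and detectMultiScale parameters usually need tuning per scene):

    import cv2
    import numpy as np

    # HOG people detector with OpenCV's pre-trained SVM coefficients
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.jpg")          # hypothetical frame
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))

    # paint each detection as a filled rectangle on the foreground mask
    fg_mask = np.zeros(frame.shape[:2], np.uint8)
    for (x, y, w, h) in rects:
        cv2.rectangle(fg_mask, (x, y), (x + w, y + h), 255, -1)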
I had to deal with this problem last year.
I suggest an adaptive background/foreground estimation algorithm that produces a foreground mask.
On top of that, you add a blob detector and tracker, and then calculate whether an intersection takes place between the blobs and your intrusion area.
OpenCV has samples of all of these within the legacy code. Of course, if you want, you can also use your own or other versions of these.
Links:
http://opencv.willowgarage.com/wiki/VideoSurveillance
http://experienceopencv.blogspot.gr/2011/03/blob-tracking-video-surveillance-demo.html
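A rough modern equivalent of that pipeline, using MOG2 background subtraction plus a blob/zone intersection test (OpenCV 4.x; the video path and intrusion zone are placeholders):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("surveillance.avi")     # hypothetical input
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    zx, zy, zw, zh = 100, 100, 200, 200            # hypothetical intrusion zone

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        fg = subtractor.apply(frame)               # adaptive foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)       # blob bounding box
            # axis-aligned rectangle intersection with the zone
            if x < zx + zw and x + w > zx and y < zy + zh and y + h > zy:
                print("intrusion detected")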
I would definitely start with running-average background subtraction if the camera is static. Then you can use findContours() to find the intruding object's location and size. If you want to detect humans walking around in a scene, I would recommend looking at the built-in Haar cascade classifier:
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
where you would just replace the XML with the upper-body classifier.
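For example, a minimal cascade sketch with the upper-body XML (assuming your OpenCV install ships the bundled cascade data under cv2.data.haarcascades):

    import cv2

    # load the upper-body Haar cascade shipped with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_upperbody.xml")

    frame = cv2.imread("frame.jpg")               # hypothetical frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bodies = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in bodies:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)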

Background subtraction

I'm doing background subtraction using OpenCV. The problem is that the foreground object is not always detected correctly. To deal with this, I would like to use four or five images and take their average as the background image. How can I do that?
Perhaps go through all the images, and if the pixel in question stays within a certain range of colour variation across all the images, treat it as background?
Then I suppose the size of the range would determine how picky you are and how confident you are in the stability and consistency of your camera.
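A sketch of both ideas with NumPy: average a handful of frames into a background, and flag pixels whose variation stays inside a tolerance as stable background (file names and the tolerance of 20 are assumptions):

    import cv2
    import numpy as np

    paths = ["bg1.jpg", "bg2.jpg", "bg3.jpg", "bg4.jpg"]   # hypothetical frames
    stack = np.stack([cv2.imread(p).astype(np.float32) for p in paths])

    # per-pixel mean across the stack as the background estimate
    background = stack.mean(axis=0).astype(np.uint8)

    # per-pixel colour range across the stack; small range = stable background
    variation = stack.max(axis=0) - stack.min(axis=0)
    stable_bg = (variation.max(axis=2) < 20).astype(np.uint8) * 255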
You should try using the background detector included in OpenCV (under cvaux.h). They also have a blob detector if you want to find object blobs.
By combining blob information and optical flow information, you can usually find the foreground object.
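On the optical-flow side, dense Farneback flow between two frames gives a per-pixel motion magnitude you can threshold; a sketch (file names and the 2.0 threshold are assumptions):

    import cv2

    prev = cv2.cvtColor(cv2.imread("frame1.jpg"), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread("frame2.jpg"), cv2.COLOR_BGR2GRAY)

    # dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # pixels with significant motion are likely foreground
    moving = (magnitude > 2.0).astype("uint8") * 255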
