Background subtraction in OpenCV / EmguCV with a fixed background

I need some samples / source code for a background subtraction algorithm to use with a fixed background. The background I use is a fixed-color background, and unfortunately all the samples I've seen so far are aimed at dynamic backgrounds.
Thank you.

Keep in mind that you will suffer from a lot of noise with the subtraction technique. To reduce it, you can always reach for your best friend in fighting noise: blur, or GaussianBlur.
There's also a fascinating discussion of a statistical approach to this, the "codebook" method, in the O'Reilly book "Learning OpenCV."
Another technique for improving your results, provided your intended foreground objects are slow and/or your camera is fast, is to use many images and their differences, as opposed to just two. The programming for that should be easy enough, but as they say in most books: I leave it as an exercise for the reader ;).
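For a head start anyway, here is a minimal sketch of that multi-frame idea in Python with OpenCV (the thread is language-agnostic, so Python is an assumption, as are the file names and the threshold value). Each frame is blurred first, per the noise advice above, and the absolute frame-to-frame differences are accumulated and averaged:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.avi")   # hypothetical input file

    acc, prev, count = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)          # fight noise before differencing
        if prev is not None:
            diff = cv2.absdiff(gray, prev).astype(np.float32)
            acc = diff if acc is None else acc + diff     # accumulate differences
            count += 1
        prev = gray

    # Average the accumulated differences and threshold to get a motion mask.
    motion = cv2.convertScaleAbs(acc / max(count, 1))
    _, mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    cv2.imwrite("motion_mask.png", mask)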

What about simple frame subtraction? One frame is always the same, the background, and the other is the current frame from the video stream. Convert both of them to grayscale and do an absdiff operation. Here's my video result (look at the center frame).
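A minimal sketch of that subtraction in Python with OpenCV (the language, file names, and threshold value are assumptions):

    import cv2

    # Hypothetical file names: a clean shot of the fixed background
    # and one frame from the video stream.
    background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(frame, background)                         # per-pixel |frame - background|
    _, fg_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # 30 is a tunable threshold

    cv2.imwrite("foreground_mask.png", fg_mask)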

Related

How to extract foreground from a moving camera using OpenCV

I am performing motion detection using OpenCV. The challenge is that the camera is moving, so frame differencing cannot be used directly. I am therefore trying to separate foreground from background, and then perform frame differencing on the foreground images.
The question is: how do I separate foreground from background in video taken from a moving camera?
Any help would be appreciated!
Background subtraction is an important technique for generating a foreground mask, and it is widely used in applications.
Most of the techniques used for background subtraction assume that the camera is static. So far I have not seen any paper or example that works with a moving camera.
You may have a look at the OpenCV example here, which is very useful for background subtraction. The MOG2 and KNN algorithms in particular are good at finding shadows, which are a common problem in background subtraction. I suggest you set the history parameter of these algorithms very low (1 to 10); by doing this you may get some usable results, though I am not optimistic.
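As a sketch of that low-history suggestion, here is how the MOG2 subtractor might be configured in Python with OpenCV (the language, input file, and exact parameter values are assumptions):

    import cv2

    # MOG2 with a very short history (1 to 10, as suggested) and shadow
    # detection enabled; all parameter values here are illustrative.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=5, detectShadows=True)
    # Or: subtractor = cv2.createBackgroundSubtractorKNN(history=5, detectShadows=True)

    cap = cv2.VideoCapture("moving_camera.avi")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)
        # Shadows are marked with the value 127; keep only confident foreground.
        _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
        cv2.imshow("foreground", fg_mask)
        if cv2.waitKey(30) == 27:   # Esc to quit
            break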
The best, though most difficult, approach I can suggest is to use a learned model. If your desired objects belong to specific classes (people, cars, etc.), you can use a detector to find those objects and subtract the rest of the frame. The most suitable algorithm for this problem is Mask R-CNN, which also predicts a mask for each object. Here are some examples of this:
Reference 1
Reference 2
Reference 3
Reference 4
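As an illustration of the Mask R-CNN suggestion, here is a minimal Python sketch using a pretrained torchvision model; the framework choice is an assumption (the references above may use others), and the score threshold and file names are illustrative:

    import cv2
    import numpy as np
    import torch
    import torchvision

    # Pretrained Mask R-CNN from torchvision; COCO class 1 is "person".
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = cv2.imread("frame.png")                         # hypothetical input file
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

    with torch.no_grad():
        out = model([tensor])[0]

    # Union of all confident person masks; everything else becomes background.
    keep = (out["labels"] == 1) & (out["scores"] > 0.7)
    masks = out["masks"][keep, 0] > 0.5                   # (N, H, W) booleans
    person = masks.any(dim=0).numpy().astype(np.uint8)

    cv2.imwrite("people_only.png", img * person[:, :, None])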

How to subtract a non-stationary background in real time

I know that OpenCV has some algorithms (here and here, for example) that handle stationary background subtraction effectively. But I need to do this with a non-stationary background, i.e. using a hand-held camera.
I would appreciate it if someone could give me some tips.
Segmentation of a dynamic scene is a topic of ongoing research in computer vision... However, for a quick win, you might want to try this rather nice library. It builds on OpenCV and provides 37 different algorithms for you to try; perhaps one will work well for your use case. Good luck!

After Effects' Rotoscoping brush algorithms

I don't think I'm going to get any replies but here goes: I'm developing an iOS app that performs image segmentation functions. I'm trying to implement the easiest way to crop out a subject from an image without the need of a greenscreen/keying. Most automated solutions like using OpenCV just aren't cutting it.
I've found the rotoscope brush tool in After Effects to be effective at giving hints on where the app should be cutting out. Anyone know what kind of algorithms the rotoscope brush tool is using?
Check out this page, which contains a couple of video presentations from SIGGRAPH (a computer graphics conference) about the Roto Brush tool. Also take a look at Jue Wang's paper on Video SnapCut. As Damien guessed, object extraction relies on some pretty intense image processing algorithms. You might be able to implement something similar in OpenCV depending on how clever/masochistic you're feeling.
The algorithm is a graph-cut based segmentation algorithm where Gaussian Mixture Models (GMM) are trained using color pixels in "local" regions as well as "globally", together with some sort of shape prior.
OpenCV has a "cheap hack" implementation of the "GrabCut" paper, where the user specifies a bounding box around the object they wish to segment. Typically, using just the bounding box will not give good results. You will need the user to mark "foreground" and "background" pixels (as is done in Adobe's rotoscoping tool) to help the algorithm build foreground and background color models (in this case GMMs), so that it knows which colors are typical of the foreground object you wish to segment and which belong to the background you want to leave out.
A basic graph-cut implementation can be found on this blog. You can probably start from there and experiment with different ways to compute the cost terms to get better results.
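For reference, a minimal sketch of OpenCV's GrabCut called with a bounding box, as described above (Python is an assumption, as are the image name and the rectangle):

    import cv2
    import numpy as np

    img = cv2.imread("photo.png")                 # hypothetical input file
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)     # internal GMM state, as required by the API
    fgd_model = np.zeros((1, 65), np.float64)

    rect = (50, 50, 300, 400)                     # user-supplied bounding box (x, y, w, h)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Pixels marked definite or probable foreground form the binary mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    # To refine, paint user strokes into `mask` with GC_FGD / GC_BGD and rerun
    # with cv2.GC_INIT_WITH_MASK, mimicking the foreground/background
    # scribbles described above.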
Lastly, to soften the edges, a cheap hack is to blur the binary mask to obtain a mask with values between 0 and 1. Then recomposite your image using the mask, i.e. c[i][j] = mask[i][j] * fgd[i][j] + (1 - mask[i][j]) * bgd[i][j], blending the foreground you segmented (fgd) with a new background image (bgd), using the mask values as blending weights.
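And a sketch of that mask-feathering recomposite (file names and the blur kernel size are assumptions):

    import cv2
    import numpy as np

    fgd = cv2.imread("foreground.png").astype(np.float32)       # segmented foreground frame
    bgd = cv2.imread("new_background.png").astype(np.float32)   # replacement background
    binary_mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # 0/255 mask from segmentation

    # Blur the hard mask to get soft alpha values in [0, 1].
    alpha = cv2.GaussianBlur(binary_mask, (21, 21), 0).astype(np.float32) / 255.0
    alpha = alpha[:, :, None]                                   # broadcast over the 3 channels

    # c[i][j] = mask[i][j] * fgd[i][j] + (1 - mask[i][j]) * bgd[i][j]
    composite = alpha * fgd + (1.0 - alpha) * bgd
    cv2.imwrite("composite.png", composite.astype(np.uint8))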

Algorithm for Background subtraction

I am trying to implement a simple background subtraction method for detecting moving objects in a particular scene. The objective is to segment a particular motion out of a video to use it in another video.
The algorithm I am following is:
1. Take the first 25 frames from the video and average them to get a background model.
2. Find the standard deviation of those 25 frames and store the values in another image.
3. Calculate the absolute difference between each frame and the average background model, pixel-wise.
The output I am getting is a kind of transparent motion highlighted in white (the absolute differencing is causing the transparency, I think). I want to know whether my approach is right, considering that I will be doing segmentation on this output as the next step. I also have no idea how to use the standard deviation image. Any help will be appreciated.
Please let me know if this is not the type of question that I should post on Stack Overflow. In that case, any references or links to other sites would be helpful.
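For concreteness, here is a minimal Python/OpenCV sketch of the three steps; using the standard-deviation image as a per-pixel threshold (flag a pixel as foreground when it deviates from the mean by more than k standard deviations) is one common choice, not the only one, and the file name and constants are assumptions:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("scene.avi")   # hypothetical input file

    # Steps 1-2: average the first 25 frames and take their standard deviation.
    frames = []
    for _ in range(25):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
    stack = np.stack(frames)
    mean_bg = stack.mean(axis=0)
    std_bg = np.maximum(stack.std(axis=0), 2.0)   # floor the std so flat regions aren't hypersensitive

    # Step 3, with the standard-deviation image put to use: flag a pixel as
    # foreground when it deviates from the mean by more than k standard deviations.
    k = 2.5
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        fg = (np.abs(gray - mean_bg) > k * std_bg).astype(np.uint8) * 255
        cv2.imshow("foreground", fg)
        if cv2.waitKey(30) == 27:   # Esc to quit
            break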
You said it looks transparent.
Is this what you saw? See YouTube Video - Background Subtraction (approximate median)
The reason is that you use the median (here, average) value of all frames to create the background.
What you see in white in your video is the difference between your foreground and your background (the averaged image). Median-filtered background subtraction is simple, but it is not a robust method.
You can try other background subtraction methods such as Gaussian Mixture Models (GMMs), Codebook, SOBS (Self-Organizing Background Subtraction), and ViBe.
See YouTube Video - Background Subtraction using Gaussian Mixture Models (GMMs)
You should take a look at this blog: http://mateuszstankiewicz.eu/?p=189
You will find the start of an answer there. Moreover, I think there is a dedicated module for video analysis in OpenCV.
Try these papers:
Improved adaptive Gaussian mixture model for background subtraction
Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction

single person tracking from video sequence

As part of my thesis work, I need to build a program for human tracking from video or image sequences such as the KTH or IXMAS datasets, with the following assumptions:
Illumination remains unchanged
Only one person appears in the scene
The program needs to perform in real time
I have searched a lot but still cannot find a good solution.
Please suggest a good method or an existing program that is suitable.
Case 1 - If the camera is static
If the camera is static, it is really simple to track one person.
You can apply a method called background subtraction.
For better results, you need a bare image from the camera with no person in it: that is the background. (It can also be done without this background image, but having one is better. I will explain at the end what to do if you have no background image.)
Now start capturing from the camera. Take each frame, convert both it and the background image to grayscale, and smooth both images to suppress noise.
Subtract the background image from the frame.
If the frame has no change with respect to the background image (i.e. no person), you get a black image (there will of course be some noise, which we can remove). If there is a change, i.e. a person has walked into the frame, you get an image showing the person with the background in black.
Now threshold the image at a suitable value.
Apply some erosion to remove small granular noise, then apply dilation.
Now find contours. Most probably there will be one contour: the person.
Find the centroid (or whatever point you prefer) of this contour to track the person.
Now suppose you don't have a background image: you can estimate one with the cvRunningAvg function, which computes a running average of the frames of the video you are tracking in. But as you can see, the first method is better if you can get a clean background image.
Here is an implementation of the above method using cvRunningAvg.
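A minimal Python/OpenCV sketch of the whole pipeline, with cv2.accumulateWeighted standing in for the old C-API cvRunningAvg (the camera index, learning rate, kernel size, and threshold are all illustrative):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                     # camera index 0, adjust as needed

    avg = None                                    # running-average background model
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)        # smooth to suppress noise

        if avg is None:
            avg = gray.astype(np.float32)
        cv2.accumulateWeighted(gray, avg, 0.05)           # modern equivalent of cvRunningAvg

        # Subtract background, threshold, clean up with erosion + dilation.
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
        _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        thresh = cv2.erode(thresh, None, iterations=2)
        thresh = cv2.dilate(thresh, None, iterations=2)

        # Find contours; the largest one is most probably the person.
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            c = max(contours, key=cv2.contourArea)
            m = cv2.moments(c)
            if m["m00"] > 0:
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)   # centroid to track

        cv2.imshow("tracking", frame)
        if cv2.waitKey(30) == 27:                 # Esc to quit
            break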
Case 2 - If the camera is not static
Here background subtraction won't give good results, since you can't get a fixed background.
OpenCV comes with a sample for people detection; use it.
This is the file: peopledetect.cpp
I also recommend you visit this Stack Overflow question, which deals with almost the same problem: How can I detect and track people using OpenCV?
One possible solution is to use feature points tracking algorithm.
Look at this book:
Laganiere Robert - OpenCV 2 Computer Vision Application Programming Cookbook - 2011
p. 266
The full algorithm is implemented in the book using OpenCV.
The above method (simple frame differencing followed by dilation and erosion) would work in the case of a simple, clean scene with just the motion of the person walking and absolutely no other motion or illumination changes. Note also that you are doing detection on every frame, as opposed to tracking. In this specific scenario, tracking might not be much harder either; for movement direction and speed, you can just run Lucas-Kanade on the difference images.
At the core of it, what you need is a person detector followed by a tracker. The tracker can be point based (Lucas-Kanade or Horn-Schunck), or use a Kalman filter or similar to track bounding boxes or blobs.
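As a sketch of the point-based option, here is Lucas-Kanade tracking in Python with OpenCV (the input file and parameter values are assumptions; in practice you would seed the corners inside the detector's bounding box):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("person.avi")          # hypothetical input file
    ok, old_frame = cap.read()
    old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

    # In practice, seed corners inside the person's bounding box; here we take
    # them over the whole frame for brevity.
    p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

    while p0 is not None and len(p0) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None,
                                                 winSize=(15, 15), maxLevel=2)
        good_new = p1[status.flatten() == 1]
        good_old = p0[status.flatten() == 1]

        # The mean displacement of the tracked points gives the person's
        # movement direction and speed between frames.
        if len(good_new):
            dx, dy = (good_new - good_old).reshape(-1, 2).mean(axis=0)
            print(f"motion: dx={dx:.1f}, dy={dy:.1f}")

        old_gray, p0 = gray, good_new.reshape(-1, 1, 2)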
A lot of vision problems are ill-posed, so some amount of structure/constraints helps to solve them considerably faster. A few questions to ask:
Is the camera moving? No: quite easy. Yes: much harder, and exactly what works depends on other conditions.
Is the scene constant except for the person?
Is the person front-facing or side-facing most of the time? Detect with Viola-Jones, or train a detector (AdaBoost with Haar or similar features) for side-facing faces; see the cascade sketch after this list.
How spatially accurate do you need it to be: will a bounding box do, or do you need a contour? Bounding box: just search (intelligently :)) in the neighbourhood with SAD (sum of absolute differences). Contour: tougher; use contour-based trackers.
Do you need the full "tracklet", or just the position of the person at each frame? What temporal accuracy?
What resolution are we speaking about here, since you need real time?
Is the scene sparse like those sequences, or would it be cluttered?
Is there any other motion in the sequence?
Offline or online?
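A minimal sketch of the Viola-Jones detection mentioned in the list above, using the Haar cascades that ship with OpenCV (Python and the input file name are assumptions):

    import cv2

    # OpenCV ships Haar cascades for frontal and profile (side-facing) faces.
    frontal = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    profile = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

    gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)  # hypothetical input file
    faces = frontal.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    if len(faces) == 0:
        faces = profile.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    print(faces)   # (x, y, w, h) boxes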
If you develop in .NET you can use the AForge.NET framework.
http://www.aforgenet.com/
I was a regular visitor of its forums, and I seem to remember plenty of people using it for tracking people.
I've also used the framework for other, unrelated purposes and can highly recommend it for its ease of use and powerful features.
