I am trying to implement a simple background subtraction method for detecting moving objects in a particular scene. The objective is to segment a particular motion out of one video so that it can be used in another video.
The algorithm I am following is:
1. Take the first 25 frames of the video and average them to get a background model.
2. Find the standard deviation of those 25 frames and store the values in another image.
3. Calculate the absolute difference between each frame and the average background model, pixel by pixel.
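Roughly, the steps above look like this in code (a minimal sketch in Python with OpenCV; the file name is a placeholder and frames are converted to grayscale):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.avi")  # placeholder video file

    # Steps 1-2: mean and standard deviation of the first 25 frames
    frames = []
    for _ in range(25):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))

    stack = np.stack(frames)
    background = stack.mean(axis=0)   # background model
    std_dev = stack.std(axis=0)       # per-pixel standard deviation image

    # Step 3: absolute difference between each subsequent frame and the background
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        diff = cv2.absdiff(gray, background)
        cv2.imshow("difference", diff.astype(np.uint8))
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break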
The output I am getting is a kind of transparent motion highlighted in white (I think the absolute differencing is causing the transparency). I want to know whether my approach is right, considering that the next step will be to run a segmentation on this output. I also have no idea how to use the standard deviation image. Any help will be appreciated.
Please let me know if this is not the type of question I should post on Stack Overflow. In that case, any references or links to other sites would be helpful.
You said it looks transparent.
This is what you saw, right? See YouTube Video - Background Subtraction (approximate median)
The reason is that you used the median value of all frames to create the background.
What you see in white in your video is the difference between your foreground and your background (the average image). Median-filtered background subtraction is simple, but it is not a robust method.
You can try other background subtraction methods such as Gaussian Mixture Models (GMMs), Codebook, SOBS (Self-Organizing Background Subtraction), and ViBe.
See YouTube Video - Background Subtraction using Gaussian Mixture Models (GMMs)
You should take a look at this blog: http://mateuszstankiewicz.eu/?p=189
You will find the start of an answer there. Moreover, I think there is a specific module for video analysis in OpenCV.
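For example, the video module ships with ready-made background subtractors; a minimal sketch using the MOG2 (Gaussian mixture) subtractor from the Python bindings, with a placeholder input file, could look like this:

    import cv2

    cap = cv2.VideoCapture("input.avi")  # placeholder input video
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # fgmask is 255 for foreground, 127 for shadow pixels, 0 for background
        fgmask = subtractor.apply(frame)
        cv2.imshow("foreground mask", fgmask)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()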
Try these papers:
Improved adaptive Gaussian mixture model for background subtraction
Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction
I am performing motion detection using OpenCV. The challenge is that the camera is moving, so frame differencing cannot be used directly. So I am trying to separate the foreground from the background and then perform frame differencing on the foreground images.
The question is: how do I separate foreground and background in a video taken from a moving camera?
Any help would be appreciated!
Background subtraction is an important technique for generating a foreground mask, and it is used widely in applications.
Most background subtraction techniques assume that the camera is static. So far I have not seen any paper or example that works with a moving camera.
You may have a look at the OpenCV example here, which is very useful for background subtraction. In particular, the MOG2 and KNN algorithms are really good at finding shadows, which are a common problem in background subtraction. I suggest setting the history parameter of these algorithms very low (1 to 10); by doing this you may get some usable results, although I am not certain.
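As a sketch of that suggestion (parameter names follow the cv2 Python bindings; the very low history value is only a starting point to experiment with):

    import cv2

    # Very short history so the model adapts quickly despite the camera motion
    mog2 = cv2.createBackgroundSubtractorMOG2(history=5, detectShadows=True)
    knn = cv2.createBackgroundSubtractorKNN(history=5, detectShadows=True)

    # For each frame: fgmask = mog2.apply(frame)
    # (255 = foreground, 127 = shadow, 0 = background)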
The best, but most difficult, approach I can suggest is to use an AI model. If your desired objects are specific (like people, cars, etc.), you can use a model to detect those objects and subtract the rest of the frame. The most suitable algorithm for this problem is Mask R-CNN, which also produces a mask for each detected object. Here are some examples, followed by a rough sketch:
Reference 1
Reference 2
Reference 3
Reference 4
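As a rough sketch of that idea, here is one possible way to get a person mask with the pretrained Mask R-CNN shipped with torchvision (this is an assumption about tooling, not the implementation used in the references; the score threshold is a placeholder):

    import cv2
    import torch
    import torchvision
    from torchvision.transforms import functional as TF

    # Pretrained Mask R-CNN (torchvision >= 0.13; older versions use pretrained=True)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def person_mask(frame_bgr, score_thresh=0.7):
        """Return a boolean mask covering the people detected in a BGR frame."""
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            out = model([TF.to_tensor(rgb)])[0]
        mask = torch.zeros(rgb.shape[:2], dtype=torch.bool)
        for label, score, m in zip(out["labels"], out["scores"], out["masks"]):
            if label.item() == 1 and score.item() >= score_thresh:  # COCO class 1 = person
                mask |= (m[0] > 0.5)
        return mask.numpy()

Everything outside the returned mask can then be blacked out or ignored, which plays the role of the subtracted background.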
I am trying to subtract two images using the absdiff function to extract a moving object. It works well, but sometimes the background appears in front of the foreground.
This happens when the background and foreground colors are similar. Is there any solution to overcome this problem?
The description of the problem above may not be enough, so I have attached images at the following link.
Thanks.
You can use some pre-processing techniques such as edge detection and a contrast stretching algorithm, which will give you extra information for subtracting the images. Even if the colors are the same, a new object should have texture features such as edges; if the edges are preserved properly, then the image subtraction will pick up the object.
Process flow:
1. Run an edge detection algorithm.
2. Apply a contrast stretching algorithm (e.g. histogram stretching).
3. Overlay the detected edges on top of the contrast-stretched image.
4. Apply the image subtraction using OpenCV, as in the sketch below.
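A minimal sketch of that flow (the file names, Canny thresholds, and the simple min-max stretch are placeholder choices):

    import cv2

    def preprocess(gray):
        # Contrast stretching via min-max normalization
        stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
        # Edge detection, then lay the edges on top of the stretched image
        edges = cv2.Canny(stretched, 50, 150)
        return cv2.max(stretched, edges)

    background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(preprocess(frame), preprocess(background))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    cv2.imwrite("mask.png", mask)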
There isn't enough information to formulate a complete solution to your problem, but here are some tips I can offer:
First, prefilter the input and background images using a strong median (or Gaussian) filter. This will make your results much more robust to image noise and to confusion from minor, non-essential detail (like the horizontal lines of your background image). Unless you want to detect a single moving strand of hair, you don't need to process the raw pixels.
Next, take the advice offered in the comments and test all 3 color channels, as opposed to going straight to grayscale.
Then create a grayscale image from the max of the 3 absdiffs done on each channel.
Then perform your closing and opening procedure.
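A sketch of those steps in Python/OpenCV (the kernel sizes and threshold are placeholders to tune):

    import cv2
    import numpy as np

    def foreground_mask(frame, background, thresh=30, blur=5):
        # Prefilter both color images with a strong median filter to suppress noise
        f = cv2.medianBlur(frame, blur)
        b = cv2.medianBlur(background, blur)
        # Per-channel absolute difference, then the max over the 3 channels
        diff = cv2.absdiff(f, b)
        gray = np.max(diff, axis=2)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # Closing then opening to clean up the mask
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return mask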
I don't know your requirements, so I can't take them into account. If accuracy is of the utmost importance, I'd use the median filter on the input image over a Gaussian. If speed is an issue, I'd scale the input images down by at least half for processing, then scale the result back up. If the camera is in a fixed position and you have a pre-calibrated background, then the current naive difference method should work. If the system has to detect movement in a real-world environment over an extended period of time (moving shadows, plants, vehicles, weather, etc.), then a rolling average (or Gaussian) background model will work better. If the camera is moving, you will need to do a lot more processing, probably some optical flow and/or Fourier transform tests. All of these things need to be considered to provide the best solution for the application.
I am doing some work on tracking a person, using this dataset. Right now I am trying to extract the foreground using a background subtraction method, i.e. a mean filter.
My background looks like this,
and if I subtract my current frame like this,
then after the subtraction I get an image like this,
and after thresholding at 0.15 (about 38 on a 0-255 scale)
I get this mask.
If you look at this mask, you can see it splits the foreground into two pieces because the person is occluded by the chair. I don't know how to solve this problem. Any suggestions?
It's not a perfect solution, but maybe it will be enough for you: on the mask image, find all the contours, join them (contours are usually represented as vectors of points, so put all the contours into one vector), and then find the convex hull of the joined contour. If you are using OpenCV, use the convexHull function: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/hull/hull.html
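A rough sketch of that idea with the cv2 Python bindings (this assumes OpenCV 4, where findContours returns two values; mask is the binary mask from your subtraction step):

    import cv2
    import numpy as np

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        all_points = np.vstack(contours)       # join every contour into one point set
        hull = cv2.convexHull(all_points)      # convex hull of the joined contour
        joined = np.zeros_like(mask)
        cv2.drawContours(joined, [hull], -1, 255, thickness=cv2.FILLED)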
It's also not a perfect solution, but try reducing the number of frames used to build the background image in your background subtraction method; it may help. Alternatively, reinitialize the background model frequently.
If I understand you correctly, you are trying to do background subtraction using frame differences, like the mean filter you mentioned. But keep in mind that this only detects moving foreground, and providing the threshold manually is difficult. I suggest you instead try the Mixture of Gaussians method, which is more effective and is implemented in OpenCV.
To solve your particular problem of joining the separate parts, use dilation: http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=dilate#dilate
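For example (the kernel size is a placeholder; make it large enough to bridge the gap caused by the chair):

    import cv2
    import numpy as np

    # mask: the binary foreground mask from the subtraction step
    kernel = np.ones((15, 15), np.uint8)
    joined_mask = cv2.dilate(mask, kernel, iterations=1)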
As part of my thesis work, I need to build a program for human tracking from video or image sequences such as the KTH or IXMAS datasets, with the following assumptions:
Illumination remains unchanged
Only one person appears in the scene
The program needs to perform in real time
I have searched a lot but still cannot find a good solution.
Please suggest a good method or an existing program that is suitable.
Case 1 - Static camera
If the camera is static, it is really simple to track one person.
You can apply a method called background subtraction.
Here, for better results, you need a bare image from the camera with no person in it; this is the background. (It can also be done even if you don't have this background image, but it's better if you have it. I will explain at the end what to do if you have no background image.)
Now start capturing from the camera. Take the first frame, convert both it and the background to grayscale, and smooth both images to reduce noise.
Subtract the background image from the frame.
If the frame shows no change with respect to the background image (i.e. no person), you get a black image (of course there will be some noise, which can be removed). If there is a change, i.e. a person has walked into the frame, you will get an image with the person visible and the background black.
Now threshold the image at a suitable value.
Apply some erosion to remove small granular noise, then apply dilation.
Now find contours. Most probably there will be a single contour: the person.
Find the centroid (or whatever feature you want) of this person and track it.
Now suppose you don't have a background image. You can build one using the cvRunningAvg function, which computes a running average of the frames of the video you are tracking. But as you can see, the first method is better if you can get a background image.
Here is an implementation of the above method using cvRunningAvg.
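A minimal sketch of the same pipeline in the modern Python API, where cv2.accumulateWeighted plays the role of cvRunningAvg (the learning rate, threshold, and iteration counts are placeholders to tune, and OpenCV 4's two-value findContours is assumed):

    import cv2

    cap = cv2.VideoCapture(0)     # placeholder: default camera
    avg = None                    # running-average background (float32)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)       # smooth to reduce noise

        if avg is None:
            avg = gray.astype("float32")
        cv2.accumulateWeighted(gray, avg, 0.05)          # running average of frames
        background = cv2.convertScaleAbs(avg)

        # Subtract, threshold, erode/dilate, then find the person's contour
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.erode(mask, None, iterations=2)
        mask = cv2.dilate(mask, None, iterations=2)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            person = max(contours, key=cv2.contourArea)
            M = cv2.moments(person)
            if M["m00"] > 0:
                cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
                cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)   # centroid to track

        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break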
Case 2 - Moving camera
Here background subtraction won't give good results, since you can't get a fixed background.
OpenCV comes with a sample for people detection. Use it.
This is the file: peopledetect.cpp
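Roughly the same thing in the Python bindings, using the default HOG + linear SVM people detector (the file name and detection parameters are placeholders):

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.png")   # placeholder input image
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)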
I also recommend you visit this Stack Overflow question, which deals with almost the same problem: How can I detect and track people using OpenCV?
One possible solution is to use a feature point tracking algorithm.
Look at this book:
Laganiere Robert - OpenCV 2 Computer Vision Application Programming Cookbook - 2011
p. 266
The full algorithm is already implemented in this book, using OpenCV.
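For reference, a generic sketch of feature point tracking with OpenCV (this is not the book's code; the video file name and the detector/flow parameters are placeholders):

    import cv2

    cap = cv2.VideoCapture("video.avi")   # placeholder input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    # Pick good corner features to track on the first frame
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)

    while points is not None and len(points) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track the points with pyramidal Lucas-Kanade optical flow
        new_points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        good = new_points[status.flatten() == 1]
        for x, y in good.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
        cv2.imshow("tracks", frame)
        if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
            break
        prev_gray, points = gray, good.reshape(-1, 1, 2)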
The above method, simple frame differencing followed by dilation and erosion, would work in the case of a simple, clean scene with just the motion of the walking person and absolutely no other motion or illumination changes. Note that you are then doing a detection in every frame, as opposed to tracking. In this specific scenario, tracking might not be much more difficult either; for movement direction and speed, you can just run Lucas-Kanade on the difference images.
At the core of it, what you need is a person detector followed by a tracker. The tracker can be point-based (Lucas-Kanade or Horn-Schunck), or it can use a Kalman filter or a similar kind of tracking for bounding boxes or blobs.
A lot of vision problems are ill-posed, so some amount of structure or constraints helps to solve them considerably faster. A few questions to ask would be:
Is the camera moving? No: quite easy. Yes: much harder, and exactly what works depends on other conditions.
Is the scene constant except for the person?
Is the person front-facing or side-facing most of the time? Detect using Viola-Jones, or train a detector (AdaBoost with Haar or similar features) for side-facing faces.
How spatially accurate do you need it to be: will a bounding box do, or do you need a contour? Bounding box: just search (intelligently) in the neighbourhood with SAD (sum of absolute differences). Contour: tougher; use contour-based trackers.
Do you need the full tracklet, or just the position of the person at each frame? What temporal accuracy?
What resolution are we talking about, given that you need real time?
Is the scene sparse like those sequences, or would it be cluttered?
Is there any other motion in the sequence?
Offline or online?
If you develop in .NET, you can use the AForge.NET framework.
http://www.aforgenet.com/
I was a regular visitor of its forums, and I seem to remember plenty of people using it for tracking people.
I've also used the framework for other, unrelated purposes and can say I highly recommend it for its ease of use and powerful features.
I'm doing background subtraction using OpenCV. The problem is that the foreground object is not always detected correctly. To deal with this, I would like to use four or five images and take their average as the background image. How can I do that?
Perhaps go through all the images, and if the pixel in question stays within a certain range of colour variation across all the images, treat it as background.
The size of that range then determines how picky you are, and it depends on how confident you are in the stability and consistency of your camera.
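A small sketch of both ideas, averaging a handful of frames and flagging the pixels whose variation stays within a tolerance (the file names and tolerance are placeholders):

    import cv2
    import numpy as np

    # Placeholder: four or five frames of the (mostly) empty scene
    frames = [cv2.imread("bg_%d.png" % i) for i in range(5)]
    stack = np.stack(frames).astype(np.float32)          # shape: (N, H, W, 3)

    background = stack.mean(axis=0).astype(np.uint8)     # averaged background image
    cv2.imwrite("background.png", background)

    # Per-pixel colour variation across the frames; small variation = stable background
    variation = stack.max(axis=0) - stack.min(axis=0)
    stable = variation.max(axis=2) <= 20                 # placeholder tolerance
    cv2.imwrite("stable_pixels.png", (stable * 255).astype(np.uint8))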
You could try using the background detector included in OpenCV (under cvaux.h). It also has a blob detector if you want to find object blobs.
By combining blob information and optical flow information, you can usually find the foreground object.