How do I detect multiple people and track them in OpenCV/EmguCV?

I have been trying to detect multiple people in a small space and then track them.
Input: CCTV feed from a camera mounted in a small room.
Expected Output: Track and store the path that people take while moving from one end of the room to the other.
I tried to implement some of the basic methods, like background subtraction and pedestrian detection, but the results are not as desired.
In the results obtained with background subtraction, occlusion means the blob is not one single entity (the blob of one person is broken into multiple small blobs), so detecting it as a single person is very difficult.
Now consider the case when there are many people standing close to each other. In this case, detecting people using simple background subtraction is a complete disaster.
Is there a better way to detect multiple people?
Or is there a way to improve the results of background subtraction?
Also, what is a good way to track multiple people?

That's quite a hard problem and there is no out-of-the-box solution, so you might have to try different methods.
In the beginning you will want to make some assumptions, like a static camera position and that everything that's not background is a person or part of a person, maybe multiple persons. Persons can't appear in the middle of the image; they have to 'enter' it (and are detected on entering and tracked after detection).
Detection and tracking can both be difficult problems, so you might want to focus on one of them first. I would start with tracking and choose a probabilistic tracking method, since simple methods like tracking-by-detection probably can't handle overlap and multiple targets very well.
Tracking:
I would try a particle filter, like the one in http://www.irisa.fr/vista/Papers/2002/perez_hue_eccv02.pdf, which is capable of tracking multiple targets.
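For illustration, here is a minimal sketch of a bootstrap (sampling-importance-resampling) particle filter for a single 2D target, assuming OpenCV 3+. The noisy measurement below is a made-up placeholder; in practice the weights would come from an image likelihood, e.g. the colour histograms used in the paper above, and multi-target tracking needs more machinery on top.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { cv::Point2f pos; float weight; };

int main() {
    cv::RNG rng(12345);
    const int N = 200;  // number of particles
    std::vector<Particle> particles(
        N, Particle{cv::Point2f(320.f, 240.f), 1.0f / N});

    for (int frame = 0; frame < 100; ++frame) {
        // Placeholder measurement; replace with a real image likelihood.
        cv::Point2f z(320.f + 2.f * frame, 240.f);

        float wsum = 0.f;
        for (auto& p : particles) {
            // 1) Predict with a random-walk motion model.
            p.pos.x += (float)rng.gaussian(5.0);
            p.pos.y += (float)rng.gaussian(5.0);
            // 2) Weight by a Gaussian likelihood of the measurement.
            float d2 = (p.pos.x - z.x) * (p.pos.x - z.x) +
                       (p.pos.y - z.y) * (p.pos.y - z.y);
            p.weight = std::exp(-d2 / (2.f * 15.f * 15.f));
            wsum += p.weight;
        }

        // 3) Estimate: weighted mean of the particle cloud.
        cv::Point2f est(0.f, 0.f);
        for (const auto& p : particles) est += p.pos * (p.weight / wsum);
        std::printf("frame %d: (%.1f, %.1f)\n", frame, est.x, est.y);

        // 4) Resample so particles concentrate on likely states.
        std::vector<Particle> next(N);
        for (int i = 0; i < N; ++i) {
            float r = rng.uniform(0.f, wsum), acc = 0.f;
            cv::Point2f pick = particles.back().pos;  // fallback for rounding
            for (const auto& p : particles) {
                acc += p.weight;
                if (acc >= r) { pick = p.pos; break; }
            }
            next[i] = Particle{pick, 1.0f / N};
        }
        particles.swap(next);
    }
    return 0;
}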
Detection: There is a HoG person detector in OpenCV which works quite well for upright persons
HOGDescriptor hog;
hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
but it's good to know the approximate size of a person in the image and scale the image accordingly. You can do this after background subtraction by scaling the blobs or combinations of blobs, or you can use a calibration of your camera and scale image regions corresponding to 1.6 m to 2.0 m of real-world height to your HoG detector's window size. Otherwise you might have many misses and many false alarms.
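For completeness, a hedged usage sketch around the two lines above, assuming OpenCV 3+ ("video.avi" is a placeholder input); detectMultiScale scans the frame at multiple scales and returns person bounding boxes:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::VideoCapture cap("video.avi");
    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<cv::Rect> people;
        // The default detection window is 64x128; scale the frame so that
        // persons are roughly that size, otherwise expect misses/false alarms.
        hog.detectMultiScale(frame, people, 0.0, cv::Size(8, 8),
                             cv::Size(32, 32), 1.05, 2.0);
        for (const auto& r : people)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        cv::imshow("detections", frame);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}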
In the end you will have to work and research for some time to get things running, but don't expect early success or 100% hit rates ;)
I would create a sample video and work on that, manually masking entering people as detections, and implement the tracker with those detections.

Related

Vehicle Counting Based on BackgroundSubtractorMOG

I'm working on a project called ATCS (Automatic Traffic Controller System); it will modify traffic light duration based on the number of vehicles in front of the traffic light.
I used OpenCV and BackgroundSubtractorMOG to detect vehicles. It works well while vehicles are moving, but when the red light turns on, the stopped vehicles become uncountable, which of course breaks my software.
So far I know that BackgroundSubtractorMOG is the best solution because this system has to work in many variations of weather, light intensity, etc. It compares the current frame with previous frames, so moving objects are detected as foreground (CMIIW). But what about a vehicle that was moving and has stopped because the red light forces the driver to stop? Will it still be detected as a foreground object?
So I want to ask for the most suitable algorithm: how do I count vehicles while they are moving, and still detect them as vehicles when they stop moving because of the red light?
Thank you :)
How do you update your background? Because of changes in the lighting conditions (clouds, day, night, dusk, weather) you cannot keep it static; however, the presence of a stopped car could still be detected if you still know the appearance of the background, that is, the appearance of the road when the car is not there.
If you have an area in the image where cars never pass, you can use it to understand whether the lighting conditions are changing.
What is your viewing angle on the vehicles? There is a chance that by combining a Viola-Jones detector with a KLT tracker you obtain better and more general results.
If background subtraction works for you (as you said), I would try to add another background model. Then you can perform background subtraction twice: once against the previous image (which catches all moving objects) and once against a long-term background model, which will detect all stopped vehicles (and moving ones too) but might have some disadvantages under changing lighting conditions.
You could have a look at ViBe or Gaussian mixture models for creating those background models.
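A minimal sketch of this dual-model idea with two MOG2 instances, assuming OpenCV 3+; the input file, history lengths, and learning rates are illustrative, not tuned values:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("traffic.avi");  // placeholder input
    auto shortTerm = cv::createBackgroundSubtractorMOG2(100, 16, false);
    auto longTerm  = cv::createBackgroundSubtractorMOG2(2000, 16, false);

    cv::Mat frame, fgShort, fgLong, stopped;
    while (cap.read(frame)) {
        shortTerm->apply(frame, fgShort, 0.01);    // adapts quickly: moving objects
        longTerm->apply(frame, fgLong, 0.0001);    // adapts very slowly
        // Stopped vehicles: in the long-term foreground but no longer moving.
        cv::bitwise_and(fgLong, ~fgShort, stopped);
        cv::imshow("moving", fgShort);
        cv::imshow("stopped", stopped);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}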
The other way would be to introduce some tracking mechanism, as Antonio already mentioned. Once a vehicle is detected by background subtraction (only moving objects will appear in the foreground), you start a track, and you will know the vehicle is there even if it isn't detected again (because it doesn't move).
So you need a tracking method that isn't "tracking by detection" but some other method. I would recommend a Kalman filter, particle filtering, or maybe mean-shift tracking.
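For the Kalman option, here is a hedged sketch with cv::KalmanFilter for a single blob centroid under a constant-velocity model; the detection flag and centroid are placeholders that would come from your background subtraction:

#include <opencv2/opencv.hpp>

int main() {
    // State: [x, y, vx, vy], measurement: [x, y].
    cv::KalmanFilter kf(4, 2, 0);
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);
    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

    for (int frame = 0; frame < 100; ++frame) {
        // predict() gives a position estimate even when nothing is detected.
        cv::Mat prediction = kf.predict();

        bool detectedThisFrame = true;  // would come from background subtraction
        if (detectedThisFrame) {
            // Placeholder centroid; use the detected blob's centroid here.
            cv::Mat z = (cv::Mat_<float>(2, 1) << 320.f, 240.f);
            kf.correct(z);
        }
        // If the vehicle stops and vanishes from the foreground mask,
        // skip correct() and keep the track alive on predictions alone.
    }
    return 0;
}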
EDIT: one method often used for vehicle detection, which is similar to background subtraction techniques, is Local Binary Patterns (LBP).
I would suggest using the latent SVM detector with the "car" and "bus" models to detect vehicles and then applying simple tracking to the bounding boxes you get.
Latent SVM detector:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html

Industrial grade foreground/background separation in image processing

My question may not be reasonable, but I would like to know what real industrial-grade foreground/background separation looks like in image processing. My application needs to extract objects from the background. It is easy to track a moving foreground object across successive images, but for a stationary image (just one image), what would be more efficient than thresholding?
Thanks
There are actually many other methods that you can try. Thresholding is a pretty good method to extract objects in my opinion, but there are several alternatives to consider depending on the scenario:
1) If the background is fixed, you can simply try background subtraction. Whatever remains is the odd one out; in this case, the object you are trying to extract.
2) If the object you are trying to extract is something specific regardless of background, you can use feature extraction and classifiers (Haar cascades, for instance).
3) If the object has a specific shape, like a circle or rectangle, you can use a Hough transform alongside Canny edge detection, for instance, or other shape detection methods; a lot are available on the web.
4) If the object you are trying to extract has a specific colour, you can take a look at the HSV or LAB colour spaces, both of which are much better than RGB for this (see the sketch after this list). You can also try pre-processing methods like watershed segmentation, among others.
And many others depending on the scenario. Hope that helps.
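A minimal sketch of point 4, assuming OpenCV 3+; the file name and the hue/saturation/value range below (a reddish object) are arbitrary examples:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("scene.jpg");   // placeholder image
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Keep pixels whose hue/saturation/value fall inside the given range.
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), mask);
    cv::Mat object;
    bgr.copyTo(object, mask);                // extracted foreground object
    cv::imwrite("object.png", object);
    return 0;
}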
I don't know if this is an answer to your question, but if the camera is stationary you can look at optical flow. It tracks moving objects in a video stream by looking at the changes between images, and from that it can separate background and foreground: http://www.mathworks.com/discovery/optical-flow.html
Hope it helps you.
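For a static camera, a hedged sketch with dense Farneback optical flow, assuming OpenCV 3+: pixels with large flow magnitude are treated as moving foreground ("video.avi" and the magnitude threshold are placeholders):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("video.avi");       // placeholder input
    cv::Mat prev, curr, gray, flow;
    cap.read(prev);
    cv::cvtColor(prev, prev, cv::COLOR_BGR2GRAY);

    while (cap.read(curr)) {
        cv::cvtColor(curr, gray, cv::COLOR_BGR2GRAY);
        cv::calcOpticalFlowFarneback(prev, gray, flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);
        // Split into x/y components and threshold the flow magnitude.
        cv::Mat comps[2], mag, ang, moving;
        cv::split(flow, comps);
        cv::cartToPolar(comps[0], comps[1], mag, ang);
        cv::threshold(mag, moving, 1.0, 255, cv::THRESH_BINARY);
        moving.convertTo(moving, CV_8U);
        cv::imshow("moving pixels", moving);
        if (cv::waitKey(30) == 27) break;
        gray.copyTo(prev);
    }
    return 0;
}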

How to Handle Occlusion and Fragmentation

I am trying to implement a people counting system using computer vision for a uni project. Currently, my method is:
Background subtraction using MOG2
Morphological filter to remove noise
Track blobs
Count blobs passing a specified region (a line)
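For concreteness, a rough sketch of these four steps, assuming OpenCV 3+; the input file, counting line, and minimum blob area are arbitrary placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("entrance.avi");    // placeholder input
    auto mog2 = cv::createBackgroundSubtractorMOG2();
    const int countLineY = 240;              // hypothetical counting line

    cv::Mat frame, fg;
    while (cap.read(frame)) {
        mog2->apply(frame, fg);                                    // step 1
        cv::threshold(fg, fg, 200, 255, cv::THRESH_BINARY);        // drop shadows
        cv::morphologyEx(fg, fg, cv::MORPH_OPEN,                   // step 2
                         cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                                   cv::Size(5, 5)));
        std::vector<std::vector<cv::Point>> contours;              // step 3
        cv::findContours(fg, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            if (cv::contourArea(c) < 500) continue;  // ignore small noise
            cv::Rect box = cv::boundingRect(c);
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
            // Step 4 would match this blob to a track and count it when
            // its centroid crosses countLineY.
        }
        cv::imshow("blobs", frame);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}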
The problem is that if people come as a group, my method counts them as one person. From my readings, I believe this is what is called occlusion. Another problem is that when people look similar to the background (wearing dark clothing and passing a black pillar/wall), the blob is split even though it is actually one person.
From what I read, I should implement a detector + tracker (e.g. detect humans using HOG). But my detection results are poor (e.g. 50% false positives with a 50% hit rate, using both the OpenCV human detector and my own trained detector), so I am not convinced I should use the detector as the basis for tracking. Thanks for your answers and for taking the time to read this post!
Tracking people in video surveillance sequences is still an open problem in the research community. However, particle filters (PF) (aka sequential Monte Carlo) give good results under occlusion and in complex scenes. You should read this. There are also extra links to example source code after the bibliography.
An advantage of using a PF is the gain in computational time compared to tracking by detection alone.
If you go this way, feel free to ask for better understanding about the maths behind the PF.
There is no single "good" answer to this, as handling occlusion (and background subtraction) are still open problems! But there are several pointers that might help you along with your project.
You want to detect if a "blob" is one person or a group of people. There are several things you could do to handle this.
Use multiple cameras (it's unlikely that a group of people is detected as a single blob from all angles)
Try to detect parts of the human body. If you detect two heads on a single blob, there are multiple people. Same can be said for 3 legs, 5 shoulders, etc.
For tracking a "lost" person (one walking behind another object), you can extrapolate their position. You know that a person can only move so much between frames. By taking this into account, you know that it's impossible for a person to be detected in the middle of your image and then suddenly disappear. After several frames of not seeing that person, you can discard the observation, as the person might have had enough time to move away.
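A tiny sketch of that extrapolation logic; all names and thresholds here are made up for illustration:

#include <opencv2/opencv.hpp>

struct Track {
    cv::Point2f pos, vel;   // last known position and per-frame velocity
    int missed = 0;         // frames since the blob was last seen
};

// Call once per frame while the person's blob is not found.
// Returns false once the track should be discarded.
bool predictOrDrop(Track& t, int maxMissed = 15) {
    t.pos += t.vel;                  // a person can only move so much per frame
    return ++t.missed <= maxMissed;  // enough time to leave: drop the track
}

int main() {
    Track t{cv::Point2f(100, 100), cv::Point2f(2, 0)};
    while (predictOrDrop(t)) { /* use t.pos as the occluded person's estimate */ }
    return 0;
}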

single person tracking from video sequence

As a part of my thesis work, I need to build a program for human tracking from video or image sequence like the KTH or IXMAS dataset with the assumptions:
Illumination remains unchanged
Only one person appear in the scene
The program needs to perform in real time
I have searched a lot but still cannot find a good solution.
Please suggest a good method or an existing program that is suitable.
Case 1 - If the camera is static
If the camera is static, it is really simple to track one person.
You can apply a method called background subtraction.
Here, for better results, you need a bare image from the camera with no persons in it; this is the background. (It can also be done even if you don't have this background image, but results are better with it. I will explain at the end what to do if you have no background image.)
Now start capturing from the camera. Take the first frame, convert both to grayscale, and smooth both images to avoid noise.
Subtract the background image from the frame.
If the frame has no change with respect to the background image (i.e. no person), you get a black image (of course there will be some noise, which we can remove). If there is a change, i.e. a person walked into the frame, you will get an image with the person in it and the background black.
Now threshold the image with a suitable value.
Apply some erosion to remove small granular noise, then apply dilation.
Now find contours. Most probably there will be one contour: the person.
Find the centroid, or whatever feature you want to use to track this person.
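A sketch of the above steps, assuming OpenCV 3+ and a previously captured empty-room image ("background.jpg" and the threshold value are placeholders):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main() {
    cv::Mat bg = cv::imread("background.jpg", cv::IMREAD_GRAYSCALE);
    cv::GaussianBlur(bg, bg, cv::Size(5, 5), 0);   // smooth the background

    cv::VideoCapture cap(0);
    cv::Mat frame, gray, diff, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);  // smooth noise
        cv::absdiff(gray, bg, diff);                      // subtract background
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);
        cv::erode(mask, mask, cv::Mat());                 // remove small grain
        cv::dilate(mask, mask, cv::Mat());                // restore blob size
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        if (!contours.empty()) {
            // The largest contour is most probably the person; take its centroid.
            auto it = std::max_element(contours.begin(), contours.end(),
                [](const std::vector<cv::Point>& a,
                   const std::vector<cv::Point>& b) {
                    return cv::contourArea(a) < cv::contourArea(b);
                });
            cv::Moments m = cv::moments(*it);
            if (m.m00 > 0)
                cv::circle(frame, cv::Point(int(m.m10 / m.m00),
                                            int(m.m01 / m.m00)),
                           4, cv::Scalar(0, 0, 255), -1);
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}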
Now suppose you don't have a background image. You can estimate one by keeping a running average of the frames from the video you use for tracking (cvRunningAvg in the old C API). But as you can obviously understand, the first method is better if you can get a real background image.
Here is the idea of the above method using a running average.
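A minimal sketch, assuming OpenCV 3+, using cv::accumulateWeighted (the C++ counterpart of the old cvRunningAvg); the 0.01 learning rate is illustrative:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, avg, bg, diff;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (avg.empty()) gray.convertTo(avg, CV_32F);
        cv::accumulateWeighted(gray, avg, 0.01);  // slow running average
        avg.convertTo(bg, CV_8U);                 // current background estimate
        cv::absdiff(gray, bg, diff);              // then proceed as in Case 1
        cv::imshow("difference", diff);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}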
Case 2 - If the camera is not static
Here background subtraction won't give good results, since you can't get a fixed background.
OpenCV comes with a sample for people detection. Use it.
This is the file: peopledetect.cpp
I also recommend you visit this SO question, which deals with almost the same problem: How can I detect and track people using OpenCV?
One possible solution is to use a feature-point tracking algorithm.
Look at this book:
Laganiere, Robert - OpenCV 2 Computer Vision Application Programming Cookbook - 2011, p. 266.
The full algorithm is already implemented in this book, using OpenCV.
The above method, simple frame differencing followed by dilation and erosion, would work in the case of a simple, clean scene with just the motion of the walking person and absolutely no other motion or illumination changes. Also, you are doing a detection every frame, as opposed to tracking. In this specific scenario, tracking might not be much more difficult either: for movement direction and speed, you can just run Lucas-Kanade on the difference images.
At the core of it, what you need is a person detector followed by a tracker. The tracker can be point-based (Lucas-Kanade or Horn-Schunck), or use a Kalman filter, or any of those kinds of tracking for bounding boxes or blobs; a Lucas-Kanade sketch follows.
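A hedged sketch of pyramidal Lucas-Kanade point tracking with cv::calcOpticalFlowPyrLK, assuming OpenCV 3+ ("video.avi" is a placeholder); in practice you would seed the points inside the detector's bounding box rather than over the whole frame:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("video.avi");       // placeholder input
    cv::Mat prev, curr;
    cap.read(prev);
    cv::cvtColor(prev, prev, cv::COLOR_BGR2GRAY);

    std::vector<cv::Point2f> pts;
    cv::goodFeaturesToTrack(prev, pts, 100, 0.01, 10);  // seed points

    while (cap.read(curr)) {
        cv::cvtColor(curr, curr, cv::COLOR_BGR2GRAY);
        std::vector<cv::Point2f> next;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prev, curr, pts, next, status, err);
        // Keep only the points that were tracked successfully.
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < next.size(); ++i)
            if (status[i]) kept.push_back(next[i]);
        pts.swap(kept);
        if (pts.empty()) break;              // lost the target
        curr.copyTo(prev);
    }
    return 0;
}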
A lot of vision problems are ill-posed, so some amount of structure/constraints helps to solve them considerably faster. A few questions to ask would be these:
Is the camera moving? No: quite easy. Yes: much harder; exactly what works depends on other conditions.
Is the scene constant except for the person?
Is the person front-facing / side-facing most of the time? Detect using Viola-Jones, or train a detector (AdaBoost with Haar or similar features) for side-facing faces.
How spatially accurate do you need it to be: will a bounding box do, or do you need a contour? Bounding box: just search (intelligently :)) in the neighbourhood with SAD (sum of absolute differences). Contour: tougher; use contour-based trackers.
Do you need the "tracklet", or just the position of the person in each frame? What temporal accuracy?
What resolution are we speaking about here, since you need real time?
Is the scene sparse, like the sample sequences, or will it be cluttered?
Is there any other motion in the sequence?
Offline or online?
If you develop in .NET you can use the AForge.NET framework.
http://www.aforgenet.com/
I was a regular visitor of the forums and I seem to remember there are plenty of people using it for tracking people.
I've also used the framework for other non-related purposes and can say I highly recommend it for its ease of use and powerful features.

What is the best method for object detection in low-resolution moving video?

I'm looking for the fastest and most efficient method of detecting an object in a moving video. Things to note about this video: it is very grainy and low resolution; also, both the background and foreground are moving simultaneously.
Note: I'm trying to detect a moving truck on a road in a moving video.
Methods I've tried:
Training a Haar cascade - I've attempted training classifiers to identify the object by cropping multiple images of the desired object. This produced either many false detections or no detections at all (the desired object was never detected). I used about 100 positive images and 4000 negatives.
SIFT and SURF keypoints - When attempting to use either of these feature-based methods, I discovered that the object I wanted to detect was too low in resolution, so there were not enough features to match for an accurate detection. (The desired object was never detected.)
Template matching - This is probably the best method I've tried. It's the most accurate, although the most hacky of them all. I can detect the object for one specific video using a template cropped from that video. However, there is no guaranteed accuracy, because all that is known is the best match for each frame; no analysis is done on how well the template actually matches the frame. Basically, it only works if the object is always in the video; otherwise it will create a false detection.
So those are the big three methods I've tried, and all have failed. What would work best is something like template matching but with scale and rotation invariance (which led me to try SIFT/SURF), but I have no idea how to modify the template matching function.
Does anyone have any suggestions how to best accomplish this task?
Apply optical flow to the image and then segment it based on the flow field. Background flow is very different from "object" flow (which mainly diverges or converges depending on whether the object is moving towards or away from you, with some lateral component too).
Here's an oldish project which worked this way:
http://users.fmrib.ox.ac.uk/~steve/asset/index.html
This vehicle detection paper uses a Gabor filter bank for low-level detection and then uses the responses to create the feature space in which it trains an SVM classifier.
The technique seems to work well and is at least scale invariant. I am not sure about rotation, though.
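For illustration, a small Gabor bank built with cv::getGaborKernel, assuming OpenCV 3+; the file name and kernel parameters are illustrative, not taken from the paper, and the responses would be the feature channels fed to the SVM:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    std::vector<cv::Mat> responses;
    for (int k = 0; k < 4; ++k) {            // 4 orientations
        double theta = k * CV_PI / 4.0;
        cv::Mat kernel = cv::getGaborKernel(cv::Size(21, 21), 4.0, theta,
                                            10.0, 0.5, 0.0, CV_32F);
        cv::Mat response;
        cv::filter2D(img, response, CV_32F, kernel);
        responses.push_back(response);       // one feature channel per filter
    }
    return 0;
}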
Not knowing your application, my initial impression is normalized cross-correlation, especially since I remember seeing a purely optical cross-correlator that had vehicle tracking as the example application (tracking a vehicle as it passes using only optical components and an image of the side of the vehicle; I wish I could find the link). This is similar (if not identical) to "template matching", which you say kind of works, but it won't work if the images are rotated, as you know.
However, there is a related method based on log-polar coordinates that works regardless of rotation, scale, shear, and translation.
I imagine this would also let you detect that the object has left the scene, since the maximum correlation will decrease.
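A hedged sketch of normalized cross-correlation via cv::matchTemplate, assuming OpenCV 3+; the file names are placeholders, and thresholding the normalized score (0.7 is arbitrary) is what lets you declare the object absent when the maximum correlation drops:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("truck_template.png", cv::IMREAD_GRAYSCALE);

    cv::Mat result;
    cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    if (maxVal > 0.7) {      // object present with reasonable confidence
        cv::rectangle(frame, cv::Rect(maxLoc, templ.size()),
                      cv::Scalar(255), 2);
    }
    // Otherwise the score is too low: the object probably left the scene.
    return 0;
}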
How low a resolution are we talking about? Could you also elaborate on the object? Is it a specific color? Does it have a pattern? The answers affect what you should be using.
Also, I might be reading your template matching statement wrong, but it sounds like you are overtraining it (by testing on the same video you extracted the template from?).
A Haar cascade is going to require significant training data on your part, and will cope poorly with any changes in orientation.
Your best bet might be to combine template matching with an algorithm similar to CamShift in OpenCV (5.7 MB PDF), along with a probabilistic model (you'll have to figure this one out) of whether the truck is still in the image.
