How to subtract a non-stationary background in real time - opencv

I know that OpenCV has some algorithms (here and here, for example) that handle stationary-background subtraction effectively, but I need to do this with a non-stationary background, i.e. with a hand-held camera.
I would appreciate it if someone could give me some tips.

Segmentation of a dynamic scene is a topic of ongoing research in computer vision. However, for a quick win, you might want to try this rather nice library. It builds on OpenCV and provides 37 different algorithms for you to try - perhaps one will work well for your use case. Good luck!
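If you want a quick experiment before pulling in a library, one common baseline for a hand-held camera is to compensate the camera motion first (e.g. estimate a homography between consecutive frames from feature matches) and then difference the aligned frames. A rough Python/OpenCV sketch; all parameter values are illustrative, and this is only a starting point, not the library's approach:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # assumption: any live source works
orb = cv2.ORB_create(500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Estimate the camera motion between frames from feature matches
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    if d1 is not None and d2 is not None:
        matches = matcher.match(d1, d2)
        if len(matches) >= 4:
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                # Warp the previous frame into the current frame's coordinates
                # so the static background lines up, then difference them
                h, w = gray.shape
                aligned = cv2.warpPerspective(prev_gray, H, (w, h))
                diff = cv2.absdiff(gray, aligned)
                _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
                cv2.imshow('moving objects', mask)

    prev_gray = gray
    if cv2.waitKey(1) == 27:           # Esc quits
        break
```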

Related

Real time dominant color detection on iOS camera input

I'm trying to implement an algorithm that detects the dominant color of the iPhone's camera feed in real time.
I already tried to implement some of the algorithms I found, but performance was poor.
I would like some advice and direction for research.
If I can't find anything I will have to try a parallel implementation with Accelerate, SIMD, Metal, or dispatches.
Any recommendations? I'm new to this stuff!
Thanks!
I've worked for the last 3 months on a Metal image-processing framework. I think this library may provide a ready-made solution and save you time: https://bitbucket.org/degrader/degradr-core-3.
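If you prototype before moving to Metal, the usual baseline is k-means over a heavily downsampled frame, taking the center of the largest cluster as the dominant color. A Python/OpenCV sketch (k, the downsample size, and the helper name are arbitrary choices):

```python
import cv2
import numpy as np

def dominant_color(frame_bgr, k=4, side=64):
    """Sketch: k-means over a heavily downsampled frame; downsampling
    is what makes this feasible in real time. Returns a (B, G, R) color."""
    small = cv2.resize(frame_bgr, (side, side), interpolation=cv2.INTER_AREA)
    samples = small.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    # Dominant color = center of the most populous cluster
    counts = np.bincount(labels.ravel())
    return centers[counts.argmax()].astype(np.uint8)
```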

Real Time Optical Flow

I'm using optical flow as a real-time obstacle detection and avoidance system for the visually impaired. I'm developing the application in C# and using Emgu CV for image processing. I use the Lucas-Kanade method and I'm pretty satisfied with the speed of the algorithm. I am using monocular vision, which makes it hard to compute the depth to each of the tracked features accurately and to alert the user accordingly. I plan on using an ultrasonic sensor to help with obstacle detection, since depth computation is hard with a monocular camera. Any suggestions on how I could get an accurate estimation of depth using the camera alone?
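For reference, here is a minimal sketch of the kind of tracking loop I mean, in Python/OpenCV for brevity rather than Emgu CV; parameter values are illustrative:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if pts is None or len(pts) < 10:
        # Re-seed features when too many have been lost
        pts = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7)
    else:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        # Per-feature flow magnitude: large flow usually means a close
        # object, but absolute depth is not recoverable without knowing
        # the camera's translation (monocular scale ambiguity)
        flow = np.linalg.norm((nxt - pts)[good], axis=-1).ravel()
        pts = nxt[good].reshape(-1, 1, 2)
    prev_gray = gray
```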
You might want to check out this paper: A Robust Visual Odometry and Precipice Detection System Using Consumer-grade Monocular Vision. They use a nice trick for detecting both obstacles and holes in the field of view.
Hate to give such a generic answer, but you'd be best off starting with a standard text on structure-from-motion to get an overview of techniques. A good one is Richard Szeliski's recent book available online (Chapter 7), and its references. After that, for your application you may want to look at recent work in SLAM - Oxford's Active Vision group have published some great work and Andrew Davison's group too.
More a comment on RobAu's answer below: 'structure from motion' might give better search results than '3d from video'.
Depth from one camera will only work if the camera is moving. You could look into some 3D-from-video approaches. It is a very hard problem, especially when the objects in the camera's field of view are moving as well.

Background subtraction in OpenCV / EmguCV with fixed background

I need some samples / source code for a background subtraction algorithm for use with fixed backgrounds. The background I use is a fixed-color background, and unfortunately all the samples I've seen so far work with dynamic backgrounds.
Thank you.
Keep in mind that you will suffer from a lot of noise with the subtraction technique. To avoid it, you could always use your best friend in fighting noise: blur, or GaussianBlur.
There's also a fascinating discussion of a statistical approach to this, called the 'codebook' method, in the O'Reilly book "Learning OpenCV."
Another technique for improving your results, provided your intended foreground objects are slow and/or your camera is fast, is to use many images and their differences, as opposed to just two. The programming for that should be easy enough, but as they say in most books: I leave it as an exercise for the reader ;).
What about simple frame subtraction? One frame is always the same - it's the background - and the other frame comes from the video stream. Convert both of them to grayscale and do an absdiff operation. Here's my video result (look at the center frame).
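A compact Python sketch of what I mean, including the blur suggested in the first answer; the background file name and thresholds are placeholders:

```python
import cv2

# Fixed background image captured beforehand (placeholder file name)
background = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)
background = cv2.GaussianBlur(background, (5, 5), 0)   # fight noise, as above

cap = cv2.VideoCapture(0)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(gray, background)               # the subtraction itself
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise
    cv2.imshow('foreground', mask)
    if cv2.waitKey(1) == 27:
        break
```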

single person tracking from video sequence

As a part of my thesis work, I need to build a program for human tracking from video or image sequence like the KTH or IXMAS dataset with the assumptions:
Illumination remains unchanged
Only one person appear in the scene
The program need to perform in real-time
I have searched a lot but still cannot find a good solution.
Please suggest a good method or an existing program that is suitable.
Case 1 - Camera static
If the camera is static, it is really simple to track one person.
You can apply a method called background subtraction.
Here, for better results, you need a bare image from the camera with no persons in it. That is the background. (It can also be done even if you don't have this background image, but it's better if you have one. I will explain at the end what to do if you don't.)
Now start capturing from the camera. Take the first frame, convert both it and the background image to grayscale, and smooth both images to reduce noise.
Subtract the background image from the frame.
If the frame has no change with respect to the background image (i.e. no person), you get a black image (of course there will be some noise, which we can remove). If there is a change, i.e. a person walked into the frame, you will get an image with the person and the background in black.
Now threshold the image at a suitable value.
Apply some erosion to remove small granular noise, then apply dilation.
Now find contours. Most probably there will be one contour: the person.
Find the centroid, or whatever measure you want, of this person to track.
Now suppose you don't have a background image: you can estimate one with the cvRunningAvg function, which computes a running average of the frames of the video you are tracking in. But as you can see, the first method is better, if you can get a background image.
Here is the implementation of the above method using cvRunningAvg.
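In the modern API, cvRunningAvg corresponds to cv2.accumulateWeighted. A minimal Python sketch of all the steps above; the learning rate and thresholds are placeholder values to tune, and the return signature of findContours assumes OpenCV 4:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
avg = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if avg is None:
        avg = gray.astype(np.float32)
    cv2.accumulateWeighted(gray, avg, 0.05)        # slowly learn the background

    diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(cv2.erode(mask, None, iterations=2), None, iterations=2)

    # OpenCV 4 return signature
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        person = max(contours, key=cv2.contourArea)  # most probably the person
        m = cv2.moments(person)
        if m['m00'] > 0:
            cx, cy = int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
            cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)  # centroid to track
    cv2.imshow('tracking', frame)
    if cv2.waitKey(1) == 27:
        break
```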
Case 2 - Camera not static
Here background subtraction won't give good results, since you can't get a fixed background.
OpenCV comes with a sample for people detection. Use it.
This is the file: peopledetect.cpp
I also recommend you visit this SO question, which deals with almost the same problem: How can I detect and track people using OpenCV?
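For reference, peopledetect.cpp is built around a HOG descriptor with OpenCV's pre-trained people-detection SVM. A minimal Python equivalent (the window stride and scale here are typical values, not mandated):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people at multiple scales; returns one box per detection
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('people', frame)
    if cv2.waitKey(1) == 27:
        break
```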
One possible solution is to use a feature-point tracking algorithm.
Look at this book:
Laganiere Robert - OpenCV 2 Computer Vision Application Programming Cookbook - 2011
p. 266
The full algorithm is already implemented in this book, using OpenCV.
The above method (simple frame differencing followed by dilation and erosion) would work in the case of a simple, clean scene with just the motion of the person walking and absolutely no other motion or illumination changes. Also, you are doing a detection every frame, as opposed to tracking. In this specific scenario, tracking might not be much more difficult either. For movement direction and speed, you can just run Lucas-Kanade on the difference images.
At the core of it, what you need is a person detector followed by a tracker. The tracker can be point-based (Lucas-Kanade or Horn-Schunck), or use a Kalman filter, or any of those kinds of tracking for bounding boxes or blobs.
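For illustration, a sketch of a constant-velocity Kalman filter over the detected centroid, using cv2.KalmanFilter; the noise covariances are placeholder values you would tune:

```python
import cv2
import numpy as np

# State (x, y, vx, vy), measurement (x, y); constant-velocity model
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # placeholder
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # placeholder

def track_step(measured_xy):
    """Call once per frame; pass None when the detector missed."""
    prediction = kf.predict()
    if measured_xy is not None:
        kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    return prediction[:2].ravel()    # smoothed (x, y)
```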
A lot of vision problems are ill-posed, so some amount of structure/constraints helps to solve them considerably faster. A few questions to ask would be these:
Is the camera moving? No: quite easy. Yes: much harder, and exactly what works depends on other conditions.
Is the scene constant except for the person?
Is the person front-facing or side-facing most of the time? Detect using Viola-Jones, or train a detector (AdaBoost with Haar or similar features) for side-facing faces.
How spatially accurate do you need it to be: will a bounding box do, or do you need a contour? Bounding box: just search (intelligently :)) in the neighbourhood with SAD (sum of absolute differences); see the sketch after this list. Contour: tougher; use contour-based trackers.
Do you need the "tracklet", or just the position of the person at each frame? What temporal accuracy?
What resolution are we speaking about here, since you need real time?
Is the scene sparse like those sequences, or would it be cluttered?
Is there any other motion in the sequence?
Offline or online?
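On the SAD point above: a sketch of a neighbourhood search for a bounding box, using cv2.matchTemplate with TM_SQDIFF as a stand-in for plain SAD; the function name and margin are illustrative:

```python
import cv2

def track_bbox(prev_frame, frame, bbox, margin=20):
    """Find the previous frame's box in the current frame by searching a
    window around its old position. TM_SQDIFF stands in for plain SAD."""
    x, y, w, h = bbox
    template = prev_frame[y:y + h, x:x + w]
    # Clamp the search window to the frame borders
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1 = min(frame.shape[1], x + w + margin)
    y1 = min(frame.shape[0], y + h + margin)
    window = frame[y0:y1, x0:x1]
    scores = cv2.matchTemplate(window, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(scores)   # SQDIFF: minimum = best match
    return (x0 + min_loc[0], y0 + min_loc[1], w, h)
```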
If you develop in .NET you can use the AForge.NET framework.
http://www.aforgenet.com/
I was a regular visitor of the forums and I seem to remember there are plenty of people using it for tracking people.
I've also used the framework for other non-related purposes and can say I highly recommend it for its ease of use and powerful features.

Robust motion detection in C#

Can anyone point me to a robust motion detection sample/implementation? I know EmguCV has a motion detection sample, but it's not very good; even small changes in light are falsely detected as motion. I don't need to track objects. I am looking for a way to detect motion in a video that will not be falsely triggered by changing light conditions.
Have a look at AForge. Everything you should need is there (though you'll need to spend some time putting it all together), and it has a robust community if you need specific help.
I concur with nizmahone. Use AForge.
Here is a link with some motion detection in C#:
http://www.codeproject.com/KB/audio-video/Motion_Detection.aspx
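For what it's worth, OpenCV's Gaussian-mixture background model (MOG2) adapts to gradual lighting changes and labels shadows separately, which directly addresses the false-trigger problem; Emgu CV wraps the same OpenCV algorithm for C#. A Python sketch with placeholder thresholds:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture('input.avi')            # placeholder source

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0                      # 127 marks shadow pixels; drop them
    if cv2.countNonZero(mask) > 0.01 * mask.size:   # tunable: >1% pixels moving
        print('motion detected')
```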
