Can anyone point me to a robust motion detection sample/implementation? I know EMGU has a motion detection sample, but it's not very good; even small changes in light are falsely detected as motion. I don't need to track objects. I am looking for a way to detect motion in a video that will not be falsely triggered by changing light conditions.
Have a look at AForge. Everything you should need is there (though you'll need to spend some time putting it all together), and it has a robust community if you need specific help.
I concur with nizmahone. Use AForge:
Here is a link with some motion detection in C#:
http://www.codeproject.com/KB/audio-video/Motion_Detection.aspx
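To give a sense of how the AForge pieces fit together, here is a minimal sketch. SimpleBackgroundModelingDetector keeps an adaptive background model, which absorbs gradual lighting changes instead of flagging them as motion (the failure mode described in the question). The frame source wiring and the 2% threshold are assumptions to tune:

```csharp
using System;
using System.Drawing;
using AForge.Vision.Motion;

class MotionExample
{
    // Background-modeling detector adapts to gradual changes (e.g. lighting);
    // true = suppress noise. MotionAreaHighlighting marks the moving regions.
    static readonly MotionDetector detector = new MotionDetector(
        new SimpleBackgroundModelingDetector(true),
        new MotionAreaHighlighting());

    // Call this for every frame your video source delivers.
    static void OnNewFrame(Bitmap frame)
    {
        // ProcessFrame returns the fraction of the frame (0..1) seen as moving.
        float motionLevel = detector.ProcessFrame(frame);

        // Require a couple of percent of the frame to change before reacting;
        // 0.02 is just a starting value to tune for your scene.
        if (motionLevel > 0.02f)
            Console.WriteLine("Motion detected: {0:P1} of the frame", motionLevel);
    }
}
```

Swapping in TwoFramesDifferenceDetector instead would be more sensitive to lighting, which is exactly the problem described above, so the background-modeling detector is the one to start with.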
I'm trying to figure out how to detect whether a human that I've identified in a video is speaking. I'm using some of the multi-person multi-camera tracking code posted here to detect individuals and I want to determine whether someone identified is speaking at any given time. Is anyone aware of good CV projects that might be able to do this? I've trawled around the action recognition literature a bit but haven't found anything that seems to directly address this. Detection of speaking needs to be done only with video.
There is an implementation of face pose estimation in an open source library.
As you can see from this figure, there are lines drawn around the lips. By digging into the source code of the example you can track the movement of the lips; if you run the example in your own environment, you will see that the lines covering the lips move along with them.
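To turn that lip tracking into an actual speaking/not-speaking signal, one simple heuristic is to measure how much the mouth opening oscillates over a short window: a talking mouth opens and closes repeatedly, while a merely open one holds still. A rough sketch in C# (the landmark points are assumed to come from whatever lip tracker you use; the window size and threshold are assumptions to tune):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

class SpeakingDetector
{
    private readonly Queue<double> openings = new Queue<double>();
    private const int Window = 30;                 // ~1 second at 30 fps (assumption)
    private const double VarianceThreshold = 4.0;  // tune on your own footage

    // topLip / bottomLip are landmark points you extracted per frame
    // (e.g. from the lip contour in the pose-estimation example) - assumed input.
    public bool Update(PointF topLip, PointF bottomLip)
    {
        double opening = Math.Abs(bottomLip.Y - topLip.Y);
        openings.Enqueue(opening);
        if (openings.Count > Window) openings.Dequeue();

        // Speaking shows up as oscillation of the mouth opening over time,
        // whereas a static (open or closed) mouth has near-zero variance.
        double mean = openings.Average();
        double variance = openings.Average(o => (o - mean) * (o - mean));
        return variance > VarianceThreshold;
    }
}
```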
I know there are some OpenCV algorithms (here and here, for example) that handle background subtraction effectively for a stationary background. But I need to do this with a non-stationary background, i.e. using a hand-held camera.
I would appreciate it if someone could give me some tips.
Segmentation of a dynamic scene is a topic of ongoing research in CV... However, for a quick win, you might want to try this rather nice library. It builds on OpenCV and provides 37 different algorithms for you to try; perhaps one will work well for your use-case. Good luck!
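If none of the ready-made subtractors cope with the camera shake, another angle is to compensate the camera motion first and then difference the aligned frames. A rough sketch using Emgu CV (to stay consistent with the C# used elsewhere on this page; API names follow recent Emgu CV 4.x, and the feature counts and thresholds are assumptions):

```csharp
using System;
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static class StabilizedDiff
{
    public static Mat MotionMask(Image<Gray, byte> prev, Image<Gray, byte> curr)
    {
        // 1. Find trackable corners in the previous frame.
        var corners = new VectorOfPointF();
        CvInvoke.GoodFeaturesToTrack(prev, corners, 200, 0.01, 10);
        PointF[] prevPts = corners.ToArray();

        // 2. Track them into the current frame with pyramidal Lucas-Kanade.
        CvInvoke.CalcOpticalFlowPyrLK(prev, curr, prevPts, new Size(21, 21), 3,
            new MCvTermCriteria(30, 0.01),
            out PointF[] currPts, out byte[] status, out float[] err);

        PointF[] src = prevPts.Where((p, i) => status[i] == 1).ToArray();
        PointF[] dst = currPts.Where((p, i) => status[i] == 1).ToArray();

        // 3. RANSAC homography = the dominant (camera) motion between frames.
        Mat h = CvInvoke.FindHomography(src, dst, RobustEstimationAlgorithm.Ransac, 3);

        // 4. Warp the previous frame into the current frame's coordinates so the
        //    background lines up; whatever still differs is real scene motion.
        var warped = new Mat();
        CvInvoke.WarpPerspective(prev, warped, h, curr.Size);

        var mask = new Mat();
        CvInvoke.AbsDiff(warped, curr, mask);
        CvInvoke.Threshold(mask, mask, 25, 255, ThresholdType.Binary);
        return mask;
    }
}
```

Note this only compensates shake well when the scene is roughly planar or distant; strong parallax will leak into the mask, which is part of why this is still a research problem.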
I am working on an iOS app that should trigger an event when the camera detects some change in the image, i.e. motion in the image. Here I am not asking about face recognition or motion of a particular colored image; I got plenty of results for OpenCV when I searched. I also found that this can be achieved using the gyroscope and accelerometer together, but how?
I am a beginner in iOS. So my question is: is there any framework or easy way to detect motion, or motion sensing by camera, and how do I achieve it?
For example, if I move my hand in front of the camera, the app should show some message or alert.
And please give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
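For anyone who wants the algorithm rather than the GPU implementation, the same frame-differencing-with-low-pass-filter idea looks roughly like this on the CPU (sketched with Emgu CV in C# to match the other threads on this page, not GPUImage's actual shader code; the learning rate and threshold are assumptions):

```csharp
using System;
using Emgu.CV;
using Emgu.CV.Structure;

class FrameDiffDetector
{
    private Image<Gray, float> background;   // low-pass filtered frame history

    // Returns the fraction of pixels (0..1) that changed versus the history.
    public double ProcessFrame(Image<Gray, byte> gray)
    {
        if (background == null)
            background = gray.Convert<Gray, float>();

        // Low-pass filter: the running background slowly follows the incoming
        // frames, so slow lighting drift is absorbed while fast motion stands out.
        CvInvoke.AccumulateWeighted(gray, background, 0.05);

        using (Image<Gray, byte> bg8 = background.Convert<Gray, byte>())
        using (Image<Gray, byte> diff = gray.AbsDiff(bg8))
        using (Image<Gray, byte> mask = diff.ThresholdBinary(new Gray(25), new Gray(255)))
        {
            return CvInvoke.CountNonZero(mask) / (double)(gray.Width * gray.Height);
        }
    }
}
```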
I don't exactly understand what you mean here:
Here I am not asking about face recognition or motion of a particular
colored image; I got plenty of results for OpenCV when I searched
But I would suggest going with OpenCV, since you can use OpenCV on iOS. Here is a good link that helps you set up OpenCV on iOS.
There are a lot of OpenCV motion detection examples online, and here is one of them, which you can make use of.
You need to convert the UIImage (the image type in iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can do the conversion using this link or this one.
I'm using optical flow as a real-time obstacle detection and avoidance system for the visually impaired. I'm developing the application in C# and using Emgu CV for image processing. I use the Lucas-Kanade method and I'm pretty satisfied with the speed of the algorithm. I am using monocular vision, which makes it hard to accurately compute the depth to each of the tracked features and to alert the user accordingly. I plan on using an ultrasonic sensor to help with the obstacle detection, since depth computation is hard with a monocular camera. Any suggestions on how I could get an accurate estimate of depth using the camera alone?
You might want to check out this paper: A Robust Visual Odometry and Precipice Detection System Using Consumer-grade Monocular Vision. They use a nice trick for detecting both obstacles and holes in the field of view.
Hate to give such a generic answer, but you'd be best off starting with a standard text on structure-from-motion to get an overview of techniques. A good one is Richard Szeliski's recent book available online (Chapter 7), and its references. After that, for your application you may want to look at recent work in SLAM - Oxford's Active Vision group have published some great work and Andrew Davison's group too.
More a comment on RobAu's answer below: 'structure from motion' might give better search results than '3d from video'.
Depth from one camera will only work if you have movement of the camera. You could look into some 3D-from-video approaches. It is a very hard problem, especially when the objects in the field of view of the camera are moving as well.
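One concrete way to get actionable depth-like information from camera motion alone: monocular reconstruction is only recoverable up to scale, but time-to-contact is not, because it falls out of the image-space expansion rate of whatever you are tracking. A rough sketch that reuses points already tracked by Lucas-Kanade (the input arrays are assumed to come from your existing Emgu CV pipeline):

```csharp
using System;
using System.Drawing;
using System.Linq;

static class TimeToContact
{
    // An obstacle you approach expands in the image, and TTC ~ dt / (s - 1),
    // where s is the frame-to-frame expansion factor of the tracked points.
    // prevPts/currPts are the same features before and after tracking
    // (e.g. the arrays from CalcOpticalFlowPyrLK) - assumed input.
    public static double Estimate(PointF[] prevPts, PointF[] currPts, double dt)
    {
        PointF prevC = Centroid(prevPts);
        PointF currC = Centroid(currPts);

        // Mean distance of the points from their centroid = the cloud's "size".
        double prevSize = prevPts.Average(p => Dist(p, prevC));
        double currSize = currPts.Average(p => Dist(p, currC));

        double s = currSize / prevSize;                 // expansion factor
        if (s <= 1.0) return double.PositiveInfinity;   // not approaching
        return dt / (s - 1.0);                          // seconds until contact
    }

    static PointF Centroid(PointF[] pts) =>
        new PointF((float)pts.Average(p => p.X), (float)pts.Average(p => p.Y));

    static double Dist(PointF a, PointF b) =>
        Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
}
```

If the estimated time-to-contact drops below a few seconds you can alert the user, and the ultrasonic sensor can then confirm the short-range reading.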
I intend to program navigation of an iPad app with head motions (originally hand motions, but hands seem too difficult to detect at the moment): left, right, up, and down. So I plan to use OpenCV and either detect the optical flow of the head area, or detect the ears and head of the user with Haar cascades (OpenCV ships with quite precise head and ear XMLs). Can anyone offer some advice on which solution to use? Will one of them need more processing power? Might ear and head detection be easier to program? I would like to avoid putting too much effort into the wrong direction, and I don't have much expertise in my current surroundings...
Thank you for any help, advice, and ideas!
I would suggest using Haar cascades, because optical flow is more expensive in computing time!
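To make that comparison concrete: with a cascade you only need one detection per frame plus a little bookkeeping to turn head displacement into left/right/up/down events. A minimal sketch (written with Emgu CV in C# like the other examples on this page; on iOS the equivalent OpenCV calls apply, and the cascade path and pixel thresholds are assumptions to tune):

```csharp
using System;
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.Structure;

class HeadGestureTracker
{
    // Path is an assumption - ship the cascade XML with your app.
    private readonly CascadeClassifier face =
        new CascadeClassifier("haarcascade_frontalface_default.xml");
    private PointF? lastCenter;

    // Returns "left"/"right"/"up"/"down" when the head moves far enough,
    // otherwise null.
    public string ProcessFrame(Image<Gray, byte> gray)
    {
        Rectangle[] faces = face.DetectMultiScale(gray, 1.1, 4, new Size(60, 60));
        if (faces.Length == 0) return null;

        Rectangle r = faces.OrderByDescending(f => f.Width).First(); // largest face
        var center = new PointF(r.X + r.Width / 2f, r.Y + r.Height / 2f);

        string gesture = null;
        if (lastCenter.HasValue)
        {
            float dx = center.X - lastCenter.Value.X;
            float dy = center.Y - lastCenter.Value.Y;
            if (Math.Abs(dx) > 40 && Math.Abs(dx) > Math.Abs(dy))
                gesture = dx > 0 ? "right" : "left";
            else if (Math.Abs(dy) > 40)
                gesture = dy > 0 ? "down" : "up";
        }
        lastCenter = center;
        return gesture;
    }
}
```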
Is using the native face detection in iOS 5 not an option for what you want?
I'm thinking outside the box here, but I've worked with OpenCV before and it still hurts...
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
*The idea, of course, being that you apply this to live input somehow.
One way could be to use Hidden Markov Models. There is a lot of research material (working prototypes) on how to use HMMs to recognize head gestures.
I believe ear Haar classifiers are not that effective with a cluttered background. Please let us know if you get it to work!