My question may not be reasonable, but I would like to know what real industrial-grade foreground/background separation looks like in image processing. My application requires extracting objects from the background. It is easy to track a moving foreground object across successive images, but for a stationary scene (just one image), what would be more efficient than simple thresholding?
Thanks
There are actually many other methods you can try. In my opinion thresholding is a pretty good way to extract an object, but there are several alternatives you can consider depending on the scenario:
1) If the background is fixed, you can simply try background subtraction. Whatever remains is the odd one out; in this case, the object you are trying to extract.
2) If the object you are trying to extract is something specific regardless of background, you can use feature extraction and classifiers (Haar cascades, for instance).
3) If the object has a specific shape, like a circle or rectangle, you can use the Hough transform alongside Canny edge detection, for instance, or other shape detection methods; plenty are available on the web.
4) If the object you are trying to extract has a specific colour, take a look at the HSV or LAB colour spaces, both of which are much better suited for this than RGB (see the sketch below). You can also try pre-processing methods like watershed segmentation, among many others.
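For item 4, here is a minimal OpenCV (C++) sketch of HSV thresholding; the input file name and the hue/saturation/value ranges are placeholders you would tune to your own object:

    // Sketch: extract a coloured object via HSV thresholding.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat bgr = cv::imread("scene.jpg");   // hypothetical input image
        cv::Mat hsv, mask, object;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        // Hue/saturation/value bounds are assumptions to be tuned per object
        cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);
        bgr.copyTo(object, mask);                // keep only the masked pixels
        cv::imwrite("object.png", object);
        return 0;
    }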
And many others, depending on the scenario. Hope that helps.
I don't know if this answers your question, but if the camera is stationary you can look at optical flow. It tracks a moving object in a video stream by looking at the changes between images, from which it can separate the background and foreground. [1]: http://www.mathworks.com/discovery/optical-flow.html
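If you work in OpenCV rather than Matlab, a rough sketch of dense (Farneback) optical flow between successive frames could look like this; the camera index and the flow parameters are assumptions, not tuned values:

    // Sketch: dense optical flow between consecutive camera frames.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                 // assumed camera index
        cv::Mat prev, curr, prevGray, gray, flow;
        cap >> prev;
        cv::cvtColor(prev, prevGray, cv::COLOR_BGR2GRAY);
        while (cap.read(curr)) {
            cv::cvtColor(curr, gray, cv::COLOR_BGR2GRAY);
            // flow becomes a 2-channel float image of per-pixel displacements;
            // pixels with large displacement are likely foreground
            cv::calcOpticalFlowFarneback(prevGray, gray, flow,
                                         0.5, 3, 15, 3, 5, 1.2, 0);
            gray.copyTo(prevGray);
        }
        return 0;
    }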
Hope it helps you
I have been trying to detect multiple people in a small space and hence track them.
Input: CCTV feed from a camera mounted in a small room.
Expected Output: Track and hence store the path that people take while moving from one end of the room to the other.
I tried to implement some of the basic methods like background subtraction and pedestrian detection. But the results are not as desired.
In the results obtained from background subtraction, occlusion breaks what should be a single blob for one person into multiple small blobs, so detecting them as one person is very difficult.
Now consider the case where many people are standing close to each other: there, detecting people using simple background subtraction is a complete disaster.
Is there a better way to detect multiple people?
Or maybe is there a way to improve the result of background subtraction?
And could you please suggest a good way to track multiple people?
That's quite a hard problem with no out-of-the-box solution, so you might have to try different methods.
In the beginning you will want to make some assumptions, like a static camera position and that everything which is not background is a person or part of a person (maybe multiple persons). Persons can't appear out of nowhere within the image; they have to 'enter' it (and are detected on entering and tracked after detection).
Detection and tracking can both be difficult problems, so you might want to focus on one of them first. I would start with tracking and choose a probabilistic tracking method, since simple methods like tracking-by-detection probably can't handle overlap and multiple targets very well.
Tracking:
I would try a particle filter, like the one in http://www.irisa.fr/vista/Papers/2002/perez_hue_eccv02.pdf, which is capable of tracking multiple targets.
Detection: there is a HoG person detector in OpenCV which works quite well for upright persons:
    // OpenCV's built-in HoG descriptor, loaded with the default people detector
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
but it's good to know the approximate size of a person in the image and to scale the image accordingly. You can do this after background subtraction by scaling the blobs or combinations of blobs, or you can use a calibration of your camera and scale image parts of 1.6 m to 2.0 m in height to your HoG detector's window size. Otherwise you may get many misses and many false alarms.
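A minimal usage sketch (the parameter values below are common defaults, not values tuned for your scene):

    // Sketch: run the default people detector on a single frame.
    #include <opencv2/opencv.hpp>

    void detectPeople(cv::Mat& frame) {
        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
        std::vector<cv::Rect> found;
        // The detector window is 64x128 pixels, so persons should appear
        // at roughly that scale after resizing the input.
        hog.detectMultiScale(frame, found, 0, cv::Size(8, 8),
                             cv::Size(32, 32), 1.05, 2);
        for (const cv::Rect& r : found)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
    }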
In the end you will have to work and research for some time to get things running, but don't expect early success or 100% hit rates ;)
I would create a sample video and work on that, manually masking entering people as detections, and implement the tracker with those detections.
Is it possible to compare two intensity histograms (derived from grayscale images) and obtain a likeness factor? In other words, I'm trying to detect the presence or absence of a soccer ball in an image. I've tried feature detection algorithms (such as SIFT/SURF) but they are not reliable enough for my application. I need something very reliable and robust.
Many thanks for your thoughts everyone.
This answer (Comparing two histograms) might help you. Note that intensity comparisons are generally quite sensitive: for example, white during the day looks different from white at night.
I think you should be able to derive something from compareHist() in OpenCV (http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html) to suit your needs, if it fits your purpose.
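A minimal sketch of what that could look like, assuming two grayscale input images (the bin count and the normalisation are choices, not requirements):

    // Sketch: likeness factor between two grayscale intensity histograms.
    #include <opencv2/opencv.hpp>

    double histogramSimilarity(const cv::Mat& grayA, const cv::Mat& grayB) {
        int histSize = 256;                       // one bin per intensity level
        float range[] = {0, 256};
        const float* ranges[] = {range};
        int channels[] = {0};
        cv::Mat histA, histB;
        cv::calcHist(&grayA, 1, channels, cv::Mat(), histA, 1, &histSize, ranges);
        cv::calcHist(&grayB, 1, channels, cv::Mat(), histB, 1, &histSize, ranges);
        cv::normalize(histA, histA, 0, 1, cv::NORM_MINMAX);
        cv::normalize(histB, histB, 0, 1, cv::NORM_MINMAX);
        // Correlation: 1.0 means identical, lower means less alike
        return cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
    }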
If compareHist() doesn't fit your purpose, this paper tracks the ball from multiple fixed cameras, and you might get some more ideas from it even if you are not using multiple cameras: http://www.researchgate.net/publication/222417618_Tracking_the_soccer_ball_using_multiple_fixed_cameras/file/32bfe512f7e5c13133.pdf
As kkuilla has mentioned, there are ready-made methods to compare histograms, such as compareHist() in OpenCV.
But I am not certain it's really applicable to your program. I think you may want to use the Hough Transform to detect circles instead.
More details can be seen in this paper:
https://files.nyu.edu/jb4457/public/files/research/bristol/hough-report.pdf
Look for the part with coins in the paper for the circle detection. I recall reading somewhere about doing ball detection with the Hough Transform too; I can't find it now, but the approach should carry over to your soccer ball. A rough sketch is below.
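Here is a rough OpenCV sketch of circle detection with HoughCircles; the parameter values are generic starting points, not values tuned for soccer balls:

    // Sketch: detect circles in a grayscale image via the Hough transform.
    #include <opencv2/opencv.hpp>

    std::vector<cv::Vec3f> findCircles(const cv::Mat& gray) {
        cv::Mat blurred;
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2);  // suppress noise
        std::vector<cv::Vec3f> circles;                      // (x, y, radius)
        cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                         1,              // accumulator resolution
                         gray.rows / 8,  // min distance between centres
                         100, 30,        // Canny high threshold, accumulator threshold
                         10, 100);       // min/max radius in pixels (assumptions)
        return circles;
    }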
This method should work. Hope this helps. Good luck(:
I am a bit new to image processing, so I'd like to ask about finding the optimal approach to my problem, not for help with code.
I couldn't come up with a good idea yet, so I wanted to ask for your advice. Hope you can help.
I'm working on a project in OpenCV about counting vehicles from a video file or a live camera. Others working on such projects generally track the moving objects and then count them, but I wanted to try a different viewpoint: asking the user to set a ROI (region of interest) on the video window and working only on that region (to avoid dealing with the whole frame and for some performance gain), as seen below. (By the way, the user can set more than one ROI and is asked to make the ROI's height about twice that of a normal car, by sense of proportion.)
I've made some basic progress so far, like background updating, morphological filters, thresholding, and getting the moving object as a binary image, something like the one below.
After doing that, I counted the white pixels of the final thresholded foreground frame and estimated whether it was a car by checking the total white-pixel count (I set a lower bound via a static calculation based on the height of the ROI). To illustrate, I drew a sample graphic:
As you can see from the graphic, it was easy to compute the white-pixel count over time, check whether it traces a curve, and thus determine whether it was a car or just noise.
I was quite successful until two cars passed through my ROI together at the same time. My algorithm counted them as one car, as you can guess :/ I tried different approaches to this problem, and to similar ones like long vehicles, but I couldn't find an optimal solution.
My question is: is it impossible to handle this task with the pixel-counting approach? If it is possible, what would you suggest? Perhaps you have faced something similar before and can help me.
All ideas are welcome, thanks in advance friends.
Isolate the traffic from the background: take two images, run a high-pass filter on one of them, and convert the other to a binary image. Use the binary image to mask the filtered one. You should then be able to use edge detection to identify the roof of each vehicle as a quadrilateral, and from that compute a relative measure of its size (see the sketch after the list).
You then have four scenarios:
no quadrilaterals - no cars
large quadrilaterals - trucks
multiple small quadrilaterals - several cars
single quadrilateral - one car
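A rough sketch of the classification step, assuming you already have a binary mask of the isolated traffic; the quadrilateral test uses contour approximation, and the area thresholds are made-up placeholders:

    // Sketch: classify blobs by quadrilateral shape and area.
    #include <cstdio>
    #include <opencv2/opencv.hpp>

    void classifyBlobs(const cv::Mat& binaryMask) {
        cv::Mat mask = binaryMask.clone();   // findContours may modify its input in old OpenCV
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            std::vector<cv::Point> poly;
            cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);
            if (poly.size() == 4) {               // roof-like quadrilateral
                double area = cv::contourArea(poly);
                if (area > 5000)      std::puts("truck-sized quadrilateral");
                else if (area > 1000) std::puts("car-sized quadrilateral");
            }
        }
    }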
In answer to your question "Is it possible to do this using pixel counting?"
The short answer is "No", for the very reason you're quoting: mere pixel counting of static images is not enough.
If you are limited to pixel counting, you can try looking at pixel-count velocity (the change of pixel counts between successive frames); you might pick out different "velocity" shapes when one car, two cars, or a truck passes.
But just plain pixel counting? No. You need shape (geometric) information as well.
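For what it's worth, the pixel-count velocity mentioned above is cheap to compute. A trivial sketch, assuming you already have binary foreground masks for successive frames:

    // Sketch: change in foreground pixel count between successive masks.
    #include <opencv2/opencv.hpp>

    int pixelCountVelocity(const cv::Mat& prevMask, const cv::Mat& currMask) {
        // Positive while an object is entering the ROI, negative while
        // it is leaving; the shape of this signal over time is the cue.
        return cv::countNonZero(currMask) - cv::countNonZero(prevMask);
    }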
If you apply any kind of thresholding algorithm (e.g. for background subtraction), don't forget to update the background whenever light levels change (e.g. day and night). Also consider the grief when it is partly cloudy, with sharp cloud shadows moving across your image.
As part of my thesis work, I need to build a program for human tracking from video or image sequences, like the KTH or IXMAS datasets, under these assumptions:
Illumination remains unchanged
Only one person appears in the scene
The program needs to perform in real time
I have searched a lot but still cannot find a good solution.
Please suggest a good method, or an existing program, that would be suitable.
Case 1 - If the camera is static
If the camera is static, it is really simple to track one person.
You can apply a method called background subtraction.
Here, for better results, you need a bare image from the camera with no person in it; that is the background. (It can be done even if you don't have this background image, but it's better if you have it. I will explain at the end what to do if you don't.)
Now start capturing from the camera. Take the first frame, convert both it and the background to grayscale, and smooth both images to reduce noise.
Subtract the background image from the frame.
If the frame has no change with respect to the background image (i.e. no person), you get a black image (of course there will be some noise, which we can remove). If there is a change, i.e. a person has walked into the frame, you will get an image with the person and a black background.
Now threshold the image at a suitable value.
Apply some erosion to remove small granular noise, then apply dilation.
Now find the contours. Most probably there will be one contour: the person.
Find the centroid, or whatever point you want, to track this person.
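A minimal OpenCV sketch of this whole pipeline; the file name background.png, the camera index, and the threshold value are assumptions (and the background image must match the camera's frame size):

    // Sketch: background subtraction -> threshold -> erode/dilate ->
    // contours -> centroid, as described above.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat background = cv::imread("background.png", cv::IMREAD_GRAYSCALE);
        cv::GaussianBlur(background, background, cv::Size(5, 5), 0);
        cv::VideoCapture cap(0);
        cv::Mat frame, gray, diff, mask;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // smooth noise
            cv::absdiff(gray, background, diff);               // subtract background
            cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);
            cv::erode(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);
            cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);
            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);
            for (const auto& c : contours) {
                cv::Moments m = cv::moments(c);
                if (m.m00 > 0) {                               // blob centroid
                    cv::Point centroid(int(m.m10 / m.m00), int(m.m01 / m.m00));
                    cv::circle(frame, centroid, 4, cv::Scalar(0, 0, 255), -1);
                }
            }
            cv::imshow("tracking", frame);
            if (cv::waitKey(30) == 27) break;                  // Esc quits
        }
        return 0;
    }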
Now suppose you don't have a background image: you can build one using the cvRunningAvg function, which computes a running average of the frames of the video you are tracking. But as you can see, the first method is better if you can get a background image.
Here is the implementation of the above method using cvRunningAvg.
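Note that cvRunningAvg belongs to OpenCV's legacy C API; in the C++ API the rough equivalent is cv::accumulateWeighted, as in this sketch (the learning rate and frame count are assumptions):

    // Sketch: estimate the background as a running average of frames.
    #include <opencv2/opencv.hpp>

    cv::Mat estimateBackground(cv::VideoCapture& cap, int numFrames) {
        cv::Mat frame, gray, acc;
        for (int i = 0; i < numFrames && cap.read(frame); ++i) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            if (acc.empty()) gray.convertTo(acc, CV_32F);      // seed the average
            cv::accumulateWeighted(gray, acc, 0.05);           // learning rate 0.05
        }
        cv::Mat background;
        acc.convertTo(background, CV_8U);
        return background;
    }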
Case 2 - Camera not static
Here background subtraction won't give good results, since you can't get a fixed background.
In that case, OpenCV comes with a sample for people detection. Use it.
This is the file: peopledetect.cpp
I also recommend you visit this SO question, which deals with almost the same problem: How can I detect and track people using OpenCV?
One possible solution is to use a feature-point tracking algorithm.
Look at this book:
Robert Laganière, OpenCV 2 Computer Vision Application Programming Cookbook, 2011, p. 266.
The full algorithm is already implemented in this book, using OpenCV.
The above method, simple frame differencing followed by dilation and erosion, would work in the case of a simple clean scene with just the motion of the person walking and absolutely no other motion or illumination changes. Note also that you are doing detection every frame, as opposed to tracking. In this specific scenario, tracking might not be much more difficult either. For movement direction and speed, you can just run Lucas-Kanade on the difference images.
At the core of it, what you need is a person detector followed by a tracker. The tracker can be point-based (Lucas-Kanade or Horn-Schunck), or a Kalman filter, or any of those kinds of tracking for bounding boxes or blobs.
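A minimal sketch of the point-based option, using pyramidal Lucas-Kanade in OpenCV; you would seed the points with goodFeaturesToTrack inside the detector's bounding box:

    // Sketch: track a set of points from one grayscale frame to the next.
    #include <opencv2/opencv.hpp>

    std::vector<cv::Point2f> trackPoints(const cv::Mat& prevGray,
                                         const cv::Mat& currGray,
                                         const std::vector<cv::Point2f>& points) {
        std::vector<cv::Point2f> next, tracked;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, points, next, status, err);
        for (size_t i = 0; i < next.size(); ++i)
            if (status[i]) tracked.push_back(next[i]);   // keep successful tracks
        return tracked;
    }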
A lot of vision problems are ill-posed, so some amount of structure/constraints helps to solve them considerably faster. A few questions to ask would be these:
Is the camera moving? No: quite easy. Yes: much harder; exactly what works depends on other conditions.
Is the scene constant except for the person?
Is the person front-facing / side-facing most of the time? Detect using Viola-Jones, or train your own detector (AdaBoost with Haar or similar features) for side-facing faces.
How spatially accurate do you need it to be: will a bounding box do, or do you need a contour? Bounding box: just search (intelligently :)) in the neighbourhood with SAD (sum of absolute differences). Contour: tougher; use contour-based trackers.
Do you need the whole "tracklet", or just the position of the person at each frame? What temporal accuracy?
What resolution are we speaking about here, since you need real time?
Is the scene sparse like those sequences, or would it be cluttered?
Is there any other motion in the sequence?
Offline or online?
If you develop in .NET, you can use the AForge.NET framework.
http://www.aforgenet.com/
I was a regular visitor of its forums, and I seem to remember plenty of people using it for tracking people.
I've also used the framework for other, unrelated purposes, and I can highly recommend it for its ease of use and powerful features.
I have a simple photograph that may or may not include a logo image. I'm trying to identify whether a picture includes the logo shape or not. The logo (rectangular shape with a few extra features) could be of various sizes and could have multiple occurrences. I'd like to use Computer Vision techniques to identify the location of these logo occurrences. Can someone point me in the right direction (algorithm, technique?) that can be used to achieve this goal?
I'm quite a novice at Computer Vision, so any direction would be much appreciated.
Thanks!
Practical issues
Since you need a scale-invariant method (that's the proper jargon for "could be of various sizes"), SIFT (as mentioned in Logo recognition in images, thanks overrider!) is a good first choice; it's very popular these days and worth a try. You can find some code to download here. If you cannot use Matlab, you should probably go with OpenCV. Even if you end up discarding SIFT for some reason, trying to make it work will teach you a few important things about object recognition.
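To give you a feel for it, here is a minimal OpenCV sketch of SIFT matching with Lowe's ratio test. It assumes OpenCV 4.4 or later, where SIFT lives in the main module (earlier versions need xfeatures2d), and the 0.75 ratio is the usual rule of thumb, not a tuned value:

    // Sketch: count SIFT matches between a logo template and a photo.
    #include <opencv2/opencv.hpp>

    int countGoodMatches(const cv::Mat& logo, const cv::Mat& photo) {
        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        sift->detectAndCompute(logo, cv::noArray(), kp1, desc1);
        sift->detectAndCompute(photo, cv::noArray(), kp2, desc2);
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(desc1, desc2, knn, 2);
        int good = 0;
        for (const auto& m : knn)                 // Lowe's ratio test
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
                ++good;
        return good;   // many good matches => the logo is likely present
    }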
General description and lingo
This section is mostly here to introduce you to a few important buzzwords, by describing a broad class of object detection methods, so that you can go and look these things up. Important: there are many other methods that do not fall into this class. We'll call this class "feature-based detection".
So first you go and find features in your image. These are characteristic points of the image (corners and line crossings are good examples) that have a lot of invariances: whatever reasonable processing you do to your image (scaling, rotation, brightness change, adding a bit of noise, etc.), it will not change the fact that there is a corner at a certain point. "Pixel value" or "vertical lines" are bad features. Sometimes a feature includes some numbers (e.g. the prominence of a corner) in addition to a position.
Then you do some clean-up, like remove features that are not strong enough.
Then you go to your database. That's something you've built in advance, usually by taking several nice and clean images of whatever you are trying to find, running your feature detection on them, cleaning things up, and arranging them in some data structure for your next stage:
Look-up. You have to take a bunch of features from your image and try to match them against your database: do they correspond to an object you are looking for? This is pretty non-trivial, since on the face of it you have to consider all subsets of the features you've found, which is exponential. So there are all kinds of smart techniques to do it, like the Hough transform and geometric hashing.
Now you should do some verification. You have found some places in the image which are suspect: it's probable that they contain your object. Usually you know the presumed size, orientation, and position of your object, and you can use something simple (like a convolution) to check whether it's really there.
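A minimal sketch of such a verification step, using normalised cross-correlation (cv::matchTemplate, which is exactly this kind of convolution-style check); the score threshold is an assumption you would tune:

    // Sketch: verify a suspected occurrence by template matching.
    #include <opencv2/opencv.hpp>

    bool verifyAt(const cv::Mat& image, const cv::Mat& templ, double minScore) {
        cv::Mat result;
        cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::minMaxLoc(result, nullptr, &maxVal);
        return maxVal >= minScore;   // e.g. ~0.8, an assumption to tune
    }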
You end up with a bunch of probabilities, basically: for a few locations, how probable it is that your object is there. Here you do some outlier detection. If you expect only 1-2 occurrences of your object, you'll look for the largest probabilities that stand out, and take only these points. If you expect many occurrences (like face detection on a photo of a bunch of people), you'll look for very low probabilities and discard them.
That's it, you are done!