For my project, which is supposed to segment the closest hand region from the camera, I initially tried OpenCV's stereo vision example. However, the disparity map looks very bad and is useless for me.
Is there any other method that is better than the OpenCV implementation and has some example output (image/video)? Because my time is limited, I have to choose one good algorithm and implement it.
Thank you.
OpenCV implements a number of stereo block matching algorithms, some of them pretty cutting edge.
Disparity maps always look bad except in very simple circumstances - the first step is to try and improve the source images, the lighting and the background.
If it was easy then everybody would be doing it and there would be no market for expensive 3D laser scanners.
Try the different block matching algorithms provided by OpenCV. The little bit of experimentation I've done so far seems to indicate that cv::StereoSGBM gives better disparity maps than cv::StereoBM, but is slower.
The performance of the block matching algorithms will depend on what parameters they are initialized with. Have a look at the stereo examples again here; notice lines 195-222, where the algorithms are initialized.
I also suggest you use a basic GUI (OpenCV's highgui, for example) to manipulate these parameters in real time when fine-tuning the algorithm.
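A minimal Python sketch of that idea (my own illustration; the image paths and parameter values below are only placeholders/starting points to tune for your setup):

```python
# Tune StereoSGBM parameters live with highgui trackbars.
# Assumes a rectified pair saved as left.png / right.png (placeholder names).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

def recompute(_=0):
    # numDisparities must be a positive multiple of 16, blockSize an odd number
    num_disp = max(16, (cv2.getTrackbarPos("numDisp/16", "disparity") + 1) * 16)
    block = max(3, cv2.getTrackbarPos("blockSize", "disparity") | 1)
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,
        blockSize=block,
        P1=8 * block * block,       # smoothness penalties, values from the docs
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=32,
    )
    disp = sgbm.compute(left, right)  # fixed-point disparity, scaled by 16
    vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imshow("disparity", vis)

cv2.namedWindow("disparity")
cv2.createTrackbar("numDisp/16", "disparity", 5, 16, recompute)
cv2.createTrackbar("blockSize", "disparity", 5, 21, recompute)
recompute()
cv2.waitKey(0)
```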
This is my template image:
https://drive.google.com/open?id=151q9lliGc5ySFe96rY6YM4wbVjvbr9uv
This is my test image, in which I am trying to find the template image:
https://drive.google.com/open?id=1ncop46vbTVTjcwp51GrcQfGX4w1WrzDI
I am able to find a good match when the test image is pretty much straight and not angled, something like this:
https://drive.google.com/open?id=1SY68YXxIpDNyK5UfgRdjomI5bkKKqLWE
but whenever it is angled or slanted, as in my first test image, I fail to identify good match points.
I am using SIFT for extracting keypoints and descriptors and a FLANN-based search for matching the keypoints. I am pretty much doing everything according to this tutorial: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html
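For reference, the pipeline from that tutorial looks roughly like this (a minimal sketch, not my exact code; file names are placeholders):

```python
# SIFT keypoints/descriptors + FLANN matching + Lowe's ratio test,
# following the linked py_matcher tutorial. Needs a build where SIFT is
# available (cv2.SIFT_create in OpenCV >= 4.4, xfeatures2d in older versions).
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# FLANN with a KD-tree index, as in the tutorial
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# keep only distinctive matches
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "good matches")
```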
What can be done to improve the accuracy of finding a match?
Ordinary feature detectors such as SIFT have limited angular invariance, up to about 30 degrees. In your case you need much more, so you can use the ASIFT algorithm, which is able to match features extracted from images whose angular difference is much higher. Note that the computational requirements of this method are very high, so you need really powerful hardware if you want to use it in real time. Unfortunately it is not included in OpenCV, but there are plenty of implementations around the web, for instance on the mentioned website.
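Just to illustrate the core idea (this is not ASIFT itself, only a rough sketch of the principle: simulate affine "tilts" of the template, run SIFT on each simulated view, and keep the best-matching view; real ASIFT also samples rotations and applies anti-aliasing blur, and all file names here are placeholders):

```python
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_s, des_s = sift.detectAndCompute(scene, None)
bf = cv2.BFMatcher(cv2.NORM_L2)

best = []
for tilt in (1.0, 1.5, 2.0, 3.0):                 # 1.0 = no tilt
    h, w = template.shape
    # squash horizontally to approximate an out-of-plane rotation
    view = cv2.resize(template, (max(1, int(w / tilt)), h))
    kp_t, des_t = sift.detectAndCompute(view, None)
    if des_t is None:
        continue
    pairs = bf.knnMatch(des_t, des_s, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) > len(best):
        best = good

print("best simulated view gives", len(best), "matches")
```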
This question is for those who have tried feature detection/matching methods on brain images - it is a broad one, and perhaps a bad one:
How could you tell if the method you used was "good enough?"
What does a successful matching/detection test look like for your data?
EDIT:
As of now, I am not trying to detect any distinct features in particular.
I'm using OpenCV's ORB, SIFT, SURF, etc. detection methods and seeing what features they identify.
Sometimes, however, the orientation of the brain changes entirely from one set of images to the next, so if I compare two images from these sets, the detection methods won't yield any effective results (i.e. the matching will be distinctly, completely off). But if I compare images that look similar, but not identical, the detection seems to work alright. The point is, detection seems to work for frames that were taken around the same time, but not over a long interval. I wonder if others have come across this and whether they have found that detection methods are still useful despite it.
First of all, you should specify what kind of features you are looking for, or for what purpose the experiment is going to be performed.
Feature extraction is highly subjective in nature, it all depends on what type of problem you are trying to handle. There is no generic feature extraction scheme which works in all cases.
For example, if the features point to some tumor classification or lesion, then of course there are different software packages you can use to extract and define your features.
There are different methods to detect the relevant features, depending on the application:
SURF algorithm (Speeded Up Robust Features)
PLOFS: It is a fast wrapper approach with a subset evaluation.
ICA or PCA
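As a minimal illustration of the PCA option (my own sketch, with random data standing in for flattened image patches):

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((500, 32 * 32))   # stand-in for 500 flattened 32x32 patches

# centre the data and take the top principal directions via SVD
centered = patches - patches.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ Vt[:20].T        # 500 x 20 compact feature matrix

explained = (S[:20] ** 2).sum() / (S ** 2).sum()
print(features.shape, f"{explained:.2%} of variance explained")
```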
This paper is a very good review of brain MRI feature extraction for tissue classification:
https://pdfs.semanticscholar.org/fabf/a96897dcb59ad9f04b5ff92bd15e1bd159ef.pdf
I found this paper very good for understanding the differences between feature extraction techniques:
https://www.sciencedirect.com/science/article/pii/S1877050918301297
I am working on a broad project involving object retrieval from videos.
According to "A Survey on Visual Content-Based Video Indexing and Retrieval", the most popular methods are divided into:
simple "threshold based approach" (global or adaptative)
supervised learning-based classifiers (with SVM or AdaBoost)
unsupervised learning-based algorithm (mainly K-Means)
For the moment I have implemented on my own a very simple and old-fashioned method based on differences of color histograms between successive frames.
Nevertheless, I would like to try something more efficient and up to date, without spending too much time, considering that shot boundary detection is not the main topic of my research.
Does anybody know an implementation of an effective algorithm?
I've found the following implementation extremely useful and effective in my own research:
https://github.com/johmathe/Shotdetect
The workhorse of the codebase happens here:
https://github.com/johmathe/Shotdetect/blob/master/src/film.cc#L117-237
It mostly relies on color information for detection of shots.
Computing the Chi-Square distance between RGB histograms of adjacent frames is one of the fast, simple and robust methods for shot boundary detection. You can see my implementation and usage of this method here.
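The idea in a few lines (a sketch, not the linked implementation; the video path and threshold are placeholders to tune for your footage):

```python
import cv2

cap = cv2.VideoCapture("video.mp4")
prev_hist = None
frame_idx = 0
THRESHOLD = 0.5          # depends on content and histogram normalization

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # one 3D histogram over B, G, R with 8 bins per channel
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    cv2.normalize(hist, hist)
    if prev_hist is not None:
        dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
        if dist > THRESHOLD:
            print("possible shot boundary at frame", frame_idx)
    prev_hist = hist
    frame_idx += 1

cap.release()
```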
I would like to use computer vision to do the following:
A camera is mounted outside a building, capturing a videostream of the street below. The camera is installed approximately 5-6 meters above the street.
Whenever a person wearing a certain kind of hat (white, round) is captured by the camera, an event should be triggered.
Which algorithm should I look into to implement this kind of behavior?
Is this best achieved through training the algorithm with sample data or is there another way to tell it to look for this type of hat ?
Also, how do I use multiple frames of video to increase the quality of detection ?
Edit: Added a picture of the hat
Before we do everything in comments I will start an answer here.
The first link you posted describes a simple color-based detection. You can try that, but it will fail if there are other pixel clusters of similar color in the image. Your idea of combining it with tracking is good: Identify clusters, build trajectories over several images, and only accept plausible trajectories as a hit. For robust tracking you may want to look into Kalman filtering. A problem you will most likely encounter is that a "white" hat will hardly be "white" in the images your camera delivers.
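To make the cluster idea concrete, here is a rough sketch (my own, with guessed HSV ranges and size limits that you would have to calibrate for your camera):

```python
# Threshold "white-ish" pixels in HSV, keep roughly round blobs of plausible
# size, and report their centroids so a tracker (e.g. a Kalman filter) can
# link them into trajectories over several frames.
import cv2
import numpy as np

frame = cv2.imread("street_frame.png")           # placeholder single frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# "white" = low saturation, high value; real footage will need calibration
mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# OpenCV 4.x return signature
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if not 100 < area < 5000:                    # plausible hat size in pixels
        continue
    (x, y), radius = cv2.minEnclosingCircle(c)
    roundness = area / (np.pi * radius ** 2 + 1e-6)
    if roundness > 0.6:                          # roughly round blob
        print("candidate hat centroid:", (int(x), int(y)))
```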
The second link you refer to - boosted Classifiers Based on Haar-like Features - is for detection of more complex objects. It probably won't help you find white blobs. Invest your time and energy in learning about tracking.
I'm happy to repeat myself here: "Solving a computer vision problem" is not something like "sorting an array". OpenCV is not the C++ Standard Library. You can use an std::map without knowing anything about a red-black tree. But (IMHO) you can't use vision APIs without knowing a good deal of the math and theory. Working solutions in computer vision are typically heavily tuned towards the specific problem scenario. Sorry if that sounds pedantic, but it explains why your question got beaten up.
In brief, what are the available options for implementing tracking of a particular image (a photo/graphic/logo) in a webcam feed using OpenCV? In particular, I am trying to collate opinions about the following:
Would Haar training be overkill (considering that these are not 3D objects but simply images to be tracked), or is it the only way out?
I have tried template matching and color-based detection, but these don't offer reliable tracking under varying illumination/scale/orientation at all.
Would SIFT/SURF feature matching work as reliably in video as with static image comparison?
I am a relative beginner to OpenCV, as is evident from my previous queries on SO (very helpful replies). Any cues or links to good resources for beginning an NFT implementation with OpenCV?
Can you talk a bit more about your requirements? Namely, what type of appearance variations do you expect, and how much control do you have over the environment? What type of constraints do you have in terms of speed/power/resource footprint?
Without those, I can only give some general assessment to the 3 paths you are talking about.
1.
Haar would work well and fast, particularly for instance recognition.
Note that Haar doesn't work all that well for 3D unless you train with a full spectrum of templates to cover various perspectives. The poster child application of Haar cascades is the Viola-Jones face detection system, which is largely geared towards frontal faces (it can certainly be trained for many other things).
For a tutorial on doing Haar training using OpenCV, see here.
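Once you have a trained cascade, running it is only a few lines (a sketch; cascade.xml and the frame path are placeholders, and the training itself is what the tutorial above covers):

```python
import cv2

cascade = cv2.CascadeClassifier("cascade.xml")
frame = cv2.imread("webcam_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# scaleFactor/minNeighbors trade off speed against false detections
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                minSize=(30, 30))
for (x, y, w, h) in hits:
    print("detection at", (x, y), "size", (w, h))
```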
2.
Try NCC or, better yet, Lucas-Kanade tracking (cvCalcOpticalFlowPyrLK, which is pyramidal, i.e. coarse-to-fine LK - a 4-level pyramid usually works well) for a template. It is usually good up to about 10% scale change or 10 degrees of rotation without template changes. Beyond that, you can have automatically evolving templates, which can drift over time.
For a quick Optical Flow/tracking tutorial, see this.
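A minimal sketch of pyramidal LK with the modern Python API (equivalent of the cvCalcOpticalFlowPyrLK call mentioned above; the frame file names are placeholders):

```python
import cv2

prev_frame = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# points to track, e.g. corners inside the template region
p0 = cv2.goodFeaturesToTrack(prev_frame, maxCorners=100,
                             qualityLevel=0.01, minDistance=7)

# maxLevel=3 gives a 4-level pyramid, 21x21 search window
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev_frame, next_frame, p0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

tracked = p1[status.ravel() == 1]
print("tracked", len(tracked), "of", len(p0), "points")
```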
3.
SIFT/SURF would indeed work very well. I'd suggest some additional geometric verification step to remove spurious matches.
I'd be a bit concerned about the amount of computational time involved. If there isn't significant illumination/scale/in-plane rotation variation, then SIFT is probably overkill. If you truly need it, check out Changchang Wu's excellent SiftGPU implementation. Note: third party, not OpenCV.
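The geometric verification step can be as simple as a RANSAC homography over the ratio-test survivors (a sketch with placeholder image names; the thresholds are only starting points):

```python
import cv2
import numpy as np

img1 = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("webcam_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # keep only matches consistent with a single homography
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(int(inlier_mask.sum()), "geometrically consistent matches")
```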
It seems that none of these methods, when applied alone, can bring reliable results unless it is a hobby project. Probably some adaptive algorithm would be more or less acceptable. For example, see a famous open-source project where they use machine learning.