OpenCV Adaptive Threshold Explanation

Could anyone provide a mathematical explanation of the Adaptive Threshold function in OpenCV?
From the OpenCV 3.1.0 Documentation I found only this explanation:
cv2.ADAPTIVE_THRESH_GAUSSIAN_C : threshold value is the weighted sum of neighbourhood values where weights are a Gaussian window.
I do understand this and can picture what the algorithm is doing for each pixel; defining a mathematical formulation for it, however, is proving tougher than I thought.
I also found the code for it in the GitHub repo here, but after trying to walk through the code I've realised my knowledge of OpenCV doesn't quite cover it.
Also, any sources would be much appreciated.
Thanks in advance!
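For anyone landing here: written out, the documented behaviour of ADAPTIVE_THRESH_GAUSSIAN_C with THRESH_BINARY amounts to the following, where W is the blockSize × blockSize window centred on (x, y) and G holds Gaussian weights with sigma derived from blockSize (as in getGaussianKernel):

```latex
% Per-pixel threshold: Gaussian-weighted sum of the neighbourhood minus C
T(x, y) = \Bigg( \sum_{(i, j) \in W} G(i, j)\, \mathrm{src}(x + i,\, y + j) \Bigg) - C

% With THRESH_BINARY the output pixel is then
\mathrm{dst}(x, y) =
  \begin{cases}
    \texttt{maxValue} & \text{if } \mathrm{src}(x, y) > T(x, y) \\
    0                 & \text{otherwise}
  \end{cases}
```

In other words, each pixel is compared against a Gaussian blur of its own neighbourhood, shifted down by the constant C.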

Related

Identify if an image is blurred (preferred without OpenCV)

I have just finished building a camera using AVCaptureSession for scanning documents on iPhone, and I am looking for a way to determine whether the captured image is of good quality and not blurred.
I have seen many solutions using OpenCV, and I am looking for other options.
Any Help would be appreciated.
Thanks
First of all, interesting question; it made me do some research myself. In general, Analysis of focus measure operators for shape-from-focus is a great research paper, covering 36 methods for measuring blurriness in an image, from simple and straightforward ones to more complex ones.
I have myself used a basic Laplacian operation on one channel of the image (essentially the 2nd derivative of the pixels) to measure blurriness, and it worked quite well for me. Once you convolve the channel with the Laplacian operator, the variance of the resulting Laplacian image is a good estimate of the blurriness. The assumption is that a high variance means a wide spread of responses, both edge-like and non-edge-like, which is representative of a normal, in-focus image; a very low variance means a tiny spread of responses, indicating there are very few edges in the image, and the more blurred an image is, the fewer edges it has. The trick is to find an apt threshold separating high from low variance, which you can ascertain by running the measure on your dataset.
Courtesy: Blog
PS. Although the blog I reference here mentions "OpenCV", the method can be implemented however you like once you understand the concept, which is why I started the answer with the research paper.
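With OpenCV's Python bindings the whole measure is a couple of lines; a minimal sketch (the threshold of 100 is purely a hypothetical starting point to tune on your own data, and the same convolution can be reproduced without OpenCV using any general convolution routine, e.g. Accelerate/vImage on iOS):

```python
import cv2

def variance_of_laplacian(path, threshold=100.0):
    """Return (score, is_blurry): low Laplacian variance suggests blur."""
    # Work on a single channel, as described above.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Convolve with the Laplacian (2nd derivative) and take the variance.
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score, score < threshold

score, blurry = variance_of_laplacian("document.jpg")
print("focus score: %.2f, blurry: %s" % (score, blurry))
```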

SIFT Feature Extraction

Firstly, English is not my native language, so apologies for my grammar mistakes in advance. I am trying to implement the SIFT feature extraction algorithm, and I have a couple of questions that are not very clear in the paper:
What happens at octave boundaries when we are searching for local maxima? Do we search just the 8+9 neighbourhood (instead of the full 8+9+9), or do we create extra DoG layers that we don't use at the other steps?
When interpolating the extrema with a 2nd-order function, do we upscale the downscaled images directly from the DoG, or interpolate in the original downsampled image and then upscale to get the subpixel-accurate positions?
When interpolating the extrema with the 2nd-order function, does it stay within the same octave, or are other octaves used? I think the other octaves would have to be upsampled before interpolating?
Should the image size stay the same after convolving with the Gaussian? This will affect the keypoint locations.
Vedaldi provides a great implementation at http://vision.ucla.edu/~vedaldi/code/sift.html, but because it is in .mex format I can't see what is going on inside. Other open-source solutions haven't satisfied me either, hence I am asking for your help.
Thank you so much for your valuable answers.
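Not an answer to the boundary questions themselves, but while debugging it helps to diff your keypoints against a reference implementation. A minimal sketch using OpenCV's built-in SIFT (cv2.SIFT_create requires OpenCV >= 4.4; older versions expose it via opencv-contrib as cv2.xfeatures2d.SIFT_create):

```python
import cv2

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

# Reference SIFT: compare these keypoints and scales against your own
# implementation to localise where the two diverge.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

for kp in keypoints[:5]:
    # pt is at the ORIGINAL image resolution (subpixel-interpolated);
    # size and octave encode the scale the keypoint was found at.
    print(kp.pt, kp.size, kp.octave)
```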

Hand sign detection

I am trying to identify static hand signs, and I am confused about which libraries and algorithms I can use for the project.
What I need is to identify hand signs and convert them into text. I have managed to get the hand contour.
Can you please tell me the best method to classify hand signs?
Is it a Haar classifier, AdaBoost classifier, convex hull, orientation histograms, SVM, the SIFT algorithm, or anything else?
Please give me some examples as well.
I have tried both OpenCV and Emgu CV for image processing. Which is better for a real-time system, C++ or C#?
Any help is highly appreciated.
Thanks
I implemented hand tracking for web applications during my master's degree. Basically, you should follow these steps:
1 - Detect skin-colour features in a region of interest. Basically, draw a frame on the screen and ask the user to place their hand inside it.
2 - Use an implementation of the Lucas-Kanade tracker method (see the sketch after these steps). This algorithm ensures that your features are not lost across frames.
3 - Try to gather more features at every 3-frame interval, to replace the ones that get lost.
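A minimal sketch of steps 1 and 2 with OpenCV's Python bindings; the ROI coordinates are hypothetical, and cv2.goodFeaturesToTrack stands in for whatever skin-colour feature detector you use in step 1:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

# Step 1: detect initial features inside a fixed region of interest
# where the user was asked to place their hand (coordinates are made up).
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
roi = np.zeros_like(prev_gray)
roi[100:300, 200:400] = 255  # hypothetical hand region
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=5, mask=roi)

# Step 2: track the features from frame to frame with Lucas-Kanade.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    # Keep only the points that were successfully tracked.
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # Step 3 would re-detect features here every few frames.
```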
People use many approaches, so I cannot give you a single one. You could do some research on Google Scholar using the keywords "hand sign", "recognition" and "detection".
Maybe you will find some code with the help of Google. One example is HandVu: http://www.movesinstitute.org/~kolsch/HandVu/HandVu.html
The Haar classifier (the Viola-Jones method) helps to detect hands, not to recognize them.
Good luck in your research!
I have made the following with OpenCV. The algorithm:
Skin detection done in HSV
Thinning (if a pixel has a zero neighbour, set it to zero)
Thickening (if a pixel has a nonzero neighbour, set it to nonzero)
See this Wikipedia page for the details of these operations.
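A rough sketch of that pipeline in OpenCV's Python bindings; the HSV skin bounds are hypothetical and will need tuning for your lighting and skin tones, and erode/dilate stand in for the neighbour rules above:

```python
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# 1. Skin detection in HSV (these bounds are illustrative only).
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([20, 150, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

kernel = np.ones((3, 3), np.uint8)
# 2. "Thinning": erosion zeroes pixels that touch a zero neighbour.
mask = cv2.erode(mask, kernel, iterations=1)
# 3. "Thickening": dilation sets pixels that touch a nonzero neighbour.
mask = cv2.dilate(mask, kernel, iterations=1)

# The cleaned mask then yields the hand contour.
# (OpenCV 4.x returns two values here; 3.x returns three.)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
```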
You can find a trained cascade for detecting hands with OpenCV on GitHub:
https://github.com/Aravindlivewire/Opencv/blob/master/haarcascade/aGest.xml
Good luck...
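A minimal sketch of loading and running that cascade, assuming aGest.xml has been downloaded from the repository above:

```python
import cv2

# Assumes aGest.xml from the repository above is in the working directory.
cascade = cv2.CascadeClassifier("aGest.xml")

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Parameters are typical starting values; tune them for your footage.
hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in hands:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```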

OpenCV 2.2 PCA and EigenFaces

I just wanted to know whether the cv::PCA::PCA constructor in OpenCV 2.2 subtracts the mean, or whether I must pass my data with the mean already subtracted.
I tested both ways, but when visualizing the eigenfaces neither of them gives me good results, just a black screen. I get no segmentation faults or errors; I just don't get the eigenface visualization shown in the papers.
I posted a complete example that shows how to use the PCA and display Eigenfaces here: PCA + SVM using C++ Syntax in OpenCV 2.2 (and on my page: http://www.bytefish.de/blog/pca_in_opencv).
It seems that they subtract the mean within the PCA functions (I went to look at the declaration of cv::PCA). Still, I can't get the eigenface visualization; it is just a black window. I thought the eigenvectors might not be normalized, but no: I printed the L2 norm of each eigenvector and it is exactly 1.
To get eigenfaces you need to reshape each PCA eigenvector back to the image dimensions and rescale it for display; with unit-norm eigenvectors the raw values are all close to zero, which is why the window looks black.
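The question is about the 2.2-era C++ API, but the reshape-and-rescale step is easy to show with the Python bindings; a minimal sketch (the face size and random stand-in data are assumptions):

```python
import cv2
import numpy as np

# data: one flattened face image per row, float32, shape (n_samples, h*w).
h, w = 112, 92  # hypothetical face image size
data = np.random.rand(40, h * w).astype(np.float32)  # stand-in for real faces

# mean=None tells OpenCV to compute and subtract the mean internally.
mean, eigenvectors = cv2.PCACompute(data, mean=None, maxComponents=10)

for i, ev in enumerate(eigenvectors):
    face = ev.reshape(h, w)
    # Unit-norm eigenvectors have values near zero, hence the black window;
    # rescale to 0..255 before displaying or saving.
    display = cv2.normalize(face, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("eigenface_%d.png" % i, display.astype(np.uint8))
```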

What kind of histogram does OpenCV CAMSHIFT use, is it ratio or weighted?

I want to know what kind of histogram is used in OpenCV's CAMSHIFT algorithm.
Is it the ratio histogram or is it the weighted histogram?
Many Thanks.
I believe this site explains it well.
If you want to know more about OpenCV histograms, you can check the documentation.
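For reference, as far as the stock OpenCV CAMSHIFT sample goes, it builds a plain (min-max normalized) hue histogram rather than an explicitly ratio- or background-weighted one, and tracks its back-projection. A condensed sketch of that pipeline (the initial track window values are hypothetical):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")
ok, frame = cap.read()

# Initial track window (x, y, w, h) -- hypothetical values.
track_window = (200, 150, 80, 80)
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)

# Plain hue histogram of the target region, normalized to 0..255.
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)),
                   np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the stored histogram onto the current frame...
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # ...and let CAMSHIFT shift and resize the window over it.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
```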
