DSP and ML: How to classify DTMF tones? - machine-learning

I am currently learning ML, and one project I am considering (for learning purposes) is classifying DTMF tones using ML.
I will be using numpy/scipy. I will have a time-domain DTMF signal (for all the numbers 0-9), run it through an FFT, and get an array of frequency values representing the phone dial pad, in which two frequencies have a higher value than the rest.
Hypothetical example: a hypothetical DTMF tone has two frequencies, 100Hz and 300Hz, and the FFT array has increments of 100Hz (only in this example; my actual implementation will have finer increments):
index 0 (100Hz) - 1.0
index 1 (200Hz) - 0.0
index 2 (300Hz) - 1.0
index 3 to n - 0.0
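For concreteness, here is a minimal numpy sketch of how such a magnitude-spectrum feature vector could be computed; the sample rate, tone duration and the synthetic two-frequency tone are hypothetical choices, not taken from the question:

import numpy as np

fs = 8000                                    # hypothetical sample rate (Hz)
t = np.arange(0, 0.2, 1 / fs)                # 200 ms of signal
# hypothetical DTMF-like tone made of 100 Hz and 300 Hz components
signal = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 300 * t)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum, phase discarded
freqs = np.fft.rfftfreq(len(signal), 1 / fs) # frequency (Hz) of each FFT bin
features = spectrum / spectrum.max()         # one fixed-length feature vector per tone

Each tone then becomes one row of a feature matrix that scikit-learn estimators can consume directly, with no image rendering required.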
Most of the scikit-learn examples I have seen use a single value for classification. How can I use this array of FFT frequencies to train and classify the DTMF data?
What I am currently considering is to plot the FFT frequencies with matplotlib, save those plots as pictures, and use image classification to train the model and classify the DTMF signals. But that seems like an "expensive" approach. What approach could I use without resorting to image classification?
Credit: picture from https://blogs.mathworks.com/cleve/2014/09/01/touch-tone-telephone-dialing/

A linear classifier would be a plausible ML approach for this task:
Compute the FFT of the input signal. Take the magnitude (abs) of the spectrum, since phase is not relevant for distinguishing DTMF tones. This will result in an array of N real nonnegative values.
Follow this with a linear layer (without bias term), taking a size-N input and producing a size-4 output. In other words, the layer is an Nx4 matrix, and you multiply the spectrum with this matrix to get 4 values.
Finally, apply a softmax to get 4 normalized confidences in the [0, 1] range. Interpret the ith confidence as predicting whether the ith tone is present. Find the largest two confidences to determine which DTMF symbol it is.
When trained, the linear layer should learn which band of frequencies to associate with each tone. If N is large compared to the number of training examples, it will help to add an L2 penalty or other regularization on the linear layer's parameters.
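A minimal numpy sketch of the forward pass described above; the weight matrix here is a random placeholder standing in for trained parameters, and the spectrum is a dummy input (both are assumptions for illustration, not part of the answer):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

N = 1024                                  # length of the magnitude spectrum
num_tones = 4                             # one confidence per tone, as described above
spectrum = np.abs(np.random.randn(N))     # placeholder for abs(FFT(signal))
W = np.random.randn(N, num_tones)         # placeholder; learned from labelled tones in practice

confidences = softmax(spectrum @ W)       # normalized confidences in [0, 1]
top_two = np.argsort(confidences)[-2:]    # the two most confident tones -> the DTMF symbol

Training would fit W (for example by minimizing cross-entropy with an L2 penalty, as the answer suggests) on a set of labelled spectra.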

Related

Minibatch SGD gradient computation - average or sum

I am trying to understand how the gradients are computed when using minibatch SGD. I implemented it following the CS231 online course, but only came to realize that in intermediate layers the gradient is basically the sum over all the gradients computed for each sample (the same holds for the implementations in Caffe or Tensorflow). It is only in the last layer (the loss) that they are averaged over the number of samples.
Is this correct? If so, does it mean that since in the last layer they are averaged, when doing backprop all the gradients are also averaged automatically?
Thanks!
It is best to first understand why SGD works.
Normally, a neural network is actually a very complex composite function of an input vector x, a label y (or target variable, which changes according to whether the problem is classification or regression) and a parameter vector w. Assume that we are working on classification. We are then actually doing maximum likelihood estimation (actually MAP estimation, since we will almost certainly use L2 or L1 regularization, but this is too much technicality for now) for the parameter vector w. Assuming that the samples are independent, we have the following cost function:
p(y1|w,x1)p(y2|w,x2) ... p(yN|w,xN)
Optimizing this with respect to w is a mess, because all of these probabilities are multiplied together (this would produce an insanely complicated derivative with respect to w). We use log probabilities instead (taking the log does not change the extreme points, and we divide by N so we can treat our training set as an empirical probability distribution, p(x)):
J(X,Y,w)=-(1/N)(log p(y1|w,x1) + log p(y2|w,x2) + ... + log p(yN|w,xN))
This is the actual cost function we have. What the neural network actually does is to model the probability function p(yi|w,xi). This can be a very complex 1000+ layered ResNet or just a simple perceptron.
Now the derivative with respect to w is simple to state, since we now have a sum:
dJ(X,Y,w)/dw = -(1/N)(dlog p(y1|w,x1)/dw + dlog p(y2|w,x2)/dw + ... + dlog p(yN|w,xN)/dw)
Ideally, the above is the actual gradient. But this full-batch calculation is not easy to compute. What if we are working on a dataset with 1M training samples? Worse, the training set may be a stream of samples x of effectively infinite size.
The Stochastic part of the SGD comes into play here. Pick m samples with m << N randomly and uniformly from the training set and calculate the derivative by using them:
dJ(X,Y,w)/dw =(approx) dJ'/dw = -(1/m)(dlog p(y1|w,x1)/dw + dlog p(y2|w,x2)/dw + ... + dlog p(ym|w,xm)/dw)
Remember that we had an empirical (or, in the case of an infinite training set, actual) data distribution p(x). The above operation of drawing m samples from p(x) and averaging them produces the unbiased estimator dJ'/dw for the actual derivative dJ(X,Y,w)/dw. What does that mean? Take many such m-sample draws, calculate the different dJ'/dw estimates, and average them as well; you get dJ(X,Y,w)/dw very closely, even exactly, in the limit of infinite sampling. It can be shown that these noisy but unbiased gradient estimates will behave like the original gradient in the long run. On average, SGD will follow the actual gradient's path (but it can get stuck at a different local minimum; it all depends on the selection of the learning rate). The minibatch size m is directly related to the inherent error in the noisy estimate dJ'/dw. If m is large, you get gradient estimates with low variance and you can use larger learning rates. If m is small or m=1 (online learning), the variance of the estimator dJ'/dw is very high and you should use smaller learning rates, or the algorithm may easily diverge out of control.
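To see the unbiasedness numerically, here is a small sketch using linear least squares as a stand-in for the network's loss; the model, data and batch size are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
N, d, m = 10_000, 5, 32
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
w = rng.normal(size=d)

def grad(Xb, yb, w):
    # gradient of the mean squared error (1/len(yb)) * sum_i (x_i.w - y_i)^2
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

full = grad(X, y, w)                                   # full-batch gradient dJ/dw
batches = (rng.integers(0, N, size=m) for _ in range(2000))
estimates = np.array([grad(X[idx], y[idx], w) for idx in batches])

# averaging many minibatch estimates dJ'/dw recovers the full gradient (unbiasedness)
print(np.linalg.norm(estimates.mean(axis=0) - full) / np.linalg.norm(full))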
Now enough theory, your actual question was
It is only in the last layer (the loss) that they are averaged by the number of samples. Is this correct? if so, does it mean that since in the last layer they are averaged, when doing backprop, all the gradients are also averaged automatically? Thanks!
Yes, it is enough to divide by m in the last layer, since the chain rule will propagate the factor (1/m) to all parameters once the lowermost layer is multiplied by it. You don't need to apply it separately to each parameter; that would be invalid.
In the last layer they are averaged, and in the previous layers they are summed. The summed gradients in previous layers are summed across the different nodes of the next layer, not over the examples. This averaging is done only to make the learning process behave similarly when you change the batch size: everything would work the same if you summed in the loss layer instead, as long as you decreased the learning rate appropriately.
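A tiny numpy check of that claim, again using a linear model as a stand-in (placeholder data, not from the answer): dividing the loss by m scales every parameter gradient by 1/m through the chain rule, so grad(mean loss) = grad(sum loss) / m.

import numpy as np

rng = np.random.default_rng(1)
m, d = 8, 3
X, y = rng.normal(size=(m, d)), rng.normal(size=m)
w = rng.normal(size=d)

residual = X @ w - y
grad_sum = 2 * X.T @ residual          # gradient of sum_i (x_i.w - y_i)^2
grad_mean = 2 * X.T @ residual / m     # gradient of (1/m) * sum_i (x_i.w - y_i)^2

print(np.allclose(grad_mean, grad_sum / m))   # True: the 1/m at the loss reaches every parameter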

Is there a need to normalise input vector for prediction in SVM?

For input data with features on different scales, I understand that the values used to train the classifier have to be normalized for correct classification (SVM).
So does the input vector used for prediction also need to be normalized?
The scenario I have is this: the training data is normalized, serialized and saved in the database; when a prediction has to be done, the serialized data is deserialized to get back the normalized numpy array, the classifier is fit on that numpy array, and the input vector is then passed in for prediction. So does this input vector also need to be normalized? If so, how do I do it, given that at prediction time I don't have the actual training data to normalize against?
Also, I am normalizing along axis=0, i.e. along the columns.
My code for normalizing is:
preprocessing.normalize(data, norm='l2', axis=0)
Is there a way to serialize preprocessing.normalize?
For SVMs, a scaler is recommended for several reasons.
Having all features on the same scale is better for many optimization methods.
Many kernel functions internally use a Euclidean distance to compare two samples (in the Gaussian kernel the Euclidean distance appears in the exponential term); if every feature has a different scale, the Euclidean distance only takes into account the features with the largest scale.
To put the features on the same scale, you subtract the mean and divide by the standard deviation:
x_i -> (x_i - m_i) / sigma_i
You must store the mean and standard deviation of every feature in the training set so you can apply the same transformation to future data.
In python you have functions to do that for you:
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
To obtain the means and standard deviations:
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(X)
To then normalize the training set (X is a matrix where every row is a sample and every column a feature):
X = scaler.transform(X)
After training, you must normalize future data in the same way before classification:
newData = scaler.transform(newData)
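Regarding the serialization part of the question: the fitted StandardScaler can be persisted next to the classifier and reloaded at prediction time, so the training data itself is no longer needed. A minimal sketch; the placeholder data, file names and the choice of joblib are assumptions, not from the question:

import joblib
import numpy as np
from sklearn import preprocessing, svm

X_train = np.random.rand(100, 5)           # placeholder for your training matrix
y_train = np.random.randint(0, 2, 100)     # placeholder labels
new_vector = np.random.rand(5)             # placeholder vector to classify later

scaler = preprocessing.StandardScaler().fit(X_train)
clf = svm.SVC().fit(scaler.transform(X_train), y_train)

joblib.dump(scaler, 'scaler.joblib')       # persist both the scaler and the classifier
joblib.dump(clf, 'svm.joblib')

# later, at prediction time
scaler = joblib.load('scaler.joblib')
clf = joblib.load('svm.joblib')
prediction = clf.predict(scaler.transform(new_vector.reshape(1, -1)))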

Speeding up the classification process - PCA combined with SVM?

I have a cyclic method running which collects a data set of 15,000 feature vectors with 30 dimensions (every 200ms). My current setup simply feeds all raw feature vectors to an SVM with an RBF (radial basis function) kernel. The classification results are rather unconvincing, and the process is costly in terms of time. I know that the dataset isn't that big, so real-time classification should be possible with the right subsampling or a reduced feature vector. The goal is to speed up the entire classification process (training/prediction) to a few milliseconds. To obtain labels in an unsupervised way, I currently run k-means on the feature vectors, pick a few of the resulting clusters and assign them class 1, and assign all others class 0.
The idea is now the following (a sketch follows below the list):
collect all 15,000 (N) feature vectors with 30 (D) dimensions
run PCA on all N feature vectors
use the eigenvalues to determine a reduced number of dimensions d (d < D)
feed the new set of (n < N) feature vectors (or: the eigenvectors?) to the SVM for training
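A hedged scikit-learn sketch of that PCA-then-SVM pipeline; the placeholder data, the 95%-variance rule for choosing d, and the scaling step are my own assumptions for illustration:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(2000, 30)              # placeholder, smaller than the real N=15,000 to keep the sketch quick
y = np.random.randint(0, 2, 2000)          # placeholder labels from the k-means relabelling

# PCA(n_components=0.95) keeps enough components to explain 95% of the variance,
# which is one common way to pick d from the eigenvalue spectrum
pipeline = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel='rbf'))
pipeline.fit(X, y)
print(pipeline.named_steps['pca'].n_components_)   # the d that was actually kept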
Maybe a KNN approach instead of the SVM would give similar results?
Does this approach make sense?
Any ideas to improve the process or change it in order to speed it up?
How do I determine the best value of d?
The classification accuracy shouldn't suffer too much from the time reduction.
EDIT: Data stream mining
I was just reading about Data Stream Mining. I think this topic fits my setup quite well since I have to extract knowledge structures from continuous, rapid data records. Maybe I should replace the SVM with a Gradient Boosted Tree?
Thanks!

word2vec: negative sampling (in layman term)?

I'm reading the paper below and I have some trouble understanding the concept of negative sampling.
http://arxiv.org/pdf/1402.3722v1.pdf
Can anyone help, please?
The idea of word2vec is to maximise the similarity (dot product) between the vectors for words which appear close together (in the context of each other) in text, and minimise the similarity of words that do not. In equation (3) of the paper you link to, ignore the exponentiation for a moment. You have
v_c . v_w
-------------------
sum_i(v_ci . v_w)
The numerator is basically the similarity between the word c (the context) and the word w (the target). The denominator computes the similarity of all other contexts ci and the target word w. Maximising this ratio ensures that words which appear closer together in text have more similar vectors than words that do not. However, computing this can be very slow, because there are many contexts ci. Negative sampling is one of the ways of addressing this problem: just select a couple of contexts ci at random. The end result is that if cat appears in the context of food, then the vector of food is more similar to the vector of cat (as measured by their dot product) than the vectors of several other randomly chosen words (e.g. democracy, greed, Freddy), instead of all other words in the language. This makes word2vec much, much faster to train.
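To make the saving concrete, here is a small numpy sketch contrasting the full softmax denominator with the negative-sampling surrogate; the vocabulary size, embedding dimension and random vectors are made-up placeholders:

import numpy as np

rng = np.random.default_rng(0)
V, dim, k = 50_000, 100, 5                 # vocabulary size, embedding size, negatives per pair
W_in = rng.normal(0, 0.1, (V, dim))        # "input" (target word) vectors
W_out = rng.normal(0, 0.1, (V, dim))       # "output" (context word) vectors
w, c = 123, 456                            # indices of one (target, context) pair

# full softmax: the denominator touches every one of the V output vectors
p_full = np.exp(W_out[c] @ W_in[w]) / np.exp(W_out @ W_in[w]).sum()

# negative sampling: only the positive pair plus k randomly drawn contexts are touched
sigmoid = lambda x: 1 / (1 + np.exp(-x))
neg = rng.integers(0, V, size=k)
loss = -np.log(sigmoid(W_out[c] @ W_in[w])) - np.log(sigmoid(-W_out[neg] @ W_in[w])).sum()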
Computing the softmax (the function that determines which words are similar to the current target word) is expensive, since it requires summing over all words in V (the denominator), which is generally very large.
What can be done?
Different strategies have been proposed to approximate the softmax. These approaches can be grouped into softmax-based and sampling-based approaches. Softmax-based approaches are methods that keep the softmax layer intact but modify its architecture to improve its efficiency (e.g. hierarchical softmax). Sampling-based approaches, on the other hand, completely do away with the softmax layer and instead optimise some other loss function that approximates the softmax (they approximate the normalization in the denominator of the softmax with some other loss that is cheap to compute, such as negative sampling).
The loss function in word2vec is something like:
J = -log p(w_O | w_I) = -log ( exp(u_O . v_I) / sum_{w=1..V} exp(u_w . v_I) )
which the logarithm decomposes into:
J = -u_O . v_I + log sum_{w=1..V} exp(u_w . v_I)
With some math and gradient formulas (see the references below for more details), negative sampling converts it to:
J = -log sigma(u_O . v_I) - sum_{i=1..k} log sigma(-u_i . v_I)
where sigma is the logistic sigmoid, v_I is the vector of the input (target) word, u_O the vector of the observed context word and u_i the vectors of the k sampled negative words.
As you can see, it has been converted into a binary classification task (y=1 positive class, y=0 negative class). As we need labels to perform our binary classification task, we designate all context words c as true labels (y=1, positive samples), and k words randomly selected from the corpus as false labels (y=0, negative samples).
For example, assume our target word is "Word2vec". With a window of 3, our context words are: The, widely, popular, algorithm, was, developed. These context words are treated as positive labels. We also need some negative labels. We randomly pick some words from the corpus (produce, software, Collobert, margin-based, probabilistic) and consider them negative samples. This technique of picking random examples from the corpus is called negative sampling.
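A small sketch of how such negative samples are typically drawn. The 3/4-power smoothing of the unigram distribution is the choice made in the original word2vec paper, and the word counts here are made up:

import numpy as np

rng = np.random.default_rng(0)
vocab = np.array(["the", "widely", "popular", "word2vec", "algorithm",
                  "produce", "software", "collobert", "probabilistic"])
counts = np.array([5000, 40, 120, 60, 80, 30, 90, 5, 25], dtype=float)  # made-up frequencies

noise = counts ** 0.75
noise /= noise.sum()                       # unigram distribution raised to the 3/4 power

k = 5
negatives = rng.choice(vocab, size=k, p=noise)   # k negative samples for one (target, context) pair
print(negatives)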
References:
(1) C. Dyer, "Notes on Noise Contrastive Estimation and Negative Sampling", 2014
(2) http://sebastianruder.com/word-embeddings-softmax/
I wrote a tutorial article about negative sampling here.
Why do we use negative sampling? -> to reduce computational cost
The cost function for vanilla Skip-Gram (SG) and Skip-Gram negative sampling (SGNS) looks roughly like this:
J(theta) = -(1/T) sum_{t=1..T} sum_{-c <= j <= c, j != 0} log p(w_{t+j} | w_t)
Note that T is the number of all vocabs. It is equivalent to V. In other words, T = V.
The probability distribution p(w_{t+j} | w_t) in SG is computed for all V vocabs in the corpus with the softmax:
p(w_{t+j} | w_t) = exp(u_{w_{t+j}} . v_{w_t}) / sum_{w=1..V} exp(u_w . v_{w_t})
V can easily exceed tens of thousands when training a Skip-Gram model. The probability needs to be computed V times, making it computationally expensive. Furthermore, the normalization factor in the denominator requires an extra V computations.
On the other hand, the probability distribution in SGNS is computed with sigmoids of dot products:
p(D=1 | w, c_pos) = sigma(c_pos . w),   p(D=0 | w, c_neg) = sigma(-c_neg . w)
c_pos is the word vector of the positive (true context) word, and W_neg holds the word vectors of all K negative samples in the output weight matrix. With SGNS, the probability needs to be computed only K + 1 times, where K is typically between 5 and 20. Furthermore, no extra iterations are necessary to compute the normalization factor in the denominator.
With SGNS, only a fraction of weights are updated for each training sample, whereas SG updates all millions of weights for each training sample.
How does SGNS achieve this? -> by transforming the multi-class classification task into a binary classification task.
With SGNS, word vectors are no longer learned by predicting context words of a center word. It learns to differentiate the actual context words (positive) from randomly drawn words (negative) from the noise distribution.
In real life, you don't usually observe a word like regression in the context of random words like Gangnam-Style or pimples. The idea is that if the model can distinguish the likely (positive) pairs from the unlikely (negative) pairs, good word vectors will be learned.
As an example, suppose the current positive word-context pair is (drilling, engineer), and K=5 negative samples are randomly drawn from the noise distribution: minimized, primary, concerns, led, page. As the model iterates through the training samples, the weights are optimized so that the probability for the positive pair approaches p(D=1|w,c_pos) ≈ 1 and the probability for each negative pair approaches p(D=1|w,c_neg) ≈ 0.
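A toy numpy sketch of that binary-classification view: a few SGD steps on the SGNS loss for one positive pair and K negatives, using randomly initialized placeholder vectors (the gradients follow the standard SGNS formulation, not code from this answer):

import numpy as np

rng = np.random.default_rng(0)
dim, K, lr = 50, 5, 0.5
sigmoid = lambda x: 1 / (1 + np.exp(-x))

w = rng.normal(0, 0.1, dim)               # vector of the centre word, e.g. "drilling"
c_pos = rng.normal(0, 0.1, dim)           # vector of the true context word, e.g. "engineer"
C_neg = rng.normal(0, 0.1, (K, dim))      # vectors of K randomly drawn negative words

for _ in range(100):                      # SGD on -log sig(c_pos.w) - sum log sig(-c_neg.w)
    g_pos = sigmoid(c_pos @ w) - 1
    g_neg = sigmoid(C_neg @ w)
    grad_w = g_pos * c_pos + g_neg @ C_neg
    c_pos -= lr * g_pos * w
    C_neg -= lr * g_neg[:, None] * w
    w -= lr * grad_w

print(sigmoid(c_pos @ w), sigmoid(C_neg @ w))   # -> close to 1, and all close to 0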

Libsvm: SVM normalizing starts from 0 or 0.001

I am using libsvm for my document classification.
I use only svm.h and svm.cc in my project.
Its struct svm_problem requires an array of svm_node containing only the non-zero entries, i.e. a sparse representation.
I get a vector of tf-idf values, let's say in the range [5,10]. If I normalize it to [0,1], all the 5's would become 0.
Should I remove these zeros when sending the data to svm_train?
Would removing them not reduce the information and lead to poor results?
Should I start the normalization from 0.001 rather than 0?
More generally, does normalizing to [0,1] reduce information for an SVM?
An SVM is not Naive Bayes: feature values are not counters but dimensions in a multidimensional real-valued space, so 0's carry exactly the same amount of information as 1's (which also answers your concern about removing 0 values: don't do it). There is no reason to ever normalize data to [0.001, 1] for an SVM.
The only issue here is that column-wise normalization is not a good idea for tf-idf, as it will degenerate your features to plain tf (for a particular i-th dimension, tf-idf is simply the tf value in [0,1] multiplied by a constant idf, so the normalization just multiplies by idf^-1). I would consider one of the following alternative preprocessing methods instead (a sketch follows after the list):
normalizing each dimension so that it has mean 0 and variance 1
decorrelation by computing x = C^(-1/2) x, where C is the data covariance matrix
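A brief numpy/scikit-learn sketch of both alternatives on a hypothetical tf-idf matrix; the data, shapes and the small epsilon for numerical stability are placeholders of my own:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 50) * 5 + 5        # placeholder tf-idf matrix, values roughly in [5, 10]

# 1) zero mean, unit variance per dimension
X_std = StandardScaler().fit_transform(X)

# 2) decorrelation (whitening): x -> C^(-1/2) x, with C the data covariance matrix
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
C_inv_sqrt = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-8)) @ eigvecs.T
X_white = Xc @ C_inv_sqrt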
