I am working on a net in TensorFlow that produces a vector, which is then passed through a softmax to give my output.
Now I have been testing this and, weirdly enough, the vector (the one that passed through the softmax) has zeros in all coordinates but one.
Based on the softmax's definition with the exponential, I assumed that this wasn't supposed to happen. Is this an error?
EDIT: My vector is 120x160 = 19200. All values are float32.
It may not be an error. You need to look at the input to the softmax as well. It is quite possible this vector has very negative values and a single very positive value. This would result in a softmax output vector containing all zeros and a single one value.
You correctly pointed out that the softmax numerator should never have zero values due to the exponential. However, due to limited floating-point precision, the numerator can be a very small value, say exp(-50000), which effectively evaluates to zero.
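As a quick numeric check (a minimal NumPy sketch with made-up logits, using float32 as in the question), one clearly dominant logit among very negative ones produces a softmax vector that is exactly one-hot once the other exponentials underflow:

```python
import numpy as np

# Hypothetical logits: two very negative values and one clearly dominant one.
logits = np.array([-50000.0, -49000.0, 80.0], dtype=np.float32)

# The usual numerically stable softmax (subtract the max before exponentiating).
shifted = logits - logits.max()
probs = np.exp(shifted) / np.exp(shifted).sum()

print(probs)  # [0. 0. 1.] -- the small exponentials underflow to exactly zero
```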
In the paper CountNet: Estimating the Number of Concurrent Speakers Using Supervised Learning, which I recently read, it is specified that the 3D volume output from a CNN layer must be reduced into a 2-dimensional sequence before entering the LSTM layer. Why is that? What's wrong with using the 3-dimensional format?
The standard LSTM neural network assumes input of the following size:
[batch size] × [sequence length] × [feature dim]
The LSTM first multiplies each vector of size [feature dim] by a matrix, and then combines them in a fancy way. What's important here is that there's a vector for each example (the batch dimension) and each timestep (the sequence-length dimension). In a sense, this vector is first transformed by one or more matrix multiplications (possibly involving some pointwise non-linearities, which don't change the shape, so I don't mention them) into a hidden state update, which is also a vector, and the updated hidden state vector is then used to produce the output (also a vector).
As you can see, the LSTM is designed to operate on vectors. You could design a Matrix-LSTM – an LSTM counterpart that assumes any or all of the following are matrices: the input, the hidden state, the output. That would require you to replace the matrix-vector multiplications that process the input (or the state) with a generalized linear operation that can turn any matrix into any other, which would be given by a rank-4 tensor, I believe. However, it'd be equivalent to just reshaping the input matrix into a vector, reshaping the rank-4 tensor into a matrix, doing a matrix-vector product and then reshaping the output back into a matrix, so it makes little sense to devise such Matrix-LSTMs instead of just reshaping your inputs.
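As a small sanity check of that equivalence (a NumPy sketch with made-up shapes), applying a rank-4 tensor to a matrix gives the same result as flattening the matrix, multiplying by the correspondingly reshaped tensor, and reshaping back:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 2, 3))   # rank-4 tensor: maps 2x3 matrices to 4x5 matrices
X = rng.normal(size=(2, 3))         # a matrix-valued input

# Direct application of the rank-4 tensor.
Y_tensor = np.einsum("mnpq,pq->mn", T, X)

# Equivalent: reshape the tensor to a matrix, the input to a vector, multiply, reshape back.
Y_reshaped = (T.reshape(4 * 5, 2 * 3) @ X.reshape(2 * 3)).reshape(4, 5)

print(np.allclose(Y_tensor, Y_reshaped))  # True
```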
That said, it might still make sense to design a generalized LSTM that takes something other than a vector as input if you know something about the input structure that suggests a more specific linear operator than a general rank-4 tensor. For example, images are known to have local structure (nearby pixels are more related than those far apart), hence using convolutions is more "reasonable" than reshaping images to vectors and then performing a general matrix multiplication. In a similar fashion you could replace all the matrix-vector multiplications in the LSTM with convolutions, which would allow for image-like inputs, states and outputs.
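For the concrete question, a minimal Keras sketch (with made-up layer sizes and input shape, not CountNet's actual configuration) of collapsing the CNN's frequency and channel axes into a single feature axis before the LSTM might look like this:

```python
import tensorflow as tf

# Hypothetical spectrogram input: 500 time steps, 201 frequency bins, 1 channel.
inputs = tf.keras.Input(shape=(500, 201, 1))

x = tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D(pool_size=(1, 2))(x)  # pool along frequency only

# Collapse (frequency, channels) into one feature axis so the tensor becomes
# [batch, sequence length, feature dim], which is what the LSTM expects.
x = tf.keras.layers.Reshape((x.shape[1], -1))(x)

x = tf.keras.layers.LSTM(40)(x)
outputs = tf.keras.layers.Dense(11, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```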
I'm trying to understand the SVR model.
To do that I looked at SVM, which is pretty clear to me, but there aren't many explanations about SVR.
The first question is: why is it called Support Vector Regression, i.e. how do we use vectors to predict numerical values?
Also, I don't understand some parameters such as epsilon and gamma. How do they influence the predicted result?
An SVM learns a so-called decision function from your features, such that features from your positive class produce positive real numbers and features from the negative class produce negative numbers (at least most of the time, depending on your data).
For two features you can visualize this in a 2D plane. The function assigns a real value to each point in the plane; this value can be depicted as a color. This plot shows the values as different shades of blue.
The feature values resulting in zero form the so-called decision boundary.
This function itself has two kinds of parameters:
Kernel-dependent parameters, which you set before learning. In your case, with the radial basis function kernel, this is gamma (epsilon is not a kernel parameter; it is the width of the insensitive tube in the SVR loss, also set before learning).
And the so-called support vectors, which are determined during learning. Support vectors are just parameters of your decision function.
Learning is nothing more than determining good support vectors (parameters!).
In this 2D example video the colors don't show the actual function value, but only its sign. You can see how gamma influences the smoothness of the decision function.
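As a small illustration of both points (a scikit-learn sketch on a toy dataset; the original answer is not tied to any particular library), the fitted decision function is parameterized by the learned support vectors, and gamma controls how locally each of them acts, i.e. how smooth the function is:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

for gamma in [0.1, 10.0]:
    clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
    value = clf.decision_function([[0.0, 0.5]])[0]
    print(f"gamma={gamma}: {len(clf.support_vectors_)} support vectors, "
          f"decision value at (0, 0.5) = {value:.3f}")
```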
To answer your question:
SVR builds such a function, but with a different goal. The function does not try to assign positive outcomes to your positive examples and negative outcomes to the negative examples.
Instead, the function is built to approximate the given numeric outcomes, ignoring deviations smaller than epsilon.
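A minimal scikit-learn sketch (toy 1-D data; the settings are my own, not from the question) showing how epsilon and gamma influence the fitted SVR:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=60)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=60)

# Smaller epsilon -> narrower insensitive tube -> more support vectors, tighter fit.
# Larger gamma   -> narrower RBF bumps        -> wigglier (less smooth) function.
for eps, gamma in [(0.5, 0.5), (0.01, 0.5), (0.01, 50.0)]:
    model = SVR(kernel="rbf", C=1.0, epsilon=eps, gamma=gamma).fit(X, y)
    print(f"epsilon={eps}, gamma={gamma}: {len(model.support_)} support vectors")
```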
I see a formula to calculate the probability for any value (x = x1) in the image attached. Shouldn't the probability of a continuous variable taking any particular value be zero? Because probability is the area, right? Which is computed between two values. So shouldn't the probability be 0 for any particular continuous value? Please correct me if I am wrong!
You are correct. The probability of any particular value in a continuous distribution is zero. The equation you've posted isn't a formula for the probability; it's a formula for the probability density function (PDF).
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood of a continuous random variable taking on any particular value is 0 (since there is an infinite set of possible values to begin with), the values of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other.
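A quick numeric illustration with SciPy (using a standard normal distribution purely as an example): the density at a single point is not a probability, but integrating the density over an interval gives one:

```python
from scipy import stats
from scipy.integrate import quad

dist = stats.norm(loc=0.0, scale=1.0)   # standard normal, just as an example

print(dist.pdf(0.0))                    # density at x = 0 (~0.399), not a probability
prob, _ = quad(dist.pdf, -1.0, 1.0)     # P(-1 <= X <= 1) by integrating the PDF
print(prob)                             # ~0.683
print(dist.cdf(1.0) - dist.cdf(-1.0))   # same probability via the CDF
```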
I am using svm light to train a model for binary classification. Using the model, I tested some examples. I was surprised to see the output of the prediction file, it contains values greater than 1 as well as less than -1. I thought the range is [-1,1]. Am I doing something wrong?
It makes sense that the values are not bounded by the interval [-1, 1] once you understand how the SVM works. An SVM tries to draw the line which separates the negative and positive data points while maximizing their distances from the line.
The values in the prediction file represent the distances of the data points from the SVM's optimal hyperplane, where positive values are on the positive-class side of the hyperplane and negative values are on the negative-class side. These distances can be arbitrarily large or small and are not bounded, as can be seen in this image:
I've seen some SVM implementations, such as Weka's implementation of Platt's SMO, that normalize the values so that they are confidence values for the positive class bounded by the interval [0, 1]. Both ways work just fine for determining how confident an SVM is in a classification, since a data point further from the hyperplane is classified with more certainty than one lying close to the hyperplane.
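A short scikit-learn sketch of the same idea (scikit-learn used here as a stand-in for SVMlight, which I haven't reproduced): the raw decision values are unbounded signed scores, while Platt-style scaling maps them into [0, 1]:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=0)
clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)

scores = clf.decision_function(X)    # signed scores, not limited to [-1, 1]
probs = clf.predict_proba(X)[:, 1]   # Platt-scaled confidences in [0, 1]

print(scores.min(), scores.max())
print(probs.min(), probs.max())
```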
I know the form of softmax regression, but I am curious why it has such a name. Or is it just for historical reasons?
The maximum of two numbers, max(x, y), has a sharp corner (it is not differentiable where x = y), which is sometimes an unwanted property (e.g. if you want to compute gradients).
To soften the edges of max(x,y), one can use a variant with softer edges: the softmax function. It's still a max function at its core (well, to be precise it's an approximation of it) but smoothed out.
If it's still unclear, here's a good read.
Let's say you have a set of scalars x_i and you want to calculate a weighted sum of them, giving a weight w_i to each x_i such that the weights sum up to 1 (like a discrete probability distribution). One way to do it is to set w_i = exp(a*x_i) for some positive constant a, and then normalize the weights so they sum to one. If a = 0 you get just a regular sample average. On the other hand, for a very large value of a you get the max operator, that is, the weighted sum will be just the largest x_i. Therefore, varying the value of a gives you a "soft", or continuous, way to go from regular averaging to selecting the max. The functional form of this weighted average should look familiar to you if you already know what softmax regression is.
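A short numeric sketch of this argument (plain NumPy, with an arbitrary set of scalars): as a grows, the exponentially weighted average moves from the plain mean towards max(x_i):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 7.0])

def soft_weighted_sum(x, a):
    w = np.exp(a * (x - x.max()))  # subtract the max for numerical stability
    w /= w.sum()                   # normalize the weights to sum to 1
    return float(np.dot(w, x))

for a in [0.0, 0.5, 2.0, 10.0]:
    print(a, soft_weighted_sum(x, a))
# a = 0 gives the plain average (3.25); a = 10 is already very close to max(x) = 7
```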