How would I approach learning changes in speed using RNNs/LSTMs given x,y coordinates of continuous data? (I have to use a recurrent layer as this is a sub-problem of a bigger end-to-end model that does other things too)
Training data example:
x,y,speed_changed
0,0,0
0,0.1,0
0,0.2,0
0,0.3,0
0,0.5,1
0,0.6,0
0,0.7,0
...
So far I have constructed a stateful LSTM and trained it on one item per batch. Afterwards, I reset the state of the LSTM every time there is a change in speed, so it learns that a segment had the same speed (segments can have different lengths).
How do I use such model in production then since the segments have different lengths?
Or is there a better way to train the recurrent net on such data? Perhaps an anomaly detection? (I want to avoid having a fixed batch size (e.g. window of 3 frames))
The structure of RNNs and LSTMs will not let you do this directly, and here is the reason why. The state update of an RNN is:
h(t) = tanh(W * h(t-1) + U * x(t) + bias)
Note that W, U and the bias are the same no matter how many time frames you use for the RNN. So given some input vector, the output will be a function of linear combinations p1*X1 + p2*X2 and so forth, where X1 is X in your example and X2 is Y.
However - to detect a change in speed - you need a different calculation. A change in speed indicates that a different distance was traveled between time frames 1 and 2, and between time frames 2 and 3.
The traveled distance is SQRT((X1(t)-X1(t-1))^2 + (X2(t)-X2(t-1))^2).
This means that you need a computation involving products of the coordinates, such as X1(t)^2 or X1(t)*X1(t-1) - and a standard RNN or LSTM cell cannot produce these from a linear combination of its inputs.
However, you could achieve what you need indirectly, by using a custom activation function that calculates the distance travelled in the latest time frame. With such a function you can take the vector X1(t), X2(t), X1(t-1), X2(t-1) and calculate the distance D. At t=1 you may use 0's for X1(t=0) and X2(t=0).
Your custom activation function should look like D = (X1(t) - X1(t-1))^2 + (X2(t) - X2(t-1))^2. This way, if the speed is the same between time frames, you feed the RNN constant D values, so you can expect the RNN to learn weights that simulate a function of D(t) - D(t-1). A minimal sketch of this idea follows.
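A minimal PyTorch sketch of that idea, with one adaptation: D is computed as an explicit preprocessing step and appended to the input, rather than inside the cell. The names SpeedChangeNet and displacement_feature are mine, and all hyperparameters are illustrative:

import torch
import torch.nn as nn

def displacement_feature(coords):
    # coords: (batch, time, 2) tensor of (x, y) positions
    # repeat the first frame as the "previous" position, so D(t=1) = 0
    prev = torch.cat([coords[:, :1, :], coords[:, :-1, :]], dim=1)
    # squared distance travelled between consecutive frames
    d = ((coords - prev) ** 2).sum(dim=-1, keepdim=True)
    return torch.cat([coords, d], dim=-1)   # (batch, time, 3)

class SpeedChangeNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, coords):
        out, _ = self.lstm(displacement_feature(coords))
        return torch.sigmoid(self.head(out))   # per-frame P(speed_changed)

Because the speed signal is now a plain per-frame feature, no state resets should be needed, and sequences of any length can be fed through at inference time.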
I was going through one of the deep learning lectures from MIT on CNNs. It said that when multiplying weights with pixel values, a non-linear activation function like ReLU can be applied to every pixel. I understand why it should be applied in a simple neural network, since it introduces non-linearity into our input data. But why would I want to apply it to a single pixel? Or am I getting it wrong?
You may have got it a little wrong.
When they say "multiplying weights with pixel values" - they refer to the linear operation of multiplying the filter (weights + bias) with the pixels of the image. If you think about it, each filter in a CNN essentially represents a linear equation.
For example - if we're looking at a 2*2 filter, the filter is essentially computing x1 * w1 + x2 * w2 + x3 * w3 + x4 * w4 + b for every 2*2 patch of the image it goes over. (In the above equation, x1, x2, x3, x4 refer to pixels of the image, while w1, w2, w3, w4 refer to the weights present in the CNN filter.)
Now, hopefully it's fairly clear that the filter is essentially computing a linear equation. To be able to perform a task like let's say image classification, we require some amount of non-linearity. This is achieved by using, most popularly, the ReLU activation function.
So you aren't applying non linearity to a "pixel" per se, you're still applying it to a linear operation (like in a vanilla neural network) - which consists of pixel values multiplied by the weights present in a filter.
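A tiny numeric illustration of this point (all values are hypothetical): the activation is applied to the filter's weighted sum over a patch, not to a raw pixel.

import numpy as np

patch = np.array([[0.2, 0.8],
                  [0.5, 0.1]])       # a 2x2 patch of pixel values
weights = np.array([[1.0, -1.0],
                    [0.5, -0.5]])    # the 2x2 filter's weights
b = -0.3                             # the filter's bias

linear_out = np.sum(patch * weights) + b    # x1*w1 + x2*w2 + x3*w3 + x4*w4 + b
activated = np.maximum(linear_out, 0.0)     # ReLU applied to the linear output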
Hope this cleared your doubt, feel free to reach out for more help!
Suppose I have a PyTorch Cuda Float tensor x of the shape [b,c,h,w] taking on any arbitrary value allowed by Float Tensor range. I want to normalise it in the range [0,1].
I think of the following algorithm (but any other will also do).
Step1: Find minimum in each batch. Call it min and having shape [b,1,1,1].
Step2: Similarly find the maximum and call it max.
Step3: Use y = (x-min)/max. Alternatively use y = (x-min)/(max-min). I don't know which one will be better. y should have the same shape as that of x.
I am using PyTorch 1.3.1.
Specifically I am unable to get the desired min using torch.min(). Same goes for max.
I am going to use it for feeding into a pre-trained VGG to calculate a perceptual loss (after the above normalisation I will additionally bring the values to the ImageNet mean and std). For certain reasons I cannot enforce the [0,1] range during the data loading part: previous works in my area use a very specific normalisation algorithm which has to be used, but it sometimes does not ensure a [0,1] bound and only lands somewhere in its vicinity. That is why, at the time of computing the perceptual loss, I have to do this explicit normalisation as a precaution. All out-of-the-box implementations of perceptual loss I am aware of assume the data is in the [0,1] or [-1,1] range and so do not do this transformation.
Thank you very much.
Not the most elegant way, but you can do that using keepdim=True and specifying each of the dimensions:
x_min = x.min(dim=1, keepdim=True)[0].min(dim=2, keepdim=True)[0].min(dim=3, keepdim=True)[0]   # per-sample min, shape [b,1,1,1]
x_max = x.max(dim=1, keepdim=True)[0].max(dim=2, keepdim=True)[0].max(dim=3, keepdim=True)[0]   # per-sample max, shape [b,1,1,1]
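From there the normalisation itself is straightforward; a sketch (the eps is my addition, to guard against a constant sample where max equals min):

eps = 1e-8
y = (x - x_min) / (x_max - x_min + eps)   # same shape as x, values in [0, 1]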
I have hypothesis function h(x) = theta0 + theta1*x.
How can I select theta0 and theta1 value for the linear regression model?
The question does not make clear whether you would like to do this by hand (with the underlying math), use a program like Excel, or solve it in a language like MATLAB or Python.
To start, here is a website offering a summary of the math involved for a univariate calculation: http://www.statisticshowto.com/probability-and-statistics/regression-analysis/find-a-linear-regression-equation/
Here, there is some discussion of the matrix formulation of the multivariate problem (I know you asked for univariate but some people find the matrix formulation helps them conceptualize the problem): https://onlinecourses.science.psu.edu/stat501/node/382
We should start with a bit of intuition, based on the level of the question. The goal of a linear regression is to find a set of variables, in your case thetas, that minimize the distance between the fitted line and the observed data points (often, the square of this distance). You have two "free" variables in the equation you defined. First, theta0: this is the intercept, the value of the response variable (h(x)) when the input variable (x) is 0. Visually, this is the point where the line crosses the y axis. The second variable you have defined is the slope (theta1); it expresses how much the response variable changes when the input changes. If theta1 = 0, h(x) does not change when x changes. If theta1 = 1, h(x) increases and decreases at the same rate as x. If theta1 = -1, h(x) responds in the opposite direction: if x increases, h(x) decreases by the same amount, and vice versa.
For more information, Mathworks provides a fairly comprehensive explanation: https://www.mathworks.com/help/symbolic/mupad_ug/univariate-linear-regression.html
So after getting a handle on what we are doing conceptually, let's take a stab at the math. We'll need to calculate the standard deviation of our two variables, x and h(x). To calculate the standard deviation, we first calculate the mean of each variable (sum up all the x's and then divide by the number of x's; do the same for h(x)). The standard deviation captures how much a variable differs from its mean. For each x, subtract the mean of x and square the difference. Sum these squared differences up, divide by the number of x's minus 1, and finally take the square root. This is your standard deviation.
Using this, we can normalize both variables. For x, subtract the mean of x and divide by the standard deviation of x. Do this for h(x) as well. You will now have two lists of normalized numbers.
For each normalized number, multiply the value by its pair (the first normalized x value with its h(x) pair, and so on for all values). Add these products together and divide by the number of values minus 1, matching the sample standard deviation above. This gives you the correlation. To get the least squares estimate of theta1, multiply this correlation by the standard deviation of h(x) divided by the standard deviation of x.
Given all this information, calculating the intercept (theta0) is easy: take the mean of h(x) and subtract the product of our calculated theta1 and the mean of x.
Phew! All taken care of! We have our least squares solution for those two variables. Let me know if you have any questions! One last excellent resource: https://people.duke.edu/~rnau/mathreg.htm
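If it helps, here is the whole recipe as a short NumPy sketch (the data values are made up for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # inputs
h = np.array([2.1, 3.9, 6.2, 8.1, 9.8])     # observed responses h(x)

n = len(x)
x_mean, h_mean = x.mean(), h.mean()
x_std = np.sqrt(((x - x_mean) ** 2).sum() / (n - 1))   # sample standard deviation
h_std = np.sqrt(((h - h_mean) ** 2).sum() / (n - 1))

# correlation of the normalized variables
r = (((x - x_mean) / x_std) * ((h - h_mean) / h_std)).sum() / (n - 1)

theta1 = r * h_std / x_std          # least squares slope
theta0 = h_mean - theta1 * x_mean   # intercept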
If you are asking about the hypothesis function in linear regression, then those theta values are typically selected by an algorithm called gradient descent, which iteratively adjusts them to minimize the cost function.
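For completeness, a minimal gradient descent sketch for h(x) = theta0 + theta1*x (the learning rate, iteration count, and data are illustrative choices):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

theta0, theta1 = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    error = (theta0 + theta1 * x) - y
    # gradients of the cost J = (1/(2n)) * sum(error^2)
    theta0 -= lr * error.mean()
    theta1 -= lr * (error * x).mean()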
I tried to implement GMMs but I ran into a few problems during the EM algorithm.
Let's say I've got 3D Samples (stat1, stat2, stat3) which I use to train the GMMs.
One of my training sets for one of the GMMs has a "0" for stat1 in nearly every sample. During training I get really small numbers (like 1.4456539880060609E-124) in the first row and column of the covariance matrix, which in the next iteration of the EM algorithm leads to 0.0 in the first row and column.
I get something like this:
0.0 0.0 0.0
0.0 5.0 6.0
0.0 2.0 1.0
I need the inverse covariance matrix to calculate the density but since one column is zero I can't do this.
I thought about falling back to the old covariance matrix (and mean), or replacing every 0 with a really small number.
Or is there a another simple solution to this problem?
Simply put, your data lies in a degenerate subspace of your actual input space, and GMM in its most generic form is not well suited to such a setting. The problem is that the empirical covariance estimator you use simply fails for such data (as you said - you cannot invert it). What do you usually do? You change the covariance estimator to a constrained/regularized one, for example:
Constant-based shrinking: instead of using Sigma = Cov(X) you use Sigma = Cov(X) + eps * I, where eps is a predefined small constant and I is the identity matrix. Consequently you never have zero values on the diagonal, and it is easy to prove that for a reasonable epsilon the result is invertible (see the sketch after this list).
Nicely fitted shrinking, like the Oracle Approximating Shrinkage estimator or the Ledoit-Wolf covariance estimator, which find the best epsilon based on the data itself.
Constraining your Gaussians to, for example, the spherical family, i.e. N(m, sigma * I), where sigma = avg_i(cov(X[:, i])) is the mean covariance per dimension. This limits you to spherical Gaussians, but also solves the above issue.
There are many more possible solutions, but all are based on the same thing - change the covariance estimator in such a way that you have a guarantee of invertibility.
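A sketch of the constant shrinking option in NumPy (eps is a small constant you choose; the function name is mine):

import numpy as np

def regularized_cov(X, eps=1e-6):
    # X: (n_samples, n_features); empirical covariance plus eps * I
    sigma = np.cov(X, rowvar=False)
    return sigma + eps * np.eye(sigma.shape[0])

If you use scikit-learn, GaussianMixture exposes the same safeguard through its reg_covar parameter, and sklearn.covariance.LedoitWolf implements the Ledoit-Wolf estimator mentioned above.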
Seems like a basic question, but I need to use feature scaling (take each feature value, subtract the mean then divide by the standard deviation) in my implementation of linear regression with gradient descent. After I'm finished, I'd like the weights and regression line rescaled to the original data. I'm only using one feature, plus the y-intercept term. How would I change the weights, after I get them using the scaled data, so that they apply to the original unscaled data?
Suppose your regression is y = W*x + b with x the scaled data. With the original data x0 it becomes
y = (W/std) * x0 + (b - (u/std) * W)
where u and std are the mean and standard deviation of x0. That said, I don't think you need to transform the weights back. Just use the same u and std to scale the new test data.
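A small sketch of that back-transformation (the names are mine; it works on scalars or NumPy arrays alike):

def unscale_weights(W, b, u, std):
    # weights learned on x = (x0 - u) / std, mapped back so that
    # y = W0 * x0 + b0 reproduces y = W * x + b on the original data
    W0 = W / std
    b0 = b - W * u / std
    return W0, b0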