Drake matrix indexing issue

This code errors in Drake:
forces_matrix = np.zeros((T - 1, O, O, D))
tril_indices = np.tril_indices(O, m=O, k=-1)
forces_matrix[:, tril_indices[0], tril_indices[1]] = f_oo
But it works in PyTorch / Numpy.
The LHS and RHS are both (9, 1, 1)-shaped numpy arrays.
Is this type of operation not supported in Drake?
EDIT: The error goes away if the initial forces_matrix is allocated with OBJECT dtype. However, I am not sure whether the computation graph is being built correctly connecting f_oo and forces_matrix. Is this type of operation supported?
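For reference, a minimal sketch of the object-dtype workaround mentioned in the edit. T, O, D and f_oo are placeholders here, not the original model's values; in the real program f_oo would hold Drake AutoDiffXd or symbolic expressions, which is exactly why dtype=object is needed:
import numpy as np

T, O, D = 10, 2, 1  # placeholder sizes matching the (9, 1, 1) shapes mentioned above
f_oo = np.ones((T - 1, 1, D), dtype=object)  # stand-in for the Drake expressions

# Allocating with dtype=object lets the array hold Drake expression types
forces_matrix = np.zeros((T - 1, O, O, D), dtype=object)
tril_indices = np.tril_indices(O, m=O, k=-1)
forces_matrix[:, tril_indices[0], tril_indices[1]] = f_oo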

Related

No method matching pcacov

I am trying to apply PCA to reduce dimensionality and noise using the Julia language, but I am getting an error message. Could you please help me solve this issue?
Are there other alternatives in Julia to perform the same task?
Here's the error message:
julia> X = (train_input)' |> Array;
julia> typeof(X)
Matrix{Real} (alias for Array{Real, 2})
julia> using MultivariateStats, MLJMultivariateStatsInterface
julia> M = fit(PCA, X; maxoutdim = 3)
MethodError: no method matching pcacov(::Matrix{Float64}, ::Vector{Real}; maxoutdim=3, pratio=0.99)
Closest candidates are:
pcacov(::AbstractMatrix{T}, ::AbstractVector{T}; maxoutdim, pratio) where T<: Real at C:\Users\USER\.julia\packages\MultivariateStats\rCiqT\src\pca.jl:209
I can't reproduce your error, but this is how I fit a PCA model with the MultivariateStats v0.10.0 package:
julia> using MultivariateStats
julia> X = rand(5, 100);
julia> fit(PCA, X, maxoutdim=3)
PCA(indim = 5, outdim = 3, principalratio = 0.6599153346885055)
Pattern matrix (unstandardized loadings):
────────────────────────────────────
PC1 PC2 PC3
────────────────────────────────────
1 0.201331 -0.0213382 0.0748083
2 0.0394825 0.137933 0.213251
3 0.14079 0.213082 -0.119594
4 0.154639 -0.0585538 -0.0975059
5 0.15221 -0.145161 0.0554158
────────────────────────────────────
Importance of components:
─────────────────────────────────────────────────────────
PC1 PC2 PC3
─────────────────────────────────────────────────────────
SS Loadings (Eigenvalues) 0.108996 0.0893847 0.0779532
Variance explained 0.260295 0.21346 0.186161
Cumulative variance 0.260295 0.473755 0.659915
Proportion explained 0.394436 0.323466 0.282098
Cumulative proportion 0.394436 0.717902 1.0
─────────────────────────────────────────────────────────
julia> typeof(X)
Matrix{Float64} (alias for Array{Float64, 2})
julia> eltype(X)
Float64
As you can see, I used a Matrix with Float64 element type as the input. That is the difference between my input and yours, I guess, so this might be the problem in your case.
Keep in mind that the rows represent the features and the columns represent the data samples!
Finally, since you asked for other alternatives, I introduce you to the WeightedPCA package. This package provides weighted principal component analysis (PCA) for data with samples of heterogeneous quality (heteroscedastic noise). Here is a quick example:
julia> using WeightedPCA
julia> X = rand(5, 100);
julia> pc1, pc2, pc3 = wpca.(Ref(collect(eachrow(X))), [1, 2, 3], Ref(UniformWeights()));
In the above, I fitted an equally weighted PCA on the X data and I requested values on 1, 2, and 3 principal components. Using this package, you can even apply specific weights or optimal weights. This package can be installed by pkg> add https://github.com/dahong67/WeightedPCA.jl.
Furthermore, as Antonello said, one can utilize the BetaML package to perform PCA. This package provides machine learning algorithms written in the Julia programming language. Let's use it to perform PCA:
julia> using BetaML
julia> X = rand(100, 5);
julia> mod = PCA(max_unexplained_var=0.3)
A PCA BetaMLModel (unfitted)
julia> reproj_X = fit!(mod,X)
100×4 Matrix{Float64}:
0.204151 -0.482558 -0.161929 0.222503
0.69425 -0.371519 -0.628404 0.462256
0.198191 -0.601537 -0.638573 0.463886
⋮
-0.00176858 0.557353 -0.4237 0.310565
0.533239 0.133691 -0.236009 -0.0793025
0.333652 -0.388115 -0.28662 0.481249
julia> info(mod)
Dict{String, Any} with 5 entries:
"explained_var_by_dim" => [0.277255, 0.484764, 0.669897, 0.846831, 1.0]
"fitted_records" => 100
"prop_explained_var" => 0.846831
"retained_dims" => 4
"xndims" => 5
In the above, max_unexplained_var specifies the proportion of variance that may remain unexplained in the reprojected dimensions, in other words, the maximum unexplained variance I'm ready to accept.
The error message is telling you that somewhere in the PCA fit an internal function is called which requires an AbstractMatrix{T} and an AbstractVector{T} as input, meaning the element type T of both arguments needs to be the same. In your case a Matrix{Float64} and a Vector{Real} are being passed. I assume that the Vector{Real} comes from your X input, which, as your first cell shows, is a Matrix{Real}.
This generally indicates an issue in the construction of X, which shouldn't have an abstract element type like Real. Try float.(X) as an input to coerce all elements to Float64.

Efficient pseudo-inverse for PyTorch 2D convolution

Background:
Thanks for your attention! I am learning the basics of 2D convolution, linear algebra, and PyTorch. I have encountered an implementation problem with the pseudo-inverse of the convolution operator. Specifically, I have no idea how to implement it in an efficient way. Please see the following problem statement for details. Any help/tip/suggestion is welcome.
(Thanks a lot for your attention!)
The Original Problem:
I have an image feature x with shape [b, c, h, w] and a 3x3 convolutional kernel K with shape [c, c, 3, 3], so that y = K * x. How can I implement the corresponding pseudo-inverse on y in an efficient way?
In other words, given y = K * x = Ax, how do I implement x_hat = (A^+)y?
I guess there should be a way using torch.fft, but I still have no idea how to implement it, and I do not know whether an implementation already exists.
import torch
import torch.nn.functional as F
c = 32
K = torch.randn(c, c, 3, 3)
x = torch.randn(1, c, 128, 128)
y = F.conv2d(x, K, padding=1)
print(y.shape)
# How to implement pseudo-inverse for y = K * x in an efficient way?
Some of My Efforts:
As far as I know, 2D convolution is a linear operator, equivalent to a matrix product, so we could actually write out the matrix form of the convolution and calculate its pseudo-inverse. However, I think this type of operation would be inefficient, and I have no idea how to implement it in an efficient way.
According to Wikipedia, the pseudo-inverse may satisfy the property A(A_pinv(x)) = x (at least when A has full row rank), where A is the convolution operator, A_pinv is its pseudo-inverse, and x may be any image feature.
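To make the matrix-form idea above concrete, here is a rough sketch (not from the original post, and not an efficient solution): the sizes c=2 and 8x8 are made up so the dense matrix stays tiny, the matrix is built by pushing unit impulses through conv2d, and the pseudo-inverse is then applied densely:
import torch
import torch.nn.functional as F

c, h, w = 2, 8, 8
K = torch.randn(c, c, 3, 3)
x = torch.randn(1, c, h, w)
y = F.conv2d(x, K, padding=1)

# Each basis vector e_i, reshaped to an image and convolved, gives one column of A.
n = c * h * w
A = F.conv2d(torch.eye(n).reshape(n, c, h, w), K, padding=1).reshape(n, -1).T

print(torch.allclose(A @ x.flatten(), y.flatten(), atol=1e-4))  # y = A x holds
x_hat = torch.linalg.pinv(A) @ y.flatten()  # dense pseudo-inverse: impractical at full image size
print(torch.mean((x.flatten() - x_hat) ** 2))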
(Thanks again for reading such a long post!)
This takes the problem to another level.
The convolution itself is a linear operation: you can determine the matrix of the operation and solve a least-squares problem directly [1], or compute the pseudo-inverse as you mentioned and then apply it to different outputs, predicting a projection of the input.
I am changing your code to use padding=0:
import torch
import torch.nn.functional as F
# your code
c = 32
K = torch.randn(c, c, 3, 3)
x = torch.randn(4, c, 128, 128)
y = F.conv2d(x, K, bias=torch.zeros((c,)))
Also, as you probably already suggested, the convolution can be computed as ifft(fft(h)*fft(x)). However, the conv2d function is a cross-correlation, so you have to conjugate the filter, leading to ifft(conj(fft(h))*fft(x)). You also have to apply this along two axes, and you have to make sure the FFTs are computed at the same size; since the data is real, we can use the multi-dimensional real FFT. To be complete, conv2d works on multiple channels, so we have to calculate sums of convolutions. Since the FFT is linear, we can simply compute those sums in the frequency domain using einsum.
s = x.shape[-2:]  # (128, 128): use the full input size for the FFTs
K_f = torch.fft.rfftn(K, s)
x_f = torch.fft.rfftn(x, s)
y_f = torch.einsum('jkxy,ikxy->ijxy', K_f.conj(), x_f)
y_hat = torch.fft.irfftn(y_f, s)
Except for the borders it should be accurate (remember FFT computes a cyclic convolution).
torch.max(abs(y_hat[:,:,:-2,:-2] - y[:,:,:,:]))
Now, notice the pattern jk,ik->ij in the einsum: it means y_f[i,j] = sum(K_f[j,k] * x_f[i,k]) = x_f @ K_f.T, if @ is taken as the matrix product over the first two dimensions. So to invert this operation we can interpret the first two dimensions as matrices. The function pinv computes pseudo-inverses over the last two axes, so in order to use it we have to permute the axes. If we right-multiply the output by the pseudo-inverse of the transposed K_f we should invert the operation.
s = 128,128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean((x)**2))
Notice that I am using the full (cyclic) convolution, but conv2d actually cropped the images. Let's apply that cropping by zeroing the border values that conv2d would not have produced:
k = 3  # kernel size, so k-1 = 2 rows and columns are lost by the valid convolution
y_hat[:,:,128-(k-1):,:] = 0
y_hat[:,:,:,128-(k-1):] = 0
Repeating the calculation, you will see that the recovered input is not accurate anymore, so you have to be careful about what you do with your convolution; but in situations where you can make this work, it is in fact efficient.
s = 128,128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean((x)**2))

Type error when use apex.amp O1 in PyTorch

When I try to use NVIDIA apex.amp O1 to accelerate my training, it reports an error at the line logits = einsum('b x y d, r d -> b x y r', q, rel_k):
RuntimeError: expected scalar type Half but found Float
It means that rel_k should be a torch.HalfTensor.
rel_k is defined as follows: self.rel_height = nn.Parameter(torch.randn(height * 2 - 1, dim_head) * scale)
But when I specify the type of rel_k to be torch.HalfTensor, it reports an error that I should not specify the dtype manually:
RuntimeError: Found param encoder.layers.0.blocks.0.attn.rel_pos_emb.rel_height with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model before passing it, no matter what optimization level you choose.
What should I do to use amp O1 correctly in my code?
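For reference, a hedged sketch of one common workaround, which is an assumption on my part and not something from this thread: keep rel_height as a float32 nn.Parameter, as amp expects, and cast it to the activation dtype at the point of use so the einsum operands match under O1:
# Assumed workaround, not from the original thread: cast the parameter to the
# dtype of q at use time, so it is Half under amp O1 and Float otherwise.
logits = einsum('b x y d, r d -> b x y r', q, rel_k.to(q.dtype))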

Getting the error "dtw() got an unexpected keyword argument 'dist'" while calculating dtw of 2 voice samples

I am getting the error "dtw() got an unexpected keyword argument 'dist'" while I'm trying to calculate the dtw of 2 wav files. I can't figure out why or what to do to fix it. I am attaching the code below.
import librosa
import librosa.display
y1, sr1 = librosa.load('sample_data/Abir_Arshad_22.wav')
y2, sr2 = librosa.load('sample_data/Abir_Arshad_22.wav')
%pylab inline
subplot(1, 2, 1)
mfcc1 = librosa.feature.mfcc(y1, sr1)
librosa.display.specshow(mfcc1)
subplot(1, 2, 2)
mfcc2 = librosa.feature.mfcc(y2, sr2)
librosa.display.specshow(mfcc2)
from dtw import dtw
from numpy.linalg import norm
dist, cost, acc_cost, path = dtw(mfcc1.T, mfcc2.T, dist=lambda x, y: norm(x - y, ord=1))
print ('Normalized distance between the two sounds:', dist)
The error occurs in the second-to-last line.
The error message is straightforward. Let's read the docs of the method you are calling:
https://dynamictimewarping.github.io/py-api/html/api/dtw.dtw.html#dtw.dtw
The dtw function has the following parameters:
x – query vector or local cost matrix
y – reference vector, unused if x given as cost matrix
dist_method – pointwise (local) distance function to use.
step_pattern – a stepPattern object describing the local warping steps
allowed with their cost (see [stepPattern()])
window_type – windowing function. Character: “none”, “itakura”,
“sakoechiba”, “slantedband”, or a function (see details).
open_begin,open_end – perform open-ended alignments
keep_internals – preserve the cumulative cost matrix, inputs, and
other internal structures
distance_only – only compute distance (no backtrack, faster)
You are trying to pass an argument named dist, and that argument simply is not known. The pointwise distance is controlled by dist_method instead, and the function returns a single alignment object rather than a tuple, so a minimal working call looks like
alignment = dtw(mfcc1.T, mfcc2.T)
print('Normalized distance between the two sounds:', alignment.normalizedDistance)
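If you also want the quantities your original tuple-unpacking expected (the cost matrix and the warping path), they are available as attributes of the returned alignment object. A hedged sketch, assuming the dtw-python API documented above (dist_method accepts anything scipy's cdist understands, and keep_internals=True is needed to retain the cost matrix):
alignment = dtw(mfcc1.T, mfcc2.T,
                dist_method=lambda a, b: norm(a - b, ord=1),
                keep_internals=True)
print('Distance:', alignment.distance)
print('Normalized distance:', alignment.normalizedDistance)
print('Accumulated cost matrix shape:', alignment.costMatrix.shape)
print('Warping path:', list(zip(alignment.index1, alignment.index2)))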

arguments and function call of LSTM in pytorch

Could anyone please explain the code below to me:
import torch
import torch.nn as nn
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
rnn = nn.LSTM(10,20,2)
output, (hn, cn) = rnn(input, (h0, c0))
print(input)
While calling rnn as rnn(input, (h0, c0)), we passed the arguments h0 and c0 in parentheses. What is that supposed to mean? If (h0, c0) represents a single value, then what is that value, and what is the third argument passed here?
However, in the line rnn = nn.LSTM(10,20,2) we pass the arguments to LSTM without extra parentheses.
Can anyone explain how this function call works?
The assignment rnn = nn.LSTM(10, 20, 2) instantiates a new nn.Module using the nn.LSTM class. Its first three arguments are input_size (here 10), hidden_size (here 20), and num_layers (here 2).
On the other hand, rnn(input, (h0, c0)) corresponds to actually calling the class instance, i.e. running __call__, which is roughly equivalent to the forward function of that module. The __call__ method of nn.LSTM takes two parameters: input (shaped (sequence_length, batch_size, input_size)), and a tuple of two tensors (h_0, c_0) (both shaped (num_layers, batch_size, hidden_size) in the basic use case of nn.LSTM).
Please refer to the PyTorch documentation whenever using built-in modules; you will find the exact definition of the parameter list (the arguments used to initialize the class instance) as well as the input/output specifications (used whenever inferring with that module).
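A small runnable illustration of those shapes (not from the original answer), using the numbers from the question:
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = torch.randn(5, 3, 10)   # (sequence_length, batch_size, input_size)
h0 = torch.randn(2, 3, 20)      # (num_layers, batch_size, hidden_size)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(input, (h0, c0))  # second argument is one tuple of two tensors
print(output.shape)  # torch.Size([5, 3, 20])
print(hn.shape)      # torch.Size([2, 3, 20])
print(cn.shape)      # torch.Size([2, 3, 20])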
You might be confused by the notation; here's a small example that could help:
tuple as input:
def fn1(x, p):
    a, b = p  # unpack the tuple argument
    return a*x + b

>>> fn1(2, (3, 1))
7
tuple as output:
def fn2(x):
    return x, (3*x, x**2)  # the output is a tuple of an int and a tuple

>>> fn2(2)
(2, (6, 4))
>>> x, (a, b) = fn2(2)  # unpacking
>>> x, a, b
(2, 6, 4)

Resources