After decomposing the EEG signal, I reconstructed it by combining all of the decomposed sub-signals. After getting the reconstructed signal, I compared it with the original signal using the mean squared error (MSE) to find the best waveletFunction.
How do I reconstruct the signal? For decomposition I used:
[C,L] = wavedec(sig_new, 6, waveletFunction);
For reconstruction I used:
X = waverec(C,L,waveletFunction);
Please tell me whether this reconstruction procedure is correct or not.
I think it is not the correct way, because I didn't get any difference between the reconstructed signal and the original signal. Please help me.
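For reference, here is a minimal sketch of the same decompose/reconstruct/MSE comparison, written in Python with PyWavelets and NumPy (my assumption, not the original MATLAB setup; wavedec/waverec in pywt play the same roles):

import numpy as np
import pywt

def reconstruction_mse(signal, wavelet, level=6):
    # Decompose, reconstruct from all coefficients, and compare with the original.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    reconstructed = pywt.waverec(coeffs, wavelet)[:len(signal)]
    return np.mean((signal - reconstructed) ** 2)

sig_new = np.random.randn(1024)  # placeholder for the EEG signal
for wavelet in ['db1', 'db4', 'sym4']:
    print(wavelet, reconstruction_mse(sig_new, wavelet))
# The MSE is ~0 for every wavelet (wavedec/waverec reconstructs exactly),
# which matches what I observe with waverec in MATLAB.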
waveletFunction = 'db1';
[C,L] = wavedec(sig_new,8,waveletFunction);
cD1 = detcoef(C,L,1);
cD2 = detcoef(C,L,2);
cD3 = detcoef(C,L,3);
cD4 = detcoef(C,L,4);
cD5 = detcoef(C,L,5); %GAMMA
cD6 = detcoef(C,L,6); %BETA
cD7 = detcoef(C,L,7); %ALPHA
cD8 = detcoef(C,L,8); %THETA
cA8 = appcoef(C,L,waveletFunction,8); %DELTA
(or)
D1 = wrcoef('d',C,L,waveletFunction,1);
D2 = wrcoef('d',C,L,waveletFunction,2);
D3 = wrcoef('d',C,L,waveletFunction,3);
D4 = wrcoef('d',C,L,waveletFunction,4);
D5 = wrcoef('d',C,L,waveletFunction,5); %GAMMA
D6 = wrcoef('d',C,L,waveletFunction,6); %BETA
D7 = wrcoef('d',C,L,waveletFunction,7); %ALPHA
D8 = wrcoef('d',C,L,waveletFunction,8); %THETA
A8 = wrcoef('a',C,L,waveletFunction,8); %DELTA
Of the two decompositions above, which one is correct? What is the difference between them?
Please help me complete my research work.
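To illustrate the difference between the two listings, here is a small sketch in Python with PyWavelets (my assumption; MATLAB's detcoef/wrcoef behave analogously). The detcoef-style outputs are downsampled coefficient arrays in the wavelet domain, while the wrcoef-style outputs are full-length band signals that sum back to the original:

import numpy as np
import pywt

sig_new = np.random.randn(4096)          # placeholder for the EEG signal
wavelet, level = 'db1', 8

# Coefficient arrays, as returned by wavedec: [cA8, cD8, cD7, ..., cD1].
coeffs = pywt.wavedec(sig_new, wavelet, level=level)
print([len(c) for c in coeffs])          # downsampled: much shorter than the signal

# wrcoef-style band signals: keep one coefficient array, zero the rest, reconstruct.
bands = []
for k in range(len(coeffs)):
    kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, wavelet)[:len(sig_new)])

print(len(bands[0]))                     # each band has the full signal length
print(np.allclose(sum(bands), sig_new))  # the bands sum back to the original: True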
To practice Wiener deconvolution, I'm trying to perform a simple deconvolution:
import numpy as np
from numpy.fft import fft2, ifft2
from scipy import signal

def div(img1, img2):
    # Element-wise "safe" inverse of img2 (img1 is unused in the call below).
    res = np.zeros(img2.shape, dtype='complex_')
    for i in range(img2.shape[0]):
        for j in range(img2.shape[1]):  # iterate over columns, not rows twice
            if np.abs(img2[i][j]) > 0.001:
                res[i][j] = 1 / img2[i][j]
            else:
                res[i][j] = 0.001
    return res
filtre = np.asarray([[1, 1, 1],
                     [1, 1, 1],
                     [1, 1, 1]]) / 9     # 3x3 mean (box) filter
filtre_freq = fft2(filtre)               # spectrum of the 3x3 PSF
v = signal.convolve(img, filtre)         # blurred image
F = div(1, filtre_freq)                  # point-wise inverse of the 3x3 spectrum
f = ifft2(F)                             # back to the spatial domain
res = signal.convolve(v, f)              # attempt to undo the blur
I am trying to compute the inverse filter in the frequency domain, bring it back to the spatial domain, and convolve the blurred image with it. On paper it's pretty simple, even if I have to handle the divisions by zero without really knowing how to do that.
But my results seem really inconsistent:
If anyone can enlighten me on this ... Thanks in advance and have a great evening.
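One way to keep the frequency-domain step consistent is to pad the PSF to the image size before dividing, and to regularize the division instead of hard-thresholding it. Below is only a sketch of that idea; the eps value, the zero-padding, and the assumption that the blurred image has the same shape as the original (e.g. mode='same' in the convolution) are my choices, not part of the code above:

import numpy as np
from numpy.fft import fft2, ifft2

def inverse_filter(blurred, psf, eps=1e-3):
    # Pad the PSF to the image size so both spectra can be divided point-wise.
    psf_padded = np.zeros(blurred.shape)
    psf_padded[:psf.shape[0], :psf.shape[1]] = psf
    H = fft2(psf_padded)                              # transfer function of the blur
    G = fft2(blurred)                                 # spectrum of the blurred image
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized (Wiener-like) inverse
    return np.real(ifft2(F_hat))

# e.g. restored = inverse_filter(signal.convolve(img, filtre, mode='same'), filtre)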
I have developed two methods using SIFT and ORB, but it seems to me that the points do not correspond correctly. Am I using these functions wrongly or do I need something different?
orb = cv2.ORB_create()
keypoints_X, descriptor_X = orb.detectAndCompute(car1_gray, None)
keypoints_y, descriptor_y = orb.detectAndCompute(car2_gray, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True)
matches = bf.match(descriptor_X, descriptor_y)
matches = sorted(matches, key = lambda x: x.distance)
result = cv2.drawMatches(car1_gray, keypoints_X, car2_gray, keypoints_y, matches[:10], None, flags = 2)  # None lets OpenCV allocate the output image
sift = cv2.SIFT_create()
keypoints_X, descriptor_X = sift.detectAndCompute(car1_gray, None)
keypoints_y, descriptor_y = sift.detectAndCompute(car2_gray, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(descriptor_X, descriptor_y, k=2)
bom = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:  # Lowe's ratio test
        bom.append([m])
result = cv2.drawMatchesKnn(car1_gray, keypoints_X, car2_gray, keypoints_y, bom, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
Below are the results of SIFT and ORB:
Take a look at SuperGlue, a graph neural network based feature matcher. Although they do not provide training code, two pretrained models (indoor and outdoor) are available. Links:
https://github.com/magicleap/SuperGluePretrainedNetwork
https://psarlin.com/superglue/
https://arxiv.org/pdf/1911.11763.pdf
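Before switching methods, it can also help to verify the matches geometrically: fit a homography with RANSAC and keep only the inlier matches. A minimal sketch, assuming the SIFT keypoints and the bom list from the question (cv2.findHomography needs at least four matches):

import numpy as np
import cv2

good = [m[0] for m in bom]  # ratio-test survivors from the SIFT block
src = np.float32([keypoints_X[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([keypoints_y[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Fit a homography with RANSAC; the mask marks geometrically consistent matches.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(len(inliers), "of", len(good), "matches survive the RANSAC check")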
I'm using the timeslice method in caret's trainControl function to perform cross-validation on a time series model. I've noticed that RMSE increases with the horizon argument.
I realise this might happen for several reasons, e.g., if explanatory variables are being forecast and/or there's autocorrelation in the data such that the model can better predict nearer vs. farther ahead observations. However, I'm seeing the same behaviour even when neither is the case (see trivial reproducible example below).
Can anyone explain why RMSEs are increasing with horizon?
# Make data
X = data.frame(matrix(rnorm(1000 * 3), ncol = 3))
X$y = rowSums(X) + rnorm(nrow(X))
# Iterate over different forecast horizons and record RMSEs
library(caret)
forecast_horizons = c(1, 3, 10, 50, 100)
rmses = numeric(length(forecast_horizons))
for (i in 1:length(forecast_horizons)) {
  ctrl = trainControl(method = 'timeslice', initialWindow = 500, horizon = forecast_horizons[i], fixedWindow = TRUE)
  rmses[i] = train(y ~ ., data = X, method = 'lm', trControl = ctrl)$results$RMSE
}
print(rmses) #0.7859786 0.9132649 0.9720110 0.9837384 0.9849005
I am working on wavelets and I am new to this field. I want to decompose a signal into multiple bands, so I use wavedec() to decompose a signal into 5 levels and wrcoef() to reconstruct the individual bands. The problem is that when I sum the 5 bands, the reconstructed signal differs a lot from the original signal.
Please, can anybody help me with this?
Here is my code:
load sumsin; s = sumsin;
figure;plot(s);
% Perform decomposition at level 5 of s using sym4.
[c,l] = wavedec(s,5,'sym4');
% Reconstruct approximation at level 5,
% from the wavelet decomposition structure [c,l].
a1= wrcoef('a',c,l,'sym4',1);
a2 = wrcoef('a',c,l,'sym4',2);
a3 = wrcoef('a',c,l,'sym4',3);
a4 = wrcoef('a',c,l,'sym4',4);
a5 = wrcoef('a',c,l,'sym4',5);
figure; subplot(5,1,1); plot(a1); title('Approximation at level 1');
subplot(5,1,2); plot(a2); title('Approximation at level 2');
subplot(5,1,3); plot(a3); title('Approximation at level 3');
subplot(5,1,4); plot(a4); title('Approximation at level 4');
subplot(5,1,5); plot(a5); title('Approximation at level 5');
figure;plot(a1+a2+a3+a4+a5);title('Reconstruct Original signal');
To reconstruct the original signal you need to sum the five detail components and the approximation component at the last (fifth) level.
d1= wrcoef('d',c,l,'sym4',1);
d2 = wrcoef('d',c,l,'sym4',2);
d3 = wrcoef('d',c,l,'sym4',3);
d4 = wrcoef('d',c,l,'sym4',4);
d5 = wrcoef('d',c,l,'sym4',5);
a5 = wrcoef('a',c,l,'sym4',5);
s_origin=d1+d2+d3+d4+d5+a5;
I'm trying to learn Theano and decided to implement linear regression (using their logistic regression example from the tutorial as a template). I'm getting a weird thing where T.grad doesn't work if my cost function uses .sum(), but does work if my cost function uses .mean(). Code snippet:
(THIS DOESN'T WORK, IT RESULTS IN A w VECTOR FULL OF NaNs):
x = T.matrix('x')
y = T.vector('y')
w = theano.shared(rng.randn(feats), name='w')
b = theano.shared(0., name="b")
# now we do the actual expressions
h = T.dot(x,w) + b # prediction is dot product plus bias
single_error = .5 * ((h - y)**2)
cost = single_error.sum()
gw, gb = T.grad(cost, [w,b])
train = theano.function(inputs=[x,y], outputs=[h, single_error], updates = ((w, w - .1*gw), (b, b - .1*gb)))
predict = theano.function(inputs=[x], outputs=h)
for i in range(training_steps):
    pred, err = train(D[0], D[1])
(THIS DOES WORK, PERFECTLY):
x = T.matrix('x')
y = T.vector('y')
w = theano.shared(rng.randn(feats), name='w')
b = theano.shared(0., name="b")
# now we do the actual expressions
h = T.dot(x,w) + b # prediction is dot product plus bias
single_error = .5 * ((h - y)**2)
cost = single_error.mean()
gw, gb = T.grad(cost, [w,b])
train = theano.function(inputs=[x,y], outputs=[h, single_error], updates = ((w, w - .1*gw), (b, b - .1*gb)))
predict = theano.function(inputs=[x], outputs=h)
for i in range(training_steps):
    pred, err = train(D[0], D[1])
The only difference is in the cost = single_error.sum() vs single_error.mean(). What I don't understand is that the gradient should be the exact same in both cases (one is just a scaled version of the other). So what gives?
The learning rate (0.1) is way too big. Using mean makes the cost effectively divided by the batch size, so that helps. But I'm pretty sure you should make it much smaller, not just divide it by the batch size (which is equivalent to using mean).
Try a learning rate of 0.001.
Try dividing your gradient descent step size by the number of training examples.
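A minimal sketch of that suggestion, reusing the variable names from the question (x, y, h, single_error, w, b, gw, gb, D) and assuming D[0] holds the n-by-feats training matrix; the only change to the .sum() version is the scaled step size:

n_examples = D[0].shape[0]   # number of training rows (assumed layout of D)
lr = 0.1 / n_examples        # step size divided by the number of examples

train = theano.function(
    inputs=[x, y],
    outputs=[h, single_error],
    updates=((w, w - lr * gw), (b, b - lr * gb)),
)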