I have gone through the code below and would like to know how I can count the outlier and inlier points after using RANSAC. Could you point me to example code showing how this is done?
Second question: which feature matching approach is better, BFMatcher.knnMatch() with a ratio test, or bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True) with shortest distance? Is there any reference for this comparison?
# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])

# Draw matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite('matches.jpg', img3)

# Select good matched keypoints
ref_matched_kpts = np.float32([kp1[m[0].queryIdx].pt for m in good_matches])
sensed_matched_kpts = np.float32([kp2[m[0].trainIdx].pt for m in good_matches])

# Compute homography
H, status = cv.findHomography(sensed_matched_kpts, ref_matched_kpts, cv.RANSAC, 5.0)
Count number of outliers and inliers
The status output of cv.findHomography is a mask that marks RANSAC inliers with 1 and outliers with 0, so:
# number of detected inliers: np.sum(status)
# number of detected outliers: len(status) - np.sum(status)
# inlier ratio (inliers / total matches): float(np.sum(status)) / float(len(status))
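A minimal runnable version of these counts (a sketch, assuming status is the mask returned by the cv.findHomography call above):
status = status.ravel()
num_inliers = int(np.sum(status))
num_outliers = len(status) - num_inliers
inlier_ratio = num_inliers / len(status)
print(f"inliers: {num_inliers}, outliers: {num_outliers}, inlier ratio: {inlier_ratio:.2f}")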
Feature Matching Algorithm
I would say that if you are using a float-descriptor algorithm (SIFT or SURF), BFMatcher.knnMatch() with a ratio test is preferred, while bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True) is intended for binary descriptors (ORB, BRIEF, BRISK, etc.). My suggestion would be to try both approaches on your project and see which one works better.
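For reference, a minimal sketch of the cross-check alternative, where des1_orb and des2_orb stand for binary (e.g. ORB) descriptors rather than the SIFT/SURF descriptors used above:
bf_hamming = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = bf_hamming.match(des1_orb, des2_orb)            # one mutual best match per descriptor
matches = sorted(matches, key=lambda m: m.distance)       # shortest Hamming distance first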
I was going through this research paper. This paper mentions that:
But upon searching and referring to other sources, I found that the rank-$M$ approximation of a matrix using the SVD is given as
$$ A_M = \sum_{i=1}^{M} \sigma_i \, u_i v_i^\top, $$
where $\sigma_i$ are the singular values and $u_i$, $v_i$ the corresponding left and right singular vectors.
Please explain the approach mentioned in the research paper.
They only complicate things a bit by splitting the SVD decomposition into the components that are used and the components that are discarded.
Notice that A = sum(s[i] * U[:, i:i+1] * V[i:i+1, :] for i in range(len(s))), where the variable V holds the rows of $V^\top$ (that is what NumPy's np.linalg.svd returns as its third output).
import numpy as np

A = np.random.rand(10, 15)
U, s, V = np.linalg.svd(A, full_matrices=False)  # V is already the transposed right-singular-vector matrix (V^T)
assert np.allclose(A, U @ np.diag(s) @ V)
assert np.allclose(A, sum(s[i] * U[:, i:i+1] * V[i:i+1, :]
                          for i in range(len(s))))
Notice that left-multiplication by U[:, :M].T projects the matrix onto the subspace spanned by the first M left singular vectors, and then left-multiplying by U[:, :M] maps the projected values back to the original basis.
M = 6
assert np.allclose(U[:, :M] @ U[:, :M].T @ A,
                   sum(s[i] * U[:, i:i+1] * V[i:i+1, :] for i in range(M)))
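As a quick sanity check (a small sketch reusing A, U, s, V and M from above), the projection form and the truncated-SVD sum agree, and by the Eckart-Young theorem the spectral-norm error they leave behind is the next singular value s[M]:
A_M = U[:, :M] @ U[:, :M].T @ A                                      # projection form
A_M_sum = sum(s[i] * U[:, i:i+1] * V[i:i+1, :] for i in range(M))    # truncated-SVD form
assert np.allclose(A_M, A_M_sum)
assert np.isclose(np.linalg.norm(A - A_M, 2), s[M])                  # Eckart-Young: error equals the next singular value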
I've been implementing VAE and IWAE models on the Caltech Silhouettes dataset and am having an issue where the VAE outperforms the IWAE by a modest margin (test LL ~120 for the VAE vs ~133 for the IWAE!). I don't believe this should be the case, according to both theory and the experiments produced here.
I'm hoping someone can spot an issue in my implementation that's causing this behavior.
The network I'm using to approximate q and p is the same as that detailed in the appendix of the paper above. The calculation part of the model is below:
data_k_vec = data.repeat_interleave(K, 0)  # generate K copies per datapoint (in my case K=50 produces this behavior)
mu, log_std = model.encode(data_k_vec)
z = model.reparameterize(mu, log_std)  # z = mu + torch.exp(log_std)*epsilon, epsilon ~ N(0, 1)
decoded = model.decode(z)  # this is the sigmoid output of the model
log_prior_z = torch.sum(-0.5 * z ** 2, 1) - 0.5 * z.shape[1] * torch.log(torch.tensor(2 * np.pi))
log_q_z = compute_log_probability_gaussian(z, mu, log_std)  # definitions below
log_p_x = compute_log_probability_bernoulli(decoded, data_k_vec)
if model_type == 'iwae':
    log_w_matrix = (log_prior_z + log_p_x - log_q_z).view(-1, K)
elif model_type == 'vae':
    log_w_matrix = (log_prior_z + log_p_x - log_q_z).view(-1, 1) * 1 / K
log_w_minus_max = log_w_matrix - torch.max(log_w_matrix, 1, keepdim=True)[0]
ws_matrix = torch.exp(log_w_minus_max)
ws_norm = ws_matrix / torch.sum(ws_matrix, 1, keepdim=True)
ws_sum_per_datapoint = torch.sum(log_w_matrix * ws_norm, 1)
loss = -torch.sum(ws_sum_per_datapoint)  # this loss is returned to the training function; loss.backward() is called on it
Here are the likelihood functions. I had to fuss with the Bernoulli log-likelihood in order to avoid NaNs during training:
def compute_log_probability_gaussian(obs, mu, logstd, axis=1):
    return torch.sum(-0.5 * ((obs - mu) / torch.exp(logstd)) ** 2 - logstd, axis) \
           - 0.5 * obs.shape[1] * torch.log(torch.tensor(2 * np.pi))

def compute_log_probability_bernoulli(theta, obs, axis=1):  # add 1e-18 inside the logs to avoid NaNs in training
    return torch.sum(obs * torch.log(theta + 1e-18) + (1 - obs) * torch.log(1 - theta + 1e-18), axis)
In this code there's a "shortcut" being used: in the model_type == 'iwae' case the row-wise importance weights are calculated over the K=50 samples in each row, while in the model_type == 'vae' case they are calculated over the single value left in each row, so each weight just ends up being 1. Maybe this is the issue?
Any and all help is hugely appreciated; I thought that addressing the NaN issue would get me out of the weeds for good, but now I have this new problem.
EDIT:
I should add that the training scheme is the same as in the paper linked above: for each round i = 0, ..., 7, train for 2**i epochs with a learning rate of 1e-4 * 10**(-i/7).
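For concreteness, a minimal sketch of just that schedule (it only prints the plan; the actual training loop is not shown):
for i in range(8):
    lr = 1e-4 * 10 ** (-i / 7)    # learning rate decays across rounds
    n_epochs = 2 ** i             # round i runs 2**i epochs
    print(f"round {i}: {n_epochs} epochs at lr = {lr:.2e}")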
The K-sample importance-weighted ELBO is
$$ \textrm{IW-ELBO}(x,K) = \log \frac{1}{K} \sum_{k=1}^K \frac{p(x \vert z_k)\, p(z_k)}{q(z_k; x)}, \qquad z_k \sim q(z; x). $$
For the IWAE there are K samples originating from each datapoint x, so you want the same latent statistics mu_z, Sigma_z obtained through the amortized inference network, but you sample z K times for each x.
So it is computationally wasteful to run the full forward pass on data_k_vec = data.repeat_interleave(K, 0); you should run the encoder once for each original datapoint and then repeat the statistics output by the inference network before sampling:
mu = torch.repeat_interleave(mu,K,0)
log_std = torch.repeat_interleave(log_std,K,0)
Then sample z_k. Now repeat your datapoints, data_k_vec = data.repeat_interleave(K, 0), and use the resulting tensor to efficiently evaluate the conditional p(x | z_k) for each importance sample z_k.
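Putting this together, a sketch of the cheaper forward pass (reusing model, data and K exactly as they appear in the question):
mu, log_std = model.encode(data)                   # one encoder pass per original datapoint
mu = torch.repeat_interleave(mu, K, 0)             # repeat the statistics K times ...
log_std = torch.repeat_interleave(log_std, K, 0)   # ... so each datapoint gets K samples
z = model.reparameterize(mu, log_std)              # K samples z_k per datapoint
data_k_vec = data.repeat_interleave(K, 0)          # repeat the data to line up with z
decoded = model.decode(z)                          # used to evaluate p(x | z_k)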
Note you may also want to use the logsumexp operation when calculating the IW-ELBO for numerical stability. I can't quite figure out what's going on with the log_w_matrix calculation in your post, but this is what I would do:
log_pz = ...
log_qzCx = ....
log_pxCz = ...
log_iw = log_pxCz + log_pz - log_qzCx
log_iw = log_iw.reshape(-1, K)
iwelbo = torch.logsumexp(log_iw, dim=1) - np.log(K)
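The scalar training loss is then just the negative of this, e.g. (matching the sum reduction used in the question):
loss = -torch.sum(iwelbo)   # loss.backward() as before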
EDIT: Actually, after thinking about it a bit and using the score function identity, you can interpret the IWAE gradient as an importance-weighted estimate of the standard single-sample gradient, so the method in the OP for calculating the importance weights is equivalent (if a bit wasteful), provided you place a stop_gradient operator around the normalized importance weights, which you call ws_norm. So I think the main problem is the absence of this stop_gradient operator.
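In PyTorch, detach() plays the role of stop_gradient, so a minimal sketch of that fix applied to the variables from the question is:
ws_norm = (ws_matrix / torch.sum(ws_matrix, 1, keepdim=True)).detach()  # block gradients through the weights
ws_sum_per_datapoint = torch.sum(log_w_matrix * ws_norm, 1)
loss = -torch.sum(ws_sum_per_datapoint)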
I need to evaluate the results of using ORB and BFMatcher in OpenCV, i.e. interpret the matches after comparing img1 to img3 and img2 to img3. I understand that the ORB matches contain a list of Hamming distances, but I want to convert this vector into a scalar similarity value.
I thought of two scenarios:
1) Use the number of matches: a higher count indicates greater similarity. But how do we decide if len(matches1) == len(matches2)? In that case, we can 2) add up all the distances, and the smaller total is preferable.
Can we combine all cases into one metric?
Here is a minimal version of my code:
orb = cv2.ORB()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1,des2)
matches = sorted(matches, key = lambda x:x.distance)
return len(matches)
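For reference, both scenarios can be read directly off the same match list; a small sketch, assuming matches comes from bf.match as in the snippet above:
num_matches = len(matches)                             # scenario 1: more matches suggests more similarity
total_distance = sum(m.distance for m in matches)      # scenario 2: smaller total Hamming distance is better
mean_distance = total_distance / max(num_matches, 1)   # mean distance is easier to compare across image pairs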
Thanks
I have the following programming problem in Torch.
I have a table made of two Tensors:
require 'nn'
N = 4
aaaTensor = torch.randn(N)
bbbTensor = torch.randn(N)
thisTable = {aaaTensor, bbbTensor}
I would like to compute the cosine distance for each pair of single values of aaaTensor and bbbTensor:
the cosine distance between aaaTensor[1] and bbbTensor[1]
the cosine distance between aaaTensor[2] and bbbTensor[2]
...
the cosine distance between aaaTensor[N] and bbbTensor[N]
And I don't know how to do this.
If I use the nn.CosineDistance() module (link), it will compute the general cosine distance between aaaTensor and bbbTensor:
cosine = nn.CosineDistance()
cosine:forward{aaaTensor, bbbTensor}
0.7185
[torch.DoubleTensor of size 1]
I want to have N=4 outputs.
How could I implement this one-by-one cosine distance computation?
Thanks
The documentation says nn.CosineDistance() accepts batches, so (although the cosine distance of single values does not make much sense) you can do it like this:
require 'nn'
N = 4
aaaTensor = torch.randn(N,1)
bbbTensor = torch.randn(N,1)
thisTable = {aaaTensor, bbbTensor}
cosine = nn.CosineDistance()
cosine:forward{aaaTensor, bbbTensor}
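Note that for one-element "vectors" the cosine of the angle reduces to a sign test (assuming nonzero entries):
$$ \cos(a, b) = \frac{ab}{\lvert a \rvert \, \lvert b \rvert} = \operatorname{sign}(ab) \in \{-1, +1\}, $$
so each of the N per-element results carries only sign information, which is why it is not very meaningful.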
I know this question has been asked ad nauseam, but somehow I can't make it work properly. I created a single 440 Hz sine wave with unit amplitude. Now, after the FFT, the bin at 440 Hz has a distinct peak, but the value just isn't right. I'd expect to see 0 dB since I'm dealing with a unit-amplitude sine wave. Instead, the calculated power is well above 0 dB. The formula I'm using is simply:
for (int i = 0; i < N/2; i++)
{
    mag = sqrt((Real[i]*Real[i] + Img[i]*Img[i])/(N*0.54)); // 0.54 correction for a Hamming window
    Mag[i] = 10 * log(mag);
}
I should probably point out that the total energy in the time domain is equal to the energy in the frequency domain (Parseval's theorem), so I know that my FFT routine is fine.
Any help is much appreciated.
I've been struggling with this again for work. It seems that a lot of software routines / books are a bit sloppy on the normalization of the FFT.
The best summary I have is: energy needs to be conserved, which is Parseval's theorem. Also, when coding this in Python, you can easily lose an element and not know it. Note that NumPy array slicing is not inclusive of the last element.
a = [1, 2, 3, 4, 5, 6]
a[1:-1]   # [2, 3, 4, 5]
a[-1]     # 6
Here's my code to normalize the FFT properly:
# FFT normalization to conserve power
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
import scipy.fftpack
sample_rate = 500.0e6
time_step = 1/sample_rate
carrier_freq = 100.0e6
# number of digital samples to simulate
num_samples = 2**18 # 262144
t_stop = num_samples*time_step
t = np.arange(0, t_stop, time_step)
# let the signal be a voltage waveform,
# so there is no zero padding
carrier_I = np.sin(2*np.pi*carrier_freq*t)
#######################################################
# FFT using Welch method
# window = np.ones(nfft) means no windowing
# if window = 'hamming', etc., this function will
# normalize to an equivalent noise bandwidth (ENBW)
#######################################################
nfft = num_samples # fft size same as signal size
f, Pxx_den = scipy.signal.welch(carrier_I, fs=1/time_step,
                                window=np.ones(nfft),
                                nperseg=nfft,
                                scaling='density')
#######################################################
# FFT comparison
#######################################################
integration_time = nfft*time_step
power_time_domain = sum((np.abs(carrier_I)**2)*time_step)/integration_time
print('power time domain = %f' % power_time_domain)
# Take FFT. Note that the factor of 1/nfft is sometimes omitted in some
# references and software packages.
# By proving Parseval's theorem (conservation of energy) we can find out the
# proper normalization.
signal = carrier_I
xdft = scipy.fftpack.fft(signal, nfft)/nfft
# fft coefficients need to be scaled by fft size
# equivalent to scaling over frequency bins
# total power in frequency domain should equal total power in time domain
power_freq_domain = sum(np.abs(xdft)**2)
print('power frequency domain = %f' % power_freq_domain)
# Energy is conserved
# FFT symmetry
plt.figure(0)
plt.subplot(2,1,1)
plt.plot(np.abs(xdft)) # symmetric in amplitude
plt.title('magnitude')
plt.subplot(2,1,2)
plt.plot(np.angle(xdft)) # pi phase shift out of phase
plt.title('phase')
plt.show()
xdft_short = xdft[0:nfft//2 + 1]  # take only positive-frequency terms; the other half is redundant
# xdft[0] is the DC term
# xdft[nfft//2] is the Nyquist term; note that Python slicing does NOT
# include the last element, therefore we need 0:nfft//2+1 to get an array
# that runs from 0 to nfft//2
# xdft[nfft//2 - x] = conjugate(xdft[nfft//2 + x])
Pxx = (np.abs(xdft_short))**2 # power ~ voltage squared, power in each bin.
Pxx_density = Pxx / (sample_rate/nfft) # power is energy over -fs/2 to fs/2, with nfft bins
Pxx_density[1:-1] = 2*Pxx_density[1:-1] # conserve power since we threw away 1/2 the spectrum
# note that DC (0 frequency) and Nyquist term only appear once, we don't double those.
# Note that Python slice indexing is not inclusive of the last element.
Pxx_density_dB = 10*np.log10(Pxx_density)
freq = np.linspace(0, sample_rate/2, nfft//2 + 1)
# frequency range of the fft spans from DC (0 Hz) to
# Nyquist (Fs/2).
# the resolution of the FFT is 1/t_stop
# dft of size nfft will give nfft points at frequencies
# (1/t_stop) to (nfft/2)*(1/t_stop)
plt.figure(1)
plt.plot(freq,Pxx_density_dB,'^')
plt.figure(1)
plt.plot(f, 10.0*np.log10(Pxx_den))
plt.xlabel('Freq (Hz)')
plt.ylabel('dBm/Hz')
plt.ylim([-200, 0])
plt.show()
Many common (but not all) FFT libraries leave the forward FFT unnormalized, so the result for a unit-amplitude sinusoid scales with the length of the FFT. This maintains Parseval's equality, since a longer sinusoid represents more total energy than a shorter one of the same amplitude.
If you don't want that scaling when using one of these libraries, divide by the FFT length before computing the magnitude in dB.
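For example, a minimal NumPy sketch, using a bin-centred frequency so there is no leakage and no window:
import numpy as np

N = 1024
fs = 44100.0
k0 = 40                                  # pick an exact bin so there is no spectral leakage
f0 = k0 * fs / N                         # bin-centred frequency
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)           # unit-amplitude sine, no window

X = np.fft.fft(x)
mag = np.abs(X) / N                      # divide by the FFT length
print(20 * np.log10(mag[k0]))            # about -6.02 dB: half the energy sits in the negative-frequency bin
print(20 * np.log10(2 * mag[k0]))        # about 0 dB once the single-sided spectrum is folded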
Normalization can be done in many different ways, depending on the window, the number of samples, etc.
Common trick: take the FFT of a known signal and normalize by the value of the peak. Say in the above example your peak is 123; if you want it to be 1, then divide it (and all results obtained with this algorithm) by 123.
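Continuing the NumPy sketch above, that trick looks like:
peak = np.abs(X).max()        # the measured peak (the "123" in the example above)
X_normalized = X / peak       # the peak becomes exactly 1 (0 dB); reuse the same scale for later results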