I am seeing a lot of literature saying that by using the FFT one can achieve a faster convolution. I know that one needs to take the FFT and then the IFFT of the result, but I really do not understand: why does using the FFT make the convolution faster?
The FFT speeds up convolution for large enough filters, because direct convolution requires N multiplications (and N-1 additions) for each output sample, and consequently about 2*N^2 operations for a block of N samples.
Taking into account that one has to double the block size for FFT processing by zero-padding, each block requires about 2*(2N)*log2(2N) operations for the forward FFT, 2N operations for the pointwise multiplication, and another 2*(2N)*log2(2N) operations for the inverse FFT. There is therefore a break-even point where roughly 8N*log2(2N) <= 2N^2.
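As a quick illustration (a rough numpy sketch; the block length is arbitrary), the zero-padded FFT route produces exactly the same linear convolution as the direct O(N^2) formula:

import numpy as np

rng = np.random.default_rng(0)
N = 4096                                     # arbitrary block length
x = rng.standard_normal(N)                   # signal block
h = rng.standard_normal(N)                   # filter of the same length

# Direct time-domain linear convolution: about 2*N^2 operations
direct = np.convolve(x, h)

# FFT route: zero-pad to at least 2N-1 samples (the doubled block size above),
# multiply the spectra point by point, then inverse-transform
L = len(x) + len(h) - 1
nfft = 1 << (L - 1).bit_length()             # next power of two for a fast FFT
fast = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(h, nfft), nfft)[:L]

print(np.allclose(direct, fast))             # True, up to floating-point error
# Timing the two for growing N shows the break-even point in practice.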
The fundamental reasons are:
1) a discrete time-domain signal can be represented as a sum of frequencies.
2) convolution in the time domain (O(N^2)) equals pointwise multiplication of the spectra in the frequency domain (O(N))
3) the transformation is invertible
4) there exists a method to convert a signal from time domain to frequency domain in less than N^2 operations (that's the first F in 'Fast Fourier Transform').
The straightforward DFT is O(N^2): each frequency-domain element is F(k) = sum over n of f(n) * exp(-2*pi*i*k*n/N), i.e. N multiply-adds for each of the N outputs.
The FFT, however, is based on the observation that exp(-2*pi*i*k*n/N) has certain symmetries, allowing the calculation to be split into the even-indexed and odd-indexed samples. Each half requires a full transform of half the size, and the two halves can be combined in O(N) operations. As this splitting can be repeated until N=2, the overall complexity is (proportional to) N*log(N).
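A minimal recursive sketch of that even/odd split (a textbook radix-2 Cooley-Tukey recursion, assuming the input length is a power of two), checked against numpy's FFT:

import numpy as np

def fft_radix2(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])               # FFT of the even-indexed samples
    odd = fft_radix2(x[1::2])                # FFT of the odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])   # O(N) combine step

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))    # True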
I'm currently working with Fourier transforms and I notice that the output of an FFT usually has the dimensions (n_fft, ) where n_fft is the number of samples to consider in the FFT, though some implementations discard frequency bins above the Nyquist frequency. What I can't wrap my head around is why the frequency resolution is dependent on the number of samples considered in the FFT.
Could someone please explain the intuition behind this?
The DFT is a change of basis from the time domain to the frequency domain. As Cris put it, "If you have N input samples, you need at least N numbers to fully describe the input."
some implementations discard frequency bins above the Nyquist frequency.
Yes, that's right. When the input signal is real-valued, its spectrum is Hermitian symmetric. The components above the Nyquist frequency are complex conjugates of lower frequencies, so it is common practice to use real-to-complex FFTs to skip explicitly computing and storing them. In other words, if you have N real-valued samples, you only need about N/2 complex numbers to represent them (N/2 + 1 bins for even N, with the DC and Nyquist bins purely real).
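A short numpy sketch makes both points concrete (the sample rate and test signal are made up): the real-input FFT keeps only the N/2 + 1 non-redundant bins, and the bin spacing, i.e. the frequency resolution, is fs/N, which is why it depends on the number of samples considered.

import numpy as np

fs = 1000.0                          # assumed sample rate in Hz
N = 256                              # number of samples in the FFT
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 125.0 * t)    # a real-valued test signal

full = np.fft.fft(x)                 # N complex bins (Hermitian symmetric)
half = np.fft.rfft(x)                # N//2 + 1 bins, redundancy discarded
freqs = np.fft.rfftfreq(N, d=1/fs)

print(full.shape, half.shape)        # (256,) (129,)
print(freqs[1] - freqs[0])           # bin spacing = fs / N = 3.90625 Hz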
I'm using the OpenCV implementation of ORB along with the BFMatcher. The OpenCV documentation states that NORM_HAMMING should be used with ORB.
Why is this? What advantages does NORM_HAMMING offer over other metrics such as the Euclidean distance (NORM_L2), NORM_L1, etc.?
When comparing descriptors in computer vision, the Euclidean distance is usually understood as the square root of the sum of the squared differences between the two vectors' elements.
The ORB descriptors are vectors of binary values. If you apply the Euclidean distance to binary vectors, the squared result of a single element comparison is always 1 or 0, which is not informative when it comes to estimating the difference between the elements. The overall Euclidean distance is then the square root of the sum of those ones and zeroes, again not a good estimator of the difference between the vectors.
That's why the Hamming distance is used. Here the distance is the number of elements that are not the same. As noted by Catree, you can calculate it with a simple Boolean operation on the vectors, as shown in the sketch below, where D1 is a single 4-bit descriptor that we compare against the 4 descriptors in D2, and H holds the Hamming distance for each row.
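With made-up 4-bit values for D1 and D2, a few lines of numpy reproduce that H:

import numpy as np

D1 = np.array([0, 1, 0, 1])                  # a single 4-bit descriptor (made up)
D2 = np.array([[0, 1, 0, 1],                 # four candidate descriptors (made up)
               [1, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 0, 0, 0]])

# Hamming distance = number of differing bits = popcount of the XOR
H = np.sum(np.bitwise_xor(D1, D2), axis=1)
print(H)                                     # [0 1 4 2]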
ORB (ORB: an efficient alternative to SIFT or SURF) is a binary descriptor.
It should be more efficient (in terms of computation) to use the HAMMING distance rather than the L1/L2 distance, as the HAMMING distance can be implemented with a XOR followed by a bit count (see BRIEF: Binary Robust Independent Elementary Features):
Furthermore, comparing strings can be done by computing the Hamming
distance, which can be done extremely fast on modern CPUs that often
provide a specific instruction to perform a XOR or bit count
operation, as is the case in the latest SSE [10] instruction set.
Of course, with a classical descriptor like SIFT, you cannot use the HAMMING distance.
You can test yourself:
D1=01010110
D2=10011010
L2_dist(D1,D2)=sqrt(4)=2
XOR(D1,D2)=11001100 ; bit_count(11001100)=4
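Or as a rough Python check of the same example (assuming numpy and the OpenCV Python bindings are available):

import numpy as np
import cv2

d1 = np.array([0b01010110], dtype=np.uint8)
d2 = np.array([0b10011010], dtype=np.uint8)

# Hamming distance = popcount of the XOR over the packed bits
print(np.unpackbits(np.bitwise_xor(d1, d2)).sum())      # 4

# OpenCV computes the same thing when given NORM_HAMMING
print(cv2.norm(d1, d2, cv2.NORM_HAMMING))               # 4.0

# L2 on the unpacked bit vectors gives sqrt(4) = 2, as in the example above
bits1 = np.unpackbits(d1).astype(np.float32)
bits2 = np.unpackbits(d2).astype(np.float32)
print(np.linalg.norm(bits1 - bits2))                    # 2.0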
L1/L2 distance is used for float-valued descriptors (SIFT, SURF, etc.), while the Hamming distance is used for binary string descriptors (AKAZE, ORB, BRIEF, etc.).
I am trying to understand how the gradients are computed when using minibatch SGD. I have implemented it in the CS231 online course, but only came to realize that in intermediate layers the gradient is basically the sum over all the gradients computed for each sample (the same holds for the implementations in Caffe or TensorFlow). It is only in the last layer (the loss) that they are averaged over the number of samples.
Is this correct? If so, does it mean that since in the last layer they are averaged, when doing backprop, all the gradients are also averaged automatically?
Thanks!
It is best to understand why SGD works first.
A neural network is really a very complex composite function of an input vector x, a label y (or target variable, which changes according to whether the problem is classification or regression) and a parameter vector w. Assume we are working on classification. We are then trying to do maximum likelihood estimation (actually MAP estimation, since we will almost certainly use L2 or L1 regularization, but this is too much technicality for now) for the parameter vector w. Assuming the samples are independent, we have the following likelihood:
p(y1|w,x1)p(y2|w,x2) ... p(yN|w,xN)
Optimizing this with respect to w is a mess, because all of these probabilities are multiplied together (this would produce an insanely complicated derivative with respect to w). We use log probabilities instead (taking the log does not change the extreme points) and divide by N, so that we can treat our training set as an empirical probability distribution, p(x):
J(X,Y,w)=-(1/N)(log p(y1|w,x1) + log p(y2|w,x2) + ... + log p(yN|w,xN))
This is the actual cost function we have. What the neural network actually does is to model the probability function p(yi|w,xi). This can be a very complex 1000+ layered ResNet or just a simple perceptron.
Now the derivative with respect to w is simple to state, since we have a sum of terms:
dJ(X,Y,w)/dw = -(1/N)(dlog p(y1|w,x1)/dw + dlog p(y2|w,x2)/dw + ... + dlog p(yN|w,xN)/dw)
Ideally, the above is the actual gradient. But this batch calculation is not easy to compute. What if we are working on a dataset with 1M training samples? Worse, the training set may be a stream of samples x, which has an infinite size.
The Stochastic part of the SGD comes into play here. Pick m samples with m << N randomly and uniformly from the training set and calculate the derivative by using them:
dJ(X,Y,w)/dw =(approx) dJ'/dw = -(1/m)(dlog p(y1|w,x1)/dw + dlog p(y2|w,x2)/dw + ... + dlog p(ym|w,xm)/dw)
Remember that we had an empirical (or, in the case of an infinite training set, actual) data distribution p(x). The above operation of drawing m samples from p(x) and averaging them actually produces an unbiased estimator, dJ'/dw, of the actual derivative dJ(X,Y,w)/dw. What does that mean? Take many such m-sample draws, calculate the different dJ'/dw estimates, average them as well, and you get dJ(X,Y,w)/dw very closely, even exactly in the limit of infinite sampling. It can be shown that these noisy but unbiased gradient estimates behave like the original gradient in the long run. On average, SGD will follow the actual gradient's path (although it can get stuck at a different local minimum; it all depends on the selection of the learning rate). The minibatch size m is directly related to the inherent error of the noisy estimate dJ'/dw. If m is large, you get gradient estimates with low variance and you can use larger learning rates. If m is small or m = 1 (online learning), the variance of the estimator dJ'/dw is very high and you should use smaller learning rates, or the algorithm may easily diverge out of control.
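To see the unbiasedness numerically, here is a toy numpy sketch (logistic regression on made-up data, with arbitrary N, m and number of minibatches): averaging many minibatch gradients dJ'/dw approaches the full-batch gradient dJ/dw.

import numpy as np

rng = np.random.default_rng(0)
N, d, m = 10_000, 5, 32                       # dataset size, features, minibatch size (all made up)
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)            # toy binary labels
w = np.zeros(d)                               # current parameter vector

def grad(Xb, yb, w):
    # Gradient of the averaged negative log-likelihood of logistic regression
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return Xb.T @ (p - yb) / len(yb)

full = grad(X, y, w)                          # the batch gradient dJ/dw

estimates = []
for _ in range(2000):                         # draw many independent minibatches
    idx = rng.choice(N, size=m, replace=False)
    estimates.append(grad(X[idx], y[idx], w)) # one noisy estimate dJ'/dw each
avg = np.mean(estimates, axis=0)

print(np.abs(full - avg).max())               # small, and shrinks as more minibatches are averaged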
Now enough theory, your actual question was
It is only in the last layer (the loss) that they are averaged by the number of samples. Is this correct? if so, does it mean that since in the last layer they are averaged, when doing backprop, all the gradients are also averaged automatically? Thanks!
Yes, it is enough to divide by m in the last layer, since the chain rule will propagate the factor (1/m) to all parameters once the lowermost layer is multiplied by it. You don't need to divide separately for each parameter; doing that on top of the averaged loss would actually be incorrect.
In the last layer the gradients are averaged, and in the previous layers they are summed. The sums in the previous layers are over the different nodes of the next layer, not over the examples. This averaging is done only to make the learning process behave similarly when you change the batch size: everything would work the same if you simply summed over the examples but decreased the learning rate accordingly.
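A toy numpy sketch (a made-up 2-layer network, not any particular framework's code) shows both facts: the matrix products of backprop sum over the samples in the intermediate layers, and dividing by m only at the loss gives exactly the averaged gradients for every parameter.

import numpy as np

rng = np.random.default_rng(1)
m = 8                                           # minibatch size (made up)
X = rng.standard_normal((m, 4))
y = rng.standard_normal((m, 1))
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 1))

def grads(loss_scale):
    # Forward pass of a tiny 2-layer net with squared-error loss
    h = np.tanh(X @ W1)
    out = h @ W2
    # Backward pass; loss_scale is 1/m for a mean loss, 1 for a summed loss
    dout = loss_scale * 2 * (out - y)           # derivative of the (scaled) loss
    dW2 = h.T @ dout                            # per-sample contributions are summed here
    dh = dout @ W2.T
    dW1 = X.T @ (dh * (1 - h**2))               # ... and here, by the matrix product
    return dW1, dW2

dW1_mean, dW2_mean = grads(1.0 / m)             # averaging done only at the loss
dW1_sum, dW2_sum = grads(1.0)                   # plain sum over the minibatch

print(np.allclose(dW1_mean, dW1_sum / m))       # True: the 1/m factor propagates everywhere
print(np.allclose(dW2_mean, dW2_sum / m))       # True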
I'm a software engineer working on DSP for the first time.
I'm successfully using an FFT library that produces frequency spectrums. I also understand how the FFT works in terms of its inputs and outputs, in particular the contents of the two output arrays:
Now, my problem is that I'm reading some new research reports that suggest that I extract: "the energy, variance, and sum of FFT coefficients".
What are the 'FFT coefficients'? Are those the values of the Real and Imaginary arrays shown above, which (from my understanding) correspond to the amplitudes of the constituent cosine and sine waves?
What is the 'energy' of the FFT coefficients? Is that terminology from statistics or from DSP?
You are correct. FFT coefficients are the signal values in the frequency domain.
"Energy" is the square modulus of the coefficients. The total energy (sum of square modulus of all values) in either time or frequency domain is the same (see Parseval's Theorem).
The real and imaginary arrays, when put together, can represent a complex array. Every complex element of the complex array in the frequency domain can be considered a frequency coefficient, and has a magnitude ( sqrt(R*R + I*I) ). Parseval's theorem says that the sum of all the Frequency domain complex vector magnitudes (squared) is equal to the energy of the time domain signal (which may require a scaling factor involving the FFT length, depending on your particular DFT/FFT library implementation).
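With numpy's FFT convention, for example, that scaling factor is 1/N; a quick sketch with a made-up signal:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                 # made-up time-domain signal

X = np.fft.fft(x)                             # complex FFT coefficients
energy_time = np.sum(x**2)
energy_freq = np.sum(np.abs(X)**2) / len(x)   # numpy's unnormalized FFT needs the 1/N factor

print(np.allclose(energy_time, energy_freq))  # True (Parseval's theorem)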
One example of a time-domain signal is the voltage on a wire: volts times amps (or volts squared over ohms) gives power, and power over time gives energy. The word "energy" in the strictly numerical case is probably derived from this historical usage in physics and engineering, where the numbers meant something that could burn your fingers.
Using GNU Octave, I'm computing an FFT over a piece of a signal, then eliminating some frequencies, and finally reconstructing the signal. This gives me a nice approximation of the signal, but it doesn't give me a way to extrapolate the data.
Suppose basically that I have plotted three and a half periods of
f: x -> sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x)
and then added a piece of low-amplitude, zero-centered random noise. With fft/ifft, I can easily remove most of the noise, but then how do I extrapolate 3 more periods of my signal data? (Other, of course, than duplicating the signal.)
The math way is easy: you have a decomposition of your function as an infinite sum of sines/cosines, and you just need to extract a partial sum and apply it anywhere. But I don't quite get the programmatic way...
Thanks!
The Discrete Fourier Transform relies on the assumption that your time domain data is periodic, so you can just repeat your time domain data ad nauseam - no explicit extrapolation is necessary. Of course this may not give you what you expect if your individual component periods are not exact sub-multiples of the DFT input window duration. This is one reason why we typically apply window functions such as the Hanning Window prior to the transform.
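Programmatically, the partial-sum idea from the question looks like the rough numpy sketch below (constants are made up). Note that because 3.5 periods is not an exact sub-multiple of the window, there is leakage, and evaluating the partial Fourier sum past the window gives exactly the periodic repetition of the cleaned-up signal described above.

import numpy as np

# Made-up example following the question: ~3.5 periods of the test signal plus noise
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0.0, 3.5 * 2 * np.pi, 1.0 / fs)
x = np.sin(t) + 0.5 * np.sin(3 * t) + 1.2 * np.sin(5 * t)
x = x + 0.1 * rng.standard_normal(len(t))

N = len(x)
X = np.fft.fft(x)

# Crude denoising: keep only the dominant bins, zero everything else
keep = np.abs(X) > 0.2 * np.abs(X).max()
k = np.nonzero(keep)[0]

# Evaluate the partial Fourier sum at sample indices past the original window.
# Since exp(2*pi*i*k*n/N) is periodic in n with period N, this is exactly the
# periodic repetition of the denoised signal, i.e. the "just repeat it" answer.
n_ext = np.arange(2 * N)                                     # twice the original length
x_ext = np.real(X[keep] @ np.exp(2j * np.pi * np.outer(k, n_ext) / N)) / N

X_clean = np.where(keep, X, 0)
print(np.allclose(x_ext[:N], np.fft.ifft(X_clean).real))     # True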