How can I downsample a temporal signal with ArrayFire?
To avoid aliasing, I should apply a lowpass (anti-aliasing) filter before dropping samples, but I didn't see anything like that in the API.
Thanks.
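The usual recipe is: design a lowpass FIR whose cutoff sits below the new Nyquist rate, filter, then keep every M-th sample. Below is a minimal NumPy/SciPy sketch of that recipe (the filter length, cutoff and toy signal are my assumptions); the filtering step is the part you would map onto whatever FIR/1-D convolution routine your ArrayFire build exposes.

```python
import numpy as np
from scipy import signal

def decimate_with_lowpass(x, M, numtaps=64):
    """Lowpass-filter x below the new Nyquist rate, then keep every M-th sample."""
    # Cutoff just below the new Nyquist frequency (normalized to the original Nyquist).
    cutoff = 0.9 / M
    b = signal.firwin(numtaps, cutoff)   # anti-aliasing FIR coefficients (assumed 64 taps)
    y = signal.lfilter(b, 1.0, x)        # the filtering step to port to ArrayFire
    return y[::M]                        # drop samples

# Example: 10 kHz signal decimated by 4 -> 2.5 kHz effective sample rate
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t)
x_ds = decimate_with_lowpass(x, M=4)
```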
Related
I am currently operating in the VHF band and trying to detect frequencies using an FFT thresholding method.
While detecting multiple frequencies, I receive spurious components ("spurs" may not be the right word) in addition to the original frequencies. For example,
with incoming frequencies f1 and f2, I also receive their sum f1+f2 and difference f1-f2.
I am trying to eliminate these using the thresholding method, but I cannot distinguish them from the real frequency magnitudes.
Please suggest a method or methodology to eliminate this problem.
Input frequencies: F1, F2
Expected frequencies: F1, F2
Received frequencies: F1, F2, F1-F2, F1+F2
https://imgur.com/3rYYNv2 (plot illustrating the problem)
Windowing can reduce leakage artifacts and distant side lobes, but it makes the main lobe wider in exchange. A large reduction in both the main-lobe width and the near side lobes normally requires using more data and a longer FFT.
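As a quick illustration of that trade-off, here is a minimal NumPy sketch (the tone frequency and record length are arbitrary choices) comparing an unwindowed FFT with a Hann-windowed one:

```python
import numpy as np

fs = 1024                                   # sample rate (Hz), arbitrary for illustration
n = np.arange(fs)                           # 1-second record
x = np.sin(2 * np.pi * 100.3 * n / fs)      # tone that does not fall on a bin centre

rect = np.abs(np.fft.rfft(x))                        # no window: strong spectral leakage
hann = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # Hann window: lower side lobes, wider main lobe

# Comparing 20*log10 of the two spectra shows the leakage floor drop
# at the cost of a main lobe roughly twice as wide.
```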
I am new to deep learning and I recently came across depthwise separable convolutions. They significantly reduce the computation required to process the data, needing only around 10% of the computation of a standard convolution.
What is the intuition behind doing this? We achieve greater speed by reducing the number of parameters and doing less computation, but is there a trade-off in performance?
Also, is it only used for specific use cases such as images, or can it be applied to all forms of data?
Intuitively, depthwise separable convolutions (DSCs) model the spatial correlation and the cross-channel correlation separately, while regular convolutions model them simultaneously. In our recent paper published at BMVC 2018, we give a mathematical proof that a DSC is nothing but the principal component of a regular convolution. This means it can capture the most effective part of a regular convolution and discard the other, redundant parts, making it very efficient. For the trade-off, in our paper we gave some empirical results on data-free decomposition of the regular convolutions in VGG16. However, with enough fine-tuning, we could almost eliminate the accuracy degradation. Hope our paper helps you understand DSCs further.
Intuition
The intuition behind doing this is to decouple the spatial information (width and height) from the depthwise information (channels). While regular convolutional layers merge feature maps over all input channels, a depthwise separable convolution first filters each channel separately (the depthwise step) and then combines channels with a 1x1 pointwise convolution, as shown in the sketch below.
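As a rough illustration of that decoupling, here is a minimal PyTorch sketch (the channel counts and kernel size are arbitrary choices, used only for the parameter-count comparison):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch makes each filter see only its own channel (spatial correlation)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding, groups=in_ch)
        # the 1x1 conv mixes information across channels (cross-channel correlation)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

regular = nn.Conv2d(128, 256, kernel_size=3, padding=1)
separable = DepthwiseSeparableConv(128, 256)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(regular), count(separable))   # roughly 295k vs 34k parameters
```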
Performance
Using a depthwise separable convolutional layer as a drop-in replacement for a regular one will greatly reduce the number of weights in the model. It will also very likely hurt the accuracy because of the much smaller number of weights. However, if you change the width and depth of your architecture to increase the weights again, you may reach the same accuracy as the original model with fewer parameters. At the same time, a depthwise separable model with the same number of weights might achieve a higher accuracy compared to the original model.
Application
You can use them wherever you can apply a CNN. I'm sure you will find use cases of depthwise separable models outside image-related tasks; it's just that CNNs have been most popular for images.
Further Reading
Let me shamelessly point you to an article of mine that discusses different types of convolutions in deep learning, with some information on how they work. Maybe that helps as well.
I have a swarm robotics project. The localization system uses ultrasonic and infrared transmitters/receivers, with an accuracy of +-7 cm. I was able to implement a follow-the-leader algorithm. However, I was wondering: why do I still have to use a Kalman filter if the sensors' raw data are good? What will it improve? Won't it just delay the coordinates being sent to the robots (the coordinates won't be updated instantly, since the Kalman filter math takes time and each robot sends its coordinates 4 times a second)?
Sensor data are never the truth, no matter how good the sensors are. They will always be perturbed by some noise, and they have finite precision. So sensor data are nothing but observations that you make, and what you want to do is estimate the true state based on these observations. In mathematical terms, you want to estimate a likelihood or joint probability based on those measurements. You can do that with different tools depending on the context. One such tool is the Kalman filter, which in the simplest case is just a moving average, but is usually used together with dynamic models and some assumptions on the error/state distributions in order to be useful. The dynamic models describe the state propagation (e.g. motion given previous states) and the observation (measurements), and in robotics/SLAM one often assumes the errors are Gaussian. A very important and useful product of such filters is an estimate of the uncertainty in terms of covariances.
Now, what are the potential improvements? Basically, you make sure that your sensor measurements are coherent with a mathematical model and that they are "smooth". For example, if you want to estimate the position of a moving vehicle, the kinematic equations will tell you where you expect the vehicle to be, and you have an associated covariance. Your measurements also come with a covariance. So, if you get measurements that have low certainty, you will end up trusting the mathematical model instead of trusting the measurements, and vice-versa.
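To make that concrete, here is a minimal sketch of the predict/update cycle for a 1-D constant-velocity model (the 4 Hz update rate and the +-7 cm measurement noise come loosely from your description; the process noise, initial state, and sample readings are assumptions):

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state = [position, velocity].
dt = 0.25                            # robots report 4 times a second
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (kinematic model)
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise (trust in the model), assumed
R = np.array([[0.07 ** 2]])             # measurement noise: +-7 cm accuracy

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial state covariance

def kalman_step(x, P, z):
    # Predict: where the model says we should be, and how sure we are
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction and measurement according to their covariances
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.02, 0.26, 0.49, 0.77]:               # noisy position readings (metres)
    x, P = kalman_step(x, P, np.array([[z]]))
```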
Finally, if you are worried about the delay: note that the complexity of a standard extended Kalman filter is roughly O(N^3), where N is the number of landmarks. So if you really don't have enough computational power, you can just reduce the state to pose and velocity, and then the overhead will be negligible.
In general, the Kalman filter helps to improve sensor accuracy by combining (with the right weights) the measurement (sensor output) and a prediction of the sensor output. The prediction is the hardest part, because you need to create a model that predicts the sensors' output in some way. And I think in your case it is unnecessary to spend time creating this model.
Although you are getting accurate data from the sensors, they cannot always be consistent. The Kalman filter will not only identify outliers in the measurement data but can also predict the state when a measurement is missing. However, if you are really looking for something with lower computational requirements, you can go for a complementary filter.
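For reference, a complementary filter boils down to a fixed-weight blend of a model prediction and the raw measurement, with no covariance bookkeeping. A minimal sketch (the blending weight and the hypothetical position/velocity numbers are assumptions):

```python
def complementary_filter(prev_estimate, predicted_delta, measurement, alpha=0.8):
    """Blend the propagated previous estimate with the new measurement.

    alpha is the trust placed in the prediction; it is tuned empirically.
    """
    prediction = prev_estimate + predicted_delta      # e.g. integrate commanded velocity over dt
    return alpha * prediction + (1 - alpha) * measurement

estimate = 0.0
# (measured position, predicted change since last update) pairs, in metres
for meas, delta in [(0.05, 0.03), (0.11, 0.04), (0.18, 0.05)]:
    estimate = complementary_filter(estimate, delta, meas)
```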
Oftentimes in signal processing discussions people talk about the number of points of the FFT (e.g., 512, 1024, 2048), and they also talk about the number of bits of the signal. Another important part of the discussion should be the signals of interest. For example, if one is really only interested in signals below 60 Hz, it seems wasteful for an FFT algorithm to compute power (Fourier coefficients) at higher frequencies. Is this the case in common implementations of the FFT algorithm? This savings could be quite relevant to someone performing an FFT on a low-powered microcontroller.
You could low-pass filter, decimate, and use a shorter FFT. But if the cost of quality filtering is a large fraction of N log N, it (plus the shorter FFT) may cost as much as just doing the longer FFT and throwing away the unneeded result bins.
You could use a Goertzel filter for just the needed DFT result frequency bins, but again, if you need around log N result bins or more, an optimized full FFT may cost less computation (and also be slightly more accurate). So this is mainly useful when you need far fewer result bins than log(N), such as in DTMF decoding on a slow microcontroller.
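For completeness, here is a minimal pure-Python sketch of the Goertzel recurrence for a single bin (the 205-sample block at 8 kHz is the common DTMF choice; the test tone is only an illustration):

```python
import math

def goertzel_power(x, k):
    """Power of DFT bin k of the length-N sequence x, via the Goertzel recurrence."""
    N = len(x)
    coeff = 2 * math.cos(2 * math.pi * k / N)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 from the final two recurrence states
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Example: check for a 697 Hz DTMF row tone in a 205-sample block at 8 kHz.
fs, N = 8000, 205
k = round(697 * N / fs)                      # nearest bin to 697 Hz
samples = [math.sin(2 * math.pi * 697 * n / fs) for n in range(N)]
print(goertzel_power(samples, k))
```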
If I am right, wavelet packet decomposition (WPT) breaks a signal into various filter banks.
The same thing can be done using many band-pass filters.
My aim is to find the energy content of a signal with a high sampling rate (2000 Hz) in various frequency bands such as 1-200, 200-400, and 400-600 Hz.
What are the advantages and disadvantages of using WPT over band-pass filters?
With WPT (or the DWT, for that matter) you have quadrature mirror filters that ensure that if you add up all the reconstructed signals in the last level (the leaves) of the WPT tree, you get back exactly the original signal, except for the processor's finite-word-length approximations. The algorithm is pretty fast.
Moreover, if your signal is non-stationary, you gain time-frequency localization, although the time resolution decreases drastically as you go down the (inverted) tree.
The other aspect is that if you are lucky enough to pick a wavelet that correlates well with the non-stationary components of your signal, the transform will represent those components more efficiently.
For your application, first see how many levels you have to go down in the WPT tree to get from your sampling frequency to the desired frequency intervals. You may not get exactly 200-400, 400-600, etc.; the deeper you go in the tree, the more closely the band edges can match your desired limits, and you may have to join nodes to get your bands.
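As a rough illustration, here is a minimal PyWavelets sketch that computes per-leaf energies at one level of a wavelet packet tree (the wavelet choice, decomposition level, and toy signal are assumptions); note how the 125 Hz leaf bandwidth at level 3 does not line up exactly with 200 Hz boundaries, so leaves would have to be merged to approximate the requested bands.

```python
import numpy as np
import pywt

fs = 2000                                   # sampling rate from the question (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)   # toy signal

level = 3                                   # 2**3 = 8 leaves of 1000/8 = 125 Hz each
wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=level)

# Leaves in natural frequency order; band edges are multiples of (fs/2) / 2**level.
bandwidth = fs / 2 / 2 ** level
for i, node in enumerate(wp.get_level(level, order='freq')):
    energy = np.sum(np.asarray(node.data) ** 2)   # energy of the coefficients in this leaf
    print(f"{i * bandwidth:5.0f}-{(i + 1) * bandwidth:5.0f} Hz : {energy:.3f}")
```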