Liftering cutoff - signal-processing

Is there a rule of thumb in deciding the cut-off value when performing the low time liftering process in Cepstral analysis? Or is it just trial and error analysis?
I am trying to calculate the spectral envelope of the frequency response of data obtained from a vibration sensor. Sampling frequency is 5000 Hz.

I did a project on cepstral analysis and found that 15 or 20 works well as the low-time liftering cutoff.
For my application 15 seemed to be pretty good.
Both values are acceptable and give good results, but check what suits your own application.
I don't think there is any rule or formula to calculate it, since similar values seem to work well across many kinds of signals.
So the usual 15-20 range is most likely just the result of trial and error by whoever first settled on it.
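For what it's worth, here is a minimal sketch (Python/NumPy, hypothetical names) of how such a low-time lifter is typically applied to estimate the spectral envelope: compute the real cepstrum, zero everything above the cutoff quefrency, and transform back.

    import numpy as np

    def spectral_envelope(frame, cutoff=15, n_fft=1024):
        # Real cepstrum of the frame
        spectrum = np.fft.rfft(frame, n_fft)
        log_mag = np.log(np.abs(spectrum) + 1e-12)
        cepstrum = np.fft.irfft(log_mag, n_fft)
        # Low-time lifter: keep only the first `cutoff` quefrency bins
        # (plus their symmetric counterparts), zero the rest
        lifter = np.zeros(n_fft)
        lifter[:cutoff] = 1.0
        lifter[-(cutoff - 1):] = 1.0
        # Back to the log-spectral domain: the smoothed log spectrum is the envelope
        envelope_log = np.fft.rfft(cepstrum * lifter, n_fft).real
        return np.exp(envelope_log)

    # e.g. env = spectral_envelope(vibration_frame, cutoff=15)   # fs = 5000 Hz data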

Related

Fault detection on a time series where a variable changes (trends) slowly over time

I am pretty new to anomaly detection on time series, so my question may be obvious to some of you.
Today I am using LSTM and clustering techniques to detect anomalies in time series, but those methods cannot identify anomalies that get worse slowly over time (I think this is called trending), e.g. the temperature of a machine increasing slowly over a month (the LSTM will learn this trend and predict the increase without any noticeable error).
Is there a method to detect this kind of fault?
With time series that is usually what you want: learning gradual change, detecting abrupt change. Otherwise, time plays little role.
You can try e.g. the SigniTrend model with a very slow learning rate (a long half-life time, or whatever they call it). Ignore all the tokens, hashing and scalability in that paper; just take the EWMA+EWMVar part, which I really like, and use it on your time series.
If you set the learning rate really low, the threshold should move slowly enough that your "gradual" change may still be able to trigger it.
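As a rough illustration of that EWMA+EWMVar idea (not the full SigniTrend model), a sketch with a deliberately tiny learning rate might look like this; the function name and defaults are made up:

    import numpy as np

    def ewma_anomalies(x, alpha=0.001, k=3.0, warmup=100):
        # Flag points outside an exponentially weighted mean +/- k stddev.
        # A small alpha means slow adaptation (long half-life), so a gradual
        # drift can still cross the slowly moving threshold.
        # The first `warmup` samples only initialize the running statistics.
        mean, var = float(x[0]), 0.0
        flags = np.zeros(len(x), dtype=bool)
        for t in range(1, len(x)):
            delta = x[t] - mean
            if t >= warmup:
                flags[t] = abs(delta) > k * np.sqrt(var)
            mean += alpha * delta                          # EWMA update
            var = (1 - alpha) * (var + alpha * delta**2)   # EWMVar update
        return flags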
Or you ignore time completely. Split your data into a training set (which must not contain anomalies), learn the mean and variance on that to find thresholds, then classify any point outside these thresholds as abnormal (i.e. temperature > mean + 3 * standard deviation).
As this super naive approach does not learn, it will not follow a drift either. But then time does not play any further role.
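A minimal sketch of that naive approach, assuming a clean training split is available (names are made up):

    import numpy as np

    def fit_thresholds(train, k=3.0):
        # `train` must not contain anomalies
        mu, sigma = train.mean(), train.std()
        return mu - k * sigma, mu + k * sigma

    def classify(x, lo, hi):
        # True where the point falls outside the fixed thresholds
        return (x < lo) | (x > hi)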

How to exclude poses of a wheeled robot that fall behind the previous pose

I am currently working on sensor fusion of a wheeled robot's pose from GPS, Lidar, vision, and vehicle measurements. The model is basic kinematics using an EKF, with no discrimination between sensors, i.e. data is fused in timestamp order.
I have difficulty fusing those sensors because of the following issue:
sometimes, when the latest measurement comes from a different sensor than the one that produced the previous state, the new pose of the robot falls behind the previous pose. As a result the fused trajectory is not smooth and zigzags.
I would like to discard data that plots behind the previous pose and keep only data that lies forward of the previous state, even when the sensor providing the data changes between timestamp t and timestamp t+1. Since the data is in a global frame, I cannot simply rely on a negative x coordinate to detect this.
Please let me know if you have any ideas. Thank you so much in advance.
Best,
Preliminary warning
Let me slip in a warning before suggesting possible solutions to your problem: be careful about discarding data based on your current estimate, since you never know whether the latest measurement is "pulling the pose back" or the previous one was wrong and caused your estimate to move too far forward.
Possible solutions
In a Kalman-like filter, observations are assumed to provide independent, uncorrelated information about the state vector variables. These observations are assumed to have a random error distributed as a zero-mean Gaussian variable. Real life is harder, though :-(
Sometimes measurements are affected by a "bias" (a fixed term, similar to the Gaussian error having a non-zero mean); e.g. tropospheric perturbations are known to introduce a position error in GPS fixes that drifts slowly over time.
If you take several sensors observing the same variable, such as GPS and Lidar for position, but they have different biases, your estimate will jump back and forth. Scaling problems can have a similar effect.
I will assume this is the root of your problem. If not, please refine your question.
How can you mitigate this problem? I see several alternatives:
Introduce a bias/scale correction term in your state vector to compensate for sensor bias/drift. This is a very common trick in EKFs for inertial sensor fusion (gyro/accelerometer) and can work nicely when tuned properly.
Apply some preprocessing to the sensor inputs to correct known problems. It can be difficult to tune a filter that estimates the state vector and the sensor parameters at the same time.
Change how observations are interpreted. For example, use the difference between consecutive position observations, effectively creating a virtual odometry sensor. This greatly reduces the drift problem.
Post-process your output. Instead of discarding observations, integrate them and keep the "jumping" state vector internally, but smooth the output vector to eliminate the jumps. This is done in some UAV autopilots because such jumps affect the performance of PID controllers.
Finally, the most obvious and simple approach: discard observations based on a statistical test. A chi-square test of the residual can be used to determine whether an observation is too far from the expected value and must be discarded (a minimal sketch follows after this list). Be careful with this option, though: observation-rejection schemes must be paired with state-vector reinitialization logic to give stable behavior.
Almost all of these solutions require knowing the source of each observation, so you would no longer be able to treat the sensors interchangeably.
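As an illustration of the last option, here is a hedged sketch of a chi-square innovation gate, assuming you already have the EKF innovation and its covariance S at hand (the SciPy dependency and function name are just for illustration):

    import numpy as np
    from scipy.stats import chi2

    def gate_observation(innovation, S, confidence=0.99):
        # Chi-square test on the EKF innovation: reject the observation if its
        # normalized squared residual exceeds the chi-square threshold for the
        # measurement dimension. Pair rejection with a reinitialization
        # strategy, as noted above.
        d2 = innovation.T @ np.linalg.solve(S, innovation)   # squared Mahalanobis distance
        threshold = chi2.ppf(confidence, df=len(innovation))
        return d2 <= threshold   # True: accept, False: discard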

How to fine tune input parameters for ALSWRFactorizer in Apache Mahout?

So I have been using Apache Mahout for building a recommendation system. I am interested in using the SVD matrix factorization method.
I would like to know how I can fine tune the input parameters for:
ALSWRFactorizer(dataModel, no_of_hidden_features, lambda, iterations)
I have tried varying the value of lambda from 0.05 to 0.065; my recommendation scores increased and then decreased, so I selected 0.05945 as the value where the scores peaked.
Is this the only approach I can use to estimate no_of_iterations and no_of_hidden_features? (Scores rise and then fall; I expect the number of features to be between 20 and 30.)
Moreover, is this even the right approach?
EDIT: Well I ran a couple more tests, and I seem to have zeroed in on using 20 hidden features, lambda = 0.0595, 20 iterations.
However I'd appreciate any answers explaining how I can do it in a better way.
So I came across this paper:
Application of Dimensionality Reduction in Recommender System
In section 4.3 they essentially follow the same steps that I have done. After spending a day or two going through Google results, iterative experimentation seems to be the only way to fine tune these parameters.
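Since ALSWRFactorizer itself is a Java API, here is only a language-agnostic sketch of such a sweep written in Python; evaluate() is a hypothetical placeholder that would train the factorizer with the given parameters and return a held-out score:

    import itertools

    # Hypothetical evaluate(): trains ALSWRFactorizer with the given parameters
    # and returns a held-out recommendation score (higher assumed better here).
    def grid_search(evaluate, features_grid, lambda_grid, iter_grid):
        best = None
        for f, lam, it in itertools.product(features_grid, lambda_grid, iter_grid):
            score = evaluate(num_features=f, lam=lam, iterations=it)
            if best is None or score > best[0]:
                best = (score, f, lam, it)
        return best

    # best = grid_search(evaluate,
    #                    features_grid=range(10, 55, 5),
    #                    lambda_grid=[0.03, 0.05, 0.0595, 0.065, 0.1],
    #                    iter_grid=[10, 20, 30])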
Not sure what you mean by your "scores" increasing and then decreasing. If you are describing running a precision-type cross-validation test after each parameter iteration, then you did the right thing. The values you came up with are very close to the "rules of thumb" for ALS-WR.

Phase difference between two signals?

I'm working on an embedded project where I have to drive a transducer at resonance by calculating the phase difference between its voltage and current waveforms and driving that difference to zero by changing the frequency. I (current) and V (voltage) are always at the same frequency as each other at any instant, but that frequency is not fixed (approx. 47 kHz to 52 kHz). All I have to do is calculate the phase difference between these two signals. Which method will be most effective?
FFT of the two signals, then the phase difference between the specific components?
Or cross-correlation of the two signals?
Or something else? Which method will give the most accurate result, and with what resolution? Does the sampling rate affect the phase resolution (the minimum phase difference that can be sensed)?
I'm new to digital signal processing, so correct me if I've made any mistakes.
ADDITIONAL DETAILS:
Noise in my system can be white/Gaussian noise (not significant) and harmonics of the fundamental (which may be the significant one in the resonance-mismatch case).
Yes, a 4046 can be a good alternative with switching regulators. I'm working with an NCO/DDS where I can scale/reshape the sinusoid on an ongoing basis.
An analog filter would be very complex to implement, since harmonic removal would need a high-order filter with a steep roll-off, so I'm choosing a DSP-based filter; it's easy to work with MATLAB and DSP processors.
What sampling rate would you suggest for a ~50 kHz (47 kHz to 52 kHz) system to achieve a phase resolution of preferably 0.1 degrees or better with an FFT or Goertzel, when the frequency steps vary from as small as ~1-2 Hz up to 50-200 Hz?
My frequency is variable over 45 kHz to 55 kHz, but it will be known to my system, so knowing the phase error at the last fed frequency is what matters most. After the FFT and digital filtering, an IFFT can be performed to obtain less noisy samples for further processing, so I guess the FFT does both tasks.
But I'm wondering about the accuracy of the phase difference, because that's the crucial part.
The Goertzel algorithm (http://www.embedded.com/design/configurable-systems/4024443/The-Goertzel-Algorithm) is a fairly efficient tone-detection method that resolves the signal into real and imaginary components. I'll assume you can do the numerics to get the phase difference, or just its polarity, as you require.
Resolution versus time constant is a design tradeoff; this article highlights the issues: http://www.mstarlabs.com/dsp/goertzel/goertzel.html
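To make that concrete, here is a minimal Python/NumPy sketch of the Goertzel recursion; it assumes the drive frequency is known and that both channels are sampled over the same interval, so the constant phase factor from a non-integer bin cancels when the two angles are subtracted.

    import numpy as np

    def goertzel(x, freq, fs):
        # Single-bin DFT of x at `freq` (Hz) via the Goertzel recursion.
        # The angle of the returned complex value is the phase at that frequency.
        w = 2 * np.pi * freq / fs
        coeff = 2 * np.cos(w)
        s_prev, s_prev2 = 0.0, 0.0
        for sample in x:
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev - s_prev2 * np.exp(-1j * w)

    # phase difference (radians) between voltage and current at the drive frequency f0:
    # dphi = np.angle(goertzel(v, f0, fs)) - np.angle(goertzel(i, f0, fs))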
Additional
"What accuracy can be obtained?"
It depends... upon what you are faced with (signal levels, external noise, etc.), what hardware you have (ADC, processor, etc.), and how you implement your solution (sample rate, numerical precision, etc.). Without the complete picture I'd only be guessing at what you could achieve, and the Goertzel approach is far from easy.
But I imagine that for a high-school project with good signal levels and low noise, the easier method of using the phase comparator (type 2, as it locks at zero degrees) of a 4046 PLL (www.nxp.com/documents/data_sheet/HEF4046B.pdf) will likely get you down to a few degrees.
One other issue, if you have a high-Q transducer, is generating a high-resolution drive frequency. There is a method, but that's another avenue.
Yet more
"Harmonics of Fundamental (Which might be significant)"... hmm hence the digital filtering;
but if the sampling rate is too low there may be a problem with aliasing, and mismatched anti-aliasing filters are likely to consume your whole error budget. A rule of thumb of sampling at ten times the signal frequency seems a bit low; going higher will make the filter design easier.
Spatial windowing addresses off-frequency issues, along with higher roll-off and attenuation, and is described in "Sliding Spectrum Analysis" by Eric Jacobsen and Richard Lyons in Streamlining Digital Signal Processing (http://www.amazon.com/Streamlining-Digital-Signal-Processing-Guidebook/dp/1118278380).
In a previous project, after detecting either carrier I was interested in the timing of the frequency changes in immense noise. With inconsistencies in the carrier phase generation, the phase error was never quiescent enough to be quantified, so I can't guess any better than you what you might get under your project conditions.
Not to detract from chip's answer (I upvoted it!) but some other options are:
Cross-correlation. Off the top of my head I'm not sure what the performance difference between it and the Goertzel algorithm would be, but both should be doable on an embedded system (a rough sketch follows after this list).
Ad-hoc methods. For example, I would try something like this: bandpass the signals to eliminate noise, find the peaks, and measure the time difference between the peaks. This will probably be more efficient and, provided you do a reasonable job of throwing out outliers and handling wrap-around, should be extremely robust. The bandpass filters will themselves alter the phase, so make sure you apply exactly the same filter to both signals.
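For the cross-correlation option, a rough sketch (Python/NumPy, hypothetical names) that converts the peak lag into a phase at the known drive frequency:

    import numpy as np

    def phase_from_xcorr(v, i, fs, f0):
        # Locate the cross-correlation peak and convert the lag to an angle
        # at the known drive frequency f0. Resolution is limited to one sample
        # period (360 * f0 / fs degrees) unless the peak is interpolated.
        xcorr = np.correlate(v - v.mean(), i - i.mean(), mode='full')
        lag = np.argmax(xcorr) - (len(i) - 1)   # samples by which v lags i
        return 360.0 * f0 * lag / fs            # degrees; check the sign convention you need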
If the input signal-to-noise ratios are not too bad, a computationally efficient solution can be built on zero-crossing detection. Also, have a look at http://www.metrology.pg.gda.pl/full/2005/M&MS_2005_427.pdf for a nice comparison of phase-difference detection algorithms, including zero-crossing ones.
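A possible zero-crossing sketch (Python/NumPy, hypothetical names), assuming the signals have already been bandpassed so each period has exactly one rising crossing and the crossings pair up in order:

    import numpy as np

    def rising_zero_crossings(x, fs):
        # Times (seconds) of rising zero crossings, linearly interpolated
        # between samples for sub-sample resolution.
        idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
        frac = -x[idx] / (x[idx + 1] - x[idx])
        return (idx + frac) / fs

    def phase_difference_deg(v, i, fs, f0):
        tv, ti = rising_zero_crossings(v, fs), rising_zero_crossings(i, fs)
        n = min(len(tv), len(ti))
        dt = np.mean(tv[:n] - ti[:n])   # average time offset; outliers/wrap-around need handling
        return 360.0 * f0 * dt          # degrees at the drive frequency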
Computing 1-bin of a DFT (or using the similar complex Goertzel block filter) will work if the signal frequency is accurately known. (Set the DFT bin or the Goertzel to exactly that frequency).
If the frequency isn't exactly known, you could try using an FFT (with an fftshift) to interpolate the frequency of the magnitude peak, and then interpolate the phase at that frequency for each of the two signals. An FFT will also allow you to window the data, which may improve phase-estimation accuracy if the frequency isn't exactly bin-centered (or exactly the Goertzel filter frequency). Different windows may improve the phase-estimation accuracy for frequencies "between bins"; a Blackman-Nuttall window will be better than a rectangular window, but there may be better window choices.
The phase measurement accuracy will depend on the S/N ratio, the length of time one samples the two (assumed stationary) signals, and possibly the window used.
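As an illustration of the windowed single-bin approach, a short Python/NumPy sketch; applying the identical window to both channels lets its phase contribution cancel when the angles are subtracted:

    import numpy as np

    def single_bin_dft(x, f0, fs, window=None):
        # Complex DFT coefficient of x at the known frequency f0, optionally windowed.
        n = np.arange(len(x))
        w = np.ones(len(x)) if window is None else window
        return np.sum(w * x * np.exp(-2j * np.pi * f0 * n / fs))

    # win = np.blackman(len(v))   # or another window, as discussed above
    # dphi = np.angle(single_bin_dft(v, f0, fs, win)) - np.angle(single_bin_dft(i, f0, fs, win))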
If you have a Phase Locked Loop (PLL) that tracks each input, then you can subtract the phase coefficients (of the generator components) to determine offset between the phases. This would also be robust against noise.

Removing bias/noise from a gappy signal

I have an array of soil water content sensors across several desert field sites. Their signals contain a lot of noise or bias (depending on who I talk to). I want to remove the junk while keeping as much of the signal as possible. I'm not a signal processing guy, so anything along the lines of "use an XYZ filter" or a particular algorithm or something would really help me.
I've posted a plot showing a year's worth of data from one probe. The signal is the "top"; all the junk is below the signal:
http://www.unm.edu/~hilton/swc.png
I've played around with lowess smoothing a lot; that works reasonably well except in places where there's a lot of bias below the signal (roughly idx 1000 to 2000 and 15000 to 16000 in the plot linked above).
I have access to Matlab's signal processing toolbox and I'm very comfortable in R and python; if there's a pre-packaged filter in one of those I could jump off from that would be great (but I'm open to coding something new).
Many thanks,
Tim
I'd start with a median filter. If I read your plot correctly you're sampling twice an hour and the data isn't too dynamic. Assuming that's correct, a median filter length of 47 or 49 would equate to a one-day window. In this data set you could probably crank that up to a week or more. In any case you should plot the unfiltered and filtered data on top of each other to make sure the filtered data passes the eyeball test (you'll know it when you see it). You may need to do the final clean-up by hand (hope you don't have thousands of sensors).
(Also, I'd send an intern or grad student out to the field sites to find out what's wrong with sensors and fix them.)
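A minimal sketch of that median-filter suggestion, assuming the data is in a 1-D array (the variable name is made up) and using SciPy's medfilt:

    import numpy as np
    from scipy.signal import medfilt

    # At roughly 2 samples/hour, a 49-point kernel spans about one day;
    # scale it up (e.g. 7 * 48 + 1 = 337) for a week-long window.
    filtered = medfilt(soil_water_content, kernel_size=49)

    # Eyeball test: overlay raw and filtered
    # import matplotlib.pyplot as plt
    # plt.plot(soil_water_content, '.', alpha=0.3)
    # plt.plot(filtered, 'r')
    # plt.show()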
It might be worth a quick try to implement some standard-deviation filtering of your data set. Split your data into N segments and, for each segment, calculate the standard deviation of the Y-values. Then filter out data points whose Y-values exceed 3 standard deviations (or however many you want). Of course, there is some manual work involved in figuring out exactly how many segments to use.
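A quick sketch of that segment-wise standard-deviation filter (Python/NumPy, made-up defaults):

    import numpy as np

    def stddev_filter(y, n_segments=50, k=3.0):
        # Mask points more than k standard deviations from their segment mean.
        keep = np.ones(len(y), dtype=bool)
        for seg in np.array_split(np.arange(len(y)), n_segments):
            mu, sigma = y[seg].mean(), y[seg].std()
            keep[seg] = np.abs(y[seg] - mu) <= k * sigma
        return keep   # boolean mask of points to retain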
