wavelet analysis transforms - signal-processing

How can I design a 4-stage wavelet analysis transform in code that decomposes a 1D voice/speech signal into its lowpass and highpass filter components, using the Haar, Daubechies-4, Morlet, Cauchy and Shannon wavelets?
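A minimal sketch of one way to approach this in Python with PyWavelets, assuming the speech signal is already available as a 1-D array (the synthetic speech array below is only a placeholder). The Haar and Daubechies filters fit a 4-level DWT filter bank directly, while Morlet and Shannon are continuous wavelets and go through pywt.cwt; a Cauchy wavelet is not built into PyWavelets and would have to be defined by hand.

import numpy as np
import pywt

# Placeholder for the speech signal; in practice load a mono recording into a 1-D array.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 200 * t) + 0.3 * np.random.randn(len(t))

# 4-stage (4-level) DWT analysis filter bank: each stage splits the current
# lowpass band into a new approximation (lowpass) and detail (highpass) part.
# 'db2' is the 4-tap Daubechies filter (often called D4); use 'db4' instead if
# "Daubechies-4" is meant as four vanishing moments.
for name in ('haar', 'db2'):
    cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(speech, name, level=4)
    print(name, len(cA4), len(cD4), len(cD3), len(cD2), len(cD1))

# Morlet and Shannon have no orthogonal filter bank, so they are used with the
# continuous wavelet transform instead.
scales = 2.0 ** np.arange(1, 5)            # four dyadic "stages" of scale
coefs, freqs = pywt.cwt(speech, scales, 'morl', sampling_period=1 / fs)
print(coefs.shape)                          # (4, len(speech))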

Related

Using ICA on the MIT-BIH NST dataset

In this dataset, I wanted to use signals with the same unit but different SNRs as the input signals to ICA, i.e.
ica_input = np.array([ record_118e_6(MLII),
record_118e00(MLII),
record_118e06(MLII),
record_118e12(MLII),
record_118e18(MLII)
])
Is this a correct input to ICA?
Can I here consider the signal with different SNR linear mix of noises and true signal?
According to this article (which describes how the dataset is generated by nst), the above input channels are a linear mix of noise and the clean signal, and from plotting one can see that this data is clearly non-Gaussian, hence ICA can be used in this case. Please correct me if I am wrong.
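For what it's worth, here is a minimal sketch of feeding such same-unit/different-SNR channels to scikit-learn's FastICA. The clean signal and noise below are synthetic stand-ins for the NST records (the snippet above only names them), and FastICA expects samples in rows, so the stacked array is transposed.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
clean = np.sin(2 * np.pi * np.arange(n) / 360.0)   # stand-in for the clean MLII signal
noise = rng.standard_normal(n)                     # stand-in for the added NST noise

# Each "channel" is the same clean signal plus a different amount of the same noise,
# mimicking records 118e_6 ... 118e18 (same unit, different SNR).
ica_input = np.array([clean + g * noise for g in (2.0, 1.0, 0.5, 0.25, 0.125)])

# Two underlying sources (clean signal and noise), so ask for two components.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(ica_input.T)           # shape (n_samples, 2)
print(sources.shape)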

Is there a fundamental difference between DSP for Audio Signal Processing and Sensor Signal Processing?

Audio is made up of multiple frequencies occurring at any given time, and we can perform the FFT to get the frequency bins, but what does the concept of frequency mean when it comes to sensor data?
For example, a triaxial accelerometer somehow converts a voltage signal into acceleration readings in m/s^2. Is the FFT performed on those X, Y, Z readings or on the voltages sampled at Fs?
Am I overcomplicating things or is there a difference in mindset when performing DSP for Audio vs Sensor data?
A Fourier transform is a tool for converting functions or signals into something that is easier to work with. It is a mathematical tool. The result can have an easy physical interpretation, but that is not always the case.
Assume you have an object with constant mass m and several periodic sinusoidal forces F_1*sin(c*t), F_2*sin(d*t), ... acting on it. The total force is just the sum of those forces:
F(t) = F_1*sin(c*t) + F_2*sin(d*t) + ...
You get the particle's acceleration using Newton's 2nd law:
m * a(t) = F(t)
=> a(t) = F(t) / m = F_1/m * sin(c*t) + F_2/m * sin(d*t) + ...
Let's assume you measured a(t) but don't know the equation above. If you perform a Fourier transform you can calculate the values of F_1/m, F_2/m, ... . That means the Fourier transform of the acceleration gives the amplitude of the force at each frequency, divided by the object's mass.
This interpretation works because the Fourier transform is linear and so is the superposition of forces (see Newton's 2nd law). If you describe something non-linear, chances are that there is no easy interpretation of the result of the transformation.
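A quick NumPy illustration of that last point (the mass, forces and frequencies below are made up):

import numpy as np

# Two sinusoidal forces (3 N at 5 Hz, 1 N at 20 Hz) acting on a 2 kg mass.
m = 2.0
fs = 1000.0                                # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
a = (3.0 / m) * np.sin(2 * np.pi * 5 * t) + (1.0 / m) * np.sin(2 * np.pi * 20 * t)

# Single-sided amplitude spectrum of the measured acceleration.
A = 2 * np.abs(np.fft.rfft(a)) / len(a)
freqs = np.fft.rfftfreq(len(a), 1 / fs)

# The two spectral peaks sit at 5 Hz and 20 Hz with heights F_1/m = 1.5 and F_2/m = 0.5.
for k in sorted(np.argsort(A)[-2:]):
    print(f"{freqs[k]:.0f} Hz -> amplitude {A[k]:.3f}")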
So when do you perform the FFT? It depends:
If you do it to improve your signal (remove noise), do it on the measured data.
If you want to analyse the physical object (resonances) do it on the acceleration.
(If the conversion is linear, i.e. going from ADC output to m/s^2 is a simple multiplication, it does not matter.)
I hope this makes things a bit clearer (at least from the physical point of view).

Documentation for Python statsmodels seasonal_decompose() using convolution filter

I would like to better understand how the seasonal_decompose() function from the Python statsmodels module works.
The documentation is a little sparse as it only states:
This is a naive decomposition. More sophisticated methods should be preferred.
The additive model is Y[t] = T[t] + S[t] + e[t]
The multiplicative model is Y[t] = T[t] * S[t] * e[t]
The seasonal component is first removed by applying a convolution filter to
the data. The average of this smoothed series for each period is the returned
seasonal component.
Is there any further documentation on this method? How is the trend extracted after estimating the seasonal component? I would prefer exploring theoretical explanations rather than diving directly into the code.
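The docstring is terse, so here is a rough sketch of what a naive additive decomposition along those lines can look like. It mirrors the idea (a centered moving average as the convolution filter, then per-period averages of the detrended series), not statsmodels' exact code, and the function name is mine.

import numpy as np

def naive_additive_decompose(y, period):
    y = np.asarray(y, dtype=float)

    # 1) Trend: centered moving average over one full period (the "convolution filter").
    #    For an even period a 2 x period-MA keeps the window centered.
    if period % 2 == 0:
        kernel = np.concatenate(([0.5], np.ones(period - 1), [0.5])) / period
    else:
        kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")   # statsmodels leaves NaNs at the edges instead

    # 2) Detrend, then average the detrended values at each position within the cycle.
    detrended = y - trend
    seasonal_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal_means -= seasonal_means.mean()       # make the seasonal component average to zero
    seasonal = np.tile(seasonal_means, len(y) // period + 1)[: len(y)]

    # 3) Whatever is left over is the residual.
    resid = y - trend - seasonal
    return trend, seasonal, resid

# Example: a linear trend plus a yearly cycle in monthly data.
t = np.arange(48)
y = 0.1 * t + np.sin(2 * np.pi * t / 12)
trend, seasonal, resid = naive_additive_decompose(y, period=12)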

Discrete Wavelet Transform (Daubechies wavelet) of an array of complex numbers

Say I have a signal represented as an array of real numbers, y = [1,2,0,4,5,6,7,90,5,6]. I can use the Daubechies-4 coefficients D4 = [0.482962, 0.836516, 0.224143, -0.129409] and apply a wavelet transform to obtain the high- and low-frequency components of the signal. So, the high-frequency component will be calculated like this:
high[v] = y[2*v]*D4[0] + y[2*v+1]*D4[1] + y[2*v+2]*D4[2] + y[2*v+3]*D4[3],
and the low-frequency component can be calculated using another permutation of the D4 coefficients.
The question is: what if y is a complex array? Do I just multiply and add complex numbers to obtain the subbands, or is it correct to take the amplitude and phase, treat each of them as a real signal, do the wavelet transform on each, and then restore a complex array for each subband using real_part = abs * cos(phase) and imaginary_part = abs * sin(phase)?
To handle the case of complex data, you're looking at the Complex Wavelet Transform. It's actually a simple extension to the DWT. The most common way to handle complex data is to treat the real and imaginary components as two separate signals and perform a DWT on each component separately. You will then receive the decomposition of the real and imaginary components.
This is commonly known as the Dual-Tree Complex Wavelet Transform. This can best be described by the figure below that I pulled from Wikipedia:
(Figure: block diagram of the dual-tree filter bank; source: Wikipedia.)
It's called "dual-tree" because you have two DWT decompositions happening in parallel - one for the real component and one for the imaginary. In the above diagram, g0/h0 represent the low-pass and high-pass components of the real part of the signal x and g1/h1 represent the low-pass and high-pass components of the imaginary part of the signal x.
Once you decompose the real and imaginary parts into their respective DWT decompositions, you can combine them to get the magnitude and/or phase and proceed to the next step or whatever you desire to do with them.
The mathematical proof regarding the correctness of this is outside the scope of what we're talking about, but if you would like to see how it was derived, I refer you to Kingsbury's canonical 1997 paper, Image Processing with Complex Wavelets - http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=835E60EAF8B1BE4DB34C77FEE9BBBD56?doi=10.1.1.55.3189&rep=rep1&type=pdf. Pay close attention to the noise filtering of images using the CWT - this is probably what you're looking for.
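If it helps, below is a minimal PyWavelets sketch of the "transform the real and imaginary parts separately, then recombine" approach described above. The imaginary part is made up for illustration, and 'db2' is PyWavelets' name for the 4-tap Daubechies (D4) filter; note that convolving with the coefficients in the order quoted in the question gives the approximation (lowpass) band, while the detail (highpass) band uses the reversed, sign-alternated coefficients.

import numpy as np
import pywt

# The real samples from the question, plus a made-up imaginary part for illustration.
y = np.array([1, 2, 0, 4, 5, 6, 7, 90, 5, 6], dtype=complex)
y += 1j * np.array([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])

# As the answer describes: run a real DWT on the real and imaginary parts independently.
cA_re, cD_re = pywt.dwt(y.real, 'db2')
cA_im, cD_im = pywt.dwt(y.imag, 'db2')

# Recombine into complex approximation (lowpass) and detail (highpass) subbands.
cA = cA_re + 1j * cA_im
cD = cD_re + 1j * cD_im
print(np.abs(cA), np.angle(cA))   # magnitude and phase of the lowpass subband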

Specify the timeshift parameter in Matlab's cwt()? (continuous 1-D wavelet transform)

I want to compute the wavelet of a signal with a given scale and timeshift.
In Matlab, using the cwt() function (continuous 1-D wavelet transform) provided in the Wavelet Toolbox, I can specify the scale(s) I want as a parameter to cwt(), and it will return the coefficients for all possible timeshifts:
x = [1, 2, 3, 4];
scales = [3];
wavelet_name = 'db1';
coefs = cwt(x,scales, wavelet_name);
>> coefs =
-1.1553 -1.1553 -1.1553 1.7371
How can I specify the timeshift (instead of having cwt() compute all possible timeshifts)? I'm aiming to reduce the computation time, as I have a bunch of signals to analyze.
[coefs,frequencies] = cwt(x,scales,wname, samplingperiod) returns the
frequencies in cycles per unit time corresponding to the scales and
the analyzing wavelet wname. samplingperiod is a positive real-valued
scalar. If the units of samplingperiod are seconds, the frequencies
are in hertz.
This is taken directly from the Matlab cwt page. I think this might be what you're looking for.
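As far as I can tell, the cwt(x, scales, wname) interface itself has no shift argument and always returns a coefficient for every sample position. If the goal is only to avoid that cost, one workaround is to evaluate the single coefficient directly as the inner product of the signal with the scaled, shifted wavelet. A rough sketch of that idea follows, written in Python/NumPy purely to illustrate the math (the function name, the generic wavelet callable, and the example wavelet are all mine, and this does not reproduce MATLAB's exact discretization):

import numpy as np

def cwt_single(x, scale, shift, wavelet, fs=1.0):
    # One CWT coefficient: sum over t of x(t) * conj(psi((t - shift) / scale)) / sqrt(scale).
    t = np.arange(len(x)) / fs
    psi = wavelet((t - shift) / scale) / np.sqrt(scale)
    return np.sum(x * np.conj(psi)) / fs

# Example with a simple real Morlet-like wavelet at one (scale, shift) pair.
morlet = lambda u: np.exp(-u**2 / 2) * np.cos(5 * u)
x = np.sin(2 * np.pi * 0.05 * np.arange(256))
print(cwt_single(x, scale=10.0, shift=128.0, wavelet=morlet))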
