EEG Wavelet Analysis

I want to do a time-frequency analysis of an EEG signal. I found the GSL wavelet function for computing wavelet coefficients. How can I extract actual frequency bands (e.g. 8-12 Hz) from those coefficients? The GSL manual says:
For the forward transform, the elements of the original array are replaced by the discrete wavelet transform $f_i \to w_{j,k}$ in a packed triangular storage layout, where $j$ is the index of the level, $j = 0 \dots J-1$, and $k$ is the index of the coefficient within each level, $k = 0 \dots 2^j - 1$. The total number of levels is $J = \log_2(n)$.
The output data has the following form: $(s_{-1,0},\; d_{0,0},\; d_{1,0},\; d_{1,1},\; d_{2,0},\; \dots,\; d_{j,k},\; \dots,\; d_{J-1,2^{J-1}-1})$
If I understand that correctly, the output array data[] contains at position 1 (i.e. data[1]) the amplitude of the frequency band 2^0 = 1 Hz, and
data[2] = 2^1 Hz
data[3] = 2^1 Hz
data[4] = 2^2 Hz
up to
data[7] = 2^2 Hz
data[8] = 2^3 Hz
and so on ...
That means I only have the amplitudes for the frequencies 1 Hz, 2 Hz, 4 Hz, 8 Hz, 16 Hz, ... How can I get, e.g., the amplitude of a frequency component oscillating at 5.3 Hz? How can I get the amplitude of a whole frequency range, e.g. the amplitude of 8-13 Hz? Any recommendations on how to get a good time-frequency distribution?

I'm not sure how familiar you are with general signal processing, so I'll try to be clear without spoon-feeding you.
Wavelets are essentially filter banks. Each filter splits a given signal into two non-overlapping, independent high-frequency and low-frequency subbands, such that the signal can then be reconstructed by means of an inverse transform. When such filters are applied repeatedly, you get a tree of filters, with the output of one fed into the next. The simplest and most intuitive way to build such a tree is as follows:
Decompose a signal into low frequency (approximation) and high frequency (detail) components
Take the low frequency component, and perform the same processing on that
Keep going until you've processed the required number of levels
The reason for this is that you can then downsample the resulting approximation signal. For example, if your filter splits a signal with sampling frequency (Fs) 48000 Hz -- which yields a maximum frequency of 24000 Hz by the Nyquist theorem -- into a 0-12000 Hz approximation component and a 12000-24000 Hz detail component, you can then take every second sample of the approximation component without aliasing, essentially decimating the signal. This is widely used in signal and image compression.
According to this description, at level one you split your frequency content down the middle and create two separate signals. Then you take your lower-frequency component and split it down the middle again. You now get three components in total: 0-6000 Hz, 6000-12000 Hz, and 12000-24000 Hz. You see that the two newer components are both half the bandwidth of the first detail component, so the bands get narrower, octave by octave, the lower in frequency you go.
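As a minimal sketch of this dyadic decomposition (using the Haar filter pair purely for simplicity -- a real analysis would use a longer wavelet filter, and all variable names here are made up for illustration):

% Two-level dyadic (Haar) decomposition of a signal x.
% Each stage splits the current approximation into a low-pass
% (approximation) and a high-pass (detail) half-band, decimated by 2.
x = randn(1, 1024);              % stand-in for one EEG channel

haar_split = @(s) deal((s(1:2:end) + s(2:2:end)) / sqrt(2), ...
                       (s(1:2:end) - s(2:2:end)) / sqrt(2));

[a1, d1] = haar_split(x);        % level 1: 0..Fs/4 and Fs/4..Fs/2
[a2, d2] = haar_split(a1);       % level 2: 0..Fs/8 and Fs/8..Fs/4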
This correlates with the bandwidths you describe above (2^1 Hz, 2^2 Hz, 2^3 Hz and so on). However, using a broader definition of a filter bank, we can arrange the above tree structure however we like, and it will still remain a filter bank. For example, we can feed both the approximation and the detail component into further stages, so that each of them is again split into a low-frequency and a high-frequency signal.
If you look at it carefully, you see that both the high- and low-frequency components are split down the middle in frequency, and as a result you get a uniform filter bank.
All bands are then of the same size. By building a uniform filter bank with N levels, you end up with the responses of 2^(N-1) band-pass filters. You can fine-tune your filter bank to eventually give you the desired band (8-13 Hz).
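Continuing the Haar sketch above, the uniform (wavelet-packet style) variant just splits the detail branch as well (ignoring, for simplicity, the band-ordering subtlety that aliasing introduces in the high-pass branch):

[aa, ad] = haar_split(a1);       % 0..Fs/8       and Fs/8..Fs/4
[da, dd] = haar_split(d1);       % Fs/4..3*Fs/8  and 3*Fs/8..Fs/2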
In general, though, I would not advise you to do this with wavelets. You can go through some literature on designing good band-pass filters and simply build a filter that lets through only the 8-13 Hz portion of your EEG signal. That's what I've done before, and it worked quite well for me.
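For instance, a minimal sketch in Octave/MATLAB -- the 256 Hz sampling rate and the variable names are assumptions, and in Octave you need the signal package for butter/filtfilt/hilbert:

pkg load signal;                     % Octave only; provides butter, filtfilt, hilbert

fs = 256;                            % assumed EEG sampling rate (Hz)
eeg = randn(1, 10 * fs);             % stand-in for one EEG channel

% 4th-order Butterworth band-pass for the 8-13 Hz (alpha) band.
[b, a] = butter(4, [8 13] / (fs / 2));

% filtfilt runs the filter forward and backward: zero phase shift,
% which matters if you care about the timing of EEG events.
alpha = filtfilt(b, a, eeg);

% Instantaneous amplitude of the whole 8-13 Hz range via the analytic signal.
alpha_amplitude = abs(hilbert(alpha));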

Related

Resampling signal to get N amount of points from signal

I have a signal and I would like to get n equally spaced points from it.
I was thinking of using resample as a way to do this but I wasn't sure of the correct values to use.
Example: I have a sine wave signal sampled at 8000 Hz but I want to get just 4 equally spaced points from the signal.
fs=8000
len_of_sig=1.0; %length of signal in seconds
t=linspace(0,len_of_sig,fs*len_of_sig);
y=1*sin(1*(2*pi)*t);
spaced_points=resample(y,)
I'm not sure how to calculate the correct values to use to get just n equally spaced points (like 4,5,6...points).
Any ideas? I don't really need to use resample, I just thought that would be the quickest.
I'm using Octave 4.2.2 on Ubuntu 64bit.
According to the documentation, the resample function doesn't need anything aside from the resampling factor itself:
y = resample (x, p, q, h)
Change the sample rate of x by a factor of p/q. This is performed using a polyphase algorithm. The impulse response h of the antialiasing filter is either specified or designed with a Kaiser-windowed sinc.
Suppose you have the variable ndesired_samples, which specifies how many samples you want in the end. Let nsamp = fs*len_of_sig.
The resampling factor is given by ndesired_samples/nsamp, hence p is the number of desired samples, and q is the number of total samples. Note that resample divides p and q by their GCD internally.
Beware of issues stemming from the polyphase structure and the Kaiser interpolation window. IIRC these are especially bad if p and q end up large after GCD reduction (e.g. resampling 10000 samples to 8000 is fine, but resampling 10000 samples to 8001 warrants further caution).
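Applied to the example in the question, a sketch (in Octave, resample lives in the signal package):

pkg load signal;

fs = 8000;
len_of_sig = 1.0;                    % length of signal in seconds
t = linspace(0, len_of_sig, fs * len_of_sig);
y = sin(2 * pi * t);

nsamp = fs * len_of_sig;             % total number of samples (8000)
ndesired_samples = 4;                % the n equally spaced points you want

% Resample by a factor of p/q = ndesired_samples/nsamp.
spaced_points = resample(y, ndesired_samples, nsamp);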

How to calculate 512 point FFT using 2048 point FFT hardware module

I have a 2048-point FFT IP. How may I use it to calculate a 512-point FFT?
There are different ways to accomplish this, but the simplest is to replicate the input data 4 times to obtain a signal of 2048 samples. Note that the DFT (which is what the FFT computes) can be seen as assuming the input signal is replicated infinitely. Thus, we are just providing a larger "view" of this infinitely long periodic signal.
The resulting FFT will have 512 non-zero values, with zeros in between. Each of the non-zero values will also be four times as large as the 512-point FFT would have produced, because there are four times as many input samples (that is, if the normalization is as commonly applied, with no normalization in the forward transform and 1/N normalization in the inverse transform).
Here is a proof of principle in MATLAB:
data = randn(1,512);
ft = fft(data); % 512-point FFT
data = repmat(data,1,4);
ft2 = fft(data); % 2048-point FFT
ft2 = ft2(1:4:end) / 4; % 512-point FFT
assert(all(ft2==ft))
(Very surprising that the values were exactly equal; no differences due to numerical precision appeared in this case!)
An alternative to the correct solution provided by Cris Luengo, which does not require any rescaling, is to pad the data with zeros to the required length of 2048 samples. You then get your result by reading every fourth output (2048/512 = 4), i.e. output[0], output[4], output[8], ... in a 0-based indexing system.
Since you mention making use of a hardware module, this could be implemented in hardware by connecting the first 512 input pins and grounding all other inputs, and reading every 4th output pin (ignoring all other output pins).
Note that this works because the FFT of the zero-padded signal is an interpolation in the frequency domain of the original signal's FFT. In this case you do not need the interpolated values, so you can just ignore them. Here's an example computing a 4-point FFT using a 16-point module (I've reduced the size of the FFT for brevity, but kept the same ratio of 4 between the two):
x = [1,2,3,4]
fft(x)
ans> 10.+0.j,
-2.+2.j,
-2.+0.j,
-2.-2.j
x = [1,2,3,4,0,0,0,0,0,0,0,0,0,0,0,0]
fft(x)
ans> 10.+0.j, 6.499-6.582j, -0.414-7.242j, -4.051-2.438j,
-2.+2.j, 1.808+1.804j, 2.414-1.242j, -0.257-2.3395j,
-2.+0.j, -0.257+2.339j, 2.414+1.2426j, 1.808-1.8042j,
-2.-2.j, -4.051+2.438j, -0.414+7.2426j, 6.499+6.5822j
As you can see in the second output, the first column (which corresponds to outputs 0, 4, 8 and 12) is identical to the desired output from the first, smaller-sized FFT.
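For completeness, the zero-padding route can be checked at the original sizes in the same MATLAB style as the first answer's snippet (a tolerance is used in the assert, since exact floating-point equality is not guaranteed):

data = randn(1, 512);
ft = fft(data);                      % 512-point FFT
ft2 = fft([data, zeros(1, 1536)]);   % 2048-point FFT of the zero-padded input
ft2 = ft2(1:4:end);                  % keep every 4th bin; no rescaling needed
assert(max(abs(ft2 - ft)) < 1e-10 * max(abs(ft)))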

Filters for FFT signals for analysis and MFCC

Below is the code I have been writing to try to create a mel triangular filter bank.
I start with the 300-8000 Hz range, convert the frequencies to mels, and then convert the mels back into frequencies to get the FFT bin numbers.
clear all;
g = [300 8000];                          % low frequency and fs/2 for the highest frequency
freq2mel = 1125 * log(1 + (g / 700));    % converting the frequency edges to the mel scale
% answer: [401.25 2834.99]
f = linspace(freq2mel(1), freq2mel(2), 12);  % for 10 filter banks we use the two
                                             % endpoints and put 10 banks between them
% answer: [401.25 622.50 843.75 1065.00 1286.25 1507.50 1728.74 1949.99
%          2171.24 2392.49 2613.74 2834.99]
mel2freq = 700 * (exp(f / 1125) - 1);    % converting the mels back into frequency
% answer: [300 517.33 781.90 1103.97 1496.04 1973.32 2554.33 3261.62
%          4122.63 5170.76 6446.70 8000]
fft_bins = floor((mel2freq / 16000) * 512);  % mapping the edge frequencies to FFT bins
% answer: [9 16 25 35 47 63 81 104 132 165 206 256]
My issue is this: I am stuck after this step. I keep seeing the filter bank piecewise function below come up, but I do not understand what $k$ is in this function:
$$ H_m(k) = \begin{cases} 0 & k < f(m-1) \\ \dfrac{k - f(m-1)}{f(m) - f(m-1)} & f(m-1) \le k \le f(m) \\ \dfrac{f(m+1) - k}{f(m+1) - f(m)} & f(m) \le k \le f(m+1) \\ 0 & k > f(m+1) \end{cases} $$
Is $k$ the array of $|\mathrm{FFT}|^2$ values from the Hamming-windowed frame? And how do I get the actual filter with a triangular output of magnitude 1, to pass the $|\mathrm{FFT}|^2$ through and get my MFCCs? Can someone please help me out?
Normally when you do this kind of filtering, you have your spectrum in a 1D array, and you have the Mel filter bank as a 2D matrix, with one dimension matching the FFT bins of your spectrum array and the other dimension being your target Mel bands. You multiply them and get your 1D Mel spectrum.
The $H_m$ function really just describes a triangle centered around $m$, where $m$ is the center Mel band and $k$ is the frequency, running from 0 to Fs/2. In theory, the $k$ parameter should be continuous. You can assume that $k$ is an FFT bin and it will kind of work, but you will not get great results at low frequencies, where an entire Mel band covers only 1 or 2 FFT bins. If you need better resolution than that, you will have to consider how much of the triangle a particular FFT bin covers.
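A minimal Octave/MATLAB sketch of that 2D matrix, continuing from the fft_bins vector computed in the question (nfft = 512 and the 10 bands carry over from there; the commented-out lines at the end are the usual next steps, shown only as a hint):

nfft = 512;                          % FFT size from the question
nbands = 10;                         % number of Mel bands
% fft_bins holds the 12 band-edge bins computed above: [9 16 25 ... 256]

H = zeros(nbands, nfft/2 + 1);       % filter bank: one row per Mel band
for m = 1:nbands
  lo = fft_bins(m); ctr = fft_bins(m+1); hi = fft_bins(m+2);
  for k = lo:ctr
    H(m, k+1) = (k - lo) / (ctr - lo);      % rising edge of the triangle
  end
  for k = ctr:hi
    H(m, k+1) = (hi - k) / (hi - ctr);      % falling edge of the triangle
  end
end

% powspec: (nfft/2+1) x 1 column of |FFT|^2 values of one windowed frame
% melspec = H * powspec;             % 10 x 1 Mel spectrum
% mfccs   = dct(log(melspec));       % log + DCT then gives the MFCCs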

Implementing a neural network classifier for my data, but is it solvable this way?

I will try to explain what the problem is.
I have 5 materials, each composed of 3 different minerals out of a set of 10 different minerals. For each material I have measured intensity vs. wavelength. Each intensity-vs-wavelength vector can be mapped to a binary vector of ones and zeros corresponding to the minerals the material is composed of.
So material 1 has an intensity of [0.51 0.53 0.57 0.68...... ] measured at different wavelengths [470 480 490 500 510 ......] and a binary vector
[1 0 0 0 1 0 0 1 0 0]
and so on for each material.
For each material I have 5000 examples, so 25000 examples in total. Each example will have a 'similar' intensity-vs-wavelength behaviour but will give the 'same' binary vector.
I want to design a NN classifier so that if I give it as an input the intensity vs wavelength, it gives me the corresponding binary vector.
The intensity vs wavelength has a length of 450 so I will have 450 units in the input layer
the binary vector has a length of 10, so 10 output neurons
the hidden layer/s will have as a beginning 200 neurons.
Can I simply design an NN classifier this way, and would it solve the problem, or do I need something else?
You can do that; however, make sure to use the right cost function and output-layer activation. In your case, you should use sigmoid units for your output layer and binary cross-entropy as the cost function.
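As a sketch of what that pairing looks like numerically (variable names are made up; the point is one independent sigmoid per mineral plus summed binary cross-entropy):

% Z: pre-activations of the 10 output units, one row per example
% Y: the corresponding 10-element binary target vectors
sigmoid = @(Z) 1 ./ (1 + exp(-Z));

% Binary cross-entropy, summed over the 10 independent outputs and
% averaged over examples; eps guards against log(0).
bce = @(Z, Y) -mean(sum(Y .* log(sigmoid(Z) + eps) ...
                        + (1 - Y) .* log(1 - sigmoid(Z) + eps), 2));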
Another way to go about this would be to use one-hot encoding, so that you can use normal multi-class classification (this will probably not make sense here, since your output is probably sparse).

N step fft in D language

I am using the fft function from std.numeric:
Complex!double[] resultfft = fft(timeDomainAmplitudeVal);
The parameter timeDomainAmplitudeVal is audio amplitude data. The sample rate is 44100 Hz and there are 131072 (2^17) samples.
I am seeing that resultfft has the same size as timeDomainAmplitudeVal (131072), which does not fit my project (and also makes no sense to me). I need to be able to divide the FFT into N equally spaced frequencies, and I need this N to be defined by me.
Is there any way to implement this with std.numeric.fft, or do you have any advice on FFT libraries?
PS: I would also be glad to hear about any DSP libraries.
That's just how Fourier transforms work in the practical number-crunching world: give it S samples of signal, and you get S amplitudes back. (Ignoring issues with complex numbers and symmetries.)
If you want N amplitudes, you'll have to interpolate the S amplitudes you get from the FFT. Your biggest decision is choosing between linear, cubic, truncated sinc, etc.
Alternatively: resample the original audio signal to have your desired N samples in the same overall time interval, then FFT it.
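Both routes are language-agnostic; here is a sketch in Octave/MATLAB notation (the value of N is made up, and resample needs Octave's signal package):

pkg load signal;

S = 131072;                          % original number of samples
N = 1000;                            % number of frequency points you want
x = randn(1, S);                     % stand-in for the audio data

% Route 1: FFT first, then interpolate the S magnitudes down to N points.
mag  = abs(fft(x));
magN = interp1(1:S, mag, linspace(1, S, N));   % linear; cubic etc. also work

% Route 2: resample the signal to N samples first, then FFT it.
xN    = resample(x, N, S);
magN2 = abs(fft(xN));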
Take a look at pfft, a fast FFT written in D:
http://jerro.github.io/pfft/doc/pfft.pfft.html
or numpy & Pyd
http://docs.scipy.org/doc/numpy/reference/routines.fft.html
http://pyd.dsource.org/
HTH
It is absolutely normal that the FFT gives back the same data length.
Here is some C++ code to perform windowed FFT analysis with overlap and optional "zero-phase" ordering: http://pastebin.com/4YKgbed1
What do FFT coefficients mean?
Question: "OK, so I've done the FFT and I'm told I can recover the original signal. Now, what are these coefficients?"
Answer: "You can think of coefficient i as representing the phase and amplitude of frequencies from SR*i/(2*N) to SR*(i+1)/(2*N). This is a helpful metaphor. But a more accurate view is that coefficient i is the contribution of a sine of frequency SR*i/(2*N) in a reconstruction of the original input chunk."
