Remove DC offset from a sine wave

I am performing FFT signal processing on a sine wave, and I have to remove the DC offset from the signal first. How can I do this in C++?
I tried using a - mean(a), where a is the sine wave, but it gives incorrect results. In MATLAB this is very easy using the detrend function.
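For what it's worth, subtracting the mean does remove a constant DC offset. A minimal sketch of the idea, shown in numpy here for brevity (the same arithmetic ports directly to a C++ loop; the sample rate and offset are illustration values):

    import numpy as np

    fs, f, n = 1000.0, 50.0, 1000            # 1 s of data = 50 whole periods
    t = np.arange(n) / fs
    a = 2.5 + np.sin(2 * np.pi * f * t)      # sine riding on a 2.5 DC offset

    a_nodc = a - a.mean()                    # mean subtraction removes the DC
    print(a_nodc.mean())                     # ~0: the offset is gone

One caveat: over a non-integer number of periods the sine's own mean is nonzero, so subtracting the buffer mean removes a little of the signal along with the offset, which may be why the results looked wrong. Note also that MATLAB's detrend removes a linear trend by default, not just the mean.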

Related

How can a non-sine waveform equate to a combination that includes DC voltage?

It has been found that any repeating, non-sinusoidal waveform can be equated to a combination of DC voltage, sine waves, and/or cosine waves (sine waves with a 90 degree phase shift) at various amplitudes and frequencies.
The excerpt above is from allaboutcircuits.com.
I understand that a non-sine wave can be broken down into a combination of harmonic sine waves. What I don't understand is how it can also contain a DC voltage component, which isn't alternating like a sine wave.
Could someone help clarify? Thanks
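One way to see it: the DC term in a Fourier decomposition is simply the average value of the waveform over one period. A quick numpy illustration with a square wave that switches between 1 V and 0 V (so its average is 0.5 V):

    import numpy as np

    n = 1024
    square = (np.arange(n) < n // 2).astype(float)   # one period: 1 V, then 0 V

    spectrum = np.fft.fft(square) / n
    print(spectrum[0].real)          # 0.5 -> the DC component = the average value

    ac = square - spectrum[0].real   # what remains is a zero-mean AC waveform
    print(abs(ac.mean()) < 1e-12)    # True

Because this waveform never goes negative, no sum of pure (zero-mean) sines could ever reproduce it; the constant 0.5 V term is the "DC voltage" part of the combination.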

Strange FFT spectrum from a near perfect sinusoid

I have retrieved a signal from my Abaqus simulation for verification purposes. The true signal should be a perfect sinusoid at 300 kHz, and I performed an FFT on the sampled signal using scipy.fftpack.fft.
But I got a strange spectrum, as shown below (sorry that I am too lazy to scale the x-axis of the spectrum to the correct frequency). In the same figure, I sliced the signal into pieces and plotted them in the time domain. I also repeated the same process for a pure sine wave.
This totally surprises me. As indicated below in the code, the sampling frequency is 16.66× the frequency of the signal. At the moment, I think it is due to the very small error in the sampling period. In theory, Abaqus should sample at a regular time interval. As you can see, there is some small error, so the dots in my signal appear thicker than in the perfect signal. But does such a small error make a striking difference in the frequency spectrum? If not, why does the frequency spectrum look like that?
FYI1: This is the magnified FFT spectrum of my signal:
FYI2: This is the Python code that was used to produce the figures above:
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft

mysignal = np.load('mysignal.npy')  # the attached .npy signal from FYI3

def myfft(x, k, label):
    plt.plot(np.abs(fft(x))[0:k], label=label)
    plt.legend()

plt.subplot(4, 1, 1)
for i in range(149800 // 200):
    plt.plot(mysignal[200 * i:200 * (i + 1)], 'bo')
plt.subplot(4, 1, 2)
myfft(mysignal, 150000 // 2, 'fft of my signal')
plt.subplot(4, 1, 3)
[Fs, f, sample] = [5e6, 300000, 150000]
x = np.arange(sample)
y = np.sin(2 * np.pi * f * x / Fs)
for i in range(149800 // 200):
    plt.plot(y[200 * i:200 * (i + 1)], 'bo')
plt.subplot(4, 1, 4)
myfft(y, 150000 // 2, 'fft of a perfect signal')
plt.subplots_adjust(top=2, right=2)
FYI3: Here is my signal in .npy and .txt format. The signal is pretty long. It has 150001 points. The .txt one is the raw file from Abaqus. The .npy format is what I used to produce the above plot - (1) the time vector is removed and (2) the data is in half precision and normalized.
Any standard FFT algorithm you use operates on the assumption that the signal you provide is uniformly sampled, i.e. equally spaced in time. Your signal is clearly not uniformly sampled, so the FFT does not "see" a perfect sine but a distorted version. As a consequence, you see all the additional spectral components the FFT computes to map your distorted signal to the frequency domain. You have two options now: resample your signal so that it is uniformly sampled and use your off-the-shelf FFT, or take a non-uniform FFT to get your spectrum. Here is one library you could use to calculate your non-uniform FFT.
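As a sketch of the resampling option, assuming the raw .txt file holds a time column and an amplitude column (the filename and column layout here are guesses), linear interpolation onto a uniform grid with np.interp is often enough:

    import numpy as np
    from scipy.fftpack import fft

    data = np.loadtxt('mysignal.txt')   # hypothetical: col 0 = time, col 1 = amplitude
    t, a = data[:, 0], data[:, 1]

    # resample onto an exactly uniform time grid spanning the same interval
    t_uni = np.linspace(t[0], t[-1], len(t))
    a_uni = np.interp(t_uni, t, a)

    spectrum = np.abs(fft(a_uni))       # now the uniform-sampling assumption holds

Higher-order interpolation (e.g. scipy.interpolate.interp1d with kind='cubic') reduces the interpolation error further if the linear version still leaves artifacts.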

Calculate autocorrelation in time domain vs FFT

I'm doing pitch detection using a combination of an ACF and an AMDF.
First I was using the ACF in the time domain, like this:
Get a buffer of 2048 samples
Window it (Hamming window)
sum=Sum(Buffer[i]*Buffer[i+lag]) for all i < 2048 - lag
acf = sum / 2048
And repeat the last 2 steps for all lags to be considered. (actually doing interpolation for non-integer lags)
Now I found that you can use FFT to calculate the ACF:
Get a buffer of 2048 samples
Window it (Hamming window)
fftBuf=fft(buffer)
buffer[i]=real(fftBuf[i])^2+imag(fftBuf[i])^2
fftBuf=fft(buffer) //ifft=fft for real signals
acfBuf = real(fftBuf) / 2048
Then acfBuf[lag] is the ACF value at that lag.
I expected that the results would be the same, or at least similar. But they are not.
E.g. for a 65.4 Hz sine wave (note C2) I get ~0.2 at the corresponding lag of 674.25 using the time-domain approach, and ~536.795 using the FFT.
What did I miss? Or aren't the two the same?
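For comparison, here is a numpy sketch of both methods side by side (the 44.1 kHz sample rate is an assumption, implied by a lag of ~674 for 65.4 Hz). Two details matter: the FFT method computes a circular autocorrelation unless you zero-pad to at least twice the buffer length, and applying fft in place of ifft to the real, even sequence |X|² scales the result by the transform length (2048 here), which alone accounts for most of the discrepancy:

    import numpy as np

    N, fs = 2048, 44100.0
    t = np.arange(N) / fs
    buf = np.sin(2 * np.pi * 65.4 * t) * np.hamming(N)   # windowed buffer

    # time-domain ACF, as described above
    acf_td = np.array([np.dot(buf[:N - lag], buf[lag:]) / N for lag in range(N)])

    # FFT-based ACF: zero-pad to 2N (linear, not circular, correlation)
    # and use the inverse transform, not a second forward FFT
    X = np.fft.fft(buf, 2 * N)
    acf_fft = np.fft.ifft(np.abs(X) ** 2).real[:N] / N

    print(np.allclose(acf_td, acf_fft))                  # True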

Frequency Shifting with FFT

I've been experimenting with a few different techniques that I can find for frequency shifting (specifically, I want to shift high-frequency signals to a lower frequency). At the moment I'm trying to use this technique:
Take the original signal, x(t), and multiply it by cos(2 π dF t) and sin(2 π dF t):
R(t) = x(t) cos(2 π dF t)
I(t) = x(t) sin(2 π dF t)
where dF is the delta frequency to be shifted.
Now you have two time-series signals: R(t) and I(t).
Conduct a complex Fourier transform using R(t) as the real part and I(t) as the imaginary part. The result will be a frequency-shifted spectrum.
I have interpreted this as the following code:
int x = 0;  /* assumed initialized to 0: interleaving offset for real/imag pairs */
for (j = 0; j < (BUFFERSIZE / 2); j++)
{
    Partfunc = ((double)j) / 2048;
    /* real part: x(t)*cos(2*pi*dF*t); imaginary part: x(t)*sin(2*pi*dF*t) */
    PreFFTShift[j + x]     = PingData[j] * (cos(2 * M_PI * Shift * Partfunc));
    PreFFTShift[j + 1 + x] = PingData[j] * (sin(2 * M_PI * Shift * Partfunc));
    x++;
}

//INITIALIZE FFT
status = arm_cfft_radix4_init_f32(&S, fftSize, ifftFlag, doBitReverse);
//FFT on FFTData
arm_cfft_radix4_f32(&S, PreFFTShift);
This builds me an array with interleaved real and imaginary data, which is then FFT'd. I then inverse the FFT, but the output I'm getting is pretty garbled. The results seem huge in comparison to what I think they should be, and although there are a few traces of a frequency-shifted signal, it's hard to tell as the result seems mostly pretty noisy.
I've also attempted simply rotating the array values of a standard FFT of my original signal to get a frequency shift, but to no avail. Is there a better method for doing this?
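To make the quoted technique concrete, here is a small numpy sketch (sample rate and frequencies are arbitrary illustration values). Forming R(t) + j·I(t) is the same as multiplying by a complex exponential, and the sign of the exponent sets the shift direction: e^(-j 2 π dF t) moves the spectrum down by dF:

    import numpy as np

    fs = 48000.0
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * 5000.0 * t)          # 5 kHz test tone
    dF = 2000.0                                 # shift down by 2 kHz

    # R(t) + j*I(t) = x(t) * e^{-j 2 pi dF t}
    mixed = x * np.exp(-1j * 2 * np.pi * dF * t)

    spectrum = np.abs(np.fft.fft(mixed))
    peak = np.argmax(spectrum[:2048])           # look at 0 .. fs/2 only
    print(peak * fs / len(t))                   # ~3000 Hz: the tone moved down by dF

Note that a real input has mirrored positive- and negative-frequency components and mixing shifts both, so the unwanted image (here at -7 kHz) normally has to be filtered out.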
Have you tried something like:
Use a Hanning window on each framed block of data.
Once you have your windowed frame of audio data, do an FFT on it.
Do some kind of transformation in the frequency domain (you can use Flanagan's phase vocoder).
Go back to the time domain with an IFFT.
Apply a Hanning window to the IFFT data.
Use overlap-add for each new frame of time-domain data into the output stream.
My results:
I created two concatenated sinusoids (250 Hz and 400 Hz) and moved each one octave up!
The blue waveform is the original and the red is the shifted one; you can see a fade-in/fade-out caused by the overlap-add and the Hann window!
If you want the frequency shift to sound more "natural", you will have to maintain the ratios between all the initial frequency bins, where the amount of shift will depend on the FFT bin, thus requiring lots of interpolation. The Phase Vocoder algorithm will use multiple FFTs to reduce phase distortion in the result.
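As a crude illustration of the frame/FFT/IFFT/overlap-add pipeline described above, the sketch below shifts every bin up by a fixed offset (the window size, hop, and frequencies are arbitrary choices). It deliberately skips the per-bin phase correction, which is exactly what a real phase vocoder adds to avoid frame-boundary phase distortion:

    import numpy as np

    def stft_shift(x, fs, shift_hz, n=2048, hop=512):
        # Hanning-windowed frames, FFT, slide bins, IFFT, windowed overlap-add.
        w = np.hanning(n)
        k = int(round(shift_hz * n / fs))         # shift quantized to whole bins
        y = np.zeros(len(x))
        for start in range(0, len(x) - n, hop):
            frame = np.fft.rfft(x[start:start + n] * w)
            shifted = np.zeros_like(frame)
            shifted[k:] = frame[:len(frame) - k]  # move every bin up by k
            y[start:start + n] += np.fft.irfft(shifted) * w
        return y

    fs = 44100
    t = np.arange(fs) / fs
    y = stft_shift(np.sin(2 * np.pi * 440.0 * t), fs, 220.0)  # comes out near 655 Hz

Without the phase correction, the frames interfere audibly; and because the shift is additive rather than multiplicative, it does not preserve harmonic ratios, which is the point made above.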

How to properly draw a sine wave?

I need to draw a sine wave (the user chooses the frequency). My current approach is drawing lines between calculated points (points that, of course, correspond to the sine wave's values). This works, but unfortunately only for lower frequencies. I think the problem is that my sampling frequency (the scale between my screen's coordinates and 1 unit) is about 50, so I can correctly draw sine waves only with frequencies up to 25 Hz (per the Nyquist–Shannon sampling theorem).
Here are some screenshots so you can see what I am talking about:
http://imgur.com/a/pL2WP
So instead of these incorrect graphs when the frequency is too high, I would like to get something like this:
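One common fix, sketched below under the assumption that you can draw per-pixel vertical lines: evaluate many samples per horizontal pixel and draw the min-max envelope of each pixel column, so a high-frequency wave renders as the solid band you would expect rather than an aliased zig-zag:

    import numpy as np
    import matplotlib.pyplot as plt

    def draw_sine(freq, width_px=800, duration=1.0, oversample=64):
        # many samples per pixel column, then one vertical min-max line per column
        t = np.linspace(0.0, duration, width_px * oversample)
        y = np.sin(2 * np.pi * freq * t)
        cols = y.reshape(width_px, oversample)
        plt.vlines(np.arange(width_px), cols.min(axis=1), cols.max(axis=1))
        plt.show()

    draw_sine(200.0)   # well above the ~25 Hz naive limit, still renders cleanly

Equivalently, you can keep drawing connected line segments but increase the number of calculated points with the frequency, so the effective sampling rate always stays above the Nyquist rate of the wave being drawn.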
