I have read various posts here at StackOverflow regarding the execution of FFT on accelerometer data, but none of them helped me understand my problem.
I am executing this FFT implementation on my accelerometer data array in the following way:
int length = data.size();
double[] re = new double[256];
double[] im = new double[256];

// copy the samples into the real part; the imaginary part stays zero
for (int i = 0; i < length; i++) {
    re[i] = data.get(i);
}

FFT fft = new FFT(256);
fft.fft(re, im);

// magnitude of the first half of the spectrum (up to the Nyquist frequency)
float[] outputData = new float[256];
for (int i = 0; i < 128; i++) {
    outputData[i] = (float) Math.sqrt(re[i] * re[i] + im[i] * im[i]);
}
I plotted the contents of outputData (left) and also used R to perform the FFT on my data (right).
What am I doing wrong here? I am using the same code for executing the FFT that I see in other places.
EDIT: Following the advice of @PaulR to apply a windowing function, and the link provided by @BjornRoche (http://baumdevblog.blogspot.com.br/2010/11/butterworth-lowpass-filter-coefficients.html), I was able to solve my problem. The solution is pretty much what is described in that link. This is my graph now: http://imgur.com/wGs43
The low frequency artefacts are probably due to a lack of windowing. Try applying a window function.
The overall shift is probably due to different scaling factors in the two different FFT implementations - my guess is that you are seeing a shift of 24 dB which corresponds to a difference in scaling by a factor of 256.
Because all of your data on the left are above 0, the signal has a large DC component as far as frequency analysis is concerned. After the FFT, that DC component shows up as a very large value in the first bin. For your case, you only need to cut off the DC bin and keep the bins above 0 Hz (the AC part of the signal); that should give a sensible result.
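Putting both answers together, a minimal sketch (shown in C++; the sample buffer and the FFT routine themselves are placeholders for whatever you use on your platform) of removing the DC offset and applying a Hann window before the FFT:

#include <cmath>
#include <cstddef>
#include <vector>

// Subtract the mean (DC offset) and apply a Hann window so the first FFT bin
// no longer dominates and spectral leakage is reduced.
std::vector<double> prepareForFft(const std::vector<double>& samples)
{
    const double pi = 3.14159265358979323846;
    const std::size_t n = samples.size();
    if (n < 2) return samples;

    double mean = 0.0;
    for (double s : samples) mean += s;
    mean /= static_cast<double>(n);

    std::vector<double> windowed(n);
    for (std::size_t i = 0; i < n; ++i) {
        double w = 0.5 * (1.0 - std::cos(2.0 * pi * i / (n - 1)));
        windowed[i] = (samples[i] - mean) * w;
    }
    return windowed;   // feed this into the real input of the FFT, imaginary part = 0
}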
What I am doing is trying to implement a Skin Probability Maps algorithm for skin detection in OpenCV.
I'm stuck at the point where I should compare the SkinHistValue / NonSkinHistValue probability of each pixel with the Theta threshold, according to http://www.cse.unsw.edu.au/~icml2002/workshops/MLCV02/MLCV02-Morales.pdf and this tutorial http://www.morethantechnical.com/2013/03/05/skin-detection-with-probability-maps-and-elliptical-boundaries-opencv-wcode/
My problem lies in calculating the coordinates (the histogram bin indices) for the histogram value:
From the tutorial:
calcHist(&nRGB_frame,1,channels,mask,skin_Histogram,2,histSize,ranges,uniform,accumulate);
calcHist(&nRGB_frame,1,channels,~mask,non_skin_Histogram,2,histSize,ranges,uniform,accumulate);
calculates the histograms. Then I normalize them.
And after that:
for (int i = 0; i < nrgb.rows; i++) {
    int gbin = cvRound((nrgb(i)[1] - 0) / range_dist[0] * hist_bins[0]);
    int rbin = cvRound((nrgb(i)[2] - low_range[1]) / range_dist[1] * hist_bins[1]);

    float skin_hist_val = skin_Histogram.at<float>(gbin, rbin);
};
where nrgb is my image, and I'm trying to get skin_hist_val for it. But gbin and rbin are probably calculated incorrectly, and an exception is thrown (am I indexing outside the array?) when it reaches
skin_Histogram.at<float>(gbin,rbin);
I honestly have no idea how to calculate it correctly. Any help?
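In case it helps, here is a hedged sketch of how the per-pixel bin lookup is usually done. The theta parameter and the assumption that nrgb is an 8-bit, 3-channel Mat whose histograms cover the full 0-255 range are mine, so adjust them to match how calcHist was actually called; the key points are iterating over both rows and columns and clamping the indices so .at<float>() never reads outside the histogram.

#include <algorithm>
#include <opencv2/opencv.hpp>

// Classify each pixel by the skin / non-skin likelihood ratio.
cv::Mat classifySkin(const cv::Mat& nrgb,
                     const cv::Mat& skin_Histogram,
                     const cv::Mat& non_skin_Histogram,
                     float theta)
{
    const int hist_bins[2] = {skin_Histogram.rows, skin_Histogram.cols};
    const float range_dist[2] = {256.f, 256.f};   // assumed channel ranges

    cv::Mat mask(nrgb.rows, nrgb.cols, CV_8UC1, cv::Scalar(0));
    for (int i = 0; i < nrgb.rows; ++i) {
        for (int j = 0; j < nrgb.cols; ++j) {
            const cv::Vec3b px = nrgb.at<cv::Vec3b>(i, j);

            int gbin = cvRound(px[1] / range_dist[0] * hist_bins[0]);
            int rbin = cvRound(px[2] / range_dist[1] * hist_bins[1]);
            // clamp so a value at the top of the range stays inside the histogram
            gbin = std::min(gbin, hist_bins[0] - 1);
            rbin = std::min(rbin, hist_bins[1] - 1);

            float skin     = skin_Histogram.at<float>(gbin, rbin);
            float non_skin = non_skin_Histogram.at<float>(gbin, rbin);
            if (non_skin > 0.f && skin / non_skin > theta)
                mask.at<uchar>(i, j) = 255;
        }
    }
    return mask;
}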
I am using the SVM implementation of OpenCV (based on LibSVM) on iOS. Is it possible to obtain the weight vector after training?
Thank you!
After working on it, I was able to obtain the weights. To get the weights, one first has to retrieve the support vectors and then sum them, each multiplied by its alpha value.
// get the SVM weights by multiplying the support vectors by the alpha values
int numSupportVectors = SVM.get_support_vector_count();
const float *supportVector;
const CvSVMDecisionFunc *dec = SVM.decision_func;
svmWeights = (float *) calloc(numOfFeatures + 1, sizeof(float));

for (int i = 0; i < numSupportVectors; ++i)
{
    float alpha = *(dec[0].alpha + i);
    supportVector = SVM.get_support_vector(i);
    for (int j = 0; j < numOfFeatures; ++j)
        *(svmWeights + j) += alpha * *(supportVector + j);
}
*(svmWeights + numOfFeatures) = -dec[0].rho;   // be careful with the sign of the bias!
The only trick here is that the decision_func instance variable is protected in the OpenCV framework, so I had to change that in order to access it.
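As a quick sanity check, the extracted weights can be plugged into the linear decision function and compared against CvSVM::predict on a few samples. This helper is only a sketch; the variable layout mirrors the snippet above, and which sign maps to which class label is an assumption worth verifying, since it depends on the order in which CvSVM encountered the classes.

// Evaluate w.x + b using the weights extracted above; the bias (-rho) is
// stored in the last slot of svmWeights.
float decisionValue(const float *svmWeights, const float *sample, int numOfFeatures)
{
    float score = svmWeights[numOfFeatures];
    for (int j = 0; j < numOfFeatures; ++j)
        score += svmWeights[j] * sample[j];
    return score;   // compare its sign with the label returned by SVM.predict()
}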
A cursory glance at the docs and the source code (https://github.com/Itseez/opencv/blob/master/modules/ml/src/svm.cpp) tells me that, on the surface, the answer is "No". The hyperplane parameters seem to be tucked away in the CvSVMSolver class. CvSVM contains an object of this class called "solver". See if you can get to its members.
I'm working on an app that should do some audio signal processing. I need to measure the audio level in each one of the buffers I get (through the callback function). I've been searching the web for some time, and I found that there is a built-in property called current level metering:
AudioQueueGetProperty(recordState->queue,kAudioQueueProperty_CurrentLevelMeter,meters,&dlen);
This property gets me the average or peak audio level, but it's not synchronised to the current buffer.
I figured I need to calculate the audio level from the buffer data myself, so I wrote this:
double calcAudioRMS(SInt16 *audioData, int numOfSamples)
{
    double RMS, adPercent;
    RMS = 0;
    for (int i = 0; i < numOfSamples; i++)
    {
        adPercent = audioData[i] / 32768.0f;
        RMS += adPercent * adPercent;
    }
    RMS = sqrt(RMS / numOfSamples);
    return RMS;
}
This function takes the audio data (cast to SInt16) and the number of samples in the current buffer. The numbers I get are indeed between 0 and 1, but they seem rather random and low compared to the numbers I got from the built-in audio level metering.
The recording audio format is:
format->mSampleRate = 8000.0;
format->mFormatID = kAudioFormatLinearPCM;
format->mFramesPerPacket = 1;
format->mChannelsPerFrame = 1;
format->mBytesPerFrame = 2;
format->mBytesPerPacket = 2;
format->mBitsPerChannel = 16;
format->mReserved = 0;
format->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |kLinearPCMFormatFlagIsPacked;
My question is: how do I get the right values from the buffer? Is there a built-in function or property for this? Or should I calculate the audio level myself, and if so, how?
Thanks in advance.
Your calculation for RMS power is correct. I'd be inclined to say that you have fewer samples than Apple does, or something similar, and that would explain the difference. You can check by inputting a loud sine wave and verifying that both you and Apple calculate the RMS power as 1/sqrt(2).
Unless there's a good reason, I would use Apple's power calculations. I've used them, and they seem good to me. Additionally, you generally don't want raw RMS power; you want RMS power as decibels, or you can use the kAudioQueueProperty_CurrentLevelMeterDB constant. (This depends on whether you're trying to build an audio meter or truly display the audio power.)
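For completeness, if you do keep your own calculation, converting the RMS value to dBFS (decibels relative to full scale, where 0 dBFS is the loudest possible level) makes it directly comparable to what the metering property reports. A rough sketch, with an arbitrary floor to avoid taking the logarithm of zero:

#include <cmath>

// Convert an RMS value in the 0..1 range to dBFS.
double rmsToDecibels(double rms)
{
    const double kFloor = 1e-9;   // roughly -180 dBFS, treated as silence
    if (rms < kFloor) rms = kFloor;
    return 20.0 * std::log10(rms);
}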
I need to compute the sum of the elements in each column separately.
Right now I'm using the following, where the matrix cross_corr is the one to be summed:
Mat cross_corr_summed;
for (int i = 0; i < cross_corr.cols; i++)
{
    double column_sum = 0;
    for (int k = 0; k < cross_corr.rows; k++)
    {
        column_sum += cross_corr.at<float>(k, i);
    }
    cross_corr_summed.push_back(column_sum);
}
The problem is that my program takes quite a long time to run, and this is one of the parts I suspect of causing the slowdown.
Can you suggest a faster implementation?
Thanks!!!
You need cv::reduce:
// dim = 0 collapses the matrix to a single row, i.e. one sum per column;
// for a float input the destination depth must be CV_32F or CV_64F
cv::reduce(cross_corr, cross_corr_summed, 0, CV_REDUCE_SUM, CV_32F);
If you know that your data is continuous and single-channeled, you can access the matrix data directly:
int width = cross_corr.cols;
float *data = (float *) cross_corr.data;
Mat cross_corr_summed;
for (int i = 0; i < cross_corr.cols; i++)
{
    double column_sum = 0;
    for (int k = 0; k < cross_corr.rows; k++)
    {
        column_sum += data[i + k * width];
    }
    cross_corr_summed.push_back(column_sum);
}
which will be faster than your use of .at<float>(). In general I avoid the use of .at() whenever possible because it is slower than direct access.
Also, although cv::reduce() (suggested by Andrey) is much more readable, I have found it is slower than even your implementation in some cases.
Mat originalMatrix;
Mat columnSum;
for (int i = 0; i < originalMatrix.cols; i++)
    columnSum.push_back(cv::sum(originalMatrix.col(i))[0]);
Is there any built-in functionality for sliding a window (of custom size) over an image in OpenCV version 2.x?
I tried to write the algorithm myself, but I found it very painful and probably error-prone.
I need to slide over an image and create a histogram of each window as input for an SVM.
There is one for the HOG descriptor, which calculates HOG features, but I have my own feature set, so I just need an algorithm that lets me slide over an image.
You can define a Region of Interest (ROI) on a cv::Mat object, which gives you a new Mat object referring to the sub-window. This does not copy the underlying data; it merely creates a new header with the appropriate metadata.
cv::Mat::operator()
See also this other question:
OpenCV C++, getting Region Of Interest (ROI) using cv::Mat
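A minimal sketch of what that looks like (the file name, window position, and size here are arbitrary placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    // 0 = load as grayscale; any image file will do for this illustration
    cv::Mat img = cv::imread("image.png", 0);

    // A 64x64 window whose top-left corner is at (x, y) = (10, 20).
    // "window" shares its pixel data with img; nothing is copied.
    cv::Rect roi(10, 20, 64, 64);
    cv::Mat window = img(roi);

    return 0;
}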
Basic code can look like the following; I hope the code is described well enough.
This is a single-scale sliding window of 60x60 with a step of 30.
The result of this simple example is the ROI for each window position.
You can also visit this basic tutorial: Tutorial Here.
// Parameters of your sliding window
int windows_n_rows = 60;
int windows_n_cols = 60;
// Step of each window
int StepSlide = 30;

for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
    {
        // Rect takes (x, y, width, height), i.e. (col, row, n_cols, n_rows)
        Rect windows(col, row, windows_n_cols, windows_n_rows);
        Mat Roi = LoadedImage(windows);
    }
}
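Since the question mentions building histograms as SVM input, here is a hedged sketch of what each Roi could be turned into; the 32-bin grayscale histogram is only an example (it assumes LoadedImage, and therefore Roi, is a single-channel 8-bit image), so substitute your own feature computation:

#include <opencv2/opencv.hpp>

// Turn one window into a normalized 32-bin grayscale histogram that can be
// used as one row of an SVM feature matrix. Assumes roi is CV_8UC1.
cv::Mat windowHistogram(const cv::Mat& roi)
{
    int histSize = 32;
    float range[] = {0, 256};
    const float* histRange = {range};

    cv::Mat hist;
    cv::calcHist(&roi, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);   // bins sum to 1

    return hist.reshape(1, 1);   // 1 x histSize CV_32F row vector
}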