I am working on an app that measures the motion of a device (x, y, and z directions), and now I need to use FFTW to filter the data.
I don't know how to pass the data to FFTW. Below is the part of my code that processes the X data (I am working on each direction separately, so X, then Y, then Z).
// FFTW for X-data
int SIZE = 97;
fftw_complex *dataX, *fft_resultX;
fftw_plan plan_X;
int i ;
dataX = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * SIZE);
fft_resultX = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * SIZE);
plan_X = fftw_plan_dft_1d(SIZE, dataX, fft_resultX,
FFTW_FORWARD, FFTW_ESTIMATE); // alternatively FFTW_MEASURE
for( i = 0 ; i < SIZE ; i++ ) {
dataX[i][0] = 1.0; // real part
dataX[i][1] = 0.0; // imaginary part
}
for( i = 0 ; i < SIZE ; i++ ) {
fprintf( stdout, "dataX[%d] = { %2.2f, %2.2f }\n",
i, dataX[i][0], dataX[i][1] );
}
fftw_execute( plan_X);
for( i = 0 ; i < SIZE ; i++ ) {
fprintf( stdout, "fft_resultX[%d] = { %2.2f, %2.2f }\n",
i, fft_resultX[i][0], fft_resultX[i][1] );
}
And here is the userAcceleration call:
[[weakSelf.graphViews objectAtIndex:kDeviceMotionGraphTypeUserAcceleration] addX:deviceMotion.userAcceleration.x y:deviceMotion.userAcceleration.y z:deviceMotion.userAcceleration.z];
For example, when I write:
dataX= deviceMotion.userAcceleration.x;
I get this error:
Assigning to 'fftw_complex *' (aka '_Complex double *') from incompatible type 'double'
Any idea how to make FFTW work on this data?
Thanks for any help.
You can't simply assign real data to a complex pointer: each fftw_complex value is made up of two doubles (the real and imaginary parts).
You need to store all your acceleration data into a larger array (256 entries, for example), where each x value is assigned to the real part of a complex number and 0 is assigned to the imaginary part.
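A minimal sketch of that approach, assuming SIZE samples of deviceMotion.userAcceleration.x have already been collected into a plain double buffer (accelX and transformX below are made-up names for illustration):

#include <fftw3.h>

#define SIZE 256

double accelX[SIZE]; // hypothetical buffer filled from deviceMotion.userAcceleration.x

void transformX(void)
{
    fftw_complex *dataX = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * SIZE);
    fftw_complex *fft_resultX = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * SIZE);
    fftw_plan plan_X = fftw_plan_dft_1d(SIZE, dataX, fft_resultX,
                                        FFTW_FORWARD, FFTW_ESTIMATE);

    // Pack the real samples into the complex input:
    // real part = sample, imaginary part = 0.
    for (int i = 0; i < SIZE; i++) {
        dataX[i][0] = accelX[i]; // real
        dataX[i][1] = 0.0;       // imaginary
    }

    fftw_execute(plan_X);
    // fft_resultX[0..SIZE-1] now holds the spectrum; apply your filter here.

    fftw_destroy_plan(plan_X);
    fftw_free(dataX);
    fftw_free(fft_resultX);
}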
I have a real 2D matrix. I am taking its FFT using FFTW, but the result of a real-to-complex FFT differs from that of a complex-to-complex FFT whose input has the imaginary part set to zero.
real matrix
0 1 2
3 4 5
6 7 8
result of real to complex fft
36 -4.5+2.59808i -13.5+7.79423i
0 -13.5-7.79423i 0
0 0 0
Code:
int r = 3, c = 3;
int sz = r * c;
double *in = (double*) malloc(sizeof(double) * sz);
fftw_complex *out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
fftw_plan p = fftw_plan_dft_r2c_2d(r, c, in, out, FFTW_MEASURE);
for ( int i=0; i<r; ++i ){
for ( int j=0; j<c; ++j ){
in[i*c+j] = i*c + j;
}
}
fftw_execute(p);
using a complex matrix with imaginary part of zero
complex matrix
0+0i 1+0i 2+0i
3+0i 4+0i 5+0i
6+0i 7+0i 8+0i
result of complex to complex fft
36 -4.5 + 2.59808i -4.5 - 2.59808i
-13.5 + 7.79423i 0 0
-13.5 - 7.79423i 0 0
Code:
int r = 3, c = 3;
int sz = r * c;
fftw_complex *out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
fftw_complex *inc = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
fftw_plan p = fftw_plan_dft_2d( r, c, inc, out, FFTW_FORWARD, FFTW_MEASURE);
for ( int i=0; i<r; ++i ){
for ( int j=0; j<c; ++j ){
inc[i*c+j][0] = i*c+j;
inc[i*c+j][1] = 0;
}
}
fftw_execute(p);
I am after the result of the complex-to-complex FFT, but the real-to-complex FFT is much faster and my data is real. Am I making a programming mistake, or should the results differ?
As indicated in the FFTW documentation:
Then, after an r2c transform, the output is an n_0 × n_1 × n_2 × … × (n_{d-1}/2 + 1) array of fftw_complex values in row-major order
In other words, the output for your real-to-complex transform of your sample real matrix really is:
36 -4.5+2.59808i
-13.5+7.79423i 0
-13.5-7.79423i 0
You may notice that these two columns match exactly the first two columns of your complex-to-complex transform. The missing column is omitted from the real-to-complex transform since it is redundant due to symmetry. As such, the full 3x3 matrix including the missing column could be constructed using:
fftw_complex *outfull = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
int outc = (c/2+1);
for ( int i=0; i<r; ++i ){
// copy existing columns
for ( int j=0; j<outc; ++j ){
outfull[i*c+j][0] = out[i*outc+j][0];
outfull[i*c+j][1] = out[i*outc+j][1];
}
// generate missing column(s) from symmetry
for ( int j=outc; j<c; ++j){
int row = (r-i)%r;
int col = c-j;
outfull[i*c+j][0] = out[row*outc+col][0];
outfull[i*c+j][1] = -out[row*outc+col][1];
}
}
I am trying to take data from an accelerometer and apply Kiss FFT to the samples. I'm using a Freescale Kinetis FRDM-K22F board. I want to use 64 samples, but when I run the program I get an error saying "kiss fft usage error: improper alloc". I started turning down the sample size and saw that the FFT does work with 32 samples, but with 33 samples the program just stops and returns no errors. Giving it any more samples produces similar results.
I played around with how I set up the FFT and followed a few websites and forum posts:
KissFFT output of kiss_fftr
http://digiphd.com/programming-reconstruction-fast-fourier-transform-real-signal-kiss-fft-libraries/
Kiss FFT on a dsPIC33
From what I can see, I haven't done anything different from what the above websites and forums have done. I've included my code below. Any help or advice is greatly appreciated.
void Sample_RUN()
{
int size = 64;
kiss_fft_scalar zero;
memset(&zero,0,sizeof(zero));
kiss_fft_cpx fft_in[size];
kiss_fft_cpx fft_out[size];
kiss_fftr_cfg fft = kiss_fftr_alloc(size*2 ,0 ,NULL,NULL);
signed short samples[size];
for (int i = 0; i < size; i++) {
fft_in[i].r = zero;
fft_in[i].i = zero;
fft_out[i].r = zero;
fft_out[i].i = zero;
}
printf("Data Collection Begins \r\n");
for(int j = 0; j < size; j++)
{
for(;;)
{
dr_status = My_I2C_ReadByte(STATUS_REG);
dr_status = (dr_status & 0x04);
if (dr_status == 0x04)
{
//READING FROM ACCEL OUTPUT DATA REGISTERS
AccelData[0] = My_I2C_ReadByte(OUT_X_MSB_REG);
AccelData[1] = My_I2C_ReadByte(OUT_X_LSB_REG);
AccelData[2] = My_I2C_ReadByte(OUT_Y_MSB_REG);
AccelData[3] = My_I2C_ReadByte(OUT_Y_LSB_REG);
AccelData[4] = My_I2C_ReadByte(OUT_Z_MSB_REG);
AccelData[5] = My_I2C_ReadByte(OUT_Z_LSB_REG);
// 14-bit accelerometer data
Xout_Accel_14_bit = ((signed short) (AccelData[0]<<8 | AccelData[1])) >> 2; // Compute 16-bit X-axis acceleration output value
Yout_Accel_14_bit = ((signed short) (AccelData[2]<<8 | AccelData[3])) >> 2; // Compute 16-bit Y-axis acceleration output value
Zout_Accel_14_bit = ((signed short) (AccelData[4]<<8 | AccelData[5])) >> 2; // Compute 16-bit Z-axis acceleration output value
mag_accel = sqrt(pow(Xout_Accel_14_bit, 2) + pow(Yout_Accel_14_bit, 2) + pow(Zout_Accel_14_bit, 2) );
printf("%d \r\n", mag_accel);
samples[j] = mag_accel;
break;
} // end if
} // end infinite for
} // end for
for (int j = 0; j < size; j++)
{
fft_in[j].r = samples[j];
fft_in[j].i = zero;
fft_out[j].r = zero;
fft_out[j].i = zero;
}
printf("Executing FFT\r\n");
kiss_fftr(fft, (kiss_fft_scalar*) fft_in, fft_out);
printf("Printing FFT Outputs\r\n");
for(int j = 0; j < size; j++)
{
printf("%d \r\n", fft_out[j].r);
}
kiss_fft_cleanup();
free(fft);
} // end Sample_RUN
Sounds like you are running out of memory. I am not familiar with that chip, but perhaps you should use the last two arguments of kiss_fft_alloc (or kiss_fftr_alloc here) so you can skip the heap allocation.
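For what it's worth, a minimal sketch of that approach (the 16 KB pool size is just an illustrative guess; kiss_fftr_alloc reports the real requirement at runtime, as shown below):

#include "kiss_fftr.h"

#define NFFT 128

// A static pool so the cfg lives in .bss instead of on the heap.
static char fft_pool[16384];

kiss_fftr_cfg make_cfg(void)
{
    size_t lenmem = sizeof(fft_pool);
    // With mem != NULL and *lenmem large enough, kiss_fftr_alloc places
    // the cfg inside fft_pool and performs no heap allocation at all.
    kiss_fftr_cfg cfg = kiss_fftr_alloc(NFFT, 0, fft_pool, &lenmem);
    if (cfg == NULL) {
        // lenmem now holds the required size; enlarge fft_pool to match.
    }
    return cfg;
}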
Basically I've got a loop which goes through all the Kinect's depth pixels. If a pixel is greater than 3000 mm, it sets the pixel value to black.
For some reason this works only at close range while pointed at a wall. If I pull the Kinect back (giving it a larger area to scan), I get a bad memory allocation error. My code can be found below; I get the error inside the try/catch statement. Most of the code is from the OpenCV Kinect samples here and here.
I figured out the problem: the depth values are stored in a flat array instead of a matrix, so I needed a better way of working out which location in the array a pixel's (x, y) coordinates point to than i = x + y*640.
#include <opencv.hpp>
#include <iostream>
#include <string>
#include <stdio.h>
#include <OpenNI.h>
using namespace std;
using namespace cv;
int main()
{
openni::Device device;
openni::VideoStream depth;
const char* device_uri = openni::ANY_DEVICE;
openni::Status ret = openni::OpenNI::initialize();
// Open
ret =device.open( device_uri );
ret = depth.create( device, openni::SENSOR_DEPTH );
if ( ret == openni::STATUS_OK )
{
// Start Depth
depth.start();
}
// Get Depth Stream Min-Max Value
int minDepthValue = depth.getMinPixelValue();
int maxDepthValue = depth.getMaxPixelValue();
//cout << "Depth min-Max Value : " << minDepthValue << "-" << maxDepthValue << endl;
// Frame Information Reference
openni::VideoFrameRef depthFrame;
// Get Sensor Resolution Information
int dImgWidth = depth.getVideoMode().getResolutionX();
int dImgHeight = depth.getVideoMode().getResolutionY();
// Depth Image Matrix
cv::Mat dImg = cv::Mat( dImgHeight, dImgWidth, CV_8UC3 );
Mat grey = cvCreateImage(cvSize(640, 480), 8, 1);
for(;;)
{
depth.readFrame( &depthFrame );
openni::DepthPixel* depthImgRaw = (openni::DepthPixel*)depthFrame.getData();
for ( int i = 0 ; i < ( depthFrame.getDataSize() / sizeof( openni::DepthPixel ) ) ; i++ )
{
int idx = i * 3; // Grayscale
unsigned char* data = &dImg.data[idx];
int gray_scale = ( ( depthImgRaw[i] * 255 ) / ( maxDepthValue - minDepthValue ) );
data[0] = (unsigned char)~gray_scale;
data[1] = (unsigned char)~gray_scale;
data[2] = (unsigned char)~gray_scale;
}
openni::DepthPixel* depthpixels = (openni::DepthPixel*)depthFrame.getData();
cvtColor(dImg, grey, CV_RGB2GRAY);
int i ;
try{
for( int y =0; y < 480 ; y++){
//getting in to each pixel in a row
for(int x = 0; x < 640; x++){
//getting out the corresponding pixel value from the array
i = x+y*640;
if (depthpixels[i] >3000)
{
grey.at<unsigned char>(x,y) = 0;
}
}
}
}catch(exception e)
{cout << e.what() <<endl ;
cout <<depthpixels[i] <<endl ;
cout << i <<endl ;
}
// cv:imshow( "depth", dImg );
imshow("dpeth2", grey);
int k = cvWaitKey( 30 ); // About 30fps
if ( k == 0x1b )
break;
}
// Destroy Streams
depth.destroy();
// Close Device
device.close();
// Shutdown OpenNI
openni::OpenNI::shutdown();
return 0;
}
Solved the problem simply by swapping my x and y around: cv::Mat::at takes (row, column), i.e. (y, x).
i = 0; // reset the flat index into the depth array
for( y = 0; y < 480 ; y++)
{
//getting in to each pixel in a row
for( x = 0; x < 640; x++)
{
if (depthpixels[i]>1500)
{
grey.at<unsigned char >(y,x) = 0;
}
if (depthpixels[i] <500)
{
grey.at<unsigned char >(y,x) = 0;
}
i++;
}
}
I am using an OpenCV 1.0 based calibration toolbox to which I am making small additions. My additions require the FFTW library (OpenCV has DFT functions, but they aren't to my liking).
I have been trying to access the pixel values of an image and store them in an fftw_complex array. I have tried a lot of different suggestions (including the OpenCV documentation) but have been unable to do this properly.
The code below doesn't bring up any type inconsistencies during the build or while debugging; however, the pixel values stored in "testarray" are just a repetition of the values [13, 240, 173, 186]. Does anyone know how to access the pixel values and store them in FFTW-compliant matrices/containers?
//.....................................//
//For image manipulation
IplImage* im1 = cvCreateImage(cvSize(400,400),IPL_DEPTH_8U,1);
int width = im1 -> width;
int height = im1 -> height;
int step = im1 -> widthStep/sizeof(uchar);
int fft_size = width *height;
//Setup pointers to images
uchar *im_data = (uchar *)im1->imageData;
//......................................//
fftw_complex testarray[subIM_size][subIM_size]; //size of complex FFTW array
im1= cvLoadImage(FILEname,0);
if (!im1)printf("Could not load image file");
//Load imagedata into FFTW arrays
for( i = 0 ; i < height ; i++ ) {
for( j = 0 ; j < width ; j++) {
testarray[i][j].re = double (im_data[i * step + j]);
testarray[i][j].im = 0.0;
}
}
I found out the problem: I had been using the wrong approach to access the pixel data. This is what I used instead:
testarray[i][j].re = ((uchar*)(im1->imageData + i *im1->widthStep))[j]; //double (im_data[i * step + j]);
I am using C++ in Visual Studio 2008, and this is the way I use it.
If we have a loop like this for going through the image:
for (int y = 0 ; y < height; y++){
for (int x = 0 ; x < width ; x++){
Then the access to the fftw variable (let's call it A) will be done as follows. Note the index is y * width + x for row-major data (the original height * y + x only happens to work for square images), and this assumes the rows are contiguous, i.e. widthStep == width:
A[ y * width + x ][0] = double (im_data[ y * width + x ]); // real part
A[ y * width + x ][1] = 0;                                 // imaginary part
Hope it helps!
Antonio
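Putting the two answers together, a self-contained sketch under the same assumptions (FFTW3's [0]/[1] element access; "input.png" is a placeholder filename). Note that in the original snippet im_data was taken from im1 before cvLoadImage reassigned im1, so it still pointed at the blank 400x400 image; grabbing the pointer after loading avoids exactly the repeating-garbage symptom described above:

#include <cv.h>
#include <highgui.h>
#include <fftw3.h>
#include <stdio.h>

int main()
{
    IplImage* im1 = cvLoadImage("input.png", 0); // load as 8-bit, single channel
    if (!im1) { printf("Could not load image file\n"); return 1; }

    int width  = im1->width;
    int height = im1->height;
    fftw_complex* A = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * width * height);

    // widthStep accounts for row padding, so step through imageData row by row.
    for (int y = 0; y < height; y++) {
        uchar* row = (uchar*)(im1->imageData + y * im1->widthStep);
        for (int x = 0; x < width; x++) {
            A[y * width + x][0] = (double) row[x]; // real part
            A[y * width + x][1] = 0.0;             // imaginary part
        }
    }
    // ... create a plan and execute the FFT on A as usual ...

    fftw_free(A);
    cvReleaseImage(&im1);
    return 0;
}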
I'm using the Hough transform in OpenCV to detect lines. However, I know in advance that I only need lines within a very limited range of angles (about 10 degrees or so). I'm doing this in a very performance sensitive setting, so I'd like to avoid the extra work spent detecting lines at other angles, lines I know in advance I don't care about.
I could extract the Hough source from OpenCV and just hack it to take min_theta and max_theta parameters, but I'd like a less fragile approach (I'd have to manually update my code with each OpenCV update, etc.).
What's the best approach here?
Well, I've modified the icvHoughLinesStandard function to scan only a certain range of angles. I'm sure there are cleaner ways that play with memory allocation as well, but I got a speed gain, going from 100 ms to 33 ms when narrowing the angle range from 180 deg to 60 deg, so I'm happy with that.
Note that this code also outputs the accumulator value. Also, I only output one line because that fit my purposes, but there was no real gain there.
static void
icvHoughLinesStandard2( const CvMat* img, float rho, float theta,
int threshold, CvSeq *lines, int linesMax )
{
cv::AutoBuffer<int> _accum, _sort_buf;
cv::AutoBuffer<float> _tabSin, _tabCos;
const uchar* image;
int step, width, height;
int numangle, numrho;
int total = 0;
float ang;
int r, n;
int i, j;
float irho = 1 / rho;
double scale;
CV_Assert( CV_IS_MAT(img) && CV_MAT_TYPE(img->type) == CV_8UC1 );
image = img->data.ptr;
step = img->step;
width = img->cols;
height = img->rows;
numangle = cvRound(CV_PI / theta);
numrho = cvRound(((width + height) * 2 + 1) / rho);
_accum.allocate((numangle+2) * (numrho+2));
_sort_buf.allocate(numangle * numrho);
_tabSin.allocate(numangle);
_tabCos.allocate(numangle);
int *accum = _accum, *sort_buf = _sort_buf;
float *tabSin = _tabSin, *tabCos = _tabCos;
memset( accum, 0, sizeof(accum[0]) * (numangle+2) * (numrho+2) );
// find n and ang limits (in our case we want 60 to 120 degrees)
float limit_min = 60.0/180.0*CV_PI;
float limit_max = 120.0/180.0*CV_PI;
//num_steps = (limit_max - limit_min)/theta;
int start_n = floor(limit_min/theta);
int stop_n = floor(limit_max/theta);
for( ang = limit_min, n = start_n; n < stop_n; ang += theta, n++ )
{
tabSin[n] = (float)(sin(ang) * irho);
tabCos[n] = (float)(cos(ang) * irho);
}
// stage 1. fill accumulator
for( i = 0; i < height; i++ )
for( j = 0; j < width; j++ )
{
if( image[i * step + j] != 0 )
//
for( n = start_n; n < stop_n; n++ )
{
r = cvRound( j * tabCos[n] + i * tabSin[n] );
r += (numrho - 1) / 2;
accum[(n+1) * (numrho+2) + r+1]++;
}
}
int max_accum = 0;
int max_ind = 0;
for( r = 0; r < numrho; r++ )
{
for( n = start_n; n < stop_n; n++ )
{
int base = (n+1) * (numrho+2) + r+1;
if (accum[base] > max_accum)
{
max_accum = accum[base];
max_ind = base;
}
}
}
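// CvLinePolar2 is assumed to be the author's copy of OpenCV's CvLinePolar struct, extended with a votes field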
CvLinePolar2 line;
scale = 1./(numrho+2);
int idx = max_ind;
n = cvFloor(idx*scale) - 1;
r = idx - (n+1)*(numrho+2) - 1;
line.rho = (r - (numrho - 1)*0.5f) * rho;
line.angle = n * theta;
line.votes = accum[idx];
cvSeqPush( lines, &line );
}
If you use the probabilistic Hough transform, the output is in the form of a cvPoint for each of the lines[0] and lines[1] parameters. We can get the x and y coordinates for each of the two points via pt1.x, pt1.y and pt2.x, pt2.y.
Then use the simple formula for the slope of a line, (y2 - y1)/(x2 - x1). Taking the arctangent of that yields the angle in radians. Then simply filter the desired angles from the values obtained for each Hough line.
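A minimal sketch of that idea with the C++ API (the HoughLinesP threshold, minimum-length, and gap values below are placeholder assumptions):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only segments whose angle lies within [minAngle, maxAngle] (radians).
std::vector<cv::Vec4i> filterByAngle(const cv::Mat& edges,
                                     double minAngle, double maxAngle)
{
    std::vector<cv::Vec4i> lines, kept;
    cv::HoughLinesP(edges, lines, 1, CV_PI/180, 50, 30, 10);
    for (const cv::Vec4i& l : lines) {
        // atan2 handles vertical segments, unlike a raw (y2-y1)/(x2-x1) slope.
        double angle = std::atan2((double)(l[3] - l[1]), (double)(l[2] - l[0]));
        if (angle >= minAngle && angle <= maxAngle)
            kept.push_back(l);
    }
    return kept;
}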
I think it's more natural to use the standard HoughLines(...) function, which gives a collection of lines directly in rho and theta terms, and to select the necessary angle range from it, rather than recalculating the angle from segment end points.
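For instance, a small sketch along those lines (the accumulator threshold of 100 is an assumption):

#include <opencv2/opencv.hpp>
#include <vector>

// Keep only lines whose theta lies within [minTheta, maxTheta] (radians).
std::vector<cv::Vec2f> linesInRange(const cv::Mat& edges,
                                    float minTheta, float maxTheta)
{
    std::vector<cv::Vec2f> lines, kept;
    cv::HoughLines(edges, lines, 1, CV_PI/180, 100);
    for (const cv::Vec2f& l : lines) {
        float theta = l[1]; // l[0] is rho, l[1] is theta in [0, pi)
        if (theta >= minTheta && theta <= maxTheta)
            kept.push_back(l);
    }
    return kept;
}

Newer OpenCV releases (3.0 and later) also expose min_theta and max_theta parameters on HoughLines itself, which restricts the accumulator sweep directly instead of filtering afterwards.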