What's wrong with cvUndistortPoints? - OpenCV

float k[] = {1531.49, 0, 1267.78, 0, 1521.439, 952.078, 0, 0, 1};
float d[] = {-0.27149, 0.15384, 0.0046, -0.0026};
CvMat camera1 = cvMat(3, 3, CV_32FC2, k);
CvMat distCoeffs1 = cvMat(1, 4, CV_32FC2, d);
const int npoints = 4; // number of points specified
// Points initialization.
// Only 2 points in this example, in real code they are read from file.
float input_points[npoints][4] = {{0, 0}, {2560, 1920}}; // the rest will be set to 0
CvMat *src = cvCreateMat(1, npoints, CV_32FC2);
CvMat *dst = cvCreateMat(1, npoints, CV_32FC2);
// fill src matrix
float *src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    for (int ci = 0; ci < 2; ++ci) {
        *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
    }
}
cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
I want to use the cvUndistortPoints function and used the example code above to test it. When I run it in VS2012, it doesn't work; it says "src.size doesn't match the dst.size". I am a rookie in OpenCV. Can someone help me?
Thank you.
Screenshot: the result of running under VS2012.

Again, please use OpenCV's C++ API, not the deprecated C one:
Mat_<float> cam(3,3); cam << 1531.49, 0, 1267.78, 0, 1521.439, 952.078, 0, 0, 1;
Mat_<float> dist(1,4); dist << -0.27149, 0.15384, 0.0046, -0.0026;
const int npoints = 4; // number of points specified
// Points initialization.
// Only 2 points in this example; in real code they are read from a file.
Mat_<Point2f> points(1, npoints);
points(0) = Point2f(0, 0);
points(1) = Point2f(2560, 1920);
Mat dst; // leave empty, opencv will fill it.
undistortPoints(points, dst, cam, dist);
cerr << dst;
[-0.90952414, -0.69702172, 0.92829341, 0.69035494, -0.90952414, -0.69702172, -0.90952414, -0.69702172]
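Note that undistortPoints returns normalized image coordinates, which is why the values hover around ±1 (the two uninitialized points evidently contained (0,0), so they repeat the first result). A minimal follow-up sketch, assuming you want pixel coordinates back: pass the camera matrix again as the new projection matrix P.
Mat dstPix;
// with P == cam, the undistorted points are reprojected into pixel coordinates
undistortPoints(points, dstPix, cam, dist, noArray(), cam);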

Related

OpenCV cv::Mat not returning the same result

int sizeOfChannel = (_width / 2) * (_height / 2);
double* channel_gr = new double[sizeOfChannel];
// filling the data into channel_gr....
cv::Mat my(_width/2, _height/2, CV_32F, channel_gr);
cv::Mat src(_width/2, _height/2, CV_32F);
for (int i = 0; i < (_width/2) * (_height/2); ++i)
{
    src.at<float>(i) = channel_gr[i];
}
cv::imshow("src", src);
cv::imshow("my", my);
cv::waitKey(0);
I'm wondering why I'm not getting the same image in the "my" and "src" imshow windows.
Update:
I have changed my array to double*; still the same result.
I think it has something to do with steps?
Screenshot: "my" image output
Screenshot: "src" image output
This one works for me:
int halfWidth = _width/2;
int halfHeight = _height/2;
int sizeOfChannel = halfHeight*halfWidth;
// ******************************* //
// you use CV_32FC1 later so it is single precision float
float* channel_gr = new float[sizeOfChannel];
// filling the data into channel_gr....
for(int i=0; i<sizeOfChannel; ++i) channel_gr[i] = i/(float)sizeOfChannel;
// ******************************* //
// changed row/col ordering, but this shouldn't be important
cv::Mat my(halfHeight, halfWidth, CV_32FC1, channel_gr);
cv::Mat src(halfHeight, halfWidth, CV_32FC1);
// ******************************* //
// changed from 1D indexing to 2D indexing
for(int y=0; y<src.rows; ++y)
    for(int x=0; x<src.cols; ++x)
    {
        int arrayPos = y*halfWidth + x;
        // you have a 2D mat so access it in 2D
        src.at<float>(y,x) = channel_gr[arrayPos];
    }
cv::imshow("src", src);
cv::imshow("my", my);
// check for differences
cv::imshow("diff1 > 0", src-my > 0);
cv::imshow("diff2 > 0", my-src > 0);
cv::waitKey(0);
The Mat 'my' is an array of floats, but you give it a pointer to an array of doubles. There is no way it can read the data from that array properly.
It seems that the constructor version that you are using is
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
(from the OpenCV docs). You declare the Mat as float (CV_32F) but hand it channel_gr, which is declared as double. Isn't that some form of precision loss?
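A minimal sketch of a fix that keeps the buffer in double precision, reusing halfWidth/halfHeight from the answer above: wrap the double data in a CV_64F header (no copy), then convert explicitly to CV_32F for display.
cv::Mat wrapped(halfHeight, halfWidth, CV_64F, channel_gr); // header over the existing double buffer
cv::Mat my32f;
wrapped.convertTo(my32f, CV_32F); // explicit type conversion instead of reinterpretation
cv::imshow("my", my32f);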

How to copy frame data from FFmpegSource2 (FFMS2) FFMS_Frame struct to OpenCV Mat?

I'm trying to read video file using FFmpegSource2 (FFMS2) and then process frames using OpenCV. What is the proper and efficient way to copy frame data from a FFMS_Frame struct returned by FFMS_GetFrame function to an OpenCV Mat?
Thank you very much in advance.
For now I am using the following procedure which works for the BGR color format.
Step 1. I use FFMS_SetOutputFormatV2 and FFMS_GetPixFmt( "bgra" ) to set the output pixel format of FFMS to BGRA.
int anPixFmts[2];
anPixFmts[0] = FFMS_GetPixFmt( "bgra" );
anPixFmts[1] = -1;
if( FFMS_SetOutputFormatV2( pstFfmsVidSrc, anPixFmts,
        pstFfmsFrameProps->EncodedWidth, pstFfmsFrameProps->EncodedHeight,
        FFMS_RESIZER_BICUBIC, &stFfmsErrInfo ) )
{
    // handle error
}
Step 2. Read the desired frame using FFMS_GetFrame.
int nCurFrameNum = 5;
const FFMS_Frame *pstCurFfmsFrame = FFMS_GetFrame( pstFfmsVidSrc, nCurFrameNum, &stFfmsErrInfo );
Step 3. Copy data from pstCurFfmsFrame to an OpenCV Mat, oMatA:
Mat oMatA;
oMatA = Mat::zeros( pstCurFfmsFrame->EncodedHeight, pstCurFfmsFrame->EncodedWidth, CV_8UC3 );
int nDi = 0;
for( int nRi = 0; nRi < oMatA.rows; nRi++ )
{
    for( int nCi = 0; nCi < oMatA.cols; nCi++ )
    {
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 0] = pstCurFfmsFrame->Data[0][nDi++]; // B
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 1] = pstCurFfmsFrame->Data[0][nDi++]; // G
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 2] = pstCurFfmsFrame->Data[0][nDi++]; // R
        nDi++; // skip the alpha byte of the BGRA input
    }
}
This code would have to be changed to support other color formats (e.g. planar formats like YV12 use more than one plane of pstCurFfmsFrame->Data). Maybe someone could provide a full function supporting most of the color formats and an efficient way to copy data from pstCurFfmsFrame->Data to an OpenCV Mat.
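In the meantime, here is a hedged sketch of a more efficient copy for this BGRA case: wrap the FFMS buffer in a Mat header using the frame's stride, then let cvtColor write an owned BGR copy. This assumes the output format was set to "bgra" as in Step 1; Data[0] and Linesize[0] come from the FFMS_Frame struct.
cv::Mat oMatBgra( pstCurFfmsFrame->EncodedHeight, pstCurFfmsFrame->EncodedWidth,
                  CV_8UC4, (void*)pstCurFfmsFrame->Data[0], pstCurFfmsFrame->Linesize[0] );
cv::Mat oMatBgr;
cv::cvtColor( oMatBgra, oMatBgr, CV_BGRA2BGR ); // copies the data out of the FFMS buffer
Unlike the per-pixel loop above, this also respects Linesize, which the loop silently assumes equals EncodedWidth * 4.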

OpenCV Expectation Maximization

I am trying to use EM in OpenCV 2.4.5 for background and foreground image separation. However, unlike the previous C version of the class, the C++ one is very confusing to me, and several routines are rather confusing due to the lack of documentation (from my point of view).
I wrote the following code, but it does not seem to work. It gives an error, and I have tried very hard to debug it, but it is still not working.
Mat image;
image = imread("rose.jpg", 1);
Mat _m(image.rows, image.cols, CV_32FC3);
Mat _f(image.rows, image.cols, CV_8UC3);
Mat _b(image.rows, image.cols, CV_8UC3);
Mat sample(image.rows * image.cols, 3, CV_32FC1);
Mat float_image;
image.convertTo(float_image, CV_64F);
Mat background_ = Mat(image.rows * image.cols, 3, CV_64F);
int counter = 0;
// Converting from float image to column vector
for (int j = 0; j < image.rows; j++)
{
    Vec3f* row = float_image.ptr<Vec3f>(j);
    for (int i = 0; i < image.cols; i++)
    {
        sample.at<Vec3f>(counter++, 0) = row[i];
    }
}
//sample.reshape(1, image.rows * image.cols);
cout << "Training" << endl;
EM params = EM(2);
params.train(sample);
Mat _means = params.get<Mat>("means");
Mat _weights = params.get<Mat>("weights");
cout << "Finished Training" << endl;
Basically, I am converting the image to a float image of type CV_64F and passing it into the training routine. Perhaps I am doing something wrong; can I get help with my error? Thank you.
You are mixing your float types.
If you need double precision, change Vec3f to Vec3d.
Otherwise
image.convertTo(float_image,CV_64F);
Mat background_ = Mat(image.rows * image.cols, 3, CV_64F);
should be
image.convertTo(float_image,CV_32F);
Mat background_ = Mat(image.rows * image.cols, 3, CV_32F);
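As an aside, a minimal sketch of how the whole conversion loop could be replaced by a single reshape, assuming a CV_8UC3 input image:
Mat float_image;
image.convertTo(float_image, CV_32F);
// view the rows*cols pixels as a (rows*cols) x 3 single-channel sample matrix
Mat sample = float_image.reshape(1, image.rows * image.cols);
EM em(2);
em.train(sample);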

OpenCV polar transform selective region

I want to restrict the operating region of the polar transform in OpenCV's cvLogPolar function. I would consider rewriting the function from scratch. I am unwrapping a fisheye lens image to yield a panorama, and I want to make it as efficient as possible. Much of the image is cropped away after the transform, giving a donut-shaped region of interest in the input image. This means much processing is wasted on black pixels.
This should be pretty simple, right? The function should take two additional arguments for clipping extents, radius1 and radius2. Here is the relevant pol-to-cart portion of the cvLogPolar function from imgwarp.cpp:
void cvLogPolar( const CvArr* srcarr, CvArr* dstarr,
                 CvPoint2D32f center, double M, int flags )
{
    cv::Ptr<CvMat> mapx, mapy;
    CvMat srcstub, *src = cvGetMat(srcarr, &srcstub);
    CvMat dststub, *dst = cvGetMat(dstarr, &dststub);
    CvSize ssize, dsize;
    if( !CV_ARE_TYPES_EQ( src, dst ))
        CV_Error( CV_StsUnmatchedFormats, "" );
    if( M <= 0 )
        CV_Error( CV_StsOutOfRange, "M should be >0" );
    ssize = cvGetMatSize(src);
    dsize = cvGetMatSize(dst);
    mapx = cvCreateMat( dsize.height, dsize.width, CV_32F );
    mapy = cvCreateMat( dsize.height, dsize.width, CV_32F );
    if( !(flags & CV_WARP_INVERSE_MAP) )
    {
        //---snip---
    }
    else
    {
        int x, y;
        CvMat bufx, bufy, bufp, bufa;
        double ascale = ssize.height/(2*CV_PI);
        cv::AutoBuffer<float> _buf(4*dsize.width);
        float* buf = _buf;
        bufx = cvMat( 1, dsize.width, CV_32F, buf );
        bufy = cvMat( 1, dsize.width, CV_32F, buf + dsize.width );
        bufp = cvMat( 1, dsize.width, CV_32F, buf + dsize.width*2 );
        bufa = cvMat( 1, dsize.width, CV_32F, buf + dsize.width*3 );
        for( x = 0; x < dsize.width; x++ )
            bufx.data.fl[x] = (float)x - center.x;
        for( y = 0; y < dsize.height; y++ )
        {
            float* mx = (float*)(mapx->data.ptr + y*mapx->step);
            float* my = (float*)(mapy->data.ptr + y*mapy->step);
            for( x = 0; x < dsize.width; x++ )
                bufy.data.fl[x] = (float)y - center.y;
#if 1
            cvCartToPolar( &bufx, &bufy, &bufp, &bufa );
            for( x = 0; x < dsize.width; x++ )
                bufp.data.fl[x] += 1.f;
            cvLog( &bufp, &bufp );
            for( x = 0; x < dsize.width; x++ )
            {
                double rho = bufp.data.fl[x]*M;
                double phi = bufa.data.fl[x]*ascale;
                mx[x] = (float)rho;
                my[x] = (float)phi;
            }
#else
            //---snip---
#endif
        }
    }
    cvRemap( src, dst, mapx, mapy, flags, cvScalarAll(0) );
}
Since the routine works by iterating through pixels in the destination image, the r1 and r2 clipping region would just need to be translated into a y1 and y2 row region. Then we just change the for loop: for( y = 0; y < dsize.height; y++ ) becomes for( y = y1; y < y2; y++ ).
Correct?
What about constraining cvRemap? I am hoping it ignores unmoved pixels, or that they are a negligible computational cost.
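For what it's worth, a hedged sketch of constraining cvRemap to the same band (y1 and y2 being the hypothetical clip rows from above): remap only the destination rows that actually get filled, using row ROIs.
CvMat mapx_roi, mapy_roi, dst_roi;
cvGetRows( mapx, &mapx_roi, y1, y2 ); // ROI headers, no data copies
cvGetRows( mapy, &mapy_roi, y1, y2 );
cvGetRows( dst, &dst_roi, y1, y2 );
cvRemap( src, &dst_roi, &mapx_roi, &mapy_roi, flags, cvScalarAll(0) );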
I ended up doing a different optimization: I store the result of the polar transform operation in persistent remapping matrices. This helps a LOT. If you're doing polar unwrap on full-motion video using the same polar transform mapping at all times, you don't want to recalculate the transform with a million sin/cos operations every single frame. So this just required some small modification of the logPolar/linearPolar operations in the OpenCV source to save the remap result somewhere outside.
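A minimal sketch of that caching idea in the C++ API (grabFrame and the map-building step are illustrative placeholders, not actual OpenCV calls):
// build the polar remap once for the fixed center, M, and radius band
cv::Mat mapx, mapy;
// ... fill mapx/mapy, e.g. with the modified cvLogPolar above ...
// then every frame costs just one remap, with no sin/cos recomputation
for (;;) {
    cv::Mat frame = grabFrame(); // hypothetical frame source
    cv::Mat panorama;
    cv::remap(frame, panorama, mapx, mapy, cv::INTER_LINEAR);
}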

Drawing spectrum of an image in C++ (fftw, OpenCV)

I'm trying to create a program that will draw a 2D greyscale spectrum of a given image. I'm using the OpenCV and FFTW libraries. Using tips and code from the internet and modifying them, I've managed to load an image, calculate the FFT of this image, and recreate the image from the FFT (it's the same). What I'm unable to do is draw the Fourier spectrum itself. Could you please help me?
Here's the code (less important lines removed):
/* Copy input image */
/* Create output image */
/* Allocate input data for FFTW */
in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
dft = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
/* Create plans */
plan_f = fftw_plan_dft_2d(w, h, in, dft, FFTW_FORWARD, FFTW_ESTIMATE);
/* Populate input data in row-major order */
for (i = 0, k = 0; i < h; i++)
{
    for (j = 0; j < w; j++, k++)
    {
        in[k][0] = ((uchar*)(img1->imageData + i * img1->widthStep))[j];
        in[k][1] = 0.;
    }
}
/* forward DFT */
fftw_execute(plan_f);
/* spectrum */
for (i = 0, k = 0; i < h; i++)
{
    for (j = 0; j < w; j++, k++)
        ((uchar*)(img2->imageData + i * img2->widthStep))[j] = sqrt(pow(dft[k][0],2) + pow(dft[k][1],2));
}
cvShowImage("iplimage_dft(): original", img1);
cvShowImage("iplimage_dft(): result", img2);
cvWaitKey(0);
/* Free memory */
}
The problem is in the "Spectrum" section. Instead of a spectrum I get some noise. What am I doing wrong? I would be grateful for your help.
You need to draw the magnitude of the spectrum. Here is the code.
void ForwardFFT(Mat &Src, Mat *FImg)
{
    int M = getOptimalDFTSize( Src.rows );
    int N = getOptimalDFTSize( Src.cols );
    Mat padded;
    copyMakeBorder(Src, padded, 0, M - Src.rows, 0, N - Src.cols, BORDER_CONSTANT, Scalar::all(0));
    // Create a complex representation of the image:
    // planes[0] holds the image itself, planes[1] its imaginary part (filled with zeros)
    Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
    Mat complexImg;
    merge(planes, 2, complexImg);
    dft(complexImg, complexImg);
    // After the transform the result again consists of a real and an imaginary part
    split(complexImg, planes);
    // crop the spectrum if it has an odd number of rows or columns
    planes[0] = planes[0](Rect(0, 0, planes[0].cols & -2, planes[0].rows & -2));
    planes[1] = planes[1](Rect(0, 0, planes[1].cols & -2, planes[1].rows & -2));
    // Recomb (not shown) swaps the spectrum quadrants so the DC term sits in the center
    Recomb(planes[0], planes[0]);
    Recomb(planes[1], planes[1]);
    // Normalize the spectrum
    planes[0] /= float(M*N);
    planes[1] /= float(M*N);
    FImg[0] = planes[0].clone();
    FImg[1] = planes[1].clone();
}
void ForwardFFT_Mag_Phase(Mat &src, Mat &Mag, Mat &Phase)
{
    Mat planes[2];
    ForwardFFT(src, planes);
    Mag = Mat::zeros(planes[0].rows, planes[0].cols, CV_32F);
    Phase = Mat::zeros(planes[0].rows, planes[0].cols, CV_32F);
    cv::cartToPolar(planes[0], planes[1], Mag, Phase);
}
Mat LogMag;
LogMag = Mag + 1;
cv::log(LogMag, LogMag);
//---------------------------------------------------
imshow("Log magnitude", LogMag);
imshow("Phase", Phase);
imshow("Filtering result", img);
Can you try to do the IFFT step and see if you recover the original image? Then you can check step by step where your problem is. Another way to find the problem is to run this process on a small matrix you predefine, calculate its FFT in MATLAB, and compare step by step; it worked for me!
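If you want to stay with the original FFTW code, here is a minimal sketch of the usual fix, reusing the variables from the question (dft, img2, N, w, h): the raw magnitudes vastly exceed the 0..255 uchar range, so take log(1 + |F|) and rescale by the maximum before writing.
double maxVal = 1e-9; // avoids division by zero for an all-black image
std::vector<double> mag(N);
for (int k = 0; k < N; ++k) {
    mag[k] = log(1.0 + sqrt(dft[k][0]*dft[k][0] + dft[k][1]*dft[k][1]));
    if (mag[k] > maxVal) maxVal = mag[k];
}
for (int i = 0, k = 0; i < h; i++)
    for (int j = 0; j < w; j++, k++)
        ((uchar*)(img2->imageData + i * img2->widthStep))[j] = (uchar)(255.0 * mag[k] / maxVal);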
