OpenCV 2.4.7 Mat::convertTo() 32-bit to 16-bit truncate

I've noticed that using convertTo to convert a matrix from 32-bit to 16-bit saturates the values at the upper bound: values bigger than 0x0000FFFF in the source matrix are set to 0xFFFF in the destination matrix.
What I want for my application is instead to mask the values, keeping only the two least significant bytes of each value in the destination.
Here is an example:
Mat mat32;
Mat mat16;
mat32 = Mat(2, 2, CV_32SC1);
for (int y = 0; y < 2; y++)
    for (int x = 0; x < 2; x++)
        mat32.at<unsigned int>(cv::Point(x, y)) = 0x0000FFFE + (y * 2 + x);
mat32.convertTo(mat16, CV_16UC1);
The matrices have these values:
32 bits matrix:
0000FFFE 0000FFFF
00010000 00010001
16 bits matrix:
0000FFFE 0000FFFF
0000FFFF 0000FFFF
In the second row of the 16-bit matrix I want to have
00000000 00000001
I can do this by scanning the source matrix value by value and masking, but performance is poor.
Is there an OpenCV function that does this?
Thanks to everyone!
MIX

This can be done, but it requires a somewhat dirty trick, so it is up to you whether to use this approach. Here is how it can be done:
For this example let's create a 1000x1000 32-bit matrix and set all its values to 65541 (= 256*256 + 5). After the conversion we therefore expect a matrix filled with fives.
Mat M1(1000, 1000, CV_32S, Scalar(65541));
And here is the trick:
Mat M2(1000, 1000, CV_16SC2, M1.data);
We created matrix M2 over the same memory buffer as M1, but M2 'thinks' that this buffer holds a 2-channel 16-bit image. Now the last thing to do is to copy the channel you need to the place you need. This can be done with the split() or mixChannels() functions. Note that picking channel 0 for the low 16 bits assumes a little-endian machine. For example:
Mat M3(1000, 1000, CV_16S);
int fromto[] = {0,0};
Mat input[] = {M2}, output[] = {M3};
mixChannels(input, 1, output, 1, fromto, 1);
cout << M3.at<short>(10,10) << endl;
Yes, I know that the signature of mixChannels looks weird and makes the code even less readable, but it works... If you prefer the split() function:
vector<Mat> v;
split(M2,v);
cout << v[0].at<short>(10,10) << " " << v[1].at<short>(10,10) << endl;

There is no OpenCV function (that I know of) which does the conversion the way you want, so either you code it yourself or, as you said, you go through a masking step first to remove the 16 high bits.
The mask can be applied using bitwise_and in C++ or cvAndS in C.
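For example, a minimal sketch using the C++ API, assuming mat32 is the CV_32SC1 matrix from the question:
cv::Mat masked, mat16;
cv::bitwise_and(mat32, cv::Scalar(0xFFFF), masked); // keep only the low 16 bits
masked.convertTo(mat16, CV_16UC1); // no value exceeds 0xFFFF now, so nothing saturates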
You could also make your hand-written code more efficient. In general, you should avoid OpenCV's per-pixel accessors in loops because they have poor performance. I don't have an OpenCV install at hand so this could be slightly off -- the idea is to use the data field directly, together with step, which is the number of bytes per row:
for (int y = 0; y < mat32.rows; ++y) {
    int* row = (int*)(mat32.data + y * mat32.step);
    for (int x = 0; x < mat32.cols; ++x)
        row[x] &= 0xffff;
}
Then, once the mask is applied, all values fit in 16 bits, and convertTo will just truncate the 16 upper bits.
The other solution is to code the conversion by hand:
mat16.create(mat32.size(), CV_16UC1);
for (int y = 0; y < mat32.rows; ++y) {
    const int* row32 = (const int*)(mat32.data + y * mat32.step);
    unsigned short* row16 = (unsigned short*)(mat16.data + y * mat16.step);
    for (int x = 0; x < mat32.cols; ++x)
        row16[x] = (unsigned short)row32[x];
}

Related

OpenCV Mat per-element operation: vector-matrix multiplication

I is an mxn matrix and each element of I is a 1x3 vector (I is a 3-channel Mat image actually).
M is a 3x3 matrix.
J is a matrix having the same dimensions as I and is computed as follows: each element of J is the vector-matrix product of the corresponding (i.e. having the same coordinates) element of I and M.
I.e. if v1(r1,g1,b1) is an element of I and v2(r2,g2,b2) is its corresponding element of J, then v2 = v1 * M (this is a vector-matrix product, not a per-element product).
Question: How to compute J efficiently (in terms of speed)?
Thank you for your help.
As far as I know, the most efficient way to implement such an operation is as follows:
Reshape I from mxnx3 to (m·n)x3, let's call it I'
Calculate J' = I' * M
Reshape J' from (m·n)x3 to mxnx3, this is the J we wanted
The idea is to stack each pixel-wise operation pi'·M into one single operation P'·M, where P is the 3x(m·n) matrix containing each pixel in columns (hence P' holds one pixel per row. It's just a convention, really).
Here is a code sample written in C++:
// read some image
cv::Mat I = cv::imread("image.png"); // rows x cols x 3, CV_8UC3
// matrix multiplication requires a floating-point type
I.convertTo(I, CV_32F);
// some matrix M that modifies each pixel
cv::Mat M = (cv::Mat_<float>(3, 3) << 0, 0, 0,
                                      0, .5, 0,
                                      0, 0, .5); // 3 x 3
// remember the old dimensions
int prevChannels = I.channels();
int prevRows = I.rows;
// reshape I
int newRows = I.rows * I.cols;
I = I.reshape(1, newRows); // (rows * cols) x 3
// compute J
cv::Mat J = I * M; // (rows * cols) x 3
// reshape back to the original dimensions
J = J.reshape(prevChannels, prevRows); // rows x cols x 3
OpenCV provides an O(1) reshaping operation.
Thus performance depends solely on matrix multiplication, which I expect to be as efficient as possible in a computer vision library.
To further enhance performance, you might want to take a look at matrix multiplication using the ocl and gpu modules.
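As an aside, cv::transform performs this kind of per-pixel matrix multiplication directly; here is a minimal sketch reusing M from the snippet above. Note that transform treats each pixel as a column vector and computes v2 = M * v1, so pass the transpose to get the question's row-vector convention v2 = v1 * M:
cv::Mat I2 = cv::imread("image.png"); // no reshaping or float conversion needed
cv::Mat J2;
cv::transform(I2, J2, M.t()); // v2 = v1 * M is equivalent to v2^T = M^T * v1^T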

OpenCV::solvePNP() - Assertion failed

I am trying to get the pose of the camera with the help of solvePnP() from OpenCV.
After running my program I get the following errors:
OpenCV Error: Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in solvePnP, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.2/modules/calib3d/src/solvepnp.cpp, line 55
libc++abi.dylib: terminate called throwing an exception
I tried to search for how to solve these errors, but unfortunately I couldn't resolve them!
Here is my code; any comment/help is much appreciated:
enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                              Pattern patternType)
{
    corners.clear();
    switch(patternType)
    {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
        break;
    case ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
        break;
    }
}
int main(int argc, char* argv[])
{
    float squareSize = 50.f;
    Pattern calibrationPattern = CHESSBOARD;
    //vector<Point2f> boardCorners;
    vector<vector<Point2f> > imagePoints(1);
    vector<vector<Point3f> > boardPoints(1);
    Size boardSize;
    boardSize.width = 9;
    boardSize.height = 6;
    vector<Mat> intrinsics, distortion;
    string filename = "out_camera_xml.xml";
    FileStorage fs(filename, FileStorage::READ);
    fs["camera_matrix"] >> intrinsics;
    fs["distortion_coefficients"] >> distortion;
    fs.release();
    vector<Mat> rvec, tvec;
    Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // I have to pass an image
    bool found = findChessboardCorners(img, boardSize, imagePoints[0], CV_CALIB_CB_ADAPTIVE_THRESH);
    calcBoardCornerPositions(boardSize, squareSize, boardPoints[0], calibrationPattern);
    boardPoints.resize(imagePoints.size(), boardPoints[0]);
    //***Debug start***
    cout << imagePoints.size() << endl << boardPoints.size() << endl << intrinsics.size() << endl << distortion.size() << endl;
    //***Debug end***
    solvePnP(Mat(boardPoints), Mat(imagePoints), intrinsics, distortion, rvec, tvec);
    for(int i = 0; i < rvec.size(); i++) {
        cout << rvec[i] << endl;
    }
    return 0;
}
EDIT (some debug info):
I debugged it row by row and stepped into all of the functions. I am getting the assertion failure in solvePnP(...). Below you can see what I see when I step into the solvePnP function. First it jumps over the first if statement /if(vec.empty())/ and goes into the second if statement /if( !copyData )/; when it executes the last line /*datalimit = dataend = datastart + rows*step[0]*/ it jumps back to the first if statement and returns => then I get the assertion failed error.
template<typename _Tp> inline Mat::Mat(const vector<_Tp>& vec, bool copyData)
    : flags(MAGIC_VAL | DataType<_Tp>::type | CV_MAT_CONT_FLAG),
      dims(2), rows((int)vec.size()), cols(1), data(0), refcount(0),
      datastart(0), dataend(0), allocator(0), size(&rows)
{
    if(vec.empty())
        return;
    if( !copyData )
    {
        step[0] = step[1] = sizeof(_Tp);
        data = datastart = (uchar*)&vec[0];
        datalimit = dataend = datastart + rows*step[0];
    }
    else
        Mat((int)vec.size(), 1, DataType<_Tp>::type, (uchar*)&vec[0]).copyTo(*this);
}
Step into the function in a debugger and see exactly which assertion is failing. (Probably it requires values in double (CV_64F) rather than float.)
OpenCV's new InputArray wrapper is supposed to allow you to call functions with any shape of Mat, vector of points, etc., and it will sort it out. But a lot of functions assume a particular input format or have obsolete assertions enforcing a particular format.
The stereo/calibration systems are the worst for requiring a specific layout, and frequently successive operations require a different layout.
The types don't seem right; at least in the code that worked for me I used different types (as mentioned in the documentation).
objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can also be passed here.
imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can also be passed here.
cameraMatrix – Input camera matrix A = [fx 0 cx; 0 fy cy; 0 0 1].
distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec – Output rotation vector (see Rodrigues()) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec – Output translation vector.
useExtrinsicGuess – If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
Documentation from the OpenCV reference for solvePnP.
vector<Mat> rvec, tvec should be Mat rvec, tvec instead.
vector<vector<Point2f> > imagePoints(1) should be vector<Point2f> imagePoints(1) instead.
vector<vector<Point3f> > boardPoints(1) should be vector<Point3f> boardPoints(1) instead.
Note: I encountered the exact same problem, and this worked for me (it is a little bit confusing, since calibrateCamera does use vectors). I haven't tried it for imagePoints or boardPoints, though (as documented in the link above, a vector of vectors should work there; I thought I'd better mention it), but for rvec and tvec I tried it myself.
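Putting those changes together, a minimal sketch of the corrected call (my guess at the intended usage, assuming the camera matrix and distortion coefficients are read into single Mats rather than vector<Mat>):
Mat intrinsics, distortion; // read via fs["camera_matrix"] >> intrinsics; etc.
vector<Point2f> imagePoints;
vector<Point3f> boardPoints;
Mat rvec, tvec;
// ... fill imagePoints with findChessboardCorners and boardPoints with calcBoardCornerPositions ...
solvePnP(boardPoints, imagePoints, intrinsics, distortion, rvec, tvec);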
I ran into exactly the same problem with solvePnP and OpenCV 3. I tried to isolate the problem in a single test case. It seems passing a std::vector to cv::InputArray does not do what is expected. The following small test works with OpenCV 2.4.9 but not with 3.2.
And this is exactly the problem when passing a std::vector of points to solvePnP: it causes the assert at line 63 in solvepnp.cpp to fail!
Generating a cv::Mat out of the vector before passing it to solvePnP works.
//create list with 3 points
std::vector<cv::Point3f> vectorList;
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
//to input array
cv::InputArray inputArray(vectorList);
cv::Mat mat = inputArray.getMat();
cv::Mat matDirect = cv::Mat(vectorList);
LOG_INFO("Size vector: %d mat: %d matDirect: %d", vectorList.size(), mat.checkVector(3, CV_32F), matDirect.checkVector(3, CV_32F));
QVERIFY(vectorList.size() == mat.checkVector(3, CV_32F));
Result opencv 2.4.9 macos:
TestObject: OpenCV
Size vector: 3 mat: 3 matDirect: 3
Result opencv 3.2 win64:
TestObject: OpenCV
Size vector: 3 mat: 9740 matDirect: 3
I faced the same issue. In my case (in Python), I converted the input array to float type, and it worked fine afterwards.

How to undistort points in camera shot coordinates and obtain corresponding undistorted image coordinates?

I use OpenCV to undistort a set of points after camera calibration.
The code follows.
const int npoints = 2; // number of points specified
// Points initialization.
// Only 2 points in this example; in real code they are read from a file.
float input_points[npoints][2] = {{0, 0}, {2560, 1920}};
CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);
// fill src matrix
float * src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    for (int ci = 0; ci < 2; ++ci) {
        *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
    }
}
cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
After the code above, dst contains the following numbers:
-8.82689655e-001 -7.05507338e-001 4.16228324e-001 3.04863811e-001
which are far too small in comparison with the numbers in src.
At the same time, if I undistort the image via the call:
cvUndistort2( srcImage, dstImage, &camera1, &dist_coeffs1 );
I receive a good undistorted image, which means that the pixel coordinates are not modified nearly so drastically as the separate points.
How can I obtain the same undistortion for specific points as for images?
Thanks.
The points should be "unnormalized" using the camera matrix.
More specifically, after the call to cvUndistortPoints the following transformation should also be applied:
double fx = CV_MAT_ELEM(camera1, double, 0, 0);
double fy = CV_MAT_ELEM(camera1, double, 1, 1);
double cx = CV_MAT_ELEM(camera1, double, 0, 2);
double cy = CV_MAT_ELEM(camera1, double, 1, 2);
float * dst_ptr = (float*)dst->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    float& px = *(dst_ptr + pi * 2);
    float& py = *(dst_ptr + pi * 2 + 1);
    // perform the transformation;
    // in fact this is equivalent to multiplication by the camera matrix
    px = px * fx + cx;
    py = py * fy + cy;
}
More info on camera matrix at OpenCV 'Camera Calibration and 3D Reconstruction'
UPDATE:
The following C++ function call should work as well:
std::vector<cv::Point2f> inputDistortedPoints = ...
std::vector<cv::Point2f> outputUndistortedPoints;
cv::Mat cameraMatrix = ...
cv::Mat distCoeffs = ...
cv::undistortPoints(inputDistortedPoints, outputUndistortedPoints, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);
It may be your matrix size :)
OpenCV expects a vector of points - a column or a row matrix with two channels. But because your input matrix has only 2 points and the number of channels is also 1, it cannot figure out what the input is: a row or a column.
So, fill a longer input Mat with bogus values, and keep only the first ones:
const int npoints = 4; // number of points specified
// Points initialization.
// Only 2 points in this example; the rest will be set to 0.
float input_points[npoints][2] = {{0, 0}, {2560, 1920}};
CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);
// fill src matrix
float * src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    for (int ci = 0; ci < 2; ++ci) {
        *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
    }
}
cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
EDIT
While the OpenCV documentation says undistortPoints accepts only 2-channel input, it actually accepts:
a 1-column, multi-row, 2-channel Mat, or
(and this case is not documented) a 2-column, multi-row, 1-channel Mat, or
a multi-column, 1-row, 2-channel Mat
(as seen in undistort.cpp, line 390).
But a bug inside (or a lack of available info) makes it wrongly mix the second case with the third when the number of columns is 2. So your data is considered a 2-column, 2-row, 1-channel matrix.
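To sidestep the ambiguity entirely, a sketch using the C++ API (assuming cameraMatrix and distCoeffs hold your calibration): constructing a Mat from a vector of points yields an N x 1 two-channel matrix, which matches the first, unambiguous layout.
std::vector<cv::Point2f> pts;
pts.push_back(cv::Point2f(0, 0));
pts.push_back(cv::Point2f(2560, 1920));
cv::Mat src(pts); // N x 1, CV_32FC2: unambiguous
cv::Mat dst;
cv::undistortPoints(src, dst, cameraMatrix, distCoeffs);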
I also ran into this problem, and it took me some time to research and finally understand it.
[Formula: the distortion model from the OpenCV documentation]
As the formula above shows, in the OpenCV model the distort operation sits before the camera matrix, so the processing order is:
image_distorted -> camera_matrix -> un-distort function -> camera_matrix -> back to image_undistorted.
So you need a small fix to apply camera1 again:
Mat eye3 = Mat::eye(3, 3, CV_64F);
CvMat R = eye3; // convert to the C struct expected by the C API
cvUndistortPoints(src, dst, &camera1, &distCoeffs1, &R, &camera1);
Otherwise, if the last two parameters are empty, the points will be projected to normalized image coordinates.
See the code in opencv-3.4.0-src\modules\imgproc\src\undistort.cpp, line 297: cvUndistortPointsInternal().

OpenCV PCA question

I'm trying to create a PCA model in OpenCV to hold pixel coordinates. As an experiment I have two sets of pixel coordinates that map out two approximate circles. Each set of coordinates has 48 x,y pairs. I was experimenting with the following code, which reads the coordinates from a file and stores them in a Mat structure. However, I don't think it is right, and PCA in OpenCV seems very poorly covered on the Internet.
Mat m(2, 48, CV_32FC2); // matrix with 2 rows of 48 cols of floats held in two channels
char c; // separator characters consumed by fscanf
FILE* pFile = fopen("data.txt", "r");
for (int i = 0; i < 48; i++){
    int x, y;
    fscanf(pFile, "%d%c%c%d%c", &x, &c, &c, &y, &c);
    m.at<Vec2f>( 0, i )[0] = (float)x; // store x in row 0, col i in channel 0
    m.at<Vec2f>( 0, i )[1] = (float)y; // store y in row 0, col i in channel 1
}
for (int i = 0; i < 48; i++){
    int x, y;
    fscanf(pFile, "%d%c%c%d%c", &x, &c, &c, &y, &c);
    m.at<Vec2f>( 1, i )[0] = (float)x; // store x in row 1, col i in channel 0
    m.at<Vec2f>( 1, i )[1] = (float)y; // store y in row 1, col i in channel 1
}
PCA pca(m, Mat(), CV_PCA_DATA_AS_ROW, 2); // 2 principal components??? Not sure what to put here, e.g. is it 2 for two data sets or 48 for the number of elements?
for (int i = 0; i < 48; i++){
    float x = pca.mean.at<Vec2f>(i, 0)[0]; // get average x
    float y = pca.mean.at<Vec2f>(i, 0)[1]; // get average y
    printf("\n x=%f, y=%f", x, y);
}
However, this crashes when creating the pca object. I know this is a very basic question, but I am a bit lost and was hoping that someone could get me started with PCA in OpenCV.
Perhaps it would be helpful if you described in further detail what you need to use PCA for and what you hope to achieve (output?).
I am fairly sure that the reason your program crashes is that the input Mat is CV_32FC2 when it should be CV_32FC1. You need to reshape your data into 1-dimensional row vectors before using PCA; not knowing what you need it for, I can't say exactly how to reshape your data. (The common application with images is eigenfaces, which requires an image to be reshaped into a row vector.) Additionally, you will need to normalize your input data between 0 and 1.
As a further aside, usually you would choose to keep one fewer principal component than the number of input samples, because the last principal component is simply orthogonal to the others.
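As a sketch of the reshaping idea (my guess, based on the two 48-point contours from the question; each sample becomes one 96-element row of a CV_32FC1 matrix):
// m is the 2 x 48 CV_32FC2 matrix from the question
Mat samples = m.reshape(1, 2); // 2 rows x 96 cols, CV_32FC1: one shape per row
PCA pca(samples, Mat(), CV_PCA_DATA_AS_ROW, 1); // with 2 samples, keep 1 component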
I have worked with OpenCV's PCA before and would like to help further. I would also refer you to this blog, which helped me get started with PCA in OpenCV: http://www.bytefish.de/blog/pca_in_opencv

Sum of each column opencv

In Matlab, If A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV?
Using cvReduce has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
More information is available in the OpenCV documentation.
EDIT after 3 years:
The proper function for this is cv::reduce.
Reduces a matrix to a vector.
The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_MAX and REDUCE_MIN, the output image should have the same type as the source one. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes.
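A minimal C++ sketch of the column-wise sum (dimension 0 collapses the rows into a single row; CV_32S gives the output the larger bit-depth needed to hold sums of 8-bit pixels):
cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE); // CV_8UC1
cv::Mat colSum;
cv::reduce(img, colSum, 0, cv::REDUCE_SUM, CV_32S); // 1 x img.cols row of column sums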
OLD:
I've used the ROI method: move an ROI with the height of the image and width 1 from left to right and calculate the means.
Mat src = imread(filename, 0);
vector<int> graph( src.cols );
for (int c = 0; c < src.cols; c++)
{
    Mat roi = src( Rect( c, 0, 1, src.rows ) );
    graph[c] = int(mean(roi)[0]);
}
Mat mgraph( 260, src.cols + 10, CV_8UC3, Scalar::all(0) );
for (int c = 0; c < src.cols; c++)
{
    line( mgraph, Point(c + 5, 0), Point(c + 5, graph[c]), Scalar(255, 0, 0), 1, CV_AA);
}
imshow("mgraph", mgraph);
imshow("source", src);
EDIT:
Just out of curiosity, I tried resizing to height 1 and the result was almost the same:
Mat test;
cv::resize(src, test, Size( src.cols, 1 ));
Mat mgraph1( 260, src.cols + 10, CV_8UC3, Scalar::all(0) );
for (int c = 0; c < test.cols; c++)
{
    graph[c] = test.at<uchar>(0, c);
}
for (int c = 0; c < src.cols; c++)
{
    line( mgraph1, Point(c + 5, 0), Point(c + 5, graph[c]), Scalar(255, 255, 0), 1, CV_AA);
}
imshow("mgraph1", mgraph1);
cvSum respects ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My C++ has gotten a little rusty, so I won't provide a code example, though the last time I did this I used OpenCvSharp and it worked fine. However, I'm not sure how efficient this method is.
My math skills are getting rusty too, but shouldn't it be possible to sum all elements in the columns of a matrix by multiplying it by a vector of ones?
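It should: a 1 x m row of ones times an m x n matrix yields a 1 x n row of column sums. A quick sketch (assuming a floating-point matrix, since Mat multiplication requires it):
cv::Mat A = (cv::Mat_<float>(2, 3) << 1, 2, 3,
                                      4, 5, 6);
cv::Mat ones = cv::Mat::ones(1, A.rows, CV_32F);
cv::Mat colSums = ones * A; // [5, 7, 9]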
For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
std::vector<unsigned int> result(image->width, 0); // zero-initialized column sums
for (int col = 0; col < image->width; col++) {
    for (int row = 0; row < image->height; row++) {
        result[col] += imageData[row * imgStep + col];
    }
}
// your desired vector is in result
