NLopt with Armadillo data

The NLopt objective function looks like this:
double myfunc(const std::vector<double> &x, std::vector<double> &grad, void *my_func_data)
x is the data being optimized, grad is a vector of gradients, and my_func_data holds additional data.
I am interested in supplying Armadillo matrices A and B to void *my_func_data.
I fiddled with Armadillo's member functions
mat A(5,5);
mat B(5,5);
double* A_mem = A.memptr();
double* B_mem = B.memptr();
which gives me pointers to the matrices A and B. I was thinking of defining another pointer to these pointers:
double** CombineMat;
int* Arows = A.n_rows; int* Acols = A.n_cols; // obtain dimensions of A
int* Brows = B.n_rows; int* Bcols = B.n_cols; // dim(B)
CombineMat[0] = A_mem; CombineMat[1] = Arows; CombineMat[2] = Acols;
CombineMat[3] = B_mem; CombineMat[4] = Brows; CombineMat[5] = Bcols;
and then passing *CombineMat as my_func_data.
Is this the way to do it? It seems clumsy...
Once CombineMat is passed, how do I re-cast the void type into something usable once I'm inside myfunc?
ANSWER
I answered my own question with help from here.
mat A(2,2);
A << 1 << 2 << endr << 3 << 4;
mat B(2,2);
B << 5 << 6 << endr << 7 << 8;
mat C[2];
C[0] = A;
C[1] = B;
opt.set_min_objective(myfunc, &C);
Once inside myfunc, the data in C can be converted back to Armadillo matrices like this:
mat* pC = (mat*)(my_func_data);
mat A = pC[0];
mat B = pC[1];
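For completeness, here is a minimal sketch of how myfunc might unpack and use the matrices passed this way. The quadratic objective and its gradient are purely illustrative assumptions, not part of the original question:
#include <armadillo>
#include <vector>

using namespace arma;

double myfunc(const std::vector<double> &x, std::vector<double> &grad, void *my_func_data)
{
    mat* pC = (mat*) my_func_data;   // recover the array passed via &C
    mat& A = pC[0];
    mat& B = pC[1];

    vec xv(x);                       // copy the optimisation variables into an Armadillo vector

    // illustrative objective: f(x) = x' A x + trace(B)
    double f = as_scalar(xv.t() * A * xv) + trace(B);

    if (!grad.empty()) {
        vec g = 2.0 * A * xv;        // gradient of the quadratic term
        grad = conv_to< std::vector<double> >::from(g);
    }
    return f;
}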

You can also use Armadillo's Cube class (a "3D matrix", or 3rd-order tensor).
Each slice in a cube is just a matrix. For example:
cube X(4,5,2);
mat A(4,5);
mat B(4,5);
X.slice(0) = A; // set the individual slices
X.slice(1) = B;
mat& C = X.slice(1); // get the reference to a matrix stored in a cube
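Passing the cube through my_func_data then works exactly like the matrix-array version above; a minimal sketch (opt and myfunc as before):
opt.set_min_objective(myfunc, &X);      // hand the whole cube to NLopt

// ... and inside myfunc:
cube* pX = (cube*) my_func_data;        // recover the cube
mat& A = pX->slice(0);                  // each slice is an ordinary matrix again
mat& B = pX->slice(1);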

Related

Is there a built-in function to split a 3-channel Mat into three 3-channel Mat rather than into three 1-channel Mat?

As far as I know, the built-in split will split one 3-channel Mat into three 1-channel Mats. As a result, those three Mats are just grayscale images with different intensities.
My intent is to get three 3-channel Mat as follows.
void splitTo8UC3(const Mat& input, vector<Mat>& output)
{
    Mat blue  = input.clone();
    Mat green = input.clone();
    Mat red   = input.clone();
    const uint N = input.rows * input.step;
    for (uint i = 0; i < N; i += 3)
    {
        // blue.data[i]
        green.data[i] = 0;
        red.data[i] = 0;
        blue.data[i + 1] = 0;
        // green.data[i+1]
        red.data[i + 1] = 0;
        blue.data[i + 2] = 0;
        green.data[i + 2] = 0;
        // red.data[i+2]
    }
    output.push_back(blue);
    output.push_back(green);
    output.push_back(red);
}
It works, but instead of reinventing the wheel, I am looking for a built-in, if there is one.
Edit
The proposed solution must be faster than mine.
EDIT: I incorporated Dan's suggested improvements from his comment.
I can't think of a built-in function that does exactly this, and I couldn't find one either. But while doing some research, I came across the mixChannels function, which might improve your solution; at least it avoids implementing the loop by hand.
Here are my modifications to your code:
void splitTo8UC3(const cv::Mat& input, std::vector<cv::Mat>& output)
{
    // Allocate outputs
    cv::Mat b(cv::Mat::zeros(input.size(), input.type()));
    cv::Mat g(cv::Mat::zeros(input.size(), input.type()));
    cv::Mat r(cv::Mat::zeros(input.size(), input.type()));

    // Collect outputs
    cv::Mat out[] = { b, g, r };

    // Set up index pairs: mixChannels numbers the output channels consecutively
    // across all output Mats, so input channel 0 -> channel 0 of b,
    // input channel 1 -> channel 4 (i.e. channel 1 of g),
    // input channel 2 -> channel 8 (i.e. channel 2 of r).
    int from_to[] = { 0,0, 1,4, 2,8 };

    cv::mixChannels(&input, 1, out, 3, from_to, 3);
    output.assign(std::begin(out), std::end(out));
}
Let's have this test image colors.png:
And, let's have this test code:
cv::Mat img = cv::imread("images/colors.png");
std::vector<cv::Mat> bgr;
splitTo8UC3(img, bgr);
cv::imwrite("images/b.png", bgr[0]);
cv::imwrite("images/g.png", bgr[1]);
cv::imwrite("images/r.png", bgr[2]);
Then, we get the following outputs b.png, g.png, and r.png, which hopefully are the same as those from your initial solution.
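If you want to verify that the mixChannels version really produces the same result as your loop, a quick check is to compare the two outputs channel by channel (splitTo8UC3_loop is a hypothetical name for your original implementation):
std::vector<cv::Mat> bgr_loop, bgr_mix;
splitTo8UC3_loop(img, bgr_loop);   // your original loop-based version
splitTo8UC3(img, bgr_mix);         // the mixChannels version above

for (int i = 0; i < 3; ++i)
{
    // the L-infinity norm of the difference is 0 iff the images are identical
    double diff = cv::norm(bgr_loop[i], bgr_mix[i], cv::NORM_INF);
    std::cout << "channel " << i << ": max abs diff = " << diff << std::endl;
}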
Hope that helps!

c++ vector push_back inconsistent behavior

I push back two sets (A = {A1, A2, A3}, B = {B1, B2, B3}) of equal matrices (A1 = B1, A2 = B2, A3 = B3) of the same type (CV_32FC1) into two different vectors (Va and Vb) of the same type. When I compare the contents of the vectors pair by pair (Va[0] vs Vb[0], Va[1] vs Vb[1], Va[2] vs Vb[2]), they are different. How is this possible?
Code explanation:
A = imgs_lab_channels.
Lab_channel_current_FG = the foreground image.
Lab_channel_current_BG = the background image.
Lab_channel_current = Lab_channel_current_FG + Lab_channel_current_BG.
Each Lab_channel_current is pushed into the vector Lab_channels, so B = Lab_channels.
I check that Lab_channel_current = imgs_lab_channels[i].
When I read back the matrices from the vectors A and B, they are different.
Code snippet:
std::vector<cv::Mat> imgs_lab_channels;
split(imgs_lab, imgs_lab_channels);

std::vector<cv::Mat> Lab_channels;
cv::Mat Lab_channel_current_FG;
cv::Mat Lab_channel_current_BG;
cv::Mat Lab_channel_current;

for (int i = 0; i < 3; i++)
{
    // FG_mask and BG_mask are 0-1 binary matrices of type CV_32FC1
    // with the same size as imgs_lab_channels. FG_mask and BG_mask
    // select the foreground and background respectively. Omitted for
    // the sake of clarity.
    Lab_channel_current_FG = imgs_lab_channels[i].mul(FG_mask);
    Lab_channel_current_BG = imgs_lab_channels[i].mul(BG_mask);

    // add the FG and the BG image
    Lab_channel_current = Lab_channel_current_FG + Lab_channel_current_BG;
    Lab_channels.push_back(Lab_channel_current);
}

for (int j = 0; j < 3; j++)
{
    cv::Mat temp2 = Lab_channels[j] - imgs_lab_channels[j];
    double min2, max2;
    cv::minMaxLoc(temp2, &min2, &max2);
    std::cout << "temp2 min,max=" << min2 << "," << max2 << std::endl;
}

what's wrong with the cvUndistortPoints

float k[] = {1531.49, 0, 1267.78, 0, 1521.439, 952.078, 0, 0, 1};
float d[] = {-0.27149, 0.15384, 0.0046, -0.0026};
CvMat camera1 = cvMat(3, 3, CV_32FC2, k);
CvMat distCoeffs1 = cvMat(1, 4, CV_32FC2, d);

const int npoints = 4; // number of points specified

// Points initialization.
// Only 2 points in this example, in real code they are read from file.
float input_points[npoints][4] = {{0, 0}, {2560, 1920}}; // the rest will be set to 0

CvMat* src = cvCreateMat(1, npoints, CV_32FC2);
CvMat* dst = cvCreateMat(1, npoints, CV_32FC2);

// fill src matrix
float* src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    for (int ci = 0; ci < 2; ++ci) {
        *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
    }
}

cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
I want to use the cvUndistortPoints function and used the example code above to test it. When I run it with VS2012, it doesn't work; it says "src.size doesn't match the dst.size". I am a rookie in OpenCV. Can someone help me?
Thank you.
Screenshot: the result of running in VS2012.
Again, please use OpenCV's C++ API, not the deprecated C one:
Mat_<float> cam(3,3);
cam << 1531.49, 0, 1267.78,
       0, 1521.439, 952.078,
       0, 0, 1;

Mat_<float> dist(1,4);
dist << -0.27149, 0.15384, 0.0046, -0.0026;

const int npoints = 4; // number of points specified
// Points initialization.
// Only 2 points in this example, in real code they are read from file.
Mat_<Point2f> points(1, npoints);
points(0) = Point2f(0, 0);
points(1) = Point2f(2560, 1920);

Mat dst; // leave empty, opencv will fill it.
undistortPoints(points, dst, cam, dist);
cerr << dst;
[-0.90952414, -0.69702172, 0.92829341, 0.69035494, -0.90952414, -0.69702172, -0.90952414, -0.69702172]
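Note that, called like this, undistortPoints returns normalized (ideal) image coordinates, which is why the values above are fractions rather than pixel positions. If you want the undistorted points back in pixel coordinates, you can pass the camera matrix as the new projection matrix P; a sketch:
Mat dst_pix;
// R = noArray() means no rectification; P = cam reprojects into pixel coordinates
undistortPoints(points, dst_pix, cam, dist, noArray(), cam);
cerr << dst_pix;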

How to create OpenCV images that are contiguous in memory?

I am an OpenCV newbie. I create an OpenCV image using cvCreateImage and apply some operations on it. Now, I want to create a series of OpenCV images whose underlying memory is contiguous. This can be helpful for processing that memory later as a series of image frames using parallel or CUDA techniques.
How can I create a certain number of OpenCV images that are contiguous in memory?
You can allocate the data yourself:
const int W = 640;
const int H = 480;
const int C = 1; // number of channels (1 for CV_8U)
const int N = 10; // number of images
unsigned char buffer[W*H*C*N]; // ~3 MB here; for larger sizes, allocate on the heap (e.g. std::vector) instead of the stack
cv::Mat im0(H, W, CV_8U, buffer);
cv::Mat im1(H, W, CV_8U, buffer + W*H*C);
cv::Mat im2(H, W, CV_8U, buffer + W*H*C*2);
I have used the C++ API because I'm more used to it, but similar behaviour exists in the C API as well.
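For reference, in the old C API the equivalent would go through cvCreateImageHeader plus cvSetData rather than cvCreateImage (which always allocates its own buffer); a rough sketch under that assumption:
// wrap the k-th frame of the preallocated buffer in an IplImage header
int k = 0;
IplImage* frame = cvCreateImageHeader(cvSize(W, H), IPL_DEPTH_8U, C);
cvSetData(frame, buffer + k * W*H*C, W*C);   // step = bytes per row

// ... use frame like any other IplImage ...

cvReleaseImageHeader(&frame);   // frees only the header, not the shared buffer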
You could use a cv::Mat to manage the storage; then you don't have to remember to delete the storage.
Assuming 3 channel images:
const int W = 640; const int H = 480;
const int N = 10; // number of images
cv::Mat buffer(N, W * H, CV_8UC3); // one row per image, 3 bytes per pixel
cv::Mat im0(H, W, CV_8UC3, buffer.ptr<uchar>(0));
cv::Mat im1(H, W, CV_8UC3, buffer.ptr<uchar>(1));
cv::Mat im2(H, W, CV_8UC3, buffer.ptr<uchar>(2));
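Either way, you can sanity-check that the frames really live in one contiguous block, for example (using the names from the second variant):
// the backing storage is a single N x (W*H) matrix, so it is continuous
CV_Assert(buffer.isContinuous());

// each frame starts exactly one buffer row (W*H*3 bytes) after the previous one
CV_Assert(im1.data == im0.data + (size_t)W * H * 3);
CV_Assert(im2.data == im1.data + (size_t)W * H * 3);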

OpenCV C++: how to access pixel values of a CV_32F Mat through the uchar data pointer

Briefly, I would like to know if it is possible to directly access the pixel values
of a CV_32F Mat through the Mat member "uchar* data".
I can do it with no problem if Mat is CV_8U, for example:
// a matrix 5 columns and 6 rows, values in [0,255], all elements initialised at 12
cv::Mat A;
A.create(5, 6, CV_8UC1);
A = cv::Scalar(12);

// here I successfully access pixel [4,5]
uchar *p = A.data;
int value = (uchar) p[4*A.step + 5];
The problem is when I try to do the same operation with the following matrix,
// a matrix 5 columns, 6 rows, values in [0.0, 1.0], all elements initialised at 1.2
cv::Mat B;
B.create(5,6, CV_32FC1);
B = cv::Scalar(1.2);
//this clearly does not work, no syntax error but erroneous value reported!
uchar *p = B.data;
float value = (float) p[4*B.step + 5];
//this works, but it is not what I want to do!
float value = B.at<float>(4,5);
Thanks a lot, Valerio
You can use the ptr method, which returns a pointer to a matrix row:
for (int y = 0; y < mat.rows; ++y)
{
    float* row_ptr = mat.ptr<float>(y);
    for (int x = 0; x < mat.cols; ++x)
    {
        float val = row_ptr[x];
    }
}
You can also cast the data pointer to float and use elem_step instead of step if the matrix is continuous:
float* ptr = (float*) mat.data;
size_t elem_step = mat.step / sizeof(float);
float val = ptr[i * elem_step + j];
Note that CV_32F means the elements are float instead of uchar; the "F" here means "float", and the "U" in CV_8U stands for unsigned integer. Maybe that's why your code doesn't give the right value. With p declared as uchar*, p[4*B.step + 5] moves to the fifth row and then advances by only 5 bytes (5 * sizeof(uchar)) instead of 5 floats, which tends to be wrong. You can try
float value = (float) p[4*B.step + 5*B.elemSize()]
but I'm not sure if it will work.
Here are some ways to read the element at [i, j] into value:
value = B.at<float>(i, j)
value = B.ptr<float>(i)[j]
value = ((float*)(B.data + i*B.step))[j]
The 3rd way is not recommended though, since it's easy to get the byte/element offsets wrong. Besides, a 6x5 matrix should be created by B.create(6, 5, CV_32FC1), I think?
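To spell out the raw-pointer variant: step is in bytes, so add the byte offset to the uchar* pointer first and only then reinterpret the row as float. A small sketch for element (i, j) of a CV_32FC1 matrix:
// byte offset of row i, then j elements of sizeof(float) bytes each
const uchar* row = B.data + i * B.step;
float value = ((const float*)row)[j];

// equivalent, using the typed row pointer OpenCV provides
float value2 = B.ptr<float>(i)[j];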
