pass Mat to CvMat* segmentation fault - opencv

I want to assign a value to a pointer variable, converting from type Mat to CvMat*.
So I have something like the following and want to assign it to the variable Si:
Mat S = (Mat_<double>(1, 3) << 1, 0, 1);
CvMat* Si;
*Si = S;
But this gives a segmentation fault. Am I doing something wrong?

Use
Si = &S
if you just want the pointer to point at the existing matrix. Or initialize Si first and then copy S into it:
Si = new Mat_<double>(1, 3);
*Si = S;
Basically, before you initialize Si it is an invalid pointer, so copying a structure to whatever address it happens to refer to is an invalid operation. You need to own a valid memory address (which the new operator provides) in order to work on the object stored there.
Don't forget to use delete Si; at some point later on.
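For example, here is a minimal sketch of both options, assuming a plain cv::Mat* pointer rather than the C-style CvMat* from the question:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat S = (cv::Mat_<double>(1, 3) << 1, 0, 1);

    // Option 1: just point at the existing object (no copy, no ownership).
    cv::Mat* Si = &S;

    // Option 2: own a new object and assign S to it.
    cv::Mat* So = new cv::Mat();
    *So = S;       // copies the header; the pixel data is shared by reference counting
    // ... use Si and So ...
    delete So;     // release what was allocated with new

    return 0;
}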

You need to allocate storage for the object referenced by the pointer. This can be done by calling cvCreateMat(), as below:
cv::Mat S = (cv::Mat_<double>(1, 3) << 1, 0, 1);
CvMat* Si = cvCreateMat(1, 3, CV_64FC1);
*Si = S;
Note that this approach will copy the data from S into Si. If you want to only create a CvMat header without copying the data, do this:
cv::Mat S = (cv::Mat_<double>(1, 3) << 1, 0, 1);
CvMat m = S;
CvMat* Si = &m;
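Note that in the second snippet m is only a header pointing at the data owned by S, so nothing is copied and Si stays valid only as long as S (and its data) is alive.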

Related

Read cv::Mat pixel without knowing its pixel format

I am aware there are several ways to read and write a pixel value of an OpenCV cv::Mat image/matrix.
A common one is the .at<T>(int, int) method (http://opencv.itseez.com/2.4/modules/core/doc/basic_structures.html#mat-at).
However, this requires the typename to be known, for instance .at<double>.
The same thing applies to more direct pointer access (see OpenCV get pixel channel value from Mat image).
How can I read a pixel value without knowing its type? For instance, it would be ok to receive a more generic CvScalar value in return. Efficiency is not an issue, as I would like to read rather small matrices.
Kind of. You can construct a cv::Mat_ with an explicit element type; after that you don't have to spell out the element type on every access. Quoting opencv2/core/mat.hpp:
While Mat is sufficient in most cases, Mat_ can be more convenient if you use a lot of element
access operations and if you know matrix type at the compilation time. Note that
Mat::at(int y,int x) and Mat_::operator()(int y,int x) do absolutely the same
and run at the same speed, but the latter is certainly shorter.
Mat_ and Mat are very similar. Quoting mat.hpp again:
The class Mat_<_Tp> is a thin template wrapper on top of the Mat class. It does not have any
extra data fields. Nor this class nor Mat has any virtual methods. Thus, references or pointers to
these two classes can be freely but carefully converted one to another.
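To illustrate the "freely but carefully converted" remark above, you can also view an existing Mat of known element type through a Mat_ reference without copying. A minimal sketch (the variable names are made up):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat frame(3, 3, CV_8UC3, Scalar(0, 0, 0)); // we know this matrix is CV_8UC3
    Mat_<Vec3b>& typed = (Mat_<Vec3b>&)frame;  // typed view, no data copy
    typed(0, 0) = Vec3b(255, 0, 0);            // writes into frame's data
    return 0;
}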
You can use it like this
Mat_<Vec3b> dummy(3,3);
dummy(1, 2)[0] = 10;
dummy(1, 2)[1] = 20;
dummy(1, 2)[2] = 30;
cout << dummy(1, 2) << endl;
Why did I say 'kind of'? Because if you want to pass this Mat_ somewhere, you have to specify its type. Like this:
void test(Mat_<Vec3b>& arr) {
arr(1, 2)[0] = 10;
arr(1, 2)[1] = 20;
arr(1, 2)[2] = 30;
cout << arr(1, 2) << endl;
}
...
Mat_<Vec3b> dummy(3,3);
test(dummy);
Technically you are not specifying the type at the point of the pixel read, but you still have to know it and cast the Mat to the appropriate Mat_ type beforehand.
I guess you can find a way around this using some low-level hacks (for example make a method that reads Mat's type, calculates element size and stride, and then accesses raw data using pointer arithmetic and casting...). But I don't know any 'clean' way to do this using OpenCV's functionality.
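For completeness, here is a rough sketch of that low-level idea, assuming single-channel access by default (readAsDouble is a made-up helper, not an OpenCV function; bounds checks are omitted):
#include <opencv2/opencv.hpp>

double readAsDouble(const cv::Mat& m, int row, int col, int ch = 0)
{
    // locate the element: row pointer + column offset + channel offset
    const uchar* p = m.ptr(row) + col * m.elemSize() + ch * m.elemSize1();
    switch (m.depth())
    {
    case CV_8U:  return *reinterpret_cast<const uchar*>(p);
    case CV_8S:  return *reinterpret_cast<const schar*>(p);
    case CV_16U: return *reinterpret_cast<const ushort*>(p);
    case CV_16S: return *reinterpret_cast<const short*>(p);
    case CV_32S: return *reinterpret_cast<const int*>(p);
    case CV_32F: return *reinterpret_cast<const float*>(p);
    case CV_64F: return *reinterpret_cast<const double*>(p);
    }
    return 0.0; // unknown depth
}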
If you already know the type, you can use Mat_<> type for easy access. If you don't know the type, you can:
convert the data to double, so the data won't be truncated in any case
switch over the number of channels to correctly access the converted double matrix. Note that you can have at most 4 channels, since a Scalar has at most 4 elements.
The following code will convert only the selected element of the source matrix to a double value (with N channels).
You get a Scalar containing the value at position row, col in the source matrix.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Scalar valueAt(const Mat& src, int row, int col)
{
Mat dst;
src(Rect(col, row, 1, 1)).convertTo(dst, CV_64F);
switch (dst.channels())
{
case 1: return dst.at<double>(0);
case 2: return dst.at<Vec2d>(0);
case 3: return dst.at<Vec3d>(0);
case 4: return dst.at<Vec4d>(0);
}
return Scalar();
}
int main()
{
Mat m(3, 3, CV_32FC3); // You can use any type here
randu(m, Scalar(0, 0, 0, 0), Scalar(256, 256, 256, 256));
Scalar val = valueAt(m, 1, 2);
cout << val << endl;
return 0;
}

How to replace sem_init() with sem_open() for a non-pointer semaphore?

I'm currently stuck on a problem.
Below is the original code:
sem_t s;
sem_init(&s, 0, 1);
And I need to replace sem_init with sem_open because the code will be used on iOS:
sem_t s;
sem_open("/s", O_CREAT, 0644, 1); //which will return sem_t*
How should I assign the return address to s?
Thanks
P.S. I do not declare sem_t* s, because this is a huge library and I don't want to change it too much.
Create a new semaphore pointer,
sem_t *sptr;
Invoke sem_open so that sptr holds the returned address:
sptr = sem_open("/s", O_CREAT, 0644, 1);
And the preprocessor macro below should do the trick:
#define s *sptr
With the above method, whenever s is passed as an argument, for example sem_wait(&s) expands to sem_wait(&*sptr) => sem_wait(sptr), which is the desired call without changing the sem_t s usage throughout the code.
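Putting the pieces together, a minimal sketch of the pattern (compiles as C or C++; the error handling and the parentheses around *sptr are my additions, the latter being slightly safer in macro expansions):
#include <semaphore.h>
#include <fcntl.h>

sem_t *sptr;
#define s (*sptr)   /* existing code can keep writing "s" */

int main(void)
{
    sptr = sem_open("/s", O_CREAT, 0644, 1);
    if (sptr == SEM_FAILED)
        return 1;

    sem_wait(&s);   /* expands to sem_wait(&(*sptr)), i.e. sem_wait(sptr) */
    /* ... critical section ... */
    sem_post(&s);

    sem_close(sptr);
    sem_unlink("/s");
    return 0;
}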

How to train an SVM with opencv based on a set of images?

I have a folder of positive images and another of negative images in JPG format, and I want to train an SVM based on those images. I've done the following, but I receive an error:
Mat classes = new Mat();
Mat trainingData = new Mat();
Mat trainingImages = new Mat();
Mat trainingLabels = new Mat();
CvSVM clasificador;
for (File file : new File(path + "positives/").listFiles()) {
Mat img = Highgui.imread(file.getAbsolutePath());
img.reshape(1, 1);
trainingImages.push_back(img);
trainingLabels.push_back(Mat.ones(new Size(1, 1), CvType.CV_32FC1));
}
for (File file : new File(path + "negatives/").listFiles()) {
Mat img = Highgui.imread(file.getAbsolutePath());
img.reshape(1, 1);
trainingImages.push_back(img);
trainingLabels.push_back(Mat.zeros(new Size(1, 1), CvType.CV_32FC1));
}
trainingImages.copyTo(trainingData);
trainingData.convertTo(trainingData, CvType.CV_32FC1);
trainingLabels.copyTo(classes);
CvSVMParams params = new CvSVMParams();
params.set_kernel_type(CvSVM.LINEAR);
clasificador = new CvSVM(trainingData, classes, new Mat(), new Mat(), params);
When I try to run that I obtain:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp, line 857
Exception in thread "main" CvException [org.opencv.core.CvException: ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp:857: error: (-5) train data must be floating-point matrix in function cvCheckTrainData
]
at org.opencv.ml.CvSVM.CvSVM_1(Native Method)
at org.opencv.ml.CvSVM.<init>(CvSVM.java:80)
I can't manage to train the SVM. Any idea? Thanks.
Assuming that you know what you are doing by reshaping an image and using it to train an SVM, the most probable cause of this is that your
Mat img = Highgui.imread(file.getAbsolutePath());
fails to actually read an image, producing a matrix img with a null data pointer, which will eventually trigger the following check in the OpenCV code:
// check parameter types and sizes
if( !CV_IS_MAT(train_data) || CV_MAT_TYPE(train_data->type) != CV_32FC1 )
CV_ERROR( CV_StsBadArg, "train data must be floating-point matrix" );
Basically train_data fails the first condition (being a valid matrix) rather than failing the second condition (being of type CV_32FC1).
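You can confirm this by checking img.empty() right after the imread call (the check is available in both the C++ and Java bindings) and skipping or reporting the files that fail to load.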
In addition, even though reshape is called on the *this object, it acts like a filter: it returns a new header and does not modify the matrix in place, so using it in a single statement without assigning or using the result does nothing. Change the following lines in your code:
img.reshape(1, 1);
trainingImages.push_back(img);
to:
trainingImages.push_back(img.reshape(1, 1));
Just as the error says, you need to change the type of your matrix from an integer type, probably CV_8U, to a floating-point one, CV_32F or CV_64F. To do that you can use cv::Mat::convertTo(). Here is a bit about depths and types of matrices.
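A minimal C++ sketch of that conversion (loadAsTrainingRow and the argument name are made up for illustration; the Java binding img.convertTo(dst, CvType.CV_32FC1) behaves the same way):
#include <opencv2/opencv.hpp>
#include <string>

cv::Mat loadAsTrainingRow(const std::string& path)
{
    cv::Mat img = cv::imread(path);      // CV_8UC3 on success
    cv::Mat row = img.reshape(1, 1);     // flatten to a single row, still 8-bit
    cv::Mat rowF;
    row.convertTo(rowF, CV_32FC1);       // floating-point row the SVM trainer accepts
    return rowF;
}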

OpenCV changing Mat inside a function (Mat scope)

I am passing a Mat to another function and changing it inside the called function. I expected that, being a more complex type, it would automatically be passed by reference, so that the matrix would be changed in the calling function as well, but it isn't. Could someone point me to an explanation of how to correctly return a changed Mat from a function?
Here's the code snippet:
void callingFunction(Mat img)
{
Mat tst(100,500,CV_8UC3, Scalar(0,255,0));
saveImg(tst, "Original image", true);
testImg(tst);
saveImg(tst, "Want it to be same as inside testImg but is same as Original", true);
}
void testImg(Mat img)
{
int rs = 50; // rows
int cs = 100; // columns
img = Mat(rs, cs, CV_8UC3, Scalar(255,0,0));
Mat roi(img, Rect(0, 0, cs, rs/2));
roi = Scalar(0,0,255); // change a subsection to a different color
saveImg(img, "inside testImg", true);
}
Thanks!
You have to declare the Mat parameter as a reference (&). Here's the edited code:
void callingFunction(Mat& img)
{
Mat tst(100,500,CV_8UC3, Scalar(0,255,0));
saveImg(tst, "Original image", true);
testImg(tst);
saveImg(tst, "Want it to be same as inside testImg but is same as Original", true);
}
void testImg(Mat& img)
{
int rs = 50; // rows
int cs = 100; // columns
img = Mat(rs, cs, CV_8UC3, Scalar(255,0,0));
Mat roi(img, Rect(0, 0, cs, rs/2));
roi = Scalar(0,0,255); // change a subsection to a different color
saveImg(img, "inside testImg", true);
}
I wondered about the same question myself, so I would like to further clarify the answer given by @ArtemStorozhuk (which is correct).
The OpenCV documentation is misleading here, because it appears you're passing the matrix by value, but in fact the constructor of cv::OutputArray is defined as follows:
_OutputArray::_OutputArray(Mat& m)
so it gets the matrix by reference!
Since operations like cv::Mat::create allocate a new matrix, the operation releases the old reference and sets the reference counter to 1. Thus, in order to keep the result in the calling function, you have to pass the matrix by reference.
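A minimal sketch of that mechanism (fillWhite is a made-up function, not an OpenCV API): the proxy object itself is passed by value, but it was constructed from Mat&, so it refers to the caller's matrix:
#include <opencv2/opencv.hpp>

void fillWhite(cv::OutputArray dst)
{
    dst.create(3, 3, CV_8UC1);            // (re)allocates the caller's Mat
    dst.getMat().setTo(cv::Scalar(255));  // writes into the caller's data
}

int main()
{
    cv::Mat m;
    fillWhite(m);  // implicit conversion: Mat -> _OutputArray(Mat&)
    // m is now a 3x3 CV_8UC1 matrix filled with 255
    return 0;
}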
If it's true that you have to explicitly pass by reference, then how do all the OpenCV functions work? None of them take the output by reference, yet they somehow seem to write to the passed-in Mat just fine. For example, here is the declaration of the Sobel function in imgproc.hpp:
//! applies generalized Sobel operator to the image
CV_EXPORTS_W void Sobel( InputArray src, OutputArray dst, int ddepth,
int dx, int dy, int ksize=3,
double scale=1, double delta=0,
int borderType=BORDER_DEFAULT );
As you can see, it takes src and dst without a &. And yet I know that after I call Sobel with an empty dst, it ends up filled. No '&' involved.

cvExtractSURF don't work when useProvidedKeypoints = true

So, I'm trying to extract some SURF keypoints, but I want to impose my own key points, so I set the last parameter, "useProvidedKeypoints", to true.
Also, when I create my KeyPoint, I use the default constructor (so some default values there). I only change the point "pt" and the octave, which I set to 3.
I'm using the C++ interface with SURF, but I know that the problem is right at cvExtractSURF because I copied that part of the code into mine to help me debug.
When I call that function, with the last parameter set to true, I got this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left in some details, like the layer_id stuff, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
int layer_id = b_beg->layer_id;
json_point_info_coord &jpic = b_beg->coord;
jpic.feature_id = features[layer_id].keypoints.size();
KeyPoint kp;
kp.octave = 3;
kp.pt.x = jpic.x;
kp.pt.y = jpic.y;
features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
debug_msg("extract_features #4.1");
cv::detail::ImageFeatures &cdif = features[i];
Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
debug_msg("extract_features #4.2");
vector<float> descriptors;
debug_msg("extract_features #4.3");
surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
debug_msg("extract_features #4.4");
cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
debug_msg("extract_features #4.5");
gray_image.release();
debug_msg("extract_features #4.6");
images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes right after debug message #4.3!
Hope that helps!
EDIT 2:
I replaced part of the code with cv::SurfDescriptorExtractor. I replaced everything from 4.3 to 4.5 with the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So now there's still a bug, but it's located somewhere else and not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles. The descriptors argument should be a cv::Mat, not a vector.
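If your build's SURF::operator() indeed takes an OutputArray for the descriptors (as in the OpenCV 2.4 nonfree module), a hedged sketch of that change, reusing the question's variables:
// Assumes OpenCV 2.4 nonfree SURF; gray_image and cdif come from the question's loop.
cv::SURF surf(300, 3, 4);
cv::Mat descriptors;                                              // Mat instead of vector<float>
surf(gray_image, cv::Mat(), cdif.keypoints, descriptors, true);   // keep the provided keypoints
cdif.descriptors = descriptors;                                    // no manual reshape needed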
