Rust OpenCV: Convert Vec to Mat? - opencv

I'm brand new to Rust. I have a vector of vectors (Vec<Vec<f32>>) containing values between 0 and 1, and I need to convert it to an OpenCV Mat so that I can use the algorithms in opencv::imgproc on it.
Could anyone give me some advice on how to do this?
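One approach (a sketch only — the exact Mat constructor names vary between versions of the `opencv` crate, so treat the OpenCV calls in the comments as assumptions to check against your crate's docs) is to flatten the nested Vec into one contiguous row-major buffer first. The flattening itself is plain Rust:

```rust
// Flatten a Vec<Vec<f32>> (rows of equal length) into one contiguous
// row-major buffer, returning (rows, cols, data). The opencv crate can
// then build a Mat from the buffer, e.g. with something like
// Mat::from_slice(&data)?.reshape(1, rows as i32) -- verify the exact
// constructor against your crate version, as these APIs have changed.
fn flatten(rows: &[Vec<f32>]) -> (usize, usize, Vec<f32>) {
    let n_rows = rows.len();
    let n_cols = rows.first().map_or(0, |r| r.len());
    assert!(rows.iter().all(|r| r.len() == n_cols), "ragged input");
    let data: Vec<f32> = rows.iter().flatten().copied().collect();
    (n_rows, n_cols, data)
}

fn main() {
    let v = vec![vec![0.0_f32, 0.5], vec![0.25, 1.0]];
    let (r, c, data) = flatten(&v);
    println!("{}x{} -> {:?}", r, c, data); // 2x2 -> [0.0, 0.5, 0.25, 1.0]
}
```

A contiguous buffer plus (rows, cols) is all any of the Mat-from-data constructors need, so this separation keeps the unsafe-free part testable on its own.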

Related

How to convert 3D point in OpenCV to Eigen representation?

What is the best way to convert a 3D point (cv::Point3) in OpenCV to Eigen representation?
I think the best representation in Eigen is Vector3d, such as:
cv::Point3d p(1,2,3);
Eigen::Vector3d point(p.x, p.y, p.z);

Extract point descriptors from small images using OpenCV

I am trying to extract different point descriptors (SIFT, SURF, ORB, BRIEF, ...) to build a Bag of Visual Words. The problem seems to be that I am using very small images: 12x60 px.
Using a dense extractor I am able to get some keypoints, but when extracting the descriptors no data is produced.
Here is the code:
vector<KeyPoint> points;
Mat descriptor; // descriptor of the current image
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f,1,0.1f,6,0,true,false));
image = imread(filename, 0);
roi = Mat(image,Rect(0,0,12,60));
detector->detect(roi,points);
extractor->compute(roi,points,descriptor);
cout << descriptor << endl;
The result is [] (with BRIEF and ORB) and a segfault (with SURF and SIFT).
Does anyone have a clue how to densely extract point descriptors from small images in OpenCV?
Thanks for your help.
Indeed, I finally managed to work my way to a solution; thanks for the help.
I am now using an ORB extractor with explicitly initialised parameters instead of a default-constructed one, e.g.:
Ptr<DescriptorExtractor> extractor(new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));
I had to explore the OpenCV documentation thoroughly before finding the answer to my problem: the ORB documentation.
Also, anyone using the dense point extractor should be aware that after computing descriptors they may end up with fewer keypoints than the detector produced: the descriptor computation removes any keypoint for which it cannot extract data.
BRIEF and ORB use a 32x32 patch to compute each descriptor. Since such a patch doesn't fit inside your image, they remove all the keypoints (to avoid returning keypoints without descriptors).
In the case of SURF and SIFT, smaller patches can be used, but the patch size depends on the scale provided by the keypoint. In this case, I guess they have to use a bigger patch, and the same thing happens as before. I don't know why you get a segfault, though; maybe the SIFT/SURF descriptor extractors don't check that keypoints are inside the image boundaries, as the BRIEF/ORB ones do.

What's the full name of `IplImage` and `CvMat` in OpenCV?

There are IplImage and CvMat types in OpenCV. What are their full names?
IPL in IplImage stands for the Intel Image Processing Library, a remnant of the time when OpenCV was developed and maintained by Intel.
Cv in CvMat stands for Computer Vision; CvMat is a matrix data structure commonly used in graphics.
IplImage is an old structure which, I believe, is internally converted into cv::Mat (or just Mat if you're already in the cv namespace). Likewise, CvMat is converted into Mat as well.
http://opencv.willowgarage.com/documentation/cpp/basic_structures.html?highlight=iplimage

Feeding extracted HoG features into CvSVM's train function

This is a silly question since I'm quite new to SVM,
I've managed to extract features and locations using OpenCV's HOGDescriptor:
vector< float > features;
vector< Point > locations;
hog_descriptors.compute( image, features, Size(0, 0), Size(0, 0), locations );
Then I proceed to use CvSVM to train the SVM based on the features I've extracted.
Mat training_data( features );
CvSVM svm;
svm.train( training_data, labels, Mat(), Mat(), params );
Which gave me an error:
OpenCV Error: Bad argument (There is only a single class) in cvPreprocessCategoricalResponses, file /opt/local/var/macports/build/
My question is: how do I convert the vector<float> of features into an appropriate matrix to feed into CvSVM? Obviously I am doing something wrong; OpenCV's tutorial shows a 2D matrix containing the training data being fed into the SVM. So how do I convert the feature vector into a 2D matrix, and what are the values in the second dimension?
What are these features exactly? Are they the 9 bins of the normalized gradient-magnitude histograms?
I found the issue: since I was only testing whether it is correct to pass feature vectors into the SVM in order to train it, I didn't bother to prepare both negative and positive samples.
Yet CvSVM requires at least 2 different classes for training, which is why it threw that error.
Thanks a lot anyway!

OpenCV CalcPca input data

I am trying to implement a face recognition training function with OpenCV, using "eigenfaces". I have the sample data, but I can't find any info on the CalcPCA function's arguments. All I know is that it takes a data matrix, a reference to the average eigenface matrix, a reference to the eigenvectors matrix, and a reference to the eigenvalues matrix.
My question is: how should I pass the data from several test image matrices into the first argument of CalcPCA so I can get the average eigenface and the eigenvectors?
This seems to be a good example: http://tech.groups.yahoo.com/group/OpenCV/message/47627
You can do it this way:
Say you have, for example, 10 Mats, where each Mat represents an image.
Create a new Mat, and insert the previous 10 Mats into it using Mat::push_back(...).
Hope this is helpful for you.
Marco
