SiftDescriptorExtractor with CvSURFPoint - image-processing

Is there any way to use SiftDescriptorExtractor with the old CvSURFPoint keypoints instead of the required cv::Mat? The reason I would like to do this is that I have many results computed with the old detector and I need to perform some kind of comparison; however, the new SURF detector (cv::SurfFeatureDetector) returns different keypoints than the old one, even though the settings are the same...
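One possible direction, sketched here as an untested illustration (assuming the OpenCV 2.x API): wrap each old CvSURFPoint in a cv::KeyPoint and hand the resulting vector to SiftDescriptorExtractor::compute, which only needs keypoint geometry, not the detector that produced it.

    #include <opencv2/core/core.hpp>
    #include <opencv2/core/core_c.h>              // cvGetSeqElem, CvSeq
    #include <opencv2/features2d/features2d.hpp>  // cv::KeyPoint; CvSURFPoint lives here in 2.3
    // Note: in OpenCV 2.4, CvSURFPoint moved to opencv2/legacy/compat.hpp and
    // SiftDescriptorExtractor to opencv2/nonfree/features2d.hpp.
    #include <vector>

    // Convert old-style CvSURFPoint results (e.g. from cvExtractSURF) into
    // cv::KeyPoint so the C++ descriptor extractors can consume them.
    std::vector<cv::KeyPoint> toKeyPoints(const CvSeq* surfPoints)
    {
        std::vector<cv::KeyPoint> keypoints;
        for (int i = 0; i < surfPoints->total; ++i)
        {
            const CvSURFPoint* p = (const CvSURFPoint*)cvGetSeqElem(surfPoints, i);
            // CvSURFPoint carries pt, size, dir (angle) and hessian (response)
            keypoints.push_back(cv::KeyPoint(cv::Point2f(p->pt.x, p->pt.y),
                                             (float)p->size, p->dir, p->hessian));
        }
        return keypoints;
    }

    // Usage sketch:
    //   std::vector<cv::KeyPoint> kps = toKeyPoints(oldSurfKeypoints);
    //   cv::SiftDescriptorExtractor extractor;
    //   cv::Mat descriptors;
    //   extractor.compute(image /* cv::Mat */, kps, descriptors);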

Related

ELKI: Normalization undo for result

I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before the run with -dbc.filter, but I would like to see the original data records, not the normalized ones, in the output.
There seems to be a flag called -normUndo that can be set when using the command line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but has effectively been removed (for now), for several reasons:
- Only a few normalizations ever supported this; most would fail.
- There is no longer a well-defined "end" of the visualization. Some users will want to visualize the normalized data, others will not.
- It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
- Due to the numerical imprecision of floating-point math, you would frequently not get back exactly the same values you put in.
- Keeping the original data in memory may be too expensive for some use cases, so we would need another parameter such as "keep non-normalized data"; furthermore, you would have to choose which version (normalized or non-normalized) to use for analysis and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (This is easy to do with Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest way is this: add a (non-numerical) label column; you can then identify the original objects in your original data by this label.

how to convert List<MatofPoint> to Mat in opencv?

Recently I have been developing an Android app using OpenCV, and I have run into a problem:
Imgproc.findContours(grayMat, contours1, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
After this call, I want to use Imgproc.matchShapes to check whether two images match, but the Java version of matchShapes requires parameters of type Mat.
How can I convert a List<MatOfPoint> to a Mat?
The function you use to detect contours returns a list of MatOfPoint objects; each contour (and there can be many) gets its own MatOfPoint.
You have to decide which contour you want to use with Imgproc.matchShapes. If you know there is only one, just use the first entry in the List<MatOfPoint>. If you want the biggest one, use contour properties (such as area) to find it. If you have time, you can check every single contour.
Then, once you have found the single contour you want to compare, you can use that MatOfPoint directly. According to this StackOverflow question, they are perfectly compatible.
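As a rough sketch of the "pick the biggest contour, then compare" idea, here it is in the C++ API (the Java bindings mirror these calls; in Java, MatOfPoint extends Mat, which is why the chosen entry can go straight into Imgproc.matchShapes). The area-based selection is only one of the possible criteria mentioned above.

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Pick the largest contour from findContours output and compare it
    // against a reference contour with matchShapes.
    double compareLargestContour(const std::vector<std::vector<cv::Point> >& contours,
                                 const std::vector<cv::Point>& reference)
    {
        int best = -1;
        double bestArea = 0.0;
        for (size_t i = 0; i < contours.size(); ++i)
        {
            double area = cv::contourArea(contours[i]);
            if (area > bestArea) { bestArea = area; best = (int)i; }
        }
        if (best < 0)
            return -1.0; // no contours found

        // CV_CONTOURS_MATCH_I1 is one of the Hu-moment based comparison methods
        return cv::matchShapes(cv::Mat(contours[best]), cv::Mat(reference),
                               CV_CONTOURS_MATCH_I1, 0.0);
    }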

Unsupported format or combination of formats when using cv::reduce method in OpenCV

I am using OpenCV 2.4.2 and I am trying to take projections of two matrices (tmpl (32x44) and subj (32x44)) along rows and columns. I have initialised a result matrix as rowProjectionSubj(subj.rows, 1, CV_8UC1), and then I call cv::reduce(subj, rowProjectionSubj, 1, CV_REDUCE_SUM, -1);
Why does this complain about a type mismatch? I have kept the types the same (by keeping dtype=-1 in cv::reduce). I get the tmpl and subj objects with cv::imread("image_path", 0), i.e. by reading the images in as grayscale.
I might not be right, but after I saw this:
http://answers.opencv.org/question/3698/cvreduce-gives-unsupported-format-exception/?answer=3701#post-id-3701
and did a little experiment, recalling an old friend called "register math", I realised that when you add two 8-bit numbers you need a wider register to hold the sum, because it potentially has a carry. So the result of reduce needs a bigger type than the source: if the source is 8-bit unsigned, the destination should be at least 16-bit (unsigned or signed), and might as well be 32-bit if it is going to be used in further calculations.
NOTE: the destination type must be stated EXPLICITLY in the cv::reduce call. Please follow the OpenCV link above for further information.
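A minimal sketch of that fix, assuming 8-bit grayscale input as in the question: pass an explicit, wider destination type to cv::reduce instead of dtype = -1.

    #include <opencv2/core/core.hpp>
    #include <opencv2/core/core_c.h>       // CV_REDUCE_SUM
    #include <opencv2/highgui/highgui.hpp> // cv::imread in OpenCV 2.x

    int main()
    {
        // 8-bit grayscale input, as in the question
        cv::Mat subj = cv::imread("subj.png", 0);

        // dim = 1 collapses each row to a single column (a row projection).
        // The row sums can exceed 255, so request a wider destination type
        // explicitly instead of dtype = -1 (which keeps the 8-bit source type).
        cv::Mat rowProjectionSubj;
        cv::reduce(subj, rowProjectionSubj, 1, CV_REDUCE_SUM, CV_32S);

        return 0;
    }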

How to save a CV_32F type cv::Mat to a file without losing precision?

I'm using the cv::PCA class for a face recognition project. I convert photos of faces to one-row vectors, concatenate them into one big array, and feed that to PCA to obtain a new space in which I can try to use distances for recognition. The problem is that calculating the PCA from scratch each time I start the program is really time consuming (almost five minutes). I figured out that I need to save the calculated PCA to the hard drive and load it when I start the program again. And here is the problem: as far as I can see, all the cv::Mat objects in cv::PCA are of type CV_32F.
When I try to save one as a normal picture, it is converted to an 8-bit image and some data is lost. When I use XML/YAML persistence, the generated file is really big, and data also seems to be lost (I saved it, loaded it into another structure, and ran cerr<<sum(pca_orginal.mean==pca_loaded.mean)[0]<<endl to check how big the difference is).
Right now I'm trying to use std::ofstream::write with the std::ofstream::binary flag, and istream::read, but there are some type issues: out.write(_pca.mean.data,_pca.mean.rows*_pca.mean.cols*4/*CV_32F->4*CV_8U*/); generates error: no matching function for call to 'std::basic_ofstream<char, std::char_traits<char> >::write(uchar*&, int)'. I've also heard about the OpenEXR library and its file format, but I would rather avoid additional libraries. I'm using OpenCV 2.3.1 and OpenCV 2.2.
Edit:
I'm sorry for the confusion. I misread the cv::Mat operator== description and thought it works the opposite way from how it actually does, so sum(pca_orginal.mean==pca_loaded.mean)[0] giving 0 is the worst possible result, not the best. It means that XML/YAML actually works fine, apart from generating huge files. Also, after using a C-style cast I was able to make the binary streams work, but the files generated are also big (over 150 MB).
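For reference, a rough sketch of the kind of cast that makes the binary streams compile (assumptions: the Mat is continuous, and rows/cols/type are stored separately so the matrix can be rebuilt on load):

    #include <opencv2/core/core.hpp>
    #include <fstream>

    // Dump a continuous cv::Mat's raw bytes to a binary stream and read
    // them back. Only the pixel data is shown here; rows, cols and type()
    // would also have to be written so the matrix can be reconstructed.
    void writeRaw(const cv::Mat& m, std::ofstream& out)
    {
        CV_Assert(m.isContinuous());
        out.write(reinterpret_cast<const char*>(m.data),
                  m.total() * m.elemSize());
    }

    // m must already be allocated with the right size and type
    void readRaw(cv::Mat& m, std::ifstream& in)
    {
        CV_Assert(m.isContinuous());
        in.read(reinterpret_cast<char*>(m.data),
                m.total() * m.elemSize());
    }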
In the C interface, there are functions cvSave and cvLoad for saving arbitrary matrices. There are probably C++ interface counterparts, too.
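The C++ counterpart is cv::FileStorage; a sketch of persisting the cv::PCA fields with it (this is the same XML/YAML mechanism the question already found to produce large files, shown here only to illustrate the API):

    #include <opencv2/core/core.hpp>
    #include <string>

    // Save and reload the cv::PCA fields with cv::FileStorage
    // (the C++ counterpart of cvSave/cvLoad).
    void savePCA(const cv::PCA& pca, const std::string& path)
    {
        cv::FileStorage fs(path, cv::FileStorage::WRITE);
        fs << "mean" << pca.mean
           << "eigenvectors" << pca.eigenvectors
           << "eigenvalues" << pca.eigenvalues;
    }

    void loadPCA(cv::PCA& pca, const std::string& path)
    {
        cv::FileStorage fs(path, cv::FileStorage::READ);
        fs["mean"] >> pca.mean;
        fs["eigenvectors"] >> pca.eigenvectors;
        fs["eigenvalues"] >> pca.eigenvalues;
    }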

OpenCV: how to access one contour in O(1) after call cvFindContours()?

I'm using OpenCV to compare two blobs in two images. Suppose I know a pair of blobs that are likely to be similar, and I know their indices in the contour arrays (generated by cvFindContours()); how can I get access to one contour in constant time?
The most cumbersome way is to follow the h_next link (contours = contours->h_next) repeatedly, but I wonder if there is a faster way to retrieve a single contour from the array.
I use CV_RETR_EXTERNAL and CV_CHAIN_APPROX_NONE in calling cvFindContours().
Thanks!
-J.C.
I think the function cvGetSeqElem does what you want. Quoting the OpenCV docs: "The function has O(1) time complexity assuming that the number of blocks is much smaller than the number of elements." I suppose "blocks" means "contours" in this context.
Also, take a look at cvCvtSeqToArray (link), which copies a sequence to one continuous block of memory.
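As an additional sketch of the same goal, built on the old C API as an assumption rather than taken from the answers above: cvFindContours links the contours through h_next, so caching the CvSeq* pointers in one O(n) pass gives constant-time lookup afterwards.

    #include <opencv/cv.h>   // old C API: CvSeq, CvPoint, cvGetSeqElem
    #include <vector>

    // Walk the h_next chain once and cache the contour pointers; afterwards
    // any contour can be fetched by index in O(1).
    std::vector<CvSeq*> indexContours(CvSeq* contours)
    {
        std::vector<CvSeq*> table;
        for (CvSeq* c = contours; c != NULL; c = c->h_next)
            table.push_back(c);
        return table;
    }

    // Within a single contour (a sequence of CvPoint when using
    // CV_CHAIN_APPROX_NONE), cvGetSeqElem provides the element access the
    // docs describe, e.g.:
    //   CvPoint* p = (CvPoint*)cvGetSeqElem(table[i], j);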
