OpenCV provides SVD decomposition, but I cannot find a general QR decomposition in its library. Is there any alternative way to achieve this?
This is an old question, but for the sake of completeness (and because Google took me here), here's the answer using OpenCV --
OpenCV has the solve() function, which can be invoked with a flag specifying the matrix inversion method to use. Pass the flag DECOMP_QR to have it use a QR decomposition.
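To illustrate, here is a minimal sketch of solving an over-determined system through that QR path (the matrix values are just an example):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Over-determined system A*x = b, solved in the least-squares sense.
    cv::Mat A = (cv::Mat_<double>(3, 2) << 1, 1,
                                           1, 2,
                                           1, 3);
    cv::Mat b = (cv::Mat_<double>(3, 1) << 6, 0, 0);

    cv::Mat x;
    // DECOMP_QR tells solve() to use a QR factorization internally.
    cv::solve(A, b, x, cv::DECOMP_QR);

    std::cout << "x = " << x << std::endl;
    return 0;
}

Note that solve() uses QR internally but does not return the Q and R factors themselves.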
As someone mentioned 3 years ago, this is an old question (lolz)
But QR decomposition is very much possible in OpenCV. In 3.2 it can be done using cv::decomposeProjectionMatrix, as documented here: http://docs.opencv.org/trunk/d9/d0c/group__calib3d.html#gaaae5a7899faa1ffdf268cd9088940248
Note: I recognize I'm really answering for a specific case of QR decomposition (that on a projection matrix), but that doc page says that this function is based on RQDecomp3x3, which could be used for the generic RQ decomposition.
I'm answering this now as all of the answers that pop up when googling this are wrong in saying that it's not possible.
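As a rough sketch of the routine named in that note (RQDecomp3x3 only handles 3x3 matrices and gives an RQ rather than a QR factorization; the matrix values below are placeholders):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>

int main()
{
    // Some 3x3 matrix M to factor as M = R * Q,
    // with R upper triangular and Q orthogonal.
    cv::Matx33d M(12, -51,   4,
                   6, 167, -68,
                  -4,  24, -41);

    cv::Mat R, Q;
    // RQDecomp3x3 also returns the rotation expressed as Euler angles.
    cv::Vec3d angles = cv::RQDecomp3x3(M, R, Q);

    std::cout << "R = " << R << "\nQ = " << Q << "\nangles = " << angles << std::endl;
    return 0;
}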
You can use a matrix library like Newmat11 http://www.robertnz.net/nm11.htm#qr
I have used the OpenCV library for stereo camera calibration and disparity map estimation. I used the tutorials available in the OpenCV 3.3.1 documentation. For example, for disparity I used the code from the following link:
https://docs.opencv.org/3.3.1/d3/d14/tutorial_ximgproc_disparity_filtering.html
It is working, but I can't find details of what is happening in the functions used in the code, such as left_matcher->compute or createDisparityWLSFilter. I want to read the theory behind these functions, so I am looking for a good link or suggestion. I came across the following book:
http://shop.oreilly.com/product/0636920044765.do
But I am not sure whether this is the right resource for reading about the details of OpenCV functions.
Any help is appreciated.
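Not a book recommendation, but for context, the pipeline from that tutorial looks roughly like the sketch below (parameter values are illustrative): left_matcher->compute computes a raw disparity map from the rectified stereo pair, and ximgproc::createDisparityWLSFilter builds a weighted-least-squares (WLS) filter that smooths that disparity in an edge-aware way, using the left image as a guide. The papers the filter is based on are cited in the ximgproc documentation for DisparityWLSFilter, which may be a more direct route to the theory than a general OpenCV book.

#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/ximgproc/disparity_filter.hpp>

int main()
{
    // Rectified left/right images (paths are placeholders).
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Block matcher for the left view, plus a consistent matcher for the right view.
    cv::Ptr<cv::StereoBM> left_matcher = cv::StereoBM::create(/*numDisparities=*/96, /*blockSize=*/9);
    cv::Ptr<cv::StereoMatcher> right_matcher = cv::ximgproc::createRightMatcher(left_matcher);

    cv::Mat left_disp, right_disp;
    left_matcher->compute(left, right, left_disp);    // raw disparity, left view
    right_matcher->compute(right, left, right_disp);  // raw disparity, right view

    // WLS filter: edge-aware smoothing of the disparity, guided by the left image.
    cv::Ptr<cv::ximgproc::DisparityWLSFilter> wls_filter =
        cv::ximgproc::createDisparityWLSFilter(left_matcher);
    wls_filter->setLambda(8000.0);
    wls_filter->setSigmaColor(1.5);

    cv::Mat filtered_disp;
    wls_filter->filter(left_disp, left, filtered_disp, right_disp);
    return 0;
}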
I'm learning ORB-SLAM and the OpenCV source code, and inside orb.cpp, which lives in the modules/features2d/src/ directory, I see a bit pattern named
bit_pattern_31_[256*4]
But I really don't know what it is used for. I have searched Google and Bing for a long time without finding an answer.
So does anyone know the usage of, or a reference for, this magic bit pattern?
Since I came across this on Google and eventually found what I think is the answer, I'll give it a shot:
bit_pattern_31_[]
is a pre-computed set of point pairs P1(x,y) and P2(x,y).
I believe it to be the set of point pairs obtained by the greedy search described in Section 4.3, Learning Good Binary Features, of the original ORB paper (ORB: an efficient alternative to SIFT or SURF).
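To make that concrete, here is an illustrative sketch (not the actual OpenCV implementation) of how a table laid out as x1, y1, x2, y2 for each of 256 point pairs inside a 31x31 patch can be turned into a 256-bit binary descriptor by intensity comparisons. In the real orb.cpp the pair coordinates are additionally rotated by the keypoint orientation before sampling.

#include <opencv2/core.hpp>
#include <bitset>

// Hypothetical helper: build a 256-bit descriptor for a patch centred at
// (cx, cy) in a smoothed grayscale image, using a pattern with the same
// x1, y1, x2, y2 layout as bit_pattern_31_.
std::bitset<256> describePatch(const cv::Mat& smoothed, int cx, int cy,
                               const int pattern[256 * 4])
{
    std::bitset<256> desc;
    for (int i = 0; i < 256; ++i)
    {
        const int* p = pattern + 4 * i;
        uchar a = smoothed.at<uchar>(cy + p[1], cx + p[0]); // first point of the pair
        uchar b = smoothed.at<uchar>(cy + p[3], cx + p[2]); // second point of the pair
        desc[i] = a < b;                                    // one bit per comparison
    }
    return desc;
}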
I have seen numerous examples and sample code for detecting emotions from a human face. I am in desperate need of an algorithm to change expressions. I am a new OpenCV learner. I am also confused about whether this image manipulation can be done using OpenCV at all. Can functions such as warpAffine() be used for this? I shall be grateful if someone can guide me through the steps to perform this, e.g. take a neutral facial expression as input and convert it to a smile.
Try using FaceAPI; it is free to use for non-commercial purposes and works brilliantly. It is well documented and easy to use.
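For what it's worth, warpAffine() on its own only applies a single global affine transform (rotation, scaling, shear, translation), which is not enough to change an expression; approaches to this typically warp the image locally around detected facial landmarks. Just to show what the function itself does, a minimal call looks roughly like this:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat face = cv::imread("face.jpg");  // placeholder path

    // A 2x3 affine matrix: here simply a small rotation about the image centre.
    cv::Point2f centre(face.cols / 2.0f, face.rows / 2.0f);
    cv::Mat M = cv::getRotationMatrix2D(centre, /*angle=*/5.0, /*scale=*/1.0);

    cv::Mat warped;
    cv::warpAffine(face, warped, M, face.size());
    cv::imwrite("warped.jpg", warped);
    return 0;
}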
I am trying to use ORB descriptors with LshMatcher for a faster matching.
I have found LSH implementations elsewhere (example: https://code.ros.org/trac/wg-ros-pkg/browser/branches/trunk_diamondback/stacks/object_recognition_experimental/rbrief/src/lsh.cpp)
But it seems it is not implemented yet in OpenCV 2.4.2.
Do you have any hint on how to include an LSH matcher within OpenCV?
I have asked the same question on the OpenCV dev forum, without a good answer.
http://answers.opencv.org/question/503/how-to-use-the-lshindexparams/
Still, I hope for some more docs. You can check it again in a few days to see whether there is a new answer.
BTW, if you try to use it with float descriptors such as SIFT or SURF, as far as I know it will not work: LSH is for binary descriptors (like ORB) only.
Edit
It seems to be a bug in OpenCV (2.4.2), as stated in the accepted answer here
http://answers.opencv.org/question/503/how-to-use-the-lshindexparams/
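For later OpenCV versions where this is fixed, the usual way to pair ORB's binary descriptors with an LSH index through FLANN looks roughly like the sketch below (the LshIndexParams values 12/20/2 are illustrative, and the image paths are placeholders):

#include <opencv2/features2d.hpp>
#include <opencv2/flann.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("img1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("img2.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;                       // CV_8U binary descriptors
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // FLANN matcher backed by an LSH index (Hamming distance on binary data).
    cv::FlannBasedMatcher matcher(
        cv::makePtr<cv::flann::LshIndexParams>(/*table_number=*/12,
                                               /*key_size=*/20,
                                               /*multi_probe_level=*/2));
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    return 0;
}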
I'm trying to cluster a really large dataset - 3030764x162 into 4000 clusters using the cvKmeans2 function in OpenCV 2.1.
I would like to see which iteration the K-means algorithm is currently in (similar to what is displayed in Matlab), but I don't see any documentation that points to how I can do this.
It's kind of frustrating seeing a blank screen and not knowing when the code is going to terminate!
Thank you.
Unfortunately, the answer is no, you cannot. There are no debugging/informative statements anywhere in the kmeans function as provided by OpenCV. However, you may edit the method and add statements as you deem appropriate.
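If editing the library is not an option, one possible workaround (shown here with the newer C++ cv::kmeans API rather than cvKmeans2, and with placeholder data) is to run a single iteration per call and feed the labels back in via KMEANS_USE_INITIAL_LABELS, printing your own progress between calls:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Placeholder data: rows are samples, columns are features (must be CV_32F).
    cv::Mat data(10000, 162, CV_32F);
    cv::randu(data, 0.0f, 1.0f);

    const int K = 400;
    const int totalIters = 20;
    cv::Mat labels, centers;

    for (int it = 0; it < totalIters; ++it)
    {
        // One k-means iteration per call; reuse the previous labels after the first call.
        int flags = (it == 0) ? cv::KMEANS_PP_CENTERS : cv::KMEANS_USE_INITIAL_LABELS;
        double compactness = cv::kmeans(
            data, K, labels,
            cv::TermCriteria(cv::TermCriteria::MAX_ITER, /*maxCount=*/1, /*epsilon=*/0.0),
            /*attempts=*/1, flags, centers);

        std::cout << "iteration " << it + 1 << "/" << totalIters
                  << ", compactness = " << compactness << std::endl;
    }
    return 0;
}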
@Sau,
Maybe you need some other way of doing it, though my answer is not specific to OpenCV.
I have not tried this in OpenCV, but I once did k-means clustering on an extremely large data set, and it was a better option than OpenCV because it ran in distributed mode. Though rather lengthy, you might still be interested: it is k-means clustering using Mahout.
Check it out.