Regarding use of cvStereoRectify - OpenCV

I am trying to use cvStereoRectify (link) to give me the Q matrix that I could then use in cvReprojectImageTo3D.
In the documentation of cvStereoRectify, though, I am unsure how to get R and T, the rotation matrix and translation vector between the two cameras. Are there any methods that can help me do this? Any guidance is appreciated.

Use cvStereoCalibrate; it estimates R and T (along with E and F) from matched calibration points seen by both cameras.
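A minimal sketch of the whole pipeline, using the C++ interface (the C functions cvStereoCalibrate / cvStereoRectify / cvReprojectImageTo3D follow the same pattern). The point lists, camera matrices and distortion coefficients are assumed to come from your own chessboard detection and single-camera calibration:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch only: objectPoints, imagePointsL/imagePointsR, KL/DL/KR/DR are assumed
// to come from findChessboardCorners and calibrateCamera runs you already have.
void rectifyExample(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                    const std::vector<std::vector<cv::Point2f> >& imagePointsL,
                    const std::vector<std::vector<cv::Point2f> >& imagePointsR,
                    cv::Mat KL, cv::Mat DL, cv::Mat KR, cv::Mat DR,
                    cv::Size imageSize, const cv::Mat& disparity)
{
    // stereoCalibrate estimates R and T: the rotation and translation that take
    // the first camera's coordinate system onto the second camera's.
    cv::Mat R, T, E, F;
    cv::stereoCalibrate(objectPoints, imagePointsL, imagePointsR,
                        KL, DL, KR, DR, imageSize, R, T, E, F);

    // stereoRectify consumes R and T and produces, among other outputs, the
    // 4x4 disparity-to-depth mapping matrix Q.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(KL, DL, KR, DR, imageSize, R, T, R1, R2, P1, P2, Q);

    // Q is exactly what reprojectImageTo3D needs.
    cv::Mat xyz;
    cv::reprojectImageTo3D(disparity, xyz, Q);
}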


Bayesian optimization in machine learning

Thanks for reading this. I am currently studying the Bayesian optimization problem and following a tutorial. Please see the attachment: bayesian optimization tutorial
On page 11, it discusses the acquisition function. Before I raise my question, I need to state my understanding of Bayesian optimization to see if there is anything wrong.
First we take some training points and assume them to follow a multivariate Gaussian distribution. Then we use an acquisition function to find the next point we want to sample. So, for example, if we use x1...x(t) as training points, we use the acquisition function to find x(t+1) and sample it. Then we treat x1...x(t), x(t+1) as a multivariate Gaussian distribution and use the acquisition function to find x(t+2) to sample, and so on.
On page 11, it seems we need to find the x that maximizes the probability of improvement. f(x+) comes from the sampled training points (x1...xt) and is easy to get. But how do I get u(x) and the variance here? I don't know what the x in the equation is. It should be x(t+1), but the paper doesn't say that. And if it is indeed x(t+1), how could I get u(x(t+1))? You may say to use the equation at the bottom of page 8, but we can only use that equation once we have found x(t+1) and put it into the multivariate Gaussian distribution. Since we don't know what the next point x(t+1) is, I have no way to calculate it, in my opinion.
I know this is a tough question. Thanks for answering!!
In fact, I have found the answer.
Indeed it is x(t+1). The direct way is to compute the mean u and the variance for every x outside the training data, plug them into the acquisition function, and find which one is the maximum.
This is time-consuming, so instead of trying candidates one by one we use a nonlinear optimizer such as DIRECT to find the x that maximizes the acquisition function.
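For reference, these are the standard equations behind this step (my own summary of the usual GP / probability-of-improvement formulation, not a quote from the tutorial; it assumes a zero-mean prior and noise-free observations):

\mu(x) = k(x)^T K^{-1} f_{1:t}
\sigma^2(x) = k(x, x) - k(x)^T K^{-1} k(x)
PI(x) = P(f(x) \ge f(x^+)) = \Phi\left( \frac{\mu(x) - f(x^+)}{\sigma(x)} \right)

Here k(x) is the vector of kernel values between the candidate x and the t training points, K is the t-by-t kernel matrix of the training points, f_{1:t} are the observed values, f(x^+) is the best value observed so far, and \Phi is the standard normal CDF. x(t+1) is whichever candidate x maximizes PI(x), found either by evaluating many candidates or by running an optimizer such as DIRECT on PI.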

optimizer.compute_gradients: how are the gradients calculated programmatically?

I'm new to machine learning. I was going through TensorFlow and I have a question about a particular function.
grads_and_vars = optimizer.compute_gradients(loss): can someone explain how the gradients are calculated programmatically (i.e. what formula it uses to compute them)?
TensorFlow uses an algorithm called reverse-mode automatic differentiation. It's too complex a topic to explain here, but the Wikipedia page is a good starting point:
https://en.wikipedia.org/wiki/Automatic_differentiation
Hope that helps!
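To make the idea concrete, here is a toy, hand-written reverse-mode pass in C++ for f(x, y) = x*y + sin(x). This only illustrates the chain-rule bookkeeping; TensorFlow builds the equivalent backward graph automatically from the registered gradient of each operation:

#include <cmath>
#include <cstdio>

int main() {
    double x = 2.0, y = 3.0;

    // Forward pass: evaluate node by node, keeping the intermediates.
    double a = x * y;        // a = x * y
    double b = std::sin(x);  // b = sin(x)
    double f = a + b;        // f = a + b

    // Reverse pass: propagate df/d(node) from the output back to the inputs.
    double df_df = 1.0;                   // seed: derivative of f w.r.t. itself
    double df_da = df_df * 1.0;           // f = a + b  =>  df/da = 1
    double df_db = df_df * 1.0;           // f = a + b  =>  df/db = 1
    double df_dx = df_da * y              // a = x * y  =>  da/dx = y
                 + df_db * std::cos(x);   // b = sin(x) =>  db/dx = cos(x)
    double df_dy = df_da * x;             // a = x * y  =>  da/dy = x

    std::printf("f = %f, df/dx = %f, df/dy = %f\n", f, df_dx, df_dy);
    return 0;                             // expect df/dx = y + cos(x), df/dy = x
}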

USAN filter in OpenCV - custom filter

I'm trying to implement the SUSAN corner detector in OpenCV. Details here.
So far I have the filtering function, but the problem is that this is not a linear operation. According to the documentation it's possible to use FilterEngine and BaseFilter to write custom filters. Unfortunately there are no details on how to implement the filtering function dst(x,y) = F(src x kernel). I'm using C++ and OpenCV 2.3.
Thanks in advance.
A nice tutorial on how to implement a custom 2D filter based on kernel convolution is here!
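Since SUSAN is not a convolution, another option is to skip FilterEngine altogether and loop over the neighbourhood yourself. A minimal sketch of the USAN-area step (square window and hard brightness threshold here for brevity; the original detector uses a circular 37-pixel mask and an exponential similarity function):

#include <opencv2/opencv.hpp>
#include <cstdlib>

// For each pixel, count how many neighbours have a similar brightness (the USAN area).
// Corner candidates are the pixels whose USAN area falls below a geometric threshold.
cv::Mat usanArea(const cv::Mat& gray, int radius = 3, int t = 27)
{
    CV_Assert(gray.type() == CV_8UC1);
    cv::Mat area = cv::Mat::zeros(gray.size(), CV_32SC1);

    for (int y = radius; y < gray.rows - radius; ++y) {
        for (int x = radius; x < gray.cols - radius; ++x) {
            int centre = gray.at<uchar>(y, x);
            int n = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx)
                    if (std::abs(gray.at<uchar>(y + dy, x + dx) - centre) <= t)
                        ++n;              // this neighbour belongs to the USAN
            area.at<int>(y, x) = n;
        }
    }
    return area;
}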

OpenCV Multilevel B-Spline Approximation

Hi (sorry for my English). I'm working on a project for university in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some points (control points) of an image to use in other operations.
I'm reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write it.
The idea is: read an image, process the image (OpenCV), then get the control points of the image and use those points.
So the problem here is:
The algorithm uses a set of points {(x,y,z)}; this set of points is approximated by a surface generated from the control points obtained by MBA. The set of points {(x,y,z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform this format into an ordinary array so that I can simply access and manipulate the data?
Here are some papers with an explanation of the method:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper) Scattered Data Interpolation with Multilevel B-splines
(Matlab) MBA
If someone can help, maybe with a guideline, an idea, or anything, it will be appreciated.
Thanks in advance.
EDIT: Finally I wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to work with matrices for the algorithm.
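For the cv::Mat-to-array step asked about above, a minimal sketch (file name and variable names are just placeholders): every pixel becomes an (x, y, z) sample with z the grey value, and the same data can be copied element by element into an Armadillo matrix:

#include <opencv2/opencv.hpp>
#include <armadillo>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.png", 0);   // 0 = load as 8-bit grayscale
    if (img.empty()) return 1;

    // Scattered-data points {(x, y, z)} for the MBA surface fit
    std::vector<cv::Point3d> points;
    points.reserve(img.rows * img.cols);
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x)
            points.push_back(cv::Point3d(x, y, img.at<uchar>(y, x)));

    // The same data as an Armadillo matrix, copied element by element
    arma::mat A(img.rows, img.cols);
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x)
            A(y, x) = img.at<uchar>(y, x);

    return 0;
}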

How to compute SVD using CImg (or maybe the OpenCV or Eigen library)?

Can anyone give me a quick guide on how to use CImg to compute the SVD of a 3-dimensional array?
I just want to get the decomposition of the array in order to compress it and speed up further processing.
What values should I input where, and how do I get the output?
I've searched around and still can't understand how it works, and I don't really fully understand how SVD works either; I only know that it can be used to compress a matrix.
At the same time, I found that the OpenCV and Eigen libraries can also do the job, so do let me know their steps if they are much easier.
(An alternative for me instead of SVD is PCA; I found its source/library but also don't know how to use it.)
Thanks!
See http://cimg.sourceforge.net/reference/structcimg__library_1_1CImg.html#a9a79f3a0849388b3ec13bd140b67a12e
CImg<float> A(3,3);                  // A = U'*S*V
A.rand(0,1);                         // fill A with random values in [0,1]
CImgList<float> USV = A.get_SVD();   // USV[0] = U, USV[1] = S, USV[2] = V
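If you go the OpenCV route instead, the equivalent looks roughly like this (SVD is defined for 2-D matrices, so a 3-dimensional array would first have to be unfolded/reshaped into a matrix or decomposed slice by slice):

#include <opencv2/opencv.hpp>

cv::Mat A(3, 3, CV_32F);
cv::randu(A, cv::Scalar(0), cv::Scalar(1));   // fill with random values, like the CImg example
cv::SVD svd(A);                               // svd.u, svd.w (singular values), svd.vt (V transposed)

// Low-rank compression: keep only the k largest singular values
int k = 2;
cv::Mat Ak = svd.u.colRange(0, k)
           * cv::Mat::diag(svd.w.rowRange(0, k))
           * svd.vt.rowRange(0, k);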
