Very bad disparity map using StereoSGBM - OpenCV

I'm using OpenCV's StereoSGBM to generate a disparity map from a pair of satellite images,
but the result is very bad even when I change the parameters. What should I do?

I developed a tool similar to the one given in Robst's answer, but a little more convenient and cross-platform. You can find it here.

An SGBM parameter tuner based on vmarquet's great Qt GUI for BM:
https://github.com/hitimo/opencv-disparity-map-tuner
It might be helpful for finding the correct parameter set.

You could try StereoBM to see if you get better results. The tool developed by Martin Peris, found here, lets you adjust the parameters through a GUI, which makes tuning much easier; you could do something similar for StereoSGBM. Also make sure your images are properly rectified.
Regards
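For reference, here is a minimal sketch of how StereoSGBM is typically set up in the Python bindings of later OpenCV versions (3.x+), assuming the pair is already rectified. The file names are placeholders and the parameter values are only the usual starting points (numDisparities divisible by 16, P2 larger than P1), not values tuned for satellite imagery:

import cv2
import numpy as np

# Load an already rectified stereo pair as grayscale.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
num_disp = 128                              # must be divisible by 16

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=num_disp,
    blockSize=block_size,
    P1=8 * block_size * block_size,         # smoothness penalty for +/-1 disparity steps
    P2=32 * block_size * block_size,        # penalty for larger jumps (keep P2 > P1)
    disp12MaxDiff=1,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns 16-bit fixed-point disparities scaled by 16.
disp = sgbm.compute(left, right).astype(np.float32) / 16.0
vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)

If even a roughly tuned SGBM looks wrong everywhere in the image, bad rectification is the usual culprit, as the answer above notes.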

Related

bit_pattern_31_[256*4] in OpenCV ORB feature detector

I'm learning ORB-SLAM and the OpenCV source code, and inside orb.cpp, which lives in the modules/features2d/src/ directory, I see a bit pattern named
bit_pattern_31_[256*4]
but I really don't know what it is used for. I have searched Google and Bing for a long time without finding an answer.
Does anyone know the usage of, or a reference for, this magic bit pattern?
Since I came across this on Google and eventually found what I think is the answer, I'll give it a shot:
bit_pattern_31_[]
is a pre-computed set of point pairs P1(x, y) and P2(x, y): 256 pairs, hence the 256*4 coordinates.
I believe it to be the set of points obtained by the greedy search described in section 4.3, Learning Good Binary Features, of the original ORB paper (ORB: an efficient alternative to SIFT or SURF).
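To illustrate what that means, here is a hedged sketch using a random pattern rather than the learned one from orb.cpp: each row of the pattern holds one pair of (x, y) offsets inside the 31x31 patch around a keypoint, and each descriptor bit is an intensity comparison between the two sampled points.

import numpy as np

# Stand-in for bit_pattern_31_: 256 pairs of (x, y) offsets in a 31x31 patch.
# The real pattern in orb.cpp was learned offline; here it is just random.
rng = np.random.default_rng(0)
pattern = rng.integers(-15, 16, size=(256, 4))   # columns: x1, y1, x2, y2

def describe(patch, pattern):
    # Each bit of the descriptor is one intensity comparison between two points.
    cx = cy = 15                                  # centre of the 31x31 patch
    bits = [1 if patch[cy + y1, cx + x1] < patch[cy + y2, cx + x2] else 0
            for x1, y1, x2, y2 in pattern]
    return np.packbits(bits)                      # 32 bytes, like one ORB descriptor row

patch = rng.integers(0, 256, size=(31, 31), dtype=np.uint8)
print(describe(patch, pattern))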

Face Expression Change

I have seen numerous examples and sample code for detecting emotions from a human face. I am in desperate need of an algorithm to change expressions. I am a new OpenCV learner, and I am also unsure whether this kind of image manipulation can be done using OpenCV at all. Can functions such as warpAffine() be used for this? I shall be grateful if someone can guide me through the steps to perform this, e.g. take a neutral facial expression as input and convert it to a smile.
Try using FaceAPI; it is free to use for non-commercial purposes and works brilliantly. It is well documented and easy to use.
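Regarding the warpAffine() part of the question: a single affine transform can only translate, rotate, scale or shear a region, so by itself it will not turn a neutral face into a smile, but it is a building block of landmark-driven warping (move the mouth-corner landmarks, warp the surrounding region to follow). A minimal, hedged sketch of that building block; the toy image and the coordinates are made-up placeholders, and in real use they would come from a face landmark detector:

import numpy as np
import cv2

# Toy image with a stand-in "mouth" region.
img = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.ellipse(img, (100, 140), (30, 10), 0, 0, 360, (0, 0, 255), -1)

# Three source landmarks (mouth corners, lower lip) and their target positions;
# raising the corners is the crude idea behind a "neutral -> smile" warp.
src = np.float32([[70, 140], [130, 140], [100, 155]])
dst = np.float32([[70, 130], [130, 130], [100, 155]])

M = cv2.getAffineTransform(src, dst)          # 2x3 affine matrix
warped = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
cv2.imwrite("warped.png", warped)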

LshMatcher with OpenCV?

I am trying to use ORB descriptors with LshMatcher for faster matching.
I have found LSH implementations elsewhere (for example: https://code.ros.org/trac/wg-ros-pkg/browser/branches/trunk_diamondback/stacks/object_recognition_experimental/rbrief/src/lsh.cpp), but it seems this is not implemented yet in OpenCV 2.4.2.
Do you have any hint on how to include LshMatcher within OpenCV?
I have asked the same question on the OpenCV Q&A forum, without a good answer so far.
http://answers.opencv.org/question/503/how-to-use-the-lshindexparams/
I am still hoping for more documentation; you can check again in a few days to see whether there is a new answer.
BTW, LSH works for binary descriptors only, so as far as I know it will not work with float descriptors such as SIFT or SURF; ORB descriptors are binary, which is exactly the case LSH is meant for.
Edit
It seems to be a bug in OpenCV (2.4.2), as stated in the accepted answer here
http://answers.opencv.org/question/503/how-to-use-the-lshindexparams/
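For reference, in later OpenCV versions the usual way to get LSH-based matching for ORB from Python is FlannBasedMatcher with FLANN_INDEX_LSH index parameters. A minimal sketch, with placeholder image names and only rule-of-thumb LSH parameters:

import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,        # number of hash tables
                    key_size=12,           # bits per key
                    multi_probe_level=1)   # 0 = plain LSH, 1/2 = multi-probe
search_params = dict(checks=50)

matcher = cv2.FlannBasedMatcher(index_params, search_params)
matches = matcher.knnMatch(des1, des2, k=2)

# Ratio test; with LSH, knnMatch can return fewer than 2 neighbours per query.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(len(good), "good matches")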

Is it possible to see the current iteration number in OpenCV's cvKmeans2?

I'm trying to cluster a really large dataset, 3030764 x 162, into 4000 clusters using the cvKmeans2 function in OpenCV 2.1.
I would like to see which iteration the k-means algorithm is currently in (similar to what is displayed in MATLAB), but I don't see any documentation that points to how I can do this.
It's kind of frustrating to stare at a blank screen and not know when the code is going to terminate!
Thank you.
Unfortunately, the answer is no, you cannot. There are no debugging/informative statements anywhere in the k-means function as provided by OpenCV. However, you may edit the method and add such statements yourself as you deem appropriate.
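One possible workaround, shown here as a sketch with the modern cv2.kmeans Python binding rather than the old cvKmeans2 C API: drive the iterations yourself by running k-means one iteration at a time, feeding the labels back in with KMEANS_USE_INITIAL_LABELS, and printing your own progress between calls. This is not identical to one uninterrupted run, but it makes the progress visible. The data here is a random stand-in:

import numpy as np
import cv2

data = np.random.rand(100000, 162).astype(np.float32)   # stand-in for the real dataset
K = 400
one_iter = (cv2.TERM_CRITERIA_MAX_ITER, 1, 0.0)          # stop after exactly one iteration

# First pass: random initialisation, single iteration.
compactness, labels, centers = cv2.kmeans(
    data, K, None, one_iter, 1, cv2.KMEANS_RANDOM_CENTERS)
print("iteration 1, compactness %.3e" % compactness)

for it in range(2, 51):
    # Continue from the previous labels for one more iteration.
    compactness, labels, centers = cv2.kmeans(
        data, K, labels, one_iter, 1, cv2.KMEANS_USE_INITIAL_LABELS)
    print("iteration %d, compactness %.3e" % (it, compactness))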
@Sau,
maybe you need some other way of doing it, although my answer is not specific to OpenCV.
I have not tried this in OpenCV, but I once did k-means clustering on an extremely large dataset, and it turned out to be a better option than OpenCV because it ran in a distributed mode. It is rather lengthy, but you might still be interested: k-means clustering using Mahout.
Check it out.

Computer Vision with Mathematica

Does anybody here do computer vision work in Mathematica? I would like to know what external libraries are available for that. The built-in image processing functions are not enough; I am looking for things like SURF, stereo, camera calibration, multi-view geometry, etc.
How difficult would it be to wrap OpenCV for use in Mathematica?
Apart from the extensive set of image processing tools that are now (version 8) natively present in Mathematica, which include a number of CV algorithms such as finding morphological objects, image segmentation and feature detection, there is the new LibraryLink functionality, which makes working with DLLs very easy. You wouldn't have to change OpenCV much to be able to call it from Mathematica: just some wrappers for the functions to be called and you're basically done.
I don't think such a thing exists, but I'm getting started.
Mathematica has the advantage that you can work analytically: for example, rather than hacking away endlessly in OpenCV or even MATLAB, you can derive a quantity analytically and see that the method leading to a particular matrix is numerically unstable as a function of the input variables. Then you don't need to hack at it at all, because it would be pointless.
As for wrapping OpenCV, that doesn't seem to make much sense. The better procedure would be to fix bad implementations in OpenCV based on your analysis in Mathematica and on paper.
Agreeing with Peter, I don't believe that forcing Mathematica to use OpenCV is a great idea.
All of the computer vision people I've talked to, read about, or seen examples from are using MATLAB and the Image Processing Toolbox. It's either that, or an OpenCV-compatible language plus OpenCV.
Mathematica has a rich set of tools for image processing, but I'm uncertain about its computer vision capabilities.
