OpenCV: Building a simple 3D model

I have decided to use OpenCV to build a 3D scene from a series of 2D images. I found the example code that ships with OpenCV (build3dmodel.cpp).
I just want to run it once and see what kind of output it gives. My knowledge of OpenCV is limited; I don't want to understand the whole code, I just want to know how to feed this program its inputs (the image set) so I can see the output.

The command line of this code example requires the following parameters:
build3dmodel -i intrinsics_filename.yml [-d detector] [-de descriptor_extractor] -m model_name.yml
The first file is the camera matrix, which you obtain from the calibration process (there is a specific example for it). The detector and descriptor extractor must be valid FeatureDetector and DescriptorExtractor names. The model name is a bit confusing; it looks like it becomes part of the .yml file name where the data will be saved.
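For example, after running the calibration sample to produce the intrinsics file, an invocation might look like the following (the file names are placeholders, and "SURF" stands in for any valid detector/extractor name):
build3dmodel -i camera_intrinsics.yml -d SURF -de SURF -m model_name.yml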

First, see a tutorial such as Introduction to OpenCV or the OpenCV tutorial. Also see Input and Output with OpenCV.

Related

Epipolar Geometry: Not Visually Sane Output in OpenCV

I've tried using the code given at https://docs.opencv.org/3.2.0/da/de9/tutorial_py_epipolar_geometry.html to find the epipolar lines, but instead of the output shown in the link, I get the following output.
But when I change the line F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS) to
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_RANSAC), i.e. using the RANSAC algorithm instead of LMedS to find the fundamental matrix, this is the output.
When the same line is replaced with F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_8POINT), i.e. the eight-point algorithm, this is the output.
None of the above outputs is visually sane, nor anywhere close to the output given in the OpenCV documentation for finding epipolar lines. But oddly, if the same code is executed while switching the fundamental-matrix algorithm through this particular sequence:
FM_LMEDS
FM_8POINT
FM_7POINT
FM_LMEDS
the most accurate results are generated. This is the output.
I thought we were supposed to get the above output in a single run of any of the algorithms (allowing for variation in the matrix values and error). Am I running the code incorrectly? What do I have to do to get correct (i.e. visually sane) epipolar lines? I am using OpenCV 3.3.0 and Python 2.7.
Looking forward to a reply.
Thank you.
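One thing worth double-checking is that only the correspondences flagged as inliers in the returned mask are used for drawing, as the tutorial does with mask.ravel(). For reference, a minimal C++ sketch of that estimate-and-filter step (pts1/pts2 are assumed to come from ratio-test-filtered matches, as in the tutorial):
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate F and keep only the correspondences that RANSAC marks as
// inliers; drawing epilines for outliers usually produces garbage.
cv::Mat estimateF(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  std::vector<cv::Point2f>& inliers1,
                  std::vector<cv::Point2f>& inliers2)
{
    cv::Mat mask;
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC,
                                       3.0, 0.99, mask);
    for (int i = 0; i < mask.rows; ++i) {
        if (mask.at<uchar>(i)) {
            inliers1.push_back(pts1[i]);
            inliers2.push_back(pts2[i]);
        }
    }
    return F;
}

// Epilines in image 1 for the inlier points of image 2 would then be:
//   std::vector<cv::Vec3f> lines1;
//   cv::computeCorrespondEpilines(inliers2, 2, F, lines1);
Note that FM_8POINT does no outlier rejection at all, so a handful of bad matches can skew F badly, and RANSAC/LMedS are only as good as the match set they start from.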

OpenCV + ARToolkit

For school, I have to do an augmented reality project. ARToolKit is good for tracking markers, but my problem is that my procamcalib calibration can't be used by ARToolKit (procamcalib produces distortion coefficients, while ARToolKit expects a distortion factor).
I see that with OpenCV I can calibrate my PS Eye and apply the undistortion directly.
So my question is: can I grab the PS Eye image, undistort it, and then hand it to ARToolKit to get my markers' positions?
Thanks
(Sorry for my English, I'm a French student; if anything is hard to read, I can explain again.)
It might be a bit of work to decouple the video code, but in the end you can use just:
arDetectMarker(dataPtr, thresh, &marker_info, &marker_num)
with pixels from anywhere (e.g. an undistorted OpenCV Mat from your PS Eye).
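A minimal sketch of that pipeline (ARToolKit initialization such as arInitCparam is omitted; the calibration file name, YAML keys, and threshold are placeholders, and ARToolKit's expected pixel format depends on how it was built):
#include <AR/ar.h>
#include <opencv2/opencv.hpp>

int main()
{
    // Intrinsics from an OpenCV calibration (placeholder file/key names).
    cv::FileStorage fs("ps_eye_calib.yml", cv::FileStorage::READ);
    cv::Mat K, dist;
    fs["camera_matrix"] >> K;
    fs["distortion_coefficients"] >> dist;

    cv::VideoCapture cap(0); // the PS Eye
    cv::Mat frame, undistorted;
    while (cap.read(frame)) {
        cv::undistort(frame, undistorted, K, dist);
        // ARToolKit's expected pixel layout depends on its build
        // (AR_DEFAULT_PIXEL_FORMAT); convert here if it is not BGR.
        ARMarkerInfo* marker_info;
        int marker_num;
        if (arDetectMarker(undistorted.data, 100 /* thresh */,
                           &marker_info, &marker_num) < 0)
            break;
        // marker_info[0..marker_num-1] now hold the detected markers.
    }
    return 0;
}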
I'm not entirely sure I understood your question, but you could run the example calibration program that comes with ARToolKit. More information can be found here: Calibrating Your Camera
You will then get the calibration result "camera_para.dat" in ARToolKit's bin/Data directory, which can be used later in your project.
If by any chance you are using Unity for your AR project (if not, ignore this), simply import the ARToolKit package, give your .dat file a unique name in the AR Controller inspector, and select it in the "Camera Parameters" option.

OpenCV - training new LatentSVMDetector Models

I haven't found any way to train new latent SVM detector models using OpenCV. I'm currently using the existing models provided in the XML files, but I would like to train my own.
Is there any method for doing so?
Thank you,
Gil.
As of now, only DPM detection is implemented in OpenCV, not training.
If you want to train your own models, the most reliable approach is to use Felzenszwalb's and Girshick's MATLAB code (most of the heavy lifting is implemented in C): http://www.cs.berkeley.edu/~rbg/latent/ and http://www.rossgirshick.info/latent/. It is reliable and works reasonably fast.
If you want to do it in C only, there is an implementation here (http://libccv.org/doc/doc-dpm/), though I haven't tried it myself.
I think there is a function in the Octave version of the author's code (Octave Version of DPM). It is in step #5:
mat2opencvxml('./INRIA/inriaperson_final.mat', 'inriaperson_cascade_cv.xml');
I will try it and let you know about the result.
EDIT
I tried converting the .mat file from the Octave version I mentioned before to an .xml file and compared the result with the built-in OpenCV .xml model; the structure of the two XMLs was different (tags, number of components, ...). It seems this version of the Octave DPM generates XML files for a later OpenCV version (I am using 2.4).
VOC-release3.1 is the one that matches OpenCV 2.4.14. I converted the already-trained model from this version using the mat2xml function available in OpenCV, and the resulting XML file loads successfully and works with OpenCV. Here are some helpful links, followed by a loading sketch:
mat2xml code
VOC-release-3.1
How To Train DPM on a New Object
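As mentioned above, a quick way to verify a converted model is to load and run it with OpenCV 2.4's LatentSvmDetector. A minimal sketch (file names are placeholders):
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>
#include <vector>

int main()
{
    // Model converted with mat2xml (placeholder file name).
    std::vector<std::string> models(1, "converted_model.xml");
    cv::LatentSvmDetector detector(models);
    if (detector.empty())
        return 1; // the XML did not load; the conversion is incompatible

    cv::Mat image = cv::imread("test.jpg");
    std::vector<cv::LatentSvmDetector::ObjectDetection> detections;
    detector.detect(image, detections, 0.5f); // overlap threshold
    for (size_t i = 0; i < detections.size(); ++i)
        cv::rectangle(image, detections[i].rect, cv::Scalar(0, 255, 0));
    return 0;
}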

Fine tuning image stitching in OpenCV

(A newbie in computer vision)
The goal is to reconstitute a game level using image stitching or any other method. A playthrough of the level is video-recorded, and those frames are the input.
Expected result: (level 4-4 of SMB from http://www.vgmaps.com/)
This is my first attempt at addressing this problem, using OpenCV (EmguCV). So far the results are excellent, but I was wondering whether there are more appropriate techniques, given that my input will be strictly 2D?
I am open to try another framework/technique providing it's not overly complex.
Here are the source images:
Result of the first 7 images:
(for some reason, the Stitcher in OpenCV did not accept all 10 at once ...)
Result of the last 3 images:
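Regarding more appropriate techniques for strictly 2D input: since OpenCV 3.2, the Stitcher has a SCANS mode that estimates affine transforms instead of rotations on a sphere, which should suit flat game frames better than the default panorama mode (EmguCV exposes the same Stitcher class). A minimal C++ sketch, with frame loading omitted:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Mat> frames; // the recorded level frames (loading omitted)

    // SCANS mode (OpenCV 3.2+) estimates affine transforms, which fits
    // flat, strictly 2D input better than spherical panorama warping.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);
    cv::Mat levelMap;
    cv::Stitcher::Status status = stitcher->stitch(frames, levelMap);
    if (status != cv::Stitcher::OK)
        return 1; // stitching fails if overlap or features are too weak
    cv::imwrite("level.png", levelMap);
    return 0;
}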

OpenCV Multilevel B-Spline Approximation

Hi (sorry for my English). I'm working on a university project in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to obtain some points (control points) of an image for use in other operations.
I've been reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write it.
The idea is: read an image, process it (OpenCV), then obtain the control points of the image and use them.
So the problem here is:
The algorithm uses a set of points {(x, y, z)}; this set of points is approximated by a surface generated from the control points obtained from MBA. The set of points {(x, y, z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform it into an ordinary array so I can simply access and manipulate the data?
Here are two papers with an explanation of the method, and a MATLAB implementation:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper)Scattered Data Interpolation with Multilevel B-splines
(Matlab)MBA
If someone can help, a guideline, an idea, or anything at all would be appreciated.
Thanks in advance.
EDIT: I finally wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to handle the matrices for the algorithm.
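For the cv::Mat question above, a minimal sketch of building the {(x, y, z)} sample set as an Armadillo matrix, assuming a grayscale image whose intensities serve as z:
#include <armadillo>
#include <opencv2/opencv.hpp>

// Build the scattered-data set {(x, y, z)} from a grayscale image,
// taking z to be the pixel intensity (an assumption; adapt as needed).
arma::mat imageToPoints(const cv::Mat& img)
{
    CV_Assert(img.type() == CV_8UC1);
    arma::mat pts(img.rows * img.cols, 3);
    int k = 0;
    for (int y = 0; y < img.rows; ++y) {
        const uchar* row = img.ptr<uchar>(y); // contiguous row access
        for (int x = 0; x < img.cols; ++x, ++k) {
            pts(k, 0) = x;
            pts(k, 1) = y;
            pts(k, 2) = row[x];
        }
    }
    return pts; // rows are the (x, y, z) samples to feed the MBA fit
}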
