Say I plan to use OpenCV for 3D reconstruction using a stereo approach... and I do not have a special stereo camera, only webcams.
1.) How do I build a cheap stereo setup using a set of webcams?
2.) Is it possible to snap two images using webcams and convert them into a stereo pair using the OpenCV API?
I will use the stereo algorithm from the link below
Stereo vision with OpenCV
Using this approach I want to create a detailed mapping of an indoor environment.
(I would prefer not to use projects like Insight3D, which cannot be used for commercial purposes without distributing the source code.)
You can find a lot of resources here, including tutorials and stereo vision cameras.
Firstly, make sure your webcams do not have any built-in autofocus; the cameras should have a fixed focal length.
1) Align the cameras in the canonical configuration and experiment with the baseline distance, then calibrate them using OpenCV's stereo_calib.cpp program. Usually the baseline will be 20-60 cm; for some webcams even 10 cm gives better results. If the RMS error and the reprojection error are both below 0.5, you can consider the stereo setup ready (a minimal sketch of the calibration call and error check appears at the end of this answer).
2) Yes, it is possible to capture stereo images from the setup I just described. Check out this link for capturing images from cameras.
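For reference, here is a minimal sketch of grabbing a roughly synchronized left/right pair from two webcams with OpenCV; the device indices 0 and 1 and the output filenames are assumptions you would adjust for your machine.

```cpp
// Minimal sketch: capture a left/right image pair from two webcams.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Device indices 0 and 1 are assumptions; adjust for your setup.
    cv::VideoCapture capLeft(0), capRight(1);
    if (!capLeft.isOpened() || !capRight.isOpened()) {
        std::cerr << "Could not open both cameras" << std::endl;
        return 1;
    }

    // grab() on both devices first, then retrieve(), so the two frames
    // are as close in time as cheap webcams allow.
    cv::Mat left, right;
    capLeft.grab();
    capRight.grab();
    capLeft.retrieve(left);
    capRight.retrieve(right);

    cv::imwrite("left.png", left);
    cv::imwrite("right.png", right);
    return 0;
}
```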
OpenCV provides solid algorithms with which you can do wonders in 3D vision.
Stereo is better suited to indoor environments, as it is very sensitive to lighting variations.
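For the error check mentioned in point 1, here is a minimal sketch of what the calibration step in stereo_calib.cpp essentially does; the filenames, the 9x6 board, and the square size are assumptions you would replace with your own.

```cpp
// Sketch: stereo calibration from chessboard image pairs and RMS check.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);     // inner corners (assumed)
    const float squareSize = 0.025f;    // square size in metres (assumed)
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> pointsLeft, pointsRight;
    cv::Size imageSize;

    // Assumed filenames: left01.png/right01.png, left02.png/right02.png, ...
    for (int i = 1; i <= 20; ++i) {
        cv::Mat l = cv::imread(cv::format("left%02d.png", i), cv::IMREAD_GRAYSCALE);
        cv::Mat r = cv::imread(cv::format("right%02d.png", i), cv::IMREAD_GRAYSCALE);
        if (l.empty() || r.empty())
            continue;
        imageSize = l.size();

        std::vector<cv::Point2f> cl, cr;
        if (!cv::findChessboardCorners(l, boardSize, cl) ||
            !cv::findChessboardCorners(r, boardSize, cr))
            continue;

        std::vector<cv::Point3f> obj;
        for (int y = 0; y < boardSize.height; ++y)
            for (int x = 0; x < boardSize.width; ++x)
                obj.emplace_back(x * squareSize, y * squareSize, 0.f);

        objectPoints.push_back(obj);
        pointsLeft.push_back(cl);
        pointsRight.push_back(cr);
    }
    if (objectPoints.size() < 5) {
        std::cerr << "not enough usable chessboard pairs" << std::endl;
        return 1;
    }

    // Flags = 0 lets stereoCalibrate estimate intrinsics and extrinsics together.
    cv::Mat K1, D1, K2, D2, R, T, E, F;
    double rms = cv::stereoCalibrate(objectPoints, pointsLeft, pointsRight,
                                     K1, D1, K2, D2, imageSize, R, T, E, F, 0);
    std::cout << "stereo RMS reprojection error: " << rms << std::endl;
    // Rule of thumb from point 1: below ~0.5 the rig can be considered ready.
    return 0;
}
```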
Related
OpenCV stereo calibration with FOV 120°, getting "bad" RMS
I'm currently working on a project in which I have to calibrate a pair of cameras to rectify the images for a stream.
For the calibration I'm using a chessboard pattern.
I'm doing the calibration by following these steps:
I calibrate the master and slave cameras independently with 30 images and save the intrinsics and extrinsics.
Then I do the stereo calibration on both cameras using 10 images, feeding in the intrinsics found by the mono calibration.
Now the results I get are not really satisfying. The RMS from the mono calibration is in the range 1.2 to 5, and the RMS from the stereo calibration is in the range 1.7 to 3.
What confuses me is that OpenCV has a namespace for fisheye calibration, but I'm not sure whether I should use it, because my cameras are not fisheye but wide-angle.
I tried setting a lot of different flags in the cv::calibrateCamera and cv::stereoCalibrate functions, but that did not lead to a satisfying output.
Does anyone have experience in this field? I'd be happy to share some code, but I would need to know which part of it would be interesting, because I implemented it in a custom GStreamer plugin, which makes it difficult to just share the whole thing.
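For reference, here is a rough sketch of the two calibration routes one would typically compare for a ~120° lens: the standard pinhole model with the rational distortion flag, and the cv::fisheye model. The helper function and its inputs (corner lists, image size) are hypothetical placeholders for whatever the GStreamer plugin already collects.

```cpp
// Sketch: compare a pinhole+rational calibration with a fisheye calibration
// for a wide-angle lens. Inputs are assumed to be filled elsewhere.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void calibrateWideAngle(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                        const std::vector<std::vector<cv::Point2f>>& imagePoints,
                        cv::Size imageSize)
{
    // Route 1: pinhole model with the 8-parameter rational distortion model,
    // which often copes better with wide-angle lenses than the default 5 terms.
    cv::Mat K, D;
    std::vector<cv::Mat> rvecs, tvecs;
    double rmsPinhole = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                            K, D, rvecs, tvecs,
                                            cv::CALIB_RATIONAL_MODEL);

    // Route 2: the equidistant model from the cv::fisheye namespace; worth
    // trying even for lenses that are "only" wide-angle rather than fisheye.
    cv::Mat Kf, Df;
    std::vector<cv::Mat> rvecsF, tvecsF;
    double rmsFisheye = cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                               Kf, Df, rvecsF, tvecsF,
                                               cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC |
                                               cv::fisheye::CALIB_FIX_SKEW);

    std::cout << "pinhole+rational RMS: " << rmsPinhole
              << ", fisheye RMS: " << rmsFisheye << std::endl;
}
```

Whichever model gives the lower, stable RMS on your data is usually the one to feed into the corresponding stereo step (cv::stereoCalibrate or cv::fisheye::stereoCalibrate).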
I want to verify an algorithm for camera calibration, but using pictures taken by myself is not convincing. Are there any canonical image libraries for camera calibration simulation?
Look at the page "A Flexible New Technique for Camera Calibration" by Zhengyou Zhang; in the section "Experimental data and result for camera calibration" you will find five calibration images, five sets of image coordinates (for example https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/imagepointsone.txt) and the result of the calibration here: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/completecalibration.txt
I do not know whether it is canonical or not; for sure Zhengyou Zhang is well known for his work on camera calibration, and his article:
ZHANG, Zhengyou. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
is highly cited.
You may also have a look at the Camera Calibration Toolbox for Matlab by Jean-Yves Bouguet; his code is the basis for the OpenCV algorithms, but I do not know whether it comes with images for accuracy and correctness testing.
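If you want ground truth rather than just reference images, another common sanity check is to synthesize the data yourself: project a known planar grid with a known camera using cv::projectPoints, run the calibration on the projections, and compare the recovered intrinsics with the ones you started from. A minimal sketch follows; the focal length, image size, poses, and grid layout are arbitrary assumptions.

```cpp
// Sketch: verify a calibration routine against synthetic ground truth.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Ground-truth camera (values are arbitrary assumptions).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);   // no distortion, for simplicity
    cv::Size imageSize(640, 480);

    // A planar 9x6 "chessboard" of 3D points (3 cm squares).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < 6; ++y)
        for (int x = 0; x < 9; ++x)
            board.emplace_back(0.03f * x, 0.03f * y, 0.f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // Project the board from a few known poses.
    for (int i = 0; i < 5; ++i) {
        cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.1 * i, -0.1 * i, 0.05 * i);
        cv::Mat tvec = (cv::Mat_<double>(3, 1) << -0.1, -0.08, 0.5 + 0.1 * i);
        std::vector<cv::Point2f> proj;
        cv::projectPoints(board, rvec, tvec, K, dist, proj);
        objectPoints.push_back(board);
        imagePoints.push_back(proj);
    }

    // Recover the intrinsics and compare with the ground truth.
    cv::Mat Kest, distEst;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     Kest, distEst, rvecs, tvecs);
    std::cout << "RMS: " << rms << "\nestimated K:\n" << Kest
              << "\nground-truth K:\n" << K << std::endl;
    return 0;
}
```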
I have a stationary mono camera which captures single image frames at some fps.
Assuming the camera is not allowed to move, how do I generate a stereo image pair from a single captured frame? Do any algorithms exist for this? If so, are they available in OpenCV?
To get a stereo image, you need a stereo camera, i.e. a camera with two calibrated lenses. So you cannot get a stereo image from a single camera with traditional techniques.
However, with the magic of deep learning, you can obtain a depth image from a single camera.
And no, there's no built-in OpenCV function to do that.
The most common use of this kind of techniques is in 3D TVs, which often offer 2D-to-3D conversion, and thus mono to stereo conversion.
Various algorithms are used for this; you can look at this state-of-the-art report.
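If you want to experiment with the deep-learning route from within OpenCV itself, the dnn module can run a pretrained monocular-depth network. The model file name, input size, and scaling below are assumptions that depend on the specific network you download, so treat this purely as a sketch.

```cpp
// Sketch: monocular depth estimation via cv::dnn (model file is a placeholder).
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // "midas_small.onnx" stands in for whatever depth model you actually use;
    // the 256x256 input and 1/255 scaling are assumptions tied to that model.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("midas_small.onnx");
    cv::Mat img = cv::imread("frame.png");
    if (img.empty()) { std::cerr << "no input image" << std::endl; return 1; }

    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(256, 256),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);
    cv::Mat out = net.forward();   // relative (inverse) depth, e.g. 1 x H x W

    // Wrap the output buffer as an H x W float map (assumes a single-channel map).
    int H = out.size[out.dims - 2], W = out.size[out.dims - 1];
    cv::Mat depth = cv::Mat(H, W, CV_32F, out.ptr<float>()).clone();

    cv::normalize(depth, depth, 0, 255, cv::NORM_MINMAX);
    depth.convertTo(depth, CV_8U);
    cv::imwrite("depth.png", depth);
    return 0;
}
```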
There is also an optical way to do this.
If you can add binocular prisms/mirrors in front of your camera lens, then you can obtain a real stereoscopic image from a single camera. That of course requires access to the camera and setting up the optics, and it also introduces some problems like incorrect auto-focusing, the need for image calibration, etc.
You can also merge red/cyan filtered images together to keep the camera's full resolution.
Here is a publication which might be helpful: Stereo Panorama with a Single Camera.
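To illustrate the red/cyan merge, here is a minimal sketch that composes a left/right pair into an anaglyph by taking the red channel from the left view and the green/blue channels from the right; the filenames are assumptions.

```cpp
// Sketch: build a red/cyan anaglyph from a left/right image pair.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat left = cv::imread("left.png");
    cv::Mat right = cv::imread("right.png");
    if (left.empty() || right.empty() || left.size() != right.size()) {
        std::cerr << "need two equally sized input images" << std::endl;
        return 1;
    }

    std::vector<cv::Mat> l, r;
    cv::split(left, l);    // BGR order: l[0]=B, l[1]=G, l[2]=R
    cv::split(right, r);

    // Red channel from the left eye, green and blue from the right eye.
    std::vector<cv::Mat> channels = { r[0], r[1], l[2] };
    cv::Mat anaglyph;
    cv::merge(channels, anaglyph);
    cv::imwrite("anaglyph.png", anaglyph);
    return 0;
}
```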
You might also want to have a look at the OpenCV camera calibration module, and at this page.
I need to code a fire detector using OpenCV, and I've been googling for days about what to use but failed. Everything I find on Google is about Haar cascades detecting rigid objects, especially faces.
What is the best ML approach to detect fire? I have to use an ML algorithm, which means no Haar or Viola-Jones cascades.
Any suggestions for this? If possible, can you explain why that particular algorithm is applicable to detecting fire?
It is better to treat this as a machine vision problem rather than a computer vision problem. Instead of using an RGB camera, it is better to go for an RGB-IR camera.
Infrared cameras are sensitive to the heat content of a scene. With IR cameras, simple algorithms or mere thresholding can detect fire in the scene, at least in a dark environment.
Cheap RGB-IR cameras are available online, such as the Raspberry Pi's Pi NoIR camera, or you can convert your own camera to RGB-IR by removing the IR-blocking filter.
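As a concrete starting point for the thresholding idea, here is a minimal sketch that thresholds an IR-sensitive frame and keeps only sufficiently large bright blobs; the threshold value and the minimum blob area are assumptions you would tune for your camera and scene.

```cpp
// Sketch: naive fire-candidate detection by thresholding an IR-sensitive frame.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Frame from an RGB-IR camera (e.g. a Pi NoIR); the filename is an assumption.
    cv::Mat frame = cv::imread("ir_frame.png");
    if (frame.empty()) { std::cerr << "no input frame" << std::endl; return 1; }

    cv::Mat gray, mask;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Hot, bright regions saturate the IR-sensitive sensor; 220 is an assumed
    // threshold to tune for your exposure and environment.
    cv::threshold(gray, mask, 220, 255, cv::THRESH_BINARY);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
        if (cv::contourArea(c) > 500) {              // assumed minimum blob area
            cv::Rect box = cv::boundingRect(c);
            cv::rectangle(frame, box, cv::Scalar(0, 0, 255), 2);
            std::cout << "possible fire region at " << box << std::endl;
        }
    }
    cv::imwrite("fire_candidates.png", frame);
    return 0;
}
```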
DISCLAIMER: Apologies for this very large question, as it could take a lot of your time.
I have a stereo setup consisting of two webcams; these cameras have autofocus. The stereo setup is in the canonical configuration, with the cameras separated by a 10 cm baseline.
I am using the stereo_calib.cpp program for stereo calibration, provided by the OpenCV sample programs.
Initially, I captured the chessboard images with my stereo rig in the way shown in the sample stereo images, and then tried to calibrate the setup, but the stereo rectification was either completely blank or the undistorted left and right images were tilted by about 40 degrees.
Since this was the case, I then captured a set of 17 stereo chessboard image pairs keeping the Z distance constant and without any rotation; at that point the stereo images were correctly rectified during stereo calibration. This Working Set contains the chessboard images taken by the stereo setup, along with the program and an image of how well the rectification was achieved.
Later, when I tried to calibrate the stereo setup again (as the cameras in the rig had been disturbed) with another new set of chessboard images taken by my stereo rig, the program was unable to rectify the stereo images. I am providing the Non Working Set, where you can check the images taken by the stereo setup along with the images of the rectification.
Since a picture is worth a thousand words, please look at the output images of the provided program; they will tell you much more than I could say in my own words.
I am trying to find new approaches to stereo face recognition.
Any help in this regard is highly appreciated.
In addition, I also need some existing techniques from which I could kick-start my experimentation with new ways of doing face recognition using the stereo information.