I have a stationary mono camera which captures single image frames at some fps.
Assuming the camera is not allowed to move, how do I generate a stereo image pair from a single captured frame? Do any algorithms exist for this? If so, are they available in OpenCV?
To get a true stereo image, you need a stereo camera, i.e. a camera with two calibrated lenses. So you cannot get a stereo image from a single camera with traditional techniques.
However, with the magic of deep learning, you can estimate a depth image from a single camera.
And no, there is no built-in OpenCV function to do that.
The most common use of this kind of technique is in 3D TVs, which often offer 2D-to-3D conversion, and thus mono-to-stereo conversion.
Various algorithms are used for this; you can look at this state-of-the-art report.
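If you do have a per-pixel depth estimate (e.g. from a monocular depth network), you can synthesize an approximate second view by shifting pixels horizontally in proportion to their disparity (depth-image-based rendering). A minimal NumPy sketch, assuming a grayscale image and a disparity map already expressed in pixels; the hole filling here is deliberately naive:

```python
import numpy as np

def synthesize_right_view(left, disparity):
    """Approximate the right view of a stereo pair by shifting each pixel
    of a grayscale left image horizontally by its disparity
    (simple depth-image-based rendering)."""
    h, w = left.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        # Paint far-to-near so that nearer pixels (larger disparity) win.
        order = np.argsort(disparity[y])
        tx = xs[order] - np.round(disparity[y][order]).astype(int)
        valid = (tx >= 0) & (tx < w)
        right[y, tx[valid]] = left[y, order[valid]]
        filled[y, tx[valid]] = True
        # Naive hole filling: propagate the last filled value rightwards.
        for x in range(1, w):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```

Real 2D-to-3D systems do much smarter occlusion handling and inpainting; this only illustrates the geometry.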
There is also an optical way to do this.
If you can add binocular prisms/mirrors in front of your camera lens, then you can obtain a real stereoscopic image from a single camera. That, of course, needs access to the camera and some optical setup. It also introduces problems of its own, like incorrect auto-focusing, the need for image calibration, etc.
You can also merge red/cyan filtered images together to retain the camera's full resolution.
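To illustrate the red/cyan idea: an anaglyph simply takes the red channel from the left view and the green/blue channels from the right one. A minimal sketch, assuming RGB channel order:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red channel from the left image, green/blue channels from the
    right image (RGB channel order assumed)."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]
    return anaglyph
```

Viewed through red/cyan glasses, each eye then sees (approximately) only its own view.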
Here is a publication which might be helpful: Stereo Panorama with a Single Camera.
You might also want to have a look at the OpenCV camera calibration module and at this page.
Related
I am trying to differentiate between camera motion and tool motion in a surgical video.
I have tried optical flow using OpenCV's Farneback method and passed the results to an ML model, but with no success. A major issue is getting good keypoints in the case of camera motion. Is there an alternative technique to distinguish between camera motion and tool/tissue movement? Note: camera motion happens in only 10% of the video.
I wish I could add a comment (too new to be able to comment), as I don't have a good answer for you.
I think it really depends on the nature of the input image. Can you show some typical input images here?
What does your optical flow result look like? I would have thought you might get some reasonable results.
Have you tried a motion estimation method, to analyze whether there is global movement across frames, or only some local movements?
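To illustrate the global-vs-local idea, here is a rough NumPy heuristic: given a dense flow field (e.g. from cv2.calcOpticalFlowFarneback), take the median flow vector as the global (camera) motion estimate and check what fraction of pixels agree with it. The thresholds below are arbitrary placeholders you would have to tune for your footage:

```python
import numpy as np

def classify_motion(flow, deviation_thresh=1.0, global_fraction=0.7):
    """flow: HxWx2 dense optical flow field.
    If most pixels agree with the dominant (median) flow vector, call it
    camera motion; otherwise the motion is local (tool/tissue).
    Heuristic only -- thresholds are placeholders."""
    median_flow = np.median(flow.reshape(-1, 2), axis=0)
    deviation = np.linalg.norm(flow - median_flow, axis=2)
    consistent = np.mean(deviation < deviation_thresh)
    moving = np.linalg.norm(median_flow) > 0.5  # any dominant motion at all?
    if moving and consistent >= global_fraction:
        return "camera"
    return "local"
```

This ignores camera rotation and parallax (a homography or essential-matrix fit would be more robust), but it is a cheap first pass.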
The purpose of calibration is to correct distortion in the image.
What is the main source of this distortion when a lens, for example a fish-eye lens, is used?
Q1: Suppose we want to identify some objects and use fish-eye lenses in order to cover a wide view of the environment. Do we need to calibrate the camera, i.e. correct the image distortion first and then identify the objects? Does the corrected image still cover the same set of objects? If it does not cover all the objects visible in the distorted image, what is the point of using a wide-angle lens? Wouldn't it be better to use an ordinary (rectilinear) lens and avoid having to calibrate the camera?
Q2: For calculating the distortion parameters (intrinsic, extrinsic, etc.), do we need to compute the parameters for every camera of the same model independently? That is, will the distortion parameters found for one camera work correctly for another camera with the same specifications?
Q1 answer: You need to dewarp the image/video that comes out of the camera. There are libraries that do this for you, and you can tune the dewarping according to your needs.
When dewarping the fisheye input, the corners of the video feed are a little lost. This won't be a huge loss.
Q2 answer: Usually you don't need a different dewarping configuration for each camera of the same model, but if you want to fine-tune it, there are parameters for that.
FFmpeg has a lens correction filter; the parameters to fine-tune are also described in the link.
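For example, a possible invocation of FFmpeg's lenscorrection filter; the k1/k2 coefficients here are illustrative values, and you will need to tune them for your lens:

```shell
# Dewarp using FFmpeg's lenscorrection filter.
# cx/cy: optical centre as a fraction of width/height (0.5 = image middle);
# k1/k2: radial correction coefficients -- illustrative values, tune per lens.
ffmpeg -i fisheye.mp4 -vf "lenscorrection=cx=0.5:cy=0.5:k1=-0.227:k2=-0.022" corrected.mp4
```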
I want to verify a camera calibration algorithm, but using pictures taken by myself is not convincing. Are there any canonical image sets for camera calibration?
Look at the page A Flexible New Technique for Camera Calibration by Zhengyou Zhang; in the section "Experimental data and result for camera calibration" you will find five images (like this one), five sets of image coordinates (for example https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/imagepointsone.txt), and the result of the calibration here: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/completecalibration.txt
I do not know whether it is canonical or not; for sure, Zhengyou Zhang is well known for his work on camera calibration, and his article:
ZHANG, Zhengyou. A flexible new technique for camera calibration. IEEE Transactions on pattern analysis and machine intelligence, 2000, 22.11: 1330-1334.
is highly cited.
You may also have a look at the Camera Calibration Toolbox for Matlab by Jean-Yves Bouguet; his code is the basis for the OpenCV algorithms, but I do not know whether it includes images for accuracy and correctness testing.
How do I recover the correct image from a radially distorted image using OpenCV? For example:
Please provide useful links.
Edit
The biggest problem is that I have neither the camera used for taking the picture nor a chessboard image.
Is that even possible?
Well, there is not much you can do if you don't have the camera, or at least its model. As you may know, the usual camera model is the pin-hole model, which basically means that 3D world coordinates are transformed (mapped) to 2D coordinates on the camera image plane.
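To make that mapping concrete, here is a minimal NumPy sketch of the pin-hole projection; the focal lengths and principal point in K are illustrative values, not from any real calibration:

```python
import numpy as np

# Pin-hole projection: a 3D point (X, Y, Z) in the camera frame maps to
# pixel (u, v) = (fx*X/Z + cx, fy*Y/Z + cy).
# fx, fy, cx, cy below are illustrative values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d):
    p = K @ point_3d        # homogeneous image coordinates
    return p[:2] / p[2]     # perspective divide by depth

print(project(np.array([0.1, -0.05, 2.0])))  # -> [360. 220.]
```

Calibration is exactly the problem of recovering K (plus distortion coefficients) from images, which is why it is hard to undo distortion without the camera.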
Camera Resectioning
If you don't have access to the camera, or at least two chessboard images, you can't estimate the focal length, principal point, and distortion coefficients, at least not in the traditional way. If you have more images than the one you showed, or a video from that camera, you could try auto- or self-calibration.
Camera auto-calibration
Another auto-calibration
Yet another
OpenCV auto-calibration
DISCLAIMER: Apologies for this very large question, as it could take a lot of your time.
I have a stereo setup consisting of two webcams; these cameras have auto-focus. The stereo setup is in canonical configuration, with the cameras separated by a 10 cm baseline.
I am using the stereo_calib.cpp program for stereo calibration, provided by the OpenCV sample programs.
Initially, I captured the chessboard images with my stereo rig in the way shown in the sample C++ stereo images and then tried to calibrate the setup, but the stereo rectification was either completely blank or the undistorted left and right images were tilted by about 40 degrees.
Since this was the case, I then captured a set of 17 stereo chessboard pairs, keeping the Z distance constant and without any rotation; at this point the stereo images were correctly rectified during stereo calibration. This Working Set contains the chessboard images taken with the stereo setup, along with the program and an image showing how well the rectification was achieved.
Later, when I tried to calibrate the stereo setup again (as the cameras in the rig had been disturbed) with another new set of chessboard images, the program was unable to rectify the stereo images. I am providing the Non Working Set, where you can check the images taken with the stereo setup along with the images of the rectification.
As a picture is worth a thousand words, please see the output images of the provided program; they will tell you much more than I could in my own words.
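One objective way to judge a rectification (instead of eyeballing the output images) is to measure the vertical disparity of matched points: after a correct rectification, corresponding points should lie on the same scanline. A small NumPy sketch, with hypothetical point lists:

```python
import numpy as np

def rectification_error(left_pts, right_pts):
    """Mean absolute vertical disparity between matched points, in pixels.
    After a good rectification this should be well under a pixel; values
    of several pixels explain a blank or badly tilted rectified output."""
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    return np.mean(np.abs(left_pts[:, 1] - right_pts[:, 1]))
```

Note also that auto-focus can change the intrinsics between captures, which alone can ruin a calibration; locking the focus before capturing the chessboard set is worth trying.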
I am trying to find new stereo face recognition techniques.
Any help in this regard is highly appreciated.
In addition, I need some existing techniques with which I could kick-start my experimentation on new ways of doing face recognition using stereo information.