I want to try augmented reality applications with OpenCV and OpenGL.
I have a 170-degree FOV fisheye camera, and I want to draw 3D polygons with these libraries.
I'm trying to apply the distortion coefficients in OpenGL, and I also want to convert the OpenCV camera matrix into an OpenGL perspective projection matrix.
I have also tried drawing 3D mesh polygons with OpenCV directly on the distorted images (my camera model is Scaramuzza). The rendering freezes when the polygons start to go out of the frame, and the frame rate drops very low.
So far I have not managed to get this working. What approach should I follow?
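For the camera-matrix part of the question, here is a minimal sketch of turning an OpenCV pinhole intrinsic matrix into an OpenGL-style projection matrix. It does not handle the fisheye/Scaramuzza distortion (that usually has to be applied separately, e.g. by undistorting the image or warping in a shader), and the image size and near/far planes below are placeholders.

```python
import numpy as np

def opencv_to_opengl_projection(K, width, height, near, far):
    """Build a 4x4 OpenGL-style projection matrix from an OpenCV intrinsic
    matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].

    Assumes the usual conventions: OpenCV has its origin at the top-left
    pixel with +z forward, while OpenGL looks down -z with y up.  The signs
    on the principal-point terms may need flipping depending on whether you
    also flip the image vertically."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([
        [2 * fx / width, 0.0,             1 - 2 * cx / width,           0.0],
        [0.0,            2 * fy / height, 2 * cy / height - 1,          0.0],
        [0.0,            0.0,            -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,            0.0,            -1.0,                          0.0],
    ])

# Example with a hypothetical 640x480 camera; transpose (or upload row-major)
# when passing the matrix to OpenGL, which expects column-major storage.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
proj = opencv_to_opengl_projection(K, 640, 480, near=0.1, far=100.0)
print(proj)
```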
I know the classical tools for camera calibration use a planar board with known dimensions.
My problem is slightly different. Instead of a planar board, I have some 3D points whose positions with respect to the camera are known (6 DoF). Those 3D points can be detected in the image, so their pixel coordinates are known as well.
Is there any tool that can calibrate the camera from such 3D points?
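OpenCV's cv2.calibrateCamera accepts arbitrary (non-planar) 3D object points, provided you pass an initial intrinsic guess and set CALIB_USE_INTRINSIC_GUESS. A rough sketch with synthetic correspondences (all values are placeholders; the point and image data would come from your own detections):

```python
import numpy as np
import cv2

# Synthetic example: a known camera projects known 3D points, and
# calibrateCamera recovers the intrinsics from the 3D-2D correspondences.
image_size = (640, 480)
K_true = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])

rng = np.random.default_rng(0)
obj_points, img_points = [], []
for _ in range(3):                       # a few views improve conditioning
    pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3)).astype(np.float32)
    proj = (K_true @ pts3d.T).T
    img_points.append((proj[:, :2] / proj[:, 2:]).astype(np.float32))
    obj_points.append(pts3d)

# For a non-planar point set, OpenCV requires an initial intrinsic guess.
K_init = np.array([[500.0, 0, image_size[0] / 2],
                   [0, 500.0, image_size[1] / 2],
                   [0, 0, 1]])
dist_init = np.zeros(5)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, K_init, dist_init,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("RMS:", rms, "\nestimated K:\n", K)
```

If the intrinsics are already known and only the camera pose is needed, cv2.solvePnP on the same correspondences is the simpler tool.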
I'm using an RGB-D camera (Intel RealSense D345) to implement a tabletop projected augmented reality system. Using chessboard calibration, I obtain a transformation matrix that I use to transform each incoming frame with warpPerspective from OpenCV. It works really well for the color frames. The problem is: am I allowed to do this for depth images as well, considering that depth images are 3D geometric data? What is the right way to apply a transformation matrix to depth images?
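For reference, a minimal sketch of the workflow described above, with placeholder values: H stands in for the 3x3 homography obtained from the chessboard calibration, and each color frame is warped into the tabletop/projector space.

```python
import numpy as np
import cv2

# H is a placeholder for the homography from the chessboard calibration step;
# target_size is the assumed resolution of the projected tabletop space.
H = np.array([[1.02, 0.01, -35.0],
              [0.00, 1.05, -20.0],
              [0.00, 0.00,   1.0]])
target_size = (1280, 720)

color_frame = cv2.imread("color_frame.png")      # hypothetical captured color frame
warped_color = cv2.warpPerspective(color_frame, H, target_size)

# Note: applying the same 2D warp to the raw depth map only resamples the
# depth values in image space; it does not re-express the 3D geometry in the
# new frame, which is exactly the concern raised in the question.
```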
I have seen some 3D facial scanning devices that use three cameras to produce a 3D image of a face.
Is there any specific angle at which these cameras should be fixed for this calculation?
Is there any SDK or tool in this domain that could simplify producing a 3D image from such fixed cameras?
The smaller the angle between the cameras, the less depth information you will get from them. So the angle matters, but I cannot tell you that it needs to be exactly x degrees.
I am trying to build a panorama in the browser out of 6 videos taken from 6 cameras. The stitching is done with OpenCV in Python, which gives me back the homographies. How can I apply the homographies to 6 Three.js planes so that I get the same result as in OpenCV? And is it possible to extract the translation/rotation from a homography so that I can apply it to a plane?
Thanks a lot!
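On the second part: OpenCV can split a homography into candidate rotations and translations with cv2.decomposeHomographyMat, provided the camera intrinsic matrix K is known. A minimal sketch with placeholder H and K:

```python
import numpy as np
import cv2

# Placeholder homography (e.g. from the stitching step) and intrinsics.
H = np.array([[0.90, 0.05, 30.0],
              [-0.03, 1.10, -12.0],
              [1e-4, 2e-4, 1.0]])
K = np.array([[1000.0, 0, 960],
              [0, 1000.0, 540],
              [0, 0, 1]])

num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
# Up to four (R, t, n) candidates are returned; the physically valid one has
# to be selected (e.g. points must lie in front of both cameras), and the
# translation is only defined up to scale.
for R, t, n in zip(rotations, translations, normals):
    print("R=\n", R, "\nt =", t.ravel(), "\nplane normal =", n.ravel())
```

The chosen R and t could then be used to build the transform of the corresponding Three.js plane.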
I am using OpenCV and I am a newbie to the entire thing.
I have a scenario: I am projecting onto a wall, and I am building a kind of robot which has a camera. I wanted to know how I can process the image so that I can get the real-world coordinates of the blobs tracked by my camera.
First of all, you need to calibrate the intrinsics of the camera. Use checkerboard patterns printed on cardboard to do this; OpenCV has methods for it, although there are ready-made tools as well.
To get an idea, I have written some Python code that calibrates from a live video stream; move the cardboard in front of the camera at various angles and distances. Take a look here: http://svn.ioctl.eu/pub/opencv/py-camera_intrinsic/
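A rough sketch of the core calls involved, assuming checkerboard images saved as calib_*.png and a 9x6 board with 25 mm squares (these names and values are assumptions, not necessarily what the linked script uses):

```python
import glob
import numpy as np
import cv2

pattern_size = (9, 6)          # inner corners per row/column of the printed board
square_size = 0.025            # side length of one square in meters

# 3D coordinates of the board corners in the board's own coordinate system.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", K)
```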
Then you need to calibrate the extrinsics of the camera, that is, the position of the camera with respect to your world coordinates. You can place some markers on the wall, define the 3D positions of those markers, and let OpenCV compute the extrinsics from them (cvFindExtrinsicCameraParams2 in the old C API, solvePnP in the newer one).
In my sample code, I calculate the extrinsics with respect to the checkerboard so I can render a teapot in the correct perspective of the camera. You will have to adjust this to your needs.
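A minimal sketch of that extrinsic step with solvePnP; the marker coordinates, pixel positions, and intrinsics below are placeholders:

```python
import numpy as np
import cv2

# Known 3D marker positions on the wall (world frame, wall plane z = 0) and
# their detected pixel positions -- placeholder values.
marker_world = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [1.0, 0.8, 0.0],
                         [0.0, 0.8, 0.0]], dtype=np.float32)
marker_pixels = np.array([[312, 268], [540, 260], [548, 430], [318, 442]], dtype=np.float32)

# Intrinsics from the calibration step above (placeholders here).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(marker_world, marker_pixels, K, dist)
R, _ = cv2.Rodrigues(rvec)
# R and tvec map world coordinates into the camera frame: X_cam = R @ X_world + tvec.
```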
I assume you project only onto a flat surface. You have to know that surface's geometry to get the 3D coordinates of your detected blobs. Find the blobs in your camera image; then, knowing the intrinsics, the extrinsics, and the geometry, cast a ray from the camera through each blob's pixel and compute the intersection of that ray with the known surface. The intersection is the 3D point in world space where the blob is projected.
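A sketch of that last ray-casting step, assuming the projection surface is the plane z = 0 in world coordinates; the intrinsics, extrinsics, and blob position are placeholders:

```python
import numpy as np
import cv2

# Placeholder intrinsics and extrinsics (world -> camera, from solvePnP).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])

blob_px = np.array([[[400.0, 300.0]]])   # detected blob center in pixels (placeholder)

# Normalized, undistorted image coordinates give the ray direction in the camera frame.
norm = cv2.undistortPoints(blob_px, K, dist).reshape(2)
ray_cam = np.array([norm[0], norm[1], 1.0])

# Camera center and ray direction expressed in world coordinates.
cam_center_world = (-R.T @ t).ravel()
ray_world = R.T @ ray_cam

# Wall plane n . X = d, here z = 0 in world coordinates (assumption).
n = np.array([0.0, 0.0, 1.0])
d = 0.0
s = (d - n @ cam_center_world) / (n @ ray_world)
blob_world = cam_center_world + s * ray_world
print("blob position on the wall:", blob_world)
```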