How to convert a depth image to a point cloud in ROS?

I am using a simulated Kinect depth camera to receive depth images from the URDF in my Gazebo world. I have written a filter in Python that keeps only part of the depth image, as shown in the image below, and now I want to visualize this depth image as a point cloud in RViz.
Since I am new to ROS, it would be great if I could get some examples.
My depth image

Have you tried http://wiki.ros.org/depth_image_proc ? It provides nodelets that convert a depth image (plus its camera_info) into a PointCloud2 that RViz can display.
You can also find examples here:
https://github.com/ros-perception/image_pipeline
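If you would rather do it by hand, here is a minimal ROS 1 Python sketch that back-projects a depth image into a PointCloud2 for RViz. The topic names and the intrinsics (fx, fy, cx, cy) are placeholders, not values from your setup; take the real ones from your camera's camera_info topic:

#!/usr/bin/env python
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs import point_cloud2
from sensor_msgs.msg import Image, PointCloud2
from std_msgs.msg import Header

bridge = CvBridge()
pub = None

# Placeholder intrinsics -- read the real ones from your camera_info topic.
fx, fy = 525.0, 525.0
cx, cy = 319.5, 239.5

def depth_cb(msg):
    # Simulated Kinects usually publish 32FC1 depth in meters; if yours
    # publishes 16UC1 in millimeters, scale accordingly.
    depth = np.asarray(bridge.imgmsg_to_cv2(msg, 'passthrough'), dtype=np.float32)
    v, u = np.indices(depth.shape)
    ok = np.isfinite(depth) & (depth > 0)   # drop invalid pixels
    z = depth[ok]
    x = (u[ok] - cx) * z / fx               # pinhole back-projection
    y = (v[ok] - cy) * z / fy
    header = Header(stamp=msg.header.stamp, frame_id=msg.header.frame_id)
    pub.publish(point_cloud2.create_cloud_xyz32(header, np.column_stack((x, y, z))))

if __name__ == '__main__':
    rospy.init_node('depth_to_cloud')
    pub = rospy.Publisher('points', PointCloud2, queue_size=1)
    rospy.Subscriber('depth/image_raw', Image, depth_cb)
    rospy.spin()

In RViz, add a PointCloud2 display and point it at the points topic; your filtered region will show up as a partial cloud.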

Related

How can I make a simulation-rendered depth image look like a depth image created by stereo matching

For an AI project I am collecting images from a simulated environment and from a real environment. In both cases a grayscale depth image is generated. However, the simulated environment generates perfect depth images, which are not representative of the real world. That is why I want to artificially make the simulated depth image look like one from the real world.
I am looking for functions, for example in OpenCV, to generate noise that makes the simulated image look like a real-world one. I already tried OpenCV's filter2D, which improved the image a bit, but I am looking for functions that work better.
The real-world depth image is generated using a ZED2 stereo vision camera.
Note that these images are not of the same scene, but they both contain trees, so they should give a rough idea.
Simulated image:
Real-world image:
Thanks
Sieuwe
You can try synthesizing 2D stereo images from the perfect depth map and then reconstructing a depth map from them, which reintroduces realistic noise. Detailed steps (a Python sketch follows the list):
1. Assume one camera matrix with arbitrary extrinsic parameters, then create a second camera matrix with a similar pose so the two look like a stereo pair.
2. Project the simulated depth map into 2D images using these two camera poses to generate a stereo image pair.
3. Add random noise to the images to perturb the feature-point correspondences.
4. Given the two views of the same scene, estimate the depth map; it will contain real-world-like noise.
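A hedged Python/OpenCV sketch of those steps, simplified to the rectified special case (a pure horizontal baseline instead of two arbitrary poses), so the second view can be synthesized by warping with the ideal disparity and the depth re-estimated with semi-global matching. left_gray, depth, FX and BASELINE are assumptions, not values from the question:

import cv2
import numpy as np

# Assumed inputs: left_gray is the simulated grayscale render (uint8),
# depth its perfect depth map in meters; FX and BASELINE are made up.
FX = 700.0        # focal length in pixels (assumed)
BASELINE = 0.12   # virtual stereo baseline in meters (assumed)

def synthesize_right_view(left_gray, depth):
    # Forward-warp the left image by the ideal disparity to fake a right view.
    h, w = left_gray.shape
    disparity = FX * BASELINE / np.maximum(depth, 1e-6)
    v, u = np.indices((h, w))
    u_right = np.round(u - disparity).astype(int)
    ok = (u_right >= 0) & (u_right < w)
    right = np.zeros_like(left_gray)
    right[v[ok], u_right[ok]] = left_gray[v[ok], u[ok]]
    return right

def add_noise(img, sigma=4.0):
    # Gaussian pixel noise to perturb the feature correspondences.
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def reestimate_depth(left_gray, depth):
    right = synthesize_right_view(left_gray, depth)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    # SGBM returns fixed-point disparity scaled by 16.
    disp = sgbm.compute(add_noise(left_gray), add_noise(right)).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan          # invalid matches, as in real stereo output
    return FX * BASELINE / disp       # depth map with realistic stereo artifacts

The holes, quantization steps and edge fattening in the re-estimated map come from the stereo matcher itself, which is exactly the kind of artifact a ZED2-style depth map has.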

Applying lens distortion correction during panorama image stitching

I am trying to do live panorama stitching using 6 camera streams (same camera model). Currently I am adapting OpenCV's stitching_detailed.cpp example to my needs.
My camera lenses have a great amount of barrel distortion, so I calibrated the cameras with the checkerboard procedure provided by OpenCV and obtained the intrinsic and distortion parameters. By applying getOptimalNewCameraMatrix and initUndistortRectifyMap I get an undistorted image that fulfills my needs.
I have read in several sources that lens distortion correction should benefit image stitching. So far I have used the previously undistorted images as input to stitching_detailed.cpp, and the resulting stitched image looks fine.
However, my question is whether I could somehow include the undistortion step in the stitching pipeline itself.
[Stitching pipeline diagram from the OpenCV documentation]
I am doing a cylindrical warping of the images in the stitching process. My guess is that maybe during this warping I could somehow include the undistortion maps calculated beforehand by initUndistortRectifyMap.
I also do not know if the camera intrinsic matrix K from getOptimalNewCameraMatrix could somehow help me in the whole process.
I am kind of lost and any help would be appreciated, thanks in advance.
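For reference, the undistortion step described in the question usually looks like the following Python sketch. K, dist and the file name are placeholders; the real values come from your checkerboard calibration:

import cv2
import numpy as np

# Placeholders: K and dist would come from cv2.calibrateCamera on
# your checkerboard images.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # strong barrel distortion

image = cv2.imread('frame.png')  # placeholder file name
h, w = image.shape[:2]

# alpha=0 crops to valid pixels only; alpha=1 keeps the full field of view.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(image, map1, map2, cv2.INTER_LINEAR)

Because the maps are precomputed once, the remap is cheap enough to run per camera per frame before stitching. If you do this, note that new_K (not the original K) is the intrinsic matrix that is consistent with the undistorted images you feed into the stitcher's camera estimation.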

How to convert webcam images to RGB-D

I'm building an iPhone-like Face ID program using my PC's webcam. I'm following this notebook, which uses a Kinect to create RGB-D images. Can I use my webcam to capture several images for the same purpose?
Here is how the notebook predicts the person in a Kinect image. It uses a .dat file:
file1 = 'faceid_train/(2012-05-16)(154211)/011_1_d.dat'
inp1 = create_input_rgbd(file1)
inp2 = create_input_rgbd(file1)  # in practice this would be a second capture
model_final.predict([inp1, inp2])
They use a Kinect to create RGB-D images, whereas you want to do something similar with an RGB-only camera? The hardware is different, so there is no direct method.
You would first have to estimate a depth map from the monocular image alone.
You can try Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries, as shown below. The estimated depth is fairly close to the real ground truth. For non-safety-critical use cases (e.g., controlling a UAV or a car in a benign setting), it is usable.
The code and model are available at
https://github.com/JunjH/Revisiting_Single_Depth_Estimation
Edit the demo .py file to run single-image detection:
image = load_your_image()         # replace with your own image loading
estimated_depth = model(image)    # deep-learned depth map
# Add your additional classification routine after this step.
Note that this method cannot run in real time, so you can only apply it at keyframes. People usually use feature tracking to fake continuous detection in between, which is common practice.
Also note that some phones do have a small depth-estimation sensor you could make use of; I am not sure of the details, as I work with Android and iOS only at a very minimal level.
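Putting the pieces together, a hedged sketch of building an RGB-D input from a webcam frame plus predicted depth might look like the following. estimate_depth is a hypothetical wrapper around the repo's model, and the exact preprocessing the notebook expects will differ in practice:

import cv2
import numpy as np

# Hypothetical glue code: grab a webcam frame, estimate depth with a
# single-image model, and stack them into an H x W x 4 RGB-D array.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
assert ok, 'could not read from the webcam'

rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32)
depth = estimate_depth(rgb)  # hypothetical wrapper around the repo's model
depth = cv2.resize(depth, (rgb.shape[1], rgb.shape[0]))
rgbd = np.dstack([rgb, depth])  # 4-channel RGB-D, analogous to the Kinect input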

How can I convert a 32-bit depth image to PCD or PLY with PCL

I am trying to get a point cloud from a 32-bit depth image from the HoloLens, but I am having a hard time because I do not have much information about it. Do I have to have the camera parameters to get a point cloud from the depth image? Is there a way to convert it with PCL or OpenCV?
I have added some comments and a picture. I can now get the point cloud from the HoloLens depth image, but I convert the 32-bit depth image to grayscale first, and the sensor/lens seems to introduce a lot of distortion. To deal with this, I think I need to find a way to undistort and filter the depth image.
Do you have any other information about this?
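Yes, you need the camera intrinsics (fx, fy, cx, cy) to back-project a depth image into 3D. A minimal numpy sketch, assuming the depth image is already in meters (the intrinsic values and file names here are placeholders), with a simple ASCII PLY writer whose output PCL or any cloud viewer can load:

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Pinhole back-projection: depth (H x W, meters) -> N x 3 points.
    v, u = np.indices(depth.shape)
    ok = np.isfinite(depth) & (depth > 0)   # drop invalid pixels
    z = depth[ok]
    x = (u[ok] - cx) * z / fx
    y = (v[ok] - cy) * z / fy
    return np.column_stack((x, y, z))

def save_ply(points, path):
    # Minimal ASCII PLY so the cloud opens in PCL, MeshLab, CloudCompare, etc.
    with open(path, 'w') as f:
        f.write('ply\nformat ascii 1.0\n')
        f.write('element vertex %d\n' % len(points))
        f.write('property float x\nproperty float y\nproperty float z\n')
        f.write('end_header\n')
        for x, y, z in points:
            f.write('%f %f %f\n' % (x, y, z))

depth = np.load('depth.npy')  # placeholder: your H x W float32 depth in meters
# Placeholder intrinsics; the real values come from the HoloLens sensor APIs.
points = depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
save_ply(points, 'cloud.ply')

Converting the 32-bit depth to grayscale first throws away metric depth, so it is better to back-project the raw float values directly as above.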

OpenCV stitching with georeferencing

Is it possible to create a stitched image without losing image position and geo-referencing information using OpenCV?
For example, I have two images taken from a plane, and I have two polygons that describe where on the ground they are located. I am using the OpenCV stitching example. The OpenCV stitching process rotates the images and changes their positions. How can I preserve my geographic information after stitching? Is it possible?
Thanks in advance!
