RViz point cloud in wrong location - ROS

I designed a robot in Gazebo. The robot is equipped with a Kinect depth camera; that is, the plugin "libgazebo_ros_openni_kinect.so" is loaded.
When I display the PointCloud2 in rviz, the point cloud is in the wrong location: it should appear in front of the camera, not above it. How can I solve this problem?
Here is the image of rviz: [rviz screenshot]
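A common cause of this symptom is the ROS optical-frame convention: the Gazebo Kinect plugin publishes the cloud in the frame named by its <frameName> tag, and that frame is expected to follow REP 103's optical convention (z forward, x right, y down) rather than the body convention (x forward). If the URDF lacks such a frame, a rotated one can be published separately; below is a minimal rospy sketch, where the frame names camera_link and camera_depth_optical_frame are assumptions to be matched against your URDF.

```python
#!/usr/bin/env python
# Publish a static transform that rotates the camera body frame into the
# ROS optical convention (z forward, x right, y down).
# Frame names are assumptions; match them to <frameName> in your URDF.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from tf.transformations import quaternion_from_euler

rospy.init_node('camera_optical_frame_broadcaster')

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'camera_link'                 # body frame (x forward)
t.child_frame_id = 'camera_depth_optical_frame'   # optical frame (z forward)

# Fixed body -> optical rotation: roll = -90 deg, pitch = 0, yaw = -90 deg,
# the same rpy you would put in a URDF joint for an optical frame.
q = quaternion_from_euler(-1.5707963, 0.0, -1.5707963)
t.transform.rotation.x = q[0]
t.transform.rotation.y = q[1]
t.transform.rotation.z = q[2]
t.transform.rotation.w = q[3]

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```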

Related

How to convert Depth Image to Pointcloud in ROS?

I am using a simulated Kinect depth camera to receive depth images from the URDF in my Gazebo world. I have written a Python filter that keeps only part of the depth image, as shown in the picture, and now I want to visualize this depth image as a point cloud in rviz.
Since I am new to ROS, it would be great if I could get some examples.
My depth image
Have you tried depth_image_proc? http://wiki.ros.org/depth_image_proc
You can also find examples here:
https://github.com/ros-perception/image_pipeline
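If you want to see what depth_image_proc does under the hood, the conversion is just the pinhole model inverted: x = (u - cx) * z / fx and y = (v - cy) * z / fy. Here is a minimal rospy sketch of a do-it-yourself version, assuming a 32FC1 depth topic in metres; the topic names are placeholders.

```python
#!/usr/bin/env python
# Back-project a depth image into a PointCloud2 using the pinhole model:
#   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
# Topic names are placeholders; point them at your camera's topics.
import rospy
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CameraInfo, PointCloud2
import sensor_msgs.point_cloud2 as pc2

rospy.init_node('depth_to_cloud')
bridge = CvBridge()

# Grab the intrinsics once; K = [fx 0 cx; 0 fy cy; 0 0 1]
info = rospy.wait_for_message('/camera/depth/camera_info', CameraInfo)
fx, fy = info.K[0], info.K[4]
cx, cy = info.K[2], info.K[5]

pub = rospy.Publisher('/camera/depth/points_filtered', PointCloud2, queue_size=1)

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg)     # 32FC1 image, metres
    v, u = np.indices(depth.shape)        # pixel row/column grids
    z = depth.astype(np.float32)
    valid = np.isfinite(z) & (z > 0)      # drop NaN / zero readings
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    # Reuse the depth header so the cloud stays in the optical frame.
    pub.publish(pc2.create_cloud_xyz32(msg.header, points))

rospy.Subscriber('/camera/depth/image_raw', Image, on_depth, queue_size=1)
rospy.spin()
```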

OpenCV: correcting these distorted images

What is the procedure to correct the following distorted images? They look as if they are bulging out from the center. They are all of the same QR code, so a combination of such images could be used to arrive at a single correct, straight image.
Please advise.
The distortion you are experiencing is called "barrel distortion"; technically it is a combination of radial and tangential distortion.
The solution to your problem is OpenCV's camera calibration module. The OpenCV wiki has documentation for it, and OpenCV ships with source-code examples showing how to calibrate a camera.
Basically, you print an image of a chessboard, take a few pictures of it, run the calibration routine, and get the camera matrix and distortion coefficients as output. You then apply these to each video frame (the function is cv::undistort()), which straightens the curved lines in the image.
Note: it will not work if you change the zoom or focal length of the camera.
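As a concrete illustration of that procedure, here is a condensed Python sketch along the lines of OpenCV's calibration tutorial; the 9x6 inner-corner pattern and the file names are assumptions.

```python
# Chessboard calibration followed by undistortion (cv2.undistort),
# condensed from the standard OpenCV calibration tutorial.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row/column; match your printed board

# Planar object points: (0,0,0), (1,0,0), ... in board-square units
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('calib_*.jpg'):          # your chessboard photos
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Apply the result to any frame from the same camera (same zoom/focus!)
frame = cv2.imread('distorted.jpg')
straight = cv2.undistort(frame, K, dist)
cv2.imwrite('undistorted.jpg', straight)
```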
If the camera details are unavailable and uncontrollable, then your problem is much harder. There is a way to undo the distortion, but I don't know whether OpenCV has built-in modules for it; I am afraid you will need to write a lot of code.
Basically, you detect as many long lines as possible. From those (vertical and horizontal) lines you build a grid of intersection points. Finally you fit the grid of those points with OpenCV's calibration module.
If you have enough intersection points (say 20 or more), you will be able to calculate the distortion matrix and undistort the image.
You will not be able to fully calibrate the camera, though. In other words, you cannot run a one-time process that computes the expected distortion; instead, in each and every video frame you calculate the distortion matrix directly, invert it, and undistort that frame.
If you are not familiar with image-processing techniques and cannot find reliable open-source code that solves your problem directly, then I am afraid you will not be able to remove the distortion. Sorry.
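For what the line-based idea above might look like in practice, here is a hedged sketch of its first step only: detecting long lines and collecting their intersections. Matching those intersections to an assumed regular planar grid and feeding them to cv2.calibrateCamera is left out.

```python
# Detect long, roughly vertical/horizontal lines and compute their pairwise
# intersections. These points would then be matched against an assumed
# regular grid and fed to cv2.calibrateCamera (not shown here).
import cv2
import numpy as np

img = cv2.imread('distorted.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=img.shape[1] // 3, maxLineGap=10)

def intersection(l1, l2):
    # Intersect the two infinite lines through the segment endpoints.
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None  # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

segs = [l[0] for l in lines] if lines is not None else []
horiz = [s for s in segs if abs(s[3] - s[1]) < abs(s[2] - s[0])]
vert = [s for s in segs if abs(s[3] - s[1]) >= abs(s[2] - s[0])]

grid = []
for h in horiz:
    for v in vert:
        p = intersection(h, v)
        if p is not None:
            grid.append(p)
print('%d candidate grid points' % len(grid))
```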

OpenCV: correcting radially distorted images when chessboard images are not available

How do I recover the correct image from a radially distorted image using OpenCV? For example: [distorted image]
Please provide me useful links.
Edit
The biggest problem is that I have neither the camera used for taking the picture nor a chessboard image.
Is that even possible?
Well, there is not much you can do if you don't have the camera, or at least its model. As you may know, the usual camera model is the pinhole model: 3D world coordinates are transformed (mapped) to 2D coordinates on the camera's image plane.
Camera Resectioning
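For reference, the pinhole mapping mentioned above in two lines of arithmetic; all values here are made up for illustration.

```python
# Pinhole model: a 3D camera-frame point (X, Y, Z) maps to pixel (u, v) via
#   u = fx * X / Z + cx,   v = fy * Y / Z + cy
# fx, fy, cx, cy and the point below are made-up illustration values.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
X, Y, Z = 0.2, -0.1, 1.5

u = fx * X / Z + cx
v = fy * Y / Z + cy
print(u, v)  # -> 389.5 204.5
```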
If you don't have access to the camera, or at least two chessboard images, you can't estimate the focal length, principal point, or distortion coefficients, at least not in the traditional way. If you have more images than the one you showed, or a video from that camera, you could try auto- or self-calibration:
Camera auto-calibration
Another auto-calibration
yet another
Opencv auto-calibration

What is the difference between OpenCV face detection from an image and from a camera?

I am writing an OpenCV program for face detection from a video and face recognition using eigenfaces from a video.
I have found many documents on the internet about face detection from an image, and about face recognition using eigenfaces from an image.
My question is:
If I detect or recognize faces from a camera, does the camera take a photo and save it as "vectors" or in an XML file, just like detecting and recognizing faces from an image?
Best regards, Adrianos
In OpenCV, when you do face recognition, the existing algorithms work on an input array representing the image, so you always work with an image.
Also, there are different ways to get a stream from a camera (such as VideoCapture, or cvCaptureFromCAM, depending on the language), but all you can do with these is "take a snapshot", save that frame in a matrix, and process it as an image.
So yes, there is no difference between face detection in an image and face detection from a camera.
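A minimal sketch of that snapshot-and-process loop, assuming the stock Haar cascade that ships with OpenCV (the cascade path varies between installs):

```python
# Grab frames from the camera with VideoCapture and run the same Haar
# cascade detector you would run on a still image: each frame IS an image.
import cv2

# Path to the bundled cascade is an assumption; it varies per install.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)                      # default camera
while True:
    ok, frame = cap.read()                     # "take a snapshot" -> matrix
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```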

Is the stitching module of OpenCV able to stitch images taken from a camera in parallel motion?

I was wondering whether the stitching module of OpenCV (http://docs.opencv.org/modules/stitching/doc/stitching.html) can stitch images taken from a camera that moves parallel to the plane being photographed.
I know that panoramic stitching tools generally assume the camera center is fixed and that the camera only experiences motion such as pan or pitch.
I was thinking I could use this module to stitch images taken from a camera that moves parallel to the plane. The idea is to create a panoramic map of the ground.
Regards
Just for the record:
The current stitching utility in OpenCV does not consider translation of the camera; it assumes the camera is only rotated about its center, so it essentially projects the images onto a cylindrical or spherical canvas.
In my case, I needed to account for translational motion when estimating the camera transformation, and that is not possible with the existing stitching utility of OpenCV.
These observations are based on a walk-through of the OpenCV code and on trials.
You are welcome to correct this information or add to it, so that it can serve as a useful future reference.
Regards
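To illustrate what handling translation yourself can look like, here is a hedged pairwise mosaic sketch using ORB features and a RANSAC homography; it is a substitute for the stitching module, not part of it, and the file names are placeholders.

```python
# Pairwise mosaic for a translating camera: match ORB features, estimate a
# homography with RANSAC, and warp one image onto the other's plane. This
# sidesteps the rotation-only assumption of the stitching module.
import cv2
import numpy as np

img1 = cv2.imread('frame1.jpg')   # placeholder filenames
img2 = cv2.imread('frame2.jpg')

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

# H maps img2 coordinates into img1 coordinates.
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img2 into img1's frame on a canvas wide enough for both.
h, w = img1.shape[:2]
canvas = cv2.warpPerspective(img2, H, (w * 2, h))
canvas[0:h, 0:w] = img1
cv2.imwrite('mosaic.jpg', canvas)
```

For what it's worth, newer OpenCV releases also ship a SCANS mode for cv2.Stitcher that targets this translating, scanning-camera case, which may be worth trying before rolling your own.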
