How do we fix our flickering depth image when using an Orbbec Astra camera and Rviz?

We are trying to set up the Orbbec Astra Embedded S camera with ROS; our goal is to detect objects by reconstructing a 3D point cloud from the camera images. We are using ROS Noetic, the ROS package "astra_camera" (https://github.com/orbbec/ros_astra_camera), and Rviz to visualize the images and the 3D point cloud.
Here are the rostopics:
/camera/color/camera_info
/camera/color/image_raw
/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/ir/camera_info
/camera/ir/image_raw
First Issue:
The color (/camera/color/image_raw) and IR (/camera/ir/image_raw) image streams seem to be working fine, but the big issue is the depth image stream (/camera/depth/image_raw): it flickers very fast and does not seem to detect anything.
Second Issue:
When launching the camera by running "roslaunch astra_camera astra_pro.launch" we received three warnings:
Publishing dynamic camera transforms (/tf) at 10 Hz
Camera calibration file /home/astra/.ros/camera_info/rgb_camera.yaml not found.
Camera calibration file /home/astra/.ros/camera_info/ir_camera.yaml not found.
By calibrating the color camera with a checkerboard we were able to resolve the second warning, as the calibration generated the rgb_camera.yaml file containing the intrinsic parameters. We tried calibrating the IR camera as well, but the ir_camera.yaml file was not generated. We have not yet resolved the first warning.
Even though we are unsure whether this is related to the flickering depth image stream, we believe it is worth mentioning.
We are ROS beginners and would be grateful for any feedback that could help us find a solution. If you need any further information, please let us know.
Thanks!
The following GIF shows the issue: [Flickering-Issue GIF]
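A minimal diagnostic sketch (assuming cv_bridge and numpy are available; the node name is arbitrary) that prints per-frame statistics of the depth topic, to check whether the flicker is in the published data or only in the Rviz display:

    #!/usr/bin/env python3
    # Print simple per-frame statistics of the depth image so you can tell
    # whether the depth data itself is flickering or only the visualization.
    import numpy as np
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def on_depth(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        valid = np.count_nonzero(depth)  # pixels with a depth reading
        rospy.loginfo("encoding=%s valid=%d/%d min=%s max=%s",
                      msg.encoding, valid, depth.size, depth.min(), depth.max())

    rospy.init_node("depth_stats")
    rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
    rospy.spin()

If the logged numbers jump between mostly zero and sensible values from frame to frame, the flicker is in the data coming from the driver; if they stay stable, the problem is more likely on the visualization side.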

Related

How to track an open hand in any environment with an RGB camera?

I want to make a movable camera that tracks an open hand (facing the floor). It only needs to track the open hand, but it also has to know the hand's rotation (2D rotation).
This is what I have looked into so far:
Contour: as the camera is movable, the background is unknown and even the lighting is not fixed, so it's hard to get a clean hand segment in real time.
Haar cascade: it seems this just returns a rect and can't deal with rotation.
Feature detection: a hand doesn't have enough detail for this.
I am using the OpenCV Unity plugin to do this.
EDIT
https://www.codeproject.com/Articles/826377/Rapid-Object-Detection-in-Csharp
I see another library can do something like this (link above). Can OpenCV do it as well?
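For the 2D rotation part specifically, the contour route could be sketched roughly like this in Python (the poster uses the OpenCV Unity plugin, but the same calls exist there). The skin-colour thresholds below are only placeholders, and the poster's own objection about unknown backgrounds and lighting still applies:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                      # any RGB camera
    # Rough skin-colour range in HSV; these bounds are a guess and will need
    # tuning (or replacing with a better segmenter) for real scenes.
    LOWER = np.array([0, 40, 60], dtype=np.uint8)
    UPPER = np.array([25, 255, 255], dtype=np.uint8)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        # OpenCV 4.x: findContours returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)
            # The rotated bounding box gives a crude 2D rotation estimate.
            (cx, cy), (w, h), angle = cv2.minAreaRect(hand)
            cv2.putText(frame, f"angle: {angle:.1f}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break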

"Camera parameters adjusting failed" issue in stitching_detailed

I am using the stitching_detailed.cpp sample code from OpenCV. When I tried to stitch freehand images it worked, but when I kept the camera at a constant angle and only moved the image, I get the "Camera parameters adjusting failed" error. Is there a way to specify constant camera parameters?
Sometimes this issue is related to not having enough overlap (matches) between images. Try to move the image less between shots.
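A quick way to check whether overlap/matching is the limiting factor (a rough sketch, separate from stitching_detailed.cpp; the file names and distance threshold are placeholders) is to count feature matches between two neighbouring input images:

    import cv2

    # Two adjacent images from the set you are trying to stitch.
    img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    good = [m for m in matches if m.distance < 40]   # threshold is a placeholder
    print(f"{len(good)} good matches out of {len(matches)}")

Only a handful of good matches between neighbouring images usually means the overlap is too small for the bundle adjustment to converge.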

Kurento - Blurriness in the remote-stream stored images

What I did:
I am using Kurento Media Server to store video streaming frames on the server. I can store the frames on the server by using the opencv-plugin sample.
I am storing the video frames in the two scenarios below.
1) I need to take the images when the user shows their face in front of the camera (note: no movement).
Issue: none. I get good-quality images.
2) I need to take the images when the user walks around a room (note: the user is moving).
Issue: most of the stored images on the server are blurred while the user is moving (walking).
What I want:
i) Is this the default behavior of KMS (GStreamer)?
Note: I can see the local stream video clearly in the browser while moving, but only the remote stream video gets blurred while moving.
ii) Has anyone faced this issue before? If so, how did you solve it?
iii) Do I need to change any GStreamer configuration?
iv) Can anyone give me a suggestion to overcome this issue?
The problem you are having is that the exposure time of your camera is long. It's like taking a picture of a car in low light.
When there is movement in the scene, grabbing a single frame, especially if the camera's exposure time is long (due to low-light conditions or low camera quality), will result in this kind of image.
On continuous video you don't notice this blurriness because there is a sequence of images and your brain fills in the gaps.
Edit
You can try to improve the quality you are sending to the server by changing constraints on the WebRtcEndpoint using the setMaxVideoSendBandwidth and setMaxVideoRecvBandwidth properties. As long as there is available bandwidth, you'll get better quality.
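Independent of the bandwidth settings, a simple post-processing workaround (not a Kurento API, just plain OpenCV run over the frames already stored on the server) is to score each stored frame with the variance of the Laplacian and discard the ones that come out too blurry. The directory and threshold below are placeholders:

    import cv2
    import glob

    # Frames with a sharpness score below this are treated as blurred; tune it.
    BLUR_THRESHOLD = 100.0

    for path in glob.glob("frames/*.jpg"):          # placeholder path
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness < BLUR_THRESHOLD:
            print(f"{path}: likely blurred (score {sharpness:.1f})")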

OpenCV IP camera image deterioration

I have successfully connected to an IP camera using OpenCV. If I just show the image using imshow, everything is fine. But if I do some CPU processing on the image (I equalize the image and run a face detector), the image starts to deteriorate (I keep getting "ac-tex damaged" in the console). It starts to blur more and more, and I don't know why this is happening. I can confirm that this does not happen when getting images from my iSight camera (I am running on an iMac). Besides that, I am having a really weird time with OpenCV: the face detection doesn't seem to work when I run the app in Release mode. I am on Windows 8 and using VS 2010.
Can someone shed some light on these problems?
I suggest you break up your problem into small parts. Some questions I have for you:
Do the image capture and presentation work without any processing?
Does a local camera with the face detection algorithm work?
You mentioned Release mode; does that mean it works in the Debug build?
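If the deterioration only appears while the per-frame processing is running, a common cause (an assumption here, not something stated in the answer above) is that the slow processing starves the stream decoder; decoupling capture from processing with a background grab thread is one way to test this. A sketch with a placeholder stream URL:

    import threading
    import cv2

    class LatestFrameGrabber:
        """Keeps reading frames in the background so the decoder is never
        stalled while the main thread does slow processing."""
        def __init__(self, url):
            self.cap = cv2.VideoCapture(url)
            self.frame = None
            self.lock = threading.Lock()
            threading.Thread(target=self._loop, daemon=True).start()

        def _loop(self):
            while True:
                ok, frame = self.cap.read()
                if ok:
                    with self.lock:
                        self.frame = frame

        def latest(self):
            with self.lock:
                return None if self.frame is None else self.frame.copy()

    grabber = LatestFrameGrabber("rtsp://camera-address/stream")  # placeholder URL
    while True:
        frame = grabber.latest()
        if frame is None:
            continue
        # Slow processing happens here without blocking the capture thread.
        gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cv2.imshow("processed", gray)
        if cv2.waitKey(1) == 27:   # Esc to quit
            break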

OpenCV delay in camera output on the screen

I noticed a strange thing about OpenCV. I used one of the basic sample C programs delivered with OpenCV to show the camera output on the screen. However, I see the output on the screen with a tiny delay compared to what the camera sees: if I move my hand in front of the camera, it shows up on the screen with about a 0.1-second delay. We are developing an application that is very sensitive to these delays. Is there a way to remove this delay so that the image transfer is instantaneous? I don't see this tiny delay when I look at my camera output via Skype, for example.
Thank you very much!
P.
The OpenCV highgui display window is only meant for simple display of image-processing results; it's not optimised for high performance or low latency.
You will have to write something to talk between the videoInput library and whatever display library you want to use.
Just to confirm: yes, once I turned off the highgui video output, the processing speed went up significantly and the FPS along with it. Now the app is capable of getting and processing frames at 80 FPS. One solution to similar problems that doesn't require writing a new video output library is to display only every, say, tenth frame of the video to save processing power.
Thanks
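The "display only every tenth frame" suggestion could look roughly like this with OpenCV's Python bindings (a sketch; the original sample was a C program, but the idea is the same):

    import cv2

    cap = cv2.VideoCapture(0)          # default camera; adjust as needed
    SHOW_EVERY = 10                    # display only every 10th frame
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... per-frame processing goes here ...
        if i % SHOW_EVERY == 0:
            cv2.imshow("preview", frame)
            if cv2.waitKey(1) == 27:   # Esc to quit
                break
        i += 1
    cap.release()
    cv2.destroyAllWindows()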
