How to convert 'velodyne_msgs/VelodyneRawScan' to PointCloud or PointCloud2? - ros

I am new to ROS. I have rosbag files recorded from a Velodyne lidar mounted on top of a car. However, the lidar detection information is in 'velodyne_msgs/VelodyneRawScan'. I want to visualize it in RViz, which requires 'sensor_msgs/PointCloud2'. Is there a way to convert it into a PointCloud2?
Thank you very much.

Have you tried this package: http://wiki.ros.org/velodyne_pointcloud?
According to the documentation, once the package is installed, you can run the node that converts the data as follows:
$ rosrun velodyne_pointcloud cloud_node velodyne_packets:={your/velodyne/topic} velodyne_points:={your/expected/pcl/topic}
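Once the node is running, a quick way to check the conversion is to subscribe to the output topic. Here is a minimal rospy sketch (mine, not from the package documentation), assuming the converted cloud ends up on /velodyne_points; use whatever topic you remapped velodyne_points to:
#!/usr/bin/env python
# Minimal check that converted PointCloud2 messages arrive.
# Assumes the cloud topic is /velodyne_points; change it if you remapped to something else.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def callback(msg):
    # Count the xyz points to confirm real data is coming through.
    points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("received cloud with %d points", len(points))

rospy.init_node("cloud_check")
rospy.Subscriber("/velodyne_points", PointCloud2, callback)
rospy.spin()
If the log shows clouds arriving, you can add the same topic as a PointCloud2 display in RViz.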

Related

OpenCV can't grab-retrieve on Windows 11

I have been using the "grab and retrieve" flow with OpenCV's VideoCapture on Linux for a long time. Now that I am migrating the code to Windows 11, retrieve() no longer seems to work with the same USB webcams:
import sys
import cv2

camera_number = 2

print(f"video output encoding backends available to OpenCV: "
      f"{[cv2.videoio_registry.getBackendName(backend) for backend in cv2.videoio_registry.getWriterBackends()]}")
print(f"camera video acquisition backends available to OpenCV: "
      f"{[cv2.videoio_registry.getBackendName(backend) for backend in cv2.videoio_registry.getStreamBackends()]}")

video_stream = cv2.VideoCapture(camera_number, cv2.CAP_DSHOW)
if video_stream.isOpened():
    print(f"successfully opened camera number {camera_number}")
else:
    print(f"\nfailed to open camera number {camera_number}")
    sys.exit(1)

print(f"OpenCV is using the following backend library for camera video acquisition: {video_stream.getBackendName()}")

success, image = video_stream.read()
if success:
    print('read image succeeded')
else:
    print('read image failed')

while video_stream.isOpened():
    grabbed = video_stream.grab()
    if grabbed:
        print('image grab succeeded')
    else:
        print('image grab failed')
    success, image = video_stream.retrieve()
    if not success:
        raise ValueError(f'image retrieve failed')
This code succeeds up until the retrieve().
Here's the full output:
video output encoding backends available to OpenCV: ['FFMPEG', 'GSTREAMER', 'INTEL_MFX', 'MSMF', 'CV_IMAGES', 'CV_MJPEG']
camera video acquisition backends available to OpenCV: ['FFMPEG', 'GSTREAMER', 'INTEL_MFX', 'MSMF', 'CV_IMAGES', 'CV_MJPEG']
successfully opened camera number 2
OpenCV is using the following backend library for camera video acquisition: DSHOW
read image succeeded
image grab succeeded
Traceback (most recent call last):
line 49, in <module>
raise ValueError(f'image retrieve failed')
ValueError: image retrieve failed
Process finished with exit code 1
As seen above, this is using DSHOW. Notably, none of the backends other than DSHOW managed to open the webcams, although the OpenCV API lists them as supported.
Setting the environment variable OPENCV_VIDEOIO_DEBUG=1 does not reveal any warnings or errors.
The problem affects the USB cameras but not the laptop's built-in camera: switching to camera number 0, the built-in camera, the same code works seamlessly and loops on grab and retrieve, but it fails with both Logitech webcams (Logitech Brio and Logitech C390e) on this Windows 11 laptop.
Version info
opencv-python: 4.5.5.62
opencv-contrib-python: 4.5.5.62
winrt: 1.0.21033.1
python: 3.9.10
How would you approach this?
OK, it seems that (not unlike on Linux) each webcam exposes more than one camera number through the OS, and by choosing one camera number rather than the other, give or take a Windows Update that brought in a Logitech software update, it works.
Although both camera numbers manage to open the camera, only one of them enables the full grab/retrieve flow. Enumerating the properties of each camera number through code is rough, but with only two candidate numbers per webcam, trial and error gets it working.
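In case it helps someone, here is a small probe sketch (my own, nothing official) that loops over the first few DirectShow indices and reports which ones survive a full grab/retrieve cycle, so the working index can be picked without guessing:
import cv2

# Probe the first few DirectShow camera indices and report which ones
# support the full grab() + retrieve() flow, not just open() and read().
MAX_INDEX = 5  # arbitrary upper bound; adjust as needed

for index in range(MAX_INDEX):
    cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)
    if not cap.isOpened():
        continue
    grabbed = cap.grab()
    retrieved, frame = cap.retrieve() if grabbed else (False, None)
    print(f"index {index}: opened=True, grab={grabbed}, retrieve={retrieved}")
    cap.release()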

How to save maps using the map_server package according to their topics?

I used the multi_map_merge package to merge the maps from robot 1 and robot 2. But when I ran rosrun map_server map_saver -f ~/map, only the merged map from the /map topic was saved. I would also like to save the maps from the /tb3_0/map and /tb3_1/map topics. How can I save a map according to its topic? Thank you in advance.
You can remap the default map topic when invoking the map_saver node. All that is required is to point map at the topic you want to save.
For example:
rosrun map_server map_saver -f mymap map:=/move_base/global_costmap/costmap
Here the topic from which the map will be saved is /move_base/global_costmap/costmap.
You can substitute your own topics, i.e., /tb3_0/map and /tb3_1/map instead of /move_base/global_costmap/costmap, to create the respective maps.
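For example, assuming your setup publishes the individual maps on /tb3_0/map and /tb3_1/map as described in the question, the following two invocations should save them separately:
rosrun map_server map_saver -f tb3_0_map map:=/tb3_0/map
rosrun map_server map_saver -f tb3_1_map map:=/tb3_1/map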
Feel free to explore the ROS Wiki for more details:
Link to map_server documentation: http://wiki.ros.org/map_server

Convert bag file to a common pointcloud format

OS: Ubuntu 18.04
Lidar: Livox Mid 40
Hello,
I'm using a Livox Mid-40, which exports point clouds to .lvx files or rosbags. I am looking for a way to convert these files to a common point cloud format like PLY, LAS, or LAZ so I can use them in programs like CloudCompare, MeshLab, or others. I tried some solutions like rosrun pcl_ros bag_to_pcd <input_file.bag> <output_directory> or cloud_assembler, but the first one creates multiple PCDs instead of a single point cloud (it created one PCD per frame) and the second one doesn't work (cloud_assembler_try).
I'm really confused!
Any help would be appreciated.
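One possible approach, just a sketch rather than an official tool: read the PointCloud2 messages straight out of the bag with the rosbag Python API, accumulate the points, and write a single ASCII PLY that CloudCompare or MeshLab can open. This assumes the driver was publishing sensor_msgs/PointCloud2, that the sensor was stationary (points are simply concatenated, with no motion compensation), and that the cloud topic is /livox/lidar; adjust the bag name and topic to yours.
#!/usr/bin/env python
# Sketch: dump every PointCloud2 message in a bag into one ASCII PLY file.
# Assumes a stationary sensor and a PointCloud2 topic named /livox/lidar;
# change BAG, TOPIC and OUT as needed.
import rosbag
import sensor_msgs.point_cloud2 as pc2

BAG = "input_file.bag"
TOPIC = "/livox/lidar"
OUT = "merged.ply"

points = []
with rosbag.Bag(BAG) as bag:
    for _, msg, _ in bag.read_messages(topics=[TOPIC]):
        points.extend(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))

with open(OUT, "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write("element vertex %d\n" % len(points))
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in points:
        f.write("%f %f %f\n" % (x, y, z))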

How to upload an image of a map on ROS using OpenCV?

I need to apply Dijkstra's algorithm using ROS and OpenCV.
I have been given a PNG file and I need to convert it into an occupancy grid.
[Map image]
I've tried searching online but didn't find anything that fits my case.
(Also, if anyone knows any good beginner tutorials on ROS then I would be very grateful)
Since you have the picture file (usually in PNG format), you create a YAML file for it,
which may look like this:
image: map.png
resolution: 0.1
origin: [0.0, 0.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0
You can read the details here
Then start roscore and try
rosrun map_server map_server mymap.yaml
Be careful with the paths (keep the YAML file and the image in the same folder).
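Since you want to run Dijkstra yourself with OpenCV, you can also build the occupancy grid directly from the image, without going through map_server. A minimal sketch of my own, using the same thresholds as the YAML above and assuming the file is called map.png:
import cv2
import numpy as np

# Load the map image as grayscale and convert it to an occupancy grid,
# using the same convention as map_server: darker pixels are occupied.
img = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)   # assumed file name
occupancy = 1.0 - img.astype(np.float32) / 255.0    # 0.0 = free, 1.0 = occupied

occupied_thresh = 0.65
free_thresh = 0.196

grid = np.full(img.shape, -1, dtype=np.int8)  # -1 = unknown, as in nav_msgs/OccupancyGrid
grid[occupancy > occupied_thresh] = 100       # occupied
grid[occupancy < free_thresh] = 0             # free

print("grid shape:", grid.shape, "occupied cells:", int((grid == 100).sum()))
The resulting grid (0 = free, 100 = occupied, -1 = unknown) can then serve as the graph for Dijkstra's algorithm, with free cells as nodes and 4- or 8-connected neighbours as edges.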

LiDAR Velodyne Veloview PointCloud Data

I'm new to this. I am working on a mapping project and I need help filtering lidar point cloud data. I have two questions and I hope you can help with them:
1) Can you use VeloView to filter point cloud data? Can you crop the captured data? If so, what's the process?
2) Can you export/convert a PCAP file to .csv?
Thanks
Have a look at the VeloView User Guide.
For manipulation and analysis purposes, you can use VeloView's Python console interface, described in section III, Python Console.
For cropping the captured data:
File --> Save As --> [preferred option] --> Frames from.
