I am new to ROS. I have rosbag files recorded from a Velodyne lidar mounted on top of a car. However, the lidar data is in 'velodyne_msgs/VelodyneRawScan' messages. I want to visualize it using RViz, which requires 'sensor_msgs/PointCloud2'. Is there a way to convert it into a point cloud?
Thank you very much.
Have you tried this package: http://wiki.ros.org/velodyne_pointcloud?
As the documentation describes, once the package is installed you can run the node that converts the data as follows:
$ rosrun velodyne_pointcloud cloud_node velodyne_packets:={your/velodyne/topic} velodyne_points:={your/expected/pcl/topic}
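Assuming your bag contains the raw packet topic, a typical session might then be to run the node above in one terminal and, in others (file names are placeholders; depending on the sensor model, the node may also need a calibration file parameter):
$ rosbag play your_recording.bag
$ rosrun rviz rviz
In RViz, add a PointCloud2 display subscribed to the converted topic (by default /velodyne_points).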
OS: Ubuntu 18.04
Lidar: Livox Mid 40
Hello,
I'm using a Livox Mid 40, which exports point clouds to .lvx files or rosbags. I am looking for a way to convert these files to a common point cloud format like PLY, LAS, or LAZ so I can use them in programs like CloudCompare, MeshLab, or others. I tried some solutions like rosrun pcl_ros bag_to_pcd <input_file.bag> <output_directory> and cloud_assembler, but the first one creates multiple PCDs rather than a single point cloud (one PCD per frame) and the second one doesn't work (cloud_assembler_try).
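For reference, the kind of merge I'm after could presumably be scripted along these lines (an untested sketch assuming the open3d Python package; note it simply concatenates the frames in the sensor frame and does not compensate for any motion):

import glob
import open3d as o3d

# Concatenate the per-frame PCDs that bag_to_pcd wrote into one cloud
merged = o3d.geometry.PointCloud()
for path in sorted(glob.glob("output_directory/*.pcd")):
    merged += o3d.io.read_point_cloud(path)

o3d.io.write_point_cloud("merged.ply", merged)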
I'm really confused!
Any help would be appreciated.
I have a GeoJSON file with a FeatureCollection (more than 300,000 features) of LineStrings; they are road traffic records. I need to convert it to the MVT format using Tippecanoe. I'm trying to convert the GeoJSON with these parameters:
tippecanoe data.geojson -pf -pS -zg --detect-shared-borders -o data.mbtiles -f
Then I upload it to my Mapbox account as a tileset and render it with Mapbox GL JS. And there is a problem: not all of the features are visible. Moreover, if I reconvert the GeoJSON file, I get a different result! So, what are the best options to pass to tippecanoe to convert all the features (LineStrings) without oversimplification, for use with Mapbox GL JS?
P.S. One more thing I noticed: datasets uploaded with Mapbox Studio and then converted to tilesets show info like "This layer contains mostly LineStrings", but for my own tilesets converted with tippecanoe I see the message "No dominant geometry type".
-ae will automatically increase the maxzoom if features are still being dropped at that zoom level. But when zoomed out it doesn't always look good, depending on the type of features (e.g. missing cadastre parcels don't look good)...
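Combining that with the switches that disable dropping and simplification might look like this (a combination to experiment with, not a guaranteed fix):
tippecanoe -o data.mbtiles -f -zg -ae --no-feature-limit --no-tile-size-limit --no-line-simplification --detect-shared-borders data.geojson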
I'm trying to convert a tangent-space normal map to a height/displacement map. This certainly won't be 100% accurate in terms of the "exact height" of each pixel, but the relative height from each pixel to the next is more than enough.
Available algorithm + info:
http://www.cournia.com/devnull/n2h/n2h.pdf
Questions:
1. How can I convert a normal map to a height map in Photoshop/GIMP? Is there a way using these tools? Besides, I don't want to use CrazyBump or any other texture tools; this has to run via the command line later on. A Photoshop solution is more or less just a preliminary step to understand the workflow a bit better.
2. If it's not possible with PS/GIMP, how can I include the algorithm in an ImageMagick process?
I've already checked Doom3's Normal2Height, CrazyBump, and all the other texture tools like NVIDIA's Photoshop plugin, xNormal, AwesomeBump, SSBump, etc. I'd need this function working with ImageMagick.
Any help is very much welcome; Python is preferable.
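To make the goal concrete, one standard way to integrate a normal map into a height field is the Frankot-Chellappa frequency-domain method; here is an untested numpy sketch (file names and the normal map's Y convention are assumptions, so you may need to flip ny):

import numpy as np
from PIL import Image

# Load the tangent-space normal map and remap [0, 255] -> [-1, 1]
n = np.asarray(Image.open("normal.png").convert("RGB"), dtype=np.float64) / 255.0 * 2.0 - 1.0
nx, ny, nz = n[..., 0], n[..., 1], n[..., 2]
nz = np.clip(nz, 1e-3, None)  # avoid division by zero

# Per-pixel slopes implied by the normals: dh/dx and dh/dy
p = -nx / nz
q = -ny / nz

# Frankot-Chellappa: least-squares integration of the gradient field in the frequency domain
h, w = p.shape
u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi, np.fft.fftfreq(h) * 2 * np.pi)
denom = u**2 + v**2
denom[0, 0] = 1.0  # avoid 0/0 at the DC term

P, Q = np.fft.fft2(p), np.fft.fft2(q)
H = (-1j * u * P - 1j * v * Q) / denom
H[0, 0] = 0.0  # absolute height is arbitrary, only relative height matters

height = np.real(np.fft.ifft2(H))
height = (height - height.min()) / (height.max() - height.min())  # normalize to [0, 1]
Image.fromarray((height * 255).astype(np.uint8)).save("height.png")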
thx
There are a couple of possibilities for doing that with ImageMagick.
Firstly, you could implement your own process module. When running configure to install ImageMagick, you would then do:
./configure --with-modules=yes
Then, when you want to apply your bumpmap processing on the command-line, you would do:
convert input.png -process analyse <param1> <param2> result.png
Your processing needs to be written in C/C++, and the best description I know of is on Alan Gibson's webpages here.
Secondly, you could write your entire processing using Magick++, which is the C++ binding for ImageMagick. The best description I know of is here, with sample code here.
I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map
proj_x, proj_y <--> cam_x, cam_y for scattered point pairs
My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:
[pgridx, pgridy] = meshgrid(allprojxpts, allprojypts)
fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);
and the reverse for the camera to projector mapping
Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle).
I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick.
So my question is one of the following:
1) How can I interpolate from scattered points to a grid in OpenCV? (I.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?)
2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented in either LabVIEW or OpenCV.
Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation"
So this ends up being a specific fix for bugs in LabVIEW 8.5. Nevertheless, since they're poorly documented and I've spent a day of pain on them, I figure I'll post them so someone else googling this problem will come across them.
1) Meshgrid bombs. I don't know when this was fixed; it is definitely a bug in 8.5. Solution: use the meshgrid-like function on the interpolation & extrapolation palette instead, or upgrade to LV2009, which apparently works (thanks Underflow).
2) Griddata is defective in 8.5. This is badly documented. The 8.6 upgrade notes mention a problem with griddata and the "cubic" setting, but it is in fact also a problem with the default "linear" setting. Solutions, in descending order of kludginess: 1) pass the 'v4' flag, which does some kind of spline interpolation but does not have the bugs; 2) upgrade to at least version 8.6; 3) beat the NI engineers with reeds until they document bugs properly.
3) I was able to use the OpenCV remap function to do the actual transformation from one image to another. I tried just using the built-in MATLAB interp2 VI, but it choked on large arrays and gave me out-of-memory errors. On the other hand, it is fairly straightforward to map an IMAQ image to an IPL image, so this isn't that bad, apart from the addition of the outside library.
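For reference, the whole scattered-map-to-remap pipeline outside LabVIEW might look roughly like this (an untested sketch using scipy and the modern OpenCV Python bindings rather than the 1.0 C API; file names and the projector resolution are placeholders):

import numpy as np
import cv2
from scipy.interpolate import griddata

# Scattered correspondences from the projected-dot calibration:
# proj_pts[i] <-> cam_pts[i], each array of shape (N, 2)
proj_pts = np.load("proj_pts.npy")
cam_pts = np.load("cam_pts.npy")

h, w = 768, 1024  # projector resolution (placeholder)
gx, gy = np.meshgrid(np.arange(w), np.arange(h))

# Interpolate the scattered camera coordinates onto a dense projector-pixel grid
map_x = griddata(proj_pts, cam_pts[:, 0], (gx, gy), method="linear")
map_y = griddata(proj_pts, cam_pts[:, 1], (gx, gy), method="linear")

# griddata returns NaN outside the convex hull of the dots; zero those out
map_x = np.nan_to_num(map_x).astype(np.float32)
map_y = np.nan_to_num(map_y).astype(np.float32)

# Warp the camera image into projector space
cam_img = cv2.imread("camera.png")
warped = cv2.remap(cam_img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("warped.png", warped)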