Download point cloud as e57 from ROS topic

I am currently working with point clouds in Gazebo, and when I try to download the point cloud information it comes out as a PCD file. Do others with more experience have any tips for getting this information in a more standardized format such as E57?
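If the data is already on disk as a PCD, one route is to convert it to E57 offline with third-party Python libraries. A minimal sketch, assuming the open3d and pye57 packages are installed and using a hypothetical file named cloud.pcd:

    import numpy as np
    import open3d as o3d
    import pye57

    # Load the PCD written by the Gazebo/ROS pipeline (file name is hypothetical).
    pcd = o3d.io.read_point_cloud("cloud.pcd")
    points = np.asarray(pcd.points)

    # Write the XYZ coordinates out as a single E57 scan.
    e57 = pye57.E57("cloud.e57", mode="w")
    e57.write_scan_raw({
        "cartesianX": points[:, 0],
        "cartesianY": points[:, 1],
        "cartesianZ": points[:, 2],
    })
    e57.close()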

Related

How to generate ".svo" file from rosbag for ZED SDK

I would like to generate a point cloud from stereo videos using the ZED SDK from Stereolabs.
What I have now is some rosbags with left and right images (and other data from different sensors).
My problem comes when I extract the images and create videos from them: what I get are videos in a standard format (e.g. .mp4) via ffmpeg, but the ZED SDK needs the .svo format, and I don't know how to generate it.
Is there some way to obtain .svo videos from rosbags?
I would also like to ask: once I get the .svo files, how could I generate the point cloud using the SDK without a graphical interface? I am working on a DGX workstation using ROS (Melodic, Ubuntu 18.04) in Docker, and I cannot get rviz or any other graphical tool to work inside the Docker image, so I think the point cloud generation has to be automated, but I don't know how.
I have to say that this is my first project using ROS, the ZED SDK and Docker, which is why I am asking these (maybe) basic questions.
Thank you in advance.
You can't. The .svo file format is a proprietary format that can only be recorded with a ZED camera through their SDK (or wrapper), can only be read by their SDK/wrapper, and can only be exported by their SDK/wrapper.
To provide some helpful direction: all the functionality and processing you would like to get out of the images with the SDK's features can also be done with trusted open-source community projects. Examples include OpenCV (which bundles many AI/DNN algorithms for object detection, pose estimation and 3D reconstruction), PCL, their ROS wrappers, or other excellent algorithms whose chief API and reference implementation is their ROS node.
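For example, a point cloud can be computed headlessly from a rectified stereo pair with OpenCV alone, no rviz needed. A minimal sketch, assuming rectified left/right images extracted from the rosbag and a 4x4 disparity-to-depth matrix Q from your stereo calibration (all file names hypothetical):

    import cv2
    import numpy as np

    # Load a rectified stereo pair extracted from the rosbag (names hypothetical).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; tune numDisparities/blockSize for your rig.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

    # Q comes from stereo calibration (assumed saved earlier, e.g. via cv2.stereoRectify).
    Q = np.load("Q.npy")
    points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 array of XYZ coordinates
    mask = disparity > disparity.min()
    xyz = points[mask]                             # valid 3D points, ready to save or process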

Is there any way to export Point Cloud data from LiDAR iOS14?

I am new to Metal and ARKit. I have started learning about LiDAR and scene depth data to visualize shapes. Below is the link for the point cloud sample code provided by Apple Developers.
https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth
Can someone please help me export a 3D file from the point cloud, or give some guidance on how to achieve it? Alternatively, is there any way to convert the point cloud data to an MDLMesh so that I can export a file from it?
Yes, there is. See this project: https://github.com/zeitraumdev/iPadLIDARScanExport, which saves a snapshot as an OBJ file. More detailed information is available here:
https://medium.com/zeitraumgruppe/what-arkit-3-5-and-the-new-ipad-pro-bring-to-the-table-d4bf25e5dd87
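For orientation, the OBJ side of that export is very simple: a point cloud with no faces is just "v x y z" vertex lines. A minimal sketch of the file writing (Python here purely for illustration; the linked project itself is Swift/ARKit, and the points list is placeholder data):

    # Each captured point becomes one "v x y z" line in the OBJ file.
    points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]  # placeholder data

    with open("cloud.obj", "w") as f:
        for x, y, z in points:
            f.write(f"v {x} {y} {z}\n")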

Latitude and longitude (location name), upload a file or image from Ionic 4 with Angular 7?

How do I get file properties such as GPS data (latitude and longitude, or a location name), and how do I upload a file or image from Ionic 4?
This is not really the point of Stack Overflow; you are supposed to post when you get stuck, not just request tutorials to be custom-written for you.
You have also requested two separate tutorials, and the second one is not clearly defined.
I'll try to help though.
To get latitude and longitude you need to read the EXIF data of the image. There are libraries out there, and there are guides that show how to use them to extract the lat/lon info:
https://awik.io/extract-gps-location-exif-data-photos-using-javascript/
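The thread is about Ionic/JavaScript, but the EXIF arithmetic is the same in any language: GPS coordinates are stored as degree/minute/second rationals plus an N/S or E/W reference. A sketch in Python with Pillow, purely to illustrate the conversion (file name hypothetical):

    from PIL import Image

    GPS_IFD = 0x8825  # EXIF pointer to the GPS sub-directory

    def dms_to_decimal(dms, ref):
        # dms is (degrees, minutes, seconds); south and west become negative.
        deg, minutes, seconds = (float(v) for v in dms)
        decimal = deg + minutes / 60 + seconds / 3600
        return -decimal if ref in ("S", "W") else decimal

    exif = Image.open("photo.jpg").getexif()
    gps = exif.get_ifd(GPS_IFD)
    # GPS tag numbers: 1=LatitudeRef, 2=Latitude, 3=LongitudeRef, 4=Longitude
    lat = dms_to_decimal(gps[2], gps[1])
    lon = dms_to_decimal(gps[4], gps[3])
    print(lat, lon)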
For uploading a file or image, you might find this tutorial helpful to learn the basics:
https://devdactic.com/ionic-4-image-upload-storage/
But that is just a simple example; you will quickly realise you need more complicated code to secure it, multiple servers to deploy it to, and then there are scaling issues, CDNs and more to think about. This is when people generally turn to something like Firebase Storage, which lets you push files into the cloud and gives you all the structure without having to write it. Start here for a tutorial that explains these concepts:
https://blog.smartcodehub.com/how-to-upload-an-image-to-firebase-from-an-ionic-4-app/
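For a rough idea of the server half that a plain (non-Firebase) upload needs, here is a minimal sketch in Python/Flask; the endpoint name, form field and upload directory are all hypothetical, and a real deployment would add authentication, size limits and more:

    import os
    from flask import Flask, request
    from werkzeug.utils import secure_filename

    app = Flask(__name__)
    UPLOAD_DIR = "/tmp/uploads"  # hypothetical; use durable storage in production
    os.makedirs(UPLOAD_DIR, exist_ok=True)

    @app.route("/upload", methods=["POST"])
    def upload():
        file = request.files["photo"]  # field name must match the client's form data
        filename = secure_filename(file.filename)  # strip path tricks from the name
        file.save(os.path.join(UPLOAD_DIR, filename))
        return {"ok": True, "name": filename}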

how to draw custom lands for OSM

I would like to know how to draw custom land areas for an OpenStreetMap project. My goal is to reproduce a fantasy map with OSM technology.
It's not clear to me how I can generate land data (continents, islands and so on).
I know it is possible because the project https://opengeofiction.net/ does basically the same thing.
I am a new OSM user and I am taking my first steps with GIS software.
I have built my own tile server in the cloud (Ubuntu 18.04) by following different tutorials.
I installed JOSM and QGIS to edit maps, but I feel a bit lost with all the options and features.
I have already posted questions on the OpenStreetMap forum but got no response.
I am sure I need only a little hint to get started.
My expected result is to be able to draw a little "imaginary" island.
On a small scale you can use JOSM without OSM download/upload, and just save your edited data locally as an OSM XML file.
That file can then be fed into the different renderers as a source file.
On a large scale you would end up creating a copy of the whole OSM stack and serving your own data, like https://wiki.openstreetmap.org/wiki/OpenGeofiction does.
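To make the small-scale route concrete: the local file JOSM saves is plain OSM XML. Here is a rough Python sketch (standard library only) that writes such a file for a tiny imaginary island; the ids, coordinates and output name are made up, new local objects conventionally get negative ids, and a natural=coastline ring must run counter-clockwise so land is on the way's left side:

    import xml.etree.ElementTree as ET

    # Build a tiny .osm file: three nodes forming a closed counter-clockwise
    # ring tagged natural=coastline (land on the left = an island).
    osm = ET.Element("osm", version="0.6", generator="sketch")
    coords = [("-1", "0.000", "0.000"), ("-2", "0.000", "0.010"), ("-3", "0.010", "0.005")]
    for nid, lat, lon in coords:
        ET.SubElement(osm, "node", id=nid, lat=lat, lon=lon)
    way = ET.SubElement(osm, "way", id="-10")
    for ref in ("-1", "-2", "-3", "-1"):  # repeat the first node to close the ring
        ET.SubElement(way, "nd", ref=ref)
    for k, v in (("natural", "coastline"), ("place", "island"), ("name", "Imaginary Island")):
        ET.SubElement(way, "tag", k=k, v=v)
    ET.ElementTree(osm).write("island.osm", encoding="UTF-8", xml_declaration=True)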

how to do image processing through bluemix

I'm just getting started with IBM Bluemix and I don't know how to use its image processing tools. Please help me out with this, and also please tell me how to load images into a Bluemix image processing tool.
Have a look at the AlchemyAPI services on IBM Bluemix.
AlchemyAPI offers a set of three services that enable businesses and developers to build cognitive applications that understand the content and context within text and images. For instance, using AlchemyAPI, developers can perform tasks such as extracting the people, places, companies, and other entities mentioned in a news article, or analyzing an image to understand the contents of the photo.
AlchemyLanguage
AlchemyLanguage is a collection of 12 APIs that offer text analysis through natural language processing. The AlchemyLanguage APIs can process text and help you to understand its sentiment, keywords, entities, high-level concepts and more.
AlchemyVision
AlchemyVision understands and analyzes complex visual scenes without needing textual clues. Developers can use this API to do tasks like image recognition, scene recognition, and understanding the objects within images.
AlchemyData
AlchemyData provides news and blog content enriched with natural language processing to allow for highly targeted search and trend analysis. Now you can query the world's news sources and blogs like a database.
They have a great tutorial here on how to get started (Step 1).
If you are looking for image processing using Python, here is a great tutorial with simple steps on how to kick off.
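As a taste of what that Python route looks like, a minimal sketch with Pillow, the library most such tutorials start with (file names hypothetical):

    from PIL import Image

    # Open an image, convert it to grayscale, shrink it, and save the result.
    img = Image.open("input.jpg")
    gray = img.convert("L")
    small = gray.resize((img.width // 2, img.height // 2))
    small.save("output.jpg")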
More examples or references:
Bluemix - Tutorials Videos
Analyze notes with the AlchemyAPI service on IBM Bluemix
Getting started with the Visual Recognition service
Real Time Analysis of Images Posted on Twitter Using Bluemix
Editors' picks: Top 15 Bluemix tutorials
If you would like to use runtimes, you could use the ImageMagick libraries, recently added to Cloud Foundry. The binaries should be at this path:
/var/vcap/packages/imagemagick/bin
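A minimal sketch of calling those binaries from your application code (Python here for illustration; the image file names are hypothetical):

    import os
    import subprocess

    IM_BIN = "/var/vcap/packages/imagemagick/bin"  # path quoted above

    # Resize input.jpg to fit within 800x600, writing the result to output.jpg.
    subprocess.run(
        [os.path.join(IM_BIN, "convert"), "input.jpg", "-resize", "800x600", "output.jpg"],
        check=True,  # raise if ImageMagick exits with an error
    )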
Otherwise you can refer to your chosen buildpack's specific options: for example, with the PHP buildpack you could use the GD library, installing it through the composer utility:
{ "require": { "ext-gd": "*" } }
Another option could be to use a Docker container instead of a runtime, which lets you keep the scalability benefits of Bluemix while giving you wider configuration options.
Generally speaking, it depends a lot on the technology you would like to use (Java/PHP/Python, etc.).
