Collect a 3D point cloud from Gazebo - ROS

I want to collect a point cloud of a simulated space in Gazebo. I have tried scanning the environment, saving the scans as individual PCD files, and then concatenating them, but this did not work. I have also tried taking the scans from Gazebo and visualising them in Open3D, but this ended up being the same as concatenating the PCD files. I know the issue is that I am not transforming the messages correctly, but I have not found a working method with clear steps to execute the transformation. I am doing this on ROS Noetic and would really appreciate help.

You should be using rosbag record for saving topic data inside ROS. This command records topic data, saves it to a file, and lets you analyze or play it back later.
In your situation, you would also need to record the transform topic if you're having tf issues.
To record data you can simply run a command such as: rosbag record /tf /my_scan_topic
Based on your comments, what you actually want to do is first combine multiple scans from the lidar into a single point cloud. The easiest way might be to use the laser_assembler package; since this is all in ROS, the transforms will be handled for you automatically. Then, once all your scans are assembled, write the result out to a PCD file.
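As a sketch, a minimal launch file for the assembler could look like the following (the scan topic and fixed frame are assumptions for your setup):

```xml
<launch>
  <node pkg="laser_assembler" type="laser_scan_assembler" name="my_assembler">
    <!-- Feed the assembler your lidar topic -->
    <remap from="scan" to="/my_scan_topic"/>
    <!-- Keep up to 400 scans in the rolling buffer -->
    <param name="max_scans" type="int" value="400"/>
    <!-- Frame the assembled cloud is expressed in; tf must provide it -->
    <param name="fixed_frame" type="string" value="odom"/>
  </node>
</launch>
```

A client node then periodically calls the assembler's assemble_scans2 service to get the combined sensor_msgs/PointCloud2, which you can republish and save to disk with pcl_ros's pointcloud_to_pcd node.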

Related

Manually build an occupancy grid on ros

I am trying to build and publish a /nav_msgs/OccupancyGrid message to test another node that depends on actual data from a robot. Before I use real data, I just wanted to build a message from an array or matrix of numbers without any real sensors. How can I do that?
Thanks!
If you have a look at the nav_msgs/OccupancyGrid message definition, you will see that the data is just stored as an array of int8s alongside some MapMetaData. So if you just need something filled in to test the other node, without any assumptions about the usefulness or plausibility of the data, you can write a script that fills the data structure with random values.
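A minimal sketch of that in plain Python (the rospy usage in the comments is illustrative only, and the field values there are placeholders):

```python
import random

def make_grid_data(width, height):
    """Row-major occupancy array as used by nav_msgs/OccupancyGrid.data:
    values 0..100 are occupancy probability, -1 means unknown."""
    return [random.randint(-1, 100) for _ in range(width * height)]

data = make_grid_data(width=4, height=3)

# In a rospy node (not run here) you would then do roughly:
#   msg = nav_msgs.msg.OccupancyGrid()
#   msg.header.frame_id = "map"
#   msg.info.resolution = 0.05          # metres per cell
#   msg.info.width, msg.info.height = 4, 3
#   msg.data = data
#   pub.publish(msg)
```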
If the data needs to be somewhat useful and plausible, you probably should have a look at the Map Server package. It allows you to generate a nav_msgs/OccupancyGrid from an image. This approach might even be easier overall than generating random data.
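For reference, map_server is driven by a small YAML file pointing at the image (file names and values here are placeholders):

```yaml
image: testmap.png        # occupancy image; darker pixels = occupied
resolution: 0.1           # metres per pixel
origin: [0.0, 0.0, 0.0]   # x, y, yaw of the lower-left pixel
occupied_thresh: 0.65     # pixel occupancy above this => occupied
free_thresh: 0.196        # pixel occupancy below this => free
negate: 0
```

Running rosrun map_server map_server mymap.yaml then publishes the resulting nav_msgs/OccupancyGrid on /map.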

AWS Sagemaker BlazingText Multiple Training Files

Trying to find out if you can use multiple files for your dataset in Amazon Sagemaker BlazingText.
I am trying to use it in Text Classification mode.
It appears that it's not possible, certainly not in File mode, but I'm wondering whether Pipe mode supports it. I don't want to have all my training data in one file, because if it's generated by an EMR cluster I would need to combine it afterwards, which is clunky.
Thanks!
You are right in that File mode doesn't support multiple files (https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html).
Pipe mode would in theory work but there are a few caveats:
The format expected is Augmented Manifest (https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-manifest.html). This is essentially JSON Lines, for instance:
{"source":"linux ready for prime time ", "label":1}
{"source":"bowled by the slower one ", "label":2}
and then you have to pass the AttributeNames argument to the CreateTrainingJob SageMaker API (it is all explained in the link above).
With Augmented Manifest, currently only one label is supported.
In order to use Pipe mode, you would need to modify your EMR job to generate Augmented Manifest format, and you could only use one label per sentence.
At this stage, concatenating the files generated by your EMR job into a single file seems like the best option.
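To make the pieces above concrete, here is a sketch of writing the augmented manifest and building the relevant part of the CreateTrainingJob request (the bucket, image URI, and file names are placeholders; the dict would be passed to boto3's SageMaker client as create_training_job(**params)):

```python
import json

# Write the training data as an augmented manifest (JSON Lines).
records = [
    {"source": "linux ready for prime time ", "label": 1},
    {"source": "bowled by the slower one ", "label": 2},
]
with open("train.manifest", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Relevant slice of the CreateTrainingJob input: Pipe mode plus
# AttributeNames telling SageMaker which JSON keys to stream.
params = {
    "AlgorithmSpecification": {
        "TrainingImage": "<blazingtext-image-uri>",
        "TrainingInputMode": "Pipe",
    },
    "InputDataConfig": [{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "AugmentedManifestFile",
            "S3Uri": "s3://my-bucket/train.manifest",
            "AttributeNames": ["source", "label"],
        }},
    }],
}
```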

Bag to Depth Matrix

I am working with the Persee 3D camera from Orbbec. I am currently having problems with the processing of the bag files. Someone has provided some code here: https://github.com/JaouadROS/bagtodepth. I tried looking over this and I can't quite make heads or tails of it. I really only have two main questions:
1: Where is the output being saved to? Will it be saved into one of his directories or will it be output somewhere else?
2: Is the output a sort of stream or will it just convert the data to a certain point?
I have successfully downloaded it (into my catkin_ws directory) and run the program with the Persee, but it doesn't help if I can't access the output. I am looking to access this matrix in real time and was hoping I could just adapt his code to my project. He does mention something about information being stored in depthclean. Sadly, the person who posted this has not replied to any of the messages I have sent. Thanks!

Is there a simple DirectShow filter that can mix audio together of the exact same format?

I have a DirectShow application written in Delphi 6 using the DSPACK component library. I want to be able to mix together audio coming from the output pins from multiple Capture Filters that are set to the exact same media format. Is there an open source or "sdk sample" filter that does this?
I know that intelligent mixing is a big deal and that I'd most likely have to buy a commercial library to do that. But all I need is a DirectShow filter that can accept wave audio input from multiple output pins and does a straight addition of the samples received. I know there are Tee Filters for splitting a single stream into multiple streams (one-to-many), but I need something that does the opposite (many-to-one), preferably with format checking on each input connection attempt, so that any attempt to attach an output pin with a different media format than the ones already added is rejected with an error. Is there anything out there?
I'm not sure about anything available out of the box; if something exists, it would definitely be a third-party component.
The complexity of creating this custom filter is not very high (it is not rocket science to build such a component yourself for a specific need). You basically need to convert all input audio to the same PCM format, match the timestamps, add the samples, and then deliver the result via the output pin.
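The straight-addition step can be sketched in a few lines (plain Python for illustration; a real DirectShow filter would do this in C++ over the media sample buffers, and the function name is mine):

```python
def mix_pcm16(streams):
    """Mix equal-format 16-bit PCM streams (lists of samples) by straight
    addition, clamping to the int16 range to avoid wrap-around distortion."""
    if len({len(s) for s in streams}) > 1:
        raise ValueError("streams must have the same number of samples")
    mixed = []
    for samples in zip(*streams):
        total = sum(samples)                      # straight addition
        mixed.append(max(-32768, min(32767, total)))  # clamp to int16
    return mixed

# Two pins' buffers summed sample-by-sample; the third sample clips.
print(mix_pcm16([[1000, -20000, 30000], [500, -20000, 10000]]))
# -> [1500, -32768, 32767]
```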

How to programmatically manipulate an EPS file

I am looking for libraries that would help in programmatically manipulating EPS (Encapsulated PostScript) files. Basically, what I want to do is the following:
Show / Hide preexisting layers in the EPS file (toggle them on and off)
Fill (color) named shapes in the EPS file
Retrieve coordinates of named points in the EPS file
draw shapes on a new layer in the EPS file
on a server, without user interaction (scripting Adobe Illustrator won't work)
I am aware that the EPS file format is based on the PostScript language and must therefore be interpreted - for creating simple drawings from scratch this is rather easy. But for actually modifying existing files, I guess you need a library that interprets the file and provides some kind of "DOM" for manipulation.
Can I even have named shapes and points inside an EPS file?
EDIT: Assuming I had the layers saved in separate EPS files. Or better still: Just the "data" part of the layers. Could I then concatenate this stuff to create a new EPS file? And append drawing commands? Fill existing named objects?
This is extremely difficult, and here is why: a PS file is a program whose execution results in pixels put on a page. Instructions in a PS program are at the level of "draw a line using the current pen and color" or "rotate the coordinate system by 90 degrees"; there is no notion of layers or complex objects as you would see them in a vector drawing application.
There are very few conventions in the structure of PS files that allow external programs to modify them: pages are marked separately, and font resources and media dimensions are spelled out in special comments. This is especially true for Encapsulated PostScript (EPS), which must follow these guidelines because EPS files are meant to be read by applications, unlike general PS sent to a printer. A PS program is at a much lower level of abstraction than what you need, and there is no way to reconstruct the higher-level structure from arbitrary PS code. In principle, a PS file could produce different output every time it is printed, because it may query its execution environment and branch based on random decisions.
Applications like Adobe Illustrator emit PS code that follows a rigid structure. There is a chance that this could be parsed and manipulated without interpreting the code. I would still suggest rethinking the current architecture: you are at too low a level of abstraction for what you need.
PDF is not easily manipulable either, since it is (in general) not possible to change the existing parts of a PDF, only to add to them. EPS is the same as PostScript except that it has a bounding-box header.
The problem with doing what you want is that PS is a programming language whose output is (mostly) some kind of image. So the question could just as well be stated as "how can I draw shapes on a new layer in a Java file". You probably need to generate the complete PS on the fly, or use another image format altogether.
I am not aware of any available libraries for this, but you may be able to build something to meet your needs based on epstool from Ghostscript/GSview.
I think your best bet is to generate a PDF from the EPS, manipulate the PDF, and then convert back to EPS. PDF is much more "manipulable" than EPS.