How can I publish pre-serialized data to ROS?

I'm facing a situation where I have multiple robots, most running full ROS stacks (complete with a Master), and I'd like to selectively route some topics through another messaging framework to the other robots (some of which are not running ROS).
The naive way to do this works: set up a node that subscribes to the ROS topics in question and sends them over the network, after which another node republishes them (if the receiver is running ROS). Great, but it seems odd to have to do this much serializing. Right now the message goes from its message type to the ROS serialization, back to the message type, then to a different serialization format (currently pickle), across the network, then back to the message type, then back to the ROS serialization, then back to the message type.
So the question is: can I simplify this? How can I operate on the ROS-serialized data (i.e., subscribe without rospy automagically deserializing for me)? http://wiki.ros.org/rospy/Overview/Publishers%20and%20Subscribers suggests that I can access the connection information as a dict of strings, which may be half of the solution, but how can the other end take the connection information and republish it without first deserializing and then immediately reserializing?
Edit: I just found https://gist.github.com/wkentaro/2cd56593107c158e2e02, which seems to solve half of this. It uses AnyMsg to avoid deserializing on the ROS subscriber side, but when it republishes, it still deserializes and immediately reserializes the message. Is what I'm asking impossible?

Just to close the loop on this, it turns out you can publish AnyMsgs, it's just that the linked examples chose not to.
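This works because rospy.msg.AnyMsg never decodes the payload: its deserialize() just stashes the raw bytes, and its serialize() writes them back out unchanged. Since rospy isn't importable outside a ROS install, the sketch below uses a stub class that mirrors that behavior to show why a relay can republish without any typed deserialization:

```python
import io

class AnyMsg:
    """Stub mirroring rospy.msg.AnyMsg: the payload is never decoded."""
    def __init__(self):
        self._buff = b""

    def deserialize(self, buff):
        self._buff = buff          # just keep the ROS-serialized bytes as-is
        return self

    def serialize(self, out):
        out.write(self._buff)      # byte-for-byte passthrough

raw = b"\x05\x00\x00\x00hello"     # stand-in for a ROS-serialized payload
out = io.BytesIO()
AnyMsg().deserialize(raw).serialize(out)
print(out.getvalue() == raw)       # True: no deserialize/reserialize cycle
```

In an actual relay node, per the answer above, you would create a rospy.Publisher with AnyMsg as the message class and call pub.publish(msg) straight from the subscriber callback.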

Related

django channels and running event loop

For a game website, I want a player to play either against a human or an AI.
I am using Django + Channels (Django-4.0.2 asgiref-3.5.0 channels-3.0.4)
It has been a long learning journey...
Human vs. human: the game takes place in the web browser, turn by turn. Each time a player connects, a websocket connection is opened; a move is sent through the socket, processed by the consumer (validated and saved in the database), and forwarded to the other player.
This is managed entirely with synchronous programming.
Human vs. AI: I try to use the same route as before. A test branch checks whether the game is against the computer and computes a move instead of receiving it from the other end of the websocket. Computing this AI move can be a blocking operation, as it can take from 2 to 5 seconds.
I don't want the receive method of the consumer to wait for the AI to return its move, since I have other operations to perform quickly (like updating some information on the client side).
I then thought I could easily take advantage of the event loop that the Channels framework supposedly already runs: I could send the AI thinking process to this loop and return the result to the client later through the consumer's send method.
However, when I write:
loop = asyncio.get_event_loop()
loop.create_task(my_AI_thinking())
Django raises a RuntimeError (the same as described here: https://github.com/django/asgiref/issues/278) telling me there is no running event loop.
The solution seemed to be to upgrade asgiref to 3.5.0, which I did, but the issue was not solved.
I think I am a little short on background, and some enlightenment would help me understand the root cause of this failure.
My first questions would be:
In the combination Django + Channels + ASGI, which component is in charge of running the event loop?
How can I check whether an event loop is indeed running, regardless of the thread?
Maybe your answers will raise other questions.
Did you try running your event-loop example on Django 3.2 (and/or with a different Python version)? I experienced various problems with Django 4.0 and Python 3.10, so I am sticking with Django 3.2 and Python 3.7/3.8/3.9 for now; maybe your errors are among these problems?
If you can't get the event loop running, I see two possible alternative solutions:
Open two WS connections: one only for the moves, and the other for all the other stuff, such as updating information on the player's UI.
You can also use the multiprocessing (or threading) module to "manually" send the AI-move calculation to another thread, and then join the threads again after receiving the result (the move). To be honest, this kind of concurrency in Python is quite simple and pretty handy if you are familiar with the idea of multithreaded applications.
Unfortunately, I have not yet used event loops in Channels myself; maybe someone more experienced in the matter will be able to address your issue better.
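On the first question: as I understand it, the event loop is run by the ASGI server (Daphne, uvicorn, etc.). An AsyncConsumer's handlers execute inside that loop, while a SyncConsumer's handlers run in a worker thread that has no running loop, which is why asking for the loop raises there. This stdlib-only sketch (the ai_move/receive names are made up for illustration) shows both situations:

```python
import asyncio

async def ai_move():
    # Stand-in for the AI; a real 2-5 s *blocking* call should be wrapped
    # in asyncio.to_thread() so it doesn't stall the event loop.
    await asyncio.sleep(0.01)
    return "e7e5"

async def receive():
    # Inside a coroutine (e.g. an AsyncConsumer handler) a loop IS running,
    # so asyncio.create_task() works:
    task = asyncio.create_task(ai_move())
    # ... do the quick client-side updates here, then collect the move:
    return await task

# In plain synchronous code there is no running loop -- the exact
# RuntimeError from the question:
try:
    asyncio.get_running_loop()
except RuntimeError as err:
    print("sync context:", err)

move = asyncio.run(receive())
print(move)
```

If the real AI call is CPU-bound or otherwise blocking, wrapping it in asyncio.to_thread() (Python 3.9+) keeps the loop responsive while it runs.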

What is the data contained in ROS topic velodyne_msgs/VelodynePacket Message?

I am writing a lightweight program to monitor received beams for a lidar. Because of its lightweight nature, I would prefer not to cache the entire UDP data packet or the point cloud data.
The question is: what data is contained in the ROS message velodyne_msgs/VelodynePacket? This message contains smaller data, but I do not know whether it is related.
I read the ROS Wiki on this topic, but the link for VelodynePacket did not provide useful information on its contents.
Check the message definition to see what fields a message contains and their types. Message files usually either have field names that are self-explanatory or comments (# text) describing the fields. You can look at the message definitions either online or locally. To look at them locally, use roscd to get to the package directory (roscd <package_name>/msg) and then cat to see the contents of the message file. In your case, this would be:
roscd velodyne_msgs/msg
cat VelodynePacket.msg
cat VelodyneScan.msg
The relevant message files are available online from the page you linked to:
http://docs.ros.org/api/velodyne_msgs/html/msg/VelodyneScan.html
http://docs.ros.org/api/velodyne_msgs/html/msg/VelodynePacket.html
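For convenience, here is what those two definitions contain (reproduced from memory of the velodyne_msgs package, so double-check against the links above): a VelodynePacket is just the raw 1206-byte UDP payload plus a timestamp, and a VelodyneScan is the vector of such packets making up one revolution.

```
# VelodynePacket.msg
time stamp          # packet timestamp
uint8[1206] data    # raw UDP packet contents

# VelodyneScan.msg
Header header               # standard ROS header
VelodynePacket[] packets    # one revolution's worth of raw packets
```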
Regarding your specific question about creating a lightweight application, you have a few options.
Use the provided ROS message and subscribe to it. Most of the time, if you don't have a ton of large data traveling around, you'll be okay and able to keep up with real-time data. The majority of the overhead associated with ROS usually comes from the network transport, so if that's a problem, you'll need to avoid passing the data over ROS.
Put your code in a ROS nodelet. This gives you the advantages of the ROS data abstractions while eliminating the network data transfer that occurs between nodes; it is akin to passing a pointer to the data.
If you really don't want all the scan data, but still want to use ROS, you can write your own driver node. This will read from the LIDAR the data you want and discard the data you don't. You can do the raw data processing in that node (no ROS message required) or publish the data you care about and do the processing in another node.

Kafka KTables missing data when joining KStream to KTable

Has anyone posted a response to this problem? There have been other posts with no answers. Our situation is that we are pushing messages onto a topic that backs a KTable in the first step of our stream process. We then pull a small amount of data from those messages and pass it along. We do multiple computations on that smaller amount of data for grouping and aggregation. At the end of the streaming process, we simply want to join back to that original topic via a KTable to pick up the full message content again. The results of the join are only a subset of the data, because it cannot find the entries in the KTable.
This is just the beginning of the problem. In another case, we are using KTables as indexes for lookups meant to enrich the incoming data. Think of these lookups as identifying whether we have seen a specific pattern in the streaming message before. If we have seen the pattern, we want to tag it with an ID (used for grouping) pulled from an existing KTable. If we have not seen the pattern before, we assign it an ID and place it back into the KTable to be used to tag future messages. What we have found is that there is no guarantee that the information will be present in the KTable for future messages. This lack of a guarantee seems to make KTables useless. We cannot figure out why there is so little discussion of this on the forums.
Finally, none of this seemed to be a problem when running a single instance of the streams application. However, as soon as our data got large and we were forced to run 10 instances of the app, everything broke. Also, there is no way we could use something like GlobalKTables, because there is too much data to load into a single machine's memory.
What can we do? We are currently planning to abandon KTables altogether and use something like Hazelcast to store the lookup data. Should we just move to Hazelcast Jet and drop Kafka Streams altogether?
Adding flow:
Kafka data flow
I'm sorry for this non-answer answer, but I don't have enough points to comment...
The behavior you describe is definitely inconsistent with my understanding and experience with streams. If you can share the topology (or a simplified one) that is causing the problem, there might be a simple mistake we can point out.
Once we get more info, I can edit this into a "real" answer...
Thanks!
-John

Simple classification with Kafka Streams

I am currently trying to find a straightforward and performant way to classify records with Kafka Streams.
All the records contain at least an id and a failed property.
(id is just a String and failed is Boolean)
The idea is to initially classify all incoming records as "message".
Once one of the incoming records has the failed field set, this should be "persisted" somewhere and the record should be classified as "failure".
From then on, every incoming record with the same id should be classified as "failure" as well, no matter whether its failed property is set.
I'm thinking about either using the internal state store of Kafka Streams (together with the interactive-query feature) or an external database that is queried each time a record comes in. The state store of Kafka Streams itself sounds like the more lightweight solution.
Here is a little concept sketch to, hopefully, help understand the problem.
Does someone have an idea on how to tackle this the right way?
Thank you
All the best
- Tim
Your approach sounds good to me. I don't think you need the interactive-queries feature, though. Just define a custom Transformer and attach a key-value store to it. During processing, if you get a message with failed=true, you put the ID into the store. For each incoming message with failed=false, you additionally check the store to see whether there was a previous failed message with the same ID.
To persist failed messages, you can split your stream into two (maybe using branch()) and write the failed messages into a special topic.
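The Transformer's logic itself is tiny. Here is a sketch of just that logic in Python (dicts stand in for records and a set stands in for the attached key-value store; this illustrates the classification rule, not the Kafka Streams API):

```python
# Set of ids that have ever carried failed=True -- the stand-in for the
# Transformer's key-value state store.
failed_ids = set()

def classify(record):
    """Return "failure" if this record failed or its id failed before."""
    rid, failed = record["id"], record["failed"]
    if failed:
        failed_ids.add(rid)        # "persist" the id in the store
    if rid in failed_ids:
        return "failure"
    return "message"

print(classify({"id": "a", "failed": False}))  # message
print(classify({"id": "a", "failed": True}))   # failure
print(classify({"id": "a", "failed": False}))  # failure: id seen before
print(classify({"id": "b", "failed": False}))  # message
```

In the real topology the store must be a persistent (changelogged) state store attached to the Transformer, so the set of failed ids survives restarts and rebalances.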

Distributed message passing in D?

I really like the message-passing primitives that D implements. I have only seen examples of message passing within a single program, though. Is there support for distributing the messages over, e.g., a network?
The message-passing functions are in std.concurrency, which only deals with threads, so that style of message passing currently works between in-process threads only. There is no RMI or anything like that in Phobos. That's not to say that we'll never get something like that in Phobos (stuff is being added to Phobos all the time), but it doesn't exist right now.
There is, however, the std.socket module, which deals with talking to sockets and is obviously network-related. I haven't used it myself, but it looks like it sends and receives void[]. So it's not as nice as sending immutable objects around as you do with std.concurrency, but it does allow you to do network communication via sockets, and presumably in a much nicer manner than if you were using C calls directly.
It seems this has been considered. From the Phobos documentation (found through Jonathan M Davis's answer):
This is a low-level messaging API upon which more structured or restrictive APIs may be built. The general idea is that every messageable entity is represented by a common handle type (called a Cid in this implementation), which allows messages to be sent to in-process threads, on-host processes, and foreign-host processes using the same interface. This is an important aspect of scalability because it allows the components of a program to be spread across available resources with few to no changes to the actual implementation.
Right now, only in-process threads are supported and referenced by a more specialized handle called a Tid. It is effectively a subclass of Cid, with additional features specific to in-process messaging.
