Synchronize between publisher and subscriber - ROS

I am working on a test program which subscribes to one topic published by the main program. There is only a single message on this topic, published by the main program. Now it might happen that my subscriber is not alive when the publisher publishes the message, and the message will be lost. One way to avoid this is to put a while loop inside the main program that waits until the number of subscribers is non-zero, but I cannot put a while loop inside my main program. How do I achieve this?

Since you say that there is only a single message sent by the main program, this is probably a job for the latch argument when you create the publisher:
latch [optional]
Enables "latching" on a connection. When a connection is latched, the last message published is saved and automatically sent to any future subscribers that connect. This is useful for slow-changing to static data like a map. Note that if there are multiple publishers on the same topic, instantiated in the same node, then only the last published message from that node will be sent, as opposed to the last published message from each publisher on that single topic.
Just give your publisher the argument as described here, and ROS handles the message passing to your subscriber when it comes up:
...
bool latch = true;
// advertise<M>(topic, queue_size, latch): queue size 1 suffices for a single message.
ros::Publisher pub = nh.advertise<std_msgs::String>(topic, 1, latch);
...

Related

Get Status of meetings in a channel in Microsoft Teams

Given the channel-id of a channel, is there any endpoint that allows us to query whether there is a meeting happening in that channel at the time of the query, irrespective of whether the meeting was scheduled beforehand or started directly?
From what I have seen so far
GET /teams/{team-id}/channels/{channel-id}/messages
gives the scheduled meeting, but sometimes the meeting may not have started at the scheduled time. So how do I know if there is a live meeting (i.e., a meeting that has started) in a channel, given its channel-id?
The response for the request
GET /teams/{team-id}/channels/{channel-id}/messages
returns a collection of chatMessage objects.
chatMessage has a property eventDetail of the eventMessageDetail resource type, which represents the details of an event that happened in a channel.
One of these events is callStartedEventMessageDetail, which represents the details of an event message about a started call. This message is generated when a call starts.
callStartedEventMessageDetail has a property callEventType, which represents the call event type. Possible values are: call, meeting, screenShare, unknownFutureValue.
So check whether eventDetail contains a callStartedEventMessageDetail with the property callEventType set to meeting.
The opposite of callStartedEventMessageDetail is callEndedEventMessageDetail, which represents the details of an event message about an ended call.
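As a concrete illustration, here is a minimal TypeScript sketch that pairs started and ended call events to decide whether a meeting is currently live. The helper name hasLiveMeeting is hypothetical; it assumes the Fetch API and an already-acquired Graph access token, and it ignores paging (@odata.nextLink):
interface ChannelEventDetail {
  "@odata.type"?: string;
  callEventType?: string;
  callId?: string;
}

interface ChannelMessage {
  eventDetail?: ChannelEventDetail | null;
}

async function hasLiveMeeting(teamId: string, channelId: string, token: string): Promise<boolean> {
  const url = `https://graph.microsoft.com/v1.0/teams/${teamId}/channels/${channelId}/messages`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const body = (await res.json()) as { value: ChannelMessage[] };

  // Collect calls that started as meetings and calls that ended,
  // independently of the order the messages are returned in.
  const started = new Set<string>();
  const ended = new Set<string>();
  for (const message of body.value) {
    const detail = message.eventDetail;
    if (!detail || !detail.callId) continue;
    if (detail["@odata.type"] === "#microsoft.graph.callStartedEventMessageDetail"
        && detail.callEventType === "meeting") {
      started.add(detail.callId);
    } else if (detail["@odata.type"] === "#microsoft.graph.callEndedEventMessageDetail") {
      ended.add(detail.callId);
    }
  }
  // A meeting is live if it started and has no matching end event.
  return Array.from(started).some(id => !ended.has(id));
}
In production you would also want to handle paging and throttling; this only inspects the first page of messages returned by the endpoint above.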
Resources:
chatMessage resource type
eventMessageDetail resource type
callStartedEventMessageDetail resource type
callEndedEventMessageDetail resource type

Processing Total Ordering of Events By Key using Apache Beam

Problem Context
I am trying to generate a total (linear) order of event items per key from a real-time stream where the order is event time (derived from the event payload).
Approach
I had attempted to implement this using streaming as follows:
1) Set up non-overlapping sequential windows, e.g. with a duration of 5 minutes
2) Establish an allowed lateness - it is fine to discard late events
3) Set accumulation mode to retain all fired panes
4) Use the AfterWatermark trigger
5) When handling a triggered pane, only consider the pane if it is the final one
6) Use GroupByKey to ensure all events in this window for this key will be processed as a unit on a single resource
While this approach ensures linear order for each key within a given window, it does not make that guarantee across windows: a later window of events for the key could be processed at the same time as an earlier window. This could easily happen if the first window failed and had to be retried.
I'm considering adapting this approach: the realtime stream would first be processed to partition the events by key and write them to files named by their window range.
Due to the parallel nature of beam processing, these files will also be generated out of order.
A single-process coordinator could then submit these files sequentially to a batch pipeline, only submitting the next one when it has received the previous file and the downstream processing of it has completed successfully.
The problem is that Apache Beam will only fire a pane if there was at least one element in that time window. Thus if there are gaps in the events, there could be gaps in the files that are generated, i.e. missing files. The problem with missing files is that the coordinating batch processor cannot distinguish between the time window having passed with no data and a failure, in which case it cannot proceed until the file finally arrives.
One way to force the event windows to trigger might be to somehow add dummy events to the stream for each partition and time window. However, this is tricky to do: if there are large gaps in the time sequence, and these dummy events end up surrounded by much later events, they will be discarded as late.
Are there other approaches to ensuring there is a trigger for every possible event window, even if that results in outputting empty files?
Is generating a total ordering by key from a realtime stream a tractable problem with Apache Beam? Is there another approach I should be considering?
Depending on your definition of tractable, it is certainly possible to totally order a stream per key by event timestamp in Apache Beam.
Here are the considerations behind the design:
Apache Beam does not guarantee in-order transport, so an ordering is of no use within a pipeline itself. I will therefore assume you are doing this so you can write to an external system that is only capable of handling things if they come in order.
If an event has timestamp t, you can never be certain no earlier event will arrive unless you wait until t is droppable.
So here's how we'll do it:
We'll write a ParDo that uses state and timers (blog post still under review) in the global window. This makes it a per-key workflow.
We'll buffer elements in state when they arrive, so your allowed lateness affects how efficient a data structure you need. What you need is a heap, to peek and pop the minimum timestamp and element; there's no built-in heap state, so I'll just write it as a ValueState.
We'll set an event-time timer to receive a callback when an element's timestamp can no longer be contradicted.
I'm going to assume a custom EventHeap data structure for brevity. In practice, you'd want to break this up into multiple state cells to minimize the data transferred. A heap might be a reasonable addition to the primitive types of state.
I will also assume that all the coders we need are already registered and focus on the state and timers logic.
new DoFn<KV<K, Event>, Void>() {

  @StateId("heap")
  private final StateSpec<ValueState<EventHeap>> heapSpec = StateSpecs.value();

  @TimerId("next")
  private final TimerSpec nextTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = firstNonNull(
        heapState.read(),
        EventHeap.createForKey(ctx.element().getKey()));
    heap.add(ctx.element().getValue());
    // Persist the updated heap; mutating the read value alone is not enough.
    heapState.write(heap);
    // When the watermark reaches this time, no more elements
    // can show up that have earlier timestamps.
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }

  @OnTimer("next")
  public void onNextTimestamp(
      OnTimerContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = heapState.read();
    // If the timer at time t was delivered, the watermark must
    // be strictly greater than t.
    while (!heap.isEmpty() && !heap.nextTimestamp().isAfter(ctx.timestamp())) {
      writeToExternalSystem(heap.pop());
    }
    heapState.write(heap);
    if (!heap.isEmpty()) {
      nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
    }
  }
}
This should hopefully get you started on the way towards whatever your underlying use case is.

Sending message with CAPL and dbc signal values

I am using CAPL to simulate a test environment for some small tests, and I am having problems sending messages, or more specifically, setting up the values.
I am able to read signal values with $SignalName, and I am also able to set signal values that way.
If I use this code to send a message, the message data is always 0:
on key 't'
{
  message MessageName msg;
  setSignal(SignalName, i);
  write("Value: %d", i);
  outport(msg);
}
Which kind of makes sense, because I think the message objects are intended to be used to send bytes, which you can access through msg.byte().
I know that I can set signals in messages with msg.SignalName, but again this does not seem like the right way. I think there should be a way to send a message with all the signals it contains set to the values set by the setSignal() function. Otherwise the setSignal() function is a bit useless.
Maybe somebody has an idea.
Thank you
I am using CANalyzer version 8.2 and I do not have the option to use the setSignal(signal, value) function. Setting the signal values by accessing the message selectors seems to be a reasonable approach. However, you used the function outport! You need to use the output() function to transmit messages:
on key 't' {
  message MessageName msg;
  msg.signal1 = value1;
  output(msg);
}
For this method the database has to be configured so that the message msg contains all the necessary signals (signal1).
If you want to set all signal values to the start values configured in the database use the function:
setSignalStartValues(message msg);
You can set up an interaction layer that will handle the messages as defined in the CAN database (DBC file) assigned to the node. The interaction layer needs some attributes in the database that define how the messages are to be sent. If they are not already present, you may have to add these attributes. If the Tx messages are not sent as expected, check the attributes.
Function output() is useful if you want to implement (and fully control) the sending of the message yourself.
Instead of using SetSignal() it is also possible to write the signal using $SignalName = value;
See this support note:
https://kb.vector.com/upload_551/file/SN-IND-1-011_InteractionLayer(1).pdf
You may have to guess and experiment a bit. In the DBC files provided by a customer I found attribute values that are not mentioned in this document.

What's the use of getCollisionKey() in relayjs?

I am new to Relay and saw this getCollisionKey in the treasure hunt tutorial:
getCollisionKey() {
return `check_${this.props.game.id}`;
}
In the docs it states - Implement this method to return a collision key. Relay will send any mutations having the same collision key to the server serially and in-order.
Please help me understand what getCollisionKey is. I would really appreciate it.
collisionKey is an identifier that helps Relay know when mutations need to be executed one after the other and when they can be parallelized.
Why we need this is mostly because of network inconsistencies.
Take for example a mutation LikeOrUnlikePost. This mutation likes or unlikes a post depending on whether you already like it.
Suppose you like the post, and then a second later you decide to unlike it.
But the first mutation fails, so only one LikeOrUnlikePost mutation reaches your server.
The result is that you think you unliked the post (you clicked twice), but in fact you only liked it (only one mutation succeeded).
This is what collisionKey is for. It tells Relay to queue any mutations that have the same collision key.
In the case above, the second mutation would get queued and would never get executed, since the first one fails.
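To make this concrete, here is a sketch of what such a mutation might look like with the Relay Classic API, following the same pattern as the check_ key in the treasure hunt tutorial. The LikeOrUnlikePostMutation class, its GraphQL fields, and the post prop are hypothetical:
import Relay from 'react-relay';

class LikeOrUnlikePostMutation extends Relay.Mutation {
  getMutation() {
    return Relay.QL`mutation { likeOrUnlikePost }`;
  }
  getVariables() {
    return { postId: this.props.post.id };
  }
  getFatQuery() {
    return Relay.QL`
      fragment on LikeOrUnlikePostPayload {
        post { viewerLikes }
      }
    `;
  }
  getConfigs() {
    return [{
      type: 'FIELDS_CHANGE',
      fieldIDs: { post: this.props.post.id },
    }];
  }
  getCollisionKey() {
    // Every like/unlike of the same post shares this key, so Relay
    // queues these mutations and sends them serially, in order.
    return `like_${this.props.post.id}`;
  }
}
With this key, rapid like/unlike clicks on one post are serialized, while mutations touching different posts (different keys) are still free to run in parallel.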

Order of arrival of messages on a MailboxProcessor

Is there a guarantee (or how can I get one) on the order of arrival of messages sent to a MailboxProcessor?
That is, on a single thread, if I do
agent.Post(msg1)
agent.Post(msg2)
how can I be sure that, in the processing loop of the agent, the messages will be received in that order?
They are. The implementation of Post is as you might guess: it just adds an item to the queue (on the current thread, under a lock) and posts work to notify any waiting agent to wake up and process it. So if you call Post twice on the same thread, one after another, the messages get into the queue in that order.
You can also use inbox.Scan (the scanner function returns an option: None to leave a message in the queue, Some to handle it) to pick out messages if you have some way of detecting their order. Of course, this comes at a cost in performance, so leaving the queue order alone is the best idea.
