I have a lot of topics that I want to store in a buffer, but each topic's messages shouldn't be kept for more than 10 seconds. For a couple of topics the line below works fine, but if I subscribe to all topics it starts lagging behind. I need something more efficient than rebuilding the list every time; I want to pop all the elements that are older than 10 s.
buffer[topic] = [ msg for msg in buffer[topic] if timestamp - msg[0] < rospy.Duration(10.0) ]
Each message has a timestamp; if a message is older than 10 s we want to remove it. Hoping you can help.
You can try using a deque from the collections module (https://docs.python.org/2/library/collections.html#collections.deque) and call popleft() whenever the oldest message is too old.
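A minimal sketch of that idea, assuming each entry is a (timestamp, msg) tuple appended in arrival order (as in your list comprehension), so the oldest entries always sit at the left end of the deque; the helper names here are just for illustration:

from collections import deque

import rospy

buffer = {}  # topic -> deque of (rospy.Time, message) tuples, appended in arrival order

def record(topic, timestamp, msg):
    buffer.setdefault(topic, deque()).append((timestamp, msg))

def prune(topic, now, max_age=rospy.Duration(10.0)):
    d = buffer.get(topic)
    if not d:
        return
    # The oldest messages are at the left, so stop at the first one that is still fresh.
    while d and now - d[0][0] >= max_age:
        d.popleft()

This avoids rebuilding the list on every callback: each stale message is removed exactly once with an O(1) popleft.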
I am new to Beam/Dataflow and am trying to figure out whether it is suited to this problem. I am trying to keep a running sum of which types of messages are currently backlogged in a queueing system. The system uses a monotonically increasing offset number to order messages: producers learn the number when they send a message, and consumers track the watermark offset as they process each message in FIFO order. This pipeline would have two inputs: counts from the producers and watermarks from the consumers.
The queue producer would regularly flush a batch of count metrics to Beam:
(type1, offset, count)
(type2, offset, count)
...
where offset is the last offset the producer wrote for typeN, and count is how many typeN messages it enqueued in the current batch period.
The queue consumer will regularly send its latest consumed watermark offset. The effect this should have is to invalidate any counts that have an offset lower than this consumer watermark.
The output of the pipeline is the sum of all counts with a higher offset than the largest consumer watermark yet seen, grouped by message type. (snapshotted every 5 minutes or so.)
(Of course there would be 100k message "types", hundreds of producer servers, occasional 2-hour periods where the consumer doesn't report an advancing watermark, etc.)
Is this doable? The part that seems possibly unsuited to Beam is that the pipeline would need to maintain and scan an unbounded-ish history of count records.
One possible approach would be to model this as two timeseries (left, right) where you want to match left.timestamp <= right.timestamp. You can do this using the State and Timer API.
To achieve this on an unbounded stream, you will need to be working within a GlobalWindow. Important note: in the GlobalWindow there is no expiry of state, so you will need to garbage-collect your left and right streams yourself. Also, data will arrive in onProcess() unordered, so you will need to make use of event-time timers to do the actual work.
Very roughly:
onProcess() {
    Store the data in BagState.
    Set an event-time timer to go off.
}

onTimer() {
    Do your business logic.
}
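For what it's worth, here is a rough sketch of that shape using the Python SDK's stateful DoFn support; the class name, the state/timer names, and the PickleCoder choice are placeholders, and the actual left/right matching logic is elided:

import apache_beam as beam
from apache_beam.coders import PickleCoder
from apache_beam.transforms.userstate import BagStateSpec, TimerSpec, on_timer
from apache_beam.transforms.timeutil import TimeDomain

class BufferAndFireFn(beam.DoFn):
    # Per-key buffer for elements that cannot be processed yet; input must be keyed.
    BUFFER = BagStateSpec('buffer', PickleCoder())
    # Event-time timer that fires once the watermark passes the set timestamp.
    FLUSH = TimerSpec('flush', TimeDomain.WATERMARK)

    def process(self, element,
                timestamp=beam.DoFn.TimestampParam,
                buffer=beam.DoFn.StateParam(BUFFER),
                flush=beam.DoFn.TimerParam(FLUSH)):
        # Store data in BagState.
        buffer.add((timestamp, element))
        # Set an event-time timer to go off.
        flush.set(timestamp)

    @on_timer(FLUSH)
    def on_flush(self, buffer=beam.DoFn.StateParam(BUFFER)):
        # BagState is unordered, so sort by timestamp before doing the business logic.
        for ts, element in sorted(buffer.read(), key=lambda kv: kv[0]):
            yield element  # your matching/business logic would go here
        buffer.clear()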
This is a lot easier with Apache Beam > 2.24.0 as OrderedListState has been added.
Although the timeseries use case is different from the one in this question, this talk from the 2019 Beam Summit also has some pointers (but does not make use of OrderedListState, which was not available at the time):
State and Timer API and Timeseries
My company receives both batch and stream-based event data. I want to process the data using Google Cloud Dataflow over a predictable time period. However, I realize that in some instances the data arrives late or out of order. How can Dataflow be used to handle late or out-of-order data?
This is a homework question, and I would like to know which single answer below is correct.
a. Set a single global window to capture all data
b. Set sliding window to capture all the lagged data
c. Use watermark and timestamps to capture the lagged data
d. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.
My reasoning: I believe 'C' is the answer. But then, the watermark is actually different from late data; please confirm. Also, since the question mentions both batch- and stream-based data, I also wonder whether 'D' could be the answer, since batch (bounded-collection) mode doesn't have timestamps unless they come from the source or are set programmatically. So I am a bit confused about the answer.
Please help. I am a non-native English speaker, so I am not sure whether I missed some cues in the question.
How to use Dataflow to handle late or out-of-order data
This is a big question. I will try to give some simple explanations and provide some resources that might help you understand.
Bounded data collection
You have gotten a sense of it: bounded data does not have a lateness problem. By the nature of bounded data, you can read the full data set at once before the pipeline starts.
Unbounded data collection
Your C is correct, and the watermark is different from late data. In the implementation, the watermark is a monotonically increasing timestamp. When Beam/Dataflow sees a record with an event timestamp earlier than the watermark, the record is treated as late data (this is only conceptual, and you might want to check [1] for a more detailed discussion).
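As a concrete illustration of option C (a sketch only, using the Python SDK; the one-minute windows, five-minute allowed lateness, and the events collection are arbitrary assumptions): you window on event timestamps, let the watermark drive the on-time firing, and explicitly allow a grace period plus a late trigger so records arriving behind the watermark still get accounted for.

import apache_beam as beam
from apache_beam.transforms import trigger, window

late_tolerant_counts = (
    events  # assumed PCollection of (key, 1) pairs with event-time timestamps
    | beam.WindowInto(
        window.FixedWindows(60),  # 1-minute event-time windows
        trigger=trigger.AfterWatermark(late=trigger.AfterCount(1)),
        allowed_lateness=5 * 60,  # keep windows around 5 minutes for stragglers
        accumulation_mode=trigger.AccumulationMode.ACCUMULATING)
    | beam.CombinePerKey(sum))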
Here are [1]–[4] as references on this topic:
[1] https://docs.google.com/document/d/12r7frmxNickxB5tbpuEh_n35_IJeVZn1peOrBrhhP6Y/edit#heading=h.7a03n7d5mf6g
[2] https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102
[3] https://www.oreilly.com/library/view/streaming-systems/9781491983867/
[4] https://docs.google.com/presentation/d/1ln5KndBTiskEOGa1QmYSCq16YWO9Dtmj7ZwzjU7SsW4/edit#slide=id.g19b6635698_3_4
B and C may be the answer.
With sliding windows, you have the order of the data, so if you receive the data in position 9 and you don't receive the data in position 8, you know that data 8 is delayed and can wait for it. The problem is that if the latest data is delayed, you can't know it is delayed and you lose it. https://en.wikipedia.org/wiki/Sliding_window_protocol
A watermark waits a period of time for the lagged data; if that time passes and the data doesn't arrive, you lose it.
So the answer is C, because B says "capture all the lagged data" while C leaves out the word "all".
We have a requirement to implement the following. Given a Redis channel that will provide a known number of messages:
For each message consumed from the channel:
1. Get a JSON document from Redis
2. Parse the JSON document, extracting a list of result objects
3. Aggregate across all result objects to produce a single result
We would like to distribute both steps 1 and 2 across many workers, and avoid collecting all results into memory. We would also like to display progress bars for both steps.
However, we can't see a nice way to structure the application such that we can see progress and keep work moving through the system without blocking at inopportune times.
For example, in step 1 if we read from the Redis channel into a queue then we can pass the queue to Dask, in which case we start processing each message as it comes in without waiting for all messages. However, we can't see a way to show progress if we use a queue (presumably because a queue typically has an unknown size?)
If we collect from the Redis channel into a list and pass this to Dask then we can see progress, but we have to wait for all messages from Redis before we can start processing the first one.
Is there a recommended way to approach this kind of problem?
If your Redis channels are concurrent-access-safe then you might submit many futures to pull an element from the channel. These would run on different machines.
from dask.distributed import Client, progress

client = Client(...)

# One future per expected message; each call pulls a single item from the channel.
futures = [client.submit(pull_from_redis_channel, ..., pure=False) for _ in range(n_items)]

# Process (parse) each document as soon as its pull completes.
futures2 = client.map(process, futures)

progress(futures2)
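To cover the aggregation step without collecting everything on the client, one option (a sketch only; combine() stands in for whatever reduction you need over the result objects) is to fold results together as they finish:

from dask.distributed import as_completed

total = None
for future in as_completed(futures2):
    partial = future.result()
    # combine() is a placeholder for your own reduction over result objects.
    total = partial if total is None else combine(total, partial)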
I have a Stream of items (u32, Bytes) where the integer is an index in the range 0..n. I would like to split this stream into n streams, essentially filtering by the integer.
I considered several possibilities, including:
creating n streams, each of which peeks at the underlying stream to determine whether the next item is for it;
pushing the items to one of n sinks as they arrive, and then using the other side of each sink as a stream again (this seems to be related to Forwarding from a futures::Stream to a futures::Sink).
I feel that neither of these possibilities is convincing. The first seems to create unnecessary overhead, and the second is just not elegant (if it even works; I am not sure).
What's a good way of splitting the stream?
At one point I had a similar requirement and wrote a group_by operator for Stream.
I haven't yet published this to crates.io as I didn't really feel it was ready for consumption but feel free to take a look at the code at https://github.com/Lukazoid/lz_stream_tools or attempt to use it for yourself.
Add the following to your Cargo.toml:
[dependencies]
lz_stream_tools = { git = "https://github.com/Lukazoid/lz_stream_tools" }
And add extern crate lz_stream_tools; to your bin.rs/lib.rs.
Then from your code you may use it like so:
use lz_stream_tools::StreamTools;
let groups = some_stream.group_by(|x| x.0);
groups will now be a Stream of (u32, Stream<Item = Bytes>) pairs.
You could use channels to represent the index-specific streams. You'd have to spawn one task that pulls from the original stream and holds a map of Senders, one per index.
A number of examples show aggregation over windows of an unbounded stream, but suppose we need to get a count-per-key of the entire stream seen up to some point in time. (Think word count that emits totals for everything seen so far rather than totals for each window.)
It seems like this could be a Combine.perKey with a trigger that emits panes at some interval. In this case the window is essentially global, and we emit panes for that same window throughout the life of the job. Is this safe/reasonable, or is there perhaps another way to compute a rolling total aggregate?
Ryan, your solution of using a global window and a periodic trigger is the recommended approach. Just make sure you use accumulating mode on the trigger, not discarding mode. The Triggers page should have more information.
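For reference, roughly what that looks like in the Python SDK (a sketch; the 5-minute processing-time interval and the counts collection are just examples, and the Java Combine.perKey version is analogous):

import apache_beam as beam
from apache_beam.transforms import trigger, window

running_totals = (
    counts  # assumed PCollection of (word, 1) pairs
    | beam.WindowInto(
        window.GlobalWindows(),
        trigger=trigger.Repeatedly(trigger.AfterProcessingTime(5 * 60)),
        accumulation_mode=trigger.AccumulationMode.ACCUMULATING)
    | beam.CombinePerKey(sum))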
Let us know if you need additional help.