What I am trying to do:
1. Consume JSON messages from a Pub/Sub subscription using an Apache Beam streaming pipeline and the Dataflow runner.
2. Unmarshal the payload strings into objects. Assume 'messageId' is the unique id of an incoming message, e.g. msgid1, msgid2, etc.
3. Retrieve child records from a database for each object resulting from step 2. The same child can be applicable to multiple messages. Assume 'childId' is the unique id of a child record, e.g. cid1234, cid1235, etc.
4. Group the child records by their unique id, as in the example below:
KV.of(cid1234, Map.of(msgid1, msgid2)) and KV.of(cid1235, Map.of(msgid1, msgid2))
5. Write the grouped result at the childId level to the database.
Questions:
1. Where should the windowing be introduced? We currently have 30-minute fixed windowing after step 1.
2. How does Beam define the start and end time of a 30-minute window? Is it right after we start the pipeline, or after the first message of the batch arrives?
3. What if steps 2 to 5 take more than 1 hour for a window and the next window's batch is ready? Would both window batches get processed in parallel?
4. How can we make the next window's messages wait until the previous window's batch is completed? If we don't do this, the result at the childId level will be overwritten by the next batches.
Code snippet:
PCollection<PubsubMessage> messages = pipeline.apply("ReadPubSubSubscription",
    PubsubIO.readMessagesWithAttributes()
        .fromSubscription("projects/project1/subscriptions/subscription1"));

PCollection<PubsubMessage> windowedMessages = messages.apply(Window.into(FixedWindows
    .of(Duration.standardMinutes(30))));

PCollectionTuple unmarshalResultTuple = windowedMessages.apply("UnmarshalJsonStrings",
    ParDo.of(new JsonUnmarshallFn())
        .withOutputTags(JsonUnmarshallFn.mainOutputTag,
            TupleTagList.of(JsonUnmarshallFn.deadLetterTag)));

PCollectionTuple childRecordsTuple = unmarshalResultTuple
    .get(JsonUnmarshallFn.mainOutputTag)
    .apply("FetchChildsFromDBAndProcess",
        ParDo.of(new ChildsReadFn())
            .withOutputTags(ChildsReadFn.mainOutputTag,
                TupleTagList.of(ChildsReadFn.deadLetterTag)));

// input is KV of (childId, msgids), output is mutations to write to BT
PCollectionTuple postProcessTuple = childRecordsTuple
    .get(ChildsReadFn.mainOutputTag)
    .apply(GroupByKey.create())
    .apply("UpdateChildAssociations",
        ParDo.of(new ChildsProcessorFn())
            .withOutputTags(ChildsProcessorFn.mutations,
                TupleTagList.of(ChildsProcessorFn.deadLetterTag)));

postProcessTuple.get(ChildsProcessorFn.mutations)
    .apply("WriteToBigtable", CloudBigtableIO.writeToTable(...));
Addressing each of your questions.
Regarding questions 1 and 2: when you use windowing in Apache Beam, you need to understand that the windows "exist before the job". What I mean is that fixed windows are aligned to the UNIX epoch (timestamp = 0), not to the pipeline start time or the first message. In other words, your data is allocated to each fixed time range independently of when the job started; for example, with fixed 60-second windows:
PCollection<String> items = ...;
PCollection<String> fixedWindowedItems = items.apply(
Window.<String>into(FixedWindows.of(Duration.standardSeconds(60))));
First window: [0s, 60s), second window: [60s, 120s), and so on.
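To make the epoch alignment concrete for your 30-minute windows, here is a small sketch (using org.joda.time, which the Beam Java SDK uses; the example timestamp is arbitrary) that computes the window a given element lands in:

Instant elementTs = Instant.parse("2023-01-01T10:47:00Z"); // example event timestamp
long sizeMillis = Duration.standardMinutes(30).getMillis();
long startMillis = elementTs.getMillis() - (elementTs.getMillis() % sizeMillis);
// startMillis corresponds to 10:30, so this element belongs to the window [10:30, 11:00),
// regardless of when the pipeline was started.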
Please refer to the Beam programming guide sections on windowing, watermarks, and triggers.
About question 3: by default, windowing and triggering in Apache Beam drop late data, although it is possible to configure how late data is handled using withAllowedLateness. To do that, it is necessary to first understand the concept of watermarks. The watermark is the system's estimate of how far behind in event time the data is. For example, if you allow 3 seconds of lateness, data arriving up to 3 seconds behind the watermark is still assigned to the right window; for data arriving after that, you define what happens to it, and you can reprocess it or ignore it using triggers.
withAllowedLateness
PCollection<String> items = ...;
PCollection<String> fixedWindowedItems = items.apply(
Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
.withAllowedLateness(Duration.standardDays(2)));
Note that withAllowedLateness sets how long after the end of the window late data is still accepted into it.
Triggering
PCollection<String> pc = ...;
pc.apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
    .triggering(AfterProcessingTime.pastFirstElementInPane()
        .plusDelayOf(Duration.standardMinutes(1)))
    .withAllowedLateness(Duration.standardMinutes(30))
    .accumulatingFiredPanes()); // an accumulation mode is required once a trigger is set
Notice that the window is reprocessed and recomputed every time late data arrives; this trigger gives you the opportunity to react to the late data.
Finally, about question 4, which is partially explained by the concepts described above: the computations occur within each fixed window and are recomputed/reprocessed every time a trigger fires. This logic guarantees that your data ends up in the right window.
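Putting those pieces together for your 30-minute windows, the configuration could look like the sketch below (the lateness value, firing rule, and accumulation mode are placeholders to show where each knob goes, not recommendations):

PCollection<PubsubMessage> windowedMessages = messages.apply("Window30Min",
    Window.<PubsubMessage>into(FixedWindows.of(Duration.standardMinutes(30)))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withLateFirings(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.standardHours(1))
        .accumulatingFiredPanes());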
Related
I have an UnboundedSource that generates N items (it's not in batch mode, it's a stream -- one that only generates a certain number of items and then stops emitting new items, but a stream nonetheless). Then I apply a certain PTransform to the collection I'm getting from that source. I also apply the Window.into(FixedWindows.of(...)) transform and then group the results by window using Combine. So it's kind of like this:
pipeline.apply(Read.from(new SomeUnboundedSource(...))) // extends UnboundedSource
.apply(Window.into(FixedWindows.of(Duration.millis(5000))))
.apply(new SomeTransform())
.apply(Combine.globally(new SomeCombineFn()).withoutDefaults())
And I assumed that would mean new events are generated for 5 seconds, then SomeTransform is applied to the data in the 5 seconds window, then a new set of data is polled and therefore generated. Instead all N events are generated first, and only after that is SomeTransform applied to the data (but the windowing works as expected). Is it supposed to work like this? Does Beam and/or the runner (I'm using the Flink runner but the Direct runner seems to exhibit the same behavior) have some sort of queue where it stores items before passing it on to the next operator? Does that depend on what kind of UnboundedSource is used? In my case it's a generator of sorts. Is there a way to achieve the behavior that I expected or is it unreasonable? I am very new to working with streaming pipelines in general, let alone Beam. I assume, however, it would be somewhat illogical to try to read everything from the source first, seeing as it's, you know, unbounded.
An important thing to note is that windows in Beam operate on event time, not processing time. Adding 5-second windows to your data does not prescribe how or when the data is processed; it only determines how the results of aggregations over that data are grouped. Further, windows only affect the data once an aggregation is reached, like your Combine.globally. Until that point in your pipeline, the windowing you applied has no effect.
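As a small illustration of that point (the extractEventTime helper, the String element type, and the 5-second size are assumptions for this sketch): the assigned event timestamps decide which window each element belongs to, and the windowing only takes effect at the aggregation step:

PCollection<KV<String, Long>> counts = input
    .apply(WithTimestamps.of((String s) -> extractEventTime(s))) // event time, not wall-clock time
    .apply(Window.<String>into(FixedWindows.of(Duration.standardSeconds(5))))
    .apply(Count.perElement()); // the windowing only affects this aggregation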
As to whether it is supposed to work that way, the Beam model doesn't specify any particular processing behavior, so other runners may process elements slightly differently. However, this is still a correct implementation. The runner isn't trying to read everything from the source first; generally, streaming sources in Beam will read all elements currently available before moving on, and come back to the source later. If you adjusted your stream to emit elements slowly over a long period of time, you would likely see more processing happening in between reads from the source.
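If you want to observe that behavior without a real slow source, one option is TestStream (a sketch for the DirectRunner or a test pipeline; it assumes String elements, reuses the names from your snippet, and the gaps and timestamps are arbitrary):

TestStream<String> slowSource = TestStream.create(StringUtf8Coder.of())
    .addElements(TimestampedValue.of("a", new Instant(0)))
    .advanceProcessingTime(Duration.standardSeconds(30))       // nothing available for a while
    .addElements(TimestampedValue.of("b", new Instant(6000)))  // lands in the next 5s window
    .advanceWatermarkToInfinity();

pipeline.apply(slowSource)
    .apply(Window.<String>into(FixedWindows.of(Duration.millis(5000))))
    .apply(new SomeTransform())
    .apply(Combine.globally(new SomeCombineFn()).withoutDefaults());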
I have a pipeline in Beam that uses CoGroupByKey to combine two PCollections. The first one reads from a Pub/Sub subscription, and the second one uses the same PCollection but enriches the data by looking up additional information from a table using JdbcIO.readAll. So there is no way there would be data in the second PCollection without it being there in the first one.
There is a fixed window of 10 seconds with an event-based trigger like the one below:
Repeatedly.forever(
AfterWatermark.pastEndOfWindow().withEarlyFirings(
AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(40))
).withLateFirings(AfterPane.elementCountAtLeast(1))
);
The issue I am seeing is that when I stop the pipeline using Drain mode, it seems to randomly generate elements for the second PCollection even when no messages are coming into the input Pub/Sub topic. This also happens occasionally while the pipeline is running, but not consistently; when draining the pipeline I have been able to reproduce it consistently.
You are using a non-deterministic triggering, which means the output is sensitive to the exact ordering in which events come in. Another way to look at this is that CoGBK does not wait for both sides to come in; the trigger starts ticking as soon as either side comes in.
For example, let's call your PCollections A and A' respectively, and assume they each have two elements a1, a2, a1', and a2' (of common provenance).
Suppose a1 and a1' come into the CoGBK, 39 seconds pass, and then a2 comes in (on the same key); another 2 seconds pass, then a2' comes in. The CoGBK will output ([a1, a2], [a1']) when the 40-second mark hits, and then ([], [a2']) will get emitted when the window closes. Even if everything is on the same key, this could happen occasionally if there is more than a 40-second walltime delay going through the longer path, and it will almost certainly happen for any late data (each side will fire separately).
Draining makes things worse, e.g. I think all processing time triggers fire immediately.
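If you need each 10-second window to produce a single, order-independent pane, one option is to keep only the default watermark firing on both inputs before the CoGroupByKey (a sketch; the KV<String, String> element type is a placeholder, and the trade-off is that with zero allowed lateness any late data is dropped):

Window<KV<String, String>> oneShotWindow =
    Window.<KV<String, String>>into(FixedWindows.of(Duration.standardSeconds(10)))
        .triggering(AfterWatermark.pastEndOfWindow()) // no early or late firings
        .withAllowedLateness(Duration.ZERO)           // late elements are dropped
        .discardingFiredPanes();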
Problem:
When using Cloud Dataflow, we are presented with two metrics (see the Dataflow monitoring documentation):
system latency
data freshness
These are also available in Stackdriver under the following names (extracted from the Dataflow metrics documentation):
system_lag: The current maximum duration that an item of data has been awaiting processing, in seconds.
data_watermark_age: The age (time since event timestamp) of the most recent item of data that has been fully processed by the pipeline.
But these descriptions are still very vague:
What does "awaiting processing" mean? Is it how long a message waits in Pub/Sub, or the total time it has to wait inside the pipeline?
The "maximum duration": after that maximum item is processed, will the metric be adjusted?
"Time since event timestamp": does that mean that if my event was put into Pub/Sub at timestamp t1 and it flows out of one end of the pipeline at timestamp t2, the pipeline is at t1? I think I can assume that if the metric is at t1, everything before t1 has been processed.
Question:
As these metrics coincide with the semantics of Apache Beam, I would love to see some examples, or at least clearer definitions of these metrics, to make them usable.
These metrics are notoriously tricky. An in-depth dive into how they work can be seen in this talk by a member of the Beam / Dataflow team.
Pipelines are split into a series of computations that occur in memory and computations that require serializing your data to some sort of data store. For example, consider the following pipeline:
with Pipeline() as p:
    p | beam.io.ReadFromPubSub(...) \
      | beam.Map(parse_data) \
      | beam.Map(into_key_value_pairs) \
      | beam.WindowInto(...) \
      | beam.GroupByKey() \
      | beam.Map(format_data) \
      | beam.io.WriteToBigQuery(...)
This pipeline would get broken up into two stages. A stage is a series of computations that can be applied in memory.
The first stage goes from ReadFromPubSub to the GroupByKey operation. Everything in between those two PTransforms can be done in-memory. To perform the GroupByKey, the data needs to be written to persistent state (and therefore into a new source).
The second stage goes from GroupByKey to WriteToBigQuery. In this case, the data is read from a 'source'.
Each source has its own set of watermarks. The watermarks that you see in the Dataflow UI are the maximum watermarks coming from any source in the pipeline.
--
Answering your questions:
What's awaiting processing?
Answer
It is how long an element waits in PubSub. Specifically, how long an element waits inside any source in the pipeline.
Consider a simpler pipeline:
ReadFromPubSub -> Map -> WriteToBigQuery.
This pipeline does the following operations for each item: Read an item from PubSub -> Operate on it -> Insert to BigQuery -> **Confirm to PubSub that the item has been consumed**.
Now, imagine that the BigQuery service goes down for 5 minutes. This means that PubSub will not receive confirmations for any of the elements for 5 minutes. Therefore, these elements will be stuck in PubSub for a while.
This means that the system latency (and the data freshness metric as well) will balloon up to 5 minutes while BQ writes are blocked.
Does maximum duration get adjusted after processing?
Answer
That's right. For instance, consider the previous pipeline again: BQ is dead for 5 minutes. When BQ comes back, a large batch of items may be written to it, and confirmed as read from PubSub. This will drastically reduce the system latency (and data freshness) back to a few seconds.
What's time since event timestamp?
Answer
An event timestamp can be provided as an attribute of the message to PubSub. It's a bit of a tricky concept, but essentially:
For each stage there is an output data watermark. An output data watermark of T indicates that the computation has processed all elements with event time before T. The latest an output data watermark can be is the earliest input watermark of all its upstream computations. However, the output watermark could be held back if there is some input data that has not yet been processed.
This metric is, of course, heuristic. If some data point comes in very late, then the Data Freshness will be held back.
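For completeness, in the Java SDK the event-timestamp attribute is wired in on the read side roughly like this (a sketch; the attribute name "eventTimestamp" and the subscription path are placeholders that the publisher and the pipeline would need to agree on):

PCollection<PubsubMessage> messages = pipeline.apply("ReadWithEventTime",
    PubsubIO.readMessagesWithAttributes()
        .fromSubscription("projects/my-project/subscriptions/my-subscription")
        .withTimestampAttribute("eventTimestamp")); // publisher-set attribute used as event time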
--
I'd advise you to check out the talk by Slava; it goes over all these concepts.
I'm writing a Beam data pipeline reading from an unbounded source like Kafka. I am not performing any analytic functions. I would like to transform the elements and write them to the sink, say, once the record count of the PCollection reaches a certain threshold, in order to throttle the data being sent to the sink.
I looked at the existing triggers but couldn't figure out whether they are a good fit.
I have tested triggers and they work as expected; here is a Scala code example:
val data: PCollection[Type] = results
  .apply(
    Window
      .into[Type](FixedWindows.of(Duration.millis(2000)))
      .withAllowedLateness(Duration.millis(1000))
      .triggering(AfterPane.elementCountAtLeast(4))
      .accumulatingFiredPanes()
  )
It waits for 4 elements and then fires the trigger for the window.
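If you are worried about the stream stalling below the element count, one possible variant (a sketch in Java; the window size, count, and delay are placeholders) is to fire on whichever comes first, the element count or a processing-time delay:

PCollection<String> throttled = input.apply(
    Window.<String>into(FixedWindows.of(Duration.millis(2000)))
        .triggering(Repeatedly.forever(AfterFirst.of(
            AfterPane.elementCountAtLeast(4),
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardSeconds(10)))))
        .withAllowedLateness(Duration.millis(1000))
        .discardingFiredPanes()); // emit each element once instead of re-emitting earlier panes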
I am using a global unbounded stream in combination with stateful processing and timers in order to totally order a stream per key by event timestamp. The solution is described in the answer to this question:
Processing Total Ordering of Events By Key using Apache Beam
In order to restart the pipeline after a failure or stopping for some other reason, I need to determine the lowest event timestamp at which we are guaranteed that all other events have been processed downstream. This timestamp can be calculated periodically and persisted to a datastore and used as the input to the source IO (Kinesis) so that the stream can be re-read without having to go back to the beginning. (It is ok for us to have events replayed)
I considered having the stateful transformation emit the lowest processed timestamp as the output when the timer triggers and then combining all the outputs globally to find the minimum value. However, it is not possible to use a global combine operation, because either a Window or a Trigger must be applied first.
Assuming that my stateful transform emits a Long when the timer fires which represents the smallest timestamp, I am defining the pipeline like this:
p.apply(events)
.apply("StatefulTransform", ParDo.of(new StatefulTransform()))
.apply(Window.<Long>configure().triggering(Repeatedly.forever(AfterFirst.of(
AfterPane.elementCountAtLeast(100),
AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(1))))))
.apply(Combine.globally(new MinLongFn()))
.apply("WriteCheckpoint", ParDo.of(new WriteCheckpoint()));
Will this ensure that the checkpoints will only be written when all of the parallel workers have emitted at least one of their panes? I am concerned that the combine operation may operate on panes from only some of the workers, e.g. there may be a worker that has either failed or is still waiting for another event to trigger its timer.
I'm a newbie to Beam, but according to this blog post, https://beam.apache.org/blog/2017/08/16/splittable-do-fn.html, a Splittable DoFn might be the thing you are looking for!
You could create an SDF to fetch the stream and accept the input element as the starting point.
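A very rough sketch of that idea in Java (the Record type, its getTimestamp() accessor, and the fetchFrom(...) client call are all hypothetical; a real implementation would also want a watermark estimator):

@DoFn.UnboundedPerElement
class ReadFromCheckpointFn extends DoFn<Long, Record> {

  @GetInitialRestriction
  public OffsetRange initialRestriction(@Element Long checkpointTimestamp) {
    // The persisted checkpoint timestamp becomes the start of the restriction.
    return new OffsetRange(checkpointTimestamp, Long.MAX_VALUE);
  }

  @ProcessElement
  public ProcessContinuation process(
      ProcessContext context, RestrictionTracker<OffsetRange, Long> tracker) {
    // fetchFrom(...) is a hypothetical call returning records with strictly
    // increasing timestamps starting at the given position.
    for (Record record : fetchFrom(tracker.currentRestriction().getFrom())) {
      if (!tracker.tryClaim(record.getTimestamp())) {
        return ProcessContinuation.stop(); // the runner asked us to checkpoint or split
      }
      context.output(record);
    }
    // Nothing more available right now; ask the runner to call us again later.
    return ProcessContinuation.resume().withResumeDelay(Duration.standardSeconds(10));
  }
}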