time trigger + multiple events in Pubsub - google-cloud-composer

I am ingesting two different datasets into GCS. Let's say I write an event e1 and an event e2, respectively, to Pub/Sub, and these happen at different times.
I want to start a job at 9 AM that checks whether both events e1 and e2 have happened for that particular day; once both have occurred (after 9 AM), I kick off a process that generates another dataset from these two datasets.
Is Cloud Composer the right tool to build this kind of requirement? If yes, please provide some guidance on how it can be done.

Cloud Composer (and Airflow) should be right for this use case.
You can create a DAG with a daily schedule_interval that starts at 9 AM. Use a Pub/Sub sensor per event (s1 and s2). Assuming the process that generates the new dataset is an operator, you can then ensure that generate_dataset only runs after both sensors succeed by setting dependencies:
s1 >> generate_dataset
s2 >> generate_dataset
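A minimal sketch of such a DAG, assuming the Google provider's PubSubPullSensor (parameter names follow recent provider versions); the project ID, subscription names, and the generate_dataset callable are placeholders rather than anything from the question:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.sensors.pubsub import PubSubPullSensor


def generate_dataset_fn():
    # Placeholder: combine the two ingested datasets here.
    pass


with DAG(
    dag_id="combine_datasets",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 9 * * *",  # run daily at 9 AM
    catchup=False,
) as dag:
    # Each sensor waits until a message for that day's event arrives.
    s1 = PubSubPullSensor(
        task_id="wait_for_e1",
        project_id="my-project",          # placeholder
        subscription="e1-subscription",   # placeholder
        ack_messages=True,
    )
    s2 = PubSubPullSensor(
        task_id="wait_for_e2",
        project_id="my-project",          # placeholder
        subscription="e2-subscription",   # placeholder
        ack_messages=True,
    )
    generate_dataset = PythonOperator(
        task_id="generate_dataset",
        python_callable=generate_dataset_fn,
    )

    # generate_dataset is triggered only after both sensors have succeeded,
    # i.e. after both e1 and e2 have been observed for that day's run.
    s1 >> generate_dataset
    s2 >> generate_dataset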

Related

Best practice of solving race issue with pubsub architecture

Suppose that I have this microservices architecture and the services all interact using Pub/Sub.
Basically, if a car sale is made, it triggers task 1, which leads to tasks 2a & 2b, and if a leather seat is available, task 3 will be executed.
Note that tasks 2a & 2b happen at the same time, since it's Pub/Sub.
In theory this works perfectly; in practice, however, I am finding cases where tasks 2b & 3 are executed before 2a, which causes task 3 to fail since CAR 123 has not yet been created by task 2a.
I am thinking of adding a delay when task 2b is executed, but I was wondering whether there is a more standard way of solving this type of issue?
(And no, we can't change task 3 to an upsert command.)
To get this to work as you want, you can have the Sales service and the Leather Seat service publish to the same topic.
If the topic has one partition, then the Sales message will always come before the message produced by the Seat service.
If the topic has multiple partitions, set the same value (some sort of ID, preferably a UUID so as not to create hot partitions) as the partition key on both messages, so that both messages land on the same partition and still maintain the order you need.
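Note that Google Cloud Pub/Sub itself does not expose partitions; the closest equivalent there is message ordering keys. A minimal sketch with the Python client, assuming a hypothetical project and topic, where both messages share the same ordering key so that a subscription with ordering enabled delivers them in publish order:

from google.cloud import pubsub_v1

# Message ordering must be enabled on the publisher client.
publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
)
topic_path = publisher.topic_path("my-project", "car-events")  # placeholders

# Both messages carry the same ordering key (e.g. the car ID), so the
# subscription (with message ordering enabled) delivers the sale message
# before the seat message.
publisher.publish(topic_path, b"car sale: CAR 123", ordering_key="CAR-123")
publisher.publish(topic_path, b"leather seat: CAR 123", ordering_key="CAR-123")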

Beam CoGroupByKey with fixed window and event time based trigger generates random elements

I have a pipeline in Beam that uses CoGroupByKey to combine two PCollections: the first one reads from a Pub/Sub subscription, and the second one takes that same PCollection and enriches the data by looking up additional information from a table using JdbcIO.readAll. So there is no way data could be in the second PCollection without also being in the first one.
There is a fixed window of 10 seconds with an event-time-based trigger like the one below:
Repeatedly.forever(
    AfterWatermark.pastEndOfWindow()
        .withEarlyFirings(
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardSeconds(40)))
        .withLateFirings(AfterPane.elementCountAtLeast(1))
);
The issue I am seeing is that when I stop the pipeline using Drain mode, it seems to randomly generate elements for the second PCollection even when no messages have been coming into the input Pub/Sub topic. This also happens occasionally while the pipeline is running, but not consistently; when draining the pipeline, however, I have been able to reproduce it consistently.
Please find the variation in input vs output below:
You are using non-deterministic triggering, which means the output is sensitive to the exact order in which events come in. Another way to look at this is that CoGBK does not wait for both sides to come in; the trigger starts ticking as soon as either side comes in.
For example, let's call your PCollections A and A' respectively, and assume they each have two elements: a1, a2 and a1', a2' (of common provenance).
Suppose a1 and a1' come into the CoGBK, 39 seconds pass, and then a2 comes in (on the same key); another 2 seconds pass, and then a2' comes in. The CoGBK will output ([a1, a2], [a1']) when the 40-second mark hits, and then ([], [a2']) when the window closes. (Even if everything is on the same key, this could happen occasionally whenever there is more than a 40-second walltime delay going through the longer path, and will almost certainly happen for any late data, since each side will fire separately.)
Draining makes things worse; for example, I think all processing-time triggers fire immediately.
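If the output needs to be deterministic, one option (a sketch in the Beam Python SDK for illustration; the question's pipeline is in Java) is to drop the early and late firings and fire only when the watermark passes the end of the window, trading latency for determinism:

import apache_beam as beam
from apache_beam.transforms import trigger, window

# pcoll is a placeholder for the upstream PCollection being windowed.
# Firing only at the watermark means CoGroupByKey emits one pane per window,
# containing whatever has arrived on both sides by that point.
windowed = pcoll | beam.WindowInto(
    window.FixedWindows(10),
    trigger=trigger.AfterWatermark(),
    accumulation_mode=trigger.AccumulationMode.DISCARDING,
)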

Cloud Dataflow what is the exact definition of freshness and latency?

Problem:
When using Cloud Dataflow, we are presented with two metrics (see this page):
system latency
data freshness
These are also available in Stackdriver under the following names (extract from here):
system_lag: The current maximum duration that an item of data has been awaiting processing, in seconds.
data_watermark_age: The age (time since event timestamp) of the most recent item of data that has been fully processed by the pipeline.
But these descriptions are still very vague:
What does "awaiting processing" mean? Is it how long a message waits in Pub/Sub, or the total time it has to wait inside the pipeline?
The "maximum duration": after that maximum item is processed, will the metric be adjusted?
"Time since event timestamp": does that mean that if my event was put into Pub/Sub at timestamp t1 and it flows out of one end of the pipeline at timestamp t2, the pipeline is at t1? I think I can assume that if the metric is at t1, everything before t1 has been processed.
Question:
As these metrics coincide with the semantics of Apache Beam, I would love to see some examples, or at least clearer definitions of these metrics, to make them usable.
These metrics are notoriously tricky. An in-depth dive into how they work can be seen in this talk by a member of the Beam / Dataflow team.
Pipelines are split into a series of computations that occur in memory and computations that require serializing your data to some sort of data store. For example, consider the following pipeline:
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | beam.io.ReadFromPubSub(...)
     | beam.Map(parse_data)
     | beam.Map(into_key_value_pairs)
     | beam.WindowInto(...)
     | beam.GroupByKey()
     | beam.Map(format_data)
     | beam.io.WriteToBigQuery(...))
This pipeline would get broken up into two stages. A stage is a series of computations that can be applied in memory.
The first stage goes from ReadFromPubSub to the GroupByKey operation. Everything in between those two PTransforms can be done in memory. To perform the GroupByKey, the data needs to be written to persistent state (which then acts as a new source for the next stage).
The second stage goes from GroupByKey to WriteToBigQuery. In this case, the data is read from that 'source'.
Each source has its own set of watermarks. The watermarks that you see in the Dataflow UI are the maximum watermarks coming from any source in the pipeline.
--
Answering your questions:
What's awaiting processing?
Answer
It is how long an element waits in PubSub. Specifically, how long an element waits inside any source in the pipeline.
Consider a simpler pipeline:
ReadFromPubSub -> Map -> WriteToBigQuery.
This pipeline does the following operations for each item: Read an item from PubSub -> Operate on it -> Insert to BigQuery -> **Confirm to PubSub that the item has been consumed**.
Now, imagine that the BigQuery service goes down for 5 minutes. This means that PubSub will not receive confirmations for any of the elements for 5 minutes. Therefore, these elements will be stuck in PubSub for a while.
This means that the system latency (and the data freshness metric as well) will balloon up to 5 minutes while BQ writes are blocked.
Does maximum duration get adjusted after processing?
Answer
That's right. For instance, consider the previous pipeline again: BQ is dead for 5 minutes. When BQ comes back, a large batch of items may be written to it, and confirmed as read from PubSub. This will drastically reduce the system latency (and data freshness) back to a few seconds.
What's time since event timestamp?
Answer
An event timestamp can be provided as an attribute of the message to PubSub. It's a bit of a tricky concept, but essentially:
For each stage there is an output data watermark. An output data watermark of T indicates that the computation has processed all elements with event time before T. The latest an output data watermark can be is the earliest input watermark of all its upstream computations. However, the output watermark could be held back if there is some input data that has not yet been processed.
This metric is, of course, heuristic. If some data point comes in very late, then the Data Freshness will be held back.
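As an illustration of supplying that event timestamp, here is a sketch in Python; the attribute name "ts" and the project, topic, and subscription names are hypothetical:

import json
import time

import apache_beam as beam
from google.cloud import pubsub_v1

# Publisher side: attach the event time as a message attribute
# (milliseconds since the epoch is one of the formats Beam accepts).
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholders
publisher.publish(
    topic_path,
    data=json.dumps({"foo": "bar"}).encode("utf-8"),
    ts=str(int(time.time() * 1000)),
)

# Pipeline side (p is the Pipeline object from the snippet above): Beam uses
# this attribute as each element's event time and derives the watermark,
# and hence data freshness, from it.
events = p | beam.io.ReadFromPubSub(
    subscription="projects/my-project/subscriptions/my-sub",
    timestamp_attribute="ts",
)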
--
I'd advise you to check out the talk by Slava. It goes over all these concepts.

Question about SPSS Modeler (there is an obstacle to making the stream run automatically)

I have an SPSS Modeler stream which is used and updated every week to generate a certain dataset. The raw data for this stream is also renewed on a weekly basis.
In part of this stream there is a chunk of nodes that has to be modified and updated manually every week; the sequence of this part is: Type Node => Restructure Node => Aggregate Node
To simplify the explanation of those nodes' role, I drew an image of them as below.
Because the original raw data changes every week, the range of the Unit value above always varies, sometimes more than 6 (maybe 100), other times less than 6 (maybe 3). That is why, until now, somebody has had to modify and update that chunk of nodes every week. *The Unit value has a certain limit (300 for now)
However, we are now aiming to run this stream automatically, without any manual intervention, so we need to customize that part to work perfectly and automatically. Please help; any guidance will be appreciated, thanks!
To automate this, I suggest using Global nodes combined with CLEM scripts inside the execution tab (default script). I have a stream that calculates the first date and the last date, and those variables are used to rename files at the end of the execution. I think you could use something similar, as explained here:
1) Create Derive nodes to bring in the unit values used in the weekly stream
2) Save this information in a table named 'count_variable'
3) Use a Global node named Global with a query similar to this:
#GLOBAL_MAX(variable created in (2)) (only to record the number of variables; step 2 created a table with only 1 value, so GLOBAL_MAX will only bring back the number of variables).
4) The script inside the execution tab will be similar to this:
execute count_variable
var tabledata
var fn
set tabledata = count_variable.output
set count_variable = value tabledata at 1 1
execute Global
5) You can now use the information about the variables just by using the already created "count_variable".
It's not easy to explain just by typing, but I hope to have been helpful.
I think there is a better, simpler and more effective (yet riskier, due to the node's requirements on its input data) solution to your problem. It is called the Transpose node and it does exactly that: pivot your table. It is only available from version 18.1 on. Here's an example:
https://developer.ibm.com/answers/questions/389161/how-does-new-feature-partial-transpose-work-in-sps/

How to profile CPU used by method/function in Swift [duplicate]

This question already has answers here:
How to measure total time spent in a function?
(5 answers)
Closed 6 years ago.
The app I'm developing does some intensive processing and I'd like to understand where the time is being used. The Time Profiler in Instruments shows the tree of calls but I can't see how to get the information I need.
My app has a structure like this:
A
    B
        B1
            E1
        B2
        B3
            E1
    C
        C1
        C2
            E1
        C3
    D
        D1
        D2
            E1
Now, method E1 is called from a number of places and I'd like to see how much CPU it is using in total. However, from the profiler output I can only see the time taken in E1 down each branch of the tree. Is there any way of getting a report by method/function regardless of where it is called from, e.g. summing the total time spent in E1?
Thanks,
Julian
The link posted by naglerrr is the answer - How to measure total time spent in a function?
You need to find the function in the profiler tree, right click and select "Focus on Calls Made By". Check the linked answer for more information.
