How are Dataflow bundles created after GroupBy/Combine?

Setup:
read from pubsub -> window of 30s -> group by user -> combine -> write to cloud datastore
Problem:
I'm seeing DatastoreIO writer errors because objects with the same key are present in the same transaction.
Question:
I want to understand how my pipeline combines results into bundles after a group by/combine operation. I would expect a bundle to be created for every window after the combine. But apparently, a bundle can contain two or more occurrences of the same user?
Can re-execution (retries) of bundles cause this behavior?
Is this bundling dependent on the runner?
Is deduplication an option? If so, how would I best approach that?
Note that I'm not looking for a replacement for the Datastore writer at the end of the pipeline; I already know that we can use a different strategy. I'm merely trying to understand how the bundling happens.

There are two answers to your question. One is specific to your use case, and the other is in general about bundling / windowing in streaming.
Specific to your pipeline
I am assuming that the 'key' for Datastore is the User ID? In that case, if you have events from the same user in more than one window, your GroupByKey or Combine operations will have one separate element for every pair of user+window.
So the question is: What are you trying to insert into datastore?
An individual user's resulting aggregate over all time? In that case, you'd need to use a Global Window.
A user's resulting aggregate for every 30 seconds in time? Then you need to use the window as part of the key you use to insert into Datastore (see the sketch below). Does that help / make sense?
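For the second option, a rough sketch of what that could look like, assuming the Beam Java SDK, a per-user Long aggregate coming out of your Combine, and a hypothetical UserWindowAggregate entity kind (this is not your exact pipeline, just an illustration of folding the window into the key):

```java
import com.google.datastore.v1.Entity;
import com.google.datastore.v1.Key;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.values.KV;

import static com.google.datastore.v1.client.DatastoreHelper.makeKey;
import static com.google.datastore.v1.client.DatastoreHelper.makeValue;

class ToDatastoreEntity extends DoFn<KV<String, Long>, Entity> {
  @ProcessElement
  public void processElement(ProcessContext c, IntervalWindow window) {
    String userId = c.element().getKey();
    // Hypothetical key scheme: "<userId>#<windowEnd>" keeps each 30s aggregate distinct,
    // so two windows of the same user never share a Datastore key.
    Key key = makeKey("UserWindowAggregate",
        userId + "#" + window.end().getMillis()).build();
    Entity entity = Entity.newBuilder()
        .setKey(key)
        .putProperties("count", makeValue(c.element().getValue()).build())
        .build();
    c.output(entity);
  }
}
```

With a key scheme like this, aggregates for the same user in different windows map to different Datastore entities, so they can no longer collide inside one commit even if they land in the same bundle.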
Happy to help you design your pipeline to do what you want. Chat with me in the comments or via SO chat.
The larger question about bundling of data
Bundling strategies will vary by runner. In Dataflow, you should consider the following two factors:
Every worker is assigned a key range. Elements for the same key will be processed by the same worker.
Windows belong to single elements; but a bundle may contain elements from multiple windows. As an example, if the data freshness metric makes a big jump*, a number of windows may be triggered - and elements of the same key in different windows would be processed in the same bundle.
* When can data freshness jump suddenly? A stream with a single element that has a very old timestamp and is very slow to process may hold the watermark back for a long time. Once this element is processed, the watermark may jump a lot, to the next-oldest element (check out this lecture on watermarks).

Related

JSR-352: Save chunk checkpoint after ItemReader reads items

Using JSR-352 batch job along with Java EE, I'm trying to process items on chunk from a source in partitions. On retriable exception I want to be able to return to a past checkpoint, so I could get items already read from the source.
The nature of the source is such that in a parallel environment I cannot request the same chunk of items twice. The only feasible way to get the exact same items when reading twice is to restart the whole job.
I need to write a generic ItemReader that can manage sources of this kind (so it may be reusable). This basically means that I want to find a nice and clear design/implementation for such a reader.
To achieve the required ItemReader behavior for processing the source, what I currently do is fetch the items at the beginning of readItem() if they have not yet been fetched for the current chunk, and then iterate through them one by one. To manage retriable exceptions I'm trying to use the checkpoint properties of the ItemReader.
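For reference, a rough sketch of that current approach (JSR-352 AbstractItemReader; the item type, the count-based checkpoint, and fetchNextBatchFromSource are illustrative placeholders, not from the question):

```java
import java.io.Serializable;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import javax.batch.api.chunk.AbstractItemReader;

public class ChunkBufferingReader extends AbstractItemReader {

    private final Deque<String> currentChunk = new ArrayDeque<>();
    private long itemsRead;

    @Override
    public void open(Serializable checkpoint) {
        // The checkpoint (here just a count of committed items) is loaded once,
        // before any readItem() call -- which is exactly the limitation discussed below.
        itemsRead = (checkpoint != null) ? (Long) checkpoint : 0L;
    }

    @Override
    public Object readItem() throws Exception {
        if (currentChunk.isEmpty()) {
            // Fetch the whole chunk's worth of items lazily, on the first read of the chunk.
            currentChunk.addAll(fetchNextBatchFromSource(itemsRead)); // hypothetical source call
            if (currentChunk.isEmpty()) {
                return null; // end of data
            }
        }
        itemsRead++;
        return currentChunk.poll();
    }

    @Override
    public Serializable checkpointInfo() {
        // Only persisted after the chunk commits successfully.
        return itemsRead;
    }

    private List<String> fetchNextBatchFromSource(long offset) {
        // Placeholder for the source-specific fetch; not part of the original question.
        return Collections.emptyList();
    }
}
```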
The problem I'm facing is that checkpoints are loaded in the open(...) method, before readItem(), and saved only after the chunk has succeeded. This makes it impossible to save all the items of the chunk into a valid checkpoint before I actually have to retry the chunk after a retriable exception.
My question: is there a way to augment the behavior of checkpoints so they are saved after the initial readItem()? Or do you know any other clean strategy that avoids additional listeners or userTransientData, which would make the reader hard to integrate into other batch job steps with the same read behavior?

In Dataflow with PubsubIO is there any possibility of late data in a global window?

I am about to start developing programs with Google Cloud Pub/Sub. I just wanted to confirm this once.
From the Beam documentation, data loss can only occur if data is declared late by Pub/Sub. Is it safe to assume that data will always be delivered without any message drops (late data) when using a global window?
From the concepts of watermarks and lateness, I have come to the conclusion that these are critical in conditions where custom windowing with event-time triggers is applied to the incoming data.
When you're working with streaming data, choosing a global window basically means that you are going to completely ignore event time. Instead, you will be taking snapshots of your data in processing time (that is, as it arrives) using triggers. Therefore, you can no longer define data as "late" (nor "early" or "on time", for that matter).
You should choose this approach if you are not interested in the time at which these events actually happened but instead just want to group them according to the order in which they were observed. I would suggest that you go through this great article on streaming data processing, especially the part under "When/Where: Processing-time windows", which includes some nice visuals comparing different windowing strategies.
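A minimal sketch of that processing-time approach (assuming the Beam Java SDK; the subscription name and the 30-second delay are placeholders):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.GlobalWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class GlobalWindowSnapshots {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    PCollection<String> snapshots = pipeline
        .apply(PubsubIO.readStrings()
            .fromSubscription("projects/my-project/subscriptions/my-sub")) // placeholder
        .apply(Window.<String>into(new GlobalWindows())
            // Fire a pane 30 seconds (processing time) after the first element of the
            // pane arrives, repeatedly; event-time lateness never comes into play here.
            .triggering(Repeatedly.forever(
                AfterProcessingTime.pastFirstElementInPane()
                    .plusDelayOf(Duration.standardSeconds(30))))
            .withAllowedLateness(Duration.ZERO)
            .discardingFiredPanes());

    pipeline.run();
  }
}
```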

Marking a key as complete in a GroupBy | Dataflow Streaming Pipeline

To our streaming pipeline, we want to submit unique GCS files, each file containing information for multiple events, and each event containing a key (for example, device_id). As part of the processing, we want to shuffle by this device_id so as to achieve some form of worker-to-device_id affinity (more background on why we want to do this is in this other SO question). Once all events from the same file are complete, we want to reduce (GroupBy) by their source GCS file (which we will make a property of the event itself, something like file_id) and finally write the output to GCS (could be multiple files).
The reason we want to do the final GroupBy is because we want to notify an external service once a specific input file has completed processing. The only problem with this approach is that since the data is shuffled by the device_id and then grouped at the end by the file_id, there is no way to guarantee that all data from a specific file_id has completed processing.
Is there something we could do about it? I understand that Dataflow provides exactly-once guarantees, which means all the events will eventually be processed, but is there a way to set a deterministic trigger to say that all data for a specific key has been grouped?
EDIT
I wanted to highlight the broader problem we are facing here. The ability to mark file-level completeness would help us checkpoint different stages of the data as seen by external consumers. For example, this would allow us to trigger per-hour or per-day completeness, which is critical for us to generate reports for that window. Given that these stages/barriers (hour/day) are clearly defined on the input (GCS files are date/hour partitioned), it is only natural to expect the same of the output. But with Dataflow's model, this seems impossible.
Similarly, although Dataflow guarantees exactly-once processing, there will be cases where the entire pipeline needs to be restarted because something went horribly wrong. In those cases, it is almost impossible to restart from the correct input marker, since there is no guarantee that what was already consumed has been completely flushed out. The DRAIN mode tries to achieve this, but as mentioned, if the entire pipeline is messed up and draining itself cannot make progress, there is no way to know which part of the source should be the starting point.
We are considering using Spark, since its micro-batch-based streaming model seems to fit better. We would still like to explore Dataflow if possible, but it seems that we won't be able to achieve this without storing these checkpoints externally from within the application. If there is an alternative way of providing these guarantees with Dataflow, that would be great. The idea behind broadening this question was to see if we are missing an alternate perspective that would solve our problem.
Thanks
This is actually tricky. Neither Beam nor Dataflow has a notion of a per-key watermark, and it would be difficult to implement that level of granularity.
One idea would be to use a stateful DoFn instead of the second shuffle. This DoFn would need to receive the number of elements expected in the file (from either a side-input or some special value on the main input). Then it could count the number of elements it had processed, and only output that everything has been processed once it had seen that number of elements.
This would be assuming that the expected number of elements can be determined ahead of time, etc.
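A rough sketch of that idea (Beam Java SDK; FileEvent, its expectedCount field, and the output signal are illustrative assumptions, not a confirmed design):

```java
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

class FileCompletionFn extends DoFn<KV<String, FileCompletionFn.FileEvent>, String> {

  // Per-key (file_id) count of elements processed so far.
  @StateId("processedCount")
  private final StateSpec<ValueState<Long>> processedCount = StateSpecs.value(VarLongCoder.of());

  @ProcessElement
  public void processElement(ProcessContext c,
                             @StateId("processedCount") ValueState<Long> count) {
    Long soFar = count.read();
    long seen = (soFar == null ? 0L : soFar) + 1;
    count.write(seen);

    // expectedCount is assumed to travel with every element (e.g. from a side input
    // or a manifest written alongside the GCS file), as the answer describes.
    if (seen == c.element().getValue().expectedCount) {
      c.output(c.element().getKey()); // signal: this file_id is fully processed
    }
  }

  // Illustrative event type; stands in for whatever the pipeline actually carries.
  static class FileEvent implements java.io.Serializable {
    long expectedCount;
  }
}
```

The completion signal emitted here is what you would then use to notify the external service, keeping in mind the caveat above that the expected count must be known ahead of time.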

Making use of workers in Custom Sink

I have a custom sink which will publish the final result from a pipeline to a repository.
I am getting the inputs for this pipeline from BigQuery and GCS.
The custom Writer in the sink is called for each bundle on all workers. The Writer just collects the objects to be pushed and returns them as part of the WriteResult. Then I finally merge these records in CustomWriteOperation.finalize() and push the result into my repository.
This works fine for smaller files. But my repository will not accept the result if it is greater than 5 MB. Also, it will not accept more than 20 writes per hour.
If I push the result from each worker, then the write-rate limit will be violated. If I write it in CustomWriteOperation.finalize(), then it may violate the size limit, i.e. 5 MB.
The current approach is to write in chunks in CustomWriteOperation.finalize(). As this is not executed on many workers, it might delay my job. How can I make use of workers in finalize(), and how can I specify the number of workers to be used inside a pipeline for a specific job (i.e., the write job)?
Or is there any better approach?
The sink API doesn't explicitly allow tuning of bundle size.
One workaround might be to use a ParDo to group records into bundles. For example, you can use a DoFn to randomly assign each record a key between 1 and N. You could then use a GroupByKey to group the records into KV<Integer, Iterable<Records>>. This should produce N groups of roughly the same size.
As a result, an invocation of Sink.Writer.write could write all the records with the same key at once, and since write is invoked in parallel, the bundles would be written in parallel.
However, since a given KV pair could be processed multiple times or in multiple workers at the same time, you will need to implement some mechanism to create a lock so that you only try to write each group of records once.
You will also need to handle failures and retries.
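A minimal sketch of that workaround (written against the Beam-style Java API; the Record type and the choice of N are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class RandomBundles {
  static PCollection<KV<Integer, Iterable<Record>>> bundle(
      PCollection<Record> records, final int numBundles) {
    return records
        .apply("AssignRandomKey", ParDo.of(new DoFn<Record, KV<Integer, Record>>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            // A random key in [0, numBundles) spreads records into ~numBundles groups
            // of roughly equal size.
            c.output(KV.of(ThreadLocalRandom.current().nextInt(numBundles), c.element()));
          }
        }))
        .apply(GroupByKey.<Integer, Record>create());
  }

  // Illustrative record type; stands in for whatever the pipeline actually carries.
  static class Record implements java.io.Serializable {}
}
```

Each resulting KV<Integer, Iterable<Record>> is then one candidate write to the repository, subject to the locking and retry caveats mentioned above.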
So, if I understand correctly, you have a repository that
Accepts no more than X write operations per hour (I suppose if you try to do more, you get an error from the API you're writing to), and
Each write operation can be no bigger than Y in size (with similar error reporting).
That means it is not possible to write more than X*Y data in 1 hour, so I suppose, if you want to write more than that, you would want your pipeline to wait longer than 1 hour.
Dataflow currently does not provide built-in support for enforcing either of these limits, however it seems like you should be able to simply do retries with randomized exponential back-off to get around the first limitation (here's a good discussion), and it only remains to make sure individual writes are not too big.
Limiting individual writes can be done in your Writer class in the custom sink. You can maintain a buffer of records, have write() add to the buffer and flush it by issuing the API call (with exponential back-off, as mentioned) when it grows to just below the allowed write size, and flush one more time in close().
This way you will write bundles that are as big as possible but not bigger, and if you add retry logic, throttling will also be respected.
Overall, this seems to fit well in the Sink API.
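A rough sketch of such a buffering Writer (the 5 MB limit comes from the question; Record, sizeOf(), pushToRepository(), and the back-off numbers are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

class BufferingRepositoryWriter {
  private static final long MAX_WRITE_BYTES = 5L * 1024 * 1024;

  private final List<Record> buffer = new ArrayList<>();
  private long bufferedBytes = 0;

  /** Called once per record from the sink's Writer.write(). */
  public void write(Record record) throws InterruptedException {
    long size = sizeOf(record);
    if (bufferedBytes + size > MAX_WRITE_BYTES) {
      flushWithBackoff();
    }
    buffer.add(record);
    bufferedBytes += size;
  }

  /** Called from the Writer's close(); flushes whatever is left. */
  public void close() throws InterruptedException {
    if (!buffer.isEmpty()) {
      flushWithBackoff();
    }
  }

  private void flushWithBackoff() throws InterruptedException {
    long sleepMillis = 1_000;
    for (int attempt = 0; attempt < 6; attempt++) {
      try {
        pushToRepository(buffer);   // hypothetical repository API call
        buffer.clear();
        bufferedBytes = 0;
        return;
      } catch (RuntimeException retriable) {
        // Randomized exponential back-off, as suggested above.
        Thread.sleep(sleepMillis + ThreadLocalRandom.current().nextLong(sleepMillis));
        sleepMillis *= 2;
      }
    }
    throw new RuntimeException("Giving up after repeated flush failures");
  }

  private long sizeOf(Record record) { return record.approximateSizeBytes(); }

  private void pushToRepository(List<Record> records) { /* repository-specific */ }

  static class Record {
    long approximateSizeBytes() { return 0; }
  }
}
```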
I am working with Sam on this, and here are the actual limits imposed by our target system: 100 GB per API call, and a maximum of 25 API calls per day.
Given these limits, the retry method with back-off logic may cause the upload to take many days to complete, since we don't have control over the number of workers.
Another approach would be to leverage FileBasedSink to write many files in parallel. Once all these files are written, finalize (or copyToOutputFiles) can combine files until the total size reaches 100 GB and push them to the target system. This way we leverage parallelization from the writer threads and honor the limits of the target system.
Thoughts on this, or any other ideas?

Using Asana events API for task monitoring

I'm trying to use the Asana events API to track changes in one of our projects, more specifically task movement between sections.
Our workflow is as follows:
We have a project divided into sections. Each section represents a step in the process. When one step is done, the task is moved to the section below.
When a given task reaches a specific step we want to pass it to an external system. It doesn't have to be the full info - basic things + url would be enough.
My idea was to use https://asana.com/developers/api-reference/events to implement a pull-based mechanism to obtain recent changes in tasks.
My problems are:
The events API seems to generate a lot of information, but not the useful kind. Moving one single task between sections generates 3 events (2 "changed" actions, one "added" action marked as "system"). During work, many tasks will be moved between many sections, but I'm only interested in one specific section. How can I find items moved into that section? I know that there's a resource->text field, but it gives me something like "moved from X to Y (ProjectName)", which is probably a human-readable message that might change in the future.
According to the documentation, the resource key should contain task data, but the only info I see is id and name, which is not enough for my case. Is it possible to get hold of tags using the events API? Or any other data that would allow us to classify tasks in our system?
Can I listen for events for a specific section instead of tracking the whole project?
Ideas or suggestions are welcome. Thanks
In short:
Yes, answer below.
Yes, answer below.
Unfortunately not; sections are really tasks with a bit of extra functionality. Currently the API represents the relationship between sections and the tasks in them via the memberships field on a task, not the other way around.
This should help you achieve what you are looking for, I think.
Let's say you have a project Ninja Pipeline with 2 sections, Novice & Expert. Keep in mind that sections are really just tasks whose name ends with a : character, with the extra feature that tasks can belong to them.
Events "bubble up" from children to their parents; therefore, when you the Wombat task in this project form the Novice section to Expert you get 3 events. Starting from the top level going down, they are:
The Ninja Pipeline project changed.
The Wombat task changed.
A story was added to the Wombat task.
For your use case, the most interesting event is the second one, about the task changing. The data you really want to know, now that the task has changed, is the value of the memberships field on the task. If it is now a member of the section you are interested in, take action; otherwise, ignore it.
By default, many resources in the API are represented in compact form which usually only includes the id & name. Use the input/output options in order to expand objects or select specific fields you need.
In this case your best bet is to include the query parameter opt_expand=resource when polling events on the project. This should expand all of the resource objects in the payload. For events with type: "task", if resource.memberships[0].section.id matches the ID of the section you care about, take action; otherwise, ignore the event.
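As a hedged illustration (not an official client), polling the events endpoint with opt_expand=resource could look roughly like this in Java; the project ID, sync token, token environment variable, and JSON handling are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsanaEventPoller {
  public static void main(String[] args) throws Exception {
    String projectId = "PROJECT_ID"; // placeholder
    String syncToken = "SYNC_TOKEN"; // from a previous poll of the events endpoint
    String url = "https://app.asana.com/api/1.0/events"
        + "?resource=" + projectId
        + "&sync=" + syncToken
        + "&opt_expand=resource";

    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        .header("Authorization", "Bearer " + System.getenv("ASANA_TOKEN"))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

    // Parse response.body() as JSON, keep events where type == "task" and
    // resource.memberships[0].section.id matches the section of interest.
    System.out.println(response.body());
  }
}
```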
