Apache Beam: Reading in PCollection as PBegin for a pipeline - google-cloud-dataflow

I'm debugging a Beam pipeline, and my end goal is to write all of the strings in a PCollection to a text file.
I've set a breakpoint at the point just after the PCollection I want to inspect is created, and what I've been trying to do is create a new Pipeline that:
1. Reads in this output PCollection as the initial input
2. Prints it to a file (using `TextIO.write().to("/Users/my/local/fp")`)
I'm struggling with #1: how to read in the PCollection as the initial input.
The skeleton of what I've been trying:
Pipeline p2 = Pipeline.create();
p2.apply(/* READ IN THE PCOLLECTION HERE */)
  .apply(TextIO.write().to("/Users/my/local/fp"));
p2.run();
Any thoughts or suggestions would be appreciated.

In order to read a PCollection into your pipeline as input, you need to read it from a source, i.e. some data stored in BigQuery, Google Cloud Storage, etc. There are specific source transforms you can use to read from each of these locations. Depending on where you have stored your data, you will need to use the correct source and pass in the relevant parameters (i.e. the GCS path, the BigQuery table).
Please take a look at the Minimal Word Count example on the Apache Beam website (full source on GitHub). I suggest starting from this code and iterating on it until you build the pipeline you need.
In this example, files are read from GCS:
p.apply(TextIO.read().from("gs://apache-beam-samples/shakespeare/*"))
Please also see this guide on using IOs, as well as this list of Beam IO transforms. If you just want a basic example working, you can use Create.of to read from variables in your program, as in the sketch below.
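For the debugging goal in the question, a minimal sketch might look like the following, assuming you can materialize the strings you want to inspect into an in-memory list (the values and the output path here are just placeholders):

import java.util.Arrays;
import java.util.List;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.Create;

public class DebugWrite {
  public static void main(String[] args) {
    // Placeholder values standing in for the strings you want to inspect.
    List<String> debugLines = Arrays.asList("first", "second", "third");

    Pipeline p2 = Pipeline.create();
    p2.apply(Create.of(debugLines))                    // PBegin -> PCollection<String>
      .apply(TextIO.write().to("/Users/my/local/fp")); // writes sharded text files with this prefix
    p2.run().waitUntilFinish();
  }
}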

Related

Apache Beam streaming and processing multiple files at the same time and windowed joins?

I just read this article
https://medium.com/bb-tutorials-and-thoughts/how-to-create-a-streaming-job-on-gcp-dataflow-a71b9a28e432
What I am truly missing here, though, is this: if I drop 50 files and this is a streaming job like the article says (always live), then won't the output be a windowed join of all the files?
If not, what would it look like, and how would it change to be a windowed join? I am trying to get a picture in my head of both worlds:
1. A windowed join in a streaming job (outputting 1 file for all the input files)
2. A non-windowed join in a streaming job (outputting 1 file PER input file)
Can anyone shed light on that article and what would change?
I also read something about 'bounded PCollections'. In that case, perhaps windowing is not needed, since inside the stream it is sort of like a batch: until we have the entire PCollection processed, we do not move to the next stage? Perhaps if the article is using bounded PCollections, then all input files map 1 to 1 with output files?
How can one tell from inside a function whether I am receiving data from a bounded or unbounded collection? Is there some other way I can tell that? Are bounded collections even possible in an Apache Beam streaming job?
I'll try to answer some of your questions.
What I am truly missing here, though, is this: if I drop 50 files and this is a streaming job like the article says (always live), then won't the output be a windowed join of all the files?
Input (source) and output (sink) are not directly linked, so this depends on what you do in your pipeline. TextIO.watchForNewFiles is a streaming source transform that keeps observing a given file location, reading new files as they appear and outputting the lines read from them. Hence the output from this step will be a PCollection<String> that streams lines of text read from those files.
Windowing is set next; this decides how your data will be bundled into windows. For this pipeline, the article chooses FixedWindows of one minute. The timestamp will be the time the file was observed.
A sink transform is applied at the end of your pipeline (sometimes sinks also produce outputs, so it might not really be the end). In this case they choose TextIO.write(), which writes lines of Strings from an input PCollection<String> to output text files.
So whether the output will include data from all input files or not depends on how your input files are processed and how they are bundled into windows within the pipeline.
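To make the shape of such a pipeline concrete, here is a minimal sketch in the Java SDK, assuming p is your Pipeline; the file patterns, poll interval, and shard count are placeholders rather than values from the article:

import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.Watch;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

p.apply(TextIO.read()
        .from("gs://my-bucket/input/*.csv")                // placeholder file pattern
        .watchForNewFiles(Duration.standardSeconds(30),    // poll for new files every 30 seconds
                          Watch.Growth.never()))           // keep watching indefinitely
 .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
 .apply(TextIO.write()
        .to("gs://my-bucket/output/lines")                 // placeholder output prefix
        .withWindowedWrites()                              // one set of files per window
        .withNumShards(1));                                // a fixed shard count is required for unbounded input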
I also read something about 'bounded PCollections'. In that case, perhaps windowing is not needed, since inside the stream it is sort of like a batch: until we have the entire PCollection processed, we do not move to the next stage? Perhaps if the article is using bounded PCollections, then all input files map 1 to 1 with output files?
You could use bounded inputs in a streaming pipeline. In a streaming pipeline, progress is tracked through a watermark. If you use a bounded input (for example, a bounded source), the watermark will simply jump from 0 to infinity once the input is exhausted instead of progressing gradually. Hence your pipeline might just end instead of waiting for more data.
How can one tell from inside a function whether I am receiving data from a bounded or unbounded collection? Is there some other way I can tell that? Are bounded collections even possible in an Apache Beam streaming job?
It is definitely possible, as I mentioned above. If you have access to the input PCollection, you can use the isBounded() method to determine whether it is bounded. See here for an example. You have access to input PCollections when expanding PTransforms (hence during job submission); I don't believe you have access to this at runtime.
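As a small illustration (a hypothetical pass-through transform, not taken from the linked example), the check is available inside expand(), i.e. at pipeline-construction time:

import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.values.PCollection;

// Hypothetical transform that logs boundedness while the pipeline is being built.
class LogBoundedness extends PTransform<PCollection<String>, PCollection<String>> {
  @Override
  public PCollection<String> expand(PCollection<String> input) {
    if (input.isBounded() == PCollection.IsBounded.BOUNDED) {
      System.out.println("Input PCollection is BOUNDED");
    } else {
      System.out.println("Input PCollection is UNBOUNDED");
    }
    return input; // pass the collection through unchanged
  }
}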

Apache Beam - Parallelize Google Cloud Storage Blob Downloads While Maintaining Grouping of Blobs

I’d like to be able to maintain a grouping of entities within a single PCollection element, but parallelize the fetching of those entities from Google Cloud Storage (GCS). I.e. PCollection<Iterable<String>> --> PCollection<Iterable<String>>, where the starting PCollection is an Iterable of file paths and the resulting PCollection is an Iterable of file contents. Alternatively, PCollection<String> --> PCollection<Iterable<String>> would also work and perhaps even be preferable, where the starting PCollection is a glob pattern and the resulting PCollection is an Iterable of file contents which matched the glob.
My use case is that at a point in my pipeline I have as input a PCollection<String>. Each element of the PCollection is a GCS glob pattern. It’s important that files which match the glob be grouped together, because the contents of the files, once all files in a group are read, need to be grouped downstream in the pipeline. I originally tried using FileIO.matchAll and a subsequent GroupByKey. However, the matchAll, window, and GroupByKey combination lacked any guarantee that all files matching the glob would be read and in the same window before performing the GroupByKey transform (though I may be misunderstanding windowing). It’s possible to achieve the desired results if a WindowFn with a large time span is applied, but it’s still probabilistic rather than a guarantee that all files will be read before grouping. It’s also the main goal of my pipeline to maintain the lowest possible latency.
So my next, and currently operational, plan was to use an AsyncHttpClient to fan out fetching file contents via GCS HTTP API. I feel like this goes against the grain in Beam and is likely sub-optimal in terms of parallelization.
So I’ve started investigating SplittableDoFn. My current plan is to allow splitting such that each entity in the input Iterable (i.e. each matched file from the glob pattern) could be processed separately. I've been able to modify FileIO#MatchFn (defined here in the Java SDK) to provide the mechanics for a PCollection<String> -> PCollection<Iterable<String>> transform between input of GCS glob patterns and output of Iterables of matches for the glob.
The challenge I’ve encountered is: how do I go about grouping/gathering the split invocations back into a single output value in my DoFn? I’ve tried using stateful processing with a BagState to collect file contents along the way, but I realized partway along that the ProcessElement method of a splittable DoFn may only accept ProcessContext and Restriction tuples and no other args, therefore no StateId args referring to a StateSpec (it throws an invalid argument error at runtime).
I noticed in the FilePatternWatcher example in the official SDF proposal doc that a custom tracker was created wherein FilePath objects are kept in a set and presumably added to the set via tryClaim. This seems as though it could work for my use case, but I don’t see/understand how to go about implementing a #SplitRestriction method using a custom RestrictionTracker.
I would be very appreciative if anyone were able to offer advice. I have no preference for any particular solution, only that I want to achieve the ability to maintain a grouping of entities within a single PCollection element, but parallelize the fetching of those entities from Google Cloud Storage (GCS).
Would joining the output PCollections help you?
PCollectionList
    .of(collectionOne)
    .and(collectionTwo)
    .and(collectionThree)
    ...
    .apply(Flatten.pCollections());

Apache Beam / Dataflow transform advice

I have a batch data-parsing job where the input is a list of zip files, and each zip file has numerous small text files to parse. It is on the order of 100 GB compressed across 50 zip files, and each zip has 1 million text files.
I am using Apache Beam's package in Python and running the job through Dataflow.
I wrote it as
1. Create a collection from the list of zip file paths
2. FlatMap with a function that yields for every text file inside the zip (each output is a bytes string of all the bytes read from one text file)
3. ParDo with a method that yields for every row in the data from the text file / bytes read
4. ...do other stuff like insert each row into the relevant table of some database
I notice this is too slow: CPU resources are only a few % utilised. I suspect that each node is getting a zip file, but the work is not distributed among the local CPUs, so it's just one CPU working per node. I don't understand why that is the case, considering I used FlatMap.
The Dataflow runner makes use of fusion optimisation:
'...Such optimizations can include fusing multiple steps or transforms in your pipeline's execution graph into single steps.'
If you have a transform which in its DoFn has a large fan-out, which I suspect the Create transform in your description does, then you may want to manually break fusion by introducing a shuffle stage into your pipeline, as described in the linked documentation (see the sketch below).
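As a sketch of what that shuffle stage can look like (shown with the Java SDK's Reshuffle; the Python SDK used in the question has the analogous beam.Reshuffle() transform, and ExtractTextFilesFn / ParseRowsFn below are hypothetical stand-ins for the question's FlatMap and ParDo steps):

import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Reshuffle;

p.apply(Create.of(zipFilePaths))             // one element per zip file path (assumed in-memory list)
 .apply(ParDo.of(new ExtractTextFilesFn()))  // large fan-out: one element per text file
 .apply(Reshuffle.viaRandomKey())            // breaks fusion so the fanned-out elements are redistributed
 .apply(ParDo.of(new ParseRowsFn()));        // now runs in parallel across workers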

Beam.io.WriteToPubSub throws error "The given pcoll PDone[WriteToPubSub/Write/NativeWrite.None] is not a dict, an iterable or a PCollection"

I'm getting an error whenever I use "WriteToPubSub". The code below is me trying to debug the issue. My actual code is trying to take data from failures of WriteToBigQuery in order to push it to a deadletter pubsub topic. But when I tried to do that I kept encountering the error below.
I am running Apache Beam 2.27, Python 3.8
import apache_beam as beam
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from apache_beam.runners import DataflowRunner
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.options import pipeline_options
from apache_beam.options.pipeline_options import GoogleCloudOptions
import google.auth
import json
import pytz
# Setting up the Apache Beam pipeline options.
options = pipeline_options.PipelineOptions(flags=[])
# Sets the project to the default project in your current Google Cloud environment.
_, options.view_as(GoogleCloudOptions).project = google.auth.default()
# Sets the Google Cloud Region in which Cloud Dataflow runs.
options.view_as(GoogleCloudOptions).region = 'asia-east1'
# Sets the job name
options.view_as(GoogleCloudOptions).job_name = 'data_ingest'
# IMPORTANT! Adjust the following to choose a Cloud Storage location.
dataflow_gcs_location = '[REDACTED]'
# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.
options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location
# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.
options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location
# The directory to store the output files of the job.
output_gcs_location = '%s/output' % dataflow_gcs_location
ib.options.recording_duration = '1m'
# The Google Cloud PubSub topic for this example.
topic = "[REDACTED]"
output_topic = "[REDACTED]"
subscription = "[REDACTED]"
deadletter_topic = "[REDACTED]"
class PrintValue(beam.DoFn):
    def process(self, element):
        print(element)
        return [element]
p = beam.Pipeline(InteractiveRunner(),options=options)
data = p | beam.io.ReadFromPubSub(topic=topic) | beam.ParDo(PrintValue()) | beam.io.WriteToPubSub(topic=deadletter_topic)
ib.show(data, include_window_info=False)
The error given is
ValueError: The given pcoll PDone[WriteToPubSub/Write/NativeWrite.None] is not a dict, an iterable or a PCollection.
Can someone spot what the problem is?
No matter what I do, WriteToPubSub says it's receiving PDone.
EDIT:
If i use p.run(), I get the following error instead:
'PDone' object has no attribute 'to_runner_api'
In both cases, the pipeline does not try to run, it immediately errors out.
EDIT:
I've realised the problem
p = beam.Pipeline(InteractiveRunner(),options=options)
It is this line. If I remove the InteractiveRunner, everything works. Not sure why.
Beam Terminology
Apache Beam has some base concepts that we should adhere to while leveraging the power of this programming model.
Pipeline
In simple terms, a pipeline is a series of tasks for a desired output. It can be as simple as a linear flow or could have complex branching of tasks. The fundamental concept is: read from input source(s), perform some transformations, and emit to output(s).
Mathematically, a Beam pipeline is just a directed acyclic graph (DAG) of tasks.
PCollection
In simple terms, a PCollection is an immutable bag of elements which can be distributed across machines. Each step in a Beam pipeline has a PCollection as its input and output (apart from sources and sinks).
A PCollection is a powerful distributed data structure that a Beam pipeline operates on. It can be bounded or unbounded based on your source type.
PTransforms
In simple terms, transforms are the operations of your pipeline. A transform provides the processing logic, and this logic is applied to each element of one or more input PCollections.
Example : PTransform<PCollection<X>,PCollection<Y>> will transform X to Y.
Based on the processing paradigm, Beam provides us multiple core transforms: ParDo, GroupByKey, Flatten, Combine, etc. (a small ParDo illustration follows).
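As an illustration in the Java SDK (words is an assumed PCollection<String>, not something from the question's code), the ParDo below is itself a PTransform<PCollection<String>, PCollection<String>>:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

// Upper-case every element of the assumed PCollection<String> "words".
PCollection<String> upper = words.apply(
    ParDo.of(new DoFn<String, String>() {
      @ProcessElement
      public void processElement(@Element String word, OutputReceiver<String> out) {
        out.output(word.toUpperCase());
      }
    }));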
I/O Transforms
When you create a pipeline, you need a data source to read data from, such as a file or a database. Likewise, you want to emit your result data to an external storage system, such as a topic or an object store. The transforms which deal with external input and output are I/O transforms.
Usually, for an external system, you will have the following:
Source: A PTransform to read data from the external system (like a file or a DB). It expects a PBegin (the pipeline entry point) and returns a PCollection.
PTransform<PBegin,PCollection>
This would be one of the entry points of your pipeline.
Sink: A PTransform that will output data to an external system (like a topic or object storage). It expects a PCollection and returns a PDone (a pipeline exit point).
PTransform<PCollection,PDone>
This would be one of the exit points of your pipeline.
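For example, in the Java SDK, TextIO follows exactly this shape (p is an assumed Pipeline and the GCS paths are placeholders):

import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;

// Source: a PTransform<PBegin, PCollection<String>>, an entry point of the pipeline.
PCollection<String> lines =
    p.apply(TextIO.read().from("gs://my-bucket/input/*.txt"));

// Sink: a PTransform<PCollection<String>, PDone>, an exit point of the pipeline.
PDone done =
    lines.apply(TextIO.write().to("gs://my-bucket/output/out"));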
The combination of a source and a sink is an I/O connector, like RedisIO, PubSubIO, etc. Beam provides multiple built-in connectors, and one can also write a custom one.
There are still various concepts and extensions of the above that allow users to program complex requirements that can be run on different runners. This is what makes Beam so powerful.
Solution
In your case, ib.show(data, include_window_info=False) is throwing the error below (see the Source Code for where this check is made):
ValueError: The given pcoll PDone[WriteToPubSub/Write/NativeWrite.None] is not a dict, an iterable or a PCollection.
This is because your data contains the result of beam.io.WriteToPubSub(topic=deadletter_topic), which is a sink and returns a PDone, not a PCollection.
For your use case of writing BigQuery failures to Pub/Sub, you could follow something like the snippet below (the ... stands for your WriteToBigQuery arguments):
data = (p
        | beam.io.ReadFromPubSub(topic=topic)
        | 'Write to BQ' >> beam.io.WriteToBigQuery( ...))
(data[beam.io.gcp.bigquery.BigQueryWriteFn.FAILED_ROWS]
 | 'publish failed' >> beam.io.WriteToPubSub(topic=deadletter_topic))
However, if this does not solve your issue, posting the full code would be useful; otherwise, you could write a custom PTransform with output tags that writes to BQ and returns failures (via tuple tags) for publishing to PubSub.
P.S.: WriteToBigQuery is not a sink, but a custom PTransform that writes to BigQuery and returns failures.

Output sorted text file from Google Cloud Dataflow

I have a PCollection<String> in Google Cloud Dataflow and I'm outputting it to text files via TextIO.Write.to:
PCollection<String> lines = ...;
lines.apply(TextIO.Write.to("gs://bucket/output.txt"));
Currently the lines of each shard of output are in random order.
Is it possible to get Dataflow to output the lines in sorted order?
This is not directly supported by Dataflow.
For a bounded PCollection, if you shard your input finely enough, then you can write sorted files with a Sink implementation that sorts each shard. You may want to refer to the TextSink implementation for a basic outline.
