Not sure whether this is the right place to ask, but I am currently trying to run a Dataflow job that will partition a data source into multiple chunks written to multiple places. However, I feel that if I try to write to too many tables at once in one job, the Dataflow job is more likely to fail with an HTTP transport exception error, and I assume there is some bound on how much I/O, in terms of sources and sinks, I can wrap into one job?
To avoid this scenario, the best solution I can think of is to split this one job into multiple Dataflow jobs, but that would mean I need to process the same data source multiple times (once for each Dataflow job). That is okay for now, but ideally I would like to avoid it if my data source grows huge later.
Therefore I am wondering whether there is any rule of thumb for how many data sources and sinks I can group into one job. And is there any better solution for my use case?
From the Dataflow service description of structuring user code:
The Dataflow service is fault-tolerant, and may retry your code multiple times in the case of worker issues. The Dataflow service may create backup copies of your code, and can have issues with manual side effects (such as if your code relies upon or creates temporary files with non-unique names).
In general, Dataflow should be relatively resilient. You can use Partition to split your data based on the location you would like it written to. The writes to these output locations will be automatically divided into bundles, and any bundle which fails to get written will be retried.
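As a rough sketch of that shape (assuming the current Apache Beam Java SDK; the older Dataflow 1.x SDK has equivalent Partition and BigQueryIO transforms, and here rows, destinationIndexFor(...) and the table names are hypothetical placeholders), partitioning by destination and writing each partition separately could look something like this:

    // Split one PCollection into N partitions by destination and write each partition
    // to its own table. destinationIndexFor(...) and the table names are placeholders.
    int numDestinations = 5;
    PCollectionList<TableRow> parts = rows.apply(
        Partition.of(numDestinations, new Partition.PartitionFn<TableRow>() {
          @Override
          public int partitionFor(TableRow row, int numPartitions) {
            return destinationIndexFor(row) % numPartitions;  // your own routing logic
          }
        }));

    for (int i = 0; i < numDestinations; i++) {
      parts.get(i).apply("WriteTable" + i,
          BigQueryIO.writeTableRows().to("my_project:my_dataset.table_" + i));
    }

Each of those writes is then bundled and retried independently by the service.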
If the location you want to write to is not already supported you can look at writing a custom sink. The docs there describe how to do so in a way that is fault tolerant.
There is a bound on how many sources and sinks you can have in a single job. Do you have any details on how many you expect to use? If it exceeds the limit, there are also ways to use a single custom sink instead of several sinks, depending on your needs.
If you have more questions, feel free to comment. In addition to knowing more about what you're looking to do, it would help to know if you're planning on running this as a Batch or Streaming job.
Our solution to this was to write a custom GCS sink that supports partitions, though with the responses I got I'm unsure whether that was the right thing to do or not: Writing Output of a Dataflow Pipeline to a Partitioned Destination
Related
I'm considering using Apache Beam to write a streaming pipeline that applies a stream of mutations, replicating events from a source database into a destination database in the order of event time. The source could be either Kafka or Pub/Sub.
An example would be something like this, except that the order in which the mutations are applied to the sink must be the order in which they arrived.
I did go over some of the previous questions asked on preserving order:
Processing Total Ordering of Events By Key using Apache Beam
Sort elements within a fixed window - Cloud Dataflow - This seems to be same use case i'm interested in.
I understand that if I go down the Apache Beam road I would have to:
choose a windowing strategy that accommodates late data (either a fixed windowing strategy with allowed lateness, or a global window with triggers to emit panes and a buffer for late data)
apply transformations
GroupByKey over a single key (so that everything goes to the same worker), sort, and write to the sink
In addition to the above, I would have to make sure the windows (if I follow a fixed window strategy) are executed in order. Step 3 is bound to be the bottleneck.
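To make step 3 concrete, here is roughly what I have in mind (a sketch only, assuming the Beam Java SDK; mutations is the input PCollection, and Mutation, getEventTimeMillis() and writeInOrder(...) are hypothetical placeholders), ignoring for now that the windows themselves still need to be handled in order:

    // Step 3 sketch: funnel everything onto one key per window, sort by event time, write.
    mutations
        .apply(Window.<Mutation>into(FixedWindows.of(Duration.standardMinutes(1)))
            .withAllowedLateness(Duration.standardMinutes(5))
            .discardingFiredPanes())
        .apply(WithKeys.of((Mutation m) -> 1).withKeyType(TypeDescriptors.integers()))
        .apply(GroupByKey.create())
        .apply(ParDo.of(new DoFn<KV<Integer, Iterable<Mutation>>, Void>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            List<Mutation> batch = new ArrayList<>();
            c.element().getValue().forEach(batch::add);
            batch.sort(Comparator.comparingLong(Mutation::getEventTimeMillis));
            writeInOrder(batch);  // apply to the destination DB in event-time order
          }
        }));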
If step 2 above involves a lot of computation, then Apache Beam would make sense, to take advantage of the parallelism Beam offers. But if step 2 is just a simple one-to-one mapping, does Apache Beam make sense for this replication use case? Please let me know if I'm missing something.
Note: We do have a batch pipeline on Dataflow using Apache Beam that loads a data dump from GCS into a database, where the entire dataset is on disk and the order in which it's written to the sink does not matter.
Preserving order is possible, but I'm not sure it's straightforward or efficient.
It also depends on how much data (elements/sec) you're expecting, as well as what the sink type is. Potentially you could have the pipeline write ordered entries out to GCS, and have a secondary process read the files back into the sink in order.
Your other option, using parallel writes and making sure the database is only usable up to the output watermark time of the last Beam stage, may be doable, but it's not really the core use case of Dataflow/Apache Beam.
There may be ways to process the stream out of order but write to an intermediate sink that can easily be read back in order, i.e. writing out the mutation batches with a step or file number that can be used to order the files when they are applied to the final sink.
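For example (just a sketch; batches stands for the windowed, grouped output, and Mutation and writeToGcs(...) are placeholders), each grouped batch could be written under a name derived from its window's end time, so the secondary process can apply the files in that order:

    // Tag each per-window batch with its window's end time so a downstream process
    // can apply the files in order. Mutation and writeToGcs(...) are placeholders.
    batches.apply(ParDo.of(new DoFn<KV<Integer, Iterable<Mutation>>, Void>() {
      @ProcessElement
      public void processElement(ProcessContext c, BoundedWindow window) {
        long sequence = window.maxTimestamp().getMillis();  // increases window by window
        writeToGcs("gs://my-bucket/mutations/" + sequence + ".json",
            c.element().getValue());
      }
    }));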
The window + write-to-final-sink architecture is going to be difficult to get right: probably too complex for a low volume of elements, and too inefficient for a large volume. This is a good example of what this could look like.
But again, keep in mind that all these approaches are definitely not the core use case for Dataflow/Apache Beam.
So I am working on a little project that sets up a streaming pipeline using Google Dataflow and Apache Beam. I went through some tutorials and was able to get a pipeline up and running streaming into BigQuery, but now I want to stream into a full relational DB (i.e. Cloud SQL). I have searched through this site and through Google, and it seems that the best route to achieve that would be to use JdbcIO. I am a bit confused here, because when I look up info on how to do this it all refers to writing to Cloud SQL in batches and not full-on streaming.
My simple question is: can I stream data directly into Cloud SQL, or would I have to send it in batches instead?
Cheers!
You should use JdbcIO - it does what you want, and it makes no assumption about whether its input PCollection is bounded or unbounded, so you can use it in any pipeline and with any Beam runner; the Dataflow Streaming Runner is no exception to that.
In case your question is prompted by reading its source code and seeing the word "batching": it simply means that, for efficiency, it writes multiple records per database call. The overloaded use of the word "batch" can be confusing, but here it only means that JdbcIO tries to avoid the overhead of making an expensive database call for every single record.
In practice, the number of records written per call is at most 1000 by default, but in general depends on how the particular runner chooses to execute this particular pipeline on this particular data at this particular moment, and can be less than that.
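For reference, a minimal write sketch (the driver class, connection URL, credentials, table and the KV<Integer, String> element type below are placeholders; adapt them to your Cloud SQL instance):

    // Stream KV<Integer, String> elements into a relational table via JdbcIO.
    persons.apply(JdbcIO.<KV<Integer, String>>write()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "com.mysql.jdbc.Driver", "jdbc:mysql://my-cloudsql-ip:3306/my_db")
            .withUsername("my_user")
            .withPassword("my_password"))
        .withStatement("INSERT INTO person (id, name) VALUES (?, ?)")
        // .withBatchSize(500)  // optional: tune how many records go per database call
        .withPreparedStatementSetter((element, statement) -> {
          statement.setInt(1, element.getKey());
          statement.setString(2, element.getValue());
        }));

This works the same whether the input PCollection is bounded or unbounded.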
The situation right now:
Every Monday morning I manually check the JUnit results of Jenkins jobs that ran over the weekend; using the Project Health plugin I can filter on the timeboxed runs. I then copy-paste this table into Excel and go over each test case's output log to see what failed, noting down the failure cause. Every weekend gets another tab in Excel. All this makes traceability a nightmare and requires time-consuming manual labor.
What I am looking for (and hoping that already exists to some degree):
A database that stores all failed tests for all jobs I specify. It parses the output log of a failed test case and, based on some regex, applies a 'tag', e.g. 'Audio' if a test regarding audio is failing. Since everything is in a database, I could make or use a frontend that can apply filters at will.
For example, if I want to see all tests regarding audio failing over the weekend (over multiple jobs and multiple runs) I could run a query that returns all entries with the Audio tag.
I'm OK with manually tagging failed tests and their causes, as well as writing my own frontend. Is there a way (the Jenkins API perhaps?) to grab the failed tests (JUnit format and Jenkins plugin) so I can build such a system myself if it does not exist?
A good question. Unfortunately, it is very difficult in Jenkins to get such "meta statistics" that span several jobs. There is no existing solution for that.
Basically, I see two options for getting what you want:
Post-processing Jenkins-internal data to get the statistics that you need.
Feeding a database on-the-fly with build execution data.
The first option basically means automating the tasks that you do manually right now.
you can use external scripting (Python, Perl,...) to process Jenkins-internal data (via REST or CLI APIs, or directly reading on-disk data)
or you run Groovy scripts internally (which will be faster and more powerful)
It's the most direct way to go. However, depending on the statistics that you need and on your requirements regarding data persistence, you may want to go for...
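For example, for the REST route (a sketch only, here in Java; the Jenkins URL, job name and credentials are placeholders, and the tagging/database part is up to you), the JUnit results of a build can be pulled like this:

    // Pull JUnit results for the last completed build over Jenkins' JSON REST API,
    // then feed the failed cases into your own tagging logic and database.
    HttpClient http = HttpClient.newHttpClient();
    // Optionally add a ?tree=... parameter to trim the payload to the fields you need.
    String url = "https://jenkins.example.com/job/my-nightly-job/lastCompletedBuild"
        + "/testReport/api/json";
    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        .header("Authorization", "Basic "
            + Base64.getEncoder().encodeToString("user:api-token".getBytes()))
        .build();
    HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
    // response.body() is JSON: filter cases whose status is FAILED or REGRESSION,
    // run your regexes over the error details to assign tags, and insert rows into the database.
    System.out.println(response.body());

The same thing can of course be done from Python, Perl, or an internal Groovy script.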
The second option: more flexible and completely decoupled from Jenkins' internal data storage. You could implement it by
introducing a Groovy post-build step for all your jobs
that script parses job results and puts data of interest in a custom, external database
Statistics you'd get from querying that database.
Typically, you'd start with the first option. Once requirements grow, you'd slowly migrate to the second one (e.g., by collecting internal data via explicit post-processing scripts, putting that into a database, and then running queries on it). You'll want to keep this migration phase as short as possible, as it eventually requires the effort of implementing both options.
You may want to have a look at couchdb-statistics. It is far from a perfect fit, but it at least seems to do part of what you want to achieve.
I am working on a project that receives requests from multiple clients through Pub/Sub, which Dataflow pipelines process in streaming mode to produce the responses. Each flow has some logic in common and also reads from and writes to Bigtable/BigQuery.
What are the pros and cons (on both the development and maintenance side) of using one single pipeline that receives input from different clients, versus a separate pipeline for each input?
In terms of development, these have about the same amount of complexity: you probably still have the common code written in one place, or perhaps even the entire pipeline code is identical but you're launching it with different parameters for different clients.
Maintenance-wise, there are pros and cons to both approaches.
One pipeline is likely to be cheaper. E.g. if traffic is overall very low and processing all the clients could fit on 1 machine, then it will actually happen on 1 machine - but if you do separate pipelines, each of them can't use less than 1 machine, so you'll be using at least N all the time.
One pipeline might be easier to observe and monitor in the UI, and easier to deploy. That, though, depends on the structure of the pipeline: are you going to pipe all clients' data through the same transforms, or, say, have 1 read transform per client (say, if each client is reading from a different PubSub topic and writing to a different BigQuery table)? If it's all the same transforms, then you'll get the benefit of launching the pipeline once and not having to do anything at all when a client is added or removed (otherwise, you'll need to update the pipeline).
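To illustrate the "1 read transform per client" shape within one pipeline (just a sketch; the topics, tables, client list and the SharedProcessing transform are placeholders for your own setup):

    // One pipeline, one read/write pair per client, with the common logic written once.
    List<String> clients = Arrays.asList("clientA", "clientB", "clientC");
    for (String client : clients) {
      pipeline
          .apply("Read_" + client, PubsubIO.readStrings()
              .fromTopic("projects/my-project/topics/" + client + "-requests"))
          .apply("Process_" + client, new SharedProcessing())  // common transforms
          .apply("Write_" + client,
              BigQueryIO.writeTableRows().to("my-project:responses." + client));
    }

With this shape, adding or removing a client means updating and redeploying the single pipeline; if instead all clients flow through one shared topic and the same transforms, the pipeline doesn't need to change at all.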
With several pipelines (one per client), it's easier to isolate the issues with different clients. E.g. you could stop processing individual clients one by one, or update them one by one (say, if you're testing out some experimental code and don't want to break all the clients at the same time if it's wrong). It becomes unlikely that a bug in the pipeline will cause one client's data to mix up with another client's data.
We have a pipeline reading data from BigQuery and processing historical data for various calendar years. It fails with OutOfMemoryError errors if the input data is small (~500 MB).
On startup it reads from BigQuery at about 10,000 elements/sec; after a short time it slows down to hundreds of elements/sec and then hangs completely.
Observing 'Elements Added' on the next processing step (BQImportAndCompute), the value increases and then decreases again. That looks to me like some already loaded data is dropped and then loaded again.
The Stackdriver Logging console contains errors with various stack traces that include java.lang.OutOfMemoryError, for example:
Error reporting workitem progress update to Dataflow service:
"java.lang.OutOfMemoryError: Java heap space
at com.google.cloud.dataflow.sdk.runners.worker.BigQueryAvroReader$BigQueryAvroFileIterator.getProgress(BigQueryAvroReader.java:145)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation$SynchronizedReaderIterator.setProgressFromIteratorConcurrent(ReadOperation.java:397)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation$SynchronizedReaderIterator.setProgressFromIterator(ReadOperation.java:389)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation$1.run(ReadOperation.java:206)
I would suspect that there is a problem with the topology of the pipeline, but running the same pipeline
locally with DirectPipelineRunner works fine
in the cloud with DataflowPipelineRunner on a large dataset (5 GB, for another year) works fine
I assume the problem is in how Dataflow parallelizes and distributes work in the pipeline. Is there any way to inspect or influence it?
The problem here doesn't seem to be related to the size of the BigQuery table, but more likely to the number of BigQuery sources being used and the rest of the pipeline.
Instead of reading from multiple BigQuery sources and flattening them, have you tried reading from a query that pulls in all the information? Doing that in a single step should simplify the pipeline and also allow BigQuery to execute better (one query against multiple tables vs. multiple queries against individual tables).
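For example (a sketch only, with made-up dataset/table names, using the Beam 2.x style BigQueryIO; adjust to the SDK version you're on):

    // One query across the yearly tables instead of N separate BigQuery sources + Flatten.
    PCollection<TableRow> history = pipeline.apply("ReadAllYears",
        BigQueryIO.readTableRows()
            .fromQuery("SELECT * FROM `my_project.my_dataset.events_2014` "
                + "UNION ALL SELECT * FROM `my_project.my_dataset.events_2015` "
                + "UNION ALL SELECT * FROM `my_project.my_dataset.events_2016`")
            .usingStandardSql());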
Another possible problem is if there is a high degree of fan-out within or after the BQImportAndCompute operation. Depending on the computation being done there, you may be able to reduce the fan-out using clever CombineFns or WindowFns. If you want help figuring out how to improve that path, please share more details about what is happening after the BQImportAndCompute.
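For instance, if the fan-out feeds an aggregation, pre-combining before the shuffle usually helps; a sketch (assuming Beam 2.x style transforms, where computed stands for the output of BQImportAndCompute and keyFor(...) is a hypothetical key-extraction function):

    // Pre-aggregate with a combiner instead of shipping every fanned-out element
    // to a GroupByKey. keyFor(...) is a hypothetical key-extraction function.
    computed
        .apply(MapElements.into(
                TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.longs()))
            .via((TableRow row) -> KV.of(keyFor(row), 1L)))
        .apply(Sum.longsPerKey());  // combines locally on each worker before the shuffle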
Have you tried debugging with Stackdriver?
https://cloud.google.com/blog/big-data/2016/04/debugging-data-transformations-using-cloud-dataflow-and-stackdriver-debugger