Aggregate results when using Kubeflow Pipelines kfp.ParallelFor

What is a good pattern for aggregating the results from Kubeflow Pipelines kfp.ParallelFor?

Not exactly what you asked for, but our workaround was to write the results of the ParallelFor tasks to S3 and simply collect them afterwards in a postprocessing task.
with dsl.ParallelFor(preprocessing_task.output) as plant_item:
    predict_plant = '{}'.format(plant_item)
    # each forecasting task writes its result to S3
    forecasting_task = forecasting_op(predict_plant, ....).after(preprocessing_task)
# collects the per-plant results from S3 once the loop has finished
postprocessing_task = postprocessing_op(...).after(forecasting_task)

At the moment this might not be supported:
Support inputs with multiple arguments #1933

Related

Is there any way I can limit records while performing TextIO?

I have a use case where I'm reading billions of records, but I need to limit them to see the data's behaviour. I have a ParDo where I'm analysing the limited data and performing some functionality based on it. But right now I'm reading the entire billion records and then applying the limit inside the ParDo to get 10,000 records. Since my pipeline reads billions of records, this hampers performance. Is there any way I could limit the records while reading the text file using TextIO?
Where are you reading the records from? I think the answer depends on that.
If they all come from, e.g., the same file, then I don't think Beam supports sampling only a part of them. If they come from, e.g., different files, maybe you can design the file-matching pattern you use so that you only read some of them?
You might try using a Sample transform, such as Sample.any(10000). Perhaps it will work faster.
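
For illustration, a minimal sketch of that suggestion with the Beam Java SDK (2.x-style API; the bucket paths are hypothetical). Note that Sample.any still reads the full input; it only limits what flows into the downstream analysis:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Sample;
import org.apache.beam.sdk.values.PCollection;

public class SampleFromTextIO {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Read everything the glob matches (the whole file set is still read).
        PCollection<String> lines =
            p.apply(TextIO.read().from("gs://my-bucket/records/*.txt"));  // hypothetical path

        // Keep roughly 10,000 arbitrary elements instead of carrying the full
        // collection into the analysis ParDo.
        PCollection<String> sampled = lines.apply(Sample.<String>any(10000));

        sampled.apply(TextIO.write().to("gs://my-bucket/sampled/records"));  // hypothetical path
        p.run().waitUntilFinish();
    }
}

If only a subset of files is needed, narrowing the from() glob, as suggested above, avoids reading the rest at all.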

Set the number of output shards in Write.to(Sink) in Dataflow

I have a customized sink extending FileBasedSink, to which I write by calling PCollection.apply(Write.to(MySink)) in Dataflow (very similar to XmlSink.java). However, it seems that by default simply calling Write.to always results in 3 output shards. Is there any way I could define the number of output shards (like TextIO.Write.withNumShards) just in the customized sink class definition? Or do I have to define another customized PTransform like TextIO.Write?
Unfortunately, right now FileBasedSink does not support specifying the number of shards.
In practice, the number of shards you get will be dependent on how the framework chooses to optimize the parts of the pipeline producing the collection you're writing, so there's essentially no control over that.
I've filed a JIRA issue for your request so you can subscribe to the status.
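
For comparison, the built-in text sink does expose shard control; a minimal sketch against the Beam 2.x-style Java API (the input and output paths are hypothetical placeholders):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class FixedShardWrite {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(TextIO.read().from("gs://my-bucket/input/*.txt"))   // hypothetical path
         .apply(TextIO.write()
             .to("gs://my-bucket/output/results")                   // hypothetical prefix
             .withSuffix(".txt")
             .withNumShards(3));   // pin the number of output files explicitly
        p.run().waitUntilFinish();
    }
}

With a custom FileBasedSink, by contrast, the shard count is left to the runner's optimizer, as described above; later Beam releases added a withNumShards option to file-based writes via WriteFiles, but that postdates this answer.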

Iterative processing in Dataflow

As shown here, Dataflow pipelines are represented by a fixed DAG. I'm wondering if it's possible to implement a pipeline where the processing proceeds until a dynamically evaluated condition is satisfied, based on the data computed so far.
Here's some pseudo code to illustrate what I'd like to implement:
PCollection pco = null
while(true):
    pco = pco.apply(someTransform())
    if (conditionSatisfied(pco)):
        break
pco.Write()
It seems like you really want iterative computations. Right now Dataflow does not provide support for that, but we are aware that it is a very important use case and we are working on finding the right set of APIs to express it.
For now your workarounds are:
Iteratively run whole pipelines (run pipeline, inspect output, run again if the condition is not satisfied, etc). This has the obvious downside of pipeline setup and teardown overhead.
Build a pipeline with a hard-coded number of iterations by .apply()'ing in a loop unconditionally, then run the whole pipeline (see the sketch after this list).
A combination of the two, e.g. run fixed 5-iteration pipelines until you're satisfied with the result.
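
A minimal sketch of the second workaround in the Beam/Dataflow Java SDK, assuming the per-iteration work can be expressed as an element-wise step (someTransformStep, the paths, and the iteration count are all hypothetical placeholders):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class FixedIterationPipeline {
    // Hypothetical per-element stand-in for someTransform() from the pseudocode.
    private static String someTransformStep(String element) {
        return element;  // identity placeholder
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        PCollection<String> pco =
            p.apply(TextIO.read().from("gs://my-bucket/input/*.txt"));  // hypothetical path

        // Unroll a fixed number of iterations into the graph at construction
        // time; the submitted DAG itself remains static.
        int iterations = 5;
        for (int i = 0; i < iterations; i++) {
            pco = pco.apply("Iteration " + i,
                MapElements.into(TypeDescriptors.strings())
                    .via((String element) -> someTransformStep(element)));
        }

        pco.apply(TextIO.write().to("gs://my-bucket/output/result"));  // hypothetical path
        p.run();
    }
}

The loop runs at pipeline-construction time, so the DAG stays fixed; it simply contains five copies of the step.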

How do I plot benchmark data in a Jenkins matrix project

I have several Jenkins matrix projects where I output benchmark results (i.e. execution times) in a CSV file. I'd like to plot these execution times as a function of the build number, so I can see whether my projects are regressing over time.
I can confirm the Plot Plugin is a correct and quite useful approach. BTW, it supports CSV as well: plot configuration example
I've been using it for several years without any problem. Benchmark results were generated as a property file, with the benchmark id (series id) used as the key and the result as the value. One build produces one result for each benchmark. With that data it is quite easy to create the plot configuration and track performance.
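As an illustration of the key=value scheme described above (the benchmark names and numbers are made up), such a per-build property file might look like:
sort_benchmark=12.34
index_benchmark=3.81
query_benchmark=0.97
The plot configuration then tracks one series per key, with Jenkins appending one data point per build.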
This may help you:
https://wiki.jenkins-ci.org/display/JENKINS/Plot+Plugin
It adds plotting capabilities to Jenkins.

Your advice on a Hadoop MapReduce job

I have 2 files stored on a HDFS filesystem:
tbl_userlog: <website url (non canonical)> <tab> <username> <tab> <timestamp>
example: www.website.com, foobar87, 201101251456
tbl_websites: <website url (canonical)> <tab> <total hits>
example: website.com, 25889
I have written a sequence of Hadoop jobs which joins the 2 files on the website, filters on total hits > n per website, and then counts, for each user, the number of websites he has visited which have > n total hits. The details of the sequence are as follows:
A Map-only job which canonicalizes the url in tbl_userlog (i.e. removes www, http:// and https:// from the url field)
A Map-only job which sorts tbl_websites on the url
An identity Map-Reduce job which takes the output of the 2 previous jobs as KeyValueTextInput and feeds them to a CompositeInput in order to make use of Hadoop's native joining feature, defined with jobConf.set("mapred.join.expr", CompositeInputFormat.compose("inner" (...))
A Map and Reduce job which filters the result of the previous job on total hits > n in its Map phase, groups the results by user in the shuffle phase, and counts the number of websites for each user in the Reduce phase.
In order to chain these steps, I just call the jobs sequentially in the described order. Each individual job writes its results to HDFS, and the following job in the chain then reads and processes them in turn.
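For concreteness, a sketch of that kind of sequential driver with the old (0.20-era) mapred API; the buildXxxJob() helpers are hypothetical stand-ins for the per-job configuration:

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SequentialDriver {
    public static void main(String[] args) throws IOException {
        // Each runJob() call blocks until the job completes; each job reads the
        // HDFS directory written by the previous one.
        JobClient.runJob(buildCanonicalizeJob());    // Job 1: canonicalize urls
        JobClient.runJob(buildSortWebsitesJob());    // Job 2: sort tbl_websites
        JobClient.runJob(buildJoinJob());            // Job 3: composite join
        JobClient.runJob(buildFilterAndCountJob());  // Job 4: filter and count per user
    }

    // Hypothetical helpers that would set input/output paths, mapper/reducer
    // classes, formats, etc. for each step.
    private static JobConf buildCanonicalizeJob()   { return new JobConf(); }
    private static JobConf buildSortWebsitesJob()   { return new JobConf(); }
    private static JobConf buildJoinJob()           { return new JobConf(); }
    private static JobConf buildFilterAndCountJob() { return new JobConf(); }
}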
As I am new to Hadoop, I would like to ask for your advice:
Is there a better way to chain these jobs? In this configuration all intermediate results are written to HDFS and then read back.
Do you see any design flaw in this job, or could it be written more elegantly by making use of some Hadoop feature that I have missed?
I am using Apache Hadoop 0.20.2 and using higher-level frameworks such as Pig or Hive is not possible in the scope of the project.
Thanks in advance for your replies!
I think what you have will work, with a couple of caveats. Before I start listing them, I want to make two definitions clear. A map-only job is a job that has a defined Mapper and runs with 0 reducers. If the job runs with > 0 IdentityReducers, then it is not a map-only job. A reduce-only job is a job that has a defined Reducer and runs with an IdentityMapper.
Your first job can be a map-only job, since all you're doing is canonicalizing URLs. But if you want to use CompositeInputFormat, you should run with an IdentityReducer and more than 0 reducers.
For your second job, I don't know what you mean by a map-only job that sorts. Sorting is by its very nature a reduce-side task. You probably mean that it has a defined Mapper but no Reducer. But in order for the URLs to be sorted, you should run with an IdentityReducer and more than 0 reducers.
Your third job is an interesting idea, but you have to be careful with CompositeInputFormat. There are two conditions that must be met for you to be able to use this input format. The first is that there has to be the same number of files in both input directories; this can be achieved by setting the same number of reducers for Job1 and Job2. The second is that the input files CANNOT be splittable; this can be achieved by using a non-splittable compression such as bzip.
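
A minimal sketch of how that third (join) job might be configured with the old mapred API, assuming Job1 and Job2 were each run with the same setNumReduceTasks(N); the input paths are hypothetical:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinJobConfig {
    static JobConf buildJoinJob() {
        // Both input directories must contain the same number of non-splittable
        // part files, as explained above.
        JobConf joinConf = new JobConf(JoinJobConfig.class);
        joinConf.setInputFormat(CompositeInputFormat.class);
        joinConf.set("mapred.join.expr",
            CompositeInputFormat.compose(
                "inner",
                KeyValueTextInputFormat.class,
                new Path("/output/job1_canonical_userlog"),   // hypothetical path
                new Path("/output/job2_sorted_websites")));   // hypothetical path
        return joinConf;
    }
}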
Your fourth job sounds good, although you could filter out websites that have < n hits in the reducer of the previous job and save yourself some I/O.
There's obviously more than one solution to a problem in software, so while your solution would work, I wouldn't recommend it. Having 4 MapReduce jobs for this task is a bit expensive IMHO. The implementation I have in mind is an M-R-R workflow that uses Secondary Sort.
As far as chaining jobs is concerned, you should have a look at Oozie, which is a workflow manager. I have yet to use it, but that's where I'd start.

Resources