Kubeflow Pipelines Exit Handler with multiple components/steps

I'm trying to configure an Exit Handler in Kubeflow Pipelines to use two different components that perform two distinct actions at the end of the pipeline run.
I've tried many different approaches, mostly by trial and error, since I haven't been able to find any documentation or examples online, but I haven't managed to get it working.
Has anyone managed to get this done?
Thank you!
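For reference, a minimal sketch of one possible workaround, assuming the KFP v2 Python SDK (component names and the actions themselves are hypothetical): since dsl.ExitHandler accepts a single exit task, the two distinct end-of-run actions are bundled into one lightweight exit component.

```python
# Sketch only: assumes the KFP v2 Python SDK; component names and actions are hypothetical.
from kfp import dsl


@dsl.component
def exit_actions(run_name: str):
    # Both end-of-run actions live in one component, because
    # dsl.ExitHandler accepts a single exit task.
    print(f"action 1: send a notification for {run_name}")
    print(f"action 2: clean up resources for {run_name}")


@dsl.component
def main_work():
    print("main pipeline workload")


@dsl.pipeline(name="exit-handler-demo")
def demo_pipeline(run_name: str = "demo"):
    exit_task = exit_actions(run_name=run_name)
    with dsl.ExitHandler(exit_task):
        main_work()
```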

Related

How to get lineage info of dataflow jobs?

I am new to Dataflow and am trying to get lineage information about any Dataflow job, for an app I am building. I want to fetch at least the source and destination names from a job and, if possible, find out the transformations applied to the PCollections in the pipeline, something like a trace of the function calls.
I have been analyzing the logs for different kinds of jobs, but could not figure out a definite way to fetch any of the information I am looking for.
You should be able to get this information from the graph itself. One way to do this would be to implement your own runner which delegates to the Dataflow runner.
For Dataflow, you could also fetch the job (whose steps will give the topology) from the service via the Dataflow API.
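To illustrate the second suggestion, here is a rough sketch assuming the google-api-python-client library and suitable Dataflow read permissions (project, location, and job IDs are placeholders): fetching the job with JOB_VIEW_ALL returns its steps, which describe the job graph.

```python
# Sketch: fetch a Dataflow job's steps via the REST API.
# Assumes google-api-python-client and Application Default Credentials;
# project, location, and job IDs below are placeholders.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

job = (
    dataflow.projects()
    .locations()
    .jobs()
    .get(
        projectId="my-project",
        location="us-central1",
        jobId="my-job-id",
        view="JOB_VIEW_ALL",
    )
    .execute()
)

# Each step describes one transform in the job graph; its properties
# reference the step's inputs and outputs, i.e. the topology information.
for step in job.get("steps", []):
    print(step["name"], "-", step.get("kind"))
```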

Run one or many concurrent jobs in Jenkins with different parameters

I am trying to create a Jenkins job that will run some code on various servers to validate them.
I would like to be able to specify either an individual server or give a directive such as "evens" for servers 02, 04, 06... or "odds" for servers 01, 03, 05... and have the job run for either a single or many servers.
I'm searching for the cleanest way to do this. I've tried using a scheduler job to handle the odds and evens cases, but would prefer, if possible, not to split the single-server and multi-server cases into different jobs. I've also looked into using a matrix job that could be configured to run under different parameter sets, but haven't found any documentation that fully solves my problem.
Can anyone point me in the right direction?
I am not sure I fully understood you, but I will try.
My understanding is that you want to trigger the same job many times with different parameters.
Your options are:
1. Use a master job that triggers all the other jobs with different parameters (as you said, with a matrix job, or even something simpler).
2. Do it in a pipeline fairly easily, using node scopes and loops to cover the different parameters.
3. Use the Jenkins CLI and trigger the same job with different parameters on each invocation (the sketch after this list shows the same idea via the REST API).
I hope this helps.
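To make option 3 concrete, here is a rough sketch that triggers the same parameterized job once per server through the Jenkins buildWithParameters REST endpoint instead of the CLI (the job name, parameter name, URL, server naming, and credentials are all placeholders):

```python
# Sketch: trigger one parameterized Jenkins job per target server.
# Assumes the requests library; URL, job name, parameter name and
# credentials are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "validate-server"          # hypothetical parameterized job
AUTH = ("ci-user", "api-token")       # Jenkins user and API token


def servers_for(selector: str) -> list[str]:
    """Expand 'evens', 'odds', or a single server name into a server list."""
    all_servers = [f"server{n:02d}" for n in range(1, 11)]
    if selector == "evens":
        return all_servers[1::2]      # server02, server04, ...
    if selector == "odds":
        return all_servers[0::2]      # server01, server03, ...
    return [selector]


def trigger(selector: str) -> None:
    for server in servers_for(selector):
        resp = requests.post(
            f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
            params={"TARGET_SERVER": server},
            auth=AUTH,
        )
        resp.raise_for_status()
        print(f"queued validation for {server}")


if __name__ == "__main__":
    trigger("evens")
```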

Common GUI using Jenkins

I have a question about Jenkins.
How can we create a common GUI for jobs using Jenkins? I tried searching on Google, but it showed very few and rather confusing results, suggesting pipelines or some other plugin to achieve this.
https://zeroturnaround.com/rebellabs/top-10-jenkins-featuresplugins/
Please share your thoughts on this; I will be monitoring this thread actively!

Jenkinsfile - Mutual exclusivity across multiple pipelines

I'm looking for a way to make multiple declaratively written Jenkinsfiles run exclusively and block each other. They consume test instances that are terminated after they run, which causes problems when PRs are tested as they come in.
I cannot find an option to make the BuildBlocker plugin do this: the Jenkinsfiles that use this plugin don't run under our plugin/Jenkins version scheme, and it seems the [$class: <some.java.expression>] strings exported from the syntax generator don't work here anyway.
I also cannot find a way to apply these locks across all the steps involved in the pipeline.
I could hack a file-lock but this won't help me with multi-node builds.
This plugin could perhaps help you, as it lets you lock resources you've declared previously: if a resource is currently locked, any other job that requires the same resource will wait until it is released.
https://plugins.jenkins.io/lockable-resources/
Since you say you want declarative, probably wait for the currently-in-review "Allow locking multiple stages in declarative pipeline" JIRA issue to be completed. You can also vote for it and watch it.
And if you can't wait, this is your opportunity to learn golang (or whatever language you want to learn) by implementing a microservice that holds these locks that you call from your pipeline scripts. :D
The Job DSL plugin can be used to configure Jenkins execution policies, including blocking on other jobs, and to call pipeline code.
https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.jobs.FreeStyleJob.blockOn describes the blockOn method, which the blocker plugin also uses.
The usage tutorial is at https://github.com/jenkinsci/job-dsl-plugin/wiki/Tutorial---Using-the-Jenkins-Job-DSL, and the API reference is at https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands.
See also https://www.digitalocean.com/community/tutorials/how-to-automate-jenkins-job-configuration-using-job-dsl.
It should be possible to use https://github.com/jenkinsci/job-dsl-plugin/wiki/Dynamic-DSL, but I haven't found a good usage example yet.

How to add labels to an existing Google Dataflow job?

I am using the Java GAPI client to work with Google Cloud Dataflow (v1b3-rev197-1.22.0). I am running a pipeline from a template, and the method for doing that (com.google.api.services.dataflow.Dataflow.Projects.Templates#create) does not allow me to set labels for the job. However, I get the Job object back when I execute the pipeline, so I updated the labels and tried to call com.google.api.services.dataflow.Dataflow.Projects.Jobs#update to persist that information in Dataflow. But the labels do not get updated.
I also tried updating labels on finished jobs (which I also need to do), which didn't work either, so I thought it's because the job is in a terminal state. But updating labels seems to do nothing regardless of the state.
The documentation does not say anything about labels not being mutable on running or terminated pipelines, so I would expect this to work. Am I doing something wrong, and if not, what is the rationale behind the decision not to allow label updates? (And how are template users supposed to set the initial label set when executing the template?)
Background: I want to mark terminated pipelines that have been "processed", i.e. those that our automated infrastructure has already sent notifications about to the appropriate places. Labels seemed like a good approach that would shield me from having to use some kind of local persistence to track state (a big jump in complexity). Any suggestions on how to approach this if labels are not the right tool? Sadly, Stackdriver cannot monitor finished pipelines, only failed ones. And sending a notification from within the pipeline code doesn't seem like a good idea to me (am I wrong?).
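For reference, a rough sketch of the attempted update flow described above, translated to the Python API client (the Java GAPI calls in the question are analogous; project, location, and job IDs are placeholders). As the question notes, the label change does not appear to persist after the update call.

```python
# Sketch: fetch a job, modify its labels, and call jobs.update.
# Assumes google-api-python-client and Application Default Credentials;
# identifiers are placeholders. As described in the question above,
# the label change appears to be ignored by the service.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
jobs = dataflow.projects().locations().jobs()

job = jobs.get(
    projectId="my-project",
    location="us-central1",
    jobId="my-job-id",
).execute()

# Mark the job as handled by the automated infrastructure.
job.setdefault("labels", {})["processed"] = "true"

jobs.update(
    projectId="my-project",
    location="us-central1",
    jobId="my-job-id",
    body=job,
).execute()
```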
