I'd like to run some code after my pipeline finishes all processing, so I'm using BlockingDataflowPipelineRunner and placing code after pipeline.run() in main.
This works properly when I run the job from the command line using BlockingDataflowPipelineRunner. The code under pipeline.run() runs after the pipeline has finished processing.
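For reference, a minimal sketch of that setup, assuming the pre-Beam Dataflow Java SDK 1.x that these runner classes come from (the class name and the archiveProcessedFile() helper are illustrative, not from the original code):

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;

public class ArchivingPipeline {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setRunner(BlockingDataflowPipelineRunner.class);

        Pipeline pipeline = Pipeline.create(options);
        // ... transforms ...

        pipeline.run(); // with the blocking runner, this returns only after the job finishes

        archiveProcessedFile(); // illustrative helper: move the processed file to the archive bucket
    }

    private static void archiveProcessedFile() {
        // The GCS copy/delete of the processed file would go here.
    }
}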
However, it does not work when I try to run the job as a template. I deployed the job as a template (with TemplatingDataflowPipelineRunner), and then tried to run the template in a Cloud Function like this:
dataflow.projects.templates.create({
  projectId: 'PROJECT ID HERE',
  resource: {
    parameters: {
      runner: 'BlockingDataflowPipelineRunner'
    },
    jobName: `JOB NAME HERE`,
    gcsPath: 'GCS TEMPLATE PATH HERE'
  }
}, function(err, response) {
  if (err) {
    // etc
  }
  callback();
});
The runner setting does not seem to take effect: I can put gibberish under runner and the job still runs.
The code I had under pipeline.run() does not run when each job runs -- it runs only when I deploy the template.
Is it expected that the code under pipeline.run() in main would not run each time the job runs? Is there a solution for executing code after the pipeline is finished?
(For context, the code after pipeline.run() moves a file from one Cloud Storage bucket to another. It's archiving the file that was just processed by the job.)
Yes, this is expected behavior. A template represents the pipeline itself and allows (re-)executing the pipeline by launching the template. Since the template doesn't include any of the code from the main() method, it doesn't allow doing anything after the pipeline execution.
Similarly, the dataflow.projects.templates.create API is just the API to launch the template.
The way the blocking runner accomplished this was to get the job ID from the created pipeline and periodically poll to observe when it has completed. For your use case, you'll need to do the same:
1. Execute dataflow.projects.templates.create(...) to create the Dataflow job. This should return the job ID.
2. Periodically (every 5-10 seconds, for instance) poll dataflow.projects.jobs.get(...) to retrieve the job with the given ID, and check what state it is in.
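A minimal sketch of that polling loop, in the same callback style as the Cloud Function above (this assumes the templates.create response is a Dataflow Job resource exposing id, and that jobs.get returns currentState; verify the exact response shape against your client library version):

dataflow.projects.templates.create({
  projectId: 'PROJECT ID HERE',
  resource: {
    jobName: 'JOB NAME HERE',
    gcsPath: 'GCS TEMPLATE PATH HERE'
  }
}, function(err, job) {
  if (err) { return callback(err); }

  // Poll every 10 seconds until the job reaches a terminal state.
  var poll = setInterval(function() {
    dataflow.projects.jobs.get({
      projectId: 'PROJECT ID HERE',
      jobId: job.id
    }, function(err, latest) {
      if (err) { clearInterval(poll); return callback(err); }
      // Terminal states include JOB_STATE_DONE, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
      if (latest.currentState === 'JOB_STATE_DONE') {
        clearInterval(poll);
        // Post-pipeline work (e.g. archiving the processed file) goes here.
        callback();
      } else if (latest.currentState === 'JOB_STATE_FAILED' || latest.currentState === 'JOB_STATE_CANCELLED') {
        clearInterval(poll);
        callback(new Error('Job ended in state ' + latest.currentState));
      }
    });
  }, 10000);
});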
Related
We have a Jenkins declarative pipeline which, after deploying our product on a cloud VM, needs to run some product tests on it. The tests are implemented as a separate job on another Jenkins, and the main pipeline runs them by triggering the remote job on the second Jenkins using the Parameterized Remote Trigger plugin.
While this plugin works great, with the option blockBuildUntilComplete it waits for the remote job to finish but doesn't release the Jenkins executor. Since the tests can take a long time to complete (up to 2 days), the executor is blocked all that time just waiting for another job to finish. When blockBuildUntilComplete is set to false, the plugin returns a job handle which can be used to fetch the build status, result, etc. Example:
while( !handle.isFinished() ) {
    echo 'Current Status: ' + handle.getBuildStatus().toString();
    sleep 5
    handle.updateBuildStatus()
}
But this still keeps consuming the executor, so we still have the same problem.
Based on comments in the article, we tried waitForWebhook, but even while waiting for the webhook it still occupies the executor.
Based on the article we also tried input, and observed that it does not use an executor when the input block sits directly in the stage and the pipeline-level agent is none:
pipeline {
    agent none
    stages {
        stage("test") {
            input {
                message "Should we continue?"
            }
            agent any
            steps {
                script {
                    echo "hello"
                }
            }
        }
    }
}
So the input block gives us what we wanted from waitForWebhook, at least as far as not consuming an executor while waiting.
Our original idea was to put waitForWebhook inside a timeout, wrapped in a try/catch, inside a while loop waiting for the remote job to finish:
while(!handle.isFinished()) {
    try {
        timeout(1) {
            data = waitForWebhook hook
        }
    } catch(Exception e) {
        log.info("timeout occurred")
    }
    handle.updateBuildStatus()
}
This way we could avoid occupying an executor for long periods and still benefit from the job handle returned by the plugin. But we cannot find a way to do this with the input step: input avoids using an executor only when it is a separate block inside the stage, and it does use one when placed inside steps or steps -> script. So we cannot wrap it in try/catch, cannot check the handle's status, and cannot loop on it. Is there a way to do this?
Even if waitForWebhook worked the way input does, we could use that.
Our main pipeline runs on a Jenkins inside the corporate network, and the test Jenkins runs in the cloud and cannot initiate connections to our corporate Jenkins. So when using waitForWebhook, we would have to publish a message to a messaging pipeline; a consumer would read it, fetch the webhook URL for the corresponding job from a database, and post to it. We were hoping to avoid this with the while/try-catch approach.
I have a Jenkins job config that uses the "Build whenever the specified event is seen" trigger (supported by the CloudBees Notification API plugin) and specifies a JMESPath query (e.g. ref=='refs/heads/master') and runs a pipeline script. I want to access other properties in the trigger event (e.g. repository.full_name) from within the pipeline script. How can I do this?
Found the answer. The data I was looking for is in the com.cloudbees.jenkins.plugins.pipeline.events.EventTriggerCause instance among the build's causes. For example, the following code finds all the commits:
def newCommits = currentBuild.rawBuild.getCauses().findAll {
    it instanceof com.cloudbees.jenkins.plugins.pipeline.events.EventTriggerCause
}.collect {
    it.getEvent().commits
}
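The question asked specifically about repository.full_name. Assuming getEvent() exposes the raw webhook payload (for example a GitHub push event) the same way it exposes commits (an assumption, not something confirmed above), the same pattern should work:

// Assumption: the event object mirrors the webhook JSON, so nested fields
// such as repository.full_name are reachable just like `commits` above.
def repoNames = currentBuild.rawBuild.getCauses().findAll {
    it instanceof com.cloudbees.jenkins.plugins.pipeline.events.EventTriggerCause
}.collect {
    it.getEvent().repository.full_name
}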
I created a pipeline job in Jenkins and want it to be triggered when another job ends.
I added the trigger to my pipeline this way:
pipeline {
    agent any
    triggers { upstream(upstreamProjects: "jobname") }
    ...
}
It does not start when the first job ends. I tried the build-trigger section in the web interface and that worked.
I wonder what I am missing to get it to work in the pipeline code.
I also tried "../folder/jobname" and adding "threshold: hudson.model.Result.SUCCESS".
What is the correct usage of the upstream trigger in a declarative Jenkinsfile?
I'm trying to add dependency triggers, so that the pipeline is triggered after another project has built successfully.
The jenkinsci documentation on GitHub lists upstream events as possible pipeline triggers here.
My Jenkinsfile currently looks like this:
pipeline {
    agent {
        docker {
            ...
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    triggers {
        upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
    }
}
which leads to the following error:
WorkflowScript: 16: Arguments to "upstream" must be explicitly named. # line 16, column 9.
upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
^
Update
I changed the syntax for the upstream trigger according to the code snippet here. So, now there is at least no syntax error anymore. But the trigger is still not working as intended.
triggers {
    upstream(
        upstreamProjects: 'project-name,other-project-name',
        threshold: hudson.model.Result.SUCCESS)
}
If I understand the documentation correctly, this pipeline should be triggered when either of the two declared jobs completes successfully, right?
If both projects are in the same folder and Jenkins is aware of the downstream project's Jenkinsfile containing the code below (i.e. the downstream project has run at least once), this should work for you:
triggers {
    upstream(upstreamProjects: "../projectA/master", threshold: hudson.model.Result.SUCCESS)
}
I'm still relatively new to Jenkins and I have been struggling to get this to work, too. Specifically, I thought that just saving the pipeline code with the triggers directive in the web UI editor would connect the two jobs. It doesn't.
However, if one then manually runs the downstream job, the upstream... code in its pipeline appears to modify the job definition and configure the triggers, resulting in the same situation as if one had just set things up in the "build triggers" section of the job config form. In other words, the directive appears to be just a way to tell Jenkins to do the web UI config work for you, when/if the job gets run for some reason.
I spent hours on this and it could have been documented better. Also, the "declarative directive generator" in Jenkins for triggers gave me something like upstream threshold: 'FAILURE', which resulted in something like:
WorkflowScript: 5: Expecting "class hudson.model.Result" for parameter "threshold" but got "FAILURE" of type class java.lang.String instead # line 5, column 23.
To fix this I had to change the parameter to read upstream threshold: hudson.model.Result.FAILURE. Fair enough, but then the generator is broken. Not impressed.
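For reference, the corrected directive in context (the project name here is just a placeholder):

triggers {
    // The generator emitted threshold: 'FAILURE' (a String); the step expects a
    // hudson.model.Result constant.
    upstream(upstreamProjects: 'upstream-job-name', threshold: hudson.model.Result.FAILURE)
}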
I'm trying to write a plugin that listens for node executions during a Jenkins pipeline. The pipeline will have some code like this:
stage ('production deploy') {
    input 'enter change ticket #'...
    node('prod') {
        // production deploy code here
    }
}
Either on allocation of the node, or before any tasks run on the node, I want to verify that a change management ticket has been approved. For Freestyle jobs I could use QueueListener or RunListener, but neither of these is invoked when I run a pipeline.
I can't put this code in the pipeline script because anyone that can edit the pipeline script could remove the verification.
Are there any other listeners I could hook into before, or just after a node is allocated in a pipeline?
In my previous implementation for freestyle builds, I had overridden the setUpEnvironment method. I didn't realize this was not called in pipeline runs, which makes sense. I then implemented onStarted in my RunListener and successfully broke into my code. Just confusion on my part.
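As a rough illustration of that fix, a RunListener extension along these lines is invoked for pipeline runs as well as freestyle builds (the class name and log message are illustrative, and the actual ticket check is omitted):

import hudson.Extension;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.model.listeners.RunListener;

// Illustrative sketch: onStarted fires for pipeline (WorkflowRun) builds too,
// unlike the AbstractBuild-only setUpEnvironment hook mentioned above.
@Extension
public class ChangeTicketRunListener extends RunListener<Run<?, ?>> {

    @Override
    public void onStarted(Run<?, ?> run, TaskListener listener) {
        // The change-management ticket verification would go here.
        listener.getLogger().println("Run started: " + run.getFullDisplayName());
    }
}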