I'm working on a Jenkins job that will trigger a pipeline to build a new AMI in one group when a Golden AMI in another group becomes available. I need a way for one Jenkins job to initiate another without using the remote build URL method, because in the environment I'm working in it's not feasible to keep all of those URLs updated.
I've seen mentions of plugins for this, but I'm a little fuzzy about which one should be used here. Is there a recommended one? Or is there a way to script this without the remote URL?
You can use the build step to start an existing job or pipeline from within a Jenkins pipeline.
It's provided by the Pipeline: Build Step plugin.
A simple example:
pipeline {
    agent {
        node { label 'nuc3' }
    }
    stages {
        stage('Run External Jobs') {
            steps {
                build job: 'FOLDER_TEST_JOBS/test_job1_located_under_folder', wait: false
                sleep(60)
                build job: 'test_job_without_folder_path', wait: false
            }
        }
    }
}
I have a main pipeline in Jenkins that checks out the code, compiles, tests, builds, and then pushes an image to Docker. This is the high-level CI pipeline I have; say the job name is "MainJobA".
I need to create a new job just for JavaDoc generation. For that I have created a new script in the Git repo and configured it in a Pipeline job.
Now I need to execute this sub-job, which generates the JavaDoc and publishes the HTML reports, from the workspace of "MainJobA". I need to run SubJobA's pipeline stages from
/home/jenkins/workspace/MainJobA
How can I achieve this?
There is a build step available in Jenkins declarative pipelines.
Use it like this:
pipeline {
    agent any
    stages {
        stage ("build") {
            steps {
                build 'Pipeline_B'
            }
        }
    }
}
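Note that the job triggered by build runs in its own workspace, not in MainJobA's. If the sub-job really has to read files from MainJobA's workspace, one possible sketch (assuming both jobs are tied to the same node, and using a made-up SOURCE_WORKSPACE parameter) is to hand the workspace path over as a parameter:

pipeline {
    agent any
    stages {
        stage ("build") {
            steps {
                // Trigger SubJobA and pass the current workspace path along.
                // SOURCE_WORKSPACE is a hypothetical parameter that SubJobA
                // would have to declare and then dir() into.
                build job: 'SubJobA', parameters: [
                    string(name: 'SOURCE_WORKSPACE', value: env.WORKSPACE)
                ]
            }
        }
    }
}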
I want to use the Jenkins "PRQA" plugin, which does not seem to offer a way to be used from a pipeline. The plugin would run static code analysis and publish the results.
In my case, it requires some preparation that is already done in a pipeline job. Because of that, I want to include the job in that pipeline, but on the same executor, with the data prepared by the pipeline, as some kind of inlined job step.
I have tried creating a job for the PRQA plugin step and executing it with the build step from the pipeline. But this tries to start the job on a new executor (and stalls, because I have only one executor).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // Run this without using a new executor, with the environment that exists now
                build 'PRQA_Job'
            }
        }
    }
}
What is the correct way to run the job on the same executor, with the current working directory?
With build 'PRQA_Job' it's not possible to run the second job on the same executor (1 job = 1 executor), since the main job just waits for the triggered job to finish. But you can run the other job on the same agent, as long as that agent has more than one executor, so the second job can reach the main job's workspace.
To test this, specify the same agent in both jobs, for example agent { label 'agent_name_here' }.
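A minimal sketch of that idea, assuming a made-up label 'prqa-agent' configured with at least two executors, and with PRQA_Job pinned to the same label:

// Main job: prepares the data, then triggers PRQA_Job on the same agent.
pipeline {
    agent { label 'prqa-agent' }    // hypothetical label, needs >= 2 executors
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // Occupies a second executor on the same agent, so PRQA_Job
                // (also pinned to 'prqa-agent') runs on the same machine and
                // can read the main job's workspace path.
                build 'PRQA_Job'
            }
        }
    }
}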
If you want to use functionality from a plugin that has no native pipeline support, you could try the "step: General Build step" feature for Jenkins Pipelines. You can use the Pipeline Syntax wizard linked in the job configuration window to generate the needed pipeline description.
If the plugin does not show up under "step: General Build step", you can use a separate job. To copy all the needed files/data into this second job, you will need to use the Archive Artifact/Copy Artifact functionality of Jenkins to save files from your pipeline build.
For more information on how to use Archive Artifact/Copy Artifact, see https://plugins.jenkins.io/copyartifact/ and
https://www.jenkins.io/doc/pipeline/tour/tests-and-artifacts/
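A rough sketch of that hand-off; the 'prepared/**' pattern and the project name are placeholders, not anything from the question:

// In the preparing pipeline: archive the files the PRQA job will need.
pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps {
                echo 'Prepare'
                archiveArtifacts artifacts: 'prepared/**', fingerprint: true
            }
        }
    }
}

The second job can then use the Copy Artifact build step (or, if it is itself a pipeline, something like copyArtifacts projectName: 'MainJob', selector: lastSuccessful()) to pull those files into its own workspace.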
I'm trying to get the following features to work in Jenkins' Declarative Pipeline syntax:
Conditional execution of certain stages only on the master branch
input to ask for user confirmation to deploy to a staging environment
While waiting for confirmation, it doesn't block an executor
Here's what I've ended up with:
pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when {
                branch 'master'
            }
            steps {
                echo "build & push docker image"
            }
        }
        stage('3. Deploy to stage') {
            when {
                branch 'master'
            }
            input {
                message "Deploy to stage?"
                ok "Deploy"
            }
            agent any
            steps {
                echo 'Deploy to stage'
            }
        }
    }
}
The problem is that stage 2 needs the output from stage 1, but that output is not available when it runs. If I replace the various agent directives with a global agent any, then the output is available, but the executor is blocked waiting for user input at stage 3. And if I try to combine 1 & 2 into a single stage, then I lose the ability to conditionally run some steps only on master.
Is there any way to achieve all the behaviour I'm looking for?
You need to use the stash step at the end of your first stage and then unstash when you need the files.
I think these are available in the snippet generator.
As per the documentation:
Saves a set of files for use later in the same build, generally on another node/workspace. Stashed files are not otherwise available and are generally discarded at the end of the build. Note that the stash and unstash steps are designed for use with small files. For large data transfers, use the External Workspace Manager plugin, or use an external repository manager such as Nexus or Artifactory.
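A minimal sketch of the first two stages of the pipeline above with stash/unstash added, keeping the global agent none so the input stage still does not hold an executor (the 'build-output/**' pattern is a placeholder):

pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
                // Save whatever stage 2 needs; the pattern is a placeholder.
                stash name: 'compiled', includes: 'build-output/**'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when { branch 'master' }
            steps {
                // Restore the files from stage 1 into this stage's workspace.
                unstash 'compiled'
                echo "build & push docker image"
            }
        }
    }
}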
I was reading about best practices for Jenkins pipelines.
I have created a declarative pipeline which does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block. Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins' distributed builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that matches 4 agents, yet my whole pipeline is always executed on one agent (which is what I want). I would have expected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What it is suggesting is to take advantage of distributed builds. Distributed builds are activated either by using the agent directive or a node block.
agent should be used when you want to run the pipeline almost exclusively on one node. A top-level agent allocates one node (and one executor) for the whole run, which is why all of your stages end up on the same agent. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you wrap a step in a node block pointing at the same agent, there won't be any effect, except that a new executor will be allocated to the step wrapped in node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
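For illustration, a sketch of the second option: a declarative pipeline that stays on one labeled agent overall but sends a single granular task to a node with a different, made-up label:

pipeline {
    agent { label 'xxx' }    // whole pipeline runs on one node with this label
    stages {
        stage('stage1') {
            steps {
                echo 'runs on the xxx agent'
            }
        }
        stage('stage2') {
            steps {
                // node grabs a separate executor just for this block;
                // 'heavy' is a hypothetical label.
                node('heavy') {
                    echo 'runs on a heavy agent'
                }
            }
        }
    }
}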
We are using Jenkins and the Pipeline plugin for CI/CD. We have two pipelines that we need to run in parallel, and there is a downstream pipeline which should trigger ONLY if the two upstream pipelines both finish and are successful.
P1--
| -- P3
P2--
Basically, P3 should run only when both P1 and P2 have finished successfully, and not depend on just one of them.
Is there a way to achieve this? We are using version 2.5 of the plugin.
Since stages only run if previous stages run successfully, and since you can execute other pipelines via build, and since there is a magical instruction called parallel, I think this might do it:
pipeline {
    agent { label 'docker' }
    stages {
        stage("build_p1_and_p2_in_parallel") {
            steps {
                parallel p1: {
                    build 'p1'
                }, p2: {
                    build 'p2'
                }
            }
        }
        stage("build_p3_if_p1_and_p2_succeeded") {
            steps {
                build 'p3'
            }
        }
    }
}
Use the "Snippet Generator" embedded in your jenkins instance to figure out what the argument to build should be. If it's another pipeline at the same level as the top level Jenkinsfile, you could just reference it by job name. Caveat: I've used parallel, but never build within parallel, but it seems like it should work.
You can also try wrapping the pipeline jobs with the MultiJob plugin, which can implement the logic you require as two jobs inside a phase.