In my Jenkins pipeline, I trigger several other jobs using the build step and pass some parameters to them. I'm having issues visualizing the different jobs I've triggered in addition to my pipeline. I have set up the Jenkins Delivery Pipeline plugin, but its documentation is extremely vague, and I have only been able to visualize the steps within my own pipeline, despite tagging the jobs with both a stage and a task name.
Example:
I have two jobs in Jenkins as pipelines/workflow jobs with the following pipeline script:
Job Foo:
stage('Building') {
    println 'Triggering job'
    build 'Bar'
}
Job Bar:
node('master') {
    stage('Child job stage') {
        println 'Doing stuff in child job'
    }
}
When visualizing this with the Delivery Pipeline plugin, I only get this:
How do I make it also show the stage in job Bar in a separate box?
Unfortunately, this use case is not supported in the Delivery Pipeline plugin version 1.0.0. The Delivery Pipeline plugin views for Jenkins pipelines currently only render what is contained within a single pipeline definition. This feature request is tracked in JENKINS-43679.
I am converting a Jenkins freestyle build to a pure pipeline-based job.
The current configuration uses the Build Blocker plugin (https://plugins.jenkins.io/build-blocker-plugin/) as shown in the image.
How can I use it in a Declarative pipeline?
pipeline {
    agent { label 'docker-u20' }
    // Don't run until "test-job" has finished running
    blockon("test-job")
    stages { ... }
}
I tried looking through various Jenkins docs but haven't found anything yet.
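As far as I know, the Build Blocker plugin does not expose a native declarative pipeline step. One hedged workaround is to use the Lockable Resources plugin instead (assuming it is installed and that test-job acquires the same lock in its own definition), so the two jobs never run at the same time. A minimal sketch with a hypothetical lock name:

pipeline {
    agent { label 'docker-u20' }
    options {
        // Hypothetical lock name; "test-job" must take the same lock in its own pipeline.
        lock resource: 'test-job-resource'
    }
    stages {
        stage('Build') {
            steps {
                echo 'Runs only while no other build holds the lock'
            }
        }
    }
}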
I have a Jenkinsfile set up for our CI/CD pipeline, and it runs through the pipeline on git actions like pull requests, branch creation, tag pushes, etc.
Prior to this setup, I was used to setting up Jenkins build jobs in the Jenkins UI. The advantage of this was that I could set up dedicated build jobs that I could trigger remotely and independently of git webhook actions. I could do a POST to the job endpoint with parameters to trigger various actions.
Documentation for this process would be referenced here - see "Trigger Builds Remotely"
I could also hit the big button that says "Build", or "Build with Parameters" in the UI, which was super nice.
How would one do this with a Jenkinsfile? Is it even possible to define build jobs in a pipeline definition within a Jenkinsfile, i.e. define functions/build jobs that have dedicated URLs that could be called on the Jenkins URL, independent of webhook callbacks?
What's the best practice here?
Thanks for any tips, references, suggestions!
I would recommend starting with multibranch pipelines. In general you get all the things you mentioned, but a little better, because the parameters can be defined within your Jenkinsfile. In short, just do it like this:
Create a Jenkinsfile and check it into a Git repository.
To create a Multibranch Pipeline, click New Item on the Jenkins home page.
Enter a name for your Pipeline, select Multibranch Pipeline and click OK.
Add a Branch Source (for example, Git) and enter the location of the repository.
Save the Multibranch Pipeline project.
A declarative Jenkinsfile can look like this:
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
A good tutorial with screenshots can be found here: https://www.jenkins.io/doc/book/pipeline/multibranch/
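Regarding the remote/manual triggering part of the question: branch jobs inside a multibranch project can also be started from another pipeline with the build step. A minimal sketch, assuming the project above is named 'my-app' and has a master branch:

// Branch jobs of a multibranch project are addressed as '<project name>/<branch name>'.
build job: 'my-app/master',
      parameters: [string(name: 'Greeting', value: 'Hi')]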
I want to use the Jenkins "PRQA" plugin, which does not seem to offer an option to use it from a pipeline. The plugin would run static code analysis and publish the results.
In my case, it requires some preparation that is already done in a pipeline job. Because of that, I want to include the job in that pipeline, running on the same executor and with the data prepared by the pipeline, as some kind of inlined job step.
I have tried to create a job for the PRQA plugin step and execute it with the build step from the pipeline. But this tries to start the job on a new executor (and stalls, because I have only one executor).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // Run this without using a new executor, with the environment that exists now
                build 'PRQA_Job'
            }
        }
    }
}
What is the correct way to run the job on the same executor, with the current working directory?
With build 'PRQA_Job' as specified, it's not possible to run the second job on the same executor (1 job = 1 executor), since the main job just waits for the triggered job to finish. But you can run the other job on the same agent, if that agent has more than one executor, so that it can reach the workspace of the main job.
For a test, specify the same agent name in both jobs: agent 'agent_name_here'
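A rough declarative sketch of that idea, assuming a node labeled 'agent_name_here' with at least two executors (the label and the SOURCE_WORKSPACE parameter are placeholders):

pipeline {
    agent { label 'agent_name_here' }   // pin the main job to a specific node
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // PRQA_Job must be pinned to the same label and needs a free executor there;
                // pass the workspace path so it can find the prepared data.
                build job: 'PRQA_Job',
                      parameters: [string(name: 'SOURCE_WORKSPACE', value: env.WORKSPACE)]
            }
        }
    }
}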
If you want to use the functionality of a plugin which has no native pipeline support, you could try the "step: General Build step" feature for Jenkins Pipelines. You can use the Pipeline Syntax wizard linked in the job configuration window to generate the needed pipeline description.
If the plugin does not show up in the "step: General Build step" part of Jenkins, you can use a separate job. To copy all the needed files/data into this second job, you will need to use the Archive Artifact/Copy Artifact functionality of Jenkins to save files from your pipeline build.
For more information on how to use Archive Artifact/Copy Artifact, see https://plugins.jenkins.io/copyartifact/ and
https://www.jenkins.io/doc/pipeline/tour/tests-and-artifacts/
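A rough sketch of the archive/copy route, assuming the Copy Artifact plugin is installed (the job name, paths, and selector below are placeholders):

// In the preparing pipeline: keep the prepared files as build artifacts.
archiveArtifacts artifacts: 'prepared/**', fingerprint: true

// In the separate PRQA job: pull those files back in before the analysis step runs.
copyArtifacts projectName: 'upstream-pipeline', selector: lastSuccessful(), target: 'prepared'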
I am investigating the use of Jenkins Pipeline (specifically using Jenkinsfile). The context of my implementation is that I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs, which will pull job configurations from source control (Jenkinsfile), to automate creation of our build jobs via Chef.
I've investigated the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems to me that in order to use Jenkins Pipeline, agents are required to be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use Jenkins Pipeline's Jenkinsfile? This specific line in the Jenkinsfile documentation leads me to believe this to be true:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!
The 'agent' part of the pipeline is required; however, this does not mean that you are required to have an external agent in addition to your master. If all you have is the master, the pipeline will execute on the master. If you have additional agents available, the pipeline will execute on whichever agent happens to be available when you run the pipeline.
If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that 'Master' itself is treated as one of the default nodes. With the declarative format, agent any indicates any available agent (which includes 'Master', as per the node configuration).
If you configure a new node, it can then be treated as a new agent in the pipeline, and agent any can be replaced by agent 'Node_Name'.
You may refer to this LINK, which briefly explains agent, node, and slave.
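For example, a minimal declarative sketch that pins the pipeline to a specific node (the label is a placeholder for whatever you named your node):

pipeline {
    agent { label 'Node_Name' }   // run only on nodes carrying this label
    stages {
        stage('Build') {
            steps {
                echo 'Building on the selected node..'
            }
        }
    }
}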
Now that the Multibranch Pipeline job type has matured, is there any reason to use the simple Pipeline job type any longer? Even if you only have one branch today, it's probably wise to account for the possibility of multiple branches in the future, so what would the motivation be to use the Pipeline job type for your Jenkins Pipeline vs. always using the Multibranch Pipeline job type, assuming you are storing your Jenkinsfile in SCM? Is there feature parity between the two job types now?
In my experience with multibranch pipelines, the ONLY downside is that you can't see the last success/failure/duration columns on the Jenkins main page. They just show "NA" on the Jenkins front page since it's technically a 'folder' of sub-jobs.
Other than that I can't think of any other "cons" to using multibranch.
I disagree with the other answer... the point there was that multibranch sends changes for "any" branch. That's not necessarily true. If a Jenkinsfile exists on a random feature branch, but that branch is not handled in the pipeline, then you can simply do nothing with it using typical if/else conditionals.
For example:
node {
    checkout scm
    def workspace = pwd()
    if (env.BRANCH_NAME == 'master') {
        stage ('Some Stage 1 for master') {
            sh 'do something'
        }
        stage ('Another Stage for Master') {
            sh 'do something else here'
        }
    }
    else if (env.BRANCH_NAME == 'stage') {
        stage ('Some stage branch step') {
            sh 'do something'
        }
        stage ('Deploy to stage target') {
            sh 'do something else'
        }
    }
    else {
        sh 'echo "Branch not applicable to Jenkins... do nothing"'
    }
}
Multibranch Pipeline works well if your Jenkins job deals with a single git repository. On the other hand, the pipeline job can be repository-neutral and branch-neutral and very flexible when working with multiple git repositories with a single Jenkins job.
For example, assume you have artifact-1 from repo-1, artifact-2 from repo-2, and integration tests from repo-3. And artifact-2 depends on artifact-1. A Jenkins job has to build artifact-1, then build artifact-2, and finally run integration tests from repo-3. And assume your code change goes to a feature-1 branch of repo-1 and feature-1 branch for new tests in repo-3. In this case, the Jenkins job builds feature-1 for artifact-1, then uses 'dev' branch as default from repo-2 (if feature-1 is not detected in repo-2), and runs 'feature-1' from repo-3 for new integration tests. As you can see, the job works well with three git repositories. A repo-neutral/branch-neutral pipeline job is ideal in this setting.
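A rough sketch of such a repo-neutral, branch-neutral job, with placeholder repository URLs, a hypothetical FEATURE_BRANCH parameter, and purely illustrative fallback logic:

node {
    // Assumes the job defines a FEATURE_BRANCH string parameter; default to 'dev'.
    def branch = params.FEATURE_BRANCH ?: 'dev'

    dir('repo-1') {
        git url: 'https://example.com/repo-1.git', branch: branch
    }
    dir('repo-2') {
        // Fall back to 'dev' when the feature branch does not exist in repo-2.
        def exists = sh(returnStatus: true,
            script: "git ls-remote --exit-code --heads https://example.com/repo-2.git ${branch}") == 0
        git url: 'https://example.com/repo-2.git', branch: exists ? branch : 'dev'
    }
    dir('repo-3') {
        git url: 'https://example.com/repo-3.git', branch: branch
    }
    // ... build artifact-1, then artifact-2, then run the integration tests from repo-3 ...
}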
In a CI/CD situation, it may not be desirable to send every branch to the target environment. Using pipeline and specifying a single branch would allow you to filter, and send only /master to Staging or Production environments. Multibranch would be useful for sending any change on any branch specifically to a test environment.
On the other hand, if the QA/AutomatedTesting process is thorough enough, the risk with sending any branch to Production could be acceptable.
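If you do stay on multibranch but want only master to reach production, the declarative when directive can gate the deploy stage. A minimal sketch:

stage('Deploy to Production') {
    when {
        branch 'master'   // skip this stage for every other branch
    }
    steps {
        echo 'Deploying master..'
    }
}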
If you are still developing your flow, the simple pipeline has the added advantage of supporting parameterized projects. This feature is useful for developing declarative pipelines in the Jenkins GUI, using the parameters to control what branch/repository you are targeting.
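A minimal sketch of that approach, with hypothetical parameter names and a placeholder repository URL:

pipeline {
    agent any
    parameters {
        string(name: 'TARGET_BRANCH', defaultValue: 'master', description: 'Branch to build')
        string(name: 'REPO_URL', defaultValue: 'https://example.com/app.git', description: 'Repository to check out')
    }
    stages {
        stage('Checkout') {
            steps {
                git url: params.REPO_URL, branch: params.TARGET_BRANCH
            }
        }
    }
}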