I have a build job with an implemented promotion cycle, i.e. Dev -> QA -> Performance -> Production.
What is the correct way to migrate this cycle into a pipeline? It looks rather clean/structured to call each of the above-mentioned jobs, yet how can I query the build ID (to be able to call the deployment job)? Or have I totally misunderstood the pipeline concept?
You might consider multiple solutions:
Trigger each job sequentially
Just call each job sequentially using the build step:
node() {
    stage "Dev"
    build job: 'Dev'

    stage "QA"
    build job: 'QA'

    // Your other promotion cycles...
}
It is easy to use and will probably already be compliant with your current setup, but I'm not a big fan of this solution because the actual output of your pipeline stages (Dev, QA, etc.) will live in the dedicated jobs (Dev job, QA job) instead of directly inside your pipeline. Your pipeline will be an empty shell that just calls other jobs...
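If you go this route and still need the downstream build's ID (as asked in the question), the build step returns a run object you can query; a minimal sketch, where the 'DeployDev' job and its DEV_BUILD_NUMBER parameter are hypothetical:

def devRun = build job: 'Dev'
echo "Dev build number: ${devRun.number}"
// pass the build number on to a hypothetical deployment job
build job: 'DeployDev', parameters: [string(name: 'DEV_BUILD_NUMBER', value: "${devRun.number}")]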
Call pipeline functions instead of jobs
Define a pipeline function for each of your promotion cycles (preferably in an external file) and then call each function sequentially. Example:
node {
    git 'http://urlToYourGit/projectContainingYourFunctions'
    cycles = load 'promotions-cycles.groovy'

    stage "Dev"
    cycles.dev()

    stage "QA"
    cycles.qa()

    // Your other promotion cycles calls...
}
The biggest advantage is that your promotion cycle code is committed in your Git repository and that all your stage output is actually part of your pipeline output, which is great for easy debugging.
Plus you can easily apply conditions based on the success/failure of your functions (e.g. if your QA stage fails you don't want to go any further).
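For reference, a minimal sketch of what promotions-cycles.groovy might contain; the promote.sh commands are placeholders, and the trailing return this is what lets load hand back an object whose functions the pipeline can call:

// promotions-cycles.groovy
def dev() {
    // deploy/promote to the Dev environment (placeholder command)
    sh './promote.sh dev'
}

def qa() {
    // deploy/promote to the QA environment (placeholder command)
    sh './promote.sh qa'
}

return this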
Note that both solutions should allow you to launch your promotions cycles in parallel if needed and to pass parameters to either your jobs or functions.
It is better to call each build in separate pipeline stages. Something like this:
stage "Dev"
node{
build job: 'Dev', parameters:
[
[$class: 'StringParameterValue', name: 'param', value: "param"],
];
}
stage "QA"
node{
build job: 'QA'
}
etc...
To cycle this process you can use the retry option or an endless loop in Groovy.
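For example, retry can wrap a build call so a failed downstream job is retried a few times before the pipeline gives up; a minimal sketch:

retry(3) {
    // re-run the downstream job up to three times on failure
    build job: 'Dev'
}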
Related
I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the main pipeline's build. Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new agent on a node matching 'master'".
trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but that seems not to be the case. Which is lucky for this use case...
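A minimal declarative sketch of that layout, assuming the downstream 'run-tests' job and the 'master' label from the question; the build command is a placeholder:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'master' }
            steps {
                sh 'make'   // placeholder build step
            }
        }
        stage('AcceptanceTest') {
            // no agent here, so no heavyweight executor is held while waiting
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}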
I don't think there's another way for a declarative pipeline.
On the other hand, for a scripted pipeline you could execute this outside of node {}; it would then only hold one executor on the master, releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"
I'm trying to understand how to structure my Jenkins 2.7 pipeline groovy script. I've read through the pipeline tutorial, but feel that it could expand more on these topics.
I can understand that a pipeline can have many stages and each stage can have many steps. But what is the difference between a step() and a method call inside a stage, say sh([script: "echo hello"])? Should nodes be inside or outside of stages? Should the overall properties of a job be inside or outside a node?
Here is my current structure on an ubuntu master node:
#!/usr/bin/env groovy
node('master') {
    properties([
        [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '10']]
    ]);

    stage 'Checkout'
    checkout scm

    stage 'Build'
    sh([script: "make build"]);
    archive("bin/*");
}
The concepts of node, stage and step are different:
node specifies where something shall happen. You give a name or a label, and Jenkins runs the block there.
stage structures your script into a high-level sequence. Stages show up as columns in the Pipeline Stage view with average stage times and colours for the stage result.
step is one way to specify what shall happen; sh is similar in kind, just a different action. (You can also use build for things that are already specified as projects.)
So steps can reside within nodes (if they don't, they are executed on the master), and nodes and steps can be structured into an overall sequence with stages.
It depends. Any node declaration allocates an executor (on the Jenkins master or a slave). This requires that you stash and unstash the workspace, as another executor does not have the checked-out sources available.
Several steps of the Pipeline DSL run in a flyweight executor and thus do not need to be inside a node block.
This might be helpful for an example such as the following, where you need to allocate multiple nodes anyway:
stage("Checkout") {
checkout scm
}
stage("Build") {
node('linux') {
sh "make"
}
node('windows') {
bat "whatever"
}
}
stage("Upload") {
...
Another (maybe more realistic) example would be to allocate multiple nodes in parallel. Then there's no need for the stage call itself to execute in an additionally allocated executor (i.e. within node).
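A minimal scripted sketch of that parallel variant, reusing the 'linux' and 'windows' labels from the example above:

stage("Build") {
    // run both platform builds concurrently, each on its own node
    parallel(
        linux: {
            node('linux') {
                sh "make"
            }
        },
        windows: {
            node('windows') {
                bat "whatever"
            }
        }
    )
}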
Your example looks good to me. There's no need to allocate multiple nodes within single stages, as this would only add overhead.
I'm new to Jenkins and I've been tasked with the simple task of passing the output from one pipeline to another.
Let's say that the first pipeline has a script that says echo HelloWorld; how would I pass this output to another pipeline so it displays the same thing?
I've looked at parameterized triggers and a couple of other answers, but I was hoping someone could lay out the step-by-step procedure for me.
If you want to implement it purely with Jenkins pipeline code, what I do is have an orchestrator pipeline job that builds all the pipeline jobs in my process, waits for them to complete, then gets the build number:
Orchestrator job
def result = build job: 'jobA'
def buildNumber = result.getNumber()
echo "jobA build number : ${buildNumber}"
In each job, say 'jobA', I arrange to write the output to a known file (a properties file for example), which is then archived:
jobA
writeFile encoding: 'utf-8', file: 'results.properties', text: 'a=123\r\nb=foo'
archiveArtifacts 'results.properties'
Then, after building each job like jobA, use the build number with the Copy Artifact plugin to get the file back into your orchestrator job and process it however you want:
Orchestrator job
step([$class : 'CopyArtifact',
filter : 'results.properties',
flatten : true,
projectName: 'jobA',
selector : [$class : 'SpecificBuildSelector',
buildNumber: buildNumber.toString()]])
You will find these plugins useful to look at:
Copy Artifact Plugin
Pipeline Utility Steps Plugin
If you are chaining jobs instead of using an orchestrator - say jobA builds jobB builds jobC etc - then you can use a similar method. CopyArtifacts can copy from the upstream job or you can pass parameters with the build number and name of the upstream job. I chose to use an orchestrator job after changing from chained jobs because I need some jobs to be built in parallel.
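For completeness, a hedged sketch of how the orchestrator could consume the copied results.properties with readProperties from the Pipeline Utility Steps plugin mentioned above; the key names match the example file written by jobA:

// read the copied properties file into a map and use its values
def props = readProperties file: 'results.properties'
echo "a=${props['a']}, b=${props['b']}"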
Now that the Multibranch Pipeline job type has matured, is there any reason to use the simple Pipeline job type any longer? Even if you only have one branch today, it's probably wise to account for the possibility of multiple branches in the future, so what would the motivation be to use the Pipeline job type for your Jenkins Pipeline vs. always using the Multibranch Pipeline job type, assuming you are storing your Jenkinsfile in SCM? Is there feature parity between the two job types now?
In my experience with multibranch pipelines, the ONLY downside is that you can't see the last success/failure/duration columns on the Jenkins main page. They just show "NA" on the Jenkins front page since it's technically a 'folder' of sub-jobs.
Other than that I can't think of any other "cons" to using multibranch.
I disagree with the other answer... the case made there was that multibranch builds changes from "any" branch. That's not necessarily true. If a Jenkinsfile exists on a random feature branch, but that branch is not handled in the pipeline, you can simply do nothing with it using typical if/else conditionals.
For example:
node {
    checkout scm
    def workspace = pwd()
    if (env.BRANCH_NAME == 'master') {
        stage ('Some Stage 1 for master') {
            sh 'do something'
        }
        stage ('Another Stage for Master') {
            sh 'do something else here'
        }
    }
    else if (env.BRANCH_NAME == 'stage') {
        stage ('Some stage branch step') {
            sh 'do something'
        }
        stage ('Deploy to stage target') {
            sh 'do something else'
        }
    }
    else {
        sh 'echo "Branch not applicable to Jenkins... do nothing"'
    }
}
Multibranch Pipeline works well if your Jenkins job deals with a single git repository. On the other hand, a pipeline job can be repository-neutral and branch-neutral, and very flexible when a single Jenkins job has to work with multiple git repositories.
For example, assume you have artifact-1 from repo-1, artifact-2 from repo-2, and integration tests from repo-3. And artifact-2 depends on artifact-1. A Jenkins job has to build artifact-1, then build artifact-2, and finally run integration tests from repo-3. And assume your code change goes to a feature-1 branch of repo-1 and feature-1 branch for new tests in repo-3. In this case, the Jenkins job builds feature-1 for artifact-1, then uses 'dev' branch as default from repo-2 (if feature-1 is not detected in repo-2), and runs 'feature-1' from repo-3 for new integration tests. As you can see, the job works well with three git repositories. A repo-neutral/branch-neutral pipeline job is ideal in this setting.
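A hedged scripted sketch of such a repo-neutral job; the repository URLs, the FEATURE_BRANCH parameter, and the build/test commands are illustrative assumptions:

node {
    // FEATURE_BRANCH is an assumed job parameter (e.g. 'feature-1'); 'dev' is the assumed default branch
    def branch = params.FEATURE_BRANCH ?: 'dev'

    dir('repo-1') {
        git url: 'https://example.com/repo-1.git', branch: branch
        sh './build.sh'                    // build artifact-1 (placeholder command)
    }

    dir('repo-2') {
        // fall back to 'dev' when the feature branch does not exist in repo-2
        def hasBranch = sh(returnStatus: true,
            script: "git ls-remote --exit-code --heads https://example.com/repo-2.git ${branch}") == 0
        def repo2Branch = hasBranch ? branch : 'dev'
        git url: 'https://example.com/repo-2.git', branch: repo2Branch
        sh './build.sh'                    // build artifact-2 (placeholder command)
    }

    dir('repo-3') {
        git url: 'https://example.com/repo-3.git', branch: branch
        sh './run-integration-tests.sh'    // run the new integration tests (placeholder command)
    }
}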
In a CI/CD situation, it may not be desirable to send every branch to the target environment. Using pipeline and specifying a single branch would allow you to filter, and send only /master to Staging or Production environments. Multibranch would be useful for sending any change on any branch specifically to a test environment.
On the other hand, if the QA/AutomatedTesting process is thorough enough, the risk with sending any branch to Production could be acceptable.
If you are still developing your flow, the simple pipeline has the added advantage of supporting parameterized projects. This feature is useful for developing declarative pipelines in the Jenkins GUI, using a parameter to control which branch/repository you are targeting.
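A minimal sketch of that idea; the BRANCH parameter name and repository URL are assumptions for illustration:

pipeline {
    agent any
    parameters {
        // BRANCH controls which branch the job checks out
        string(name: 'BRANCH', defaultValue: 'master', description: 'Branch to build')
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://example.com/my-repo.git', branch: params.BRANCH
            }
        }
    }
}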
I have a build pipeline in Jenkins that builds and deploys the back-end components, which expose a REST API. I have another build pipeline that builds and deploys the front-end components, which call the back-end components. The back-end and front-end components live in separate Git repositories.
The build job of each pipeline is kicked off when a commit occurs in its respective Git repository.
I would like to run automated functional tests at the end of each build pipeline. But how do I know that both pipelines are finished and the functional tests should run? Can I link the two pipelines together?
One approach is to use the Locks and Latches plugin and give the jobs on each pipeline their own lock, e.g. Pipeline-A and Pipeline-B, and then configure the job that runs the tests to obtain the locks on both Pipeline-A and Pipeline-B. This both prevents the test job from running if any part of either pipeline is running, and blocks any changes on the pipelines whilst the tests are running.
If you'd only like to lock on the deploy jobs, you can use the same approach but only configure the deploy jobs with the locks; this will allow normal builds to run as normal, but deploy jobs will queue up whilst the tests run.
Assumptions:
Any deploy job triggers a test execution.
A second approach is to have your job pipelines set up such that, before performing a deployment, they trigger a single job in the following layout:
EndOfPipelineA -> SystemDeploymentController
EndOfPipelineB -> SystemDeploymentController
SystemDeploymentController -> DeployAppOne
SystemDeploymentController -> DeployAppTwo
DeployAppTwo -> TestExecution
DeployAppOne -> TestExecution
Then you use the Join plugin to only run the TestExecution job when both the deployments are complete AND successful.
The second approach allows you to:
conditionally control the execution of the tests depending on the success of the deployments,
have a single job that'll let you redeploy your whole system if you make any changes to the system it runs on, and then run tests automatically,
potentially make use of the Promotions plugin to highlight "good configurations" where both apps worked well together.
However it is a bit trickier to manage.
Although this is an old question, you might consider restructuring your build pipeline using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
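As a rough sketch, the Build Flow DSL for this scenario might look something like the following; the job names are assumptions:

// deploy both applications in parallel, then run the functional tests once both succeed
parallel (
    { build("pipeline-a-deploy") },
    { build("pipeline-b-deploy") }
)
build("functional-tests")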
You can use the build step for this. Let's say you have two pipelines, named parent and child. In the parent pipeline you can define:
pipeline {
    agent any
    stages {
        stage ('call-child-pipeline') {
            steps {
                build job: 'child'
            }
        }
    }
}
you can also pass some parameters to the child pipeline:
stage ('call-child-pipeline') {
    steps {
        build job: 'child', parameters: [string(name: 'my_param', value: "my_value")]
    }
}
If you don't want to wait until the child pipeline is finished, add wait: false, e.g.
build job: 'child', wait: false
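If the parent also needs to react to the child's result rather than fail immediately, the build step's return value can be inspected; a minimal sketch (propagate: false keeps the parent from failing automatically, and the script block is needed in declarative syntax):

script {
    // run the child without propagating its failure, then inspect the result
    def childRun = build job: 'child', propagate: false
    echo "child finished with result: ${childRun.result}"
}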