In my project, I have a requirement to run multiple steps. I followed this guideline: Jenkins Guide.
Here is the code:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Hello World"'
                sh '''
                    echo "Multiline shell steps works too"
                    ls -lah
                '''
            }
        }
    }
}
Do I have any other alternatives for handling multiple steps in a Jenkins pipeline? I am also thinking of using a script block inside steps, but I am not sure that is a good way to do it either.
I am trying to understand the best practice for running multiple steps.
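By a script block I mean something like this (a minimal sketch):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // arbitrary Groovy logic is allowed inside a script block
                    def msg = "Hello from a script block"
                    echo msg
                }
            }
        }
    }
}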
You need to better define what it is you intend to do. A starting example is here.
You need to understand the meaning of stage and step.
A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages)
Step: A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process).
You need to think of both stage and step as atomic units. E.g. "Deploy" is an atomic activity, but it may consist of numerous steps, some of which might have multiple instructions/commands: copy artifacts (to different targets), copy data, launch the app, etc.
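For instance, here is a sketch of a "Deploy" stage composed of several steps (all paths and commands are hypothetical):
stage('Deploy') {
    steps {
        // copy the built artifact to a target location
        sh 'cp bin/app.tar.gz /srv/releases/'
        // copy supporting data alongside it
        sh 'cp -r data/ /srv/releases/data/'
        // launch the app
        sh '/srv/releases/launch.sh'
    }
}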
This Tutorial may be useful, too.
Also, review the best practices.
I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline, with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
- trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new agent on a node matching master".
- trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but it seems that is not the case. Which is lucky for this use case...
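A minimal sketch of what that layout looks like (the build command is hypothetical; the 'run-tests' parameters are omitted as in the question):
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'master' }
            steps {
                sh 'make build'   // hypothetical build command
            }
        }
        stage('AcceptanceTest') {
            // no agent here: the build step waits on a flyweight executor only
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}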
I don't think there's another way for declarative pipeline.
On the other hand, for scripted pipeline you could execute this outside of node {}, and it would just hold onto one executor on the master while releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"
I use Jenkins(file) pipelines and want to run multiple build steps in parallel (with different constants, for example; these can't be passed to the compiler, so the source code has to be changed by a script).
This could look something like this:
stage('Build') {
    steps {
        parallel(
            build_default: {
                echo "WORKSPACE: ${WORKSPACE}"
                bat 'build.bat'
            },
            build_remove: {
                echo "WORKSPACE 2: ${WORKSPACE}"
                // EXAMPLE: only to test interference
                deleteDir() // <- this would be code changes
            }
        )
    }
}
This is not working, since all the code is deleted before compilation is done. I want to run both steps in parallel the way Jenkins does it when two builds run in parallel (triggered by button presses, for example), creating multiple workspace directories (workspace@2 and so on).
The only thing I have found so far is to create temp dirs myself in the working dir, copy the source code to them, and work there. But I'm looking for a nicer/more automatic solution. (When using the node command I have the same problem, since I only have one node.)
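Concretely, my workaround looks something like this (a sketch; the source layout and copy commands are hypothetical):
stage('Build') {
    steps {
        parallel(
            build_default: {
                // copy the sources into a private temp dir and build there
                bat 'xcopy /e /i /y src build_default\\src'
                dir('build_default') {
                    bat 'build.bat'
                }
            },
            build_remove: {
                bat 'xcopy /e /i /y src build_remove\\src'
                dir('build_remove') {
                    // apply the scripted source changes here, then build
                    bat 'build.bat'
                }
            }
        )
    }
}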
I haven't had much luck with Google due to the phrasing of the question, so I apologize for the very basic question.
If I have 3 build stages set up in my Jenkinsfile, and stages 1 and 3 always run but stage 2 only runs if it's a PR, should I keep the stage but not run the command, or just filter out the whole stage?
stage('PR checks') {
    if (env.isPR) {
        sh(...)
    }
}
or more like:
if (env.isPR) {
    stage('PR checks') {
        sh(...)
    }
}
If you filter out the whole stage, then after a non-PR build, the stage visualization on the build page will remove that PR stage (for all builds, even the ones which in fact did run the PR stage).
So, from a visualization perspective, I would suggest that you retain the stage.
stage('PR checks') {
    if (env.isPR) {
        sh(...)
    }
}
At the moment we use JJB to compile Jenkins jobs (mostly pipelines already) in order to configure about 700 jobs, but JJB2 seems not to scale well for building pipelines, and I am looking for a way to drop it from the equation.
Mainly, I would like to be able to have all these pipelines stored in a single centralized repository.
Please note that keeping the CI config (Jenkinsfile) inside each repository and branch is not possible in our use case; we need to keep all pipelines in a single "jenkins-jobs.git" repo.
As far as I know this is not possible yet, but in progress. See: https://issues.jenkins-ci.org/browse/JENKINS-43749
I think this is the purpose of Jenkins shared libraries.
I didn't develop such a library myself, but I am using some. Basically:
Develop the "shared code" of the Jenkins pipeline in a shared library; it can contain the whole pipeline (sequence of steps)
Add this library to the Jenkins server
In each project, add a Jenkinsfile that imports it using @Library (see the sketch below)
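For example, a minimal consuming Jenkinsfile might look like this ('my-shared-lib' is whatever name the library was registered under on the Jenkins server, and myPipeline is a hypothetical vars/myPipeline.groovy in the library):
// load the shared library registered on the Jenkins server
@Library('my-shared-lib') _
// invoke the pipeline defined in vars/myPipeline.groovy
myPipeline()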
As @Juh_ said, you can use Jenkins shared libraries. Here are the complete steps. Suppose that we have three branches:
master
develop
stage
and we want to create a single Jenkinsfile so that we can make changes in only one place. All you need is to create a new branch, e.g. common. This branch MUST have the structure shown below. What we are interested in for now is adding a new groovy file in the vars directory, e.g. common.groovy. Here we can put the common Jenkins file that you wish to be used across all branches.
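That is, the standard shared-library layout, with a vars directory at the root of the branch:
common branch (root)
└── vars
    └── common.groovy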
Here is a sample:
def call() {
    node {
        stage("Install Stage from common file") {
            if (env.BRANCH_NAME.equals('master')) {
                echo "npm install from common files master branch"
            }
            else if (env.BRANCH_NAME.equals('develop')) {
                echo "npm install from common files develop branch"
            }
        }
        stage("Test") {
            echo "npm test from common files"
        }
    }
}
You must wrap your code in a call function in order for it to be usable from other branches. Now that we have finished the work in the common branch, we need to use it in our other branches. Go to any branch in which you wish to use this pipeline, e.g. master, create a Jenkinsfile, and put this one line of code in it:
common()
This will call the common function that you created before in the common branch and will execute the pipeline.
I'm trying to understand how to structure my Jenkins 2.7 pipeline groovy script. I've read through the pipeline tutorial, but feel that it could expand more on these topics.
I can understand that a pipeline can have many stages and each stage can have many steps. But what is the difference between a step() and a method call inside a stage, say sh([script: "echo hello"])? Should nodes be inside or outside of stages? Should the overall properties of a job be inside or outside a node?
Here is my current structure on an ubuntu master node:
#!/usr/bin/env groovy
node('master') {
    properties([
        [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '10']]
    ]);
    stage 'Checkout'
    checkout scm
    stage 'Build'
    sh([script: "make build"]);
    archive("bin/*");
}
The concepts of node, stage and step are different:
node specifies where something shall happen. You give a name or a label, and Jenkins runs the block there.
stage structures your script into a high-level sequence. Stages show up as columns in the Pipeline Stage view with average stage times and colours for the stage result.
step is one way to specify what shall happen; sh is of a similar quality, being just a different kind of action. (You can also use build for things that are already specified as projects.)
So steps can reside within nodes (if they don't, they are executed on the master), and nodes and steps can be structured into an overall sequence with stages.
It depends. Any node declaration allocates an executor (on Jenkins master or slave). This requires that you stash and unstash the workspace, as another executor does not have the checked out sources available.
Several steps of the Pipeline DSL run in a flyweight executor and thus do not require to be inside a node block.
This might be helpful for an example such as the following, where you need to allocate multiple nodes anyway:
stage("Checkout") {
checkout scm
}
stage("Build") {
node('linux') {
sh "make"
}
node('windows') {
bat "whatever"
}
}
stage("Upload") {
...
Another (maybe more realistic) example would be to allocate multiple nodes in parallel, as sketched below. Then there's no need for the stage call itself to be executed in another allocated executor (i.e. within node).
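For instance, a sketch along the lines of the previous example, with both node allocations running in parallel inside the stage:
stage("Build") {
    parallel(
        linux: {
            node('linux') {
                sh "make"
            }
        },
        windows: {
            node('windows') {
                bat "whatever"
            }
        }
    )
}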
Your example looks good to me. There's no need to allocate multiple nodes within the single stages, as this would only be additional overhead.