What is the difference between using def and not using def in a script block in a Jenkinsfile? - jenkins

I have two sample Jenkinsfiles.
The content of A_Jenkinsfile is:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    foo = "bar"
                }
                sh "echo ${foo}"
            }
        }
        stage("two") {
            steps {
                sh "echo ${foo}"
            }
        }
    }
}
The other one is B_Jenkinsfile and its content is:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "bar"
                }
                sh "echo ${foo}"
            }
        }
        stage("two") {
            steps {
                sh "echo ${foo}"
            }
        }
    }
}
When I build them, B_Jenkinsfile fails and A_Jenkinsfile succeeds.
What is the difference between using def and not using def in a script block in a Jenkinsfile?

There are two types of Pipeline syntax: Declarative Pipeline and Scripted Pipeline. A declarative pipeline starts with a pipeline {} wrapper and has stages and steps. Declarative pipeline limits what is available to the user through a stricter, pre-defined structure, whereas scripted pipeline is much closer to Groovy, and users have more flexibility in what they can do. When you run something in a script block in a declarative pipeline, the script step takes a block of scripted pipeline and executes it within the declarative pipeline. Basically, it runs a Groovy script for you. So your question can be rephrased as: what does def mean in a Groovy script?
Simply put, in a Groovy script, if you omit the def keyword the variable is added to the current script's binding, so it is treated as a global variable. If you use def, the variable is scoped, and you will only be able to use it in the current script block. There are multiple detailed answers on this here, so I'm not going to repeat them.
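As a minimal plain-Groovy sketch of the difference (variable and method names are illustrative):
// scoping.groovy
def scoped = "local to this script body"    // lexically scoped declaration
unscoped = "stored in the script's binding" // effectively a global variable

void show() {
    // Works: 'unscoped' is resolved through the script binding at runtime.
    println unscoped
    // Would fail with MissingPropertyException: 'scoped' is a local variable,
    // not visible inside this method.
    // println scoped
}

show()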

Related

How to run Python builds in parallel (Jenkins)

Right now I'm building using an "execute shell" step in Jenkins.
Currently the scripts below are built in order; I want to run them in parallel.
(current code status: screenshot omitted)
(I want) build action -> test1.py ~ test4.py executed in parallel
Is there a way to build in parallel with this approach (execute shell), or is there another strategy?
You have several options to run things in parallel within a Jenkins pipeline.
The first option is to use the static parallel directive for stages, which allows you to easily define parallel stages inside your declarative pipeline, something like:
pipeline {
    agent any
    stages {
        stage('Non-Parallel Stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('Parallel Stages') {
            parallel {
                stage('Test 1') {
                    steps {
                        sh "python3 $WORKSPACE/folder/test1.py"
                    }
                }
                stage('Test 2') {
                    steps {
                        sh "python3 $WORKSPACE/folder/test2.py"
                    }
                }
                .....
            }
        }
    }
}
A second and more dynamic option is to use the built-in parallel keyword, which takes a map from branch names to closures:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
and use it to dynamically create your parallel execution steps, something like:
tests = ['test1', 'test2', 'test3', 'test4']
parallel tests.collectEntries { test ->
    ["Running test ${test}": {
        sh "python3 $WORKSPACE/folder/${test}.py"
    }]
}
This code can reside anywhere in a scripted pipeline, or inside a script block in a declarative pipeline.
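For example, a minimal sketch (test names and paths taken from the question) of embedding the dynamic version in a declarative pipeline:
pipeline {
    agent any
    stages {
        stage('Parallel tests') {
            steps {
                script {
                    // Build the branch map dynamically, then run all branches at once.
                    def tests = ['test1', 'test2', 'test3', 'test4']
                    parallel tests.collectEntries { test ->
                        ["Running test ${test}": {
                            sh "python3 $WORKSPACE/folder/${test}.py"
                        }]
                    }
                }
            }
        }
    }
}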

Extracting an entire Jenkins stage to a shared library?

Is it possible to take an entire stage('foo') {...} definition and extract it into a shared library within Jenkins? The docs are very clear on how to pull an individual step out, but I can't find any way to take an entire stage, parameterize it, and re-use it globally. I thought perhaps just return stage... would work, but it errors out as an invalid return value.
It depends if you use scripted or declarative pipeline.
Scripted pipeline is more flexible: it allows you, for example, to create stages based on conditions (each pipeline run can have a different number and kind of stages). In this kind of pipeline you can extract a full stage to a shared library class and call it from inside the node {} block. Consider the following example:
// src/ScriptedFooStage.groovy
class ScriptedFooStage {
    private final Script script

    ScriptedFooStage(Script script) {
        this.script = script
    }

    // You can pass as many parameters as needed
    void execute(String name, boolean param1) {
        script.stage(name) {
            script.echo "Triggering ${name} stage..."
            script.sh "echo 'Execute your desired bash command here'"
            if (param1) {
                script.sh "echo 'Executing conditional command, because param1 == true'"
            }
        }
    }
}
Then the Jenkinsfile may look like this:
node {
    new ScriptedFooStage(this).execute('Foo', true)
}
As you can see, the whole stage is encapsulated in the ScriptedFooStage.execute() method. Its name is also taken from the name parameter - scripted pipeline allows you to do such things.
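To illustrate that flexibility further, here is a minimal sketch (the DEPLOY_PROD parameter and stage names are hypothetical) of a scripted pipeline whose set of stages varies per run:
node {
    // Hypothetical boolean job parameter deciding which stages exist this run.
    def targets = params.DEPLOY_PROD ? ['dev', 'prod'] : ['dev']
    for (target in targets) {
        stage("Deploy ${target}") {
            echo "Deploying to ${target}..."
        }
    }
}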
Declarative pipeline, on the other hand, is more strict and opinionated. It is fixed when it comes to the number of stages and their names (you can't model dynamically which stages are present per build and what their names are). You can still take advantage of shared library classes, but you are limited to executing them inside a script {} block inside a stage('Name') { steps {} } block. This means that you can't extract the whole stage to a separate class, but only the part that gets executed at the steps level. Consider the following example:
// src/DeclarativeFooStage.groovy
class DeclarativeFooStage {
    private final Script script

    DeclarativeFooStage(Script script) {
        this.script = script
    }

    // You can pass as many parameters as needed
    void execute(String name, boolean param1) {
        script.echo "Triggering script with name == ${name}"
        script.sh "echo 'Execute your desired bash command here'"
        if (param1) {
            script.sh "echo 'Executing conditional command, because param1 == true'"
        }
    }
}
And the Jenkinsfile may look like this:
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Foo') {
            steps {
                script {
                    new DeclarativeFooStage(this).execute('something', false)
                }
            }
        }
    }
}
If we tried to execute new DeclarativeFooStage(this).execute('something', false) outside a script {} block in the declarative pipeline, we would get compilation errors.
Conclusion
The choice between scripted and declarative pipeline depends on the specific use case. If you want the best flexibility when it comes to modeling your pipeline business logic, scripted pipeline may be the better choice. However, it comes at a price: for instance, scripted pipeline does not support restarting a pipeline build from a specific stage - this is supported only by declarative pipeline. (Imagine you have 10 stages in the pipeline and stage 7 failed because of some silly mistake, and you would like to restart the build from the 7th stage. In a scripted pipeline you would have to re-run from the very beginning, while a declarative pipeline can restart from the 7th stage by remembering the results of all 6 previous stages.)
To complete Szymon Stepniak's answer, I will note here that in a declarative pipeline you may also share the whole pipeline:
// vars/myDeliveryPipeline.groovy
def call(Map pipelineParams) {
    pipeline {
        agent any
        stages {
            stage('build') {
                ...
            }
            stage('test') {
                ...
            }
            ...
        }
    }
}
And then call it:
// Jenkinsfile
myDeliveryPipeline(foo: 'FOO', bar: 'BAR')
But as far as I remember, you may call only one pipeline in a Jenkinsfile, which makes it not very customizable.
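For completeness, a minimal sketch of how the shared pipeline might consume those parameters (the stage body here is illustrative, not from the original post):
// vars/myDeliveryPipeline.groovy
def call(Map pipelineParams) {
    pipeline {
        agent any
        stages {
            stage('build') {
                steps {
                    // Values passed from the Jenkinsfile are available on the map.
                    echo "Building with foo=${pipelineParams.foo}, bar=${pipelineParams.bar}"
                }
            }
        }
    }
}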
Source
https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/

Can I use a Closure to define a stage in a Jenkins Declarative Pipeline?

I'm trying to do something like this:
def makeStage = {
    stage('a') {
        steps {
            echo 'Hello World'
        }
    }
}

pipeline {
    agent none
    stages {
        makeStage()
    }
}
But it gives me this exception:
WorkflowScript: 11: Expected a stage @ line 11, column 5.
   makeStage()
   ^
Is it possible to define a stage as an external closure, and if so - how?
Super late, but in case anyone runs into this issue, a possible solution is to wrap your generated stage in a script step and invoke .call on the generated stage.
So for you:
def makeStage = {
    return {
        stage('a') {
            echo 'Hello World'
        }
    }
}

pipeline {
    agent none
    stages {
        stage('hello world') {
            steps {
                script {
                    makeStage().call()
                }
            }
        }
    }
}
Whoops, edited: I had steps inside my stage('a') in the makeStage declaration. steps is a declarative pipeline directive, so it was throwing an error inside the script block.
You can't define stages outside the declarative pipeline. The main purpose of declarative pipeline is to provide a simplified and opinionated syntax, so you can focus on what should be done (by using the available steps) and not on how to do it.
If you are interested in a more flexible way of implementing a pipeline, you may choose the scripted pipeline approach, which is not as strict when it comes to syntax - it is limited only by Groovy and the CPS execution module.
Working (scripted) pipeline from your example would look like this:
#!groovy

def makeStage = {
    stage('a') {
        echo 'Hello World'
    }
}

node {
    makeStage()
}
Attention: there is no steps method inside stage in a scripted pipeline. If you leave it there, you will get:
java.lang.NoSuchMethodError: No such DSL method 'steps' found among
steps [archive, bat, build, catchError, checkout, deleteDir, dir,
dockerFingerprintFrom, ...
Scripts in declarative pipeline
Declarative pipeline defines a script step that allows you to put a block of scripted pipeline inside a declarative one. However, it still does not allow you to define stages dynamically and/or extract a stage definition to a function or closure. The script step gets executed inside the stage, so you can't control from inside this block whether the stage executes or not. In some cases, however, this step can be very useful if you want to do something more complex than just calling a pre-defined step of the declarative pipeline.
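For instance, a minimal sketch (file names assumed) of a script block doing something a plain declarative step can't:
pipeline {
    agent any
    stages {
        stage('Complex step') {
            steps {
                script {
                    // Arbitrary Groovy logic: loops, conditionals, local variables.
                    def files = ['a.txt', 'b.txt']
                    for (f in files) {
                        sh "touch ${f}"
                    }
                }
            }
        }
    }
}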

Create new Jenkins jobs using Pipeline Job and Groovy script

I have Jenkins pipeline Job with parameters (name, group, taskNumber)
I need to write a pipeline script which will call a groovy script (this one?: https://github.com/peterjenkins1/jenkins-scripts/blob/master/add-job.groovy)
I want to create a new job (with the name name_group_taskNumber) every time I build the main Pipeline Job.
I don't understand:
Where do I need to put my groovy script?
What should the Pipeline script look like? :
node {
    stage('Build') {
        def pipeline = load "CreateJob.groovy"
        pipeline.run()
    }
}
You can use and configure a shared library like this one (a git repo): https://github.com/lvthillo/shared-library . You need to configure it in your Jenkins global configuration.
It contains a vars/ folder where you can manage pipelines and groovy scripts, such as my slackNotifier.groovy. That script is just a groovy script that posts the build result to Slack.
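A hypothetical sketch of what such a vars/slackNotifier.groovy could look like (this assumes the Slack Notification plugin, which provides the slackSend step; the message format is made up):
// vars/slackNotifier.groovy
def call(String buildResult) {
    // slackSend comes from the Slack Notification plugin.
    if (buildResult == 'SUCCESS') {
        slackSend color: 'good', message: "Build succeeded: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
    } else {
        slackSend color: 'danger', message: "Build ${buildResult}: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
    }
}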
In the Jenkins pipeline job we import our shared library:
@Library('name-of-shared-pipeline-library')_
mavenPipeline {
    //define parameters
}
In the case above, the pipeline itself also lives in the shared library, but this isn't necessary.
You can just write your pipeline in the job itself and call only the function from the pipeline like this:
This is the script in the shared library:
// vars/sayHello.groovy
def call(String name = 'human') {
    echo "Hello, ${name}."
}
And in your pipeline:
library 'my-shared-library'
...
stage('stage name') {
    echo "output"
    sayHello 'Peter'
}
...
EDIT:
In new declarative pipelines you can use:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('test') {
            steps {
                sh 'echo "execute say hello script:"'
                sayHello("Peter")
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

def sayHello(String name = 'human') {
    echo "Hello, ${name}."
}
output:
[test] Running shell script
+ echo 'execute say hello script:'
execute say hello script:
[Pipeline] echo
Hello, Peter.
[Pipeline] }
[Pipeline] // stage
We do it by using the Jobcopy Builder plugin (https://wiki.jenkins.io/display/JENKINS/Jobcopy+Builder+plugin): add another build step in the pipeline script and pass the parameters that should be considered.

Can I "import" the stages in a Jenkins Declarative pipeline

I have several pipeline jobs, which are configured very similarly.
They all have the same stages (of which there are about 10).
I am now thinking about moving to the declarative pipeline (https://jenkins.io/blog/2016/09/19/blueocean-beta-declarative-pipeline-pipeline-editor/).
But I do not want to define the ~10 stages in every pipeline. I want to define them at one place, and "import" them somehow.
Is this possible with declarative pipelines at all? I see that there are Libraries, but it does not seem like I could include the stage definition using them.
You will have to create a shared library to implement what I am about to suggest. For shared library implementation, you may check the following posts:
Using Building Blocks in Jenkins Declarative Pipeline
Upload file in Jenkins input step to workspace (Mainly for images so one can easily figure out things)
Now, if you want to use a Jenkinsfile (a kind of template) which can be reused across multiple projects (jobs), that is indeed possible.
Once you have created a shared-library repository with a vars directory in it, you just have to create a Groovy file (let's say commonPipeline.groovy) inside the vars directory.
Here's an example that works; I have used it in multiple jobs.
$ cat shared-lib/vars/commonPipeline.groovy
// You can create function(s) as shown below, if required
def someFunctionA() {
    // Your code
}

// This is where you will define all the stages that you want
// to run as a whole in multiple projects (jobs)
def call(Map config) {
    pipeline {
        agent {
            node { label 'slaveA || slaveB' }
        }
        environment {
            myvar_Y = 'apple'
            myvar_Z = 'orange'
        }
        stages {
            stage('Checkout') {
                steps {
                    deleteDir()
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    script {
                        check_something = someFunctionA()
                        if (check_something) {
                            echo "Build!"
                            // your_build_code
                        } else {
                            error "Something bad happened! Exiting..."
                        }
                    }
                }
            }
            stage('Test') {
                steps {
                    echo "Running tests..."
                    // your_test_code
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        sh '''
                            # your_deploy_code
                        '''
                    }
                }
            }
        }
        post {
            failure {
                sh '''
                    # anything_you_need_to_perform_in_failure_step
                '''
            }
            success {
                sh '''
                    # anything_you_need_to_perform_in_success_step
                '''
            }
        }
    }
}
With the above Groovy file in place, all you have to do now is call it in your various Jenkins projects. Since you may already have an existing Jenkinsfile in your Jenkins project (if not, create one), you just have to replace its existing content with the following:
$ cat Jenkinsfile
// Assuming you have named your shared library `my-shared-lib` and set `Default version` to the `master` branch in
// the `Manage Jenkins` » `Configure System` » `Global Pipeline Libraries` section
@Library('my-shared-lib@master')_

def params = [:]
params = [
    jenkins_var: "${env.JOB_BASE_NAME}",
]
commonPipeline params
Note: As you can see above, I am calling the commonPipeline.groovy file. So all your bulky Jenkinsfiles get reduced to just five or six lines of code, and those few lines are common across all those projects. Also note that I have used jenkins_var above; it can be any name. It's not actually used but is required for the pipeline to run. Some Groovy expert can clarify that part.
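If you do want to consume the map, here is a minimal sketch (the jenkins_var key is taken from the Jenkinsfile above; the stage body is illustrative) of reading it inside commonPipeline.groovy:
// vars/commonPipeline.groovy (sketch)
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Show config') {
                steps {
                    // Values passed from the Jenkinsfile are available on the map.
                    echo "Called from job: ${config.jenkins_var}"
                }
            }
        }
    }
}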
Ref: https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
