I have a Jenkins Pipeline job with parameters (name, group, taskNumber).
I need to write a pipeline script which will call a groovy script (this one?: https://github.com/peterjenkins1/jenkins-scripts/blob/master/add-job.groovy).
I want to create a new job (named name_group_taskNumber) every time I build the main Pipeline job.
I don't understand:
Where do I need to put my groovy script?
What should the Pipeline script look like?
node {
    stage('Build') {
        def pipeline = load "CreateJob.groovy"
        pipeline.run()
    }
}
You can use and configure a shared library like this one (a git repo): https://github.com/lvthillo/shared-library . You need to configure this in your Jenkins global configuration.
It contains a vars/ folder. Here you can manage pipelines and groovy scripts, like my slackNotifier.groovy. That script is just a groovy script which posts the build result to Slack.
In the jenkins pipeline job we will import our shared library:
@Library('name-of-shared-pipeline-library') _

mavenPipeline {
    //define parameters
}
In the case above the pipeline itself also lives in the shared library, but this isn't necessary.
You can also write the pipeline in the job itself and only call the function from the shared library, like this:
This is the script in the shared library:
// vars/sayHello.groovy
def call(String name = 'human') {
    echo "Hello, ${name}."
}
And in your pipeline:
library 'my-shared-library'
...
stage('stage name') {
    echo "output"
    sayHello('Peter')
}
...
EDIT:
In new declarative pipelines you can use:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('test') {
            steps {
                sh 'echo "execute say hello script:"'
                sayHello("Peter")
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

def sayHello(String name = 'human') {
    echo "Hello, ${name}."
}
output:
[test] Running shell script
+ echo 'execute say hello script:'
execute say hello script:
[Pipeline] echo
Hello, Peter.
[Pipeline] }
[Pipeline] // stage
We do it by using the Jobcopy Builder plugin (https://wiki.jenkins.io/display/JENKINS/Jobcopy+Builder+plugin): trigger another build step from the pipeline script and pass in the parameters that need to be considered.
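For example, a minimal sketch (copy-job-template is a hypothetical freestyle job that has the Jobcopy Builder build step configured on it; the parameter names are assumptions):

node {
    stage('Create job') {
        // Trigger the helper job that runs the Jobcopy Builder step,
        // passing through the parameters of the main pipeline job
        build job: 'copy-job-template', parameters: [
            string(name: 'NAME', value: params.name),
            string(name: 'GROUP', value: params.group),
            string(name: 'TASK_NUMBER', value: params.taskNumber)
        ]
    }
}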
I have two sample Jenkinsfiles:
The content of A_Jenkinsfile is:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    foo = "bar"
                }
                sh "echo ${foo}"
            }
        }
        stage("two") {
            steps {
                sh "echo ${foo}"
            }
        }
    }
}
The other one is B_Jenkinsfile and its content is:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "bar"
                }
                sh "echo ${foo}"
            }
        }
        stage("two") {
            steps {
                sh "echo ${foo}"
            }
        }
    }
}
When I build them, B_Jenkinsfile fails and A_Jenkinsfile succeeds.
What is the difference between using def and not using def in a script block in a Jenkinsfile?
There are two types of Pipeline syntax: Declarative Pipeline and Scripted Pipeline. A declarative pipeline starts with a pipeline {} wrapper and has stages and steps; it limits what is available to the user with a stricter, pre-defined structure. Scripted Pipeline is much closer to Groovy, and users have more flexibility in what they can do. When you run something in a script block in a declarative pipeline, the script step takes a block of Scripted Pipeline and executes it inside the Declarative Pipeline. Basically, it runs a Groovy script for you. So your question can be rephrased as: what does def mean in a Groovy script?
Simply put, in a Groovy script, if you omit the def keyword the variable is added to the current script's binding, so it is treated as a global variable. If you use def, the variable is scoped and you will only be able to use it in the current script block. There are multiple detailed answers on this here, so I'm not going to repeat them.
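A minimal Groovy sketch of that difference (the variable names are only for illustration):

// no def: foo goes into the script binding and behaves like a global variable
foo = 'bar'

// def: baz is a local variable, visible only in the current script block
def baz = 'qux'

def show() {
    echo foo       // works: foo is resolved through the binding
    // echo baz    // would fail with MissingPropertyException: baz is out of scope here
}

show()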
I'm a total noob with Groovy. For context I have a Jenkins pipeline that uses a deploy.pipeline.groovy script. I also have a test.pipeline.groovy script for git PRs.
I'm trying to reduce duplicated code in both scripts so I created a Globals.groovy script to store my constants and a Functions.groovy script to store reusable functions for both pipeline scripts. All of the files are in the same directory, but I can't figure out how to import my Globals and Functions scripts into the pipeline scripts for use.
My Globals.groovy file is like this:
import groovy.transform.Field

@CompileStatic class Globals {
    @Field final String test1 = 'first test'
    @Field final String test2 = 'second test'
}
My Functions.groovy file is like this:
@CompileStatic class Functions {
    def TestMessage1() { println globals.test1 }
    def TestMessage2() { println globals.test2 }
}
Both pipeline scripts have a "Test" stage like so:
def runTests() {
    stage('Test') {
        functions.TestMessage1()
        functions.TestMessage2()
    }
}
I can't figure out how to import or load my Globals.groovy script into my Functions.groovy script, and then my Functions.groovy script into my pipeline scripts.
I've tried putting this at the top of my Functions.groovy script:
def globals = load('Globals.groovy')
And this at the top of my pipeline scripts
def functions = load('Functions.groovy')
What am I doing wrong?
You can just have plain functions in your groovy files; you only need a return this at the bottom.
Here is my Jenkinsfile:
def pipeline
node('master') {
    checkout scm
    pipeline = load 'script.groovy'
    pipeline.execute()
}
This is my script.groovy (same level as Jenkinsfile in repo)
//import ...
def execute() {
    println 'Test'
}
return this
Jenkins job Output:
[Pipeline] load
[Pipeline] { (script.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] echo
Test
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
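Applied to the Globals/Functions question above, a minimal sketch could look like this (it assumes Functions.groovy sits next to the pipeline script in the checked-out repo; the function and message names are just illustrative):

// Functions.groovy - plain functions, no class wrapper needed
def testMessage1() {
    echo 'first test'
}

def testMessage2() {
    echo 'second test'
}

return this

And in the pipeline script:

node {
    checkout scm
    def functions = load 'Functions.groovy'
    stage('Test') {
        functions.testMessage1()
        functions.testMessage2()
    }
}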
I have two Jenkins Pipelines :
Pipeline A : In a stage, I defined an environment variable called MAVEN_PROFILE (the user can choose a value from a list)
Pipeline B : I need to get the MAVEN_PROFILE environment variable value that was set in Pipeline A
I need two pipelines because I can't do it in a single Pipeline for process reasons.
I saw there was some answers on how to share variable between stages within a single Pipeline but this not my case.
I want to share environment variable value between different Pipelines.
Pipeline A
pipeline {
    agent any
    ...
    stages {
        stage('Profile Selection') {
            steps {
                script {
                    env.MAVEN_PROFILE = input message: 'Choose the profile :',
                        parameters: [choice(name: 'MAVEN_PROFILE',
                            choices: 'all\nserver\nclient', description: 'Profiles')]
                }
            }
        }
        stage(...) {
            steps {
                script {
                    bat "mvn deploy -P ${env.MAVEN_PROFILE}"
                }
            }
        }
        ... other stages
    }
}
Pipeline B
pipeline {
    agent any
    ...
    stages {
        ... other stages
        stage(...) {
            steps {
                script {
                    bat "mvn release ... -P ${env.environmentVariableValueFromPipelineA}"
                }
            }
        }
    }
}
They're not running in the same environment, so they can't directly share environment variables. The easiest approach is probably to write the values to a file in the workspace in pipeline A and read them back in pipeline B. Something like this:
Pipeline A:
sh "echo ${MAVEN_PROFILE} > .MAVEN_PROFILE"
Pipeline B:
def MAVEN_PROFILE = sh(script: 'cat .MAVEN_PROFILE', returnStdout: true).trim()
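Inside the declarative pipelines from the question this could look like the sketch below, using the writeFile/readFile steps instead of shelling out (it assumes pipeline B can reach the file pipeline A wrote, e.g. both jobs run on the same node and B knows the path to A's workspace):

// Pipeline A, after MAVEN_PROFILE has been chosen:
script {
    writeFile file: '.MAVEN_PROFILE', text: env.MAVEN_PROFILE
}

// Pipeline B, before the release stage:
script {
    env.MAVEN_PROFILE = readFile('.MAVEN_PROFILE').trim()
    bat "mvn release ... -P ${env.MAVEN_PROFILE}"   // same command as in the question
}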
I want to use the mentioned plugin with a declarative pipeline, to be precise I want to convert the following documentation example to a declarative pipeline:
The pipeline code in the upstream job is the following:
stage ('Stage 1. Allocate workspace in the upstream job')
def extWorkspace = exwsAllocate 'diskpool1'
node ('linux') {
    exws (extWorkspace) {
        stage('Stage 2. Build in the upstream job')
        git url: 'https://github.com/alexsomai/dummy-hello-world.git'
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn clean install -DskipTests"
    }
}
And the downstream's Pipeline code is:
stage ('Stage 3. Select the upstream run')
def run = selectRun 'upstream'
stage ('Stage 4. Allocate workspace in the downstream job')
def extWorkspace = exwsAllocate selectedRun: run
node ('test') {
    exws (extWorkspace) {
        stage('Stage 5. Run tests in the downstream job')
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn test"
    }
}
Thanks!
I searched everywhere for a clear answer to this, yet never found a definitive one. So I pulled the External Workspace Plugin code and read it. The answer is simple, as long as the plugin's model doesn't change.
Anytunc's answer is very close, but the issue is getting the path from the External Workspace Plugin and getting it into the customWorkspace configuration.
What I ended up doing was creating a method:
def getExternalWorkspace() {
    extWorkspace = exwsAllocate diskPoolId: "jenkins"
    return extWorkspace.getCompleteWorkspacePath()
}
and setting my agent to:
agent {
    node {
        label 'Linux'
        customWorkspace getExternalWorkspace()
    }
}
If you'd rather not set the entire pipeline to that path, you could create as many external workspaces as you want, then use
...
steps {
    dir(getExternalWorkspace()) {
        // do fancy stuff
        ...
    }
}
...
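Putting those pieces together, a complete declarative pipeline using the method above might look like this sketch (it assumes a disk pool with id "jenkins" is configured for the External Workspace Manager plugin and that mvn is available on the Linux node):

def getExternalWorkspace() {
    extWorkspace = exwsAllocate diskPoolId: 'jenkins'
    return extWorkspace.getCompleteWorkspacePath()
}

pipeline {
    agent {
        node {
            label 'Linux'
            customWorkspace getExternalWorkspace()
        }
    }
    stages {
        stage('build') {
            steps {
                // everything in this stage runs inside the allocated external workspace
                sh 'mvn clean install -DskipTests'
            }
        }
    }
}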
You can use this agent directive:
agent {
    node {
        label 'my-defined-label'
        customWorkspace '/some/other/path'
    }
}
Is there a way to use the Jenkins "Execute system groovy script" step from a pipeline file which is committed to SCM?
If yes, how would I access the predefined variables (like build) in it ?
If no, would I be able to replicate the functionality otherwise, using for example the Shared Library Plugin ?
Thanks !
You can put groovy code in a pipeline in an (always-source-controlled) Jenkinsfile, like this:
pipeline {
    agent { label 'docker' }
    stages {
        stage ('build') {
            steps {
                script {
                    // i used a script block because you can jam arbitrary groovy in here
                    // without being constrained by the declarative Jenkinsfile DSL
                    def awesomeVar = 'so_true'
                    print "look at this: ${awesomeVar}"
                    // accessing a predefined variable:
                    echo "currentBuild.number: ${currentBuild.number}"
                }
            }
        }
    }
}
Produces console log:
[Pipeline] echo
look at this: so_true
[Pipeline] echo
currentBuild.number: 84
Click on the "Pipeline Syntax" link in the left navigation of any pipeline job to get a bunch of examples of things you can access in the "Global Variables Reference".
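For instance, a few of the predefined variables listed there can be used like this (a small sketch, not an exhaustive list):

script {
    echo "Job name:      ${env.JOB_NAME}"
    echo "Build number:  ${env.BUILD_NUMBER}"
    echo "Build URL:     ${env.BUILD_URL}"
    echo "Result so far: ${currentBuild.currentResult}"
}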