I want to give a specific name to every Pipeline job build, e.g. #Build_number parameter1 parameter2.
I have done this in a freestyle project job, but I can't find how to do it in a Pipeline project.
You can use the following script section in any stage of your pipeline:
pipeline {
    agent any
    stages {
        stage("Any stage") {
            steps {
                script {
                    currentBuild.displayName = '#' + currentBuild.number +
                            '-' + params.parameter1 +
                            '-' + params.parameter2
                    currentBuild.description = "The best description."
                }
            }
        }
    }
}
I'm attempting to validate that all Jenkins pipelines, at least within a single group/organization, have published their JUnit test results. Is there a way to do this programmatically? Also, would it be limited to Jenkinsfiles, or would it work on all pipelines? Thanks!
I could manually check this by looking for "Test Results" on a job's build page, which indicates that the job has published test results to the JUnit plugin.
If I were to write a Jenkinsfile, it might look something like the one below, although it is also possible to attach results to the JUnit plugin manually:
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                // Login to Repository
                configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS compile'
                }
            }
        }
        stage('Test') {
            steps {
                configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS test'
                }
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'
            archive 'target/*.jar'
        }
    }
}
Here is a script you can use to check whether tests are attached to jobs in a specific subdirectory. You can run it either from a Pipeline or in the Jenkins Script Console.
def subFolderToCheck = "folder1" // we will only check jobs in a specific subdirectory
Jenkins.instance.getAllItems(Job.class).each { jobitem ->
    def jobName = jobitem.getFullName()
    def jobInfo = Jenkins.instance.getItemByFullName(jobName)
    // Check whether the last successful build has any tests attached.
    if (jobName.contains(subFolderToCheck) && jobInfo.getLastSuccessfulBuild() != null) {
        def results = jobInfo.getLastSuccessfulBuild().getActions(hudson.tasks.junit.TestResultAction.class).result
        println("Job : " + jobName + " Tests " + results.size())
        if (results == null || results.size() <= 0) {
            print("Job " + jobName + " does not have any tests!!!!!")
        }
    }
}
I have a scripted pipeline with parallel branches on different configurations.
def platforms = ['SLES11', 'SLES12']
def build_modes = ['debug', 'release']

def prepareStages(def build_mode, def os) {
    return {
        node(os) {
            stage('Build ' + build_mode + ' ' + os) {
                //do stuff
            }
            stage('Install ' + build_mode + ' ' + os) {
                //do stuff
            }
            stage('Run tests ' + build_mode + ' ' + os) {
                //do stuff
            }
        }
    }
}

stage('Setup environment') {
    node('SLES12') {
        //do stuff
    }
}

stage('Fetch Sources') {
    node('sys_utdb_gh_iil_sles12') {
        //do stuff
    }
}

stage("PIPELINE") {
    def branches = [:]
    platforms.each { o ->
        build_modes.each { m ->
            branches[m + ' ' + o] = prepareStages(m, o)
        }
    }
    parallel branches
}
When I look at the pipeline in Blue Ocean, I see that the "Install ..." stages in branches whose build has already finished do not start until the "Build ..." stages in all branches have finished.
I have seen many different ways to run pipelines with parallel stages; I took this one from an example. Is there another way that allows the branches to run independently of each other?
I need your help, please.
I'm working on a Groovy script to list all SCM polling jobs.
The script works fine in the Jenkins Script Console, but when I integrate it into a Jenkinsfile and run it in a pipeline, I get this error:
12:51:21 WorkflowScript: 10: The current scope already contains a variable of the name it
12:51:21 @ line 10, column 25.
12:51:21 def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec ) println ("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec )}
12:51:21 ^
12:51:21
12:51:21 1 error
12:51:21
12:51:21 at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
12:51:21 at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:958)
Here is the Jenkinsfile:
#!/usr/bin/env groovy
import hudson.triggers.*
import hudson.maven.MavenModuleSet
import org.jenkinsci.plugins.workflow.job.*

pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec) println("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec) }

                println("--- SCM Frequent Polling for Pipeline jobs ---")
                Jenkins.getInstance().getAllItems(WorkflowJob.class).each() { logSpec(it, { it.getSCMTrigger() }) }

                println("\n--- SCM Frequent Polling for FreeStyle jobs ---")
                Jenkins.getInstance().getAllItems(FreeStyleProject.class).each() { logSpec(it, { it.getSCMTrigger() }) }

                println("\n--- SCM Frequent Polling for Maven jobs ---")
                Jenkins.getInstance().getAllItems(MavenModuleSet.class).each() { logSpec(it, { it.getTrigger(SCMTrigger.class) }) }

                println("--- SCM Frequent Polling for Abstract jobs ---")
                Jenkins.getInstance().getAllItems(AbstractProject.class).each() { logSpec(it, { it.getTrigger(SCMTrigger.class) }) }

                println '\nDone.'
            }
        }
    }
}
Can anyone help?
Thanks!
it is an implicit variable that Groovy provides in closures when the closure doesn't have an explicitly declared parameter. So when you declare a parameter, make sure it is not called it, to avoid conflicts with parent scopes that already define it (in your case, the closure of .each()).
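In other words, only the closure's first parameter needs a new name; a minimal before/after sketch (the name job is just an example):
// Fails when compiled as pipeline code, because a parent scope already provides the implicit 'it':
//     def logSpec = { it, getTrigger -> ... }
// Works: give the first parameter a different name, e.g. 'job':
def logSpec = { job, getTrigger -> /* ... */ }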
Also, to integrate a script section into a pipeline, either use the script step or define a function that you can call like a built-in step.
Lastly, .each() doesn't work well in pipeline code, due to the restrictions imposed by the CPS transformations that Jenkins applies to pipeline code (unless the method is annotated @NonCPS, which has other restrictions). So .each() should be replaced by a for loop.
pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                script {
                    def logSpec = { job, getTrigger -> String spec = getTrigger(job)?.getSpec(); if (spec) println("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec) }

                    println("--- SCM Frequent Polling for Pipeline jobs ---")
                    for (item in Jenkins.getInstance().getAllItems(WorkflowJob.class)) {
                        logSpec(item, { item.getSCMTrigger() })
                    }
                    // ... other code ...
                    println '\nDone.'
                }
            }
        }
    }
}
Variant with separate function:
pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                doStuff()
            }
        }
    }
}

void doStuff() {
    def logSpec = { job, getTrigger -> String spec = getTrigger(job)?.getSpec(); if (spec) println("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec) }

    println("--- SCM Frequent Polling for Pipeline jobs ---")
    for (item in Jenkins.getInstance().getAllItems(WorkflowJob.class)) {
        logSpec(item, { item.getSCMTrigger() })
    }
    // ... other code ...
    println '\nDone.'
}
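As mentioned above, a method annotated with @NonCPS is another way to keep .each(). A hedged sketch, assuming the imports from the Jenkinsfile above and an illustrative helper name: @NonCPS methods run as plain Groovy, must not call pipeline steps, and println output from them is not guaranteed to reach the build log, so this version collects the results and returns them as a string.
@NonCPS
String collectScmPollingSpecs() {
    def lines = []
    Jenkins.getInstance().getAllItems(WorkflowJob.class).each { job ->
        String spec = job.getSCMTrigger()?.getSpec()
        if (spec) {
            lines << "job_name ${job.name} job_path ${job.getFullName()} with spec ${spec}"
        }
    }
    return lines.join('\n')
}
It could then be called from a steps block, for example echo collectScmPollingSpecs() inside a script section.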
Using Jenkins Declarative Pipeline, one can easily specify a Dockerfile, agent label, build args and run args as follows:
Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile {
        dir './path/to/dockerfile'
        label 'my-label'
        additionalBuildArgs '--build-arg version=1.0'
        args '-v /tmp:/tmp'
    }
}
I am trying to achieve the same using the scripted pipeline syntax. I found a way to pass the agent label and run args, but was unable to pass the directory and build args. Ideally, I would write something like this (label and run args are already working):
Jenkinsfile (Scripted Pipeline)
node ("my-label"){
docker.dockerfile(
dir: './path/to/dockerfile',
additionalBuildArgs:'--build-arg version=1.0'
).inside('-v /tmp:/tmp') {
\\ add stages here
}
}
The documentation shows how this can be done using an existing docker image, i.e., with the image directive in the pipeline.
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            //...
        }
    }
}
Jenkinsfile (Scripted Pipeline)
node {
    docker.image('node:7-alpine').inside {
        stage('Test') {
            //...
        }
    }
}
However, the scripted pipeline syntax for the dockerfile directive is missing.
The workaround I am using at the moment is building the image myself.
node ("my-label"){
def testImage = docker.build(
"test-image",
"./path/to/dockerfile",
"--build-arg v1.0"
)
testImage.inside('-v /tmp:/tmp') {
sh 'echo test'
}
}
Any help is much appreciated!
I personally put the Docker CLI arguments before the image folder path, and I would specify the Dockerfile name with the -f argument.
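A minimal sketch of that ordering (the image name, paths and version value are placeholders); it would run inside a node block, as in the workaround above:
// CLI arguments first, the build context path last; -f selects a Dockerfile that is
// not named "Dockerfile" or does not sit in the context root
def image = docker.build("test-image:${env.BUILD_ID}",
    "--build-arg version=1.0 -f ./path/to/dockerfile/Dockerfile ./path/to/dockerfile")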
Apart from that, you are doing this the right way. agent dockerfile builds a Docker image the same way the docker.build step does, except that with the docker.build step you can also push your image to a registry.
Here is how I do it:
def dockerImage
// jenkins needs entrypoint of the image to be empty
def runArgs = '--entrypoint \'\''

pipeline {
    agent {
        label 'linux_x64'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '100', artifactNumToKeepStr: '20'))
        timestamps()
    }
    stages {
        stage('Build') {
            options { timeout(time: 30, unit: 'MINUTES') }
            steps {
                script {
                    def commit = checkout scm
                    // we set BRANCH_NAME to make when { branch } syntax work without multibranch job
                    env.BRANCH_NAME = commit.GIT_BRANCH.replace('origin/', '')
                    dockerImage = docker.build("myImage:${env.BUILD_ID}",
                        "--label \"GIT_COMMIT=${env.GIT_COMMIT}\""
                        + " --build-arg MY_ARG=myArg"
                        + " ."
                    )
                }
            }
        }
        stage('Push to docker repository') {
            when { branch 'master' }
            options { timeout(time: 5, unit: 'MINUTES') }
            steps {
                lock("${JOB_NAME}-Push") {
                    script {
                        docker.withRegistry('https://myrepo:5000', 'docker_registry') {
                            dockerImage.push('latest')
                        }
                    }
                    milestone 30
                }
            }
        }
    }
}
Here is a purely old-syntax scripted pipeline that solves the problem of checking out, building a docker image and pushing the image to a registry. It assumes the Jenkins project is type "Pipeline script from SCM".
I developed this pipeline for a server that requires proxies to reach the public internet. The Dockerfile accepts build arguments to configure its tools for proxies.
I think this has a pretty good structure @fredericrous :) but I'm new to pipelines, please help me improve!
def scmvars
def image

node {
    stage('clone') {
        // enabled by project type "Pipeline script from SCM"
        scmvars = checkout(scm)
        echo "git details: ${scmvars}"
    }
    stage('env') {
        // Jenkins provides no environment variable view
        sh 'printenv|sort'
    }
    stage('build') {
        // arg 1 is the image name and tag
        // arg 2 is the docker build command line
        image = docker.build("com.mycompany.myproject/my-image:${env.BUILD_ID}",
            " --build-arg commit=${scmvars.GIT_COMMIT}"
            + " --build-arg http_proxy=${env.http_proxy}"
            + " --build-arg https_proxy=${env.https_proxy}"
            + " --build-arg no_proxy=${env.no_proxy}"
            + " path/to/dir/with/Dockerfile")
    }
    stage('push') {
        docker.withRegistry('https://registry.mycompany.com:8100',
                'jenkins-registry-credential-id') {
            image.push()
        }
    }
}
I'm trying to create a declarative Jenkins pipeline script but having issues with simple variable declaration.
Here is my script:
pipeline {
    agent none
    stages {
        stage("first") {
            def foo = "foo" // fails with "WorkflowScript: 5: Expected a step @ line 5, column 13."
            sh "echo ${foo}"
        }
    }
}
However, I get this error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: Expected a step @ line 5, column 13.
def foo = "foo"
^
I'm on Jenkins 2.7.4 and Pipeline 2.4.
The Declarative model for Jenkins Pipelines has a restricted subset of syntax that it allows in the stage blocks - see the syntax guide for more info. You can bypass that restriction by wrapping your steps in a script { ... } block, but as a result, you'll lose validation of syntax, parameters, etc within the script block.
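A minimal sketch of that wrapper (echo is used here instead of sh so that no agent is required):
pipeline {
    agent none
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"   // arbitrary Groovy is allowed inside script { }
                    echo foo
                }
            }
        }
    }
}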
I think the error is not coming from the specified line but from the first 3 lines. Try this instead:
node {
    stage("first") {
        def foo = "foo"
        sh "echo ${foo}"
    }
}
I think you had some extra lines that are not valid...
From the declarative pipeline model documentation, it seems that you have to use an environment declaration block to declare your variables, e.g.:
pipeline {
    environment {
        FOO = "foo"
    }
    agent none
    stages {
        stage("first") {
            steps {
                sh "echo ${FOO}"
            }
        }
    }
}
Agree with @Pom12, @abayer. To complete the answer, you need to add a script block.
Try something like this:
pipeline {
    agent any
    environment {
        ENV_NAME = "${env.BRANCH_NAME}"
    }
    // ----------------
    stages {
        stage('Build Container') {
            steps {
                echo 'Building Container..'
                script {
                    if (ENVIRONMENT_NAME == 'development') {
                        ENV_NAME = 'Development'
                    } else if (ENVIRONMENT_NAME == 'release') {
                        ENV_NAME = 'Production'
                    }
                }
                echo 'Building Branch: ' + env.BRANCH_NAME
                echo 'Build Number: ' + env.BUILD_NUMBER
                echo 'Building Environment: ' + ENV_NAME
                echo "Running your service with environment ${ENV_NAME} now"
            }
        }
    }
}
In Jenkins 2.138.3 there are two different types of pipelines.
Declarative and Scripted pipelines.
"Declarative pipelines is a new extension of the pipeline DSL (it is basically a pipeline script with only one step, a pipeline step with arguments (called directives), these directives should follow a specific syntax. The point of this new format is that it is more strict and therefore should be easier for those new to pipelines, allow for graphical editing and much more.
scripted pipelines is the fallback for advanced requirements."
jenkins pipeline: agent vs node?
Here is an example of using environment and global variables in a Declarative Pipeline. From what I can tell, environment variables are static after they are set.
def browser = 'Unknown'

pipeline {
    agent any
    environment {
        // Use Pipeline Utility Steps plugin to read information from pom.xml into env variables
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
    }
    stages {
        stage('Example') {
            steps {
                script {
                    browser = sh(returnStdout: true, script: 'echo Chrome')
                }
            }
        }
        stage('SNAPSHOT') {
            when {
                expression {
                    return !env.JOB_NAME.equals("PROD") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "SNAPSHOT"
                echo "${browser}"
            }
        }
        stage('RELEASE') {
            when {
                expression {
                    return !env.JOB_NAME.equals("TEST") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "RELEASE"
                echo "${browser}"
            }
        }
    } //end of stages
} //end of pipeline
You are using a Declarative Pipeline, which requires a script step to execute Groovy code. This is a huge difference compared to the Scripted Pipeline, where this is not necessary.
The official documentation says the following:
The script step takes a block of Scripted Pipeline and executes that
in the Declarative Pipeline.
pipeline {
    agent none
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}
You can define the variable globally, but when using this variable you must write it inside a script block.
def foo = "foo"

pipeline {
    agent none
    stages {
        stage("first") {
            steps {
                script {
                    sh "echo ${foo}"
                }
            }
        }
    }
}
Try this declarative pipeline, it's working:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}