Pass a string parameter to Jenkins declarative script

I am declaring a string parameter in Jenkins:
Name: SLACKTOKEN
Value: qwsaw2345
My Jenkinsfile has this script:
pipeline {
    agent { label 'trial' }
    stages {
        stage("Build Docker Image") {
            steps {
                sh 'docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN}'
            }
        }
    }
}
I tried it like this, but it didn't work. Could you please let me know how I can pass a value from a Jenkins string parameter into a Jenkins declarative script file?
I have added the Password parameter in the job like below:

You can declare parameters inside the parameters directive; details are here.
To pass the parameter, use params.SLACKTOKEN inside double quotes, not single:
pipeline {
    agent { label 'trial' }
    parameters {
        password(name: 'SLACKTOKEN', defaultValue: '', description: 'Slack Token')
    }
    stages {
        stage("Build Docker Image") {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${params.SLACKTOKEN} ."
            }
        }
    }
}
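As an aside, build parameters are also exposed as environment variables, so another option is to keep the single quotes and let the shell itself expand the value. A minimal sketch, assuming the SLACKTOKEN parameter from the question (the trailing dot is the docker build context):
sh 'docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET="$SLACKTOKEN" .'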

Where have you declared your variable? There are a lot of options.
For example, use the environment section of the declarative syntax:
pipeline {
    agent {
        label 'trial'
    }
    environment {
        SLACKTOKEN = 'qwsaw2345'
    }
    stages {
        stage('Build') {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN} ."
            }
        }
    }
}

Related

Using a dockerfile with Jenkins Scripted Pipeline Syntax

Using Jenkins Declarative Pipeline, one can easily specify a Dockerfile, agent label, build args and run args as follows:
Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile {
        dir './path/to/dockerfile'
        label 'my-label'
        additionalBuildArgs '--build-arg version=1.0'
        args '-v /tmp:/tmp'
    }
}
I am trying to achieve the same using the scripted pipeline syntax. I found a way to pass the agent label and run args, but was unable to pass the directory and build args. Ideally, I would write something like this (label and run args are already working):
Jenkinsfile (Scripted Pipeline)
node ("my-label"){
docker.dockerfile(
dir: './path/to/dockerfile',
additionalBuildArgs:'--build-arg version=1.0'
).inside('-v /tmp:/tmp') {
\\ add stages here
}
}
The documentation shows how this can be done using an existing docker image, i.e., with the image directive in the pipeline.
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            //...
        }
    }
}
Jenkinsfile (Scripted Pipeline)
node {
    docker.image('node:7-alpine').inside {
        stage('Test') {
            //...
        }
    }
}
However, the scripted pipeline syntax for the dockerfile directive is missing.
The workaround I am using at the moment is building the image myself.
node ("my-label"){
def testImage = docker.build(
"test-image",
"./path/to/dockerfile",
"--build-arg v1.0"
)
testImage.inside('-v /tmp:/tmp') {
sh 'echo test'
}
}
Any help is much appreciated!
I personally put the docker CLI arguments before the image folder path, and I would specify the Dockerfile name with the -f argument.
Apart from that, you are doing this the right way. agent dockerfile builds a docker image the same way the docker.build step does, except that with docker.build you can also push the image to a registry.
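For example, a minimal sketch of that ordering with the docker.build step (the image name, Dockerfile path and build arg below are made up for illustration):
def image = docker.build(
    "my-image:${env.BUILD_ID}",
    // -f selects the Dockerfile, other flags come first, the build context directory comes last
    "-f docker/Dockerfile.ci --build-arg version=1.0 docker"
)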
Here is how I do it:
def dockerImage
// jenkins needs entrypoint of the image to be empty
def runArgs = '--entrypoint \'\''
pipeline {
    agent {
        label 'linux_x64'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '100', artifactNumToKeepStr: '20'))
        timestamps()
    }
    stages {
        stage('Build') {
            options { timeout(time: 30, unit: 'MINUTES') }
            steps {
                script {
                    def commit = checkout scm
                    // we set BRANCH_NAME to make when { branch } syntax work without a multibranch job
                    env.BRANCH_NAME = commit.GIT_BRANCH.replace('origin/', '')
                    dockerImage = docker.build("myImage:${env.BUILD_ID}",
                        "--label \"GIT_COMMIT=${env.GIT_COMMIT}\""
                        + " --build-arg MY_ARG=myArg"
                        + " ."
                    )
                }
            }
        }
        stage('Push to docker repository') {
            when { branch 'master' }
            options { timeout(time: 5, unit: 'MINUTES') }
            steps {
                lock("${JOB_NAME}-Push") {
                    script {
                        docker.withRegistry('https://myrepo:5000', 'docker_registry') {
                            dockerImage.push('latest')
                        }
                    }
                    milestone 30
                }
            }
        }
    }
}
Here is a purely old-syntax scripted pipeline that solves the problem of checking out, building a docker image and pushing the image to a registry. It assumes the Jenkins project is type "Pipeline script from SCM".
I developed this pipeline for a server that requires proxies to reach the public internet. The Dockerfile accepts build arguments to configure its tools for proxies.
I think this has a pretty good structure @fredericrous :) but I'm new to pipelines, please help me improve!
def scmvars
def image
node {
    stage('clone') {
        // enabled by project type "Pipeline script from SCM"
        scmvars = checkout(scm)
        echo "git details: ${scmvars}"
    }
    stage('env') {
        // Jenkins provides no environment variable view
        sh 'printenv|sort'
    }
    stage('build') {
        // arg 1 is the image name and tag
        // arg 2 is docker build command line
        image = docker.build("com.mycompany.myproject/my-image:${env.BUILD_ID}",
            " --build-arg commit=${scmvars.GIT_COMMIT}"
            + " --build-arg http_proxy=${env.http_proxy}"
            + " --build-arg https_proxy=${env.https_proxy}"
            + " --build-arg no_proxy=${env.no_proxy}"
            + " path/to/dir/with/Dockerfile")
    }
    stage('push') {
        docker.withRegistry('https://registry.mycompany.com:8100',
                'jenkins-registry-credential-id') {
            image.push()
        }
    }
}

Declarative Pipeline shared library

I'm facing an issue when trying to implement a shared library on my Jenkins servers.
The error I'm getting is the following:
No such DSL method 'agent' found among steps
I have tried to remove the agent and just run on a node, but I still have the issue.
I was following this guide: https://jenkins.io/blog/2017/09/25/declarative-1/
Could someone please point out where I might be going wrong.
vars/jenkinsJob.groovy
def call() {
    // Execute build pipeline job
    build_pipeline()
}

def build_pipeline() {
    agent {
        node {
            label params.SLAVE
        }
    }
    parameters {
        string(name: 'SETTINGS_CONFIG_FILE_NAME', defaultValue: 'maven.settings')
        string(name: 'SLAVE', defaultValue: 'new_slave')
    }
    environment {
        mvn = "docker run -it --rm --name my-maven-project -v "$(pwd)":/usr/src/mymaven -w /usr/src/mymaven maven:3.3-jdk-8"
    }
    stages {
        stage('Inject Settings.xml File') {
            steps {
                configFileProvider([configFile(fileId: "${env.SETTINGS_CONFIG_FILE_NAME}", targetLocation: "${env.WORKSPACE}")]) {
                }
            }
        }
        stage('Clean') {
            steps {
                sh "${mvn} clean"
            }
        }
        stage('Lint') {
            steps {
                sh "${mvn} lint"
            }
        }
        stage('Build package and execute tests') {
            steps {
                sh "${mvn} build"
            }
        }
    }
    post {
        always {
            archive "**/target/surefire-reports/*"
            junit '**/target/surefire-reports/*.xml'
            step([$class: 'JacocoPublisher'])
        }
    }
}
Jenkinsfile
@Library('pipeline-library-demo') _
jenkinsJob.call()
All valid Declarative Pipelines must be enclosed within a pipeline block, e.g.:
pipeline {
    /* insert Declarative Pipeline here */
    /* import libraries and call functions */
}
The file jenkinsJob.groovy needs to contain only a single method, named call:
def call(Map params = [:]) {
    // method body
}
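Putting the two points together, a rough sketch of what the shared library could look like (an outline only; the node label comes from the question's default and the Maven commands are placeholders):
// vars/jenkinsJob.groovy
def call(Map config = [:]) {
    pipeline {
        agent { node { label 'new_slave' } } // default slave label from the question
        stages {
            stage('Clean') {
                steps { sh 'mvn clean' }
            }
            stage('Build package and execute tests') {
                steps { sh 'mvn package' }
            }
        }
    }
}
// Jenkinsfile
@Library('pipeline-library-demo') _
jenkinsJob()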

How can I run something during agent setup in a Jenkins declarative pipeline?

Our current Jenkins pipeline looks like this:
pipeline {
    agent {
        docker {
            label 'linux'
            image 'java:8'
            args '-v /home/tester/.gradle:/.gradle'
        }
    }
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Build') {
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic which runs before, so that I can ensure this directory exists before the docker image is run?
You can run a Prepare stage before all the other stages and move the agent down to the stage level:
pipeline {
    agent { label 'linux' } // slave where the docker agent needs to run
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Prepare') {
            steps {
                // prepare the host, e.g. make sure the cache directory exists
                sh 'mkdir -p /home/tester/.gradle'
            }
        }
        stage('Build') {
            agent {
                docker {
                    label 'linux' // should be the same as the slave label
                    image 'java:8'
                    args '-v /home/tester/.gradle:/.gradle'
                }
            }
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
Specifying a Docker Label
Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
See also: How to restrict the Jenkins pipeline docker agent to a specific slave?

Using Jenkins job parameter in a bat command

I'm trying to configure a parameter in a Jenkins pipeline and then execute it within a bat command:
pipeline {
    agent {
        label 'master'
    }
    parameters {
        string (
            defaultValue: '"someExe.exe"',
            description: '',
            name: 'varExe'
        )
    }
    stages {
        stage("hi") {
            steps {
                script {
                    bat '${params.varExe}'
                }
            }
        }
    }
}
Unfortunately, I'm getting this error:
'${varExe}' is not recognized as an internal or external command
For some reason, Jenkins doesn't use the varExe value.
I've also tried bat '${varExe}' but still no luck.
Any ideas ?
You need to use double quotes here so the variable gets interpolated:
bat "${params.varExe}"
You have to be careful with single and double quotes. In the following example, the first sh step would echo someExe.exe, while the second one would throw a Bad substitution error.
pipeline {
    agent any
    parameters {
        string (
            defaultValue: '"someExe.exe"',
            description: '',
            name: 'varExe')
    }
    stages {
        stage ('Test') {
            steps {
                script {
                    sh "echo '${params.varExe}'"
                    sh 'echo "${params.varExe}"'
                }
            }
        }
    }
}
I think for the bat command it should be like below:
bat ''' echo %varExe% '''
Reference: pass parameter from jenkins parameterized build to windows batch command
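Putting it together, a minimal sketch of the stage using Windows-style expansion (parameter name taken from the question; bat hands the line to cmd.exe, which expands %varExe% itself):
stage("hi") {
    steps {
        bat 'echo %varExe%'
    }
}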

Use private docker registry with Authentication in Jenkinsfile

How can I teach my Jenkinsfile to log in via basic auth in this setup?
I'm using a custom docker image for my Jenkins build.
As described in the documentation here, I defined a docker agent like so:
pipeline {
    agent {
        docker {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr:'10'))
    }
    stages {
        stage ('Build') {
            steps {
                sh ...
            }
        }
        stage ('Test') {
            steps {
                sh ...
            }
        }
        stage ('Deploy') {
            steps {
                sh ...
            }
        }
    }
    post {
        always {
            echo 'Clean up workspace'
            deleteDir()
        }
    }
}
If I use the following agent setup:
pipeline {
    agent {
        docker.withRegistry('https://registry.az1', 'registry_login') {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
The execution of the pipeline fails with the following exception:
WorkflowScript: 3: Too many arguments for map key "withRegistry" @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
WorkflowScript: 3: Invalid agent type "withRegistry" specified. Must be one of [docker, dockerfile, label, any, none] @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
The problem is that the registry in use requires a basic auth login. The registry runs behind an nginx reverse proxy using this configuration.
As specified in Using a custom registry, you can specify the credentials and registry URL to use as such:
docker.withRegistry('https://registry.az1', 'credentials-id') {
    ...
}
You need to create a Jenkins credentials object which will contain the credentials for the repository and give it a name to replace credentials-id above.
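For reference, a minimal sketch of the scripted form assembled from the snippets in the question (the Maven command inside the container is only a placeholder):
node {
    docker.withRegistry('https://registry.az1', 'credentials-id') {
        docker.image('registry.az1:5043/maven-proto').inside('-v /var/jenkins_home/.m2:/root/.m2') {
            sh 'mvn -B verify' // placeholder build step
        }
    }
}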
Update:
For declarative pipelines, the syntax is as such:
agent {
    docker {
        image 'registry.az1:5043/maven-proto'
        registryUrl 'https://registry.az1'
        registryCredentialsId 'credentials-id'
        args '-v /var/jenkins_home/.m2:/root/.m2'
    }
}
