Using Jenkins Declarative Pipeline, one can easily specify a Dockerfile, agent label, build args and run args as follows:
Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile {
        dir './path/to/dockerfile'
        label 'my-label'
        additionalBuildArgs '--build-arg version=1.0'
        args '-v /tmp:/tmp'
    }
}
I am trying to achieve the same using the scripted pipeline syntax. I found a way to pass the agent label and run args, but was unable to pass the directory and build args. Ideally, I would write something like this (label and run args already work):
Jenkinsfile (Scripted Pipeline)
node("my-label") {
    docker.dockerfile(
        dir: './path/to/dockerfile',
        additionalBuildArgs: '--build-arg version=1.0'
    ).inside('-v /tmp:/tmp') {
        // add stages here
    }
}
The documentation shows how this can be done using an existing docker image, i.e., with the image directive in the pipeline.
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            //...
        }
    }
}
Jenkinsfile (Scripted Pipeline)
node {
    docker.image('node:7-alpine').inside {
        stage('Test') {
            //...
        }
    }
}
However, the scripted pipeline syntax for the dockerfile directive is missing.
The workaround I am using at the moment is building the image myself.
node("my-label") {
    // docker.build takes the image name plus a single string of
    // build arguments ending with the build context path
    def testImage = docker.build(
        "test-image",
        "--build-arg version=1.0 ./path/to/dockerfile"
    )
    testImage.inside('-v /tmp:/tmp') {
        sh 'echo test'
    }
}
Any help is much appreciated!
I personally put the Docker CLI arguments before the build-context path and would specify the Dockerfile name with the -f argument.
Apart from that, you are doing this the right way. agent dockerfile builds a Docker image the same way the docker.build step does, except that with docker.build you can also push your image to a registry.
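Applied to the workaround from the question, that ordering would look roughly like this sketch (the paths and label are the hypothetical ones from the question; -f points at a Dockerfile that is not at the context root):

```groovy
// Sketch only: CLI options come first, the build context path comes last,
// and -f selects the Dockerfile inside that directory (paths are hypothetical).
node('my-label') {
    def testImage = docker.build(
        'test-image',
        '--build-arg version=1.0 -f ./path/to/dockerfile/Dockerfile ./path/to/dockerfile'
    )
    testImage.inside('-v /tmp:/tmp') {
        sh 'echo test'
    }
}
```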
Here is how I do it:
def dockerImage
// Jenkins needs the entrypoint of the image to be empty
def runArgs = '--entrypoint \'\''
pipeline {
    agent {
        label 'linux_x64'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '100', artifactNumToKeepStr: '20'))
        timestamps()
    }
    stages {
        stage('Build') {
            options { timeout(time: 30, unit: 'MINUTES') }
            steps {
                script {
                    def commit = checkout scm
                    // we set BRANCH_NAME to make when { branch } syntax work without a multibranch job
                    env.BRANCH_NAME = commit.GIT_BRANCH.replace('origin/', '')
                    dockerImage = docker.build("myImage:${env.BUILD_ID}",
                        "--label \"GIT_COMMIT=${env.GIT_COMMIT}\""
                        + " --build-arg MY_ARG=myArg"
                        + " ."
                    )
                }
            }
        }
        stage('Push to docker repository') {
            when { branch 'master' }
            options { timeout(time: 5, unit: 'MINUTES') }
            steps {
                lock("${JOB_NAME}-Push") {
                    script {
                        docker.withRegistry('https://myrepo:5000', 'docker_registry') {
                            dockerImage.push('latest')
                        }
                    }
                    milestone 30
                }
            }
        }
    }
}
Here is a purely old-syntax scripted pipeline that solves the problem of checking out, building a docker image and pushing the image to a registry. It assumes the Jenkins project is type "Pipeline script from SCM".
I developed this pipeline for a server that requires proxies to reach the public internet. The Dockerfile accepts build arguments to configure its tools for proxies.
I think this has a pretty good structure, @fredericrous :) but I'm new to pipelines, so please help me improve!
def scmvars
def image
node {
    stage('clone') {
        // enabled by project type "Pipeline script from SCM"
        scmvars = checkout(scm)
        echo "git details: ${scmvars}"
    }
    stage('env') {
        // Jenkins provides no environment variable view
        sh 'printenv|sort'
    }
    stage('build') {
        // arg 1 is the image name and tag
        // arg 2 is the docker build command line
        image = docker.build("com.mycompany.myproject/my-image:${env.BUILD_ID}",
            " --build-arg commit=${scmvars.GIT_COMMIT}"
            + " --build-arg http_proxy=${env.http_proxy}"
            + " --build-arg https_proxy=${env.https_proxy}"
            + " --build-arg no_proxy=${env.no_proxy}"
            + " path/to/dir/with/Dockerfile")
    }
    stage('push') {
        docker.withRegistry('https://registry.mycompany.com:8100',
            'jenkins-registry-credential-id') {
            image.push()
        }
    }
}
Below is my pipeline:
#!groovy
String version
String awsRegion = "us-east-1"
String appName = "abcde"
String dockerFilePath = "."
def featureEnv = env.BRANCH_NAME != 'master'
String branchName = env.BRANCH_NAME
String env = (env.BRANCH_NAME == 'master') ? 'release' : 'develop'
String ecrRepo = featureEnv ? "123456789012.dkr.ecr.${awsRegion}.amazonaws.com/abcde_${env}" : "987654321098.dkr.ecr.${awsRegion}.amazonaws.com/abcde_master"
String terraformPath = "terraform/dev"
println "Feature Environment=${featureEnv}"
pipeline {
    agent none
    options {
        buildDiscarder(logRotator(numToKeepStr: '30'))
        disableConcurrentBuilds()
        timeout(time: 6, unit: 'HOURS')
        ansiColor('xterm')
    }
    stages {
        stage('version build') {
            agent { label 'linux' }
            steps {
                script {
                    version = VersionNumber(
                        versionNumberString: '1.0.${BUILD_NUMBER, X}',
                        skipFailedBuilds: false)
                    currentBuild.displayName = version
                    println "Pipeline Version='${version}'"
                }
            }
        }
        stage('Build') {
            when {
                anyOf { branch 'develop'; branch 'release' }
            }
            agent { label 'linux' }
            steps {
                checkout scm
                unstash name: "${appName}-docker"
                dir(dockerFilePath) {
                    sh("""
                        while IFS= read -r line; do
                            build_args+=" --build-arg \$line"
                        done < "env_vars.txt"
                        #echo \$build_args
                        docker build -t ${ecrRepo}:${version} \$build_args --no-cache=true .
                        eval \$(aws ecr get-login --no-include-email --region ${awsRegion})
                        docker push ${ecrRepo}:${version}
                        docker rmi ${ecrRepo}:${version}
                    """)
                }
            }
        }
    }
}
I am using multibranch pipelines to execute the Jenkins job, but for the release branch it is by default using the develop ECR repo instead of the release one. I am attaching the docker build and docker push output from Jenkins below. Please suggest.
Jenkins output:
+ docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/abcde_develop:1.0.2 --build-arg HOST=0.0.0.0 --build-arg PORT=8080 --build-arg DOMAIN=abcde --build-arg MSAL_CLIENT_ID=1234567-bd11-4d2e-add5-d78f5e59e976 --build-arg
+ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/abcde_develop:1.0.2
I suppose you can use this version for testing purposes:
when {
    expression { BRANCH_NAME ==~ /(develop|release)/ }
}
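As an aside, a possible root cause worth checking (this is an assumption, not something stated in the question): featureEnv is true for both develop and release, since neither branch name equals master, so ecrRepo always resolves to the develop repository. A sketch that derives the repository from the branch name directly, using the placeholder account IDs from the question:

```groovy
// Sketch: resolve the ECR repo from the branch name itself, so the release
// branch cannot fall through to the develop repository (IDs are placeholders).
String awsRegion = 'us-east-1'
String ecrRepo = (env.BRANCH_NAME == 'release' || env.BRANCH_NAME == 'master')
    ? "987654321098.dkr.ecr.${awsRegion}.amazonaws.com/abcde_master"
    : "123456789012.dkr.ecr.${awsRegion}.amazonaws.com/abcde_develop"
```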
I am declaring a String Parameter in Jenkins -
Name is SLACKTOKEN
Value is qwsaw2345
My Jenkinsfile has this script:
pipeline {
    agent { label 'trial' }
    stages {
        stage("Build Docker Image") {
            steps {
                sh 'docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN}'
            }
        }
    }
}
I tried like this, but it didn't work. Could you please let me know how I can pass a value from a Jenkins string parameter into a declarative pipeline script?
I have added the Password parameter in the job like below.
You can declare parameters inside the parameters directive; details are in the Jenkins documentation.
To pass the parameter, use params.SLACKTOKEN inside double quotes, not single quotes:
pipeline {
    agent { label 'trial' }
    parameters {
        password(name: 'SLACKTOKEN', defaultValue: '', description: 'Slack Token')
    }
    stages {
        stage("Build Docker Image") {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${params.SLACKTOKEN} ."
            }
        }
    }
}
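One caveat worth noting: interpolating a secret into a Groovy double-quoted string exposes it in the build log and the agent's process list. A hedged variant (a sketch, not the answer's original code) passes the value through the shell environment and lets the shell expand it instead:

```groovy
// Sketch: hand the token to the shell via the environment and expand it
// there (single-quoted sh string), so Groovy never interpolates the secret.
steps {
    withEnv(["SLACKTOKEN=${params.SLACKTOKEN}"]) {
        sh 'docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=$SLACKTOKEN .'
    }
}
```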
Where have you declared your variable?
There are a lot of options:
Example: use the environment section from the declarative syntax:
pipeline {
    agent {
        label 'trial'
    }
    environment {
        SLACKTOKEN = 'qwsaw2345'
    }
    stages {
        stage('Build') {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN} ."
            }
        }
    }
}
Our current Jenkins pipeline looks like this:
pipeline {
    agent {
        docker {
            label 'linux'
            image 'java:8'
            args '-v /home/tester/.gradle:/.gradle'
        }
    }
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Build') {
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic which runs before, so that I can ensure this directory exists before the docker image is run?
You can run a Prepare stage before the other stages and switch the agent after that:
pipeline {
    agent { label 'linux' } // slave where the docker agent needs to run
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Prepare') {
            steps {
                // prepare the host, e.g. make sure the cache directory exists
                sh 'mkdir -p /home/tester/.gradle'
            }
        }
        stage('Build') {
            agent {
                docker {
                    label 'linux' // should be the same as the slave label
                    image 'java:8'
                    args '-v /home/tester/.gradle:/.gradle'
                }
            }
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
Specifying a Docker Label
Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
How can I restrict the Jenkins pipeline Docker agent to a specific slave?
How can I teach my Jenkinsfile to log in via basic auth in this setup?
I'm using a custom docker image for my Jenkins build.
As described in the documentation here I defined a docker agent like so:
pipeline {
    agent {
        docker {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    stages {
        stage('Build') {
            steps {
                sh ...
            }
        }
        stage('Test') {
            steps {
                sh ...
            }
        }
        stage('Deploy') {
            steps {
                sh ...
            }
        }
    }
    post {
        always {
            echo 'Clean up workspace'
            deleteDir()
        }
    }
}
If I use the following agent setup:
pipeline {
    agent {
        docker.withRegistry('https://registry.az1', 'registry_login') {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
The execution of the pipeline fails with the following exception:
WorkflowScript: 3: Too many arguments for map key "withRegistry" @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
WorkflowScript: 3: Invalid agent type "withRegistry" specified. Must be one of [docker, dockerfile, label, any, none] @ line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
The problem is that the used registry requires a basic auth login. The registry runs behind a nginx reverse proxy using this configuration.
As specified in Using a custom registry, you can specify the credentials and registry url to use as such:
docker.withRegistry('https://registry.az1', 'credentials-id') {
    ...
}
You need to create a Jenkins credentials object which will contain the credentials for the repository and give it a name to replace credentials-id above.
Update:
For declarative pipelines, the syntax is as such:
agent {
    docker {
        image 'registry.az1:5043/maven-proto'
        registryUrl 'https://registry.az1'
        registryCredentialsId 'credentials-id'
        args '-v /var/jenkins_home/.m2:/root/.m2'
    }
}
I'm trying to create a declarative Jenkins pipeline script but having issues with simple variable declaration.
Here is my script:
pipeline {
    agent none
    stages {
        stage("first") {
            def foo = "foo" // fails with "WorkflowScript: 5: Expected a step @ line 5, column 13."
            sh "echo ${foo}"
        }
    }
}
However, I get this error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: Expected a step @ line 5, column 13.
def foo = "foo"
^
I'm on Jenkins 2.7.4 and Pipeline 2.4.
The Declarative model for Jenkins Pipelines has a restricted subset of syntax that it allows in the stage blocks - see the syntax guide for more info. You can bypass that restriction by wrapping your steps in a script { ... } block, but as a result, you'll lose validation of syntax, parameters, etc within the script block.
I think the error is not coming from the specified line but from the first three lines. Try this instead:
node {
    stage("first") {
        def foo = "foo"
        sh "echo ${foo}"
    }
}
I think you had some extra lines that are not valid...
From the declarative pipeline model documentation, it seems that you have to use an environment declaration block to declare your variables, e.g.:
pipeline {
    environment {
        FOO = "foo"
    }
    agent none
    stages {
        stage("first") {
            steps {
                sh "echo ${FOO}"
            }
        }
    }
}
I agree with @Pom12 and @abayer. To complete the answer, you need to add a script block.
Try something like this:
pipeline {
    agent any
    environment {
        ENV_NAME = "${env.BRANCH_NAME}"
    }
    // ----------------
    stages {
        stage('Build Container') {
            steps {
                echo 'Building Container..'
                script {
                    if (ENV_NAME == 'development') {
                        ENV_NAME = 'Development'
                    } else if (ENV_NAME == 'release') {
                        ENV_NAME = 'Production'
                    }
                }
                echo 'Building Branch: ' + env.BRANCH_NAME
                echo 'Build Number: ' + env.BUILD_NUMBER
                echo 'Building Environment: ' + ENV_NAME
                echo "Running your service with environment ${ENV_NAME} now"
            }
        }
    }
}
In Jenkins 2.138.3 there are two different types of pipelines.
Declarative and Scripted pipelines.
"Declarative pipelines are a new extension of the pipeline DSL. A declarative pipeline is basically a pipeline script with only one step: a pipeline step with arguments (called directives), and these directives must follow a specific syntax. The point of this new format is that it is more strict and therefore should be easier for those new to pipelines, and it allows for graphical editing and much more.
Scripted pipelines are the fallback for advanced requirements."
jenkins pipeline: agent vs node?
Here is an example of using environment and global variables in a declarative pipeline. From what I can tell, environment variables are static after they are set.
def browser = 'Unknown'
pipeline {
    agent any
    environment {
        // Use the Pipeline Utility Steps plugin to read information from pom.xml into env variables
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
    }
    stages {
        stage('Example') {
            steps {
                script {
                    browser = sh(returnStdout: true, script: 'echo Chrome')
                }
            }
        }
        stage('SNAPSHOT') {
            when {
                expression {
                    return !env.JOB_NAME.equals("PROD") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "SNAPSHOT"
                echo "${browser}"
            }
        }
        stage('RELEASE') {
            when {
                expression {
                    return !env.JOB_NAME.equals("TEST") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "RELEASE"
                echo "${browser}"
            }
        }
    } // end of stages
} // end of pipeline
You are using a declarative pipeline, which requires a script step to execute Groovy code. This is a huge difference compared to a scripted pipeline, where this is not necessary.
The official documentation says the following:
The script step takes a block of Scripted Pipeline and executes that
in the Declarative Pipeline.
pipeline {
    agent none
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}
You can define the variable globally, but when using it you must do so inside a script block:
def foo = "foo"
pipeline {
    agent none
    stages {
        stage("first") {
            steps {
                script {
                    sh "echo ${foo}"
                }
            }
        }
    }
}
Try this declarative pipeline; it works:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}