How can I teach my Jenkinsfile to log in via basic auth in this setup?
I'm using a custom docker image for my Jenkins build.
As described in the documentation here, I defined a docker agent like so:
pipeline {
    agent {
        docker {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr:'10'))
    }
    stages {
        stage ('Build') {
            steps {
                sh ...
            }
        }
        stage ('Test') {
            steps {
                sh ...
            }
        }
        stage ('Deploy') {
            steps {
                sh ...
            }
        }
    }
    post {
        always {
            echo 'Clean up workspace'
            deleteDir()
        }
    }
}
If I use the following agent setup:
pipeline {
    agent {
        docker.withRegistry('https://registry.az1', 'registry_login') {
            image 'registry.az1:5043/maven-proto'
            registryUrl 'https://registry.az1'
            args '-v /var/jenkins_home/.m2:/root/.m2'
        }
    }
The execution of the pipeline fails with the following exception:
WorkflowScript: 3: Too many arguments for map key "withRegistry" # line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
WorkflowScript: 3: Invalid agent type "withRegistry" specified. Must be one of [docker, dockerfile, label, any, none] # line 3, column 16.
docker.withRegistry('https://registry.az1', 'registry_login'){
^
The problem is that the registry in use requires a basic auth login. The registry runs behind an nginx reverse proxy using this configuration.
As described in Using a custom registry, you can specify the credentials and registry URL to use like this:
docker.withRegistry('https://registry.az1', 'credentials-id') {
    ...
}
You need to create a Jenkins credentials object containing the login for the registry, and use its ID in place of credentials-id above.
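In a scripted pipeline this could look like the following minimal sketch, assuming the image and registry from the question and a credentials entry with the ID registry_login (the mvn command is only a placeholder build step):

node {
    // 'registry_login' is assumed to be the ID of a username/password
    // credentials entry holding the registry's basic-auth login
    docker.withRegistry('https://registry.az1', 'registry_login') {
        docker.image('registry.az1:5043/maven-proto').inside('-v /var/jenkins_home/.m2:/root/.m2') {
            sh 'mvn -B clean verify' // placeholder build step
        }
    }
}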
Update:
For declarative pipelines, the syntax is as follows:
agent {
    docker {
        image 'registry.az1:5043/maven-proto'
        registryUrl 'https://registry.az1'
        registryCredentialsId 'credentials-id'
        args '-v /var/jenkins_home/.m2:/root/.m2'
    }
}
Related
I have the following two pieces of code; one works and the other doesn't. If the agent is declared inside the stage, the credentials are recognised, but if the agent is declared at the top/global level, the credentials don't work and the build ends in an error. Could anyone help me understand why that is and how it can be worked around?
Error:
pipeline
{
    environment {
        DOCKER_REGISTRY='xxxxxxxxx'
        DOCKER_CREDENTIAL='dcaas-r'
    }
    agent
    {
        docker {
            image "xxxxxxxxx/dotnet:latest"
            registryUrl env.DOCKER_REGISTRY
            registryCredentialsId env.DOCKER_CREDENTIAL
            reuseNode true
        }
    }
    stages
    {
        stage('Test')
        {
            steps
            {
                sh 'dotnet --version'
            }
        }
    }
}
Error response from daemon: Head "xxxxx/dotnet/manifests/latest": unknown: Authentication is required
Success:
pipeline
{
    agent any
    environment {
        DOCKER_REGISTRY='xxxxxxxxx'
        DOCKER_CREDENTIAL='dcaas-r'
    }
    stages
    {
        stage('Test')
        {
            agent
            {
                docker {
                    image "xxxxxxxxx/dotnet:latest"
                    registryUrl env.DOCKER_REGISTRY
                    registryCredentialsId env.DOCKER_CREDENTIAL
                    reuseNode true
                }
            }
            steps
            {
                sh 'dotnet --version'
            }
        }
    }
}
What could be done in order not to write the same agent block in every stage?
If you have a global agent directive, then you do not need to put the docker values in environment variables, because:
- they are not environment variables
- they are only used once
- they are not dynamic
It would look like this:
agent {
    docker {
        image 'xxxxxxxxx/dotnet:latest'
        registryUrl 'xxxxxxxxx'
        registryCredentialsId 'dcaas-r'
        reuseNode true
    }
}
Note that logs are only shown per stage, so global directives are not logged; you will not see the image retrieval logs for a docker agent declared in the global directive.
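Put together with the stage from the question, a minimal sketch of the whole pipeline (all values copied from the question) would be:

pipeline {
    agent {
        docker {
            image 'xxxxxxxxx/dotnet:latest'
            registryUrl 'xxxxxxxxx'
            registryCredentialsId 'dcaas-r'
            reuseNode true
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'dotnet --version'
            }
        }
    }
}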
I am declaring a String Parameter in Jenkins:
Name: SLACKTOKEN
Value: qwsaw2345
My Jenkinsfile has this script:
pipeline {
    agent { label 'trial' }
    stages {
        stage("Build Docker Image") {
            steps {
                sh 'docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN}'
            }
        }
    }
}
I tried it like this, but it didn't work. Could you please let me know how I can pass a value from a Jenkins string parameter to a Jenkins declarative pipeline file?
I have added the Password parameter to the job like below.
Inside the parameters directive you can define parameters; details are here.
To pass the parameter, use params.SLACKTOKEN inside double quotes, not single quotes:
pipeline {
    agent { label 'trial' }
    parameters {
        password(name: 'SLACKTOKEN', defaultValue: '', description: 'Slack Token')
    }
    stages {
        stage("Build Docker Image") {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${params.SLACKTOKEN} ."
            }
        }
    }
}
Where have you declared your variable?
There are a lot of options:
For example, use the environment section from the declarative syntax:
pipeline {
    agent {
        label 'trial'
    }
    environment {
        SLACKTOKEN = 'qwsaw2345'
    }
    stages {
        stage('Build') {
            steps {
                sh "docker build -t trial:latest --build-arg SLACK_SIGNING_SECRET=${SLACKTOKEN} ."
            }
        }
    }
}
Using Jenkins Declarative Pipeline, one can easily specify a Dockerfile, agent label, build args and run args as follows:
Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile {
        dir './path/to/dockerfile'
        label 'my-label'
        additionalBuildArgs '--build-arg version=1.0'
        args '-v /tmp:/tmp'
    }
}
I am trying to achieve the same using the scripted pipeline syntax. I found a way to pass the agent label and run args, but was unable to pass the directory and build args. Ideally, I would write something like this (label and run args are already working):
Jenkinsfile (Scripted Pipeline)
node ("my-label"){
docker.dockerfile(
dir: './path/to/dockerfile',
additionalBuildArgs:'--build-arg version=1.0'
).inside('-v /tmp:/tmp') {
\\ add stages here
}
}
The documentation shows how this can be done using an existing docker image, i.e., with the image directive in the pipeline.
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            //...
        }
    }
}
Jenkinsfile (Scripted Pipeline)
node {
    docker.image('node:7-alpine').inside {
        stage('Test') {
            //...
        }
    }
}
However, the scripted pipeline syntax for the dockerfile directive is missing.
The workaround I am using at the moment is building the image myself.
node ("my-label"){
def testImage = docker.build(
"test-image",
"./path/to/dockerfile",
"--build-arg v1.0"
)
testImage.inside('-v /tmp:/tmp') {
sh 'echo test'
}
}
Any help is much appreciated!
I personally put the docker CLI arguments before the image folder path, and I would specify the Dockerfile name with the -f argument.
Apart from that, you are doing it the right way. agent dockerfile builds a docker image the same way the docker.build step does, except that with docker.build you can also push your image to a registry.
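As a sketch of that ordering, with a hypothetical Dockerfile location inside the directory from the question:

node('my-label') {
    // build args first, then -f for the Dockerfile, then the build context directory last
    def testImage = docker.build(
        'test-image',
        '--build-arg version=1.0 -f ./path/to/dockerfile/Dockerfile ./path/to/dockerfile'
    )
    testImage.inside('-v /tmp:/tmp') {
        sh 'echo test'
    }
}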
Here is how I do it:
def dockerImage
// jenkins needs entrypoint of the image to be empty
def runArgs = '--entrypoint \'\''
pipeline {
    agent {
        label 'linux_x64'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '100', artifactNumToKeepStr: '20'))
        timestamps()
    }
    stages {
        stage('Build') {
            options { timeout(time: 30, unit: 'MINUTES') }
            steps {
                script {
                    def commit = checkout scm
                    // we set BRANCH_NAME to make when { branch } syntax work without multibranch job
                    env.BRANCH_NAME = commit.GIT_BRANCH.replace('origin/', '')
                    dockerImage = docker.build("myImage:${env.BUILD_ID}",
                        "--label \"GIT_COMMIT=${env.GIT_COMMIT}\""
                        + " --build-arg MY_ARG=myArg"
                        + " ."
                    )
                }
            }
        }
        stage('Push to docker repository') {
            when { branch 'master' }
            options { timeout(time: 5, unit: 'MINUTES') }
            steps {
                lock("${JOB_NAME}-Push") {
                    script {
                        docker.withRegistry('https://myrepo:5000', 'docker_registry') {
                            dockerImage.push('latest')
                        }
                    }
                    milestone 30
                }
            }
        }
    }
}
Here is a purely old-syntax scripted pipeline that solves the problem of checking out, building a docker image and pushing the image to a registry. It assumes the Jenkins project is type "Pipeline script from SCM".
I developed this pipeline for a server that requires proxies to reach the public internet. The Dockerfile accepts build arguments to configure its tools for proxies.
I think this has a pretty good structure #fredericrous :) but I'm new to pipelines, please help me improve!
def scmvars
def image
node {
    stage('clone') {
        // enabled by project type "Pipeline script from SCM"
        scmvars = checkout(scm)
        echo "git details: ${scmvars}"
    }
    stage('env') {
        // Jenkins provides no environment variable view
        sh 'printenv|sort'
    }
    stage('build') {
        // arg 1 is the image name and tag
        // arg 2 is docker build command line
        image = docker.build("com.mycompany.myproject/my-image:${env.BUILD_ID}",
            " --build-arg commit=${scmvars.GIT_COMMIT}"
            + " --build-arg http_proxy=${env.http_proxy}"
            + " --build-arg https_proxy=${env.https_proxy}"
            + " --build-arg no_proxy=${env.no_proxy}"
            + " path/to/dir/with/Dockerfile")
    }
    stage('push') {
        docker.withRegistry('https://registry.mycompany.com:8100',
                'jenkins-registry-credential-id') {
            image.push()
        }
    }
}
I am trying to choose a different docker agent from a private container registry based on a parameter in a Jenkins pipeline. For my example, let's say I have 'credsProd' and 'credsTest' saved in the credentials store. My attempt is as follows:
pipeline {
    parameters {
        choice(
            name: 'registrySelection',
            choices: ['TEST', 'PROD'],
            description: 'Is this a deployment to STAGING or PRODUCTION environment?'
        )
    }
    environment {
        URL_VAR = "${env.registrySelection == "PROD" ? "urlProd.azure.io" : "urlTest.azure.io"}"
        CREDS_VAR = "${env.registrySelection == "PROD" ? "credsProd" : "credsTest"}"
    }
    agent {
        docker {
            image "${env.URL_VAR}/image:tag"
            registryUrl "https://${env.URL_VAR}"
            registryCredentialsId "${env.CREDS_VAR}"
        }
    }
    stages {
        stage('test') {
            steps {
                echo "${env.URL_VAR}"
                echo "${env.CREDS_VAR}"
            }
        }
    }
}
I get this error:
Error response from daemon: Get https://null/v2/: dial tcp: lookup null on
If I hard code the registryUrl I get a similar issue with registryCredentialsId:
agent {
    docker {
        image "${env.URL_VAR}/image:tag"
        registryUrl "https://urlTest.azure.io"
        registryCredentialsId "${env.CREDS_VAR}"
    }
}
ERROR: Could not find credentials matching null
It is successful if I hardcode both registryUrl and registryCredentialsId.
agent {
    docker {
        image "${env.URL_VAR}/image:tag"
        registryUrl "https://urlTest.azure.io"
        registryCredentialsId "credsTest"
    }
}
It appears that the docker login stage of the agent{docker{}} cannot access/resolve environment variables.
Is there a way around this that does not involve code duplication? I manage changes with a multibranch pipeline, so ideally I do not want separate prod and test groovy files, or different sets of sequential steps in the same file.
Try running a scripted pipeline before declarative:
URL_VAR = null
CREDS_VAR = null
node('master') {
    stage('Choose') {
        URL_VAR = params.registrySelection == "PROD" ? "urlProd.azure.io" : "urlTest.azure.io"
        CREDS_VAR = params.registrySelection == "PROD" ? "credsProd" : "credsTest"
    }
}
pipeline {
    agent {
        docker {
            image "${URL_VAR}/image:tag"
            registryUrl "https://${URL_VAR}"
            registryCredentialsId "${CREDS_VAR}"
        }
    }
    ...
Alternatively, you can define two stages (with hard-coded url and creds) but run only one of them, using when in both.
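A rough sketch of that alternative, reusing the parameter, registry URLs, and credential IDs from the question (beforeAgent true keeps the skipped stage from pulling its image; the steps are placeholders):

pipeline {
    agent none
    parameters {
        choice(name: 'registrySelection', choices: ['TEST', 'PROD'], description: 'Target registry')
    }
    stages {
        stage('Run on PROD') {
            when {
                beforeAgent true
                expression { params.registrySelection == 'PROD' }
            }
            agent {
                docker {
                    image 'urlProd.azure.io/image:tag'
                    registryUrl 'https://urlProd.azure.io'
                    registryCredentialsId 'credsProd'
                }
            }
            steps {
                echo 'using the PROD registry image'
            }
        }
        stage('Run on TEST') {
            when {
                beforeAgent true
                expression { params.registrySelection == 'TEST' }
            }
            agent {
                docker {
                    image 'urlTest.azure.io/image:tag'
                    registryUrl 'https://urlTest.azure.io'
                    registryCredentialsId 'credsTest'
                }
            }
            steps {
                echo 'using the TEST registry image'
            }
        }
    }
}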
Our current Jenkins pipeline looks like this:
pipeline {
    agent {
        docker {
            label 'linux'
            image 'java:8'
            args '-v /home/tester/.gradle:/.gradle'
        }
    }
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Build') {
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic that runs beforehand, so that I can ensure this directory exists before the docker image is run?
You can run a prepare stage before all the other stages and switch the agent after that:
pipeline {
    agent { label 'linux' } // slave where the docker agent needs to run
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Prepare') {
            steps {
                // prepare the host, e.g. create the cache directory that gets mounted
                sh 'mkdir -p /home/tester/.gradle'
            }
        }
        stage('Build') {
            agent {
                docker {
                    label 'linux' // should be the same as the slave label
                    image 'java:8'
                    args '-v /home/tester/.gradle:/.gradle'
                }
            }
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
Specifying a Docker Label
Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
How can I restrict the Jenkins pipeline docker agent to a specific slave?
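Besides that global option, the docker agent block itself accepts a label, so a minimal per-pipeline sketch (label and image reused from the pipeline above) would be:

pipeline {
    agent {
        docker {
            label 'linux'   // the container is only started on nodes with this label
            image 'java:8'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'java -version'
            }
        }
    }
}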