My Jenkins pipeline code successfully checks out my private git repo from bitbucket using
checkout([$class: 'GitSCM',
userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
In the same software.git I have a Dockerfile that I want to use to build various build targets (present in software.git) on Kubernetes, and I am trying the approach below to pass Jenkins credentials into a Docker container that I want to build and run.
So, in the same Jenkins pipeline where I checked out software.git (code above), I try the following to get the Docker container built:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But still, from my Docker container, I am not able to successfully check out my private repo(s). What am I missing? Any comments or suggestions? Thanks.
Please read about Groovy String Interpolation.
In your expression
sh "cd ${WORKSPACE} && docker build -t ${some-name} \
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
you use double quotes, so Groovy interpolates all the variables in the string. This includes $FILE, which Groovy replaces with the value of a Groovy variable named FILE. You don't have any Groovy variable with that name (FILE is a shell variable, which is different from a Groovy one), so it gets replaced with an empty string.
To prevent interpolating that particular variable, you need to hint to Groovy not to interpolate it, by escaping the $ with \:
sh "cd ${WORKSPACE} && docker build -t ${some-name}\
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=\$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
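Alternatively (and what the Jenkins documentation recommends for secrets), use a single-quoted Groovy string so that Groovy interpolates nothing and the shell resolves every variable at run time. A sketch, keeping the question's placeholder name some-name:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    // Single quotes: $WORKSPACE and $FILE are expanded by the shell, not by Groovy
    sh 'cd $WORKSPACE && docker build -t some-name --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=$FILE --network=host -f software/tools/jenkins/some-name/Dockerfile .'
}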
Related
I need to use the host SSH key inside Docker. For this purpose I build the image like this:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev .
If I use the docker command directly it works fine, but if I use it inside the Jenkins pipeline script I get the error below:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 92: expecting '}', found 'ssh_prv_key' @ line 92, column 116.
ev:${GIT_COMMIT} "--build-arg ssh_prv_ke
This is the step I used in the Jenkins pipeline:
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev ."
And the Dockerfile looks like this:
ARG ssh_prv_key
# Authorize SSH Host
# Add the keys and set permissions
RUN mkdir -p /root/.ssh
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
I solved a similar issue as follows:
Jenkins pipeline
sh "cp ~/.ssh/id_rsa id_rsa"
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} -f dockerfile-dev ."
sh "rm id_rsa"
Dockerfile
# Some instructions...
ADD id_rsa id_rsa
# Now use the "id_rsa" file inside the image...
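Note that ADD bakes the key into an image layer, so it can be recovered from the image history even if a later instruction deletes the file. One way around this, as a rough sketch (the image names, repo URL, and clone step are illustrative, not from the answer above), is a multi-stage build where the key only ever exists in a throwaway builder stage:
# Builder stage: receives the key and fetches the private repo
FROM alpine/git AS builder
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa \
 && ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts \
 && git clone ssh://git@bitbucket.org/yourorg/private-repo.git /src

# Final stage: copies only the fetched sources; the id_rsa layers never ship
FROM alpine
COPY --from=builder /src /src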
This question is a follow-up to this question:
How to pass jenkins credentials into docker build command?
I am getting the SSH key file from the Jenkins credentials store in my Groovy pipeline and
passing it into the docker build command via --build-arg, so that I can check out and build artifacts from the private git repos from within my Docker container.
Credentials store id: cicd-user, which works as expected for checking out my private repo from my Groovy Jenkinsfile:
checkout([$class: 'GitSCM',
userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
I access it and try to pass the same to the docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following errors:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.mycomp.co: Permission denied (publickey)
fatal: Could not read from remote repository.
In the past I have passed the SSH private key as a --build-arg from outside by cat'ing it, like below:
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar?
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea what might be going wrong, or where I should be looking to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh agent (or, alternatively, the key) can be made available to specific build steps using the docker build command's --ssh flag. (Feature reference) It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1. If you forget to do this, then it seems like this configuration is ignored and the ssh connection will fail. Once that's set, the ssh agent set up in the pipeline below is what gets forwarded into the build.
A cut-down look at the pipeline:
pipeline {
agent {
// ...
}
environment {
// Necessary to enable Docker buildkit features such as --ssh
DOCKER_BUILDKIT = "1"
}
stages {
// other stages
stage('Docker Build') {
steps {
// Start ssh agent and add the private key(s) that will be needed in docker build
sshagent(['credentials-id-of-private-key']) {
// Make the default ssh agent (the one configured above) accessible in the build
sh 'docker build --ssh default .'
}
}
}
// other stages
}
}
In the Dockerfile it's necessary to explicitly give the lines that need it access to the ssh agent. This can be done by including --mount=type=ssh in the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount ssh agent for install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
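The withCredentials route mentioned at the start should also work, since --ssh accepts a private key file path as well as an agent socket (the key must not have a passphrase in that case). An untested sketch, reusing the placeholder credentials id from above:
withCredentials([sshUserPrivateKey(credentialsId: 'credentials-id-of-private-key', keyFileVariable: 'KEY_FILE')]) {
    // BuildKit also accepts a key file directly instead of the ssh agent socket
    sh 'DOCKER_BUILDKIT=1 docker build --ssh default=$KEY_FILE .'
}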
After spending a week on this, I found a somewhat reasonable way to do it.
Just add
# Declare the build arg so it reaches this RUN instruction
ARG GIT_ACCESS_TOKEN
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile, and the build will be able to install private packages as well.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) where you are building your image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
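One caveat: anything passed with --build-arg is recorded in the image metadata, so the token can be read back by anyone who can pull the image, for example:
docker history --no-trunc imageNameAndTag | grep GIT_ACCESS_TOKEN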
I've got Composer packages in our company's private repository on Bitbucket. To access them I need to use credentials stored in Jenkins. Currently the whole build is based on a Declarative Pipeline and a Dockerfile. To pass credentials to Composer I need those credentials in the build stage so I can pass them to the Dockerfile.
How can I achieve that?
I've tried:
// Jenkinsfile
agent {
dockerfile {
label 'mylabel'
filename '.docker/php/Dockerfile'
args '-v /net/jenkins-ex-work/workspace:/net/jenkins-ex-work/workspace'
additionalBuildArgs '--build-arg jenkins_usr=${JENKINS_CREDENTIALS_USR} --build-arg jenkins_credentials=${JENKINS_CREDENTIALS} --build-arg test_arg=test'
}
}
// Dockerfile
ARG jenkins_usr
ARG jenkins_credentials
ARG test_arg
But the args are empty.
TL;DR
Use jenkins withCredentials([sshUserPrivateKey()]) and echo the private key into id_rsa in the container.
EDITED: Removed the "run as root" step, as I think this caused issues. Instead a jenkins user is created inside the docker container with the same UID as the jenkins user that builds the docker container (no idea if that matters, but we need a user with a home dir so we can create ~/.ssh/id_rsa)
For those that suffered like me... My solution is below. It is NOT ideal as:
it risks exposing your private key in the build logs if you are not careful (the below is careful, but it's easy to forget). (Although with that in mind, it appears extracting jenkins credentials is extremely easy for anyone with naughty intentions?)
So use with caution...
In my (legacy) git project, a simple php app with internal git based composer dependencies, I have
Dockerfile.build
FROM php:7.4-alpine
# install git, openssh, composer... whatever u need here, then:
# create a jenkins user inside the docker image
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
&& mkdir -p /home/jenkins/.ssh \
&& touch /home/jenkins/.ssh/id_rsa \
&& chmod 600 /home/jenkins/.ssh/id_rsa \
&& chown -R jenkins:jenkins /home/jenkins/.ssh
USER jenkins
# I think only ONE of the below is needed, not sure.
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /home/jenkins/.ssh/config \
&& ssh-keyscan bitbucket.org >> /home/jenkins/.ssh/known_hosts
Then in my Jenkinsfile:
def sshKey = ''
pipeline {
agent any
environment {
userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
}
stages {
stage('Prep') {
steps {
script {
withCredentials([
sshUserPrivateKey(
credentialsId: 'bitbucket-key',
keyFileVariable: 'keyFile',
passphraseVariable: 'passphrase',
usernameVariable: 'username'
)
]) {
sshKey = readFile(keyFile).trim()
}
}
}
}
stage('Build') {
agent {
dockerfile {
filename 'Dockerfile.build'
additionalBuildArgs "--build-arg UID=${userId}"
}
}
steps {
// Turn off command trace for the next line, as we don't want to log the ssh key
sh '#!/bin/sh -e\n' + "echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"
// .. proceed with whatever else, like composer install, etc.
}
}
}
}
To be fair, I think some of the RUN commands in the Docker container aren't even necessary, or could be run from the Jenkinsfile? ¯\_(ツ)_/¯
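As an aside, an equivalent (untested) way to keep the key out of the log is to disable xtrace inside the step itself; only the set +x line then shows up in the console output:
sh "set +x; echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"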
There was a similar issue, supposedly fixed in PR 327, with pipeline-model-definition-1.3.9.
So start by checking the version of your plugin.
But heed also the Dockerfile warning:
It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.
Build-time variable values are visible to any user of the image with the docker history command.
Using buildkit with --secret is a better approach for that.
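For reference, a minimal sketch of that --secret approach (the secret id, key path, and clone URL are illustrative): the key is mounted for a single RUN step only and is never stored in a layer.
# syntax=docker/dockerfile:1
FROM alpine/git
# The secret appears under /run/secrets/<id> only while this RUN executes
RUN --mount=type=secret,id=ssh_key \
    GIT_SSH_COMMAND="ssh -i /run/secrets/ssh_key -o StrictHostKeyChecking=no" \
    git clone ssh://git@bitbucket.myorg.co:7999/A/software.git /src
Built with:
DOCKER_BUILDKIT=1 docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa .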
I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker container (from the Dockerfile).
How do I access the container so that I can use it in the next step? (Please look for the >> << markers in the Build step below.)
Thanks
stage('Build') {
// Install python libraries from requirements.txt (Check Dockerfile for more detail)
sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
sh "docker build \
--tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
--build-arg HTTPS_PROXY=${PIP_PROXY} ."
>> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<<
}
}
stage('Linting') {
sh '''
awd=$(pwd)
echo '===== Linting START ====='
for file in $(find . -name '*.py'); do
filename=$(basename $file)
if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]] ; then
echo "perform PEP8 lint (python pylint blah) for $filename"
cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
fi
done
echo '===== Linting END ====='
'''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume (see the "docker run -v" option) into your container, and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code which does the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script which is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs/<job name>/workspace); it is provided by Jenkins as a build variable.
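For this question's pipeline, build.sh might look like this (hypothetical; it reuses the pycodestyle linting from the question):
#!/bin/sh
set -e
# Runs inside the container; /workspace is the Jenkins workspace mounted via -v
cd /workspace
for file in $(find . -name '*test*.py'); do
    echo "perform PEP8 lint for $file"
    pycodestyle "$file"
done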
Please note: This solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins as Docker container, so you can share the Jenkins home/workspaces easily with the containers you run within your build jobs, like described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase
I have a project with DockerHub autobuilds running for each branch of the project. These builds are running nicely.
I would like to extend this autobuild configuration to build images for selected pull requests for these branches.
The following documentation indicates that a variable named DOCKER_TAG should be available in a DockerHub autobuild.
https://docs.docker.com/docker-hub/builds/advanced/#environment-variables-for-building-and-testing
I want to configure my auto build in the following manner.
If I attempt to build a tag named "pr1234" then my build will overlay the code from PR #1234 before running the build.
# Assign the env variable DOCKER_TAG to an arg of the same name
ARG DOCKER_TAG=${DOCKER_TAG}
...
# if DOCKER_TAG is in the format prNNNN then merge code for that PR on top of the current branch
RUN PRNUM=`echo ${DOCKER_TAG}| egrep "^pr([0-9]+)$" | sed -e s/pr//` && \
if [ -n "$PRNUM" ]; \
then echo "Merging $PRNUM"; \
curl -o /tmp/pr.patch -L https://github.com/DSpace/DSpace/pull/$PRNUM.diff; \
git apply /tmp/pr.patch; \
fi
If I run my build locally, I am able to set this variable and my docker build runs as I would like.
docker build -t dspace/dspace:pr1234 -f Dockerfile.jdk8-test --build-arg DOCKER_TAG=pr1234 .
When I attempt to run this from Dockerhub, the DOCKER_TAG variable appears to be blank, so I presume that DOCKER_TAG is not being set as I expected.
Can you suggest a way to access this variable or to accomplish an automated build for selected PR's?
I found a solution that seems to work. I created a build hook named hooks/build and passed the variable explicitly.
#!/bin/bash
docker build --build-arg DOCKER_TAG=$DOCKER_TAG -f $DOCKERFILE_PATH -t $IMAGE_NAME .
See https://docs.docker.com/docker-hub/builds/advanced/#custom-build-phase-hooks
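Per the linked docs, the hooks folder sits at the same directory level as the Dockerfile, so the repository layout would be roughly:
.
├── Dockerfile.jdk8-test
└── hooks
    └── build    # executable; Docker Hub runs it in place of the default docker build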