How to use git credentials from Jenkins pipeline input in a Dockerfile? - docker

I am loading my Jenkins pipeline script from SCM. I have to build a docker image and push it to GCR. Inside the docker image, I need to install packages from private git repositories. Here, I am trying to get the git username and password from a Jenkins input step, but I'm not sure how to use them in the Dockerfile to pull the git repo. These are my Jenkinsfile and Dockerfile in SCM. Any suggestions?
Jenkinsfile :
node {
    def app
    stage('Clone repository') {
        checkout scm
        def COMMITHASH = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
        echo("Commit hash: " + COMMITHASH.substring(0, 7))
    }
    stage('Build image') {
        timeout(time: 600, unit: 'SECONDS') {
            gitUser = input(
                id: 'gitUser',
                message: 'Please enter git credentials :',
                parameters: [
                    [$class: 'TextParameterDefinition', defaultValue: "", description: 'Git user name', name: 'username'],
                    [$class: 'PasswordParameterDefinition', defaultValue: "", description: 'Git password', name: 'password']
                ])
        }
        /* Build docker image */
        println('Build image stage')
        app = docker.build("testBuild")
    }
    stage('Push image') {
        /* Push image to GCR */
        docker.withRegistry('https://us.gcr.io', 'gcr:***') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}
Dockerfile :
# use a ubuntu 16.04 base image
FROM ubuntu:16.04
MAINTAINER "someuser@company.com"
# Set environment variables
ENV DEBIAN_FRONTEND noninteractive
ENV LC_ALL C.UTF-8
# Upgrade the system
RUN apt-get update && apt-get -y upgrade && apt-get install -y python-software-properties software-properties-common
# Install apache
RUN apt-get install -y apache2
#Enable apache modules
RUN a2enmod ssl
RUN a2enmod headers
RUN a2enmod rewrite
# Create directory for web application
RUN mkdir -p /var/www/myApp
# Expose ssl port
EXPOSE 443
I want to install my private Bitbucket repository in /var/www/myApp. Also, I want to avoid SSH authentication.

Do you have a requirement to always prompt for the credentials?
If not, you could store them in the Jenkins credential store and retrieve them via the withCredentials step from the Jenkins Credentials Binding plugin. That way they are masked in the logs, as long as you run the build inside the closure.
withCredentials([usernamePassword(
    credentialsId: 'privateGitCredentials',
    usernameVariable: 'USERNAME',
    passwordVariable: 'PASSWORD'
)]) {
    sh "docker build --build-arg username=$USERNAME --build-arg password=$PASSWORD -t <your tag> ."
}

You should pass your git username and password as build arguments to docker build and then reference these variables inside the Dockerfile via ARG.
Example Dockerfile -
FROM test
ARG username
ARG password
RUN git clone https://${username}:${password}@github.com/private-repo-name.git
Build command:
docker build --build-arg username=$git_username --build-arg password=$git_password -t <your tag> .

Related

How to correctly pass ssh key file from Jenkins credentials variable into to docker build command?

This question is a follow-up to this question:
How to pass jenkins credentials into docker build command?
I am getting the ssh key file from the Jenkins credential store in my groovy pipeline and
passing it into the docker build command via --build-arg so that I can check out and build artifacts from private git repos from within my docker container.
The credentials store id is cicd-user, which works as expected for checking out my private repos from my groovy Jenkinsfile:
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
I access it and try to pass it to the docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
in Dockerfile I do
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following errors:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.mycomp.co: Permission denied (publickey)
fatal: Could not read from remote repository.
In the past I have passed the ssh private key as a --build-arg from outside by cat'ing it, like below:
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar, i.e.
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea what might be going wrong, or where I should be looking to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh agent (or, alternatively, the key itself) can be made available to specific build steps using the docker build command's --ssh flag. (Feature reference) It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1; if you forget, the --ssh option is ignored and the ssh connection will fail. Once that's set, the ssh agent started by sshagent can be forwarded into the build.
A cut-down look at the pipeline:
pipeline {
    agent {
        // ...
    }
    environment {
        // Necessary to enable Docker buildkit features such as --ssh
        DOCKER_BUILDKIT = "1"
    }
    stages {
        // other stages
        stage('Docker Build') {
            steps {
                // Start ssh agent and add the private key(s) that will be needed in docker build
                sshagent(['credentials-id-of-private-key']) {
                    // Make the default ssh agent (the one configured above) accessible in the build
                    sh 'docker build --ssh default .'
                }
            }
        }
        // other stages
    }
}
In the Dockerfile it's necessary to explicitly grant access to the ssh agent to the lines that need it. This is done by adding --mount=type=ssh to the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount ssh agent for install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
After spending a week on this, I found a reasonably simple way to do it.
Just add
ARG GIT_ACCESS_TOKEN
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile, and any private packages that need to be installed will be fetched over HTTPS with the token instead of SSH.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) when you build the image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
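For context, here is a minimal sketch of how those lines could sit in a full Dockerfile; the node:14 base image and the npm install step are my assumptions (to mirror the npm example further up), not part of this answer:
FROM node:14
ARG GIT_ACCESS_TOKEN
# Rewrite SSH GitHub URLs to HTTPS with the token so private dependencies resolve
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
Keep in mind that a value passed with --build-arg is stored in the image metadata and can be read back with docker history, which is exactly the caveat quoted further down; a BuildKit secret avoids that.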

Pass Jenkins credentials to Docker build for Composer usage

I've got composer packages in our company's private repository on Bitbucket. To access them I need to use credentials stored in Jenkins. Currently the whole build is based on a Declarative Pipeline and a Dockerfile. To pass credentials to Composer I need those credentials in the build stage so I can pass them to the Dockerfile.
How can I achieve it?
I've tried:
// Jenkinsfile
agent {
    dockerfile {
        label 'mylabel'
        filename '.docker/php/Dockerfile'
        args '-v /net/jenkins-ex-work/workspace:/net/jenkins-ex-work/workspace'
        additionalBuildArgs '--build-arg jenkins_usr=${JENKINS_CREDENTIALS_USR} --build-arg jenkins_credentials=${JENKINS_CREDENTIALS} --build-arg test_arg=test'
    }
}
// Dockerfile
ARG jenkins_usr
ARG jenkins_credentials
ARG test_arg
But the args are empty.
TL;DR
Use the Jenkins withCredentials([sshUserPrivateKey()]) step and echo the private key into id_rsa in the container.
EDITED: Removed the "run as root" step, as I think this caused issues. Instead, a jenkins user is created inside the docker container with the same UID as the jenkins user that builds the container (no idea if that matters, but we need a user with a home directory so we can create ~/.ssh/id_rsa).
For those that suffered like me... my solution is below. It is NOT ideal, as:
it risks exposing your private key in the build logs if you are not careful (the code below is careful, but it's easy to forget). (Although with that in mind, it appears extracting Jenkins credentials is extremely easy for anyone with naughty intentions?)
So use with caution...
In my (legacy) git project, a simple PHP app with internal git-based composer dependencies, I have:
Dockerfile.build
FROM php:7.4-alpine
# install git, openssh, composer... whatever u need here, then:
# create a jenkins user inside the docker image
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
    && mkdir -p /home/jenkins/.ssh \
    && touch /home/jenkins/.ssh/id_rsa \
    && chmod 600 /home/jenkins/.ssh/id_rsa \
    && chown -R jenkins:jenkins /home/jenkins/.ssh
USER jenkins
# I think only ONE of the below is needed, not sure.
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /home/jenkins/.ssh/config \
    && ssh-keyscan bitbucket.org >> /home/jenkins/.ssh/known_hosts
Then in my Jenkinsfile:
def sshKey = ''
pipeline {
    agent any
    environment {
        userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
    }
    stages {
        stage('Prep') {
            steps {
                script {
                    withCredentials([
                        sshUserPrivateKey(
                            credentialsId: 'bitbucket-key',
                            keyFileVariable: 'keyFile',
                            passphraseVariable: 'passphrase',
                            usernameVariable: 'username'
                        )
                    ]) {
                        sshKey = readFile(keyFile).trim()
                    }
                }
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                }
            }
            steps {
                // Turn off command trace for next line, as we don't want to log the ssh key
                sh '#!/bin/sh -e\n' + "echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"
                // .. proceed with whatever else, like composer install, etc.
            }
        }
    }
}
To be fair, I think some of the RUN commands in the docker container aren't even necessary, or could be run from the Jenkinsfile? ¯\_(ツ)_/¯
There was a similar issue, supposedly fixed in PR 327 with pipeline-model-definition-1.3.9, so start by checking the version of your plugin.
But also heed the Dockerfile warning:
It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.
Build-time variable values are visible to any user of the image with the docker history command.
Using buildkit with --secret is a better approach for that.
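To make that last suggestion concrete, here is a minimal sketch of the --secret approach; it is not from the quoted documentation, and the secret id gittoken, the alpine base image, and the repository URL are all placeholders of mine:
# syntax=docker/dockerfile:1
FROM alpine:3.18
RUN apk add --no-cache git
# The token is mounted at /run/secrets/gittoken for this RUN step only;
# it never ends up in an image layer or in docker history.
RUN --mount=type=secret,id=gittoken \
    git clone https://$(cat /run/secrets/gittoken)@github.com/your-org/your-private-repo.git /src
Build it with:
DOCKER_BUILDKIT=1 docker build --secret id=gittoken,src=/path/to/token-file -t myimage .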

Unable to run docker build inside Jenkinsfile

When running docker build inside my Jenkinsfile, i.e.
docker build -f ./Dockerfile -t datastore:1.0.1 .
I am getting an error like:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This is my Jenkinsfile
#!/usr/bin/groovy
node {
    checkout scm
    // Here are the important data for this build
    def DOCKER_REGISTRY = "XXX"
    def DATASTORE = "datastore"
    def DOCKER_TAG_DATASTORE = "${DOCKER_REGISTRY}/XXX"
    def APP_VERSION = "1.0.1"
    stage('Build') {
        dockerInside('XXX/db-server:1.0.114', '') {
            echo "Setting up artifactory location to push docker image ${DATASTORE}:${APP_VERSION}"
            sh "docker build -f ./Dockerfile -t ${DATASTORE}:${APP_VERSION} ."
            sh "docker tag ${DATASTORE}:${APP_VERSION} ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            withCredentials([
                usernamePassword(
                    credentialsId: CORE_IZ_USER,
                    usernameVariable: 'LOG',
                    passwordVariable: 'PAS'
                )]) {
                // Doing some upload commands (see artifactory or docker upload commands from Jenkins)
                sh "docker push ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
                echo "Push to ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            }
        }
    }
    stage('Docker image creation') {
        echo "Docker image creation"
    }
    stage('Docker image upload') {
        echo "Docker image upload"
    }
}
This is my Dockerfile
FROM XXX/rhel:7.5
USER root
RUN yum -y install gcc && yum install -y git && yum install -y docker
# Install Go
RUN curl -O -s https://dl.google.com/go/go1.10.2.linux-amd64.tar.gz
RUN tar -xzf go1.10.2.linux-amd64.tar.gz -C /usr/local
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /gopath
ENV GOBIN /usr/local/go/bin
WORKDIR /gopath/src/XXX
RUN mkdir -p /gopath/src/XXX
ADD . /gopath/src/XXX
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -tags netgo -installsuffix netgo -o ./db-server /gopath/src/XXX/datastore/main.go
ADD ./db-server /db-server
ENTRYPOINT ["/db-server"]

Run sudo within Jenkins dockerfile pipeline

I am setting up a Jenkins pipeline (declarative script) using a Docker container agent built from a Dockerfile. I want one of the build stages to fetch dependent packages (Debian packages, from Artifactory, in my case) and then install them within the Docker container. Installing those packages (using dpkg, in my case) needs super-user permission, and thus sudo. How do I set up the pipeline and/or Dockerfile to enable that?
At present, my Jenkinsfile is somewhat like this:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkins'
        }
    }
    stages {
        stage('Set up dependencies') {
            steps {
                sh 'rm -rf dependent-packages && mkdir dependent-packages'
                script { // Fetch packages from Artifactory
                    def packageserver = Artifactory.server 'deb-repo-srv'
                    def downloadSpec = ...
                    packageserver.download(downloadSpec)
                }
                sh 'sudo dpkg -i -R dependent-packages/'
            }
        }
        ...
    }
}
And my Dockerfile is like this:
# Set up the O/S environment
FROM debian:9
# Add the build and test tools
RUN apt-get -y update && apt-get -y install \
    cmake \
    doxygen \
    g++ \
    libcppunit-dev \
    make \
    libxerces-c-dev
Because I am using a Dockerfile agent, simply adding the jenkins user to the sudoers file of the Jenkins server will not work.
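This question also arrives without an answer in this excerpt. One commonly used workaround, offered as an assumption rather than a confirmed fix, is to bake sudo plus a passwordless sudoers entry for the build user into Dockerfile.jenkins; the UID 1000 below is a placeholder that has to match the UID Jenkins runs the container with:
# Appended to Dockerfile.jenkins (sketch; UID is an assumption)
RUN apt-get -y update && apt-get -y install sudo \
    && useradd -m -u 1000 jenkins \
    && echo 'jenkins ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/jenkins \
    && chmod 0440 /etc/sudoers.d/jenkins
Alternatively, the dockerfile agent block accepts args '-u root' to run the whole container as root, which removes the need for sudo at the cost of running every step with root privileges.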

How to push the artifacts into docker image in Jenkins

How can I create a docker image containing the artifacts using Dockerfile instructions? I am using "Build inside a Docker container" in my Jenkins job.
This is the instruction in the Dockerfile to install OpenJDK 8:
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean
Then control needs to return to the Jenkins job to perform the build process.
So in the build process in the Jenkins job, "Execute shell" commands are executed and the artifacts are created.
It also has some post-build actions to run the JUnit test cases and the coverage report.
In the end, I need the Dockerfile to run an instruction that copies the artifacts into the image, something like:
ADD sourcefile destinationfile
Please suggest how to write the Docker instructions so that control is handed to the Jenkins job and returns after the build process is done.
You can use the Docker Pipeline plugin to do this (see the docker global variable):
node("docker") {
docker.withRegistry('<<your-docker-registry>>', '<<your-docker-registry-credentials-id>>') {
git url: "<<your-git-repo-url>>", credentialsId: '<<your-git-credentials-id>>'
sh "git rev-parse HEAD > .git/commit-id"
def commit_id = readFile('.git/commit-id').trim()
println commit_id
def app;
stage("build") {
app = docker.build "your-project-name"
}
stage("publish") {
app.push 'master'
app.push "${commit_id}"
}
}
}
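The snippet above covers building and pushing the image, but not the step the question actually asks about: copying the build artifacts into the image. That happens in the Dockerfile that docker.build picks up from the workspace; a minimal sketch, where the openjdk:8-jre base image and the build/libs/app.jar path are my assumptions, could look like:
FROM openjdk:8-jre
# Copy the artifact produced by the earlier "Execute shell" / ant build steps (path is an assumption)
COPY build/libs/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
Because docker.build uses the workspace as its build context, anything the earlier Jenkins build steps wrote there is available to COPY or ADD.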
