Passing env var from docker run cmd inside Jenkinsfile to Dockerfile - docker

I am trying to pass a variable from a Jenkinsfile to a Dockerfile, so I run a docker run command inside the Jenkinsfile:
steps {
checkout scm
sh '''
echo ${GIT_BRANCH}
mkdir -p `pwd`/build_target
docker build -t android_build -f docker/Dockerfile.android .
docker run --env GIT_BRANCH=${GIT_BRANCH} android_build
ls -la `pwd`/build_target/*
'''
}
And I try to use the env variable inside the Dockerfile:
FROM openjdk:8u212-jdk
USER root
ENV GIT_BRANCH $GIT_BRANCH
RUN echo ${GIT_BRANCH}
RUN if [ "GIT_BRANCH" = "develop" ] ; then echo 'develop' ; else if [ "GIT_BRANCH" = "master" ] ; then echo 'aster' ; fi
But unfortunately it doesn't work and makes the pipeline crash, as it can't get the env var.
What is wrong with my code?

I ended up fixing it by passing the variable at build time with the --build-arg flag (docker run --env only sets the variable at run time, while the ENV and RUN instructions in the Dockerfile need it at build time); then I was able to use it inside the Dockerfile. Hope it helps someone someday.
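For reference, the fix looks roughly like this (a sketch reusing the names from the question; note the RUN test also needs a $ before the variable name). In the Jenkinsfile step:
docker build --build-arg GIT_BRANCH=${GIT_BRANCH} -t android_build -f docker/Dockerfile.android .
And in the Dockerfile:
FROM openjdk:8u212-jdk
USER root
# build-time argument, supplied by --build-arg
ARG GIT_BRANCH
# optionally persist it as an env var for run time too
ENV GIT_BRANCH=${GIT_BRANCH}
RUN if [ "$GIT_BRANCH" = "develop" ]; then echo 'develop'; elif [ "$GIT_BRANCH" = "master" ]; then echo 'master'; fi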

Related

How to pass Jenkins credentials into docker build command?

My Jenkins pipeline code successfully checks out my private git repo from bitbucket using
checkout([$class: 'GitSCM',
userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]])
In the same software.git I have a Dockerfile that I want to use to build various build targets present in software.git on Kubernetes, and I am trying the below to pass Jenkins credentials into a Docker container that I want to build and run.
So in the same Jenkins pipeline, after checking out software.git (the code above), I try the following to get the Docker container built:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But still, from my Docker container I am not able to successfully check out my private repo(s). What am I missing? Any comments, suggestions? Thanks.
Please read about Groovy String Interpolation.
In your expression
sh "cd ${WORKSPACE} && docker build -t ${some-name} \
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
you use double quotes, so Groovy interpolates all the variables in the string. This includes $FILE, which Groovy replaces with the value of a Groovy variable named FILE. You don't have any Groovy variable with that name (only a shell variable, which is a different thing), so it gets replaced with an empty string.
To prevent this, hint to Groovy that this particular variable should not be interpolated, by escaping the $ with \:
sh "cd ${WORKSPACE} && docker build -t ${some-name}\
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=\$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
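To see the difference at a glance, an illustrative pair:
// Groovy interpolates this one itself, before the shell ever runs
sh "echo $FILE"
// the escaped \$ reaches the shell untouched, so the shell expands the FILE environment variable
sh "echo \$FILE"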

docker - pass arguments to the script during build

I would like to pass an argument (from the docker command) to the shell script inside the Dockerfile.
This is the docker command line.
docker build --file=DockerfileTest --build-arg docker_image=PX-release-migration --tag=test-image:latest --rm=true .
This is a script that is called inside the Dockerfile.
#!/bin/sh -e
image_name=$1
echo "docker image is $image_name"
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
echo "This is development"
cp src/main/resources/application-dev.properties src/main/resources/application.properties
elif [[ $image_name == *"-preprod-"* ]]; then
echo "This is preprod"
cp src/main/resources/application-stg.properties src/main/resources/application.properties
elif [[ $image_name == *"-release-"* ]]; then
echo "This is production"
cp src/main/resources/application-prod.properties src/main/resources/application.properties
fi
This is the Dockerfile.
ARG spring_env=local
ARG docker_image=-local-
FROM maven:3.6.1-jdk-8
COPY . /apps/demo
WORKDIR /apps/demo
RUN chmod +x /apps/demo/initialize_env.sh
RUN ./initialize_env.sh $docker_image
RUN echo "spring_env is ${spring_env}"
So basically, I would like to use a different Spring application properties file during the build depending on the docker_image name. If the Docker image name contains 'release', I would like to package application-prod.properties during the build.
This is the error message that I am getting.
Step 1/8 : ARG spring_env=local
Step 2/8 : ARG docker_image=-local-
Step 3/8 : FROM maven:3.6.1-jdk-8
---> 4c81be38db66
Step 4/8 : COPY . /apps/demo
---> 41439197c465
Step 5/8 : WORKDIR /apps/demo
---> Running in 56bd408c2eb1
Removing intermediate container 56bd408c2eb1
---> 4c4025bf5f64
Step 6/8 : RUN chmod +x /apps/demo/initialize_env.sh
---> Running in 18dc3a5c1a54
Removing intermediate container 18dc3a5c1a54
---> 60d2037a0209
Step 7/8 : RUN ./initialize_env.sh $docker_image
---> Running in 2e049b2cf630
docker image is
./initialize_env.sh: 5: ./initialize_env.sh: Syntax error: word unexpected (expecting ")")
The command '/bin/sh -c ./initialize_env.sh $docker_image' returned a non-zero code: 2
When I execute the script separately, it works, but it doesn't inside the Docker container.
Tip: Use ShellCheck to check scripts for syntax errors.
#!/bin/sh -e
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
[[ is bash syntax, but your script is declared to use plain sh. It works on your machine presumably because sh is really symlinked to bash, but inside the container that's not the case: maven:3.6.1-jdk-8 is based on debian:stretch, which uses dash instead of bash.
Change the shebang line. You can also delete the parentheses; they're superfluous.
#!/bin/bash -e
if [[ $image_name == '' || $image_name == *"-dev-"* ]]; then
You could also use a case block to simplify the repetitive checks.
case "$image_name" in
''|*-dev-*)
echo "This is development"
cp src/main/resources/application-dev.properties src/main/resources/application.properties
;;
*-preprod-*)
echo "This is preprod"
cp src/main/resources/application-stg.properties src/main/resources/application.properties
;;
*-release-*)
echo "This is production"
cp src/main/resources/application-prod.properties src/main/resources/application.properties
;;
esac
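One more thing worth noting, since the build output above prints "docker image is " with an empty value: an ARG declared before FROM goes out of scope once the build stage starts, so docker_image must be re-declared after FROM for the RUN instructions to see it. A sketch:
ARG docker_image=-local-
FROM maven:3.6.1-jdk-8
# re-declare inside the build stage; it picks up the value passed with --build-arg
ARG docker_image
COPY . /apps/demo
WORKDIR /apps/demo
RUN chmod +x /apps/demo/initialize_env.sh
RUN ./initialize_env.sh "$docker_image"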

How to set environment variable in docker container system wide at container start for all users?

I need to set some environment variables for all users and processes inside a Docker container. They should be set at container start, not in the Dockerfile, because they depend on the running environment.
So the simple Dockerfile
FROM ubuntu
RUN echo 'export TEST=test' >> '/root/.bashrc'
works well for interactive sessions
docker run -ti test bash
then
env
and there is TEST=test
but when I run docker run -ti test env, there is no TEST.
I was trying
RUN echo 'export TEST=test' >> '/etc/environment'
RUN echo 'TEST="test"' >> '/etc/environment'
RUN echo 'export TEST=test' >> /etc/profile.d/1.sh
ENTRYPOINT export TEST=test
Nothing helps.
Why I need this: I have an http_proxy variable inside the container, automatically set by Docker, and I need to set other variables based on it (e.g. JAVA_OPT), system-wide, for all users and processes, in the running environment, not at build time.
I would create a script which would be an entrypoint:
#!/bin/bash
# if env variable is not set, set it
if [ -z "$VAR" ];
then
# env variable is not set
export VAR=$(a command that gives the var value);
fi
# pass the arguments received by the entrypoint.sh
# to /bin/bash with command (-c) option
/bin/bash -c "$@"
And in the Dockerfile I would set the entrypoint (exec form, so the arguments given to docker run are passed through to the script):
ENTRYPOINT ["/entrypoint.sh"]
Now every time I run docker run -it <image> <any command>, my script is used as the entrypoint, so it always runs before the command and then passes the arguments to the right place, which is /bin/bash.
Improvements
The above script is enough if you always use the entrypoint with arguments; otherwise "$@" will be empty and you will get the error /bin/bash: -c: option requires an argument. An easy fix is an if statement:
if [ $# -gt 0 ];
then
/bin/bash -c "$@";
fi
Setting a default parameter in the ENTRYPOINT would also solve this issue: in the Dockerfile, pass the parameter in the ENTRYPOINT.
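Putting this together for the http_proxy/JAVA_OPT case from the question (using the conventional JAVA_OPTS name), the entrypoint might look like this; a sketch that assumes the exec-form ENTRYPOINT above and a proxy URL of the form http://host:port:
#!/bin/bash
# entrypoint.sh - derive JAVA_OPTS from the http_proxy variable Docker sets at run time
if [ -n "$http_proxy" ]; then
    hostport="${http_proxy#*://}"   # strip the scheme
    hostport="${hostport%/}"        # strip a trailing slash, if any
    export JAVA_OPTS="-Dhttp.proxyHost=${hostport%%:*} -Dhttp.proxyPort=${hostport##*:}"
fi
exec "$@"   # hand off to the requested command
With that in place, docker run -ti test env shows JAVA_OPTS for any command started through the entrypoint.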

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker container (from a Dockerfile).
How do I access the container so that I can use it in the next step? (Please look for the >> << markers in the Build step below.)
Thanks
stage('Build') {
// Install python libraries from requirements.txt (Check Dockerfile for more detail)
sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
sh "docker build \
--tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
--build-arg HTTPS_PROXY=${PIP_PROXY} ."
>> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<<
}
}
stage('Linting') {
sh '''
awd=$(pwd)
echo '===== Linting START ====='
for file in $(find . -name '*.py'); do
filename=$(basename $file)
if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]] ; then
echo "perform PEP8 lint (python pylint blah) for $filename"
cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
fi
done
echo '===== Linting END ====='
'''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume (see the docker run -v option) into your container, and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code which does the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script, which is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs/<job name>/workspace); it is provided by Jenkins as a build variable.
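As an illustration, build.sh could be as small as this (hypothetical; it reuses the pycodestyle check from the Linting stage in the question):
#!/bin/sh
# build.sh - the "next step", run inside the container against the mounted workspace
set -e
cd /workspace
echo '===== Linting START ====='
find . -name '*.py' -exec pycodestyle {} +
echo '===== Linting END ====='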
Please note: This solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins itself as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase

Conditional logic in Dockerfile, using --build-arg

Say I have this:
ARG my_user="root" # my_user => default is "root"
USER $my_user
ENV USER=$my_user
All good so far, but now we get here:
ENV HOME="/root"
is there a way to do something like this:
ENV HOME $my_user === "root"? "/root" : "/home/$my_user"
Obviously, that's the wrong syntax.
The only solution I can think of is to just use two --build-args, something like this:
docker build -t zoom \
--build-arg my_user="foo" \
--build-arg my_home="/home/foo" \
.
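For completeness, the Dockerfile side consuming those two build args might look like this (a sketch assembled from the snippets above):
# defaults match the single-arg version
ARG my_user="root"
ARG my_home="/root"
USER $my_user
ENV USER=$my_user
ENV HOME=$my_home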
Unfortunately you can't do this directly
https://forums.docker.com/t/how-do-i-send-runs-output-to-env-in-dockerfile/16106/3
So you have two alternatives
Use a shell script at start
You can use a shell script at the start
CMD /start.sh
And in your start.sh you can have that logic
if [ $X == "Y" ]; then
export X=Y
else
export X=Z
fi
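Applied to the HOME example from this question, start.sh might be (a sketch; USER is the env var set from the build arg above):
#!/bin/sh
# derive HOME from USER at container start
if [ "$USER" = "root" ]; then
    export HOME=/root
else
    export HOME="/home/$USER"
fi
# ...then start the actual process here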
Create a profile environment variable
FROM alpine
RUN echo "export NAME=TARUN" > /etc/profile.d/myenv.sh
SHELL ["/bin/sh", "-lc"]
CMD env
And then when you run it:
$ docker run test
HOSTNAME=d98d44fa1dc9
SHLVL=1
HOME=/root
PAGER=less
PS1=\h:\w\$
NAME=TARUN
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
CHARSET=UTF-8
Note: The SHELL ["/bin/sh", "-lc"] is quite important here, otherwise the profile will not be loaded.
Note 2: Instead of RUN echo "export NAME=TARUN" > /etc/profile.d/myenv.sh you can also do COPY myenv.sh /etc/profile.d/myenv.sh and have the file present in your build context.
