Pass arguments to a script during docker build

I would like to pass an argument from the docker build command to a shell script that is run inside the Dockerfile.
This is the docker command line.
docker build --file=DockerfileTest --build-arg docker_image=PX-release-migration --tag=test-image:latest --rm=true .
This is a script that is called inside the Dockerfile.
#!/bin/sh -e
image_name=$1
echo "docker image is $image_name"
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
    echo "This is development"
    cp src/main/resources/application-dev.properties src/main/resources/application.properties
elif [[ $image_name == *"-preprod-"* ]]; then
    echo "This is preprod"
    cp src/main/resources/application-stg.properties src/main/resources/application.properties
elif [[ $image_name == *"-release-"* ]]; then
    echo "This is production"
    cp src/main/resources/application-prod.properties src/main/resources/application.properties
fi
This is the Dockerfile.
ARG spring_env=local
ARG docker_image=-local-
FROM maven:3.6.1-jdk-8
COPY . /apps/demo
WORKDIR /apps/demo
RUN chmod +x /apps/demo/initialize_env.sh
RUN ./initialize_env.sh $docker_image
RUN echo "spring_env is ${spring_env}"
So basically, I would like to use a different Spring application properties file during the build, depending on the docker_image name. If the image name contains '-release-', I would like to package application-prod.properties during the build.
This is the error message that I am getting.
Step 1/8 : ARG spring_env=local
Step 2/8 : ARG docker_image=-local-
Step 3/8 : FROM maven:3.6.1-jdk-8
---> 4c81be38db66
Step 4/8 : COPY . /apps/demo
---> 41439197c465
Step 5/8 : WORKDIR /apps/demo
---> Running in 56bd408c2eb1
Removing intermediate container 56bd408c2eb1
---> 4c4025bf5f64
Step 6/8 : RUN chmod +x /apps/demo/initialize_env.sh
---> Running in 18dc3a5c1a54
Removing intermediate container 18dc3a5c1a54
---> 60d2037a0209
Step 7/8 : RUN ./initialize_env.sh $docker_image
---> Running in 2e049b2cf630
docker image is
./initialize_env.sh: 5: ./initialize_env.sh: Syntax error: word unexpected (expecting ")")
The command '/bin/sh -c ./initialize_env.sh $docker_image' returned a non-zero code: 2
When I execute the script separately, it works, but it doesn't inside the docker container.

Tip: Use ShellCheck to check scripts for syntax errors.
#!/bin/sh -e
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
[[ is bash syntax, but your script is declared to run under plain sh. It presumably works on your machine because sh there is symlinked to bash, but inside the container that's not the case: maven:3.6.1-jdk-8 is based on debian:stretch, where sh is dash, not bash.
Change the shebang line. You can also delete the parentheses; they're superfluous.
#!/bin/bash -e
if [[ $image_name == '' || $image_name == *"-dev-"* ]]; then
You could also use a case block to simplify the repetitive checks.
case "$image_name" in
''|*-dev-*)
echo "This is development"
cp src/main/resources/application-dev.properties src/main/resources/application.properties
;;
*-preprod-*)
echo "This is preprod"
cp src/main/resources/application-stg.properties src/main/resources/application.properties
;;
*-release-*)
echo "This is production"
cp src/main/resources/application-prod.properties src/main/resources/application.properties
;;
esac
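Separately, note that your build output shows an empty image name ("docker image is"): an ARG declared before FROM is only in scope for the FROM line itself, so $docker_image is empty in the RUN steps. A minimal sketch of that fix, re-declaring the ARG inside the build stage:
FROM maven:3.6.1-jdk-8
# Re-declare the ARG after FROM so it is visible to RUN instructions;
# ARGs declared before FROM are only usable in FROM lines.
ARG docker_image=-local-
COPY . /apps/demo
WORKDIR /apps/demo
RUN chmod +x /apps/demo/initialize_env.sh \
 && ./initialize_env.sh "$docker_image"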

Related

Run sed and store result to new variable in dockerfile

How can I run a sed command in a Dockerfile and save the result to a new variable?
The sed should replace the last occurrence of '.' with '_'.
Example: JOB_NAME_WITH_VERSION = test_git_0.1, and the wanted result is ZIP_FILE_NAME = test_git_0_1
Dockerfile:
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && export ZIP_FILE_NAME
RUN echo "Zip file Name found : $ZIP_FILE_NAME"
I tried this in my Dockerfile, but the result is empty:
Zip file Name found :
The issue here is that every RUN command results in a new layer, so whatever shell variable was declared in previous layers is subsequently lost.
Compare this:
FROM ubuntu
RUN JOB="FOOBAR"
RUN echo "${JOB}"
$ docker build .
...
Step 3/3 : RUN echo "${JOB}"
---> Running in c4b7d1632c7e
...
to this:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}"
$ docker build .
...
Step 2/2 : RUN JOB="FOOBAR" && echo "${JOB}"
---> Running in c11049d1687f
FOOBAR
...
So, as a workaround, if using a single RUN command is not an option for whatever reason, write the variable to disk and read it back when needed, e.g.:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}" > /tmp/job_var
RUN cat /tmp/job_var
$ docker build .
...
Step 3/3 : RUN cat /tmp/job_var
---> Running in a346c30c2cd5
FOOBAR
...
Each RUN statement in a Dockerfile runs in a separate shell, so once the statement is done, all its shell variables are lost, even if they were exported.
To do what you want to do, you can combine your RUN statements like this
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && \
export ZIP_FILE_NAME && \
echo "Zip file Name found : $ZIP_FILE_NAME"
As your variable is lost once the RUN statement finishes, it also won't be available in your container when it runs. To have an environment variable available there, you need to use the ENV statement.
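ENV itself cannot execute a command, so if the value has to be computed, one option (a sketch, not from the original answers) is to compute it on the host and pass it in as a build argument:
# Dockerfile: receive the pre-computed value and persist it via ENV,
# which, unlike a shell variable set in a RUN step, survives into the container
ARG ZIP_FILE_NAME
ENV ZIP_FILE_NAME=${ZIP_FILE_NAME}
and on the host:
docker build --build-arg ZIP_FILE_NAME="$(echo "$JOB_NAME_WITH_VERSION" | sed 's/\(.*\)\./\1_/')" .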

Set and use variable in RUN instruction using Windows cmd as shell

This is an MWE of my Dockerfile (I'm hand-composing, not using docker-compose):
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019-amd64
SHELL ["cmd", "/S", "/C"]
RUN set zz=foo && echo %zz%
When I build the container, I expect this to print foo, but instead I get %zz% which seems to indicate that echo doesn't see the variable zz set.
Step 20/24 : RUN set zz=foo && echo %zz%
---> Running in b1f462e81c5c
%zz%
Removing intermediate container b1f462e81c5c
---> 9c0abf6eb928
If I run this on the command line, it works as expected:
C:\> cmd /S /C set zz=foo && echo %zz%
foo
How can I make the RUN instruction work like the command prompt?
My actual use case, in case this is an XY problem: I'm reading the contents of a user-provided file into a variable and then using that in a command which sets some global config file.
RUN `
set /p pat=<your_pat.txt && `
git config --global url."https://%pat%#github.com/".insteadOf "https://github.com/"
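No answer was posted for this one, but the usual explanation (an assumption on my part, not from the thread) is that cmd substitutes %zz% when it parses the whole line, before set has executed, so echo sees the original, unset value. Enabling delayed expansion and using !zz! defers the substitution to execution time; a sketch:
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019-amd64
# /V:ON enables delayed expansion: !zz! is read when echo executes,
# after set has run, instead of %zz% being substituted at parse time.
SHELL ["cmd", "/V:ON", "/S", "/C"]
RUN set zz=foo && echo !zz!
The same would apply to the PAT use case: with /V:ON in the SHELL instruction, %pat% becomes !pat!.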

Missing perl command in Perl image

I'm sure this is an incredibly simple fix. I tried to build a Docker image with Perl in it (plus some Perl modules). However, when I go to run it, it says there is no /bin/perl. The question is:
Why does the Perl Docker image not have Perl in it?
My Dockerfile below:
FROM perl:5.20
ENV PERL_MM_USE_DEFAULT 1
RUN cpan install Net::SSL inc:latest
RUN mkdir /ssc
COPY /ssc /ssc
RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
RUN chmod a+rx /ssc/bin/*.sh
ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
Jenkins Pipeline snippet:
stage('Build, Tag and Push SSC Dockerfile'){
tagAsTest = "${IMAGE_NAME}:test"
REPO = "chq-ic2e-sprint-images-docker-local"
println "Docker App Build"
docker.build(tagAsTest,"-f Dockerfile .")
sh 'docker image ls | grep rules-client'
}
stage('Set image tag to :approved'){
hasReachedDockerComposeUp=false;
REPO = "chq-ic2e-sprint-images-docker-local"
sh "docker tag ${IMAGE_NAME}:test ${IMAGE_NAME}:approved"
buildInfo = rtDocker.push("${IMAGE_NAME}:approved", REPO , buildInfo)
server.publishBuildInfo buildInfo
}
The Jenkins log below:
[Pipeline] sh
+ docker build -t chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test -f Dockerfile .
Sending build context to Docker daemon 39.42kB
Step 1/8 : FROM perl:5.20
---> bbe5a82c1dbe
Step 2/8 : ENV PERL_MM_USE_DEFAULT 1
---> Using cache
---> ca2769a89ab8
Step 3/8 : RUN cpan install Net::SSL inc:latest
---> Using cache
---> 1e53f0573131
Step 4/8 : RUN mkdir /ssc
---> Using cache
---> a324effec8ce
Step 5/8 : COPY /ssc /ssc
---> d40bf34f8565
Step 6/8 : RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
---> Running in 02386f41174f
Removing intermediate container 02386f41174f
---> 4767a8e6f23a
Step 7/8 : RUN chmod a+rx /ssc/bin/*.sh
---> Running in 07646aa96048
Removing intermediate container 07646aa96048
---> f070fcd8a9e9
Step 8/8 : ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
---> Running in e6bab12f8f40
Removing intermediate container e6bab12f8f40
---> 1422df9d957b
Successfully built 1422df9d957b
Successfully tagged chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test
[Pipeline] sh
+ docker image ls
+ grep rules-client
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client approved da334d1d8fae 2 days ago 22.5MB
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client test da334d1d8fae 2 days ago 22.5MB
Script is being run via pipeline like this:
stage('Run image'){
sh '''
docker run -i -v \
--mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved
sh
'''
}
or from terminal like this:
#!/bin/bash
docker run -it \
--mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved sh
The perl binary is probably in /usr/local/bin/perl. You can check that in a shell in the running container.
host> docker exec -it your_container bash
container> which perl
/usr/local/bin/perl
container> exit
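If the error message literally complains about /bin/perl, a likely cause (a guess, since put-and-submit.sh isn't shown) is a hard-coded shebang in one of the scripts; pointing it at env finds perl wherever it is installed:
#!/usr/bin/env perl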
It sure has Perl version 5.20 in it. I'm just curious about the entrypoint script in your Dockerfile: you're running a shell script by default when the container is started. What does that script start or run? If you want to run perl without entering the container, use --entrypoint=perl (or pass a perl command) with your docker run command.
docker run --rm --name perl perl:5.20 perl --version
### Output
This is perl 5, version 20, subversion 3 (v5.20.3) built for x86_64-linux
(with 1 registered patch, see perl -V for more detail)
Copyright 1987-2015, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
###

How to copy folder from parent into current directory for Dockerfile using Makefile

I have a makefile that looks like this:
push:
docker build -t dataengineering/dataloader .
docker tag dataengineering/dataloader:latest 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
docker push 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
deploy:
#if [ ! "$(environment)" ]; then echo "environment must be defined" && exit 1; fi
#if [ ! "$(target)" ]; then echo "target must be defined" && exit 1; fi
kubectl delete deploy dataloader-$(target) -n dataengineering|| continue
kubectl apply -f kube/$(environment)/deployment-$(target).yaml -n dataengineering
But I need a folder from the parent directory to be inside the dataloader directory for my Dockerfile to actually work.
Does this work?
push:
cd ..; cp -r datastore/ dataloader/
docker build -t dataengineering/dataloader .
docker tag dataengineering/dataloader:latest 1111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
docker push 11111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
deploy:
#if [ ! "$(environment)" ]; then echo "environment must be defined" && exit 1; fi
#if [ ! "$(target)" ]; then echo "target must be defined" && exit 1; fi
kubectl delete deploy dataloader-$(target) -n dataengineering|| continue
kubectl apply -f kube/$(environment)/deployment-$(target).yaml -n dataengineering
My dockerfile:
FROM python:3.7
WORKDIR /var/dataloader
COPY assertions/ ./assertions/
...
COPY datastore/ ./datastore/
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python", "dataloader.py"]
If all you need is to copy the directory into the current directory (which serves as your Docker build context), you can use cp -r ../datastore/ dataloader/. If the dataloader directory sits in the same directory as the datastore directory, you'd do cp -r ../datastore/ ../dataloader/ instead.
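A sketch of the adjusted push target, assuming make is invoked from inside the dataloader directory (note that each recipe line runs in its own shell, and recipe lines must be indented with tabs):
push:
	cp -r ../datastore .
	docker build -t dataengineering/dataloader .
	docker tag dataengineering/dataloader:latest 1111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
	docker push 1111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest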

Docker RUN does NOT persist files

I have a problem with Docker: the changes made by commands launched via RUN do not persist.
Here is my Dockerfile :
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output :
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
 ---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
 ---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
[... the standard Debian skeleton .profile contents ...]
bar
 ---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
 ---> Running in c694e0cb5866
# ~/.profile: executed by the command interpreter for login shells.
[... same .profile contents, but without the trailing "bar" line ...]
 ---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why the "foo" file is not persisted at step 3? Why is the ".bash_logout" file recreated at step 5? Why is "bar" no longer in my ".profile" file at step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is... useless. Any clue?
The reason those changes are not persisted is that they are made inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build. More precisely, each build step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
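You can reproduce this with a minimal image (a sketch; this assumes the classic builder, which is what discards volume contents between steps):
FROM ubuntu
VOLUME /data
# Written into a fresh volume mounted only for this step...
RUN echo hello > /data/file
# ...so the file is already gone here, and this step fails.
RUN cat /data/file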
How to resolve this?
I think the best way to resolve this is to:
- Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
- Create a custom entrypoint based on, or "wrapping", the default entrypoint script, which copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started.
Actually...
And just as I wrote that up: it looks like the official Jenkins image already supports this out of the box;
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
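For example (a minimal sketch, assuming a local custom.profile you want to end up in jenkins_home):
FROM jenkins:latest
# Files placed under /usr/share/jenkins/ref/ are copied into
# /var/jenkins_home by the image's entrypoint on first start.
COPY custom.profile /usr/share/jenkins/ref/.profile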
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986
