Is there a way to use an if condition in a Dockerfile? - docker

I am new to Docker and I am looking for a way to execute a command in a Docker container depending on the environment.
In my Dockerfile I have two commands, command_a and command_b. If env = 'prod', run command_a; otherwise run command_b. How can I achieve this?
I tried the following:
RUN if [ $env = "prod" ] ; then echo command_a; else echo command_b; fi;
How can I achieve the desired behaviour?
PS:
I know that echo should not be there.

Docker 17.05 and later supports a form of conditional logic using multi-stage builds and build args. Have a look at https://medium.com/@tonistiigi/advanced-multi-stage-build-patterns-6f741b852fae
From the blog post:
ARG BUILD_VERSION=1
FROM alpine AS base
RUN ...
FROM base AS branch-version-1
RUN touch version1
FROM base AS branch-version-2
RUN touch version2
FROM branch-version-${BUILD_VERSION} AS after-condition
FROM after-condition
RUN ...
And then use docker build --build-arg BUILD_VERSION=value ...
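Applied to your case, the same pattern could look roughly like this (a sketch: command_a, command_b and the env values are placeholders from the question, and "dev" here stands for any non-prod environment):
ARG env=dev
FROM alpine AS base
# common setup shared by both environments ...
FROM base AS branch-prod
RUN command_a
FROM base AS branch-dev
RUN command_b
FROM branch-${env}
Build with docker build --build-arg env=prod . for production, or without the flag for the default.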

Related

Do I need separate Dockerfiles for py2 and py3?

Currently I have 2 Dockerfiles, Dockerfile-py2:
FROM python:2.7
# stuff
and Dockerfile-py3:
FROM python:3.4
# stuff
where both instances of # stuff are identical.
I build two docker images using an invoke task:
from invoke import task

@task
def docker(ctx):
    """Build docker images."""
    tag = ctx.run('git log -1 --pretty=%h').stdout.strip()
    for pyversion in '23':
        name = 'myrepo/myimage{pyversion}'.format(pyversion=pyversion)
        image = '{name}:{tag}'.format(name=name, tag=tag)
        latest = '{name}:latest'.format(name=name)
        ctx.run('docker build -t {image} -f Dockerfile-py{pyversion} .'.format(image=image, pyversion=pyversion))
        ctx.run('docker tag {image} {latest}'.format(image=image, latest=latest))
        ctx.run('docker push {name}'.format(name=name))
Is there any way to prevent the duplication of # stuff, so I don't end up in a situation where someone edits one file but not the other?
Here is one way, using a Dockerfile ARG along with docker build --build-arg:
ARG version
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now you build for python2.7 like so:
docker build -t myimg/tmp --build-arg version=2.7 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 06e28a29a3d2
Python 2.7.16
And in the same way, for python3.4:
docker build -t myimg/tmp --build-arg version=3.4 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 2283edc1b65d
Python 3.4.10
As you can imagine, you can also set a default value for ${version} in your Dockerfile:
ARG version=3.4
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now if you just do docker build -t myimg/tmp . you will build for python3.4. But you can still override with the previous two commands.
So to answer your question: no, you don't need two different Dockerfiles.
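If you also want to keep building both images in one go, a small shell loop over the single parameterized Dockerfile could replace the per-version files (a sketch; the image names follow the question's myimage2/myimage3 naming):
for v in 2.7 3.4; do
    # strip the minor version to get myimage2 / myimage3
    docker build -t myrepo/myimage${v%%.*} --build-arg version=$v .
done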

Run commands inside Docker container without mounting project directory

My Jenkins pipeline uses the docker-workflow plugin. It builds a Docker image and tags it app. The build step fetches some dependencies and bakes them into the image along with my app.
I want to run two commands inside a container based on that image. The commands should be executed in the built environment, with access to the dependencies. I tried using Image.inside, but it seems to fail because inside mounts the project directory over the working directory(?), so the dependencies aren't available.
docker.image("app").inside {
sh './run prepare-tests'
sh './run tests'
}
I tried using docker.script.withDockerContainer too, but the commands don't seem to run inside the container. The same seems to be true for Image.withRun. At least with that I could specify a command, but it seems that I'd have to specify both commands in one statement. Also, it seems that withRun doesn't fail the build if the command doesn't exit cleanly.
docker
    .image("app")
    .withRun('', 'bash -c "./app prepare-tests && ./app tests"') { container ->
        sh "exit \$(docker wait ${container.id})"
    }
Is there a way to use Image.inside without mounting the project directory? Or is there a more elegant way of doing this?
The docker DSL, like docker.image().inside() {} etc., mounts the Jenkins job workspace directory into the container and makes it the working directory, which overrides the WORKDIR set in the Dockerfile.
You can verify that in the Jenkins console output.
1) cd into the image's WORKDIR first
docker.image("app").inside {
sh '''
cd <WORKDIR of image specifyed in Dockerfile>
./run prepare-tests
./run tests
'''
}
2) Run the container via sh, rather than via the docker DSL
sh '''
docker run -i app bash -c "./run prepare-tests && ./run tests"
'''
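If the commands rely on the image's WORKDIR, you can also set it explicitly with -w (here /app is only a placeholder for whatever WORKDIR your Dockerfile defines):
sh '''
docker run --rm -w /app app bash -c "./run prepare-tests && ./run tests"
'''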

Docker set ENV based on if-else

I have a situation where I need to set an ENV based on a runtime condition, like this:
RUN if [ "$RUNTIME" = "prod" ] then VARIABLE="Some Production URL"; else VARIABLE="Some QA URL"; fi;
ENV={VARIABLE}
I've been looking at different solutions but none of them seem to be panning out (for example, the basic one where VARIABLE is lost when RUN exits). What would be an elegant way to achieve this?
It is an unfortunate constraint that you only have this "dev/qa/prod" environment variable. However, it is possible to achieve what you want.
First, you might consider baking your environment-specific configuration into the image for all environments. (Normally I would discourage doing this!)
For example you can COPY three files into your image:
dev-env.sh: contains your dev config in the form:
ELASTICSEARCH_URL=http://elastic-dev:123
qa-env.sh (similar)
prod-env.sh (similar)
Then you evaluate at run-time (not at build-time) which environment you are in: you add an ENTRYPOINT script to your image which sources the correct file, depending on the ENVIRONMENT_NAME variable.
Dockerfile (part):
ENTRYPOINT ["./docker-entrypoint.sh"]
docker-entrypoint.sh (copied into WORKDIR of the image):
#!/bin/bash
set -e
if [ "$ENVIRONMENT_NAME" = "prod" ]; then
    source prod-env.sh
elif [ "$ENVIRONMENT_NAME" = "qa" ]; then
    source qa-env.sh
elif [ "$ENVIRONMENT_NAME" = "dev" ]; then
    source dev-env.sh
else
    echo "Unknown ENVIRONMENT_NAME: $ENVIRONMENT_NAME" >&2
    exit 1
fi
exec "$@"
This script will run when you launch the docker container, so this approach is no option for you if you need the variables to be available in Dockerfile-instructions (at build-time).
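For completeness, the environment is then chosen when the container is started, for example (the image name is just a placeholder):
docker run -e ENVIRONMENT_NAME=prod my-image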
Another (build-time) workaround is described here and consists of using temporary files to store environment variables across multiple image layers.
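A minimal sketch of that temporary-file idea, assuming the RUNTIME build arg from the question (the URLs and the final echo are placeholders):
ARG RUNTIME
# decide the value in one layer and persist it in a file
RUN if [ "$RUNTIME" = "prod" ]; then echo "Some Production URL" > /tmp/variable; else echo "Some QA URL" > /tmp/variable; fi
# read it back in a later layer
RUN VARIABLE="$(cat /tmp/variable)" && echo "building against $VARIABLE"
Note that this only shares the value between build steps; it still does not produce an ENV in the final image.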
The literal conditional execution can be achieved with multistage build and ONBUILD.
ARG mode=prod
FROM alpine as img_prod
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/prod
FROM alpine as img_qa
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/qa
FROM img_${mode}
...
Then you build with docker build --build-arg mode=qa .
Wouldn't passing an env var with docker run be the solution you need? Something like this:
docker run -e YOUR_VARIABLE="Some Production URL" ...

Conditionally set ENV var based on hostname in Dockerfile

How can I set an ENV var in my Dockerfile based on the hostname? I tried this:
RUN if [ hostname = "foo" ]; then ENV BAR "BAZ"; else ENV BAR "BIFF"; fi
But that failed with
ENV: not found
You can't nest docker build instructions: everything after the RUN instruction is executed in the image context, where docker build instructions such as ENV don't exist. That explains the error you are seeing.
Even if you translated that into proper shell code, BAR would only be set for that single RUN instruction during the build.
Either orchestrate on the host and pass BAR via docker run -e to your container, or add a startup script to the image that sets BAR as needed on container start:
FROM foo
COPY my-start.sh /
CMD ["/my-start.sh"]
First of all, you can't embed Docker build instructions in the shell commands of a RUN: the shell runs inside an intermediate container during the build, while Docker build instructions are processed by the build engine; they are different things. Besides, Docker does not support conditional instructions like IF. Docker is about immutable infrastructure: the Dockerfile is the definition of your image, and it is supposed to produce the same image regardless of the build context. From a delivery point of view, the image is your deliverable build artifact; if you want to deliver different things, use different Dockerfiles to build different images. If the difference is only about the runtime, consider postponing the env definition to runtime with the -e option of docker run.
The reason why your build is failing has been explained by @shizhz & @Erik Dannenberk.
However, if you really do need that behavior, I suggest you write a little script to do it:
export BAR=$([[ "$(hostname)" = "foo" ]] && echo "BAZ" || echo "BIFF")
docker build -t hello/hi - <<EOF
FROM alpine
ENV BAR $BAR
CMD echo $BAR
EOF

How to make a build arg mandatory during Docker build?

Is there any way to make a build argument mandatory during docker build? The expected behaviour would be for the build to fail if the argument is missing.
For example, for the following Dockerfile:
FROM ubuntu
ARG MY_VARIABLE
ENV MY_VARIABLE $MY_VARIABLE
RUN ...
I would like the build to fail at ARG MY_VARIABLE when built with docker build -t my-tag . and pass when built with docker build -t my-tag --build-arg MY_VARIABLE=my_value ..
Is there any way to achieve that behaviour? Setting a default value doesn't really do the trick in my case.
(I'm running Docker 1.11.1 on darwin/amd64.)
EDIT:
One way of doing that I can think of is to run a command that fails when MY_VARIABLE is empty, e.g.:
FROM ubuntu
ARG MY_VARIABLE
RUN test -n "$MY_VARIABLE"
ENV MY_VARIABLE $MY_VARIABLE
RUN ...
but it doesn't seem to be a very idiomatic solution to the problem at hand.
I tested RUN test -n <ARG variable name>, which @konradstrack mentioned in the original (edited) post... that seems to do the job of making the variable mandatory as a build-time argument for the docker build command:
FROM ubuntu
ARG MY_VARIABLE
RUN test -n "$MY_VARIABLE"
ENV MY_VARIABLE $MY_VARIABLE
You can also use shell parameter expansion to achieve this.
Let's say your mandatory build argument is called MANDATORY_BUILD_ARGUMENT and you want it to be set and non-empty; your Dockerfile could look like this:
FROM debian:stretch-slim
MAINTAINER Evel Knievel <evel@kniev.el>

ARG MANDATORY_BUILD_ARGUMENT

RUN \
    # Check for mandatory build arguments
    : "${MANDATORY_BUILD_ARGUMENT:?Build argument needs to be set and non-empty.}" \
    # Install libraries
    && apt-get update \
    && apt-get install -y \
        cowsay \
        fortune \
    # Cleanup
    && apt-get clean \
    && rm -rf \
        /var/lib/apt/lists/* \
        /var/tmp/* \
        /tmp/*

CMD ["/bin/bash", "-c", "/usr/games/fortune | /usr/games/cowsay"]
Of course, you would also want to actually use the build argument for something, which I don't do here, but still, I recommend building this Dockerfile and taking it for a test-run :)
EDIT
As mentioned in @Jeffrey Wen's answer, to make sure that this errors out on a centos:7 image (and possibly others; I admittedly haven't tested this on images other than stretch-slim):
Ensure that you're executing the RUN command with the bash shell.
RUN ["/bin/bash", "-c", ": ${MYUID:?Build argument needs to be set and not null.}"]
Another simple way:
RUN test -n "$MY_VARIABLE" || (echo "MY_VARIABLE not set" && false)
A long time ago I needed to introduce a required (mandatory) ARG and, for better UX, include the check at the beginning:
FROM ubuntu:bionic
ARG MY_ARG
RUN [ -z "$MY_ARG" ] && echo "MY_ARG is required" && exit 1 || true
...
RUN ./use-my-arg.sh
But this busts the build cache for every single layer after the initial MY_ARG, because MY_ARG=VALUE is prepended to every RUN command afterwards.
Whenever I changed MY_ARG it would end up rebuilding the whole image, instead of rerunning the last RUN command only.
To bring caching back, I have changed my build to a multi-staged one:
The first stage uses MY_ARG and checks its presence.
The second stage proceeds as usual and declares ARG MY_ARG right at the end.
FROM alpine:3.11.5
ARG MY_ARG
RUN [ -z "$MY_ARG" ] && echo "MY_ARG is required" && exit 1 || true
FROM ubuntu:bionic
...
ARG MY_ARG
RUN ./use-my-arg.sh
Since ARG MY_ARG in the second stage is declared right before it's used, all the previous steps in that stage are unaffected and thus cache properly.
You could do something like this...
FROM ubuntu:14.04
ONBUILD ARG MY_VARIABLE
ONBUILD RUN if [ -z "$MY_VARIABLE" ]; then echo "NOT SET - ERROR"; exit 1; else : ; fi
Then docker build -t my_variable_base .
Then build your images based on this...
FROM my_variable_base
...
It's not super clean, but at least it abstracts the 'bleh' stuff away to the base image.
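Building an image based on it would then look roughly like this (names follow the answer above):
docker build -t my_app --build-arg MY_VARIABLE=some_value .   # passes
docker build -t my_app .                                      # fails at the ONBUILD check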
I cannot comment yet because I do not have 50 reputation, but I would like to add to @Jan Nash's solution, because I had a little difficulty getting it to work with my image.
If you copy/paste @Jan Nash's solution, it will work and spit out the error message that the build argument is not specified.
What I want to add
When I tried getting it to work on a CentOS 7 image (centos:7), Docker ran the RUN command without erroring out.
Solution
Ensure that you're executing the RUN command with the bash shell.
RUN ["/bin/bash", "-c", ": ${MYUID:?Build argument needs to be set and not null.}"]
I hope that helps future incoming people. Otherwise, I believe @Jan Nash's solution is just brilliant.
In case anybody is looking for the solution but with docker compose build: I used mandatory variables.
version: "3.9"
services:
my-service:
build:
context: .
args:
- ENVVAR=${ENVVAR:?See build instructions}
After running docker compose build:
Before exporting ENVVAR: Invalid template: "required variable ENVVAR is missing a value: See build instructions"
After exporting ENVVAR: build proceeds
Support for Required Environment variables
Compose Environment Variables
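Usage is then along these lines (names follow the example above):
export ENVVAR=some_value
docker compose build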
None of these answers worked for me. I wanted ${MY_VARIABLE:?} but did not want to print anything, so I did it like this:
ARG MY_VARIABLE
RUN test -n ${MY_VARIABLE:?}
Nothing is printed on success. On error you see this, which is a good enough error:
ERROR RUN test -n ${MY_VARIABLE:?}
/bin/sh: MY_VARIABLE: parameter not set or null
executor failed running [/bin/sh -c test -n ${MY_VARIABLE:?}]: exit code: 2
