How to pass arguments to the docker run command dynamically - docker

I have a Dockerfile like this, and I have to pass arguments to the docker run command dynamically:
FROM ubuntu:14.04
ENV IRONHIDE_SOURCE /var/tmp/ironhide-setup
RUN apt-get update && apt-get install -y openssh-server supervisor cron syslog-ng-core logrotate libapr1 libaprutil1 liblog4cxx10 libxml2 psmisc xsltproc ntp
RUN sed -i -E 's/^(\s*)system\(\);/\1unix-stream("\/dev\/log");/' /etc/syslog-ng/syslog-ng.conf
ADD ironhide-setup/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN mkdir -p /var/log/supervisor && mkdir -p /opt/ibm/
COPY /ironhide-setup/etc/cron.d/* /etc/cron.d
ADD ironhide-setup $IRONHIDE_SOURCE
ENV JAVA_HOME /usr/java/default
ENV PATH $JAVA_HOME/bin:$PATH
ENV IRONHIDE_ROOT /usr/ironhide
ENV LD_LIBRARY_PATH /usr/ironhide/lib
ENV IH_ROOT /usr/ironhide
ENV IRONHIDE_BACKUP_PATH /var/tmp/ironhide-backup
ENV PATH $IH_ROOT/bin:$PATH
RUN echo 'PS1="[AppConnect-Container#\h \w]: "' >> ~/.bashrc
CMD ["/usr/bin/supervisord"]
and my supervisord.conf is this
[supervisord]
nodaemon=true
[program:cron]
command = cron -f -L 15
priority=1
[program:syslog-ng]
command=/usr/sbin/syslog-ng -F -p /var/run/syslog-ng.pid --no-caps
[program:InstallCastIron]
command = %(ENV_IRONHIDE_SOURCE)s/scripts/var_setup
priority=2
I have to pass arguments to the "docker run" command so that, internally, one of the scripts under the scripts location can use those arguments when the container comes up.
Please let me know how I can achieve this.

To achieve this, you will need to use environment variables.
First, make sure that the service you want to pass arguments to consumes those environment variables.
Second, declare those variables in your Dockerfile with ENV.
Third, make sure you use an entrypoint script.
Last, pass the values at startup with docker run -e VAR=<value>, or alternatively via the environment section of a docker-compose file.
You can look through my repo here, which achieves this.
Please feel free to ask any questions.
Cheers!
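For completeness, here is a minimal sketch of the pattern described above (the variable name MY_ARG and the image name my-image are illustrative, not taken from the question):

```dockerfile
# Dockerfile: declare a default that "docker run -e" can override at start time
ENV MY_ARG=default_value
CMD ["/usr/bin/supervisord"]

# Override at startup:
#   docker run -e MY_ARG=custom_value my-image

# Or the docker-compose equivalent:
#   services:
#     app:
#       image: my-image
#       environment:
#         - MY_ARG=custom_value
```

Any script started inside the container (for example via supervisord) can then read $MY_ARG from its environment.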

Related

conditional environment definition in Dockerfile [duplicate]

Is it possible to conditionally set an ENV variable in a Dockerfile based on the value of a build ARG?
Ex: something like
ARG BUILDVAR=sad
ENV SOMEVAR=if $BUILDVAR -eq "SO"; then echo "hello"; else echo "world"; fi
Update: current usage based on Mario's answer:
ARG BUILD_ENV=prod
ENV NODE_ENV=production
RUN if [ "${BUILD_ENV}" = "test" ]; then export NODE_ENV=development; fi
However, running with --build-arg BUILD_ENV=test and then going onto the host, I still get
docker run -it mycontainer bin/bash
[root@brbqw1231 /]# echo $NODE_ENV
production
Yes, it is possible, but you need to use your build argument as a flag. You can use the shell's parameter expansion feature to check the condition. Here is a proof-of-concept Dockerfile:
FROM debian:stable
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set NODE_ENV to 'development' or set to null otherwise.
ENV NODE_ENV=${BUILD_DEVELOPMENT:+development}
# if NODE_ENV is null, set it to 'production' (or leave as is otherwise).
ENV NODE_ENV=${NODE_ENV:-production}
Testing build:
docker build --rm -t env_prod ./
...
docker run -it env_prod bash
root@2a2c93f80ad3:/# echo $NODE_ENV
production
root@2a2c93f80ad3:/# exit
docker build --rm -t env_dev --build-arg BUILD_DEVELOPMENT=1 ./
...
docker run -it env_dev bash
root@2db6d7931f34:/# echo $NODE_ENV
development
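The two expansions can be verified in a plain shell, outside Docker; this is the same :+ / :- logic the proof-of-concept Dockerfile relies on:

```shell
# BUILD_DEVELOPMENT set and non-empty -> ":+" substitutes the alternative value
BUILD_DEVELOPMENT=1
NODE_ENV=${BUILD_DEVELOPMENT:+development}
echo "$NODE_ENV"    # development

# BUILD_DEVELOPMENT unset -> ":+" yields empty, then ":-" supplies the default
unset BUILD_DEVELOPMENT
NODE_ENV=${BUILD_DEVELOPMENT:+development}
NODE_ENV=${NODE_ENV:-production}
echo "$NODE_ENV"    # production
```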
You cannot run bash code directly in a Dockerfile; you have to use the RUN command. So, for example, you can replace ENV with RUN and export the variable inside the if, like below:
ARG BUILDVAR=sad
RUN if [ "$BUILDVAR" = "SO" ]; \
then export SOMEVAR=hello; \
else export SOMEVAR=world; \
fi
I didn't try it, but it should work.
Your logic is actually correct.
The problem here is that RUN export ... won't work in a Dockerfile, because exported shell variables do not persist across image layers.
Each build step runs in a temporary container that is discarded once its layer is committed, so a variable exported there no longer exists in later steps or in the final image.
ENV on the other hand as per the documentation states:
The environment variables set using ENV will persist when a container is run from the resulting image.
The only way to do this is during your docker run command when generating the container from your image, and wrap your logic around that:
if [ "${BUILD_ENV}" = "test" ]; then
docker run -e NODE_ENV=development myimage
else
docker run myimage
fi
While you can't set conditional ENV variables, you may be able to accomplish what you are after with the RUN command and a null-coalescing environment variable:
RUN node /var/app/current/index.js --env ${BUILD_ENV:-${NODE_ENV:-"development"}}
If we are talking only about an environment variable, then just set it to the production value:
ENV NODE_ENV prod
And during container start in development, you may use -e NODE_ENV=dev.
This way the image is always built for production, but the local container is launched in development mode.
This answer is great if you only need to check whether a build-arg is present and you want to set a default value.
To improve this solution, in case you want to use the data passed by the build-arg, you can do the following:
FROM debian:stable
ARG BUILD_DEVELOPMENT=production
ENV NODE_ENV=$BUILD_DEVELOPMENT
The magic comes from the default value for the ARG.
Passing values to Dockerfile and then to entrypoint script
From the command line pass in your required value (TARG)
docker run --env TARG=T1_WS01 -i projects/msbob
Then in your Dockerfile put something like this
Dockerfile:
# if $TARG is unset, Q0_WS01 becomes the first argument and serves as the default
CMD ./entrypoint.sh ${TARG} Q0_WS01
The entrypoint.sh script only reads the first argument
entrypoint.sh:
#!/bin/bash
[ -n "$1" ] || { echo "usage: entrypoint.sh <target_env>" ; exit 1 ; }
target_env=$1
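As a sketch, the same default-argument pattern can be expressed with the shell's :- expansion (pick_target is a hypothetical helper name, not from the answer):

```shell
# Take the first argument; fall back to a default when the caller passed nothing
pick_target() {
    target_env=${1:-Q0_WS01}
    echo "$target_env"
}

pick_target T1_WS01   # prints T1_WS01
pick_target           # prints Q0_WS01
```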
I had a similar issue for setting proxy server on a container.
The solution I'm using is an entrypoint script, plus a second script for environment-variable configuration. With RUN you ensure the configuration script runs at build time, and with ENTRYPOINT it runs when you start the container.
--build-arg is used on command line to set proxy user and password.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$@"
configproxy.sh
#!/bin/bash
function start_config {
read u p < /root/proxy_credentials
export HTTP_PROXY=http://$u:$p@proxy.com:8080
export HTTPS_PROXY=https://$u:$p@proxy.com:8080
/bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
start_config
fi
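The read u p < /root/proxy_credentials line splits the first line of the file on whitespace: the first word goes into u and the remainder into p. A quick sketch of that behavior outside the container (with a temp file standing in for the credentials file):

```shell
# read splits the line on whitespace: first word -> u, rest -> p
tmp=$(mktemp)
printf 'alice s3cret\n' > "$tmp"
read u p < "$tmp"
echo "$u"   # alice
echo "$p"   # s3cret
rm -f "$tmp"
```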
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.

Jmeter in Docker

I am trying to run JMeter in Docker. I have the Dockerfile below, and entrypoint.sh is added as the ENTRYPOINT.
ARG JMETER_VERSION="5.2.1"
RUN mkdir /jmeter
WORKDIR /jmeter
RUN apt-get update \
&& apt-get install wget -y \
&& apt-get install openjdk-8-jdk -y \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz \
&& tar -xzf apache-jmeter-5.2.1.tgz \
&& rm apache-jmeter-5.2.1.tgz
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN export JAVA_HOME
RUN echo $JAVA_HOME
ENV JMETER jmeter/apache-jmeter-5.2.1/bin
ENV PATH $PATH:$JMETER_BIN
RUN export JMETER
RUN echo $JMETER
WORKDIR /jmeter/apache-jmeter-5.2.1
COPY users.jmx /jmeter/apache-jmeter-5.2.1
COPY entrypoint.sh /jmeter/apache-jmeter-5.2.1
RUN ["chmod", "+x", "entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$@"
# Keep entrypoint simple: we must pass the standard JMeter arguments
bin/jmeter.sh "$@"
echo "END Running Jmeter on `date`"
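The arithmetic above gives 80% of free memory to the initial and maximum heap and 20% to the young generation. The same computation with a fixed example value in place of /proc/meminfo:

```shell
freeMem=1000                 # pretend 1000 MB free
s=$((freeMem/10*8))          # 80% -> initial heap (-Xms)
x=$((freeMem/10*8))          # 80% -> max heap (-Xmx)
n=$((freeMem/10*2))          # 20% -> young generation (-Xmn)
echo "-Xmn${n}m -Xms${s}m -Xmx${x}m"   # -Xmn200m -Xms800m -Xmx800m
```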
Now when I try to run the container without JMeter arguments, JMeter starts in GUI mode:
docker run sar/test12
I get this error:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
But when I run the JMeter container with arguments:
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "./entrypoint.sh": permission denied": unknown.
Solutions
For the X11 issue, you can try setting -e DISPLAY=$DISPLAY in your docker run; you may need some extra steps to get it working properly, depending on how your host is set up. But trying to get the GUI working here seems like overkill. To fix the problem when you pass the command arguments through, you can either:
Add execute permissions to the entrypoint.sh file on your host by running chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh.
Or
Don't mount /home/jmeter/unbuntjmeter/ into the container by removing the -v argument from your docker run command.
Problem
When you run this command docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx, you are mounting the directory /home/jmeter/unbuntjmeter/ from your host machine onto /jmeter/apache-jmeter-5.2.1 in your docker container.
That means your /jmeter/apache-jmeter-5.2.1/entrypoint.sh script in the container is being overwritten by the one in that directory on your host (if there is one, which there seems to be). That file on your host machine doesn't have the execute permission needed to run it in your container (presumably it just needs +x, which your build otherwise grants with RUN ["chmod", "+x", "entrypoint.sh"]).
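A quick way to reproduce this failure mode outside Docker is to execute a script that lacks the execute bit, then add it (the temp file stands in for the mounted entrypoint.sh):

```shell
# Create a script without the execute bit, mimicking the mounted file
tmp=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$tmp"
chmod -x "$tmp"
"$tmp" 2>/dev/null || echo "permission denied"   # fails, like the container
chmod +x "$tmp"                                  # the fix suggested above
"$tmp"                                           # prints: ok
rm -f "$tmp"
```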

Docker-image-as-executable: should I execute in CMD or ENTRYPOINT?

I am creating a Docker image to initialize my PostgreSQL database. It looks like this:
FROM debian:stretch
RUN set -x \
&& ... omitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
This works great. Every time I need to initialize my database I start the container (docker run --rm my-image). After the psql command is done, the container is automatically stopped and removed (because of the --rm). So basically, I have a Docker-image-as-executable.
But, I am confused whether that last line should be:
CMD cd /scripts && psql -f myscript.sql
or
ENTRYPOINT cd /scripts && psql -f myscript.sql
Which one should be used in my case (Docker-image-as-executable)? Why?
You need to use ENTRYPOINT if you want the image to behave as a "Docker-image-as-executable".
RUN executes the command(s) that you give in a new layer and creates
a new image. This is mainly used for installing a new package.
CMD sets default command and/or parameters, however, we can overwrite
those commands or pass in and bypass the default parameters from
the command line when docker runs
ENTRYPOINT is used when you want to run a container as an executable.
Both ENTRYPOINT and CMD will do the same thing here. The main difference is that with CMD you have more flexibility to override, from the CLI, the command that runs.
so If you have the dockerfile:
FROM debian:stretch
RUN set -x \
&& ... omitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
You can override the CMD defined in the Dockerfile from the CLI to run a different command:
docker run --rm my-image psql -f MYSCRIPT2.sql
This will run MYSCRIPT2.sql as given on the CLI. You can't do that with ENTRYPOINT (short of replacing it explicitly with docker run --entrypoint).
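As a sketch of the middle ground between the two (file names taken from the question, image name illustrative), ENTRYPOINT and CMD can also be combined so the executable is fixed but the argument stays overridable:

```dockerfile
FROM debian:stretch
COPY scripts /scripts
WORKDIR /scripts
# ENTRYPOINT pins the executable; CMD holds the overridable default argument
ENTRYPOINT ["psql", "-f"]
CMD ["myscript.sql"]
```

With this, docker run --rm my-image runs psql -f myscript.sql, while docker run --rm my-image MYSCRIPT2.sql swaps only the argument; the psql executable itself can only be replaced via --entrypoint.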

Modifying a Docker container's ENTRYPOINT to run a script before the CMD

I'd like to run a script to attach a network drive every time I create a container in Docker. From what I've read this should be possible by setting a custom entrypoint. Here's what I have so far:
FROM ubuntu
COPY *.py /opt/package/my_code
RUN mkdir /efs && \
apt-get install nfs-common -y && \
echo "#!/bin/sh" > /root/startup.sh && \
echo "mount -t nfs4 -o net.nfs.com:/ /nfs" >> /root/startup.sh && \
echo "/bin/sh -c '$1'" >> /root/startup.sh && \
chmod +x /root/startup.sh
WORKDIR /opt/package
ENV PYTHONPATH /opt/package
ENTRYPOINT ["/root/startup.sh"]
At the moment my CMD is not getting passed through properly to my /bin/sh line, but I'm wondering if there isn't an easier way to accomplish this?
Unfortunately I don't have control over how my containers will be created. This means I can't simply prepend the network mounting command to the original docker command.
From documentation:
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container
So if you have an ENTRYPOINT specified, the CMD will be passed as additional arguments for it. It means that your entrypoint script should explicitly handle these arguments.
In your case, when you run :
docker run yourimage yourcommand
What is executed in your container is :
/root/startup.sh yourcommand
The solution is to add exec "$@" at the end of your /root/startup.sh script. This way, it will execute any command given as arguments.
You might want to read about the ENTRYPOINT mechanisms and its interaction with CMD.
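A minimal sketch of that pattern, runnable outside Docker (the setup step is a placeholder comment where the nfs mount would go, and /tmp/startup.sh is an illustrative path):

```shell
# Write an entrypoint that does its setup, then execs the passed command
cat > /tmp/startup.sh <<'EOF'
#!/bin/sh
# ... setup work, e.g. mounting the network drive, goes here ...
exec "$@"
EOF
chmod +x /tmp/startup.sh

/tmp/startup.sh echo hello   # prints: hello
```

Inside a container, "$@" is whatever CMD (or the docker run trailing arguments) supplied, so the original command runs unchanged after the setup step.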
