What's the meaning of "CMD [ "-?" ]" in Docker? - docker

《The Docker Book v17.12.0-ce》 Page 223
Listing 6.19: Our war file fetcher
FROM ubuntu:16.04
MAINTAINER James Turnbull
ENV REFRESHED_AT 2016-06-01
RUN apt-get -yqq update
RUN apt-get -yqq install wget
VOLUME [ "/var/lib/tomcat7/webapps/" ]
WORKDIR /var/lib/tomcat7/webapps/
ENTRYPOINT [ "wget" ]
CMD [ "-?" ]
This incredibly simple image does one thing: it wgets whatever file from a URL
that is specified when a container is run from it and stores the file in the /var/lib
/tomcat7/webapps/ directory. This directory is also a volume and the working
directory for any containers. We’re going to share this volume with our Tomcat
server and run its contents.
Finally, the ENTRYPOINT and CMD instructions allow our container to run when no
URL is specified; they do so by returning the wget help output when the container
is run without a URL.
Can anybody tell me the meaning of "CMD [ "-?" ]"?
I know the concepts of ENTRYPOINT and CMD;
what I don't understand is the meaning of "-?" in "wget -?".

When you run a Docker container, it constructs a command line by simply concatenating the "entrypoint" and "command". Those come from different places in the docker run command line; but if you don't provide a --entrypoint option then the ENTRYPOINT in the Dockerfile is used, and if you don't provide any additional command-line arguments after the image name then the CMD is appended.
So, a couple of invocations:
# Does "wget -?"
docker run --rm thisimage
# Does "wget -O- http://stackoverflow.com": dumps the SO home page
docker run --rm thisimage -O- http://stackoverflow.com
# What you need to do to get an interactive shell
docker run --rm -it --entrypoint /bin/sh thisimage

I figured it out: the author made a clerical error. The argument in CMD should be "-h",
because later he says: "Finally, the ENTRYPOINT and CMD instructions allow our container to run when no URL is specified; they do so by returning the wget help output when the container is run without a URL."
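Under that reading, a minimal sketch of the corrected last two lines of the listing (assuming GNU wget, where -h / --help prints the help output):
ENTRYPOINT [ "wget" ]
CMD [ "-h" ]
With that change, docker run --rm thisimage prints the wget help text, while any URL given after the image name still replaces the CMD and is fetched by the ENTRYPOINT.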

Related

Docker JBoss SVN automation script? RPM v. YUM?

As it stands, my Dockerfile works as written below, but currently I have to run the two commented lines in order to pull, compile, and deploy my application to the server. I tried creating a shell script to run those commands using ADD and ENTRYPOINT, but when I run it (using the docker commands below) the shell script runs and then the container exits.
What do I need to modify (the docker run command, I assume) to fix this?
Is there an easier way to install the libraries than listing multiple URLs for rpm? I tried using yum, but I wasn't sure how to set up my repo for installing anything.
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [the URLS of the 40 libraries I need for SVN]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
#RUN svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
#RUN /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/bin/jar -cvf $JBOSS_HOME/standalone/deployments/myapp.war /usr/svn/myapp
Docker commands
docker build . -t myapp:latest
docker run -d -p 8080:8080 -p 9990:9990 --env-file=svnvars.cfg myapp:latest
Found out what I was doing wrong. I was trying to use
/opt/eap/bin/standalone.sh
as the last command in my entrypoint script.
I discovered this was wrong by calling
docker image inspect myapp:latest
where I found
"Cmd": [
"/opt/eap/bin/openshift-launch.sh"
],
I was calling the wrong command. So I fixed this by replacing the command in my shell script and changing my ENTRYPOINT to CMD.
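As an aside, assuming the same image tag, you can pull out just that field with a Go template instead of reading the full inspect output:
docker image inspect --format '{{json .Config.Cmd}}' myapp:latest
This prints only the configured Cmd array, which makes it easy to spot what the base image launches by default.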
Here are the corrected files:
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [too many libraries]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
ADD svnvars.cfg /var/svn/svnvars.cfg
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
ADD entrypoint.sh /home/entrypoint.sh
CMD /home/entrypoint.sh
entrypoint.sh
#!/bin/bash
svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
cd /usr/svn/myapp
ant war
/opt/eap/bin/openshift-launch.sh

Jmeter in Docker

I am trying to run Jmeter in Docker. I got Dockerfile and Entrypoint has entrypoint.sh as well added.
ARG JMETER_VERSION="5.2.1"
RUN mkdir /jmeter
WORKDIR /jmeter
RUN apt-get update \
&& apt-get install wget -y \
&& apt-get install openjdk-8-jdk -y \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz \
&& tar -xzf apache-jmeter-5.2.1.tgz \
&& rm apache-jmeter-5.2.1.tgz
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN export JAVA_HOME
RUN echo $JAVA_HOME
ENV JMETER /jmeter/apache-jmeter-5.2.1/bin
ENV PATH $PATH:$JMETER
RUN export JMETER
RUN echo $JMETER
WORKDIR /jmeter/apache-jmeter-5.2.1
COPY users.jmx /jmeter/apache-jmeter-5.2.1
COPY entrypoint.sh /jmeter/apache-jmeter-5.2.1
RUN ["chmod", "+x", "entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$#"
# Keep entrypoint simple: we must pass the standard JMeter arguments
bin/jmeter.sh $#
echo "END Running Jmeter on `date`"
Now when I try to run the container without JMeter arguments
docker run sar/test12
the container starts but I get this error:
An error occurred:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
But when I run the JMeter container with arguments
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "./entrypoint.sh": permission denied": unknown.
Solutions
For the X11 issue, you can try setting -e DISPLAY=$DISPLAY in your docker run; you may need to perform some other steps to get it working properly depending on how your host is set up. But trying to get the GUI working here seems like overkill. To fix the error you get when you do pass the command arguments through, you can either:
Add execute permissions to the entrypoint.sh file on your host by running chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh.
Or
Don't mount /home/jmeter/unbuntjmeter/ into the container by removing the -v argument from your docker run command.
Problem
When you run this command docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx, you are mounting the directory /home/jmeter/unbuntjmeter/ from your host machine onto /jmeter/apache-jmeter-5.2.1 in your docker container.
That means your /jmeter/apache-jmeter-5.2.1/entrypoint.sh script in the container is being overwritten by the one in that directory on your host (if there is one, which there does seem to be). This file on your host machine doesn't have the proper permissions to be executed in your container (presumably it just needs +x because you are running this in your build: RUN ["chmod", "+x", "entrypoint.sh"]).
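For example, a minimal sketch of the first option, using the host path and run command from the question:
chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
Once the file on the host is executable, the bind-mounted copy inside the container is too, so the ENTRYPOINT can run it.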

How to pass an unknown list of environment variables to a command in Dockerfile

I have a very long and often changing list of environment variables which I need to pass to the same Docker image when starting it. These environment variables are configured in a Rancher environment and will be passed individually as such. They should all be passed to the command that is about to start within the image.
When I had just a few parameters it was possible to pass them while having them explicitly declared in the Dockerfile:
CMD [ "sh", "-c", "node src/server.js --param1=$ENV_PARAM_1" --param2=$ENV_PARAM_2 ... --paramN=$ENV_PARAM_N"" ]
Now this is not possible anymore because the list has grown to far and is dynamically changing a lot. I also can't build a new image per usecase.
I need something like:
CMD [ "sh", "-c", "node src/server.js $PRINT_ALL_MY_PARAMS_HERE" ]
Side note: the command will fail if it is given arguments it does not recognize.
Any idea how I could solve this?
You can override CMD when you run the container. Say you've built an image with a default command
CMD node src/server.js
When you go to actually run the container, you can override this with whatever you want
docker run \
-d -p ... \
my/image \
node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
As I've written it here the $ENV_PARAM_N will be resolved by the host system's shell, but if a tool is launching the container for you that might not be a problem. If some of the values are from Dockerfile ENV directives you'll need to force the container shell to do the expansion
docker run \
-d -p ... \
-e ENV_PARAM_2=not-in-the-dockerfile \
my/image \
sh -c 'node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...'
There's also a pattern of using the ENTRYPOINT as the main program to run and using CMD only for additional options.
ENTRYPOINT ["node", "src/server.js"]
CMD []
docker run \
-d -p ... \
my/image \
--param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
However, note in this case that you cannot ask the container shell to expand things for you. ENTRYPOINT must use the JSON-array syntax, and you can't insert an sh -c anywhere in this command usefully. (sh -c command consumes only a single shell "word" as its command, and any other options you write after that will generally get ignored.)
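A quick illustration of that sh -c behaviour (a hypothetical command, unrelated to the image itself):
# only "echo" is the command string; "hello" becomes $0 and is never printed
sh -c echo hello
# quoting the whole command makes it a single word, so it runs as expected
sh -c 'echo hello'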
You could use ENTRYPOINT to define the part that should always be there when launching the container and CMD for the part that is overridden by command given at container launch:
ENTRYPOINT [ "sh", "-c", "node src/server.js"]
CMD ["--param1=$ENV_PARAM_1", "--param2=$ENV_PARAM_2",... "--paramN=$ENV_PARAM_N"]
This way you can have different parameters for each run:
docker run server # Executes ENTRYPOINT + CMD from Dockerfile
docker run server --help # Executes ENTRYPOINT + "--help"

Pass in Arguments when Doing a Docker Run

If I have the following Dockerfile
FROM centos:8
RUN yum update -y
RUN yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["app.py"]
With app.py being the following:
#!/usr/bin/python
import sys
print('Here is your param: ', sys.argv[0])
When I call docker run -it (myimg), how can I pass in a parameter so the output would be the param?
For example:
docker run -it (myimg) "testfoo"
would print
Here is your param: testfoo
sys.argv[0] refers to the file name, so you cannot expect testfoo when you run docker run -it my_image testfoo.
The first item in the list, sys.argv[0], is the name of the Python script. The rest of the list elements, sys.argv[1] to sys.argv[n], are the command-line arguments.
print('Here is your param: file Name', sys.argv[0],'args testfoo:',sys.argv[1])
So you can just change the ENTRYPOINT to the one below, and then you are good to pass the runtime argument testfoo:
ENTRYPOINT ["python3","app.py"]
Now pass the argument testfoo:
docker run -it --rm my_image testfoo
Anything you provide after the image name in the docker run command line replaces the CMD from the Dockerfile, and then that gets appended to the ENTRYPOINT to form a complete command.
Since you put the script name in CMD, you need to repeat that in the docker run invocation:
docker run -it myimg app.py testfoo
(This split of ENTRYPOINT and CMD seems odd to me. I'd make sure the script starts with a line like #!/usr/bin/env python3 and is executable, so you can directly run ./app.py; make that be the CMD and remove the ENTRYPOINT entirely.)
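A minimal sketch of that variant, reusing the build steps from the question and the image name myimg (this assumes app.py starts with a shebang such as #!/usr/bin/env python3 as suggested above; the chmod is only needed if the file isn't already executable in your source tree):
FROM centos:8
RUN yum update -y
RUN yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
RUN chmod +x app.py
CMD ["./app.py"]
Then docker run --rm myimg runs the script with no arguments, and docker run --rm myimg ./app.py testfoo passes testfoo through as sys.argv[1].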

Docker: Cannot execute binary file

I can't run any binary in my docker container.
Dockerfile:
FROM ubuntu:eoan AS compiler-build
RUN apt-get update && \
dpkg --add-architecture i386 && \
apt-get install -y gcc \
gcc-multilib \
make \
cmake \
git \
python3.8 \
bash
WORKDIR /home
ADD . /home/pawn
RUN mkdir build
WORKDIR /home/build
ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]
I can't even use the file command:
[root#LAPTOP-EJ5BH6DJ compiler]:~/dev/private/SAMP/compiler (v13.11.0) (master) dc run compiler file bash
/usr/bin/file: /usr/bin/file: cannot execute binary file
From this forum thread:
This error occurs when you use a shell in your entrypoint without the "-c" argument
So, if you change your Dockerfile to end with
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
then you can run binary files.
Note the purpose of the options for /bin/bash, from the manpage:
-l: Make bash act as if it had been invoked as a login shell
-c: If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
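A hedged illustration of that -c argument handling, outside of Docker:
# "first" is assigned to $0 and "second" to $1 inside the command string
bash -c 'echo "$0 and $1"' first second
# prints: first and second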
Additionally, this article is a worthwhile read on how to use both ENTRYPOINT and CMD together, and what their differences are.
EDIT: Here's another article that goes into a trivial (but clearer than the first article) example using the echo shell builtin.
EDIT: Here's an adaptation of the trivial example from the second article I linked:
FROM ubuntu
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
CMD [ "ls" ]
$ docker build -t test .
$ docker run --rm test
bin
boot
...
var
$ docker run --rm test "ls etc"
adduser.conf
alternatives
apt
...
update-motd.d
xattr.conf
Note the " around ls /etc. Without the quotes, the argument /etc doesn't seem to be passed to the ls command as I might expect.
Entrypoint can't point to /bin/bash it seems. Removing
ENTRYPOINT ["/bin/bash"] is enough to make it work.
I hit the same error. Unlike the other answers, my error was related to my docker run parameters:
# failed
docker run -it $(pwd | xargs basename):latest bash
# worked
docker run -it $(pwd | xargs basename):latest
I didn't need to add bash as I already had this in my Dockerfile:
ENTRYPOINT ["/bin/bash"]
There are times when we don't have control over the image's Dockerfile but its original entrypoint has an issue.
We can override its entrypoint to debug:
# example
docker run --rm \
--entrypoint /bin/bash \
-it apache/spark-py:v3.3.0
