Docker from scratch CMD bug - docker

I have the following Dockerfile
FROM golang as builder
ARG CADDY_HASH=4b4e99bdb2e327d553a5f773f827f624181714af
WORKDIR /root/caddy
RUN wget -qO- github.com/caddyserver/caddy/archive/"$CADDY_HASH".tar.gz | tar zx --strip-components=1
RUN set -e; cd cmd/caddy && CGO_ENABLED=0 go build
FROM scratch
COPY --from=builder /root/caddy/cmd/caddy/caddy /
ARG PORT=8000
ENV PORT $PORT
EXPOSE $PORT
CMD /caddy file-server --browse --listen :$PORT
I build and run with this command
DOCKER_BUILDKIT=0 docker build -t caddy-static-docker:latest . && docker run -e PORT=8000 -p 8000:8000 caddy-static-docker:latest
Why doesn't this work, and why do I get this error?
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown.

Use an entrypoint instead of CMD
ENTRYPOINT ["/caddy"]
CMD ["file-server", "--browse", "--listen", "8080"]
Also note the JSON syntax (exec form), which means the command is run directly rather than through a subshell.
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
source: https://docs.docker.com/engine/reference/builder/#cmd
The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container’s PID 1 - and will not receive Unix signals - so your executable will not receive a SIGTERM from docker stop .
Source: https://docs.docker.com/engine/reference/builder/#entrypoint
Your $PORT will still cause issues, since the exec form does not expand environment variables. Consider hard-coding it.
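A minimal sketch of the fixed final stage, assuming the port is hard-coded and keeping the flags from the question:
FROM scratch
COPY --from=builder /root/caddy/cmd/caddy/caddy /
EXPOSE 8000
ENTRYPOINT ["/caddy"]
CMD ["file-server", "--browse", "--listen", ":8000"]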

Use an alpine image, which is a lightweight image that ships with /bin/sh, so it can run shell commands.
The scratch image is basically empty, with nothing inside the / folder, so there are no executables to run anything given as part of a shell-form CMD.
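A hedged sketch of that approach, keeping the shell-form CMD from the question but basing the final stage on alpine instead of scratch:
FROM alpine
COPY --from=builder /root/caddy/cmd/caddy/caddy /
ENV PORT=8000
EXPOSE $PORT
CMD /caddy file-server --browse --listen :$PORT
Because /bin/sh exists here, the shell form expands $PORT at run time, so docker run -e PORT=... keeps working.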

Related

Running a shell from within a docker container succeeds but not as an entrypoint ("no such file or directory" error)

I made a small container that runs a single shell script.
Its Dockerfile is as follows:
FROM centos:centos7.9.2009
RUN mkdir -p /var/lib/test
COPY ./ /var/lib/test/
RUN yum -y localinstall /var/lib/test/*.rpm
ENTRYPOINT ["sh /var/lib/test/test.sh"]
However, when I run the image, it returns the error:
#docker run -it test:1.0
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:290: starting container process caused "exec: \"sh /var/lib/test/test.sh\": stat sh /var/lib/test/test.sh: no such file or directory".
The script file definitely exists as I can replace its entrypoint with bash and manually execute it:
# docker run -it --entrypoint bash test:1.0
[root@e9361c3e67fa /]# sh /var/lib/test/test.sh
Shell script starts...
I read similar posts and confirmed that the permissions of the script are correct and the line endings inside it are all LF.
And the shell script is really simple:
#!/bin/bash
echo "Test"
exit 0
What else can cause this problem?
Change the entrypoint to:
ENTRYPOINT ["/bin/bash", "-c", "/var/lib/test/test.sh"]
When you interactively run sh test.sh, the shell breaks it into two words for you. The JSON-array form of ENTRYPOINT and CMD (and also RUN) requires you to break up the words yourself. Writing it as a single array entry is the same as typing the single quoted word 'sh test.sh' at a shell prompt.
The most expedient answer is to break this into two words
ENTRYPOINT ["sh", "/var/lib/test/test.sh"]
However: you shouldn't need the explicit sh at all. If the script is executable (as in chmod +x) and begins with the line #!/bin/sh, then the system will be able to figure out that it's a standard Bourne shell script. You should be able to just run
ENTRYPOINT ["/var/lib/test/test.sh"]
# (or CMD ["/var/lib/test/test.sh"])
without directly saying sh anywhere.
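Putting that together, a sketch of the corrected Dockerfile from the question (the chmod is a belt-and-braces step in case the execute bit didn't survive the COPY) might look like:
FROM centos:centos7.9.2009
RUN mkdir -p /var/lib/test
COPY ./ /var/lib/test/
RUN yum -y localinstall /var/lib/test/*.rpm \
 && chmod +x /var/lib/test/test.sh
ENTRYPOINT ["/var/lib/test/test.sh"]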

How to pass an unknown list of environment variables to a command in Dockerfile

I have a very long and often changing list of environment variables which I need to pass to the same Docker image when starting it. These environment variables are configured in a Rancher environment and will be passed individually as such. They should all be passed to the command that is about to start within the image.
When I had just a few parameters it was possible to pass them while having them explicitly declared in the Dockerfile:
CMD [ "sh", "-c", "node src/server.js --param1=$ENV_PARAM_1" --param2=$ENV_PARAM_2 ... --paramN=$ENV_PARAM_N"" ]
Now this is not possible anymore because the list has grown too long and changes frequently. I also can't build a new image per use case.
I need something like:
CMD [ "sh", "-c", "node src/server.js $PRINT_ALL_MY_PARAMS_HERE" ]
Side note: The command will fail when providing command arguments that are unknown to the command.
Any idea how I could solve this?
You can override CMD when you run the container. Say you've built an image with a default command
CMD node src/server.js
When you go to actually run the container, you can override this with whatever you want
docker run \
-d -p ... \
my/image \
node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
As I've written it here the $ENV_PARAM_N will be resolved by the host system's shell, but if a tool is launching the container for you that might not be a problem. If some of the values are from Dockerfile ENV directives you'll need to force the container shell to do the expansion
docker run \
-d -p ... \
-e ENV_PARAM_2=not-in-the-dockerfile \
my/image \
sh -c 'node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...'
There's also a pattern of using the ENTRYPOINT as the main program to run and using CMD only for additional options.
ENTRYPOINT ["node", "src/server.js"]
CMD []
docker run \
-d -p ... \
my/image \
--param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
However, note in this case that you cannot ask the container shell to expand things for you. ENTRYPOINT must use the JSON-array syntax, and you can't usefully insert an sh -c anywhere in this command. (sh -c consumes only the single next shell "word" as its command string; any further words become positional parameters and are generally ignored.)
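A quick way to see this behaviour, assuming the stock busybox image is available:
$ docker run --rm busybox sh -c 'echo "the command string sees: $0 $1"' --param1=a --param2=b
the command string sees: --param1=a --param2=b
The words after the command string end up as $0 and $1; they are not run as extra commands.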
You could use ENTRYPOINT to define the part that should always run when launching the container, and CMD for the part that can be overridden by the command given at container launch:
ENTRYPOINT [ "sh", "-c", "node src/server.js"]
CMD ["--param1=$ENV_PARAM_1", "--param2=$ENV_PARAM_2",... "--paramN=$ENV_PARAM_N"]
This way you can have different parameters for each run:
docker run server # Executes ENTRYPOINT + CMD from Dockerfile
docker run server --help # Executes ENTRYPOINT + "--help"

Shebang in JavaScript ignored when executed as command in Docker with "/bin/bash" as entrypoint

When I try to execute a JavaScript file with a shebang such as #!/usr/bin/env node through the command argument of docker run ... it seems to "ignore" the shebang.
$ docker run --rm foobar/hello-world /hello-world.js
/hello-world.js: line 2: syntax error near unexpected token `'Hello, World!''
/hello-world.js: line 2: `console.log('Hello, World!');'
Dockerfile
FROM node:13.12-alpine
COPY hello-world.js /hello-world.js
RUN chmod +x /hello-world.js
RUN apk update && apk add bash
ENTRYPOINT ["/bin/bash"]
hello-world.js
#!/usr/bin/env node
console.log('Hello, World!');
When I use /hello-world.js as the entrypoint directly (ENTRYPOINT ["/hello-world.js"]) it works correctly.
Add -c to the entrypoint so bash treats its argument as a command to run; executing the file that way lets the kernel honour the shebang. Without -c, bash reads the argument as a bash script, and the #!/usr/bin/env node line is just a comment to it.
ENTRYPOINT ["/bin/bash", "-c"]
I'd recommend just setting the default CMD to the program you're installing in your container, and generally preferring CMD to ENTRYPOINT if you only need one of them.
FROM node:13.12-alpine
COPY hello-world.js /hello-world.js
RUN chmod +x /hello-world.js
CMD ["/hello-world.js"]
When you provide a command at the docker run command line, it overrides the Dockerfile CMD (if any), and it's appended to the ENTRYPOINT. In your original example the ENTRYPOINT from the Dockerfile is combined with the docker run command and you're getting a combined command bash /hello-world.js.
If you do need an interactive shell to debug the container, you can launch that with
docker run --rm -it foobar/hello-world /bin/sh

How are CMD and ENTRYPOINT exec forms in a docker file parsed?

I am running Jupyter in a Docker container. The following shell form will run fine:
CMD jupyter lab --ip='0.0.0.0' --port=8888 --no-browser --allow-root /home/notebooks
But the following one in the Dockerfile will not:
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["jupyter", "lab", "--ip='0.0.0.0'", "--port=8888", "--no-browser", "--allow-root", "/home/notebooks"]
The error is:
usage: jupyter [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [subcommand]
jupyter: error: one of the arguments --version subcommand --config-dir --data-dir --runtime-dir --paths is required
So obviously /bin/sh -c sees the jupyter argument, but not the following ones.
Interestingly,
CMD ["jupyter", "lab", "--ip='0.0.0.0'", "--port=8888", "--no-browser", "--allow-root", "/home/notebooks"]
will run fine, so it cannot be the number of arguments, or can it?
According to https://docs.docker.com/engine/reference/builder/#cmd, the shell form of CMD executes with /bin/sh -c. So from my point of view I see little difference in the 2 versions. But the reason must be how the exec forms are being evaluated when ENTRYPOINT and CMD are present at the same time.
At a very low level, Linux commands are executed as a series of "words". Typically your shell will take a command line like ls -l "a directory" and break it into three words: ls, -l, and a directory. (Note the space in "a directory": in the shell form it needs to be quoted to stay in a single word.)
The Dockerfile CMD and ENTRYPOINT (and RUN) commands have two forms. In the form you've specified that looks like a JSON array, you are explicitly specifying how the words get broken up. If it doesn't look like a JSON array then the whole thing is taken as a single string, and wrapped in an sh -c command.
# Explicitly spelling out the words
RUN ["ls", "-l", "a directory"]
# Asking Docker to run it via a shell
RUN ls -l 'a directory'
# The same as
RUN ["sh", "-c", "ls -l 'a directory'"]
If you specify both ENTRYPOINT and CMD the two lists of words just get combined together. The important thing for your example is that sh -c takes the single next word and runs it as a shell command; any remaining words can be used as $0, $1, ... positional arguments within that command string.
So in your example, the final thing that gets run is more or less
ENTRYPOINT+CMD ["sh", "-c", "jupyter", ...]
# If the string "jupyter" contained "$1" it would expand to the --ip option
The other important corollary is that, in practice, ENTRYPOINT can't use the bare-string form: when the CMD is appended to it you get
ENTRYPOINT some command
CMD with args
ENTRYPOINT+CMD ["sh", "-c", "some command", "sh", "-c", "with args"]
and by the same rule all of the CMD words get ignored.
In practice you almost never need to explicitly put sh -c or a SHELL declaration in a Dockerfile; use a string-form command instead, or put complex logic into a shell script.
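For example, a hedged sketch of the shell-script approach (start-notebook.sh is a made-up name for illustration):
#!/bin/sh
# start-notebook.sh: any complex startup logic lives here
exec jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root /home/notebooks
and in the Dockerfile:
COPY start-notebook.sh /usr/local/bin/start-notebook.sh
RUN chmod +x /usr/local/bin/start-notebook.sh
CMD ["/usr/local/bin/start-notebook.sh"]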

Docker ENV in CMD

I am trying to do the following:
Pass a build arg to my docker build command
Store that as an env variable in my container
Use it in my CMD when my container launches.
Below is my setup:
FROM ubuntu:xenial
ARG EXECUTABLE
ENV EXECUTABLE ${EXECUTABLE}
CMD ["/opt/foo/bin/${EXECUTABLE}", "-bar"]
Here is how I'm building the container
docker build --build-arg EXECUTABLE=$EXECUTABLE -t test_image .
Here is how I'm running the image
docker run -d test_image
When I run the container it crashes and tells me
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:296: starting container process caused
"exec: \"/opt/foo/bin/${EXECUTABLE}\": stat /opt/foo/bin/${EXECUTABLE}:
no such file or directory": unknown.
To use environment variables in CMD, you need to run it through a shell.
https://docs.docker.com/engine/reference/builder/#cmd
Note: Unlike the shell form, the exec form does not invoke a command
shell. This means that normal shell processing does not happen. For
example, CMD [ "echo", "$HOME" ] will not do variable substitution on
$HOME. If you want shell processing then either use the shell form or
execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME"
]. When using the exec form and executing a shell directly, as in the
case for the shell form, it is the shell that is doing the environment
variable expansion, not docker.
Based on this, I think the following Dockerfile will work:
FROM ubuntu:xenial
ARG EXECUTABLE
ENV EXECUTABLE ${EXECUTABLE}
CMD [ "sh", "-c", "/opt/foo/bin/${EXECUTABLE}", "-bar"]
You'll have to write out an executable or shim as ARG / ENV substitution is not supported for CMD.
The list of supported substitutions:
ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR
as well as:
ONBUILD (when combined with one of the supported instructions above)
A workaround is to write your executable to a file and execute that:
FROM ubuntu:xenial
ARG EXECUTABLE
RUN : \
&& /bin/echo -e "#!/bin/sh\nexec '/bin/$EXECUTABLE' -bar" > /exe \
&& chmod +x /exe
CMD ["/exe"]
Build:
docker build -t test --build-arg EXECUTABLE=echo .
Run:
$ docker run -ti test
-bar
Another way is to pass the environment variable when running the container:
docker run -e EXECUTABLE=<some_value> <docker_image>
Then, in the Dockerfile, use the shell form with exec, so the shell expands the variable and the binary replaces the sh -c wrapper as PID 1:
CMD exec /opt/foo/bin/${EXECUTABLE} -bar
