Do docker-compose's two `command` forms behave differently?

In Dockerfile, we can specify the CMD to be in one of three different forms:
...
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
...
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
Source
In docker-compose, we can also use two different forms for its command:
...
Override the default command.
command: bundle exec thin -p 3000
The command can also be a list, in a manner similar to dockerfile:
command: ["bundle", "exec", "thin", "-p", "3000"]
Source
Do these two different forms in docker-compose behave the same way as the exec and shell forms in Dockerfile do?

No, they do not. docker-compose's command always uses the equivalent of Dockerfile's exec form. This can easily be seen with a quick demo:
Dockerfile with shell form processes shell symbols:
FROM alpine:3.11.5
CMD echo foo && echo bar
$ docker run example
foo
bar
Dockerfile with exec form doesn't do any shell processing:
FROM alpine:3.11.5
CMD ["echo", "foo", "&&", "echo", "bar"]
$ docker run example
foo && echo bar
docker-compose's first command form doesn't do any shell processing:
version: "3.7"
services:
example:
image: alpine:3.11.5
command: echo foo && echo bar
$ docker-compose up
Starting example_example_1 ... done
Attaching to example_example_1
example_1 | foo && echo bar
example_example_1 exited with code 0
And neither does docker-compose's second command form:
version: "3.7"
services:
example:
image: alpine:3.11.5
command: ["echo", "foo", "&&", "echo", "bar"]
$ docker-compose up
Starting example_example_1 ... done
Attaching to example_example_1
example_1 | foo && echo bar
example_example_1 exited with code 0
 
So, if one does want shell processing in docker-compose's command, it has to be explicitly enabled by passing the command to sh -c:
version: "3.7"
services:
example:
image: alpine:3.11.5
command: sh -c "echo foo && echo bar"
$ docker-compose up
Starting example_example_1 ... done
Attaching to example_example_1
example_1 | foo
example_1 | bar
example_example_1 exited with code 0
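The same thing can also be written in list form; this is just a sketch of the equivalent spelling of the example above, and it behaves identically because the whole shell command is a single array element handed to sh -c:
version: "3.7"
services:
  example:
    image: alpine:3.11.5
    # the entire && chain is one argument, so the shell started by sh -c processes it
    command: ["sh", "-c", "echo foo && echo bar"]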

Related

Run multiple commands after entrypoint with docker-compose

I have been looking at the matrix of interactions between CMD and ENTRYPOINT and I can't find a way to have a container run an entrypoint THEN a cmd with multiple commands
version: '3.8'
services:
  test:
    image: debian:buster-slim
    entrypoint: [ "/entrypoint.sh" ]
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    command: [ "echo", "toto", "&&", "echo", "tutu" ]
where entrypoint.sh is a file containing:
#!/bin/bash
set -e
set -x
echo tata
exec "$@"
"should" print
tata
toto
tutu
but it's printing
tata
toto && echo tutu
I found a solution by replacing [ "echo", "toto", "&&", "echo", "tutu" ] with "bash -c 'echo toto && echo tutu'" and then it works.
but I don't get why the first method does not work, since the documentation says it will do:
exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd
The problem is caused by the exec command, whose synopsis is:
exec [command [argument...]]
so it will only accept one command with multiple arguments.
Solution:
The solution is the one that you pointed out, by using sh -c '':
services:
test:
image: debian:buster-slim
entrypoint: [ "/entrypoint.sh" ]
volumes:
- ./entrypoint.sh:/entrypoint.sh
command: ["sh", "-c", "echo toto && echo tutu"]
because the final result will satisfy the exec command with one command and multiple arguments
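To make that concrete, here is a hypothetical terminal trace of what the fixed form resolves to, using the same entrypoint.sh as above (with exec "$@") and the argument values from the compose file:
# entrypoint.sh receives three arguments: sh, -c, and the full shell string,
# so exec "$@" replaces the script with a single sh process
./entrypoint.sh sh -c 'echo toto && echo tutu'
+ echo tata
tata
+ exec sh -c 'echo toto && echo tutu'
toto
tutu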
On the Docker side, the official documentation explains the ENTRYPOINT vs CMD interaction very well with this table:
(table: how CMD and ENTRYPOINT interact, from the Dockerfile reference)
source
If you combine CMD and ENTRYPOINT in the array (exec) form, the result is /entrypoint.sh "echo" "toto" "&&" "echo" "tutu", because each parameter of the CMD becomes a parameter of the ENTRYPOINT.
Here's the output of the example above executed directly in the terminal:
# ./entrypoint.sh "echo" "toto" "&&" "echo" "tutu"
+ echo tata
tata
+ exec echo toto '&&' echo tutu
toto && echo tutu
And this is the result of docker-compose up:
# docker-compose up
test_1 | + echo tata
test_1 | tata
test_1 | + exec echo toto '&&' echo tutu
test_1 | toto && echo tutu
root_test_1 exited with code 0
As you can see, each parameter is passed in array form, so the '&&' is treated as a plain string (note the single quotes).
Note:
The result you expected is this one:
# ./entrypoint.sh echo toto && echo tutu
+ echo tata
tata
+ exec echo toto
toto
tutu
In this scenario, as you can see, the only command passed to exec is the first echo toto.
echo tutu is executed by the calling bash shell after the ./entrypoint.sh script exits.
Obviously, even if Docker parsed this as a separate command, it would never be executed, because the ENTRYPOINT exits before the echo tutu command runs.

CMD doesn't run in Dockerfile [duplicate]

Inside my Dockerfile:
ENV PROJECTNAME mytestwebsite
CMD ["django-admin", "startproject", "$PROJECTNAME"]
Error:
CommandError: '$PROJECTNAME' is not a valid project name
What is the quickest workaround here? Does Docker have any plan to "fix" or introduce this functionality in later versions of Docker?
NOTE: If I remove the CMD line from the Dockerfile and then run the Docker container, I am able to manually run django-admin startproject $PROJECTNAME from inside the container and it will create the project...
When you use an execution list, as in...
CMD ["django-admin", "startproject", "$PROJECTNAME"]
...then Docker will execute the given command directly, without involving a shell. Since there is no shell involved, that means:
No variable expansion
No wildcard expansion
No i/o redirection with >, <, |, etc
No multiple commands via command1; command2
And so forth.
If you want your CMD to expand variables, you need to arrange for a shell. You can do that like this:
CMD ["sh", "-c", "django-admin startproject $PROJECTNAME"]
Or you can use a simple string instead of an execution list, which gets you a result largely identical to the previous example:
CMD django-admin startproject $PROJECTNAME
If you want to use the value at runtime, set the ENV value in the Dockerfile. If you want to use it at build-time, then you should use ARG.
Example:
ARG value
ENV envValue=$value
CMD ["sh", "-c", "java -jar ${envValue}.jar"]
Pass the value in the build command:
docker build -t tagName --build-arg value="jarName" .
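For completeness, a possible build-and-run sequence for the snippet above, assuming it is embedded in a complete Dockerfile (tagName and jarName are just the placeholder values from the example):
# bake the build-arg into the ENV value at build time (note the build context "."):
docker build -t tagName --build-arg value="jarName" .
# at run time, the sh -c wrapper expands ${envValue},
# so this effectively runs: java -jar jarName.jar
docker run --rm tagName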
You also can use exec
This is the only known way to handle signals and use env vars simultaneously.
It can be helpful while trying to implement something like graceful shutdown according to Docker github
Example:
ENV PROJECTNAME mytestwebsite
CMD exec django-admin startproject $PROJECTNAME
Lets say you want to start a java process inside a container:
Example Dockerfile excerpt:
ENV JAVA_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm
...
ENTRYPOINT ["/sbin/tini", "--", "entrypoint.sh"]
CMD ["java", "${JAVA_OPTS}", "-myargument=true"]
Example entrypoint.sh excerpt:
#!/bin/sh
...
echo "*** Startup $0 suceeded now starting service using eval to expand CMD variables ***"
exec su-exec mytechuser $(eval echo "$#")
For Java developers, the following solution of mine should work:
If you try to run your container with a Dockerfile like the one below
ENTRYPOINT ["/docker-entrypoint.sh"]
# it does not matter whether the parameter is written as $JAVA_OPTS or ${JAVA_OPTS}
CMD ["java", "$JAVA_OPTS", "-javaagent:/opt/newrelic/newrelic.jar", "-server", "-jar", "app.jar"]
with an ENTRYPOINT shell script below:
#!/bin/bash
set -e
source /work-dir/env.sh
exec "$#"
it will build the image correctly but print the error below when the container runs:
Error: Could not find or load main class $JAVA_OPTS
Caused by: java.lang.ClassNotFoundException: $JAVA_OPTS
Instead, Java can read command line parameters either from the command line or from the _JAVA_OPTIONS environment variable. This means we can pass the desired parameters through _JAVA_OPTIONS without changing anything in the Dockerfile, while still letting the Java process start as the container's main process (so Docker signal handling works) via exec "$@".
Below is my final version of the Dockerfile and docker-entrypoint.sh files:
...
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["java", "-server", "-jar", "app.jar"]
#!/bin/bash
set -e
source /work-dir/env.sh
export _JAVA_OPTIONS="-XX:+PrintFlagsFinal"
exec "$#"
After you build your Docker image and run it, you will see the logs below, which means it worked:
Picked up _JAVA_OPTIONS: -XX:+PrintFlagsFinal
[Global flags]
int ActiveProcessorCount = -1 {product} {default}
Inspired by the above, I did this:
# snapshot by default; 1 is release.
ENV isTagAndRelease=0
CMD echo is_tag: ${isTagAndRelease} && \
if [ ${isTagAndRelease} -eq 1 ]; then echo "release build"; mvn -B release:clean release:prepare release:perform; fi && \
if [ ${isTagAndRelease} -ne 1 ]; then echo "snapshot build"; mvn clean install; fi && \
.....
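A possible way to flip that flag at run time (the image name below is a placeholder, not from the original answer); an -e override takes precedence over the ENV default baked into the image:
docker run --rm -e isTagAndRelease=1 my-build-image   # release build
docker run --rm my-build-image                        # snapshot build (default 0)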

docker-compose run multiple commands for a service

I am using Docker on Windows - version 18.03 (client)/18.05 (server). I have created a docker-compose file for the ELK stack. Everything is working fine. What I would like to do is install logtrail before kibana is started. I was thinking about copying logtrail*.zip first, then calling install:
container_name: kibana
(...)
command:
- docker cp kibana:/ ./kibana/logtrail/logtrail-6.7.1-0.1.31.zip
- /bin/bash
- ./bin/kibana-plugin install/logtrail-6.7.1-0.1.31.zip
But that doesn't look like the right way: first of all it doesn't work, second of all I am not sure if I can call multiple commands like I did, and third of all I'm not sure if docker cp is even allowed in command at that stage of service creation.
command:
  - /bin/bash
  - -c
  - |
    echo "This is a multiline command"
    echo "See how I escape $$ sign"
    echo $$PATH
You can run multiple commands like the above; however, you cannot run docker cp as in your command.
You can run multiple commands for a service in docker compose by:
command: sh -c "command1 && command2 && command3"
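As a sketch of how that could look in this ELK setup (the image name, plugin path, and start command below are placeholders and are not verified against the real Kibana image layout):
services:
  kibana:
    image: my-kibana-image            # hypothetical image with the plugin zip already copied in
    container_name: kibana
    # install the plugin, then start the service, all in one shell
    command: sh -c "./bin/kibana-plugin install file:///tmp/logtrail-6.7.1-0.1.31.zip && ./bin/kibana"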
That's my solution for this case:
# OPTION 01:
# command: >
#   bash -c "chmod +x /scripts/rs-init.sh
#   && sh /scripts/rs-init.sh"
# OPTION 02:
# entrypoint: [ "bash", "-c", "chmod +x /scripts/rs-init.sh && sh /scripts/rs-init.sh" ]
If you're looking to install software, David Maze's comment seems to be the standard path. If you want to actually run multiple commands, look at the answer to this SO question: Using Docker-Compose, how to execute multiple commands.

Why does redirecting container stdout to a file not work?

I set up a simple environment for testing.
Dockerfile
FROM ubuntu:16.04
COPY test.sh /
ENTRYPOINT /test.sh
test.sh
#!/bin/bash
while true; do
echo "test..."
sleep 5
done
docker-compose.yml
version: '3.4'
services:
  test:
    image: asleea/simple_test
    entrypoint: ["/test.sh", ">", "test.log"]
    # command: [">", "/test.log"]
    container_name: simple_test
Run the test container
$ docker-compose up
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting simple_test ...
Starting simple_test ... done
Attaching to simple_test
simple_test | test...
simple_test | test...
It is still printing to stdout there.
Check test.log inside the container
$ docker exec -it simple_test bash
$ cd /
$ ls
# No file named `test.log`
The test.log file for the redirection doesn't exist.
Docker seems to just ignore the redirection. Is this normal, and why? Or did I do something wrong?
Edit
Thank you @Sebastian for your answer; it works for redirecting stdout to a file.
However, one more question.
The docs you referred to also say the following:
If you use the shell form of the CMD, then the <command> will execute
in /bin/sh -c:
As I understand it, command: /test.sh > /test.log should be equivalent to command: ["sh", "-c", "/test.sh > /test.log"].
However, when I used command: /test.sh > /test.log, it didn't redirect either.
Why does command: ["sh", "-c", "/test.sh > /test.log"] work but not command: /test.sh > /test.log?
Am I misunderstanding something?
You need to make sure your command is executed in a shell. Try to use:
CMD [ "sh", "-c", "/test.sh > test.log" ]
You specified the command/entrypoint as a JSON array, which is called exec form:
The exec form does not invoke a command shell.
This means that normal shell processing does not happen.
Docker docs
I think you are doing something wrong with the syntax. The command parameter works with Compose and does the same thing as CMD. Try using command: sh -c '/test.sh > /tmp/test.log' in your compose file; it works fine.
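Putting the accepted fix back into the compose file from the question, a minimal sketch could look like this (same image and script as above):
version: '3.4'
services:
  test:
    image: asleea/simple_test
    container_name: simple_test
    # run the script through a shell so the redirection is actually processed
    entrypoint: ["sh", "-c", "/test.sh > /test.log"]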


Resources