I am experimenting with an nginx-based Dockerfile. It currently ends like this:
FROM nginx:alpine
... # not really relevant
CMD /bin/sh -c "envsubst < /etc/nginx/conf.d/site.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
Now when I run the container with docker run my-nginx-image, CTRL-C no longer stops the container.
Before that change, I had the following CMD statement in the end:
CMD ["nginx", "-g", "daemon off;"]
Here, CTRL-C worked as expected: the container was stopped. Why is that? And how can I have the best of both worlds?
CTRL-C working
envsubst included
Update
After some reading, I realized that I have to bootstrap with CMD [...]. But I can't figure out how to fit the whole command envsubst < ... > ... && nginx -g 'daemon off;' into the [...] syntax.
https://forums.docker.com/t/docker-run-cannot-be-killed-with-ctrl-c/13108/2
So there are two factors at play here:
If you specify a string for an entrypoint, like this:
ENTRYPOINT /go/bin/myapp
Docker runs the script with /bin/sh -c 'command'. This intermediate
script gets the SIGTERM, but doesn’t send it to the running server
app.
To avoid the intermediate layer, specify your entrypoint as an array
of strings.
ENTRYPOINT ["/go/bin/myapp"]
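Applied to the original question, the shell can still perform the substitution as long as it execs nginx at the end: exec replaces the intermediate shell, so nginx becomes PID 1 and receives the signal from CTRL-C directly. A minimal sketch reusing the paths from the question:

```Dockerfile
FROM nginx:alpine
# The shell does the substitution, then exec replaces the shell with nginx,
# so signals (SIGINT from CTRL-C, SIGTERM from docker stop) reach nginx directly.
CMD ["/bin/sh", "-c", "envsubst < /etc/nginx/conf.d/site.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
```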
Related
I'm using this in my Dockerfile, without CMD or ENTRYPOINT; I'm relying on the official nginx base image's docker-entrypoint.sh:
FROM nginx:1.18
I've tried adding the following command in my docker-compose, but the container just keeps restarting with no error message.
command: >
sh -c 'echo "My Own Command" && /usr/sbin/nginx -g "daemon off;"'
It will work fine if I remove the command from my docker-compose.yml.
My final objective is to add some scripts that export secrets into environment variables, but I couldn't get the underlying docker-entrypoint.sh or the nginx -g "daemon off;" command to run and keep the container going.
I'm building an SPA app that I would like to host in a Docker container. The app requires some configuration (e.g. the URL of the backend). I decided to create a short bash script that reads environment variables and assembles the configuration file, but if I try to run it via CMD or ENTRYPOINT the container dies immediately. I suppose I'm overriding the entrypoint from the original Dockerfile? Do I really need to start nginx manually once I'm done preparing this file?
In this case you have it easy: the official nginx image doesn't declare an ENTRYPOINT, so you can add your own without conflicting with anything in the base image. The important details here:
When the entrypoint exits, the container is finished
The entrypoint is passed the CMD or docker run command as arguments
If the entrypoint is a shell script, it therefore usually wants to end with exec "$@"
A typical entrypoint script for this sort of thing might look like:
#!/bin/sh
# Substitute the backend URL into the config, then hand control off to the CMD
sed -i.bak -e "s/EXTERNAL_URL/$EXTERNAL_URL/g" /etc/nginx/nginx.conf
exec "$@"
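To see what that sed line does in isolation, here is a stand-alone demo (the value backend:8080 and the proxy_pass line are made up for illustration). One caveat: sed's s/// form breaks if the variable's value contains /, which URLs usually do, so an alternate delimiter such as | is safer:

```shell
# Hypothetical example value; a URL contains "/", so use "|" as the sed
# delimiter instead of "/" to avoid breaking the s command.
EXTERNAL_URL="http://backend:8080"
echo 'proxy_pass EXTERNAL_URL;' | sed -e "s|EXTERNAL_URL|$EXTERNAL_URL|g"
# prints: proxy_pass http://backend:8080;
```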
(For this specific task I've found envsubst to be very useful, but I don't think it's present in the Alpine-based images; it's not a "standard" tool but it will be there in a full GNU-based runtime environment, like the Debian-based images have. It goes through files and replaces $VARIABLE references with the contents of the matching environment variables.)
Yes, you are overriding the CMD.
Recommended way:
Try to use environment variables in your app whenever possible. This way you don't need to change the entrypoint/cmd of the official nginx container.
If it's not possible:
The nginx Dockerfile uses CMD ["nginx", "-g", "daemon off;"]; you can override it with:
docker run -d --name yourapp-nginx <put other required docker switches here> <your image name:tag> /bin/sh -c '/path/to/yourscript.sh && /usr/sbin/nginx -g "daemon off;"'
Or you can put it as CMD/Entrypoint in your Dockerfile, if you are building your own image.
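For the Dockerfile route, one possible sketch (assuming your script is copied to /path/to/yourscript.sh, the path used above; the exec ensures nginx, not the wrapping shell, receives stop signals):

```Dockerfile
FROM nginx
COPY yourscript.sh /path/to/yourscript.sh
RUN chmod +x /path/to/yourscript.sh
# exec hands PID 1 over to nginx so docker stop / CTRL-C work as expected
CMD ["/bin/sh", "-c", "/path/to/yourscript.sh && exec nginx -g 'daemon off;'"]
```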
I found that when I create my own image, some commands from the parent image are not executed. Despite the fact that the nginx image exposes port 80, I had to add an EXPOSE command to mine:
FROM nginx
ADD dist/* /usr/share/nginx/html/
EXPOSE 80/tcp
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
Additionally I had to start nginx manually in my docker-entrypoint.sh:
#!/bin/bash
echo "{ backendUrl: '$BACKEND_URL' }" >> /usr/share/nginx/html/config
exec "$@"
# Reached only when no command was passed (exec "$@" is a no-op then),
# so start nginx explicitly:
nginx -g "daemon off;"
In the process of making a Docker image, I have to change its command from /bin/sh to nginx -g "daemon off;" (exactly that).
I wrote:
docker commit --change="EXPOSE 80" --change='CMD ["nginx", "-g", "\"daemon off;\""]' ${arr[0]} mine/nginx_final
Where ${arr[0]} expands to the correct docker container.
However, when I try to run this container, it fails with the error:
nginx: [emerg] unexpected end of parameter, expecting ";" in command line
Docker inspect also doesn't show anything bad:
"Cmd": [
"nginx",
"-g",
"\"daemon off;\""
],
This is as expected, and I expect the "\"daemon off;\"" to expand to "daemon off;".
Yet I'm pretty sure there is a ; sign after daemon off. Where did this sign go? And how can I debug (and fix) this?
Nginx can't process a global directive that includes literal quotes: "daemon off;". Drop the escaped quotes:
docker commit \
--change='EXPOSE 80' \
--change='CMD ["nginx", "-g", "daemon off;"]' \
${arr[0]} \
mine/nginx_final
Exec Form
CMD ["foo"] is called the exec form. The process is run via exec rather than via a shell, and each element in the array becomes an argument to exec. The extra escaped quotes are being passed through to nginx:
CMD ["nginx", "-g", "\"daemon off;\""]
exec('nginx', '-g', '"daemon off;"')
Since the exec form already passes the space through unaltered, all you need is:
CMD ["nginx", "-g", "daemon off;"]
exec('nginx', '-g', 'daemon off;')
Shell Form
CMD foo is called the shell form. The global directive argument with spaces would need quoting here:
CMD nginx -g "daemon off;"
exec('sh', '-c', 'nginx -g "daemon off;"')
exec('nginx', '-g', 'daemon off;')
Otherwise the shell interpreting the command will split arguments on spaces and try to exec nginx with 3 arguments:
CMD nginx -g daemon off;
exec('sh', '-c', 'nginx -g daemon off;')
exec('nginx', '-g', 'daemon', 'off;')
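The splitting is easy to reproduce in any shell. printf prints each argument it receives on its own line, so it shows exactly how many arguments follow -g (brackets added for clarity; in the unquoted case the trailing ; is escaped only to keep the demo runnable, since a bare ; would end the shell command):

```shell
# Quoted: the directive travels as a single argument
printf '[%s]\n' -g "daemon off;"
# prints: [-g]
#         [daemon off;]

# Unquoted: the shell splits on the space
printf '[%s]\n' -g daemon off\;
# prints: [-g]
#         [daemon]
#         [off;]
```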
I have several base docker images which are not owned by me (so I cannot modify them). However, I'm creating new images from them with additional things installed.
What I can't figure out is how to tell dockerfile to copy the CMD (or ENTRYPOINT) of the base image. Something like this:
FROM other:latest
RUN my-extra-install
CMD <use-the-CMD-from-base-image>
I don't think there's any direct syntax for the CMD command to do what I want. I'm wondering if there's a workaround.
If you leave it blank in your new Dockerfile, it will inherit the CMD from the base image.
For example:
base
FROM ubuntu
CMD ["echo", "AAA"]
layer1
FROM base
If you build the above images and run layer1, you get the following:
$ sudo docker run -it layer1
AAA
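You can also verify the inherited value without running the container at all: docker inspect reads the CMD from the image metadata (layer1 being the image name from the example above), which should print the CMD inherited from base:

```console
$ docker inspect --format '{{json .Config.Cmd}}' layer1
```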
@Vor is right. But in this case:
# Dockerfile
FROM nginx:stable-alpine
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY ./docker-entrypoint.sh /
and
# docker-entrypoint.sh
#!/usr/bin/env sh
set -e
exec "$@"
the default CMD from nginx:stable-alpine won't reach exec "$@": declaring an ENTRYPOINT in a derived image resets any CMD inherited from the base image to an empty value. You must restate nginx-alpine's default CMD yourself(!) in the Dockerfile:
# Dockerfile
FROM nginx:stable-alpine
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
COPY ./docker-entrypoint.sh /
Or change your docker-entrypoint.sh:
# docker-entrypoint.sh
#!/usr/bin/env sh
set -e
exec nginx -g "daemon off;"
Hope it helps
I'm using docker on macOS with docker-osx-dev (https://github.com/brikis98/docker-osx-dev)
And all is OK. It helps me solve the problem with slow volumes. But every time I bring up my docker-compose, I have a problem with permissions and am forced to set permissions through docker exec and chmod ... I spent a lot of time trying to find a solution. I tried usermod with IDs 501 and 1000, but nothing helped. Do you have any idea how to fix this?
My project settings: https://bitbucket.org/SmileSergey/symfonydocker/src
Thanks!
You can use http://docker-sync.io with the https://github.com/EugenMayer/docker-sync/blob/master/example/docker-sync.yml#L47 to map the OSX user to the uid you have in the container. This removes the whole issue, even though it will replace docker-osx-dev completely.
As a quick workaround, you could try to add a command to your nginx part of your docker-compose.yml like follows:
nginx:
image: nginx:latest
...
volumes:
- "./config/nginx.conf:/etc/nginx/conf.d/default.conf"
- "./app:/var/www/symfony"
command: /bin/bash -c "chmod -R 777 /var/www/symfony/ && nginx -g 'daemon off;'"
Background:
the official nginx Dockerfile specifies following default command:
CMD ["nginx", "-g", "daemon off;"]
which is executing
nginx -g 'daemon off;'
You have already found a quick-and-dirty workaround: running chmod -R 777 /var/www/symfony/ within the container.
With the command in the docker-compose file above, we execute the chmod command before executing the nginx -g 'daemon off;' default command.
Note that a CMD specified in the Dockerfile is replaced by the command defined in a docker-compose file (or by any command specified on docker run ... myimage mycommand).
For dev on OSX, just use Vagrant + NFS shared folders. It solves all of these problems and doesn't require adding anything special to docker-compose for dev.