Docker Compose and execute command on starting container - docker

I am trying to get my head around the COMMAND option in Docker Compose. In my current docker-compose.yml I start the prosody Docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a single user can be created using the environment variables LOCAL, DOMAIN, and PASSWORD, but I need a list of users.
From what I have read around the internet, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work: I checked the running container using docker exec -it <containerid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose, or are there other options?

The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
The command is interpreted as arguments to the ENTRYPOINT of the image (the debian image has no entrypoint, so there the command simply replaces the default CMD, /bin/bash). In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"], which is presumably a long-running process that starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.

CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start Prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your Prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant and somehow what the creator seems to have intended (judging from the entrypoint.sh script) would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).
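Since prosodyctl register creates one user per call, a small loop over docker exec can cover the whole list the question asks for. A hedged sketch (the container name prosody and the sample users/passwords are made up; this requires a running Docker daemon):

```shell
# Register several users in an already-running container (assumed name: prosody).
# Each docker exec call creates exactly one user.
for user in alice bob carol; do
    docker exec prosody prosodyctl register "$user" localhost "changeme-$user"
done
```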

Related

What is the difference between docker run -p and ports in docker-compose.yml?

I would like to use a standard way of running my Docker containers. I have been keeping a docker_run.sh file, but docker-compose.yml looks like a better choice. This seems to work great until I try to access my website running in the container. The ports don't seem to be set up correctly.
Using the following docker_run.sh, I can access the website at localhost. I expected the following docker-compose.yml file to have the same results when I use the docker-compose run web command.
docker_run.sh
docker build -t web .
docker run -it -v /home/<user>/git/www:/var/www -p 80:80/tcp -p 443:443/tcp -p 3316:3306/tcp web
docker-compose.yml
version: '3'
services:
  web:
    image: web
    build: .
    ports:
      - "80:80"
      - "443:443"
      - "3316:3306"
    volumes:
      - "../www:/var/www"
Further analysis
The ports are reported as the same in docker ps and docker-compose ps. Note: these were not up at the same time.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<id> web "/usr/local/scripts/…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:3307->3306/tcp <name>
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------
web /usr/local/scripts/start_s ... Up 0.0.0.0:3316->3306/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
What am I missing?
As @richyen suggests in a comment, you want docker-compose up instead of docker-compose run.
docker-compose run...
Runs a one-time command against a service.
That is, it's intended to run something like a debugging shell or a migration script, in the overall environment specified by the docker-compose.yml file, but not the standard command specified in the Dockerfile (or the override in the YAML file).
Critically to your question,
...docker-compose run [...] does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag.
Beyond that, the docker run command you show and the docker-compose.yml file should be essentially equivalent.
You don't run a docker-compose.yml the same way you would run a local Docker image that you have installed or built on your machine. Compose files are typically launched with docker-compose up -d to run in detached mode. Then when you run docker ps you should see the container running. You can also run docker-compose ps as you did above.
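To make the distinction concrete, here is a minimal sketch of the invocations (the service name web is assumed); only up and run --service-ports publish the ports from the YAML:

```shell
# Normal startup: creates the service container and publishes its ports
docker-compose up -d web

# One-off command: runs in the service's environment but does NOT publish ports...
docker-compose run web bash

# ...unless you ask for them explicitly:
docker-compose run --service-ports web
```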

docker-compose equivalent for docker run arguments like "--foo bar" in compose file?

Given the following docker run command:
docker run \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest \
  --acme-domain <YOUR.DNS.NAME>
What is the notation for writing --acme-domain in the docker-compose file? I was not able to find this in the docs. Thanks
Everything after the image name in your docker run command line is the "command", which gets executed either by the shell or by your ENTRYPOINT script. The equivalent docker-compose directive is command. For example:
service:
  image: rancher/rancher:latest
  ports:
    - "80:80"
    - "443:443"
  command: "--acme-domain <YOUR.DNS.NAME>"
...
You can try
docker-compose run service_name --acme-domain <example.com>
Runs a one-time command against a service. For example, the following
command starts the web service and runs bash as its command.
docker-compose run web bash
Commands you use with run start in new containers with configuration
defined by that of the service, including volumes, links, and other
details. However, there are two important differences.
First, the command passed by run overrides the command defined in the
service configuration. For example, if the web service configuration
is started with bash, then docker-compose run web python app.py
overrides it with python app.py.
docker-compose-run
update:
As mentioned by @larsks, anything passed to command in docker-compose will be treated as an argument. If you look into the Dockerfile, the entrypoint is
exec tini -- rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=${AUDIT_LOG_PATH} --audit-level=${AUDIT_LEVEL} --audit-log-maxage=${AUDIT_LOG_MAXAGE} --audit-log-maxbackup=${AUDIT_LOG_MAXBACKUP} --audit-log-maxsize=${AUDIT_LOG_MAXSIZE} "${@}"
so you can follow @larsks's answer, or you can try the above without changing the docker-compose file, as the entrypoint will process "${@}".

Docker containers out of same image don't work as expected

I have a docker-compose.yml set up like this:
app:
  build:
    dockerfile: ./docker/app/Dockerfile.dev
  image: test/test:${ENV}-test-app
  ...
Dockerfile called here has this line present:
...
RUN ln -s ../overrides/${ENV}/plugins ../plugins
...
And there is also a script I am running to get the whole environment up (it is dependent upon several containers, so I have omitted irrelevant info).
It is a bash script and running the following:
ENV=$1 docker-compose -p $1 up -d --force-recreate --build app
What I wanted to achieve is that I can run two app containers at the same time, and this works as follows:
sh initializer.sh foo -> creates foo-test-app container
sh initializer.sh bar -> creates bar-test-app container
Now the issue I'm having is that even with the --force-recreate flag present, the two images created are actually seen as the same image with two different tags.
And what this does when I inspect the containers is that both containers have a symbolic link to:
overrides/foo/plugins
When I create the new container, that part is not re-done. How can I fix it?
Also, if I sh into one container and change the symbolic link, it is automatically changed in the other container as well.
$ENV in your Dockerfile is not the same as the one in your Compose file.
When you run docker-compose up, it can roughly be seen as a docker build followed by a docker run. So Docker builds the image layer by layer, and at that stage there is no environment variable called ENV. Only at docker run time will $ENV be used.
Environment variables at build stage can be used though, they are passed via ARG
# compose.yml
build:
  context: frontend
  args:
    - BUILD_ENV=${BUILD_ENV}

# dockerfile
ARG BUILD_ENV
RUN ./node_modules/.bin/ng build --$BUILD_ENV
You can do this to solve your problem; however, it will create one image per project, which you may not want. Alternatively, you can do it in an entrypoint script.
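A minimal sketch of the entrypoint-script variant (the file name entrypoint.sh is an assumption): the symlink is created at container start, when $ENV is finally available, instead of being baked into the image at build time:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): runs at `docker run` time, when $ENV is set,
# so each container gets a link to its own project's overrides.
ln -sfn "../overrides/${ENV}/plugins" ../plugins

# Hand off to the image's original command.
exec "$@"
```

In the Dockerfile you would then drop the RUN ln -s line, add the script, and set ENTRYPOINT ["/entrypoint.sh"] while keeping the original CMD.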
I have found the answer to be in the project flag when creating my containers. So this is what I did:
docker-compose -p foo up -d
docker-compose -p bar up -d
This would bring containers up as 2 separate projects.
Link to documentation

How can I make my Docker compose "wait-for-it" script invoke the original container ENTRYPOINT or CMD?

According to Controlling startup order in Compose, one can control the order in which Docker Compose starts containers by using a "wait-for-it" script. Script wait-for-it.sh expects both a host:port argument as well as the command that the script should execute when the port is available. The documentation recommends that Docker Compose invoke this script using the entrypoint: option. However, if one uses this option, the container will no longer run its default ENTRYPOINT or CMD because entrypoint: overrides the default.
How might one provide this default command to wait-for-it.sh so that the script can invoke the default ENTRYPOINT or CMD when the condition for which it waits is satisfied?
In my case, I've implemented a script wait-for-file.sh that polls waiting for a file to exist:
#!/bin/bash
set -e

waitFile="$1"
shift
cmd="$@"

until test -e "$waitFile"
do
  >&2 echo "Waiting for file [$waitFile]."
  sleep 1
done

>&2 echo "Found file [$waitFile]."
exec $cmd
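For reference, the polling pattern above can be exercised without Docker at all. This self-contained sketch (the temp file and the placeholder command are made up) simulates another container creating the file:

```shell
#!/bin/bash
# Simulate the other container: create the "done" file after a short delay.
waitFile=$(mktemp -u)
(sleep 1; touch "$waitFile") &

# Same polling loop as wait-for-file.sh:
until test -e "$waitFile"; do
    echo "Waiting for file [$waitFile]." >&2
    sleep 0.2
done
echo "Found file [$waitFile]." >&2

# The real script would now `exec` the wrapped command; echo stands in for it.
echo "the wrapped command would start here"
```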
Docker Compose invokes wait-for-file.sh as the entry-point to a slightly custom container derived from tomcat:8-jre8:
platinum-oms:
  image: opes/platinum-oms
  ports:
    - "8080:8080"
  volumes_from:
    - liquibase
  links:
    - postgres:postgres
    - activemq:activemq
  depends_on:
    - liquibase
    - activemq
  entrypoint: /wait-for-file.sh /var/run/liquibase/done
Before it exits successfully, another custom container liquibase creates /var/run/liquibase/done and so platinum-oms effectively waits for container liquibase to complete.
Once container liquibase creates file /var/run/liquibase/done, wait-for-file.sh prints Found file [/var/run/liquibase/done]., but fails to invoke the default command catalina.sh run of the base container tomcat:8-jre8. Why?
Test Scenario
I created a simplified test scenario docker-compose-wait-for-file to demonstrate my problem. Container ubuntu-wait-for-file waits for container ubuntu-create-file to create file /wait/done and then I expect container ubuntu-wait-for-file to invoke the default ubuntu container command /bin/bash, but instead, it exits. Why doesn't it work as I expect?
However, if one uses this option, the container will no longer run its default ENTRYPOINT or CMD command because entrypoint: overrides the default.
That is expected, which is why the wait-for-it is presented as a wrapper script.
It does allow executing a "subcommand", though:
wait-for-it.sh host:port [-s] [-t timeout] [-- command args]
                                            ^^^^^^^^^^^^^^
The subcommand will be executed regardless of whether the service is up or not.
If you wish to execute the subcommand only if the service is up, add the --strict argument.
That means the CMD part of your image can be used for your actual container command, as its parameters will be passed as parameters to the ENTRYPOINT command:
entrypoint: wait-for-it.sh host:port --
command: mycmd myargs
This should work... except for docker-compose issue 3140 (mentioned by the OP Derek Mahar in the comments)
entrypoint defined in docker-compose.yml wipes out CMD defined in Dockerfile
That issue suggests (Jan. 2021)
If you have a custom image you can add a start script to the build, call it inside the Dockerfile, and in the docker-compose file you can call it again.
That is a way to avoid duplication for more complicated entrypoints.
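A sketch of that suggestion (the script name start.sh is an assumption): keep the wait logic in one script that ends with exec "$@", so whatever CMD (or Compose command:) is supplied still runs afterwards:

```shell
#!/bin/sh
# start.sh (hypothetical): referenced from both the Dockerfile
#   ENTRYPOINT ["/start.sh"]
#   CMD ["catalina.sh", "run"]
# and, if needed, from docker-compose.yml (entrypoint: /start.sh).

# ...wait for dependencies here, e.g. /wait-for-it.sh db:5432 -t 30 ...

# Hand control to the CMD / `command:` without leaving a shell in between:
exec "$@"
```

Because the wait logic lives in the ENTRYPOINT and the real command arrives as "$@", overriding only command: in Compose no longer wipes out the startup sequence.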

Linode/lamp + docker-compose

I want to install linode/lamp container to work on some wordpress project locally without messing up my machine with all the lamp dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash

service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though, which you do via two lines in the Dockerfile. Overall I'd use this Dockerfile then:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when the container fires up, so that the container stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If anything is already bound to it, this Compose file will not work.
Should this be the case for you (or if you're not sure), try changing the port line to something like 81:80 and try again.
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp
