I have 4 containers that I run using a docker-compose file. One of these containers "orchestrates" the others' work and therefore calls the other containers to perform tasks.
My problem is that the output of this "orchestrator" container is always displayed only after all the other containers have finished and printed their own output.
For the sake of example, this is what the workflow of the containers can look like:
Orchestrator container
Container 2
Orchestrator container
Container 3
Orchestrator container
Container 4
Orchestrator container
Is there a way to enforce that the outputs are displayed in sequence?
When you run docker-compose in attached mode (without the -d flag, which runs containers in detached mode), you get output from all containers in real time, and there is no way to order the output.
A possible solution is to disable all logs from containers 2, 3, and 4 with a logging setting in the docker-compose file:
logging:
  driver: none
In this case you will get output only from the orchestrator container.
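A sketch of how this could look in the compose file (the service and image names are assumptions matching the example above):

```yaml
services:
  orchestrator:
    image: my_orchestrator   # hypothetical image name
  container2:
    image: my_worker         # hypothetical image name
    logging:
      driver: none           # silence this service's output
```

Repeat the logging block for each service whose output you want to suppress.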
Another way is to split the containers across different docker-compose files: one with the orchestrator container and a second with the other containers. You can then run docker-compose against a specific file:
docker-compose -f orchestrater_docker_compose.yml up
docker-compose -f services_docker_compose.yml up
Related
If I run docker-compose up, and then after a few seconds I open another terminal window to the same directory and run it again, will I get two separate instances of the container?
Or will the second one attach to the already-running container from the first one?
I can post my docker-compose.yml and Dockerfile if needed.
You can run multiple instances of a service in docker-compose via the docker-compose up --scale SERVICE=NUM command documented here.
For example, if you have a docker-compose.yml file with two services defined, named nginx and mysql, and you want to run 3 instances of the nginx container, you would run the following command:
docker-compose up --scale nginx=3
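A minimal docker-compose.yml sketch for this example (the image tags are assumptions):

```yaml
version: "3"
services:
  nginx:
    image: nginx:alpine
    # Note: no container_name here; a fixed name would prevent
    # scaling the service to multiple instances.
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With this file, the scale command above starts three nginx containers and a single mysql container.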
I have a Docker compose setup where I have 20 different services that depend on each other. I'm writing a script that runs tests on a container by using docker-compose run my_service ....
I've got a couple of issues with that though:
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files. This is a challenge because, as far as I know, the containers are shut down after docker-compose run finishes. The only solution I can think of is overriding the entrypoint with something like tail -f /dev/null, then executing the test command and retrieving the files. But that's a little cumbersome. Is there a better way?
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
After the tests finish, they should output both an XML file...
If the main function of your task is to read or produce files to the local filesystem, it's often better to run it outside of Docker. In the case of integration tests, this is even pretty straightforward: instead of running the tests inside a Docker container and pointing at the other containers' endpoints, run the tests on the host and point at their published ports. If your test environment can run docker-compose commands then you can launch the container stack as a test fixture.
If for some reason they have to run in Docker, then you can bind-mount a host directory into the container to receive the result files. docker-compose run does support additional -v volume mounts, so you should be able to run something like
docker-compose run -v "$PWD/my_service_tests:/output" my_service ...
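A sketch of the full flow, assuming the tests inside my_service write their result files to /output (the file names are assumptions):

```shell
# Create a host directory to receive the results
mkdir -p "$PWD/my_service_tests"

# Run the tests with the host directory bind-mounted into the container
docker-compose run -v "$PWD/my_service_tests:/output" my_service

# The result files are now on the host
ls my_service_tests/
```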
I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them.
I don't think Docker Compose has that option; it's not that clever. Consider the case of two different tests running at the same time, each running a separate test container but sharing a database container. The first test can't stop the database container because the second test is using it, but Compose isn't really aware of this.
If you don't mind running a complete isolated stack for each test run, then you can use the docker-compose -p option to do that. Then you can use docker-compose rm to clean everything up, for that specific test run.
docker-compose -p test1 run -v $PWD/test1:/output my_service ...
docker-compose -p test1 stop
docker-compose -p test1 rm
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my
script, which calls docker-compose, to have access to both of these
files.
You can write test reports to a folder inside the container. This folder can be mapped to a folder on the Docker host using volumes, so the script running the docker-compose commands will be able to access them.
This is a challenge because as far as I know, after running
docker-compose run, these containers are shut down.
They are stopped. But the next time you run docker-compose up, they are restarted, preserving mounted volumes.
Note:
Compose caches the configuration used to create a container. When you
restart a service that has not changed, Compose re-uses the existing
containers. Re-using containers means that you can make changes to
your environment very quickly.
It means you can copy report files generated by the test service using docker cp, even after the containers exit.
docker cp works regardless of volumes. For example, suppose the tests wrote reports.xml to the /test_reports folder inside the container. You can copy the file to the host using docker cp after the test container has stopped.
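A minimal sketch of that flow (the container name and report path are assumptions):

```shell
# Run the tests; the container stops when they finish,
# but it is not removed, so its filesystem is still available
docker-compose run --name test_run my_service

# Copy the report out of the stopped container
docker cp test_run:/test_reports/reports.xml ./reports.xml

# Remove the stopped container once the report is retrieved
docker rm test_run
```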
After the tests finish, I'd like to stop and delete not only the
container I was running tests on but all containers that were started
because it was dependent on them. How can I do that automatically?
Use docker-compose down
The command
Stops containers and removes containers, networks, volumes, and images created by up.
The command will work if the service under test, all its dependent services, and the test service itself are defined in the same compose file.
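A sketch of a test script using this (the service name is an assumption):

```shell
# Start the dependencies and run the tests in the foreground
docker-compose run my_service

# Tear down the test container and every dependency it started,
# along with the networks created by up
docker-compose down
```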
Is there a docker command which works like the vagrant up command?
I'd like to use the arangodb docker image and provide a Dockerfile for my team without forcing my teammates to get educated on the details of its operation; it should 'just work'. Within the project root, I would expect the database to start and stop with a standard docker command. Does this not exist? If so, why not?
Docker Compose could do it.
docker-compose up builds image, creates container and starts it.
docker-compose stop stops the container.
docker-compose start restarts the container.
docker-compose down stops the container and removes it along with the networks created by up (and, with the --rmi flag, the image as well).
With a Docker Compose file you can configure ArangoDB (expose ports, map a volume for db initialisation, etc.). Place the compose file in the project root and run the up command.
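A minimal docker-compose.yml sketch for this (8529 is ArangoDB's default port; the password and volume name are placeholders):

```yaml
version: "3"
services:
  arangodb:
    image: arangodb
    ports:
      - "8529:8529"
    environment:
      ARANGO_ROOT_PASSWORD: changeme
    volumes:
      - arangodb_data:/var/lib/arangodb3
volumes:
  arangodb_data:
```

With this in place, docker-compose up -d starts the database and docker-compose stop shuts it down, which is close to the vagrant up workflow.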
Currently, I use Docker Compose to start multiple containers in one shot. I have containers started and running already, but while doing docker-compose up -d, I just want to exclude some containers while taking other containers up or down.
Use the following to exclude specific services:
docker-compose up --scale <service name>=0
I think you have to go the "other way". You can start single containers from your docker-compose.yml via:
docker-compose up -d --no-deps ServiceName
If you're looking to exclude some containers because they are not related to the Composition you might be interested in dobi.
dobi lets you define the images and containers (run resources) used to build and run your application. It also has a compose resource for starting the Compose project.
So using dobi you would only put the containers you want to run together into the docker-compose.yml, and the other containers would be just in the dobi.yml.
I have an app running on multiple Docker containers defined by docker-compose. Everything works fine from my user, and the docker-compose ps output looks like:
Name            Command                State      Ports
-------------------------------------------------------
myuser_app_1    /config/bootstrap.sh   Exit 137
myuser_data_1   sh                     Exit 0
myuser_db_1     /run.sh                Exit 143
Now I'm trying to run docker-compose up with supervisord (see the relevant part of supervisord.conf below), and the issue is that the containers are now named myapp_app_1, myapp_data_1 and myapp_db_1; that is, they are created from scratch, and all customizations on the former containers are lost.
I tried renaming the containers, but it gives an error saying that there's already a container with that name.
Q: Is there some way to force docker-compose reuse existing containers instead of creating new ones based in their respective images?
supervisord.conf
...
[program:myapp]
command=/usr/local/bin/docker-compose -f /usr/local/app/docker-compose.yml up
redirect_stderr=true
stdout_logfile=/var/log/myapp_container.log
stopasgroup=true
user=myuser
Compose will always reuse containers as long as their config hasn't changed.
If you have any state in a container, you need to put that state into a volume. Containers should be ephemeral, you should be able to destroy them and recreate them at any time without losing anything.
If you need to initialize something, I would do it in the Dockerfile, so that it's preserved in the image.
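For example, a named volume keeps database state across container re-creation (the image and data path below are assumptions, sketching the db service from the question):

```yaml
services:
  db:
    image: postgres       # hypothetical; use your actual db image
    volumes:
      # State written here survives docker-compose down/up cycles
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```

With state in the volume, it no longer matters that supervisord's docker-compose run creates fresh containers.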