I have a Docker compose setup where I have 20 different services that depend on each other. I'm writing a script that runs tests on a container by using docker-compose run my_service ....
I've got a couple of issues with that though:
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files. This is a challenge because as far as I know, after running docker-compose run, these containers are shut down. The only solution I can think of is running it with --entrypoint=tail -f /dev/null, then executing the test command and retrieving the file. But that's a little cumbersome. Is there a better way?
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
After the tests finish, they should output both an XML file...
If the main function of your task is to read or produce files to the local filesystem, it's often better to run it outside of Docker. In the case of integration tests, this is even pretty straightforward: instead of running the tests inside a Docker container and pointing at the other containers' endpoints, run the tests on the host and point at their published ports. If your test environment can run docker-compose commands then you can launch the container stack as a test fixture.
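For example, a test script along these lines could treat the Compose stack as a fixture; this is only a sketch, and the published port, environment variable, and test command are placeholders:
# start the dependency stack in the background
docker-compose up -d
# run the tests on the host, pointing at the services' published ports
DB_HOST=localhost DB_PORT=5432 ./run_tests.sh
# tear everything down once the tests are done
docker-compose down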
If for some reason they have to run in Docker, then you can bind-mount a host directory into the container to receive the result files. docker-compose run does support additional -v volume mounts, so you should be able to run something like
docker-compose run -v $PWD/my_service_tests:/output my_service ...
I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them.
I don't think Docker Compose has that option; it's not that clever. Consider the case of two different tests running at the same time, each running a separate test container but sharing a database container. The first test can't stop the database container because the second test is using it, but Compose isn't really aware of this.
If you don't mind running a complete isolated stack for each test run, then you can use the docker-compose -p option to do that. Then you can use docker-compose rm to clean everything up, for that specific test run.
docker-compose -p test1 run -v $PWD/test1:/output my_service ...
docker-compose -p test1 stop
docker-compose -p test1 rm
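Putting that together, a wrapper script might look roughly like this; the project name and output directory are placeholders, and rm -f just skips the confirmation prompt:
#!/bin/sh
# use a unique project name so each test run gets its own isolated stack
PROJECT=test_$$
docker-compose -p "$PROJECT" run -v "$PWD/$PROJECT:/output" my_service ...
# stop and remove every container that belongs to this project
docker-compose -p "$PROJECT" stop
docker-compose -p "$PROJECT" rm -f
Alternatively, docker-compose -p "$PROJECT" down collapses the stop and rm steps and also removes the project's network.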
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my
script, which calls docker-compose, to have access to both of these
files.
You can write the test reports to some folder inside the container. This folder can be mapped to a folder on the Docker host using volumes, so the script running the docker-compose commands will be able to read them.
This is a challenge because as far as I know, after running
docker-compose run, these containers are shut down.
They are stopped. But the next time you run docker-compose up they are restarted, preserving mounted volumes.
Note:
Compose caches the configuration used to create a container. When you
restart a service that has not changed, Compose re-uses the existing
containers. Re-using containers means that you can make changes to
your environment very quickly.
It means you can copy report files generated by the test service using docker cp, even after the containers exit.
docker cp works regardless of volumes. For example, suppose the tests wrote reports.xml to the /test_reports folder inside the container. You can copy the file to the host using docker cp after the test container has stopped.
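As a rough sketch (the service name and report path come from the example above; the way of finding the container is just one option):
# find the stopped test container's ID or name
docker ps -a --filter "name=my_service"
# copy the report out of the stopped container onto the host
docker cp <container_id>:/test_reports/reports.xml ./reports.xml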
After the tests finish, I'd like to stop and delete not only the
container I was running tests on but all containers that were started
because it was dependent on them. How can I do that automatically?
Use docker-compose down
The command
Stops containers and removes containers, networks, volumes, and images created by up.
The command will work if you defined the service under test, all of its dependent services, and the test service itself in the same compose file.
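A typical sequence might then look like this (docker-compose down also accepts -v to remove named volumes, which you may or may not want for test data):
# run the test service; its dependencies are started automatically
docker-compose run my_service ...
# stop and remove all containers and networks created for this project
docker-compose down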
TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term recreate at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes like e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term recreate at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands using the docker CLI. When docker-compose says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
Let's say it recognizes a change in container1. It then does roughly the following (not literally these commands; it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
which is roughly equivalent to the following (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and maybe more...
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
The issue is that the variables and settings are not exposed through any docker apis. It may be possible by way of connecting directly to the docker socket, parsing the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can just save the command you need to run into a text file, give it a .sh name, make it executable (chmod +x), then run it. Then when you stop/delete the container, you can just rerun the shell script.
Another thing you can do is replace the docker command with a function (in something like your ~/.bashrc) that stores the arguments in a text file and rechecks that text file when passed an argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts, as it's more portable.
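As a sketch of the shell-script approach (the container name, image, and flags are placeholders):
#!/bin/sh
# recreate-myapp.sh: remove the old container, if any, then start a fresh one
docker rm -f myapp 2>/dev/null
docker run -d --name myapp -p 8080:80 -e SOME_VAR=value myimage:latest
Whenever you change a flag or the image tag, you edit the script and rerun it.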
I have a Docker container with a init script CMD ["init_server.sh"]
which is orchestrated by docker-compose.
Does running docker-compose restart re-run the init script,
or will only running docker-compose down followed by docker-compose up
trigger the script to be run again?
I imagine whatever the answer to this is will apply to docker restart as well.
Am I correct?
A Docker container only runs one process, defined by the "entrypoint" and "command" settings (typically from a Dockerfile, you can override them in a docker-compose.yml). Whatever that process does, it will do every time the container starts.
In terms of Docker commands, the Compose commands you show aren't different from their underlying plain-Docker variants. restart is just stop followed by start, so it will re-run the main container process in its existing container with the existing (possibly modified) container filesystem. If you do a docker rm in between these (or docker-compose down) the process starts in a clean container based on the image.
It's typical for an initialization script to check if the initialization it requires has already been done. For things like the standard Docker Hub database images, this works by checking if the data directory is totally empty; initialization only happens on the very first startup. An init script that runs something like database migrations will generally keep track of which migrations have already been done and won't repeat work.
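A minimal sketch of that pattern in an init_server.sh (the marker-file path and command names are just examples):
#!/bin/sh
# only do the expensive first-time setup once per container filesystem / volume
if [ ! -f /data/.initialized ]; then
    run_migrations            # placeholder for your real initialization
    touch /data/.initialized
fi
# then hand off to the long-running server process
exec start_server             # placeholder for the real command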
I have a NodeJS application that is using ioredis to connect to redis and publish data and other redisy things.
I am trying to write a component test against redis and was able to create a setup/teardown script via jest that runs redis via docker on a random port and tears it down when the tests are done via docker run -d -p 6379 --rm redis and docker stop {containerId}.
This works great locally, but we have the tests running in a multi-stage build in our Dockerfile:
RUN yarn test
When I try to build it via docker build ., it goes fine until it gets to the tests and then fails with the following error: /bin/sh: docker: not found
Hence, Docker is unavailable to the docker-build process to run the tests?
Is there a way to run docker-build to give it the ability to spin up sibling processes during the process?
This smells to me like a "docker-in-docker" situation.
You can't spin up siblings, but you can spawn a container within a container, by doing some tricks: (you might need to do some googling to get it right)
install the docker binaries in the "host container"
mount the docker socket from the actual host inside the "host" container, like so docker run -v /var/run/docker.sock:/var/run/docker.sock ...
But you won't be able to do it in the build step, so it won't be easy for your case.
I suggest you prepare a dedicated build container capable of running nested containers, which would basically emulate your local env, and use that in your CI. Still, you might need to refactor your process a bit to make it work.
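For reference, the socket-mount variant (run outside of docker build) looks roughly like this; the image name and test command are placeholders:
# give the "host container" access to the host's Docker daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-build-image yarn test
# containers started from inside now run on the host's Docker daemon,
# so the tests can docker run their redis container as before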
Good luck :)
In my practice, tests shouldn't be concerned with initializing the database; they should only be concerned with how to connect to it, so you just pass your DB connection data via environment variables.
The way you are doing it won't scale: imagine that you need a lot more services for your application; it would be difficult and impractical to start them all from the tests.
When you are developing locally, it's your responsibility to have the services running before doing the tests.
You can have docker compose scripts in your repository that create and start all the services you need when you start developing.
And when you are using CI in the cloud, you would still use docker containers and run the tests in them (a node container with your tests, a redis container, a mysql container, etc.), and again just pass the appropriate connection data via environment variables.
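For example, a CI test step could look roughly like this; the service names, ports, and variable names are assumptions:
# start the backing services defined in the repository's compose file
docker-compose up -d redis mysql
# run the tests, passing connection data instead of starting services from the tests
REDIS_HOST=localhost REDIS_PORT=6379 MYSQL_HOST=localhost yarn test
# clean up
docker-compose down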
So, is there a point to the command "start"? Like in "docker start -i alpineContainer".
If I do this, I can't really do anything with the Alpine inside the container; I would have to do a run and create another container with the "-it" flag and "sh" after (or "/bin/bash", I don't remember correctly right now).
Is that how it will go most of the time? Delete and rebuild containers and use "-it" if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the cmd?
New to Docker in general and trying to understand the basics on how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
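For example, using the Jenkins image mentioned above (the jenkins/jenkins:lts tag is just one of the published tags):
# run Jenkins detached, without attaching a terminal to it
docker run -d --name jenkins -p 8080:8080 jenkins/jenkins:lts
# peek at its output without attaching
docker logs jenkins
# open an interactive shell inside the running container
docker exec -it jenkins bash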
You almost never use docker start. It's only possible to use it in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
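For illustration, the two cases side by side (the alpine image and the name demo are only examples):
# case 1: create a container first, then start its process later
docker create --name demo alpine sleep 300
docker start demo
# case 2: restart the process of a stopped container
docker stop demo
docker start demo
# more typically, you would just remove it once it is stopped
docker rm demo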
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.
Is there a docker command which works like the vagrant up command?
I'd like to use the arangodb docker image and provide a Dockerfile for my team without forcing my teammates to get educated on the details of its operation; it should 'just work'. Within the project root, I would expect the database to start and stop with a standard docker command. Does this not exist? If so, why not?
Docker Compose could do it.
docker-compose up builds the image, creates the container, and starts it.
docker-compose stop stops the container.
docker-compose start restarts the container.
docker-compose down stops and removes the containers and networks created by up (it only removes images if you pass --rmi).
With a Docker Compose file you can configure ArangoDB (expose ports, map volumes for db initialisation, etc.). Place the compose file in the project root and run the up command.
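Concretely, assuming a docker-compose.yml describing the arangodb service sits in the project root, day-to-day usage is just:
docker-compose up -d     # create and start the database in the background
docker-compose stop      # stop it, keeping the container and its data
docker-compose down      # stop and remove the containers and the network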