In terminal 1 there are multiple docker compose services running; from time to time I need to run another container (for testing purposes) from terminal 2. I need to see the output of the testing container in terminal 2, but this is not possible when using the start command because the output will be detached.
Basically I need the same behaviour as the run command without creating more containers, but the start command doesn't seem to offer that.
How can I do that?
So, is there a point to the "start" command? Like in "docker start -i alpineContainer".
If I do this, I can't really do anything with the Alpine inside the container; I would have to do a run and create another container with "-it" and "sh" after it (or "/bin/bash", I don't remember exactly right now).
Is that how it will go most of the time? Delete and rebuild containers and use "-it" if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
New to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
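A rough sketch of those commands together (the container name and port mapping are just illustrative, not something the answer prescribes):

docker run -d --name my_jenkins -p 8080:8080 jenkins   # start detached; nothing is attached to your terminal
docker exec -it my_jenkins bash                         # open an interactive shell inside the running container
docker logs -f my_jenkins                               # follow the container's stdout without entering a shell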
You almost never use docker start. It's only possible to use it in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
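As a minimal sketch of those two cases (the container name and command here are made up for illustration):

docker create --name one-off alpine echo hello   # create the container without starting it
docker start -a one-off                          # start it; -a attaches so the output shows in your terminal
docker start -a one-off                          # a stopped container can be started again the same way
docker rm one-off                                # typically you remove it once you're done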
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.
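Roughly, that loop looks something like this (image and container names are placeholders):

docker build -t myapp:dev .             # package the application into an image
docker run --name myapp-run myapp:dev   # run the packaged application
docker rm myapp-run                     # remove the container once I'm done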
I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, either by using Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?
I have a Docker compose setup where I have 20 different services that depend on each other. I'm writing a script that runs tests on a container by using docker-compose run my_service ....
I've got a couple of issues with that though:
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files. This is a challenge because, as far as I know, the containers are shut down after docker-compose run finishes. The only solution I can think of is running it with --entrypoint "tail -f /dev/null", then executing the test command and retrieving the files. But that's a little cumbersome. Is there a better way?
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
After the tests finish, they should output both an XML file...
If the main function of your task is to read files from or write files to the local filesystem, it's often better to run it outside of Docker. In the case of integration tests, this is even pretty straightforward: instead of running the tests inside a Docker container and pointing at the other containers' endpoints, run the tests on the host and point at their published ports. If your test environment can run docker-compose commands, then you can launch the container stack as a test fixture.
If for some reason they have to run in Docker, then you can bind-mount a host directory into the container to receive the result files. docker-compose run does support additional -v volume mounts, so you should be able to run something like
docker-compose run -v $PWD/my_service_tests:/output my_service ...
I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them.
I don't think Docker Compose has that option; it's not that clever. Consider the case of two different tests running at the same time, each running a separate test container but sharing a database container. The first test can't stop the database container because the second test is using it, but Compose isn't really aware of this.
If you don't mind running a complete isolated stack for each test run, then you can use the docker-compose -p option to do that. Then you can use docker-compose rm to clean everything up, for that specific test run.
docker-compose -p test1 run -v $PWD/test1:/output my_service ...
docker-compose -p test1 stop
docker-compose -p test1 rm
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files.
You can write the test reports to some folder inside the container. This folder can be mapped to a folder on the Docker host using volumes, so the script running the docker-compose commands would be able to use them.
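A minimal sketch of such a mapping in docker-compose.yml (the service name and folder paths are assumptions, not taken from the actual setup):

version: "3"
services:
  my_service:
    build: .
    volumes:
      - ./test_reports:/test_reports   # files written here inside the container appear in ./test_reports on the host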
This is a challenge because as far as I know, after running docker-compose run, these containers are shut down.
They are stopped. But the next time you run docker-compose up, they are restarted, preserving mounted volumes.
Note:
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
This means you can copy report files generated by the test service with docker cp even after the containers have exited.
docker cp works regardless of volumes. For example, suppose the tests wrote reports.xml to the /test_reports folder inside the container. You can copy that file to the host using docker cp after the test container has stopped.
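A minimal sketch of that, assuming the exited one-off container is named my_project_my_service_run_1 (the real name depends on your project and Compose version; check docker ps -a):

docker ps -a                                                        # find the exited test container
docker cp my_project_my_service_run_1:/test_reports/reports.xml .   # copy the report out of the stopped container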
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
Use docker-compose down
The command
Stops containers and removes containers, networks, volumes, and images created by up.
The command will work if you defined the service under test, all of its dependent services, and the test service itself in the same compose file.
Usage example
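A minimal sketch, assuming the tests are started with docker-compose run as above:

docker-compose run my_service ...   # starts my_service plus the services it depends on
docker-compose down                 # stops and removes the containers and networks created from that compose file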
Is there any way to make docker-compose start a service without running the declared command?
Not sure if any such option exists; nothing obvious in the flags for docker-compose up. It would be useful for debugging, as presently I have to comment out the command in order to enter a container that otherwise exits on startup.
In this case, there's no command in the Dockerfile, but there's a command in docker-compose.yml.
Based on jonrsharpe's comment, the answer is to use run instead as it will start the container.
docker-compose run service bash
This makes it possible to enter the container and debug the problem so the real command can run.
The doc says
docker attach: Attach local standard input, output, and error streams to a running container
From my understanding, a running container can have many running processes, including those started using docker exec. So When using docker attach, which process am I attaching to exactly?
It rather attaches your terminal's standard input, output, and error to the ENTRYPOINT/CMD process, displaying its ongoing output or letting you control it interactively.
So it is not tied to a specific process of your choosing.
The docker attach documentation adds:
You can attach to the same contained process multiple times simultaneously, from different sessions on the Docker host.
Still the same process though.
Whatever process has pid 1 in the container. If the image declared an ENTRYPOINT in the Dockerfile (or if you docker run --entrypoint ...), it's that program; if not, it's the command passed on the docker run command line or the Dockerfile's CMD.
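A small sketch to make that concrete (the container name and command are only for illustration):

docker run -d --name attach-demo alpine ping 127.0.0.1   # ping becomes pid 1 in this container
docker attach attach-demo                                # your terminal is now connected to that pid-1 process
# Ctrl-C sends SIGINT to ping, which stops it and therefore the container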