What is the difference between docker-compose up and docker-compose start? - docker

Whenever I execute
docker-compose start
docker-compose ps
I see my containers with the state "Up". If I do
docker-compose up -d
I see more verbose output, but the containers end up in the same state. Is there any difference between the two commands?

docker-compose start
(https://docs.docker.com/compose/reference/start/)
Starts existing containers for a service.
docker-compose up
(https://docs.docker.com/compose/reference/up/)
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The docker-compose up command aggregates the output of each container
(essentially running docker-compose logs -f). When the command exits,
all containers are stopped. Running docker-compose up -d starts the
containers in the background and leaves them running.
If there are existing containers for a service, and the service’s
configuration or image was changed after the container’s creation,
docker-compose up picks up the changes by stopping and recreating the
containers (preserving mounted volumes). To prevent Compose from
picking up changes, use the --no-recreate flag.
For the complete CLI reference:
https://docs.docker.com/compose/reference/
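A minimal sketch of the practical difference, assuming a docker-compose.yml already exists in the current directory:
$ docker-compose up -d      # builds/creates the containers if needed, then starts them
$ docker-compose stop       # stops the containers but keeps them around
$ docker-compose start      # restarts the existing, stopped containers; never recreates them
# after editing docker-compose.yml or rebuilding an image:
$ docker-compose up -d      # detects the change, recreates and restarts the affected containers
$ docker-compose start      # would only start what already exists, ignoring the change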

In the Docker frequently asked questions this is explained very clearly:
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart
all the services defined in a docker-compose.yml. In the default
“attached” mode, you see all the logs from all the containers. In
“detached” mode (-d), Compose exits after starting the containers, but
the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc”
tasks. It requires the service name you want to run and only starts
containers for services that the running service depends on. Use run
to run tests or perform an administrative task such as removing or
adding data to a data volume container. The run command acts like
docker run -ti in that it opens an interactive terminal to the
container and returns an exit status matching the exit status of the
process in the container.
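For example, a typical one-off task could look like this (the web service name and the command are made up for illustration; --rm removes the throwaway container afterwards):
$ docker-compose run --rm web python manage.py migrate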
The docker-compose start command is useful only to restart containers
that were previously created, but were stopped. It never creates new
containers.

What is the difference between up, run and start in Docker Compose?
docker-compose up: Builds, (re)creates, and starts containers. It also attaches to containers for a service.
docker-compose run: Runs one-off or ad hoc tasks. You provide the name of the service you want to run, and Compose starts only that service plus any services it depends on. It is useful for running tests or performing administrative tasks against a service.
docker-compose start: Starts previously stopped containers; it never creates new ones.

Related

Killing the container if the `docker run` command is killed

I am running a Docker container that has access to the host's Docker socket. It uses this privilege to start new, short-lived containers on its own, using docker run -it some_image some_command. Though there's no strict link here, we can call the original container the parent and the other, short-lived containers the children. What I'd like to happen is for the child containers to die if the parent dies, for example, by sending a SIGKILL to the process started by docker run -it ....
The context is here to avoid the XY problem, but the same question is relevant in a non-nested scenario - if I run docker run -it image sleep 300, then kill this process, sleep 300 will carry on and the container will stay alive. What I want is for the docker run command to be "bound together" with the command it runs. I am looking to do this without access to the host, and without using docker compose. Is there any such way?
The options listed under the docker run docs don't seem to have anything like this. My best working approach would be to set up a trap for SIGTERM, run the child containers in detached mode (so I get their container IDs), and then docker kill them in the trap, but this seems more cumbersome than it should be (and I'm unsure if docker kill can be trapped).
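For reference, a minimal sketch of that trap-based workaround (some_image and some_command are placeholders from the question; SIGKILL itself cannot be trapped, but SIGTERM and SIGINT can):
#!/bin/sh
# Run the child detached, remember its ID, and kill it when this wrapper
# receives SIGTERM or SIGINT. docker wait blocks until the child exits.
child_id=$(docker run -d some_image some_command)
cleanup() { docker kill "$child_id" >/dev/null 2>&1; }
trap cleanup TERM INT
docker wait "$child_id"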

Does restarting docker service kills all containers?

I'm having trouble with docker where docker ps won't return and is stuck.
I found that restarting the docker service helps, with something like
sudo service docker restart (https://forums.docker.com/t/what-to-do-when-all-docker-commands-hang/28103/4)
However, I'm worried it will kill all the running containers. (I guess the service provides the environment in which docker containers run?)
In the default configuration, your assumption is correct: if the docker daemon is stopped, all running containers are shut down. But, as outlined in the link, this behaviour can be changed in docker >= 1.12 by adding
{
"live-restore": true
}
to /etc/docker/daemon.json. Crux: the daemon must be restarted for this change to take effect. Please take note of the limitations of live reload, e.g. only patch version upgrades are supported, not major version upgrades.
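On a systemd-based host, applying the change could look roughly like this (a sketch; adjust to your init system, and merge the setting into daemon.json if the file already has other options):
$ sudo nano /etc/docker/daemon.json        # add the "live-restore": true setting shown above
$ sudo systemctl restart docker            # one last disruptive restart so the setting takes effect
$ docker info | grep -i 'live restore'     # should now report that live restore is enabled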
Another possibility is to define a restart policy when starting a container. To do so, pass one of the following values as value for the command line argument --restart when starting the container via docker run:
no: Do not automatically restart the container (the default).
on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in the restart policy details.)
unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
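For example (nginx is just a placeholder image), this container comes back after daemon restarts unless you explicitly stop it:
$ docker run -d --restart unless-stopped nginx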
For your specific situation, this would mean that you could:
Restart all containers with --restart always (more on that further below)
Re-configure the docker daemon to allow for live reload
Restart the docker daemon (which is not yet configured for live reload, but will be after this restart)
This restart would shut down and then restart all your containers once. But from then on, you should be free to stop the docker daemon without your containers terminating.
Handling major version upgrades
As mentioned above, live reload cannot handle major version upgrades. For a major version upgrade, one has to tear down all running containers. With a restart policy of always, however, the containers will be restarted after the docker daemon is restarted after the upgrade.
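If your containers are already running without a restart policy, one way to attach one after the fact is a sketch like this (docker update supports changing the restart policy of existing containers):
# $(docker ps -q) expands to the IDs of all currently running containers
$ docker update --restart always $(docker ps -q)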

Cron job to kill all hanging docker containers

I am new to docker containers but we have containers being deployed and due to some internal application network bugs the process running in the container hangs and the docker container is not terminated. While we debug this issue I would like a way to find all those containers and setup a cron job to periodically check and kill those relevant containers.
So how would I determine from "docker ps -a" which containers should be dropped and how would I go about it? Any ideas? We are eventually moving to kubernetes which will help with these issues.
Docker already has a command to clean up the docker environment; you can use it manually, or set up a job to run the following command:
$ docker system prune
Remove all unused containers, networks, images (both dangling and
unreferenced), and optionally, volumes.
Refer to the documentation for more details on advanced usage.
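If you do want to schedule it, a sketch of a crontab entry (run crontab -e as a user allowed to use docker; the path, schedule, and log file are just examples, and -f skips the confirmation prompt):
# m h dom mon dow  command
0 3 * * * /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1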

How do you kill a docker container's default command without killing the entire container?

I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service. If that service is interrupted, the container is destroyed; if that service is requested/started, it is created. At least, that is the model if you're using an orchestration platform like k8s, swarm, mesos, cattle, etc.
There are applications that exist to represent PID 1 rather than the service itself. But this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 instead and allow you to kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init
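For illustration, dumb-init could be made PID 1 without changing the image's Dockerfile by overriding the entrypoint (this assumes dumb-init is installed in the image at /usr/bin/dumb-init; the image and node command are hypothetical):
# dumb-init becomes PID 1, forwards signals to the wrapped command and reaps zombies
$ docker run -d --entrypoint /usr/bin/dumb-init my_node_image -- node server.js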
Why do you want to reboot the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong direction. You should instead define a persistent volume so that when the container respawns, the service rereads said config file.
https://docs.docker.com/engine/admin/volumes/volumes/
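A hedged example of that idea (the host path, mount point, and image name are hypothetical):
# Mount a host directory over the app's config directory; a recreated container
# sees the same files, so nothing is lost when it respawns.
$ docker run -d --name node-app -v /srv/node-app/config:/usr/src/app/config my_node_image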
If you need to restart the process that's running the container, then simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
Rather than changing the script that gets run to automatically restart, I'd move that out to the docker engine so it's visible if your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing pid 1 in the container shuts it down, it's the same as killing pid 1 on a linux server. If you kill init/systemd, the box will go down. Inside the namespace of the container, similar rules apply and cannot be changed.

Deploying changes to Docker and its containers on the fly

Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to secure my Docker installation by giving a non-root user access to the docker daemon:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade docker itself: I find all my containers stopped (except the data containers, which are only ever created, never started, and remain so).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 alongside 1.0.4 (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, shared data container, ...), you would need to stop and rm 1.0.4 before starting the new container from the 1.0.5 image.
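As a sketch (image names, ports, and container names are hypothetical):
$ docker run -d --name myapp-1.0.5 -p 8081:8080 myapp:1.0.5   # run the new version on a spare host port
# ...test it on port 8081...
$ docker stop myapp-1.0.4 && docker rm myapp-1.0.4            # retire the old version once satisfied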
(1) why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you can create, then start, then run a container. But a data container, by definition, has no process to run: it just exposes VOLUME(s) for other containers to mount (--volumes-from).
(2) What's the difference between a data/volume container, and a Docker container running, say a full bore MySQL server?
The difference is, again, that a data container doesn't run any process, so it doesn't exit when said process stops. That never happens, since there is no process to run.
The MySQL server container would be running as long as the server process doesn't stop.
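The classic data-container pattern being described looks roughly like this (names are illustrative):
$ docker create -v /var/lib/mysql --name mysql-data busybox      # only created, never started
$ docker run -d --name mysql --volumes-from mysql-data mysql:5.7
$ docker ps -a    # mysql shows as Up, mysql-data stays in the Created state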
