I have a docker-compose.yml file that defines 4 services: redis, postgres, api and worker.
While developing the worker service, I often need to restart it in order to apply changes. Is there a good way to restart a single container (e.g. worker) without restarting the others?
It is very simple: Use the command:
docker-compose restart worker
You can set the time (in seconds) to wait for the container to stop before killing it:
docker-compose restart -t 30 worker
Note that this will restart the container but without rebuilding it. If you want to apply your changes and then restart, take a look at the other answers.
The other answers on restarting a single service are on target: docker-compose restart worker. That will bounce the container, but it won't include any changes, even if you rebuilt the image separately. You could manually stop, rm, create, and start, but there are much easier methods.
If you've updated your code, you can do the build and reload in a single step with:
docker-compose up --detach --build
That will first rebuild your images from any changed code, which is fast if there are no changes since the cache is reused. And then it only replaces the changed containers. If your downloaded images are stale, you can precede the above command with:
docker-compose pull
to download any changed images first (the containers won't be restarted until you run a command like the up above). Doing an initial stop is unnecessary.
And to do this for only a single service, append the service names you want to the up or pull command, e.g.:
docker-compose up --detach --build worker
Here's a quick example of the first option. The Dockerfile is structured to keep the frequently changing parts of the code near the end; in fact, requirements.txt is pulled in separately for the pip install since that file rarely changes. And since the nginx and redis containers were up to date, they weren't restarted. Total time for the entire process was under 6 seconds:
$ time docker-compose -f docker-compose.nginx-proxy.yml up --detach --build
Building counter
Step 1 : FROM python:2.7-alpine
---> fc479af56697
Step 2 : WORKDIR /app
---> Using cache
---> d04d0d6d98f1
Step 3 : ADD requirements.txt /app/requirements.txt
---> Using cache
---> 9c4e311f3f0c
Step 4 : RUN pip install -r requirements.txt
---> Using cache
---> 85b878795479
Step 5 : ADD . /app
---> 63e3d4e6b539
Removing intermediate container 9af53c35d8fe
Step 6 : EXPOSE 80
---> Running in a5b3d3f80cd4
---> 4ce3750610a9
Removing intermediate container a5b3d3f80cd4
Step 7 : CMD gunicorn app:app -b 0.0.0.0:80 --log-file - --access-logfile - --workers 4 --keep-alive 0
---> Running in 0d69957bda4c
---> d41ff1635cb7
Removing intermediate container 0d69957bda4c
Successfully built d41ff1635cb7
counter_nginx_1 is up-to-date
counter_redis_1 is up-to-date
Recreating counter_counter_1
real 0m5.959s
user 0m0.508s
sys 0m0.076s
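For reference, the Dockerfile behind that build would look roughly like this (reconstructed from the build log above):
FROM python:2.7-alpine
WORKDIR /app
# requirements.txt rarely changes, so this layer and the pip install stay cached
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# the frequently changing application code comes last
ADD . /app
EXPOSE 80
CMD gunicorn app:app -b 0.0.0.0:80 --log-file - --access-logfile - --workers 4 --keep-alive 0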
To restart a service with changes, here are the steps I performed:
docker-compose stop -t 1 worker
docker-compose build worker
docker-compose up --no-start worker
docker-compose start worker
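These four steps are roughly what the one-liner from the earlier answer does in a single command:
docker-compose up --detach --build worker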
The following command
docker-compose restart worker
will just STOP and START the container, i.e. without loading any changes from docker-compose.yml.
STOP is similar to putting a PC into hibernation, so stop/start will not pick up any changes made to the configuration file. To reload the container from its recipe (docker-compose.yml), you need to remove and re-create it (a similar analogy to rebooting the PC).
So the commands are as follows:
docker-compose stop worker    # go into hibernation
docker-compose rm worker      # shut down the PC
docker-compose create worker  # create the container from the image and leave it in hibernation
docker-compose start worker   # bring the container back to life from hibernation
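As a side note, newer Compose releases can collapse the remove-and-recreate into a single command: --force-recreate recreates the container even if its configuration hasn't changed, and --no-deps leaves the services it depends on alone:
docker-compose up -d --force-recreate --no-deps worker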
Since some of the other answers include info on rebuilding, and my use case also required a rebuild, I have a better solution (compared to those).
There's still a way to easily target just the single worker container, rebuilding and restarting it in one line, albeit not as a single command. The best solution for me was simply to rebuild and restart:
docker-compose build worker && docker-compose restart worker
This accomplishes both major goals at once for me:
Targets the single worker container
Rebuilds and restarts it in a single line
Hope this helps anyone else getting here.
Restart Service with docker-compose file
docker-compose -f [COMPOSE_FILE_NAME].yml restart [SERVICE_NAME]
Use Case #1: If the COMPOSE_FILE_NAME is docker-compose.yml and the service is worker:
docker-compose restart worker
Use Case #2: If the file name is sample.yml and the service is worker:
docker-compose -f sample.yml restart worker
By default, docker-compose looks for docker-compose.yml when you run the docker-compose command; to use a different file, pass the -f [FILE_NAME].yml flag.
The answers here talk about picking up changes to the docker-compose.yml file.
But what if I want to incorporate changes I've made to my code? I believe that's only possible by rebuilding the image, which I do with the following commands:
1. Stop the container:
docker stop container-id
2. Remove the container:
docker rm container-id
3. Remove the image:
docker rmi image-id
4. Compose the container again:
docker-compose up service-name
Restart container
If you want to just restart your container:
docker-compose restart servicename
Think of this command as "just restart the container by its name"; it is equivalent to the docker restart command.
Note these caveats:
If you changed ENV variables, they won't be updated in the container. You need to stop it and start it again; or, with the single command docker-compose up, changes will be detected and the container recreated.
As many others have mentioned, if you changed the docker-compose.yml file itself, a simple restart won't apply those changes.
If you copy your code into the container at the build stage (in the Dockerfile, using the ADD or COPY commands), every time the code changes you have to rebuild the image (docker-compose build).
Relation to your code
docker-compose restart should work perfectly fine if your code gets path-mapped into the container by a volumes directive in docker-compose.yml, like so:
services:
  servicename:
    volumes:
      - .:/code
But I'd recommend using live code reloading, which is probably provided by your framework of choice in DEBUG mode (alternatively, you can search for auto-reload packages in your language of choice). Adding this should eliminate the need to restart the container every time your code changes; instead, the process inside is reloaded.
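For example, here's a minimal sketch of that setup, assuming a Flask-based worker (the service name, paths, and port are illustrative):
services:
  worker:
    build: .
    working_dir: /code
    command: flask run --host=0.0.0.0 --debug   # --debug enables Flask's auto-reloader (Flask 2.2+)
    volumes:
      - .:/code   # host code is mounted, so edits appear in the container immediately
    ports:
      - "5000:5000"
With this in place, saving a file on the host reloads the process inside the container, with no restart needed.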
The plain docker command knows nothing about the 'worker' service. Use a command like this instead:
docker-compose -f docker-compose.yml restart worker
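If you do want to use the plain docker command, target the container name Compose generates instead, which is typically <project>_<service>_1 (the project name here is an assumption):
docker restart myproject_worker_1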
After making changes, you need to pull the updated image onto the server and then recreate the container. As the documentation shows:
docker-compose pull worker && docker-compose up -d --no-deps worker
pull worker pulls only that service's image onto the server, and --no-deps prevents restarting the containers that worker depends on.
To apply changes to a Docker Compose file while only restarting a single service, you can use the docker-compose command with the up command and specify the service name. For example:
$ docker-compose up -d --no-deps myservice
This command will update the configuration for the myservice service and restart it, without touching any of the other services in the Compose file. The -d flag runs the service in the background, and the --no-deps flag tells Compose not to start any dependencies of the myservice service.
Alternatively, you can use the restart command to restart a single service:
$ docker-compose restart myservice
Note that this will not apply any changes from the Compose file; it simply restarts the service with its current configuration.
I created a Dockerfile with the following content:
FROM node:16.4.2-alpine3.14
WORKDIR /app
COPY package.json .
COPY . /app
then I created the image:
docker build -t app:0.1 .
and then started the container by running:
docker run -it app:0.1
It opened the node shell.
I then closed it.
Doing docker ps -a shows the container with an Exited status.
Now, I want to restart the same container, again with the node shell. How can that be done?
It shows as exited since there is no process running. To start it again, you can use the docker start command with the -i flag.
See https://docs.docker.com/engine/reference/commandline/start/ for more options.
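For example (the container id is whatever your docker ps -a output shows):
docker start -ai <container-id>
The -a flag attaches your terminal to the container's output and -i keeps STDIN open, so the node REPL comes back up in the same container.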
docker rm the existing container and docker run a new one. Consider using the docker run --rm option so the container deletes itself when it's done.
There's nothing special or valuable about a container; it's just a wrapper around a single process (in your case the Node REPL), and creating a new one isn't especially expensive. In the same way that you can't restart the REPL once you've exited it but need to re-run node, you generally will want to delete and recreate containers once their process has finished. With longer-running processes this also helps ensure the process's filesystem is exactly what you expect: if something exits unexpectedly, deleting the container will also remove any temporary files or lock files it's left behind, so restarting the container will run successfully.
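For example, with the image from the question (the container id is whatever docker ps -a reported):
docker rm <container-id>
docker run --rm -it app:0.1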
Use the following sequence of commands:
docker container start magical_merkle
docker attach magical_merkle
Explanation: the first command restarts your exited container, but in detached mode, meaning it runs in the background and you can't see its output. To reattach to the container, you run docker's attach command (the second command), which attaches the std io of your host terminal to the std io of the running container.
You may notice magical_merkle in the commands. This is the name of your container as found in the container ls output you provided. When you run the run command, docker gives the container an auto-generated name if you don't provide one.
I'm running one of the open source microservices from here. When I run docker ps, all the containers' status is Up, meaning they keep running. My issue is that when I run a container separately, it does not keep running and exits. Below is one of the services defined in the docker-compose file:
social-graph-service:
  image: yg397/social-network-microservices
  hostname: social-graph-service
  restart: always
  entrypoint: SocialGraphService
When I run it using the command
sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service yg397/social-network-microservices
its status is not Up; it exits right after starting. Why do all the containers run continuously when I start them with sudo docker-compose up, but exit when I run them individually?
It looks like the graph service depends on MongoDB in order to run. My guess is that it crashes when you run it individually because the mongo instance doesn't exist, so it fails to connect.
The author of the repo wrote the docker-compose file to hide away some of the complexity from you, but that's a substantial tree of relationships between microservices, and most of them seem to depend on others existing in order to boot up.
-- Update --
The real issue is in the comments below. OP was already running the docker-compose stack while attempting to start another container, but forgot to connect the container to the docker network generated by docker-compose.
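For completeness, a docker run along those lines might look like the following; the --network value is an assumption, since Compose names the network <project>_default (check docker network ls for the actual name):
sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service --network socialnetwork_default yg397/social-network-microservices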
I am trying to run a simple docker container with my web application installed (not using a Dockerfile).
During testing I would always run a container using the -t -i options and then start the Tomcat service inside it by running a shell script.
Now that I am moving to production I don't want to use the -t -i options any more; I just need my Tomcat service to start and be the only primary service.
I tried pointing the entrypoint to the startup script for Tomcat, but the container terminates after that script finishes.
How do I run a container, start a service and keep that service as the single primary service of the container?
Note: I read some posts about supervisor, but I'm not sure if I would need to start building my image from scratch if I go that route? I would prefer not to do that.
Any suggestions?
If you have a Dockerfile that uses an entrypoint pattern, it will look something like this:
(Dockerfile)
FROM ubuntu
...Some configuration steps...
ADD start.sh /start.sh
ENTRYPOINT ["/start.sh"]
All you need to do is make sure your start.sh script 'hangs' in some way. Some people like to tail the syslogs, but tailing any file that exists will work.
(start.sh)
#!/bin/bash
# Start the service, then block forever so the container's main process never exits
service Your_Service_Or_Whatever start
tail -f /var/log/dmesg
A shorter version:
FROM ubuntu
...Some configuration steps...
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]
tested with Docker version 1.12.1, build 23cf638
Use docker --version to find out your version
By default, Docker containers run according to the configuration in the image's Dockerfile. If you usually run a container with the -i flag, you leave STDIN open, giving you access to the container's entrypoint (or it could be a bash shell). To achieve what you want, you can run the container in a detached state, passing your command to docker run directly.
docker run -d myapp /opt/catalina/bin/catalina.sh run
This will run the myapp container in a detached state, with the command passed as the third argument. Note catalina.sh run rather than startup.sh: startup.sh launches Tomcat in the background, so the container's main process would exit immediately, while catalina.sh run keeps Tomcat in the foreground. If the command results in a long-lived service, the container will stay active as long as the service is.
This is explained in detail in the docs.
Up until recently, when one did docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is no longer the case since https://github.com/docker/compose/issues/741, and this is really annoying for us: we use docker-compose to run selenium tests, which means starting the application server, starting the selenium hub + nodes, and starting the tests driver, then exiting when the tests driver stops.
Is there a way to get back old behaviour?
You can use:
docker-compose up --abort-on-container-exit
which will stop all containers if one of your containers stops.
In your docker-compose file, set up your test driver container to depend on the other containers (with the depends_on parameter). Your docker-compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer. The latter will also shut down all the other containers if any of them exits (not only the test driver); which one is more appropriate depends on your use case.
Did you try the workaround suggested in the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests ended, compose would exit and the test run would finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo, or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
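For example, the run step of the script can capture and propagate that status (the test command itself is a placeholder):
docker-compose run tests test_command
status=$?
docker-compose stop
exit $status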
If this is not acceptable, you could use the Docker Remote API to watch for the stop event on the containers and act on it.
An example of this approach is the docker-gen tool, written in Go, which watches for container start events to automatically regenerate configuration files.
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar as you group them using a YAML file.
I'm trying to use docker-compose to orchestrate several containers. To troubleshoot, I frequently end up running bash from within a container by doing:
$ docker-compose run --rm web bash
I always try to pass the --rm switch so that these containers are removed when I exit the bash session. Sometimes, though, they remain, and I see them in the output of docker-compose ps:
Name                  Command                         State     Ports
---------------------------------------------------------------------------
project_nginx_1       /usr/sbin/nginx                 Exit 0
project_nginx_run_1   bash                            Up        80/tcp
project_web_1         python manage.py runserver ...  Exit 128
project_web_run_1     bash                            Up        8000/tcp
At this point, I am trying to stop and remove these containers manually, but I cannot manage to do it. I tried:
$ docker-compose stop project_nginx_run_1
No such service: project_nginx_run_1
I also tried the other commands: rm, kill, etc.
What should I do to get rid of these containers?
Edit:
Fixed the output of docker-compose ps.
Just stop those test containers with the docker stop command instead of using docker-compose.
docker-compose shines when it comes to starting many containers together, but using docker-compose to start containers does not prevent you from using the docker command to do whatever you need to do with individual containers.
docker stop project_nginx_run_1 project_web_run_1
Also, since you are debugging containers, I suggest using docker-compose exec <service id> bash to get a shell in a running container. This has the advantage of not starting a new container.
With docker-compose, services can be stopped in two ways, but I would like to add some detailed info about both options.
In short
docker-compose down
Stop and remove containers, networks, images, and volumes
docker-compose stop
Stop services
In detail
If docker-compose run started the containers project_nginx_run_1 and project_web_run_1, then the docker-compose down log will be:
$ docker-compose down
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs go here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done
Removing project_web_run_1 ... done
Removing project_nginx_run_1 ... done
Removing network project_default
The docker-compose stop log will be:
$ docker-compose stop
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs go here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done
docker-compose, unlike docker, uses the service names defined in the yml file for its containers. Therefore, to stop just one container, the command is:
docker-compose stop nginx_run
docker-compose down
run from within the directory where it was launched, is the only way I managed to confirm the container was stopped: docker-compose ps no longer lists it!