Stop docker container with docker-compose

docker-compose fails with a timeout:
docker-compose stop mycontainer
but docker succeeds:
docker stop mycontainer
My questions
What is the difference between docker-compose stop and docker stop?
Where can I get more detailed information about that problem? (I killed docker-compose after a few minutes)
How can I solve that problem with docker-compose?

docker-compose stop stops the running containers that were started from your docker-compose file (for example with docker-compose up or docker-compose start). It operates on the services defined in that file.
Please take a look at the contents of the docker-compose file for more details.
Run docker ps to see which containers are running.
docker stop (more precisely, docker stop <running_container_id>) stops a single running container.
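For illustration, the two commands address different names; here is a hedged sketch assuming the compose file defines a service called mycontainer (as in the question):
# compose works on the service name from docker-compose.yml
docker-compose stop mycontainer
# docker works on the container itself
docker ps                            # find the container name or ID
docker stop <container_name_or_id>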

The docs say that docker stop sends a SIGTERM and then a SIGKILL to the running container, while the docker-compose stop documentation does not mention that. Maybe that is the reason.
Docs:
Compose: https://docs.docker.com/compose/reference/stop/
Docker: https://docs.docker.com/engine/reference/commandline/stop/
You might want to check if you can reproduce that signalling with docker-compose.
Edit: More on signal-handling in Docker Compose: https://docs.docker.com/compose/faq/
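For reference, both commands accept a timeout before the SIGKILL is sent (10 seconds by default). The -t/--timeout flag exists on both standard docker and docker-compose; mycontainer is just the name from the question:
docker stop -t 30 mycontainer             # wait up to 30s after SIGTERM before SIGKILL
docker-compose stop -t 30 mycontainer     # compose accepts the same timeout option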

Finally I solved that problem by restarting the host vm.
Apparently the docker daemon was in trouble.
Nevertheless I still wonder, why docker-compose stop did not work.

Related

What to do if the docker container hangs and does not respond to any command other than ctrl+c?

I have been running an nvidia docker image for 13 days, and it used to restart without any problems using the docker start -i <containerid> command. But today, while I was downloading pytorch inside the container, the download got stuck at 5% and gave no response for a while.
I couldn't exit the container either by ctrl+d or ctrl+c. So I exited the terminal, and in a new terminal I ran docker start -i <containerid> again. But ever since, this particular container has not been responding to any command. Be it start/restart/exec/commit ...nothing! Any command with this container ID or name is just non-responsive, and I had to exit out of it with ctrl+c.
I cannot restart the docker service since it will kill all running docker containers.
I cannot even stop the container using docker container stop <containerid>.
Please help.
You can make use of docker's restart policy:
docker update --restart=always <container>
while being mindful of the caveats for the Docker version you are running.
Or explore the answer by Yale Huang from a similar question: How to add a restart policy to a container that was already created
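To verify the policy actually changed, you can inspect the container afterwards (standard docker inspect template syntax; <container> is whatever name or ID you used above):
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container>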
I had to restart the docker process to revive my container. There was nothing else I could do to solve it. I used sudo service docker restart and then revived my container using docker run. I will try to build a Dockerfile out of it in order to avoid future mishaps.

Containers start with docker-compose up, are listed with docker ps, then don't appear with docker-compose

I've got an issue on ubuntu 18.04 where occasionally running docker-compose up results in the containers starting and the networking between them behaving as expected, yet according to docker-compose they aren't there.
docker ps shows the containers exist.
UPDATE: after some comments:
docker-compose ps shows nothing. Also, the problem is intermittent meaning any example is hard to come by unfortunately.
Do you see them in docker ps -a? You could add -d to docker-compose up for detached mode.
Try running
docker-compose ps
from where your .yml file is located. It will show you the list of all docker containers generated from that .yml file
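Note that docker-compose matches containers by project name, which defaults to the name of the directory containing the .yml file. If you run docker-compose ps from a different directory or with a different project name, it lists nothing even though docker ps shows the containers. You can be explicit about both with the standard compose flags (path and project name below are placeholders):
docker-compose -f /path/to/docker-compose.yml -p <project_name> ps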

Docker - Bind for 0.0.0.0:4000 failed: port is already allocated

I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect with localhost by this command -
$ curl http://localhost:4000
which showed this error-
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I have solved this by following code -
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I was trying to run the app using the following line according to the tutorial...
$ docker run -p 4000:80 anibar/get-started:part1
But, I got this error
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed, before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
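If you are not sure which container is holding the port, you can filter by published port (the publish filter exists on reasonably recent Docker versions; 4000 is the port from the error above):
docker ps --filter "publish=4000"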
Paying tribute to IgorBeaz, you need to stop the currently running container. For that you need to know the current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls doesn't show any container running. It looks like the problem is that the docker proxy is still using the ports although there are no containers running. In my case I was using ubuntu. Here's what I did to solve the problem, just run the following two commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
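Then start the daemon again so the networking state is rebuilt:
sudo service docker start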
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I executed a lsof command to find the process using the port (for me it was port 9000)
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
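The same lookup can also be narrowed to just the port in question (standard lsof syntax; 9000 is the port from this answer):
sudo lsof -i :9000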
The quick fix is to just restart docker:
sudo service docker stop
sudo service docker start
The two answers above are correct but didn't work for me.
I kept seeing blank output for docker container ls,
then I tried docker container ls -a, and after that it showed all the containers, both previously exited and running.
Then docker stop <container id> or docker container stop <container id> didn't work,
then I tried docker rm -f <container id> and it worked.
After that I ran docker container ls -a again and this container wasn't present.
When I used nginx docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then, running the docker container again (like this) works:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I have had the same problem with docker-compose. To fix it I:
Killed the docker-proxy processes
Restarted docker
Started docker-compose again
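A hedged sketch of that sequence on Linux (process and service names as they typically appear; the PID placeholder is the docker-proxy process bound to your port):
sudo ss -tulpn | grep docker-proxy     # find the docker-proxy PID holding the port
sudo kill <docker-proxy_pid>
sudo systemctl restart docker
docker-compose up -d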
docker ps will reveal the list of containers running on docker. Find the one running on the port you need and note down its container ID.
Stop and remove that container using the following commands:
docker stop <container_id>
docker rm <container_id>
Now run docker-compose up and your services should run, as you have freed the needed port.
On Linux, 'sudo systemctl restart docker' solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (i.e. in different folders) with similar services you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict with the same port specified in docker-compose.yml and docker-compose.override.yml or the same port specified explicitly and using an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently docker tried to open both on the same container. docker container ls -a listed neither because the container could not start and list the ports.
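A hedged sketch of the kind of conflict described (the service name, variable, and ports are illustrative, not taken from the original post). When the base file and the override are merged, the two ports lists are concatenated, so if both entries resolve to the same host port the bind fails:
# docker-compose.yml
services:
  web:
    ports:
      - '${WEB_PORT:-8080}:80'
# docker-compose.override.yml
services:
  web:
    ports:
      - '8080:80'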
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows), so what I did to resolve it was simply:
Remove the network (I knew that a container had previously been using this network with the port in question, 9010): docker network ls, then docker network rm <name or id>
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed
Restart Docker
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept on "blocking" it (complaining about it)
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
The issue is ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
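For example, assuming port 4000 is the one reported in the error (ss is the modern alternative to netstat):
sudo netstat -tulpn | grep :4000
sudo ss -tulpn | grep :4000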
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parametric docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
notice double mapping on port 4000.
The log is not informative enough in this case, as it doesn't state I was the cause of the double mapping, and that the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I have tried most of the above and still couldn't fix it. Then just restart my Mac and then it's all back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
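For instance, to publish a containerised server listening on port 80 at host port 4000 (ports chosen to match the error above), the compose entry would be:
ports:
  - '4000:80'   # host port 4000 -> container port 80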
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT was defined, thus the same host port (80) was mapped twice.
I tried almost all the solutions and found out the probable/possible reason/solution. If you are using traefik or any other networking/proxy server, it internally proxies traffic for load balancing. Most setups use the standard blueprint as-is, and it works pretty fine, but it then passes the load entirely to nginx or similar proxy servers. So stopping, killing or pruning the networking server might not help.
Solution for traefik with nginx,
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
Credits
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: making docker do the job.
Go to docker settings > resources. Change any of the resources and click apply and restart.
Docker will stop itself and every one of its processes -- even the most stubborn ones that might not be killed by other commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before and all the good - proper - tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
hope this helps!
simply restart your computer, so the docker service gets restarted

Docker container keeps restarting

I was trying rancher.
I used the command:
sudo docker run -d --restart=always -p 8080:8080 rancher/server
to start run it.
Then I stopped the container and removed it. But if I stop and restart the docker daemon or reboot my laptop, and look up running containers using the docker ps command, the rancher server is running again. How do I stop/remove it completely and make sure it will not run again?
Note: following issue 11008 and PR 15348 (commit fd8b25c, docker v1.11.2), you would avoid the issue with:
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
In your current situation, thanks to PR 19116, you can use docker update to update the restart policy.
docker update --restart=unless-stopped <yourContainerID_or_Name>
Then stop your container, restart your docker daemon: it won't restart said container.
The OP codefire points to another reason in the comments:
When I first ran the start rancher server command, I didn't notice that it was being downloaded. So I may have retried that command a couple times.
That must be why the job kept on restarting even after stopping and removing containers that was started as rancher server.
After stopping and removing 8+ containers, it finally stopped
That is why I have aliases to quickly remove any stopped containers.
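A typical alias of that kind (the alias name and exact filter here are illustrative, not taken from the original answer) could look like:
alias rm-exited='docker rm $(docker ps -aq --filter status=exited)'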
It keeps restarting because you're using the --restart=always flag.
Run
docker logs <CONTAINER_ID>
to see if your code is encountering any errors that prevent the container from running properly.
Thanks for the answers, but for me the problem continued even after using the answer from VonC.
After researching, I found that Kubernetes was running the image again and again.
Use kubectl get nodes to get the nodes you have and then run:
kubectl drain NODE_ID --delete-emptydir-data --ignore-daemonsets
These solutions are correct! But for me there was a situation in which these answers did not work out. That's because I was running a service in the background; since I did not remove the service, it keeps running, and even if you remove the container it will be restarted again and so on...
So the answer to that specific problem is
docker service rm <service name or id>
Docker runs on your localhost; if you shut down or kill the docker daemon, docker stops, and the data inside the container is lost if you did not save it to an external volume, e.g.:
docker run -d -v nginxlogs:/var/log/nginx -p 5000:80 nginx

How to stop containers run with `docker-compose run`

I'm trying to use docker-compose to orchestrate several containers. To troubleshoot, I frequently end up running bash from within a container by doing:
$ docker-compose run --rm web bash
I always try to pass the --rm switch so that these containers are removed when I exit the bash session. Sometimes, though, they remain, and I see them in the output of docker-compose ps.
Name Command State Ports
----------------------------------------------------------------------------------
project_nginx_1 /usr/sbin/nginx Exit 0
project_nginx_run_1 bash Up 80/tcp
project_web_1 python manage.py runserver ... Exit 128
project_web_run_1 bash Up 8000/tcp
At this point, I am trying to stop and remove these components manually, but I can not manage to do this. I tried:
$ docker-compose stop project_nginx_run_1
No such service: project_nginx_run_1
I also tried the other commands rm, kill, etc..
What should I do to get rid of these containers?
Edit:
Fixed the output of docker-compose ps.
Just stop those test containers with the docker stop command instead of using docker-compose.
docker-compose shines when it comes to starting many containers together, but using docker-compose to start containers does not prevent you from using the docker command to do whatever you need to do with individual containers.
docker stop project_nginx_run_1 project_web_run_1
Also, since you are debugging containers, I suggest using docker-compose exec <service id> bash to get a shell in a running container. This has the advantage of not starting a new container.
With docker-compose, services can be stopped in two ways, but I would like add some detailed info about both options.
In short
docker-compose down
Stop and remove containers, networks, images, and volumes
docker-compose stop
Stop services
In detail
If docker-compose run starts services project_nginx_run_1 and project_web_run_1, then
docker-compose down log will be
$ docker-compose down
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs goes here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done
Removing project_web_run_1 ... done
Removing project_nginx_run_1 ... done
Removing network project_default
docker-compose stop log will be
$ docker-compose stop
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs goes here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done
docker-compose, unlike docker, uses the service names defined in the yml file for its containers. Therefore, to stop just one container the command will be:
docker-compose stop nginx_run
docker-compose down
run from within the directory where it was launched, is the only way I managed to confirm it was stopped, as docker-compose ps no longer lists it!
