Not able to enter a container because it keeps restarting - docker

I am new to Docker and I'm working with a container which was working fine. I stopped it to test something, and now I can't start it again: it keeps restarting because apparently there are 2 configuration files. Is there a way to enter the container (e.g. docker exec -it <containerID> bash) to fix this problem?
when I start the container:
docker start mosquitto
returns
mosquitto
then
docker exec -it mosquitto bash
returns
Error response from daemon: Container 5cd7191016f772729776779551e08719701700ad0dd135d87633a17351ab9208 is restarting, wait until the container is running
and docker logs
docker logs mosquitto
returns
1668704116: Loading config file /etc/mosquitto/conf.d/mosquitto.conf
1668704116: Error: Duplicate pid_file value in configuration.
1668704116: Error found at /etc/mosquitto/conf.d/mosquitto.conf:7.
1668704116: Error found at /etc/mosquitto/mosquitto.conf:13.
Thanks in advance :)

The short version here is: delete line 7 of the file you have mounted on /etc/mosquitto/conf.d/mosquitto.conf.
You can only specify pid_file once, and the default config file has already defined it (on line 13 of the /etc/mosquitto/mosquitto.conf file).
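Since the container exits before docker exec can attach, one way out is to edit the file from the host. A minimal sketch, assuming the config is bind-mounted from the host (the host path below is hypothetical; read the real one from the container's mounts):
docker inspect --format '{{ json .Mounts }}' mosquitto   # locate the host path of the mounted config
sed -i '7d' /path/on/host/mosquitto.conf                 # hypothetical path: drop line 7, the duplicate pid_file
docker restart mosquitto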

Related

Can't get my Docker Container to start and give me a shell

I am trying to get a Docker container running. I am following this guide: http://opendata.cern.ch/docs/cms-guide-docker.
The container refuses to start and give me access to the shell I expect.
Running the following command (as mentioned in the guide) does nothing; the process exits with a non-0 exit code. The first time I ran it, it downloaded the container image but did not land me in the shell as the guide says it would.
$ docker run --name opendata-2010 -it cmsopendata/cmssw_4_2_8 /bin/bash
I can see the container; it exits as soon as it starts.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
be670158d200 cmsopendata/cmssw_5_3_32 "/opt/cms/entrypoint…" 34 minutes ago Exited (139) 3 seconds ago opendata
These are other things I have tried to no avail.
$ docker exec -it be670158d200 /bin/bash
Error response from daemon: Container be670158d200ae85871fbda810fa6074dcb7bc8fc606f000710f630add1b80b6 is not running
$ docker start --attach be670158d200
failed to resize tty, using default size
My question is similar to this: Docker - Container is not running, but I know that unlike in that question, here I should be getting the shell.
I am running this in Windows Subsystem for Linux 2 - Ubuntu 20.04, docker version 19.03.8 - build afacb8b7f0. Any help is greatly appreciated, thanks.
I had the same error, with the following logs:
dockerd[15309]: time="2022-01-11T11:13:35.133154132+05:30" level=error msg="Handler for POST /v1.41/exec/94553dc2f9aaa3c1245df7384138786a8a576af99105a285258fce8b980b4660/resize returned error: timeout waiting for exec session ready"
This is a bug in Docker 20.10 and can be solved by downgrading the containerd rpm:
Removed:
containerd.io.x86_64 0:1.4.4-3.1.el7
Installed:
containerd.io.x86_64 0:1.4.3-3.1.el7
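On CentOS/RHEL 7 that downgrade can be done with yum, for example (package version taken from the output above):
sudo yum downgrade containerd.io-1.4.3-3.1.el7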

Docker - Bind for 0.0.0.0:4000 failed: port is already allocated

I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect to localhost with this command:
$ curl http://localhost:4000
which showed this error:
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands:
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I was trying to run the app with the following line, according to the tutorial...
$ docker run -p 4000:80 anibar/get-started:part1
But I got this error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed, before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
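If you're not sure which container is holding the port, recent Docker versions support a publish filter:
docker container ls --filter publish=4000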
Paying tribute to IgorBeaz, you need to stop the currently running container. For that you first need to know its CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls didn't show any container running. It looks like the problem is that the docker proxy is still using the ports although there are no containers running. In my case I was using Ubuntu. Here's what solved the problem for me; just run the following two commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
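Then bring the daemon back up:
sudo service docker start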
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I ran lsof to find the process using the port (for me it was port 9000):
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
The quick fix is to just restart Docker:
sudo service docker stop
sudo service docker start
The above two answers are correct but didn't work for me.
I kept seeing blank output for docker container ls,
so I tried docker container ls -a, and after that it showed all the processes that had previously exited as well as those still running.
Then docker stop <container id> or docker container stop <container id> didn't work,
so I tried docker rm -f <container id> and it worked.
After that I tried docker container ls -a again and this process wasn't present.
When I used the nginx Docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using the following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then running the docker container again (like this) works:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I had the same problem with docker-compose. To fix it (a sketch follows):
Kill the docker-proxy process
Restart Docker
Start docker-compose again
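A rough sketch of those steps on Linux, assuming the conflict is on port 4000 as in the question:
sudo ss -tulpn | grep 4000       # find the PID of the docker-proxy process holding the port
sudo kill <docker-proxy pid>     # kill it
sudo systemctl restart docker    # restart Docker
docker-compose up -d             # start docker-compose again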
docker ps will reveal the list of containers running on Docker. Find the one running on your needed port and note down its CONTAINER ID.
Stop and remove that container using the following commands:
docker stop <CONTAINER ID>
docker rm <CONTAINER ID>
Now run docker-compose up and your services should run, as you have freed the needed port.
On Linux, sudo systemctl restart docker solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (i.e. in different folders) with similar services you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict with the same port specified in docker-compose.yml and docker-compose.override.yml or the same port specified explicitly and using an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently Docker tried to open both on the same container. docker container ls -a listed neither, because the container could not start and list the ports.
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows), so what I did to resolve it was simply:
Remove the network. I knew that a container had been using this network with the port in question (9010) before:
docker network ls
docker network rm blabla (or id)
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed.
Restart Docker.
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept on "blocking" it (whining about it).
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
The issue is ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parametric docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
notice double mapping on port 4000.
The log is not informative enough in this case: it doesn't state that the double mapping was the cause, and the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I had tried most of the above and still couldn't fix it. Then I just restarted my Mac and it was all back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml.
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
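For example, to map host port 4000 to the container's port 80, as in the question:
ports:
  - "4000:80"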
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT was defined (as 80), thus host port 80 was bound twice.
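One possible fix is to give the second mapping its own variable (ADMIN_PORT here is a hypothetical name):
ports:
  - '${APP_PORT:-80}:80'
  - '${ADMIN_PORT:-8080}:8080'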
I tried almost all the solutions here and found the probable cause in my case. If you are using traefik or another reverse-proxying/load-balancing server, it sits in front of your services and passes traffic on to nginx or a similar proxy server. In that case, stopping, killing, or pruning containers might not help, because the proxy itself is holding the port.
Solution for traefik with nginx:
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: making Docker do the job.
Go to Docker settings > Resources, change any of the resources, and click Apply & Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by other commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
Hope this helps!
Simply restart your computer, so the Docker service gets restarted.

Client (browser, JDBC driver) hangs when trying to connect with docker instance

Summary
Client (browser, JDBC driver) hangs connecting to Docker.
Context
I've been playing with Docker and found an oddity: stuff running on my host OS (browser, JDBC driver) "hangs" trying to connect to Docker.
I've concluded the issue lies with Docker and my setup rather than with the images themselves, because the problem appears:
with both official Tomcat images as well as with Microsoft's new SQL Server image
after I've successfully run docker once
Use case
Boot my laptop (ubuntu 14.04)
Start up docker (see Appendix )
Connect with browser http://localhost:8888/.
Result: Success
Shut down docker instance: 'ctrl-c'
Start up docker again (Repeat step 2)
Try to connect with browser
Result: Browser hangs/spins for 20 minutes, then says "aborted"
Notes
Docker starts without error in both steps
After shutdown (step 4), "netstat -aon | grep 8888" shows nothing, so no 'rogue process' is listening on port 8888.
Because the browser "hangs" rather than saying "connection refused", I concluded Docker listens on the port but doesn't do anything else.
Version Info
Ubuntu 14.04
Docker version 1.9.1, build a34a1d5
Appendix A: Docker file and commands
Dockerfile
FROM tomcat:8.5.8-jre8-alpine
Commands
Create Image:
$ docker build -t mytomcat_858 .
Start:
$ docker run -it --rm -p 8888:8080 mytomcat_858
What command are you executing to start Docker the second time? docker start mytomcat_858? If it starts correctly the second time, can you do docker attach mytomcat_858 and view possible Tomcat errors?
I found, if not the root cause, at least a workaround: restarting the docker daemon cleared up all the networking issues:
# /etc/init.d/docker restart
Try running docker with the following:
docker run -d --name mytomcat_858 -p 8888:8080 mytomcat_858
Then,
you can do docker stop mytomcat_858 and docker start mytomcat_858 to stop and start the container (the --name flag makes the container itself addressable as mytomcat_858). Do not repeat the docker run command a second time.
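To follow Tomcat's output while the container runs detached, assuming it was named as above:
docker logs -f mytomcat_858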

Cannot stop or restart a docker container

When trying to stop or restart a docker container I'm getting the following error message:
$ docker restart 5ba0a86f36ea
Error response from daemon: Cannot restart container 5ba0a86f36ea: [2] Container does not exist: container destroyed
Error: failed to restart containers: [5ba0a86f36ea]
But when I run
$ docker logs -f 5ba0a86f36ea
I can see the logs, so obviously the container does exist. Any ideas?
Edit:
sorry, I forgot to mention this:
When I run docker ps -a I see the container as up and running. However, the application inside it is malfunctioning, so I want to restart it, or just get a fresh version of that application online. But as long as I can't stop and remove the container, I also can't get a new application up and running that listens on the same port.
I couldn't locate boot2docker on my machine, so I came up with something that worked for me.
$ sudo systemctl restart docker.socket docker.service
$ docker rm -f <container id>
Check if it helps you as well.
All of the docker
start | restart | stop | rm --force | kill commands
may not work if the container is stuck. You can always restart the Docker daemon; however, if you have other containers running, that may not be an option. What you can do is:
ps aux | grep <<container id>> | awk '{print $1, $2}'
The output contains:
<<user>> <<process id>>
Then kill the process associated with the container like so:
sudo kill -9 <<process id from above command>>
That will kill the container and you can start a new container with the right image.
That looks like docker/docker/issues/12738, seen with docker 1.6 or 1.7:
Some containers fail to stop properly, and then restart.
We are seeing this issue a lot on our users' hosts since they upgraded from 1.5.0 to 1.6.0.
After the upgrade, some containers cannot be stopped (giving 500 Server Error: Internal Server Error ("Cannot stop container xxxxx: [2] Container does not exist: container destroyed")) or force-destroyed (giving 500 Server Error: Internal Server Error ("Could not kill running container, cannot remove - [2] Container does not exist: container destroyed")).
The processes are still running on the host.
Sometimes, it works after restarting the docker daemon.
There are some workarounds:
I've tried all remote API calls for that unkillable container and here are the results:
json, stats, changes, top, logs returned valid responses
stop, pause, wait, kill reported 404 (!)
After I finished with remote API, I double-checked docker ps (the container was still there), but then I retried docker kill and it worked! The container got killed and I could remove it.
Or:
What worked was to restart boot2docker on my host. Then docker rm -f
$ boot2docker stop
$ boot2docker start
$ docker rm -f 1f061139ba04
Worth knowing:
If you are running an ENTRYPOINT script ... the script will work with the shebang
#!/bin/bash -x
but will stop the container from stopping with
#!/bin/bash -xe
Enjoy
sudo aa-remove-unknown
This is what worked for me.
Check if there is any zombie process using the top command.
docker ps | grep <<container name>>
Get the container id.
ps -ef | grep <<container id>>
ps -ef | grep defunct | grep java
And kill the container by its parent PID.
For anyone on a Mac who has Docker Desktop installed: I was able to just click the tray icon and choose Restart Docker. Once it restarted, I was able to delete the containers.
If you're on a Mac and try this via Terminal: Use killall Docker to quit Docker.
Restart it in the Applications folder or with open /Applications/Docker.app.
Subsequently you can run a docker rm <id> for the concerned container.
I had the same problem on a Windows host machine and none of the other options here worked for me. I ended up just needing to delete the physical container folder, which was located here:
C:\ProgramData\Docker\containers\[container guid]
I had stopped the Docker service first just to be safe, and when I restarted it, the broken containers were gone and I was able to create new ones. I suspect the same will work on a Linux host machine, but I do not know where the container folders are kept on that OS.
Ubuntu
Stop the container by using its system process ID.
Get the main process ID using:
docker inspect -f '{{.State.Pid}}' container-id
This will return an id such as 25430.
Kill it with the command:
sudo kill -9 25430
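Or as a single line, combining the two steps:
sudo kill -9 $(docker inspect -f '{{.State.Pid}}' container-id)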
In my case, I couldn't delete a container created by Nomad jobs.
There was no output from docker logs <ContainerID> and, in general, it looked frozen.
So far the solution is sudo service docker restart; can someone suggest a better one?
I forgot that I had made the container start as a system service,
so if I stopped or killed the container, the service would bring it back.
If you are using systemctl, you can list all the running services with systemctl | grep running and find the name of the service.
Then use
sudo systemctl stop <your_service_name> to stop it, and sudo systemctl disable <your_service_name> to keep it from coming back.
If you're on Ubuntu, make sure docker-compose isn't installed as a snap. This will cause all kinds of random issues, including the above.
Remove the snap:
sudo snap remove docker-compose
And install manually from the compose repository:
Docker Compose installation instructions
Sometimes this is caused by problem of the docker daemon.
I solved the problem by restarting the docker service.
On Linux:
systemctl restart docker
In my case, docker rm $(docker ps -aq) worked for me.

How to restart a container on docker restart (--restart=true doesn't work)?

I am using docker version 1.1.0, started by systemd using the command line /usr/bin/docker -d, and tried to:
run a container
stop the docker service
restart the docker service (either using systemd or manually, specifying --restart=true on the command line)
see if my container was still running
As I understand the docs, my container should be restarted. But it is not. Its public facing port doesn't respond, and docker ps doesn't show it.
docker ps -a shows my container with an empty status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb0d05b4e0d9 mildred/p2pweb:latest node server-cli.js - 7 minutes ago 0.0.0.0:8888->8888/tcp jovial_ritchie
...
And when I try to docker restart cb0d05b4e0d9, I get an error:
Error response from daemon: Cannot restart container cb0d05b4e0d9: Unit docker-cb0d05b4e0d9be2aadd4276497e80f4ae56d96f8e2ab98ccdb26ef510e21d2cc.scope already exists.
2014/07/16 13:18:35 Error: failed to restart one or more containers
I can always recreate a container from the same base image using docker run ..., but how do I make sure that my running containers will be restarted if Docker is restarted? Is there a solution that works even if Docker is not stopped properly (imagine I pull the power plug on the server)?
Thank you
As mentioned in a comment, the container flag you're likely looking for is --restart=always, which will instruct Docker that unless you explicitly docker stop the container, Docker should start it back up any time either Docker dies or the container does.
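For example, a sketch reusing the image, command, and port mapping from the question:
docker run -d --restart=always -p 8888:8888 mildred/p2pweb node server-cli.js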
