I have a fresh install using boot2docker. (DockerToolbox was giving me the same error. After uninstalling DockerToolbox, I deleted ~/.docker and searched my whole filesystem for anything starting with "docker" and found no other configuration files where things might be hiding.)
This is the second command I did, after docker run hello-world:
bash-3.2$ docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6071b4945dcf: Verifying Checksum
5bff21ba5409: Pulling fs layer
e5855facec0b: Download complete
8251da35e7a7: Download complete
8251da35e7a7: Layer already being pulled by another client. Waiting.
And I'm stuck here indefinitely.
I promise I only have 1 docker process running. I just want to get past this. If it means nuking whatever cache is in place and doing a manual download, that's okay. I just want to stop being stuck here for hours.
You need to restart the Docker service, or just restart the OS.
Also, in this issue @avramirez pointed out that you can do this using boot2docker:
boot2docker stop
boot2docker up
docker pull <repo>
Quoting from the issue #15603 thread:
Hello all! I believe this should be fixed on master by #15489 (and will soon ship in a few weeks as part of Docker 1.9.0).
This is a bug in Docker.
Try the following in order (to avoid restarting the OS):
Run ps aux | grep docker-compose and find the PIDs of any running docker-compose processes.
Kill them using kill <pid>
Restart Docker using service docker restart (Linux)
The 2nd method should ideally solve the problem; if not, restart the OS (see the combined sketch below).
Hopefully, this issue will be solved in version 1.9.
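Putting those steps together, a minimal sketch (assuming a Linux host where Docker is managed via service; <pid> stands for each PID reported by the grep):
ps aux | grep docker-compose    # note the PID(s) of any docker-compose processes
kill <pid>                      # repeat for each PID found above
service docker restart          # Linux; prefix with sudo if your setup requires it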
Related
I have been running an nvidia docker image for 13 days, and it used to restart without any problems using the docker start -i <containerid> command. But today, while I was downloading pytorch inside the container, the download got stuck at 5% and gave no response for a while.
I couldn't exit the container with either ctrl+d or ctrl+c, so I closed the terminal and ran docker start -i <containerid> again in a new terminal. But ever since, this particular container has not been responding to any command, be it start/restart/exec/commit ... nothing! Any command with this container ID or name just hangs, and I can only get out of it with ctrl+c.
I cannot restart the Docker service, since that would kill all running Docker containers.
I cannot even stop the container using docker container stop <containerid>.
Please help.
You can make use of Docker's restart policy:
docker update --restart=always <container>
while being mindful of the caveats for the Docker version you are running.
Or explore the answer by @Yale Huang to a similar question: How to add a restart policy to a container that was already created
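As a quick sanity check after applying the policy, reading it back with docker inspect should report always (a sketch; <container> is your container name or ID):
docker update --restart=always <container>
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' <container>   # should print "always"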
I had to restart the Docker process to revive my container; there was nothing else I could do to solve it. I used sudo service docker restart and then revived my container using docker run. I will try to build a Dockerfile out of it in order to avoid future mishaps.
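Until a proper Dockerfile exists, one way to guard against losing the container's state is to snapshot it as an image (a sketch; my-revived-image is a hypothetical name):
docker commit <containerid> my-revived-image:latest   # save the container's filesystem as an image
docker run -it my-revived-image:latest /bin/bash      # start a fresh container from that snapshot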
I installed Docker CE on an Ubuntu 18.04 server. Then I installed a new Jenkins container, and everything worked well for two weeks.
After two weeks, for some reason, when I run docker ps I get an empty list, although the Jenkins container is running and functioning (it worked in the past). I also tried docker ps -a and docker images, and again, everything is empty. I also tried restarting the server, and still, every time, the list is empty.
I then uninstalled and reinstalled Docker, and right after the installation, running docker ps showed the containers... I thought the problem was fixed, but today it happened again, and I still see an empty list when running docker ps. Any ideas? It would be much appreciated.
Run the command sudo service docker stop
After that, find the dockerd process:
ps aux | grep "dockerd"
and kill it with:
sudo kill -9 {paste_dockerd_pid_here}
Then start the Docker service:
sudo service docker start
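If you prefer not to copy the PID by hand, the same sequence can be scripted with pgrep (a sketch, assuming a single leftover dockerd process):
sudo service docker stop
sudo kill -9 "$(pgrep -x dockerd)"   # force-kill the leftover daemon
sudo service docker start
sudo docker ps                       # the container list should be populated again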
I am using an Amazon Linux machine (p2).
I have installed this docker version:
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: 7392c3b/17.03.2-ce
Built: Wed Aug 9 22:45:09 2017
OS/Arch: linux/amd64
I'm not sure, but I think the issue started after killing a screen session that was running some Docker container.
I'm experiencing this error:
sudo docker ps
Gives:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
And:
sudo service docker status
Gives:
docker dead but subsys locked
I have tried both:
sudo rm -rf /var/run/docker
sudo rm /var/run/docker.*
I also tried restarting and stopping:
sudo service docker start/stop
I also rebooted the EC2 machine.
Try this, then restart Docker:
yum update device-mapper-libs
sudo service docker restart
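After the restart, the same checks from the question can confirm the daemon is healthy again (a sketch):
sudo service docker status   # should no longer report "docker dead but subsys locked"
sudo docker ps               # should connect to the daemon instead of the socket error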
I was also facing the same issue, although I (sort of) fixed it by issuing sudo service docker stop and sudo service docker start before running anything in Docker.
Details: I am using Docker on a spot instance, so it is set up every time I need to perform some task. I create the Docker container and upload my files without any problem, but when I issue the command to run an uploaded bash script in Docker, I face this issue of Docker not running. So before running the script I just stop and start Docker. Weirdly, simply doing sudo service docker start or even sudo service docker restart did not solve my problem; I had to specifically issue both the stop and start commands. But I don't have enough data points just yet: it has only been working for the last couple of days, and I am not in a hurry to test this hypothesis (of issuing both commands and not just one).
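For reference, the workaround amounts to something like this in the spot-instance setup script (a sketch; run_task.sh is a hypothetical name for the uploaded bash script):
sudo service docker stop
sudo service docker start
sudo docker ps     # sanity check that the daemon is reachable
./run_task.sh      # the uploaded script that actually uses Docker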
I had 10 Docker containers running on an EC2 instance (t2.large); each container was running as its own service, and all the services were running in a cluster. I updated the timezone of the EC2 instance, which required me to reboot it. I rebooted the instance, and this problem surfaced. The first thing I noticed was that ssh into the machine was slower than before. I later realized docker ps was throwing that error, which I "magically" resolved, only to find that some of the containers were running but not serving any pages. docker logs -f CONTAINER_ID let me know nginx didn't start due to privilege issues: some of the files that were supposed to be created had not been created.
I later realized that my magical solution was not really a solution (most magical solutions are not solutions): all 10 of my containers were trying to start at the same time, which required more memory than my instance could offer. I had to delete the services and recreate them one by one, allowing one container to start before creating another one in the same cluster. That was when I had peace. I hope this helps somebody.
I keep getting a connection timeout while pulling an image:
First, it starts downloading the first 3 layers; after one of them finishes, the 4th layer tries to start downloading. The problem is that it won't start until the two remaining layers finish their downloads, and before that happens (I think) the fourth layer fails to start downloading and aborts the whole process.
So I was wondering whether downloading the layers one by one would solve this problem.
Or maybe there is a better way/option to solve this issue, which can occur when you don't have a very fast internet connection.
The Docker daemon has a --max-concurrent-downloads option.
According to the documentation, it sets the max concurrent downloads for each pull.
So you can start the daemon with dockerd --max-concurrent-downloads 1 to get the desired effect.
See the dockerd documentation for how to set daemon options on startup.
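If you'd rather not pass the flag by hand every time, the same option can be set persistently in /etc/docker/daemon.json (a sketch; note this overwrites any existing daemon.json, so merge the key in instead if you already have one):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "max-concurrent-downloads": 1
}
EOF
sudo service docker restart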
If Docker is already running, follow these steps on Ubuntu:
sudo service docker stop
sudo dockerd --max-concurrent-downloads 1
Download your images; after that, close this terminal session and start the daemon again as it was before:
sudo service docker start
There are 2 ways:
Permanent change: add a Docker settings file:
sudo vim /etc/docker/daemon.json
with the JSON content below:
{
"max-concurrent-uploads": 1,
"max-concurrent-downloads": 4
}
After adding the file, run:
sudo service docker restart
Temporary change:
Stop Docker with:
sudo service docker stop
then run:
sudo dockerd --max-concurrent-uploads 1
At this point, start the push in another terminal; it will transfer the layers one by one. When you are finished, restart the service or the computer.
Building on the previous answers: in my case I couldn't do a service stop, and I also wanted to make sure I would restart the Docker daemon in the same state, so I followed these steps:
Record the command line used to start the docker daemon:
ps aux | grep dockerd
Stop the docker daemon:
sudo kill <process id retrieved from previous command>
Restart the Docker daemon with the max-concurrent-downloads option: use the command recorded in the first step and add --max-concurrent-downloads 1
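Concretely, the sequence looks roughly like this (a sketch; <original flags> stands for whatever options the recorded command line already contained):
ps aux | grep dockerd                                          # note the PID and the full command line
sudo kill <pid>                                                # stop the daemon
sudo dockerd <original flags> --max-concurrent-downloads 1 &   # restart it with the extra option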
Additionally
You might still run into a problem if, even with a single download at a time, your pull is aborted at some point and the layers that were already downloaded are erased. It's a bug, but it was my case.
A workaround in that case is to deliberately make sure the already-downloaded layers are kept.
The way to do that is to regularly abort the pull manually, but NOT by killing the docker command; instead, KILL THE DOCKER DAEMON.
It's actually the daemon that erases the already-downloaded layers when the pull fails, so by killing it, it can't erase those layers. The docker pull command does terminate, but once you restart the Docker daemon and relaunch your docker pull command, the downloaded layers are still there.
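In shell terms, the manual-abort trick looks something like this (a sketch, assuming your init system does not immediately respawn dockerd):
docker pull <repo>                # terminal 1: the long-running pull
sudo kill "$(pgrep -x dockerd)"   # terminal 2: kill the daemon, not the pull, so finished layers survive
sudo service docker start        # bring the daemon back
docker pull <repo>               # already-downloaded layers are reused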
I'm new to docker and have followed the installation instructions on their site here.
The installation completed successfully:
docker -v
Docker version 1.8.1, build d12ea79
but when I try to run
sudo docker run hello-world
I get the following:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
535020c3e8ad: Pulling fs layer
af340544ed62: Layer already being pulled by another client. Waiting.
af340544ed62: Layer already being pulled by another client. Waiting.
This then continues to hang indefinitely.
I have tried restarting the service and my entire machine. I always get the same problem.
Any idea what's causing this or how to resolve it?
This command helped on my end on Ubuntu 14.04 (Docker version 1.8.1, build d12ea79):
sudo restart docker
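(That is the Upstart syntax used on Ubuntu 14.04.) After the restart, re-running the original command should confirm the fix (a sketch):
sudo docker run hello-world   # should now pull hello-world:latest cleanly and print the greeting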
This seems to have now resolved itself. Quite possibly it was caused by a problem at docker's end.