I'm trying to re-push an image after the previous push was interrupted by a network dropout. But I get this error:
Error: push rimian/ruby-node-npm is already in progress
But when I run docker ps I don't see anything running.
What shall I do?
Restart the Docker service on the host where you are running Docker.
On Ubuntu:
sudo service docker restart
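If your host uses systemd rather than upstart (an assumption about your setup), the equivalent is:
sudo systemctl restart docker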
Just wait.
I had this once, too, and the problem is that the push is still running in the background, hence you can't do another one.
So just wait, and the problem will disappear automatically after some time.
The reason you don't see anything running with docker ps is that this command only shows containers, not internal Docker processes, and pushing an image is not run in a container.
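Since the work happens inside the daemon rather than in a container, the daemon's logs are where you might see what it is doing; on a systemd-based host (an assumption), you could try:
sudo journalctl -u docker.service -f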
I have been running an nvidia docker image for 13 days, and it used to restart without any problems with the docker start -i <containerid> command. But today, while I was downloading PyTorch inside the container, the download got stuck at 5% and gave no response for a while.
I couldn't exit the container with either Ctrl+D or Ctrl+C, so I closed the terminal and ran docker start -i <containerid> again in a new one. Ever since, this particular container has not responded to any command: start, restart, exec, commit ... nothing! Any command with this container ID or name just hangs, and I can only get out of it with Ctrl+C.
I cannot restart the docker service since it will kill all running docker containers.
I cannot even stop the container with docker container stop <containerid>.
Please help.
You can make use of Docker's restart policy:
docker update --restart=always <container>
while being mindful of the caveats for the Docker version you are running,
or explore the answer by Yale Huang to a similar question: How to add a restart policy to a container that was already created.
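If you apply the policy, you can confirm it took effect with a docker inspect query, for example:
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <containerid>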
I had to restart the Docker process to revive my container; there was nothing else I could do to solve it. I used sudo service docker restart and then revived my container using docker run. I will try to build a Dockerfile out of it in order to avoid future mishaps.
I am running a Docker container which is trying to access a port in another Docker container. Both of them are configured to run on the same network. But as soon as I start this container, it gets killed and doesn't throw any error. There are no error logs. I also tried docker inspect but couldn't find much.
PS: I am a newbie docker user.
Following up on the OP's comment, the ENTRYPOINT is:
ENTRYPOINT /configure.sh && bash
Answer
Given your ENTRYPOINT, the container will always exit, since the main process is bash, which exits immediately when the container is not started with an interactive TTY. You need a continuously running process in the foreground, i.e. an application daemon, for the container to stay running.
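A minimal sketch of one way to do that, assuming /configure.sh only performs one-off setup and using my-server as a placeholder for your actual long-running process:
ENTRYPOINT ["/bin/sh", "-c", "/configure.sh && exec my-server"]
Alternatively, keep the original ENTRYPOINT but start the container with docker run -it (or -dit for detached), so that bash has an interactive TTY keeping it alive.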
I keep getting a connection timeout while pulling an image:
First, it starts downloading the first 3 layers; after one of them finishes, the 4th layer tries to start downloading. The problem is that it won't actually start until the two remaining layers finish their downloads, and before that happens (I think) the fourth layer fails to start downloading and aborts the whole process.
So I was thinking: would downloading the layers one by one solve this problem?
Or maybe there is a better way/option to solve this issue, which can occur when you don't have a very fast internet connection.
The Docker daemon has a --max-concurrent-downloads option.
According to the documentation, it sets the max concurrent downloads for each pull.
So you can start the daemon with dockerd --max-concurrent-downloads 1 to get the desired effect.
See the dockerd documentation for how to set daemon options on startup.
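If your daemon is managed by systemd (an assumption about your setup), one way to set the option on startup is a drop-in override; keep whatever flags your existing unit already passes, the ExecStart below is only an example:
sudo systemctl edit docker
Then, in the editor that opens, add:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --max-concurrent-downloads 1
Save, and restart the daemon:
sudo systemctl restart docker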
Please follow these steps if Docker is already running on Ubuntu:
sudo service docker stop
sudo dockerd --max-concurrent-downloads 1
Download your images, then close this terminal and start the daemon again as it was before:
sudo service docker start
There are 2 ways:
Permanent change: add a Docker settings file:
sudo vim /etc/docker/daemon.json
The JSON file is as below (this example limits concurrent uploads to 1; for the pull problem above, lower max-concurrent-downloads instead):
{
"max-concurrent-uploads": 1,
"max-concurrent-downloads": 4
}
After adding the file, run:
sudo service docker restart
Temporary change:
Stop Docker with:
sudo service docker stop
Then run:
sudo dockerd --max-concurrent-uploads 1
At this point, start the push in another terminal; it will transfer the layers one by one. When you are finished, restart the service (or the computer).
Building on the previous answers: in my case I couldn't do a service stop, and I also wanted to make sure I would restart the Docker daemon in the same state, so I followed these steps:
Record the command line used to start the docker daemon:
ps aux | grep dockerd
Stop the docker daemon:
sudo kill <process id retrieved from previous command>
Restart the Docker daemon with the max-concurrent-downloads option: use the command line recorded in the first step and add --max-concurrent-downloads 1.
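Putting it together, the sequence looks roughly like this (the -H fd:// flag is only an example; reuse whatever flags appeared in your own dockerd command line):
ps aux | grep dockerd
sudo kill <pid>
sudo dockerd -H fd:// --max-concurrent-downloads 1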
Additionally
You might still run into a problem: even with a single download at a time, your pull may still be aborted at some point, and the layers that were already downloaded get erased. It's a bug, but it was my case.
A workaround in that case is to make sure the already downloaded layers are kept, deliberately.
The way to do that is to regularly abort the pull manually, NOT by killing the docker command, but BY KILLING THE DOCKER DAEMON.
Actually, it's the daemon that erases already downloaded layers when a pull fails; by killing it, it can't erase those layers. The docker pull command does terminate, but once you restart the Docker daemon and relaunch your docker pull command, the downloaded layers are still there.
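A rough sketch of that workaround, assuming a systemd-managed host (adapt the restart command to your setup):
In one terminal, start (or relaunch) the pull:
docker pull <user>/<image>
In another terminal, once a few layers have completed, kill the daemon rather than the pull:
sudo pkill -9 dockerd
sudo systemctl start docker
Then run docker pull <user>/<image> again; the layers that already finished are kept and reused.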
I interrupted the following command: docker push <user>/docker-whale.
If I try running it again, I get:
Error response from daemon: push <user>/docker-whale is already in progress
I understand that the upload is still running in the background and that I only interrupted the client output. However, is there a way to get that output back?
Also, if it's somehow stuck, how would you restart the push operation?
This happens because you stopped the push before it finished.
You don't need to remove any containers; just restart boot2docker (or the Docker service).
The command would be:
boot2docker restart (on Mac)
service docker restart (on Linux)
After that, you can push your image again. Good luck!
I had the same issue on Mac; restarting boot2docker and removing the stopped containers fixed it:
boot2docker restart
docker ps -aq -f status=exited | xargs docker rm
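On newer Docker versions (assuming yours supports it), docker container prune does the same cleanup of stopped containers in one step:
docker container prune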
I am using docker version 1.1.0, started by systemd using the command line /usr/bin/docker -d, and tried to:
run a container
stop the docker service
restart the docker service (either using systemd or manually, specifying --restart=true on the command line)
see if my container was still running
As I understand the docs, my container should be restarted. But it is not. Its public facing port doesn't respond, and docker ps doesn't show it.
docker ps -a shows my container with an empty status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb0d05b4e0d9 mildred/p2pweb:latest node server-cli.js - 7 minutes ago 0.0.0.0:8888->8888/tcp jovial_ritchie
...
And when I try to docker restart cb0d05b4e0d9, I get an error:
Error response from daemon: Cannot restart container cb0d05b4e0d9: Unit docker-cb0d05b4e0d9be2aadd4276497e80f4ae56d96f8e2ab98ccdb26ef510e21d2cc.scope already exists.
2014/07/16 13:18:35 Error: failed to restart one or more containers
I can always recreate a container from the same base image using docker run ..., but how do I make sure that my running containers will be restarted if Docker is restarted? Is there a solution that works even when Docker is not stopped properly (imagine I pull the power plug from the server)?
Thank you
As mentioned in a comment, the container flag you're likely looking for is --restart=always, which will instruct Docker that unless you explicitly docker stop the container, Docker should start it back up any time either Docker dies or the container does.
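A minimal sketch with the image from the question, reusing the port mapping and command shown in the docker ps -a output above (adjust to your setup):
docker run -d --restart=always -p 8888:8888 mildred/p2pweb node server-cli.js
On newer Docker versions you can also apply the policy to an already created container with docker update --restart=always <containerid>.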