Today I tried to run my containers in detached mode and I ran into an issue.
When I ran the command docker container run -d nginx, the nginx image was pulled and the container's output was not shown, as expected in detached mode.
Then I ran the command docker container ls, which we all know shows only running containers, and it showed my nginx container running.
Then I tried the same thing with the ubuntu image, i.e.
docker container run -d ubuntu, but when I ran the docker container ls command my ubuntu container was not listed; only the nginx container was running.
Why is it so?
You don't see a running container with the ubuntu image because the container stops immediately after it starts. While the nginx image starts an nginx server process that keeps the container running, the default command of the ubuntu image is bash, which exits right away when no terminal is attached. You will be able to see your stopped ubuntu container with docker ps -a.
If you want to keep the ubuntu container running, you need to pass it a command that starts a process that keeps on running, e.g. docker run -d ubuntu tail -f /dev/null
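A side note: another way to keep the default bash process of the ubuntu image alive (just a sketch; the container name my-ubuntu below is an arbitrary example) is to allocate a TTY and keep stdin open by adding -it to -d:
docker run -dit --name my-ubuntu ubuntu   # bash stays alive because stdin is kept open
docker exec -it my-ubuntu bash            # attach a shell later when you need one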
I have accessed a Remote Machine (call it RM) through SSH (from my host), and I am running a docker container inside RM via my SSH session. Both are Ubuntu 16.04 based.
There are some processes running inside this docker container, so I can't exit the container.
So, how do I detach this ssh session from my host so that those processes inside the docker container keep running unaffected?
I am doing this, because I have to restart my host machine for some purpose.
PS:
In this link, Correct way to detach from a container without stopping it, the docker container is not being run via an SSH session, so the two scenarios are different.
First, you have to start your Docker container in detached (non-interactive) mode, using the -d argument and dropping -it. Don't forget to name your container for later use with the --name foo option.
After the container is started, you can control it using docker exec -it foo sh-or-whatever. If your ssh session terminates, the container will continue running; however, your docker exec session will be over.
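For example (foo and my-image are just placeholder names, substitute your own), a session that survives an SSH disconnect could look like this:
docker run -d --name foo my-image   # start detached, no -it
docker exec -it foo bash            # work inside it; exiting this shell does not stop the container
docker logs foo                     # after reconnecting over SSH, check what happened meanwhile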
I am trying to run a docker example following this documentation
This is my command:
docker run -d -p 80:80 --name webserver nginx
But I get this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint webserver (bd57efb73c738e3b271db180ffbee0a56cae86c8193242fbc02ea805101df21e): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error (Failure EADDRINUSE).
How do I fix this?
From your error message, EADDRINUSE indicates port 80 is already in use on either the docker VM or possibly directly on your laptop. You can either stop whatever is running on that port, or change the port used in your docker command. To change the external port to 8080, use:
docker run -d -p 8080:80 --name webserver nginx
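Assuming the container starts this time, you can verify the new mapping and reach nginx on the host port (8080 in this example):
docker port webserver        # should print something like 80/tcp -> 0.0.0.0:8080
curl http://localhost:8080   # or open that address in a browser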
If you are sure the port is not in use, try restarting Docker. That usually works for me.
I had the same issue with one of my containers. I tried everything, and when nothing else worked, the following did the trick and the container launched again successfully:
sudo service docker stop
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start
Try restarting the docker service. It works 99% of the time.
service docker restart
If that didn't work as expected, try restarting your PC and then restarting the docker service using the above command.
If none of the above worked, try changing the exposed port to another unused port; that should work.
docker run -d -p 81:80 --name webserver nginx
Note: 81 is the port on your host and 80 is the port inside the docker container.
When I made my first simple dockerized web app, I also faced the same problem.
You can try the following steps to resolve the problem; they should also help you understand in detail why you hit it in the first place (a compact, one-shot version of these steps is sketched after the list).
Step-1: check all the running containers using the command:
docker ps
Step-2: Find the container id of the container that is running on the port you are trying to use.
Step-3: Stop the container that is running on that port using this command:
docker stop <container id>
Step-4: Build your image again:
docker build -t DockerID/projectName .
Step-5: Try running your container on the same port again, using port mapping.
docker run -p 8080:8080 DockerID/projectName
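As a compact, one-shot version of the steps above (the image tag DockerID/projectName and port 8080 are just the placeholders from the example; the publish filter needs a reasonably recent Docker):
docker ps --filter publish=8080          # list only containers publishing host port 8080
docker stop <container id>               # free the port
docker build -t DockerID/projectName .
docker run -p 8080:8080 DockerID/projectName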
Try this command:
sudo service docker restart
If it does not help, restart your server.
Stop all the running containers with docker stop $(docker ps -a -q), then
stop Docker on your machine and restart it.
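On a systemd-based Linux host, the restart could look like this (docker is the usual service name; adjust it if yours differs):
sudo systemctl restart docker   # restart the Docker daemon after the containers are stopped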
Recently this problem started happening a lot on Windows. You can try restarting Docker, or you can manually stop Docker before Windows shuts down; Docker then starts cleanly on reboot. As of 7/24/2018 the docker issue is still open and further details can be found at https://github.com/docker/for-win/issues/1967
Check what's on port 80 right now - sudo ss -tulpn | grep :80
You may have apache2 running.
You can check it - sudo service apache2 status
If so - sudo service apache2 stop
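If you also want Apache to stay out of the way after the next reboot, disabling it is an option (assuming a systemd-based system):
sudo systemctl disable apache2   # keep it from starting automatically on boot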
If you tried all the above solutions and are still having issues, you can find and kill the listening processes manually as below (for Linux users):
sudo lsof -i -P -n | grep LISTEN
sudo kill -9 <process_pid> (ex. sudo kill -9 28563 28575 28719 28804)
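To narrow the output down to whatever is holding port 80 specifically, lsof can filter by port and state (a sketch; the PID will of course differ on your machine):
sudo lsof -iTCP:80 -sTCP:LISTEN   # show only processes listening on TCP port 80
sudo kill -9 <process_pid>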
In my case, port 80 is the default port for the web server and therefore it is protected. I changed the bind to 60:8080 to make sure there were no deeper issues. Changing the bind to a different port allowed me to execute docker run and hit it in the browser at http://ip:60
I had the same problem with the same error.
Since I had nginx installed locally on my computer, running another nginx through the container caused a conflict on port 80.
I simply stopped the service of my locally installed nginx as below:
sudo service nginx stop
After that, I could run nginx via docker-compose up -d without any problem:
Creating MyWebServer ... done
Creating mongo ... done
Creating redis ... done
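If you do not want the locally installed nginx to grab port 80 again after a reboot, you could also disable it (assuming a systemd-based system):
sudo systemctl disable nginx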
For me, a simple
ddev poweroff
fixed this.
If this happens with Redis: remove the ports - ... mapping in the docker-compose file and let the port be assigned automatically, or change the port mapping on the host from 6379:6379 to 6378:6379; that worked for me.
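Either way, you can check which host port the service actually ended up with (assuming the compose service is named redis, as in the mapping above):
docker-compose port redis 6379   # prints the host address:port mapped to container port 6379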
Windows users: from the Docker documentation:
On Windows systems, CTRL+C does not stop the container. So, first type CTRL+C to get the prompt back (or open another shell), then type docker container ls to list the running containers, followed by docker container stop <container id> to stop the container. Otherwise, you get an error response from the daemon when you try to re-run the container in the next step.
I had the same problem. I thought CTRL+C had stopped the container, but that was not the case. Any of the answers above works because they all stop containers, either by restarting Docker or by stopping the container explicitly.
I prefer:
docker container ls #list containers running
docker stop [container id] #replace [container id] with the container id running
This seems to be an incompatibility problem with Windows "fast startup", as described here: https://github.com/docker/for-win/issues/2722 (just restarting the docker service may also work).
This is caused by an incompatibility between Docker and fast startup. You can either make sure you stop all containers before shutting Windows down, or you can disable fast startup in Windows' power settings by doing the following:
Win+R > "powercfg.cpl" > "Choose what the power buttons do" > "Change settings that are currently unavailable" > Deselect "Turn on fast start-up"
You can also disable fast startup with a single command in PowerShell if you're comfortable doing so:
Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Power\' -Name HiberbootEnabled -Value 0
If you are using WSL: after I tried all of the above and it still didn't work, I restarted WSL from PowerShell with admin privileges using the shutdown command:
wsl --shutdown
That worked for me.
I am using docker-machine with Google Compute Engine (GCE) to run a docker swarm cluster. I created a swarm successfully with 2 nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command on the container using docker exec, I got the error that the container is not running, while docker ps shows otherwise. I ssh'ed into swnd-01 via docker-machine, only to find out that the container had exited as soon as it was created. I tried the docker run command inside swnd-01 but it still exits. I don't understand this behavior.
Any suggestions will be thankfully received.
The reason it exits is that the /bin/bash command completes and a Docker container only runs as long as its main process (if you run such a container with the -it flags the process will keep running while the terminal is attached).
As to why the swarm manager thought the container was still running, I'm not sure. I guess there is a short delay while Swarm updates the status of everything.
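If the goal is simply a long-lived ubuntu container on the node, two common workarounds (sketches, pick whichever fits) are to keep stdin and a TTY open, or to run a command that never finishes:
docker run -dit ubuntu /bin/bash      # -i keeps stdin open, so bash does not exit
docker run -d ubuntu sleep infinity   # or run something that simply never terminates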
I'm running a docker container interactively:
sudo docker run --rm -t -i CONTAINER_NAME bash
I need instances of containers to be purged after usage. Also, the container no longer makes sense once the tty is lost. When the session is closed from the container side (exit in bash) everything works fine, but if my ssh session to the host disconnects, the container stays running (it is shown in docker ps). This can also be reproduced by opening the container in a tmux window and then killing the window.
Is there a way to make docker stop a container if the host process (ssh session or tmux) that is attached to its tty terminates?