I'm getting a strange error: when I try to run a Docker container with a name, it gives me this error.
docker: Error response from daemon: service endpoint with name qc.T8 already exists.
However, there is no container with this name.
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
> sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 3
Server Version: 1.12.3
Storage Driver: aufs
Root Dir: /ahdee/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-101-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 64
Total Memory: 480.3 GiB
Is there any way I can flush this out?
Just in case someone else needs this: as @Jmons pointed out, it was a weird networking issue, so I solved it by forcing a removal:
docker network disconnect --force bridge qc.T8
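To confirm the stale endpoint is really gone (or to spot it in the first place), you can inspect the bridge network. A quick check, assuming the default bridge network and the endpoint name qc.T8 from the error:
# List the endpoints currently attached to the default bridge network
docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'
# qc.T8 should no longer appear after the forced disconnect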
TL;DR: restart your Docker daemon, or restart your docker-machine (if you're using that, e.g. on a Mac).
Edit: there are more recent posts below that answer the question better than mine. The network adapter is stuck on the daemon. I'm updating mine as it's possibly 'on top' of the list and people might not scroll down.
Restarting your docker daemon / docker service / docker-machine is the easiest answer.
The better answer (via Shalabh Negi):
docker network inspect <network name>
docker network disconnect <network name> <container id/ container name>
This is also faster in real time if you can find the network, as restarting the docker machine/daemon/service is, in my experience, slow. If you use that, please scroll down and click +1 on their answer.
So the problem is probably your network adapter (the virtual Docker one, not a real one): have a quick peek at this: https://github.com/moby/moby/issues/23302.
Preventing it from happening again is a bit tricky. It seems there may be an issue in Docker where a container that exits with a bad status code (e.g. non-zero) holds the network endpoint open; you then can't start a new container with that endpoint.
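If you're not sure which network is holding the stale endpoint, a small loop can scan them all. A sketch, again assuming the endpoint name qc.T8 from the question:
# Print every network's name followed by the names of its attached endpoints
for net in $(docker network ls -q); do
  docker network inspect "$net" --format '{{.Name}}: {{range .Containers}}{{.Name}} {{end}}'
done | grep qc.T8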
You can also try doing:
docker network prune
docker volume prune
docker system prune
These commands will help clear out zombie containers, volumes, and networks.
If no command works, then do
sudo service docker restart
and your problem should be solved.
docker network rm <network name>
Worked for me
Restarting docker solved it for me.
I created a script a while back; I think it should help people working with swarm. If you're using docker-machine, this can help a bit.
https://gist.github.com/lcamilo15/7aaaebe71852444ea8f1da5c4c9c84b7
declare -a NODE_NAMES=("node_01" "node_02")
declare -a CONTAINER_NAMES=("container_a" "container_b")
declare -a NETWORK_NAMES=("network_1" "network_2")
for x in "${NODE_NAMES[@]}"; do
    # Point the docker client at each swarm node in turn
    docker-machine env "$x"
    eval "$(docker-machine env "$x")"
    for CONTAINER_NAME in "${CONTAINER_NAMES[@]}"; do
        for NETWORK_NAME in "${NETWORK_NAMES[@]}"; do
            echo "Disconnecting $CONTAINER_NAME from $NETWORK_NAME"
            docker network disconnect -f "$NETWORK_NAME" "$CONTAINER_NAME"
        done
    done
done
You could try seeing if there's any network with that container name by running:
docker network ls
If there is, copy the network ID, then go on to remove it by running:
docker network rm network-id
This could be because an abrupt removal of a container may leave the network open for that endpoint (container-name).
Try stopping the container first before removing it.
docker stop <container-name>. Then docker rm <container-name>.
Then docker run <same-container-name>.
I think restarting the Docker daemon will solve the problem.
Even a reboot did not help in my case. It turned out that port 80, which the nginx container was supposed to bind automatically, was already in use, even after the reboot. How come?
root@IONOS_2: /root/2_proxy # netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:873 0.0.0.0:* LISTEN 1378/rsync
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1565/systemd-resolv
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1463/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1742/sshd
tcp6 0 0 :::2377 :::* LISTEN 24139/dockerd
tcp6 0 0 :::873 :::* LISTEN 1378/rsync
tcp6 0 0 :::7946 :::* LISTEN 24139/dockerd
tcp6 0 0 :::5355 :::* LISTEN 1565/systemd-resolv
tcp6 0 0 :::21 :::* LISTEN 1447/vsftpd
tcp6 0 0 :::22 :::* LISTEN 1742/sshd
tcp6 0 0 :::5000 :::* LISTEN 24139/dockerd
No idea what nginx: master means or where it came from. And indeed 1463 is the PID:
root@IONOS_2: /root/2_proxy # ps aux | grep "nginx"
root 1463 0.0 0.0 43296 908 ? Ss 00:53 0:00 nginx: master process /usr/sbin/nginx
root 1464 0.0 0.0 74280 4568 ? S 00:53 0:00 nginx: worker process
root 30422 0.0 0.0 12108 1060 pts/0 S+ 01:23 0:00 grep --color=auto nginx
So I tried this:
root@IONOS_2: /root/2_proxy # kill 1463
root@IONOS_2: /root/2_proxy # ps aux | grep "nginx"
root 30783 0.0 0.0 12108 980 pts/0 S+ 01:24 0:00 grep --color=auto nginx
And the problem was gone.
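Note that killing the PID only helps until the next boot if nginx is managed by the host's init system, which would also explain why the port was taken again after a reboot. A sketch, assuming a systemd-managed nginx on the host:
# Stop the host's nginx now and keep it from starting at boot
sudo systemctl disable --now nginx
# Confirm nothing is listening on port 80 any more
sudo netstat -tlpn | grep ':80 ' || echo "port 80 is free"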
I installed Docker using snap (during the install process of Ubuntu 22.04) and it was working fine; all my containers were spun up using docker run ...
This was until I installed docker-compose using apt later on. When I attempted to bring up containers with docker-compose I would get errors stating that the port was already in use.
So I then checked what program/command was using these ports:
sudo lsof -i :9091
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 1883 root 4u IPv4 28696 0t0 TCP *:9091 (LISTEN)
docker-pr 1890 root 4u IPv6 27395 0t0 TCP *:9091 (LISTEN)
sudo netstat -pna | grep 9091
tcp 0 0 0.0.0.0:9091 0.0.0.0:* LISTEN 1883/docker-proxy
tcp6 0 0 :::9091 :::* LISTEN 1890/docker-proxy
This showed that my container was still somehow running, as the port was in use. However, when running docker ps -a no containers were listed...
The commands above all pointed towards docker-proxy. What is this service? And why is it so under the radar that Docker itself can't stop the container with commands like docker rm $(docker ps -aq)? I'm also not sure why my container became invisible and why I was unable to stop it without stopping the Docker service entirely.
I'm trying to run
/usr/bin/docker run --rm -v /var/data/redis:/data -v /var/data/conf/redis.conf:/usr/local/etc/redis/redis.conf --name redis -p 6379:6379 redis:5.0.3-alpine3.9
but I get:
/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint redis (f16f19b7727a710fb6c96be566dac66ce26282982960d97faa28861c24fcf2fb): Bind for 0.0.0.0:6379 failed: port is already allocated.
When I try to check the ports used with netstat, I get:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
I have no docker containers running right now.
I don't understand this issue. What should I do?
[root@artik ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Steps I had to take to get everything working:
sudo service docker stop
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start
docker system prune
And then try again.
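After the restart, Docker rebuilds its local network database from scratch; a quick sanity check:
# The default bridge/host/none networks should be recreated automatically
docker network ls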
From your netstat output it's clear that there is one process holding port 6379:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
docker-proxy processes are created when you do port forwarding in docker run, which is true in your case: -p 6379:6379.
For more info on docker-proxy, check this out.
I suspect that you earlier ran a redis container using port 6379 that was not properly deleted, which kept the docker-proxy process running, and hence you got port is already allocated.
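A sketch for clearing the orphaned proxy by hand, assuming PID 2471 from your netstat output; restarting the daemon achieves the same thing:
# Kill the orphaned userland proxy that is still holding the port
sudo kill 2471
# Or restart the daemon, which tears down all docker-proxy processes
sudo service docker restart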
Hope this helps.
As DannyMoshe suggested, for anyone else:
Try this before you potentially mess up your whole setup:
sudo service docker stop
sudo service docker start
Remove the ports: - ... mapping in the docker-compose file and let Docker assign the port by itself, or change the port mapping on the host side from 6379:6379 to 6378:6379; that worked for me. Before doing this you may need to clear already-started containers: docker rm -f $(docker ps -a -q)
On Ubuntu 18.04, I'm trying to install Hyperledger Cello, and during the install, I get:
make[2]: Entering directory '/home/julien/cello'
docker-compose -f bootup/docker-compose-files/docker-compose-nfs.yml up -d --no-recreate
WARNING: Found orphan containers (cello-user-dashboard, cello-operator-dashboard, cello-watchdog, cello-keycloak-server, cello-parse-server, cello-dashboard_rabbitmq, cello-mongo, cello-keycloak-mysql, cello-engine) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting cello-nfs ... error
ERROR: for cello-nfs Cannot start service nfs: driver failed programming external connectivity on endpoint cello-nfs (d1be7a4999731983a12df9f1fb6484c7adf669be7edf01c6d962856ed8a6846f): Error starting userland proxy: listen tcp 0.0.0.0:2049: bind: address already in use
ERROR: for nfs Cannot start service nfs: driver failed programming external connectivity on endpoint cello-nfs (d1be7a4999731983a12df9f1fb6484c7adf669be7edf01c6d962856ed8a6846f): Error starting userland proxy: listen tcp 0.0.0.0:2049: bind: address already in use
ERROR: Encountered errors while bringing up the project.
When trying to figure out which application is using port 2049, I do:
➜ cello git:(master) ✗ sudo netstat -pna | grep 2049
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp6 0 0 :::2049 :::* LISTEN -
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp6 0 0 :::2049 :::* -
unix 3 [ ] STREAM CONNECTED 204951 18122/brave --type=
unix 3 [ ] STREAM CONNECTED 204950 5193/brave
But I get no app name.
I also tried to remove containers with
docker rm -f $(docker ps -aq)
as suggested in this post, but it didn't work.
What should I do to free this port?
You can try:
docker stop $(docker ps -a -q)
docker ps # run again to make sure all containers are off
sudo lsof -i tcp:2049 # lists the processes using port 2049; find and copy the PID
sudo kill -9 your_PID
Now that the process on port 2049 is killed, try starting the containers again...
It looks as if you have an NFS server running on your host. When you run netstat -p ... as root and you don't see a PID for a port, like this...
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp6 0 0 :::2049 :::* LISTEN -
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp6 0 0 :::2049 :::* -
...it generally means there is a kernel service bound to that port. Disabling the kernel NFS server (assuming that you're not using it) should allow you to run your container.
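If you do want to turn it off, here is a minimal sketch assuming a systemd-based host; the unit is usually called nfs-kernel-server on Debian/Ubuntu and nfs-server on RHEL/CentOS:
# Stop the kernel NFS server now and keep it from starting at boot
sudo systemctl disable --now nfs-kernel-server
# Verify that nothing is listening on port 2049 any more
sudo netstat -lnp | grep ':2049 ' || echo "port 2049 is free"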
I'm trying to configure the Docker daemon so I can connect to it from inside the Docker containers I start.
So I changed /etc/docker/daemon.json to
{
"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
so that I can connect to it through the Docker bridge. However, when I restart Docker, I get:
netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 3728/mysqld
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 24253/redis-server
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3756/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3634/sshd
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3756/nginx
tcp6 0 0 :::8010 :::* LISTEN 4230/apache2
tcp6 0 0 :::9200 :::* LISTEN 26824/java
tcp6 0 0 :::9300 :::* LISTEN 26824/java
tcp6 0 0 :::22 :::* LISTEN 3634/sshd
tcp6 0 0 :::2375 :::* LISTEN 1955/dockerd
So at first I thought the issue was that it was listening on IPv6, not IPv4. But according to
Make docker use IPv4 for port binding
it should all still work, but it doesn't. When I try
telnet 172.17.0.1(docker host) 2375
it fails to connect while
telnet 172.17.0.1(docker host) 80
works. How can I connect to Docker running on the host machine? I'm running Ubuntu 14.04.5, Docker version 17.06.2-ce.
You can start your containers mounting the host docker socket into your containers.
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
With this setup, Docker clients inside the containers will use the Docker daemon from the host. Your containers will be able to build, run, push, etc. using the daemon running on the host. Please note that with this setup everything is happening on the host, so if you start new containers they will be “sibling” containers.
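A quick way to verify the socket mount works; a sketch assuming the official docker image (which bundles the client):
# List the host's containers from inside a container via the mounted socket
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps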
EDIT
If you are using the bridge network, you can connect to any service running on host machine using host IP address.
For example, I have mysqld running on my host with IP 10.0.0.1 and from a container I can do
mysql -u user -p -h 10.0.0.1
The trick is to find out the host IP address from containers.
In Docker for Mac (I am running version 17.07.0) it is as simple as connecting to the special host "docker.for.mac.localhost".
Another option is to add an alias IP to your loopback interface
sudo ifconfig lo0 alias 192.168.1.1
And then when running containers add a host for this alias IP
docker run --rm -ti --add-host host-machine:192.168.1.1 mysql:5.7 bash
With this setup, inside container you should be able to do
mysql -u user -p -h host-machine
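On Linux with the default bridge network, the host is usually reachable at the bridge gateway address. A sketch, assuming iproute2 is available inside the container and the host service listens on the docker0 interface:
# Inside the container: the default gateway is the host's docker0 address
HOST_IP=$(ip route | awk '/^default/ {print $3}')
mysql -u user -p -h "$HOST_IP"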
This answer may be a bit late, but better late than never, as you never can tell who may be experiencing a similar problem. I just fixed it by disabling the unnecessary ufw rule that was blocking the internal communication.
Example:
sudo ufw allow from <IP address or range> to any port [desired port]
sudo ufw allow from 172.16.0.0/12 to any port 3421.
As for me, I disabled the UFW service totally using the command below.
sudo ufw disable
I have attached to a Docker container and need to find out the number of sockets opened by a Java application. Unfortunately there is no lsof or netstat available in the container, and there is no data in /proc/PID/net/tcp. Is there any way I can find this data?
I like netshoot for this. You can run a container in the same networking and even pid namespace, and use the tools in netshoot to analyze the other container's network:
$ docker run -d -p 8888:80 --name nginx-test nginx
d8a90f5c7d1744483ae6d26cc97dad222ed237b5c4211f711c9f15f88252897f
$ docker run --net container:nginx-test --pid container:nginx-test -it --rm nicolaka/netshoot
/ # netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
/ # ps -ef
PID USER TIME COMMAND
1 root 0:00 nginx: master process nginx -g daemon off;
7 104 0:00 nginx: worker process
8 root 0:00 sh
15 root 0:00 ps -ef
Alternatively, you can read /proc/PID/net/tcp on the host machine, as long as you are on the same box as the Docker daemon. This is less elegant than @BMitch's answer.
What you need to do is find out the PID of your process outside the container (in the main pid namespace, technically speaking, your host).
ps aux | grep java
Inside your container your Java process has one PID, but outside it has another PID, which is the one you can use to access the information you requested: /proc/PID/net/tcp
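Rather than grepping ps output, you can ask Docker for the container's main PID directly. A sketch, assuming a container named myapp (a hypothetical name):
# Resolve the container's main process PID in the host's pid namespace
PID=$(docker inspect --format '{{.State.Pid}}' myapp)
# Read its TCP socket table from the host
sudo cat /proc/$PID/net/tcp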