When I print the list of containers:
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f15a180315d3 influxdb "/entrypoint.sh infl…" 2 hours ago Exited (128) 2 hours ago influxdb
7b753ba600df influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 2 hours ago nervous_fermi
2ddc5d9af400 influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 3 hours ago nostalgic_varahamihira
2e174a82d38d influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 3 hours ago modest_mestorf
But if I try to restart it:
docker container restart influxdb
I get
Error response from daemon: Cannot restart container influxdb: driver failed programming external connectivity on endpoint influxdb (06ee4d738dffecd1a202840699a899286f4bbb88392e4eb227d65670108687a6): Error starting userland proxy: listen tcp 0.0.0.0:8086: bind: address already in use
netstat -nl -p tcp | grep 8086
tcp6 0 0 :::8086 :::* LISTEN 1985/influxd
How do I restart the Docker container?
If I go for
docker kill influxdb
Error response from daemon: Cannot kill container: influxdb: Container f15a180315d38c2f5fac929b2d0b9be3e8ca2a09033648b5c5174c15a64c4d71 is not running
Problem
As indicated by the error message:
Error response from daemon: Cannot restart container influxdb: driver failed programming external connectivity on endpoint influxdb (06ee4d738dffecd1a202840699a899286f4bbb88392e4eb227d65670108687a6): Error starting userland proxy: listen tcp 0.0.0.0:8086: bind: address already in use
Port 8086 was already occupied by another process (hence the "address already in use" part), so the container could not run: it tried to start influxdb, which failed because the port was already bound.
Additionally, the netstat output hints at which process occupies the port:
netstat -nl -p tcp | grep 8086
tcp6 0 0 :::8086 :::* LISTEN 1985/influxd
(see the last part: 1985/influxd)
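If netstat is not available (many distributions now ship ss instead), lsof offers an equivalent view, assuming lsof is installed:
sudo lsof -i :8086
This lists the process bound to port 8086, including its PID.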
Solution
Kill the other process (but first check whether it is busy and whether you should save its data before stopping it), e.g. using the kill command:
kill 1985
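If influxd was started as a system service rather than by hand (an assumption worth checking), stopping it through the service manager is cleaner than killing the PID, since the daemon can flush its data and a supervisor won't simply restart it:
sudo systemctl stop influxdb
sudo systemctl disable influxdb
The second command is optional; it prevents the host service from re-binding 8086 after a reboot. The unit name influxdb is the package default and may differ on your system.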
Related
I've been trying all the existing commands for several hours and could not fix this problem.
I used everything covered in this article: Docker - Bind for 0.0.0.0:4000 failed: port is already allocated.
I currently have one container (docker ps -a; docker ps meanwhile shows nothing):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ebb9289dfd1 dockware/dev:latest "/bin/bash /entrypoi…" 2 minutes ago Created TheGoodPartDocker
When I try docker-compose up -d I get the error:
ERROR: for TheGoodPartDocker Cannot start service shop: driver failed programming external connectivity on endpoint TheGoodPartDocker (3b59ebe9366bf1c4a848670c0812935def49656a88fa95be5c4a4be0d7d6f5e6): Bind for 0.0.0.0:80 failed: port is already allocated
I've tried to remove everything using: docker ps -aq | xargs docker stop | xargs docker rm
or freeing the port: fuser -k 80/tcp
even deleting networks:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
or just manually bringing things down, stopping, and removing:
docker-compose down
docker stop 5ebb9289dfd1
docker rm 5ebb9289dfd1
Here is also my netstat output (netstat | grep 80):
unix 3 [ ] STREAM CONNECTED 20680 /mnt/wslg/PulseAudioRDPSink
unix 3 [ ] STREAM CONNECTED 18044
unix 3 [ ] STREAM CONNECTED 32780
unix 3 [ ] STREAM CONNECTED 17805 /run/guest-services/procd.sock
And docker port TheGoodPartDocker gives me no result.
I also restarted my computer, but nothing works :(.
Thanks for helping
Obviously port 80 is already occupied by some other process. You need to stop that process before you start the container. To find out which process it is, use ss (the example below checks port 22; substitute 80 in your case):
$ ss -tulpn | grep 22
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1187,fd=3))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1187,fd=4))
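Applied to this question, the same check for port 80 would be (the leading colon avoids matching unrelated numbers, such as the socket inodes in the netstat output above):
sudo ss -tulpn | grep ':80 '
Whatever process shows up there is what you need to stop before docker-compose up can bind port 80.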
Solved at bottom
But why do I have to append :4000?
I'm following the Docker Get Started guide here: https://docs.docker.com/get-started/part4/
I'm fairly certain I've done everything correctly, but am wondering why I can't connect to view the app after deploying it.
I've set my environment to my VM, myvm1, for the following commands.
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099e16249604 beresj/getting-started:part2 "python app.py" 12 seconds ago Up 12 seconds 80/tcp getstartedlab_web.5.y0e2k1r1ev47u24e5iufkyn3i
6f9a24b343a7 beresj/getting-started:part2 "python app.py" 12 seconds ago Up 12 seconds 80/tcp getstartedlab_web.3.1pls3osj3uhsb5dyqtt4ts8j6
docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
beresj/getting-started <none> e290b6208c21 22 hours ago 131MB
docker stack ls
NAME SERVICES ORCHESTRATOR
getstartedlab 1 Swarm
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v18.09.6
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.6
docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
vkxx79fh3h85 getstartedlab_web.1 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
qexbaa3wz0pd getstartedlab_web.2 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
1pls3osj3uhs getstartedlab_web.3 beresj/getting-started:part2 myvm1 Running Running 3 minutes ago
ucuwen1jrncf getstartedlab_web.4 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
y0e2k1r1ev47 getstartedlab_web.5 beresj/getting-started:part2 myvm1 Running Running 3 minutes ago
curl 192.168.99.100
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused
docker info
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 1
Server Version: 18.09.6
...
Swarm: active
NodeID: 0p9qrax9h3by0fupat8ufkfbq
Is Manager: true
ClusterID: 7vnqdk85n8jx6fqck9k7dv2ka
Managers: 1
Nodes: 2
Default Address Pool: 10.0.0.0/8
...
Node Address: 192.168.99.100
Manager Addresses:
192.168.99.100:2377
...
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 989.4MiB
Name: myvm1
I would expect to see what I was able to see when I just ran it on my local machine instead of on a VM in a swarm (I think I have the lingo correct?)
Not sure how to check open ports.
Again: this works if I simply remove the stack, unset the docker-machine environment, and just run:
docker stack deploy -c docker-compose.yml getstartedlab
locally, not on the VM.
Thank you in advance. (Also, I'm new, hence the get-started guide, so I appreciate any help.)
Edit
It works if I append :4000 to the VM IP in my URL, e.g. 192.168.99.100:4000 or 192.168.99.101:4000. It shows the two container IDs listed in docker container ls for myvm1, and the other three are from myvm2. Could anyone tell me why I have to append 4000? Is it because I have ports: "4000:80" in my docker-compose.yml?
Not sure if this will help, but if you use docker inspect <instance_id_here>, you can see which ports are exposed.
Exposed ports aren't open ports. You would need to bind a host port to a container port in the docker-compose.yml in order for it to be open.
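For reference, the relevant fragment of such a docker-compose.yml would look roughly like this (a sketch; the service name web and the image come from the question's setup, the comment is mine):
services:
  web:
    image: beresj/getting-started:part2
    ports:
      - "4000:80"  # host port 4000 -> container port 80
With this mapping the app is reachable on host port 4000, which is why the URL needs :4000.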
I'm setting up a Hyperledger Fabric private network on Linux and got this message while running ./byfn.sh up.
As I'm a newbie to Ubuntu and Docker, I think the port needs to be changed to fix the problem; however, I have no clue how to do so. Any help would be appreciated.
alaa@ubuntu:~/fabric-samples/first-network$ sudo netstat -pna | grep 7050
tcp6 0 0 :::7050 :::* LISTEN 3682/docker-proxy
I did a netstat to check the port, and it's docker-proxy:
alaa@ubuntu:~/fabric-samples/first-network$ sudo ./byfn.sh up
Starting with channel 'mychannel' and CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] y
proceeding ...
2019-05-19 14:07:22.820 UTC [main] main -> INFO 001 Exiting.....
LOCAL_VERSION=1.1.0
DOCKER_IMAGE_VERSION=1.1.0
Creating network "net_byfn" with the default driver
Creating volume "net_orderer.example.com" with default driver
Creating volume "net_peer0.org1.example.com" with default driver
Creating volume "net_peer1.org1.example.com" with default driver
Creating volume "net_peer0.org2.example.com" with default driver
Creating volume "net_peer1.org2.example.com" with default driver
Creating orderer.example.com ... error
Creating peer1.org2.example.com ...
Creating peer1.org1.example.com ...
Creating peer0.org1.example.com ...
Creating peer1.org2.example.com ... done
Creating peer1.org1.example.com ... done
Creating peer0.org1.example.com ... done
Creating peer0.org2.example.com ... done
ERROR: for orderer.example.com Cannot start service orderer.example.com: b'driver failed programming external connectivity on endpoint orderer.example.com (60d170dbc933d3c2de9eacd1bb6c7842cf79a52b3a938c9e0e69d1bd55f5e1a9): Error starting userland proxy: listen tcp 0.0.0.0:7050: bind: address already in use'
ERROR: Encountered errors while bringing up the project.
ERROR !!!! Unable to start network
alaa@ubuntu:~/fabric-samples/first-network$ sudo netstat -pna | grep 7050
tcp6 0 0 :::7050 :::* LISTEN 3682/docker-proxy
Well, first of all, for any kind of Hyperledger tutorial you had better follow the official documentation, since most other sources are derived from it: https://hyperledger-fabric.readthedocs.io/en/release-1.4/
Secondly, bring down the network, stop and remove all running and previous containers, restart Docker, and re-run the network properly; it should work fine:
$ ./byfn.sh down
$ docker ps -qa | xargs docker rm -f
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ cd ....fabric-samples/first-network
$ ./byfn.sh -m generate
$ ./byfn.sh -m up
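Before re-running the network, it is worth confirming that nothing holds the orderer port any more (7050 is taken from the error above):
sudo netstat -pna | grep 7050
If this prints nothing, docker-proxy has released the port and ./byfn.sh up should succeed.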
Using the Docker tutorial I'm stuck at this part: https://docs.docker.com/get-started/part3/#run-your-new-load-balanced-app
I use curl -4 http://localhost but I get a curl: (7) Failed to connect to localhost port 80: Connection refused error.
output of previous step:
docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
kqu5qggifnlm getstartedlab_web.1 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
prhrmm6hpop3 getstartedlab_web.2 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
ytrwy5gxp2rk getstartedlab_web.3 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
mayvauijghbj getstartedlab_web.4 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
r625x2k7n6ta getstartedlab_web.5 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
So the ERROR and PORTS columns are empty.
What should I analyse to fix this issue?
For part 4, when you deploy to your swarm, you get the URL with docker-machine ls.
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v17.10.0-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v17.10.0-ce
In the docker-compose.yml file, change 80:80 to 4000:80.
Then use 192.168.99.100:4000 and it should work.
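To double-check the published port from the swarm manager, something like this should work (the service name getstartedlab_web is from the tutorial):
docker service inspect --format '{{json .Endpoint.Ports}}' getstartedlab_web
curl http://192.168.99.100:4000
The inspect output should show PublishedPort 4000 mapped to TargetPort 80.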
The Load Balancing section in the swarm docs doesn't make it clear whether the internal load balancer also does health checks, and whether it removes nodes that aren't running the service anymore (because it got killed or the node got rebooted).
In the following case I've got a service with replicas=3, one instance running on each of the three nodes.
Manager:
[root@centosvm ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a593d485050a ddewaele/springboot.crud.sample:latest "sh -c 'java $JAVA_OP" 7 minutes ago Up 7 minutes springbootcrudsample.1.5syc6j4c8i3bnerdqq4e1yelm
Node1:
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3b3fbc0f2c5 ddewaele/springboot.crud.sample:latest "sh -c 'java $JAVA_OP" 4 minutes ago Up 4 minutes springbootcrudsample.3.7y1oyjyrifgkmxlr20oai5ppl
Node 2:
[root@node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebca8f24ec3a ddewaele/springboot.crud.sample:latest "sh -c 'java $JAVA_OP" 7 minutes ago Up 7 minutes springbootcrudsample.2.4tqjad7od8ep047s55485na1t
Now, on node1, we kill the Docker container. This node will briefly be without a container (Swarm will re-create one here after a couple of seconds to keep the service at replicas=3).
[root@node1 ~]# docker kill d3b3fbc0f2c5
d3b3fbc0f2c5
Container gone
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
New container up
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b8c9a7a5cf97 ddewaele/springboot.crud.sample:latest "sh -c 'java $JAVA_OP" 11 seconds ago Up 9 seconds springbootcrudsample.3.9v4cnhi8dvq7n8afb2kvp28sk
In the output below, however, when container d3b3fbc0f2c5 was killed, the ingress load balancer didn't detect this and kept sending traffic to the node, resulting in connection refused.
How should we handle such a scenario? Do we still need an external load balancer for this, and how should we configure it?
[root@centosvm ~]# while :; do curl http://localhost:8080/env/hostname ; echo "" ; sleep 1; done
{"hostname":"d3b3fbc0f2c5"}
{"hostname":"a593d485050a"}
{"hostname":"ebca8f24ec3a"}
{"hostname":"d3b3fbc0f2c5"}
{"hostname":"a593d485050a"}
{"hostname":"ebca8f24ec3a"}
{"hostname":"d3b3fbc0f2c5"}
{"hostname":"a593d485050a"}
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
curl: (7) Failed connect to localhost:8080; Connection refused
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
{"hostname":"b8c9a7a5cf97"}
{"hostname":"ebca8f24ec3a"}
{"hostname":"a593d485050a"}
{"hostname":"b8c9a7a5cf97"}
As indicated by François Maturel, with a proper healthcheck in place, Docker Swarm will take the health status of the container into account when deciding whether to route requests to it.
For Spring Boot applications that have enabled the default actuators, adding this to the Dockerfile is sufficient for a basic healthcheck: once the Spring Boot app is initialized and its health actuator is enabled, the following HTTP request will return a valid HTTP 200 response and the healthcheck will pass.
HEALTHCHECK CMD wget -q http://localhost:8080/health -O /dev/null
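A slightly more explicit variant with tuned timings (the interval, timeout, start period, and retry values here are illustrative assumptions, not part of the original answer):
HEALTHCHECK --interval=10s --timeout=3s --start-period=30s --retries=3 \
  CMD wget -q http://localhost:8080/health -O /dev/null || exit 1
The --start-period flag (Docker 17.05+) gives a slow-starting Spring Boot app time to boot before failed probes count against it.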
This will result in your Docker containers being able to reach a healthy status. When your Docker container is started, the service running within it might still be initializing. To do proper load balancing and detect service health, Swarm needs to know when it is able to route requests to a particular service instance (a container on a node).
So when Swarm starts a service replica, it fires up a container and waits until the health status of the service is "healthy". As your container is starting, it will transition from "starting":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5001e1c46953 ddewaele/springboot.crud.sample@sha256:4ce69c3f50c69640c8240f9df68c8816605c6214b74e6581be44ce153c0f3b7a "/docker-entrypoin..." 5 seconds ago Up Less than a second (health: starting) springbootcrudsample.2.yt6d38zhhq2wxt1d6qfjz5974
to 'healthy'. Only then will the Swarm load balancer route requests to this endpoint.
[root@centos-a ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5001e1c46953 ddewaele/springboot.crud.sample@sha256:4ce69c3f50c69640c8240f9df68c8816605c6214b74e6581be44ce153c0f3b7a "/docker-entrypoin..." About a minute ago Up About a minute (healthy) springbootcrudsample.2.yt6d38zhhq2wxt1d6qfjz5974
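You can also query the health state directly; the container ID below is the one from the listing above:
docker inspect --format '{{.State.Health.Status}}' 5001e1c46953
This prints starting, healthy, or unhealthy.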
@ddewaele is correct, so here are some more tidbits:
No, the LB does not perform port connection checks directly; that's the job of the Docker engine kicking off the healthchecks, which could be a simple curl or much more.
Healthchecks are critical to zero-downtime deployments, especially if your container takes more than a sub-second to start up or shut down. Without a healthcheck, Docker only knows "Does Linux say the process is running?"
You can use docker events to see the engine kicking off exec commands in each container that has a healthcheck set for its Swarm service. You can also see there how it marks the task/container as healthy/unhealthy.
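For example, to watch only the health transitions (health_status is a standard docker events event type):
docker events --filter event=health_status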
There have been issues/bugs with the ingress load balancer sending packets during update/shutdown of tasks, but AFAIK as of 17.12 (just released) those are mostly/all fixed. One of the old issues was that the LB might not remove the task from its routing table before the container shutdown starts, but people are reporting better results from the last few releases. https://github.com/moby/moby/issues/30321
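If you still see dropped requests during updates, two service settings are often suggested as mitigations (illustrative values; --update-order requires Docker 17.05+):
docker service update --stop-grace-period 30s --update-order start-first springbootcrudsample
A longer stop grace period gives the LB time to drop the task before the container exits, and start-first brings the replacement up before the old task is stopped.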