Failed to allocate gateway with docker stack - docker

I'm trying to follow the tutorial from the official documentation. It works fine until I reach Services.
When I start 5 instances of the container (with the docker stack command), the containers fail to start and I get this error:
"failed to allocate gateway"
$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
imb6vgifjvq7 getstartedlab_web.1 seb/docker-whale:1.1 ns3553081.ip-XXX-YYY-ZZZ.eu Ready Rejected 4 seconds ago "failed to allocate gateway (1…"
ulm1tqdhzikd \_ getstartedlab_web.1 seb/docker-whale:1.1 ns3553081.ip-XXX-YYY-ZZZ.eu Shutdown Rejected 9 seconds ago "failed to allocate gateway (1…"
...
The docker-compose.yml contains
version: "3"
services:
  web:
    image: seb/docker-whale:1.1
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
To start the containers I'm using the command:
$ docker stack deploy -c docker-compose.yml getstartedlab
I can start one instance of the container without any issue with the command:
$ docker run -p 80:80 seb/docker-whale:1.1
Any idea why it's not working? How can I get more details on the error?
Thanks for your help.

Answer from a beginner: Same here (version 1.13.1). The message vanished when I changed the published port from "80:80" to "8080:80"; port 80 was already in use on the Docker machine's host.
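One way to anticipate this conflict is to check whether anything on the host is already listening on the port you want to publish. A minimal sketch using bash's built-in /dev/tcp pseudo-device (no extra tools assumed); the port number is the one from the question:

```shell
#!/usr/bin/env bash
# Probe the host port before publishing it in the stack. A successful
# connect means something is already listening there.
port=80
if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
  port_status="in use"
else
  port_status="free"
fi
echo "port ${port} is ${port_status}"
```

If the port turns out to be taken, change the published (left-hand) side of the mapping, e.g. "8080:80", as in the answer above.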

Related

Why is docker stack deploy not starting all my containers?

I deployed my docker stack:
⇒ docker stack deploy -c docker-compose.yml my_stack
Creating network my_stack_network
Creating service my_stack_redis
Creating service my_stack_wsgi
Creating service my_stack_nodejs
Creating service my_stack_nginx
Creating service my_stack_haproxy
Creating service my_stack_postgres
But when I do docker container ls, it only shows three containers:
~|⇒ docker container ls | grep my_stack
212720bfafc3 postgres:11 "docker-entrypoint.s…" 4 minutes ago Up 3 minutes 5432/tcp my_stack_postgres.1.9nx7jb21whi61aboe9hmet6m2
3132dd980589 isiq/nginx-brotli:1.21.0 "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 80/tcp my_stack_nginx.1.isl2c78z6w5ptizurm3a4cnte
62ef3c76fb9e redis:6.2.4 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 6379/tcp my_stack_redis.1.xnisrd1i6hod6jkm64623cpzj
But docker stack ps lists all of them as Running:
~|⇒ docker stack ps --no-trunc my_stack
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
1fqwlgblhi5q0cdl5cy75ucli my_stack_haproxy.1 haproxy:2.3.9@sha256:f63aabf39efcd277b04a503d38e59e80224a0c11f47b2568b13b0092698c5a3a Running New 2 minutes ago
isl2c78z6w5ptizurm3a4cnte my_stack_nginx.1 isiq/nginx-brotli:1.21.0@sha256:436cbc0d8cd051e7bdb197d7915fe90fa5a1bdadea6d02272ba117fccf30c936 tadoba Running Running 2 minutes ago
1myvtgl11qqw2xa9cv79uikcs my_stack_nodejs.1 nodejs:my_stack Running New 2 minutes ago
9nx7jb21whi61aboe9hmet6m2 my_stack_postgres.1 postgres:11@sha256:5d2aa4a7b5f9bdadeddcf87cf7f90a176737a02a30d917de4ab2e6a329bd2d45 tadoba Running Running 2 minutes ago
xnisrd1i6hod6jkm64623cpzj my_stack_redis.1 redis:6.2.4@sha256:6bc98f513258e0c17bd150a7a26f38a8ce3e7d584f0c451cf31df70d461a200a tadoba Running Running 2 minutes ago
mzmmb7a3bxjpfkfa3ea5o5w85 my_stack_wsgi.1 wsgi:my_stack Running New 2 minutes ago
Checking the logs of the containers not listed in docker container ls gives a "No such container" error:
~|⇒ docker logs -f 1myvtgl11qqw2xa9cv79uikcs
Error: No such container: 1myvtgl11qqw2xa9cv79uikcs
~|⇒ docker logs -f mzmmb7a3bxjpfkfa3ea5o5w85
Error: No such container: mzmmb7a3bxjpfkfa3ea5o5w85
~|⇒ docker logs -f 1fqwlgblhi5q0cdl5cy75ucli
Error: No such container: 1fqwlgblhi5q0cdl5cy75ucli
What could be the reason? How can I debug this?
Update
It seems that services without dependencies are able to join the network, but services that depend on other services are not. I am unable to figure out the reason. Here is the gist with the output of docker network inspect.
PS
I ran watch 'docker container ls | grep my_app' in another terminal before running docker stack deploy .... Those three containers never appear in the watch list; the other three do.
I am running all nodes on the same remote machine connected through ssh. This is the output of docker node ls:
~|⇒ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
z9hovq8ry6qont3m2rbn6upy4 * tadoba Ready Active Leader 20.10.11
Here is my docker compose file for reference:
version: "3.8"
services:
  postgres:
    image: postgres:11
    volumes:
      - my_app_postgres_volume:/var/lib/postgresql/data
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 536870912 # 512MB
    environment:
      POSTGRES_DB: my_app_db
      POSTGRES_USER: my_app
      POSTGRES_PASSWORD: my_app123
    networks:
      - my_app_network
  redis:
    image: redis:6.2.4
    volumes:
      - my_app_redis_volume:/data
    networks:
      - my_app_network
  wsgi:
    image: wsgi:my_app3_stats
    volumes:
      - /my_app/frontend/static/
      - ./wsgi/my_app:/my_app
      - /my_app/frontend/clientApp/node_modules
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - postgres
      - redis
    ports:
      - 9090
    environment:
      C_FORCE_ROOT: 'true'
      SERVICE_PORTS: 9090
    networks:
      - my_app_network
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
  nodejs:
    image: nodejs:my_app3_stats
    volumes:
      - ./nodejs/frontend:/frontend
      - /frontend/node_modules
    depends_on:
      - wsgi
    ports:
      - 9998:9999
    environment:
      BACKEND_API_URL: http://aa.bb.cc.dd:9764/api/
    networks:
      - my_app_network
  nginx:
    image: isiq/nginx-brotli:1.21.0
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./wsgi/my_app:/my_app:ro
      - my_app_nginx_volume:/var/log/nginx/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - my_app_network
  haproxy:
    image: haproxy:2.3.9
    volumes:
      - ./haproxy:/usr/local/etc/haproxy/:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wsgi
      - nodejs
      - nginx
    ports:
      - 9764:80
    networks:
      - my_app_network
    deploy:
      placement:
        constraints: [node.role == manager]
volumes:
  my_app_postgres_volume:
  my_app_redis_volume:
  my_app_nginx_volume:
  my_app_pgadmin_volume:
networks:
  my_app_network:
    driver: overlay
Output of docker service ps <service-name> for the services whose tasks are not listed on the tadoba node:
~/my_app|master-py3⚡
⇒ docker service ps my_app_nodejs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i04jpykp9ign my_app_nodejs.1 nodejs:bodhitree3_stats Running New about a minute ago
~/my_app|master-py3⚡
⇒ docker service ps my_app_haproxy
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
of4fcsxuq24c my_app_haproxy.1 haproxy:2.3.9 Running New about a minute ago
~/my_app|master-py3⚡
⇒ docker service ps my_app_wsgi
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yt9nuhule39z my_app_wsgi.1 wsgi:bodhitree3_stats Running New 2 minutes ago
First, all of your services are running, which is good. But docker container ls isn't cluster-aware: it only shows the containers running on the current node. From the docker stack ps --no-trunc my_stack output I can see that there is another node labeled tadoba, so if you log in to that node you should see the remaining containers.
You can list the nodes in your cluster by running docker node ls.
If you want, you can set up Docker contexts so you can switch between nodes without logging in and out of them. You can find more info here.
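The context switch mentioned above could look roughly like this; the SSH user and context name are placeholders, not values from the question:

```shell
# Sketch: point the local Docker CLI at the daemon on the other node over
# SSH, list its containers, then switch back. Wrapped in a function so it
# only runs when you call it.
use_remote_node() {
  docker context create tadoba-node --docker "host=ssh://user@tadoba"
  docker context use tadoba-node
  docker container ls        # now shows containers running on tadoba
  docker context use default # back to the local daemon
}
```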

Docker swarm does not distribute the container in the cluster

I have two servers to use in a Docker Swarm cluster (test only): one manager and one worker. But when I run docker stack deploy --compose-file docker-compose.yml teste2, all the services run on the manager and the worker never receives any containers. For some reason Swarm is not distributing the services across the cluster; everything runs on the manager server.
Could my docker-compose.yml be causing the problem, or might it be a network problem?
Here are some settings:
Servers: CentOS 7, Docker version 18.09.4;
I executed systemctl stop firewalld && systemctl disable firewalld to disable the firewall;
I executed docker swarm join --token ... on the worker;
Result docker node ls:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
993dko0vu6vlxjc0pyecrjeh0 * name.server.manager Ready Active Leader 18.09.4
2fn36s94wjnu3nei75ymeuitr name.server.worker Ready Active 18.09.4
File docker-compose.yml:
version: "3"
services:
  web:
    image: testehello
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      # placement:
      #   constraints: [node.role == worker]
    ports:
      - 4000:80
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - 8080:8080
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  webnet:
I executed the command docker stack deploy --compose-file docker-compose.yml teste2
In the docker-compose.yml I commented out the placement and constraints parameters because with them the containers did not start on the servers at all; without them, the containers start on the manager. In the Visualizer, everything appears on the manager.
I think the image is not accessible from the worker node; that is why it receives no containers. Try this guide from Docker: https://docs.docker.com/engine/swarm/stack-deploy/
P.S. I think you solved it already, but just in case.
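If the image really is only present on the manager, the usual fix is to push it to a registry every node can reach and reference it by that name in the compose file. A sketch; the registry address and tag are placeholders:

```shell
# Tag and push the locally built image to a shared registry, then redeploy
# so the worker can pull it. registry.example.com is a placeholder.
publish_and_redeploy() {
  docker tag testehello registry.example.com/testehello:1.0
  docker push registry.example.com/testehello:1.0
  # after pointing `image:` in docker-compose.yml at the pushed tag:
  docker stack deploy --compose-file docker-compose.yml teste2
}
```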

Docker Swarm with springboot app

I'm currently trying to deploy an application with Docker Swarm on 3 virtual machines. I'm using docker-compose to build the image. My files are the following:
Dockerfile:
FROM openjdk:8-jdk-alpine
WORKDIR /home
ARG JAR_FILE
ARG PORT
VOLUME /tmp
COPY ${JAR_FILE} /home/app.jar
EXPOSE ${PORT}
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/home/app.jar"]
and my docker-compose is:
version: '3'
services:
  service_colpensiones:
    build:
      context: ./colpensiones-servicio
      dockerfile: Dockerfile
      args:
        JAR_FILE: ColpensionesServicio.jar
        PORT: 8082
    volumes:
      - data:/home
    ports:
      - 8082:8082
volumes:
  data:
I'm using the command docker-compose up -d --build to build the image; the container it creates is deleted later. For Docker Swarm I use the 3 machines, one manager and two workers, and I have another file to deploy the service with replicas:
version: '3'
services:
  service_colpensiones:
    image: deploy_lyra_colpensiones_service_colpensiones
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    volumes:
      - data:/home
    ports:
      - 8082:8082
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
volumes:
  data:
So far I think everything is fine: with docker service ls I see the services created, and the dockersamples/visualizer:stable viewer shows me the nodes correctly on port 8080. But when I make a request to the service's URL:
curl -4 http://192.168.99.100:8082/colpensiones/msg
I get this error:
curl: (7) Failed to connect to 192.168.99.100 port 8082: Connection refused
I am following the docker tutorial: Get Started https://docs.docker.com/get-started/part5/
I hope you can help, thanks.
I had the same issue, but it was fixed after changing the port mapping of the Spring Boot service to:
ports:
  - "8082:8080"
The actual issue: the embedded Tomcat server listens on port 8080 by default, not on the port given in the compose file. I also increased the memory limit.
FYI: the internal (container-side) port can be the same for several services, so using 8080 as the internal port for both the Spring Boot container and the visualizer container is not a problem.
I also faced the same issue with my application. I rebuilt my app after removing the -Djava.security.egd=file:/dev/./urandom property from the java command line in the Dockerfile, and it started working for me.
Please check docker service logs #taskid# (to see task IDs, run docker stack ps #servicename#) for the task that served your request at that time, and see if there is any error message.
PS: I recently started with Docker, so this might not be expert advice. Just in case it helps.
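To confirm which port the app actually listens on, you can inspect a running task container directly. A sketch; it assumes netstat or ss is available inside the image, which may not be the case on a minimal alpine base:

```shell
# Find one task container of the service and list its listening TCP ports.
check_listen_port() {
  local cid
  cid=$(docker ps --filter name=service_colpensiones -q | head -n 1)
  docker exec "$cid" sh -c 'netstat -tln 2>/dev/null || ss -tln'
}
```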

docker stack deploy generate stuck services with no replications

I am trying to deploy my working docker-compose setup to a Docker swarm. Everything seems OK, except that the only service that gets replicated and produces a running container is the redis one; the other three get stuck and never produce a running container. They don't even download their respective images.
I can't find any debug feature, all the logs are empty, and I'm completely stuck.
Let me show you the current state of my installation.
docker node ls print =>
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
oapl4et92vjp6mv67h2vw8boq boot2docker Ready Active
x2fal9iwj6aqt1vgobyp1smv1 * manager1 Ready Active Leader
lmtuojednaiqmfmm2izl7npf0 worker1 Ready Active
The docker compose =>
version: '3'
services:
  mongo:
    image: mongo
    container_name: mongo
    restart: always
    volumes:
      - /data/db:/data/db
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "27017:27017"
  redis:
    image: redis
    container_name: redis
    restart: always
  bnbkeeper:
    image: registry.example.tld/keepers:0.10
    container_name: bnbkeeper
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    depends_on:
      - mongo
      - redis
    ports:
      - "8080:8080"
    links:
      - mongo
      - redis
    environment:
      - REDIS_HOST=redis
      - MONGO_HOST=mongo
  bnbkeeper-ws:
    image: registry.example.tld/keepers:0.10
    container_name: bnbkeeper-ws
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    ports:
      - "3800:3800"
    links:
      - mongo
      - redis
    environment:
      - REDIS_HOST=redis
    command: npm run start:subscription
The current state of my services
ID NAME MODE REPLICAS IMAGE PORTS
tbwfswsxx23f stack_bnbkeeper replicated 0/5 registry.example.tld/keepers:0.10
khrqtx28qoia stack_bnbkeeper-ws replicated 0/1 registry.example.tld/keepers:0.10
lipa8nvncpxb stack_mongo replicated 0/1 mongo:latest
xtz2411htcg7 stack_redis replicated 1/1 redis:latest
My successful redis service (docker service ps stack_redis):
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
cqv0njcgsw6f stack_redis.1 redis:latest worker1 Running Running 25 minutes ago
My unsuccessful mongo service (docker service ps stack_mongo):
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yipokxxiftqq stack_mongo.1 mongo:latest Running New 25 minutes ago
I'm completely new to Docker swarm and probably made a silly mistake here, but I couldn't find much documentation on how to set up such a simple stack.
To monitor, try this:
journalctl -f -n10
Then run the docker stack deploy command in a separate session and see what it shows
Try removing the port publishing and adding --endpoint-mode dnsrr to your service.
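Since the stuck services all use an image from a private registry (registry.example.tld), another thing worth trying is forwarding your registry credentials to the swarm agents with --with-registry-auth; without it, nodes may be unable to pull the image and tasks can stay in the New state. A sketch:

```shell
# Log in to the private registry, then redeploy while forwarding the
# credentials to the swarm agents so every node can pull the image.
redeploy_with_auth() {
  docker login registry.example.tld
  docker stack deploy --with-registry-auth -c docker-compose.yml stack
  # once tasks are rescheduled, the ERROR column often shows pull failures:
  docker service ps --no-trunc stack_bnbkeeper
}
```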

Docker containers not running on VM

I am new to Docker and I am following the 'Getting Started' documentation at the Docker site.
I am trying to run 3 containers on a VM.
OS: Centos 7.3
Docker: 17.03.1-ce
I followed the first part and could get hello-world running on a container inside the VM.
Then I moved on to the Docker compose example.
I have the following directory structure:
home
|- docker-compose.yml
|- docker-test
   |- app.py
   |- Dockerfile
   |- requirements.txt
The files under docker-test are from the python app example on the docker website.
With the docker-compose, I was attempting to run 3 containers of the hello-world example.
My docker-compose.yml:
version: "3"
services:
  web:
    image: hello-world
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
Then I ran the following commands:
sudo docker swarm init
sudo docker stack deploy -c docker-compose.yml getstartedlab
sudo docker stack ps getstartedlab shows:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iytr4ptz3m8l getstartedlab_web.1 hello-world:latest <node1> Shutdown Complete 16 minutes ago
s5t41txo05ex getstartedlab_web.2 hello-world:latest <node2> Shutdown Complete 16 minutes ago
91iitdnc49fk getstartedlab_web.3 hello-world:latest <node3> Shutdown Complete 16 minutes ago
However, sudo docker ps shows no containers, and when I curl http://localhost:80 it can't connect:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I missing?
Your docker-compose.yml file says that the web service should use the hello-world image, which just prints a message and exits when run, so all of the containers stop as soon as they finish. Presumably you meant to use the image built from docker-test/ instead; to do this, replace the image: hello-world line with build: docker-test.
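One caveat with that suggestion: docker stack deploy ignores the build: key, so for a swarm deploy the image has to be built (and, on a multi-node swarm, pushed to a registry) beforehand and referenced by name. A sketch assuming the docker-test/ directory from the question; the tag name is a placeholder:

```shell
# Build and tag the image first, then reference the tag via `image:` in
# docker-compose.yml before deploying the stack.
build_and_deploy() {
  docker build -t docker-test:latest ./docker-test
  # after setting `image: docker-test:latest` for the web service:
  docker stack deploy -c docker-compose.yml getstartedlab
}
```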
