Docker swarm with Docker Toolbox doesn't run

I followed the Docker tutorial to set up a swarm.
I used Docker Toolbox because I'm on Windows 10 Family (Home).
I went through all the steps, but at the end the command "curl ip_address" fails; accessing the URL in a browser fails as well.
$ docker --version
Docker version 18.03.0-ce, build 0520e24302
docker-compose.yml, located in /home/docker on the virtual machine called "myvm1":
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: 12081981/friendlyhello:part1
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
The swarm:
$ docker-machine ssh myvm1 "docker stack ps getstartedlab"
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
blmx8mldam52 getstartedlab_web.1 12081981/friendlyhello:part1 myvm1 Running Running 9 seconds ago
04ctl86chp6o getstartedlab_web.2 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
r3qyznllno9j getstartedlab_web.3 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
2twwicjssie9 getstartedlab_web.4 12081981/friendlyhello:part1 myvm1 Running Running 9 seconds ago
o4rk4x7bb3vm getstartedlab_web.5 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
Result of "docker-machine ls":
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm1 * virtualbox Running tcp://192.168.99.102:2376 v18.09.0
myvm3 - virtualbox Running tcp://192.168.99.103:2376 v18.09.0
Test with curl:
$ curl 192.168.99.102
curl: (7) Failed to connect to 192.168.99.102 port 80: Connection refused
How can I debug this?
I can give more information if needed.
Thanks in advance.

Use of the routing mesh on Windows appears to be an EE-only feature right now. You can monitor this Docker for Windows issue for more details. The current workaround is to use DNSRR internally and publish ports to the host directly instead of with the routing mesh. If you want your application to be reachable from any node in the cluster, this means you'd need to have a service on every host in the cluster, scheduled globally, listening on the requested port. E.g.:
version: "3.2"
services:
web:
# replace username/repo:tag with your name and image details
image: 12081981/friendlyhello:part1
deploy:
# global runs 1 on every node, instead of the replicated variant
mode: global
# DNSRR skips the VIP normally assigned to services
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- target: 80
published: 80
protocol: tcp
# host publishes the port directly from the container without the routing mesh
mode: host
networks:
- webnet
networks:
webnet:
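To apply this, redeploy the stack and then curl one of the node IPs directly. A minimal sketch, assuming the compose file sits in the SSH session's working directory on myvm1 and reusing the stack name and node IP from the question:
$ docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
$ curl http://192.168.99.102
With mode: host and a globally scheduled service, each node publishes port 80 itself, so curling any node's IP should reach the task running on that node.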

Related

Docker swarm: can't curl to a service container

I have a service running under a swarm stack:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de74ba4d48c1 myregistry/myApi:1.0 "java -Dfile.encodin…" 3 minutes ago Up 3 minutes 8300/tcp myApiCtn
As you can see, my service is running on port 8300.
The problem is that when I run curl, it does not seem to reply:
[user@server home]$ curl http://localhost:8300/api/elk/batch
curl: (52) Empty reply from server
On the other hand, if I run my container manually (without a stack and without swarm services, i.e. with docker run ...), curl works well.
My docker-compose file is the following:
---
version: '3.4'
services:
  api-batch:
    image: myRegistry/myImageApi
    networks:
      - net_common
      - default
    stdin_open: true
    volumes:
      - /opt/application/current/logs:/opt/application/current/logs
      - /var/opt/data/flat/flf/:/var/opt/data/flat/flf/
    tty: true
    ports:
      - target: 8300
        published: 8300
        protocol: tcp
    deploy:
      mode: global
      resources:
        limits:
          memory: 1024M
      placement:
        constraints:
          - node.labels.type == test
    healthcheck:
      disable: true
networks:
  net_common:
    external: true
My network list is the following:
NETWORK ID NAME DRIVER SCOPE
17795bfee9ca bridge bridge local
0faecb070730 docker_gwbridge bridge local
51c34d251495 host host local
j2nnf26asn3k ingress overlay swarm
3all3tmn3qn9 net_common overlay swarm
b7alw2yi5fk9 srcd-current_default overlay swarm
Any suggestion to make it work as a swarm service?

Update image in service without downtime

I am running a service on Docker Swarm. This is what I did to deploy the service:
docker swarm init
docker stack deploy -c docker-compose.yml MyApplication
Content of docker-compose.yml:
version: "3"
services:
web:
image: myimage:1.0
ports:
- "9000:80"
- "9001:443"
deploy:
replicas: 3
resources:
limits:
cpus: "0.5"
memory: 256M
restart_policy:
condition: on-failure
Let's say that I update the application and build a new image myimage:2.0. What is the proper way to deploy the new version of the image to the service without downtime?
A way to achieve this is:
Provide a healthcheck. That way Docker will know whether your new deployment has succeeded.
https://docs.docker.com/engine/reference/builder/#healthcheck
https://docs.docker.com/compose/compose-file/#healthcheck
Control how Docker will update your service with update_config.
https://docs.docker.com/compose/compose-file/#update_config
Pay attention to order and parallelism: for example, if you choose order: stop-first + parallelism: 2 and your replica count equals the parallelism, your app will stop completely during the update.
If your update doesn't succeed, you probably want to roll back.
https://docs.docker.com/compose/compose-file/#rollback_config
Don't forget the restart_policy too (a combined sketch of these options follows this list).
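For illustration, here is a minimal sketch of how those options could look in the compose file from the question. The healthcheck test command and the timing values are assumptions, not taken from the original, and the check assumes curl is available inside the image:
version: "3.7"
services:
  web:
    image: myimage:2.0
    ports:
      - "9000:80"
      - "9001:443"
    healthcheck:
      # assumed check; replace with a command that suits your app
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
    deploy:
      replicas: 3
      update_config:
        parallelism: 1          # replace one task at a time
        order: start-first      # start the new task before stopping the old one
        failure_action: rollback
        delay: 5s
      rollback_config:
        parallelism: 1
        order: stop-first
      restart_policy:
        condition: on-failure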
I have some examples on that subject:
Docker Swarm Mode Replicated Example with Flask and Caddy
https://github.com/douglasmiranda/lab/tree/master/caddy-healthcheck-of-caddy-itself
With this you can simply run docker stack deploy ... again. If there were changes in the service, it will be updated.
You can also use the command docker service update --image, but it will start a new container with an implicit scale of 0/1 (see the sketch below).
The downtime depends on your application.
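For the docker service update route, a sketch might look like this; the service name MyApplication_web follows from the stack and service names in the question, so confirm it first:
$ docker service ls
$ docker service update --image myimage:2.0 MyApplication_web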

Deploy app on a cluster but cannot access it successfully

I'm now learning to use Docker by following the Get Started documents, but in Part 4 (Swarms) I've hit a problem: after deploying my app on a cluster, I cannot access it successfully.
docker@myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
gsueb9ejeur5 getstartedlab_web.1 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
ku13wfrjp9wt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
vzof1ybvavj3 getstartedlab_web.3 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
lkr6rqtqbe6n getstartedlab_web.4 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
cpg91o8lmslo getstartedlab_web.5 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
docker@myvm1:~$ curl 'http://localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v17.06.0-ce
myvm2 - virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce
➜ ~ curl 'http://192.168.99.101'
curl: (7) Failed to connect to 192.168.99.101 port 80: Connection refused
What's wrong?
In addition, and very strangely, after adding the content below to docker-compose.yml I found the problem above resolved itself automatically:
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
But this time the newly added visualizer does not work:
docker@myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xomsv2l5nc8x getstartedlab_web.1 zhugw/get-started:first myvm1 Running Running 7 minutes ago
ncp0rljod4rc getstartedlab_visualizer.1 dockersamples/visualizer:stable myvm1 Running Preparing 7 minutes ago
hxddan48i1dt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Running 7 minutes ago
dzsianc8h7oz getstartedlab_web.3 zhugw/get-started:first myvm1 Running Running 7 minutes ago
zpb6dc79anlz getstartedlab_web.4 zhugw/get-started:first myvm2 Running Running 7 minutes ago
pg96ix9hbbfs getstartedlab_web.5 zhugw/get-started:first myvm2 Running Running 7 minutes ago
From the output above you can see it is always in the Preparing state.
My whole docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: zhugw/get-started:first
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
Had this problem while learning too.
It's because your non-clustered container from step 2 is still running, and the clustered image you just deployed uses the same port mapping (4000:80) in the docker-compose.yml file.
You have two options:
Go into your docker-compose.yml and change the port mapping to something else, e.g. 4010:80, then redeploy your cluster with the update. Then try http://localhost:4010.
Remove the container you created in step 2 of the guide that is still running and using the 4000:80 port mapping (see the command sketch after this list).
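A sketch of option 2, reusing the stack name from the question; the container name is a placeholder you would read off docker container ls:
$ docker container ls                 # find the old container publishing 4000:80
$ docker container rm -f <old_container>
$ docker stack deploy -c docker-compose.yml getstartedlab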
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
should be
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
This is an error in the Docker tutorial.
Open port 7946 TCP/UDP and port 4789 UDP between the swarm nodes. Use the ingress network. Please let me know if it works, thanks.
What helped me get the visualizer running was changing the visualizer image tag from stable to latest.
If you are using Docker Toolbox for Mac, then you should check this out.
I had the same problem. As it says in the tutorial (see "Having connectivity trouble?") the following ports need to be open:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
So I executed the following before the swarm init (right after creating myvm1 and myvm2), and could later access the service, e.g. in the browser at IP_node:4000:
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
Hope it helps others.

How to set up a Docker Swarm cluster with overlay network mode

I created a Docker Swarm cluster on 2 Linux machines, but when I use docker-compose up -d to start the containers, an error occurs.
This is my docker info:
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 28
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
ozcluster01: 192.168.168.41:2375
└ ID: CKCO:JGAA:PIOM:F4PL:6TIH:EQFY:KZ6X:B64Q:HRFH:FSTT:MLJT:BJUY
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.13.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T02:05:08Z
└ ServerVersion: 1.10.3
ozcluster02: 192.168.168.42:2375
└ ID: 73GR:6M7W:GMWD:D3DO:UASW:YHJ2:BTH6:DCO5:NJM6:SXPN:PXTY:3NHI
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 64 MiB / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.10.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T02:05:06Z
└ ServerVersion: 1.10.3
This is my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    network_mode: "bridge"
  config-service:
    image: ozms/config-service
    ports:
      - "8888:8888"
    volumes:
      - ~/ozms/configs:/var/tmp/
      - ~/ozms/log:/log
    network_mode: "bridge"
    labels:
      - "affinity:image==ozms/config-service"
  eureka-service:
    image: ozms/eureka-service
    ports:
      - "8761:8761"
    volumes:
      - ~/ozms/log:/log
    links:
      - config-service
      - rabbitmq
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
    network_mode: "bridge"
After I run docker-compose up -d, the rabbitmq and config-service services start up, but eureka-service causes an error:
[dannil@ozcluster01 ozms]$ docker-compose up -d
Creating ozms_config-service_1
Creating ozms_rabbitmq_1
Creating ozms_eureka-service_1
ERROR: Unable to find a node that satisfies the following conditions
[port 8761 (Bridge mode)]
[available container slots]
[--link=ozms_config-service_1:config-service --link=ozms_config-service_1:config-service_1 --link=ozms_config-service_1:ozms_config-service_1 --link=ozms_rabbitmq_1:ozms_rabbitmq_1 --link=ozms_rabbitmq_1:rabbitmq --link=ozms_rabbitmq_1:rabbitmq_1]
And when I run docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
871afc8e1eb6 rabbitmq "docker-entrypoint.sh" 2 minutes ago Up 2 minutes 4369/tcp, 192.168.168.41:5672->5672/tcp, 5671/tcp, 25672/tcp, 192.168.168.41:15672->15672/tcp ozcluster01/ozms_rabbitmq_1
8ef3f666a7b9 ozms/config-service "java -Djava.security" 2 minutes ago Up 2 minutes 192.168.168.42:8888->8888/tcp ozcluster02/ozms_config-service_1
I find that rabbitmq started up on machine ozcluster01 and config-service started up on machine ozcluster02.
When docker-compose starts config-service, there are no links, so it starts up successfully.
But when I start eureka-service on machine ozcluster02, there is a link to rabbitmq, and since the rabbitmq service is on machine ozcluster01, the error occurs.
What can I do to resolve the problem?
Is it right to use network_mode: "bridge" in a Docker Swarm cluster?
I resolved the problem myself.
In a Swarm cluster, Docker containers cannot reach containers on other nodes with network_mode: bridge.
In a Swarm cluster, one must use an overlay network. Overlay is used by default if you are using compose file format version 2.
See more detail:
Setting up a Docker Swarm with network overlay
With overlay mode, the docker-compose.yml file does not need the links config; containers can contact another container via ${service_name_in_composeFile}.
Example:
I can enter the container config-service, run $ ping eureka-service, and it works fine!
This is my compose file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
  config-service:
    image: ozms/config-service
    ports:
      - "8888:8888"
    volumes:
      - ~/ozms/configs:/var/tmp/
      - ~/ozms/log:/log
    labels:
      - "affinity:image==ozms/config-service"
  eureka-service:
    image: ozms/eureka-service
    ports:
      - "8761:8761"
    volumes:
      - ~/ozms/log:/log
    # links: not needed in overlay mode
    #   - config-service
    #   - rabbitmq
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
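To spell out the verification step described above, something like this should work when the Docker client is pointed at the Swarm manager; the container name ozms_config-service_1 is taken from the docker-compose output earlier, and it assumes ping is available inside the image:
$ docker exec -it ozms_config-service_1 ping eureka-service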

call a docker container from another container

I have deployed two Docker containers which host two REST services deployed in Jetty.
Container 1 hosts service 1 and listens on 7070.
Container 2 hosts service 2 and listens on 9090.
Endpoints:
service1:
/ping
/service1/{param}
service2:
/ping
/service2/callService1
curl -X GET http://localhost:7070/ping [Works]
curl -X GET http://localhost:7070/service1/hello [Works]
curl -X GET http://localhost:9090/ping [Works]
I have configured the containers in such a way that:
http://localhost:9090/service2/callService1
calls
http://localhost:7070/service1/hello
This throws a connection refused exception. Here's the configuration I have.
docker-compose.yml
------------------
service1:
  build: microservice/
  ports:
    - "7070:7070"
  expose:
    - "7070"
service2:
  build: microservice_link/
  ports:
    - "9090:9090"
  expose:
    - "9090"
  links:
    - service1

service1 Dockerfile
-------------------
FROM localhost:5000/java:7
COPY ./target/service1.jar /opt
WORKDIR /opt
ENTRYPOINT ["java", "-jar", "service1.jar", "7070"]
CMD [""]

service2 Dockerfile
-------------------
FROM localhost:5000/java:7
COPY ./target/service2.jar /opt
WORKDIR /opt
ENTRYPOINT ["java", "-jar", "service2.jar", "9090"]
CMD [""]
docker info
-----------
root@LT-NB-108U:~# docker info
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 12
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu precise (12.04.5 LTS)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.47 GiB
Name: LT-NB-108U
ID: BS52:XURM:3SD7:TC3R:7YVA:ZBZK:CCL2:7AVC:RNZV:RBGW:2X2T:7C46
WARNING: No swap limit support
root@LT-NB-108U:~#
Question:
I am trying to access the endpoint deployed in Container 1 from Container 2. However, I get a connection refused exception.
I tried exposing port 7070 in container 2. That didn't work.
curl http://service1:7070/
Use host1_name:inner_port_of_host1.
That host is called "service1" inside container 2. Use that as the host name; the port is the inner port that service1 listens on in its container.
If you have an Express server on service1, listen on port 7070.
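As a concrete check, exec-ing into container 2 and hitting the endpoint from the question would look roughly like this; the container name is a placeholder to replace with the real one from docker ps:
$ docker exec -it <service2_container> curl http://service1:7070/service1/hello
Correspondingly, the code in service2 should call http://service1:7070/service1/hello rather than http://localhost:7070/service1/hello, since localhost inside container 2 refers to container 2 itself.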
