When running Docker containers in a swarm cluster, do the containers have access to the IPs of all the cluster nodes, via ENV variables or otherwise?
I want to run an Elasticsearch instance on each node in my swarm cluster, and the instances will discover each other in unicast mode. Therefore each Elasticsearch instance needs to be configured with the list of IPs in the cluster.
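For reference, this is roughly the setting each instance would need once the IPs are known (a hedged sketch only; the elasticsearch:2 image and the -Des.discovery.zen.ping.unicast.hosts flag assume Elasticsearch 2.x, and the IPs are placeholders):
# Sketch: start one Elasticsearch node with an explicit unicast host list
docker run -d --name es elasticsearch:2 \
  elasticsearch \
  -Des.cluster.name=my-cluster \
  -Des.discovery.zen.ping.unicast.hosts="10.0.0.2,10.0.0.3"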
If you mean that a container on one node can reach a container's IP on another node, then that is not possible out of the box. You have to use a tool such as Weave (or a similar overlay tool) to connect containers across different nodes.
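For reference, a rough sketch of the classic Weave Net workflow (hedged; host1 is a placeholder for the first node's address, and the exact commands depend on your Weave version):
# On the first host:
weave launch
# On the second host, peering with the first:
weave launch host1
# On each host, point the Docker client at Weave, then run containers as usual:
eval $(weave env)
docker run -d --name mycontainer alpine sleep 1d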
If you are using a recent Docker (1.13+) with a swarm overlay network, you should be able to get the IPs of all the service's containers across the cluster through DNS round robin (--endpoint-mode dnsrr).
1) Create an overlay network.
https://docs.docker.com/engine/swarm/networking/
docker network create \
--driver overlay \
my-network
2) Verify the swarm nodes:
docker@node1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
5l07yt2itiee60xfq7g6c01e4 * node1 Ready Active Leader
pckn7qo3xpbxvs89ni6whyql3 node2 Ready Active
3) Create an alpine container on each node using "global" mode:
docker service create --mode global --endpoint-mode dnsrr --name testservice --detach=true --network my-network alpine ash -c "apk update;apk add drill; ping docker.com"
4) Verify the service is running:
docker@node1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lmy5s3flw763 testservice global 2/2 alpine:latest
5) Verify that containers were deployed on individual nodes:
$ docker-machine ssh node1 "docker ps"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c7055b01479 alpine:latest "ash -c 'apk updat..." 2 minutes ago Up 2 minutes testservice.5l07yt2itiee60xfq7g6c01e4.atvascigh3rvxvlzttaotkrua
$ docker-machine ssh node2 "docker ps"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28da546aa0d5 alpine:latest "ash -c 'apk updat..." 2 minutes ago Up 2 minutes testservice.pckn7qo3xpbxvs89ni6whyql3.ebjz4asni4w1f0srna0p3vj4a
6) Confirm the individual virtual IP of each container on node1 and node2:
| => docker-machine ssh node1 "docker exec 4c7055b01479 ash -c 'ip addr'|grep eth0"
349: eth0@if350: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
inet 10.0.0.2/24 scope global eth0
| => docker-machine ssh node2 "docker exec 28da546aa0d5 ash -c 'ip addr'|grep eth0"
319: eth0@if320: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
inet 10.0.0.3/24 scope global eth0
7) Get the container IP addresses for all containers in the cluster using the drill DNS tool:
| => docker-machine ssh node1 "docker exec 4c7055b01479 ash -c 'drill testservice'"
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 60920
;; flags: qr rd ra ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; testservice. IN A
;; ANSWER SECTION:
testservice. 600 IN A 10.0.0.3
testservice. 600 IN A 10.0.0.2
;; AUTHORITY SECTION:
;; ADDITIONAL SECTION:
;; Query time: 0 msec
;; SERVER: 127.0.0.11
;; WHEN: Thu Jul 20 19:20:49 2017
;; MSG SIZE rcvd: 83
8) Verify that containers can ping each other:
docker-machine ssh node1 "docker exec 4c7055b01479 ash -c 'ping -c2 10.0.0.3'"
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.539 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.731 ms
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.539/0.635/0.731 ms
docker-machine ssh node2 "docker exec 28da546aa0d5 ash -c 'ping -c2 10.0.0.2'"
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.579 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.736 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.579/0.657/0.736 ms
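Tying this back to the original Elasticsearch question, a hedged sketch of how the same pattern could be applied (the service name es-cluster, the elasticsearch:2 image and the -Des.discovery... flag are assumptions I have not tested here); with --endpoint-mode dnsrr, the service name resolves to all task IPs, exactly as shown by drill above:
# Sketch: one Elasticsearch task per node, discovering peers via swarm DNS
docker service create \
  --mode global \
  --endpoint-mode dnsrr \
  --name es-cluster \
  --network my-network \
  elasticsearch:2 \
  elasticsearch \
    -Des.cluster.name=es-cluster \
    -Des.discovery.zen.ping.unicast.hosts=es-cluster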
I am trying to connect two locally developed projects running with docker-compose by using external networking.
On one side I have the 1st application, which is intended to be exposed. Its Compose file contains the hosts app and rabbit:
version: '3.4'
services:
app:
# ...
rabbit:
# ...
networks:
default:
driver: bridge
On the other side I have the second application, which is expected to see the 1st application:
version: '3.4'
services:
app:
# ...
networks:
- paymentservice_default
- default
networks:
paymentservice_default:
external: true
Reaching the host rabbit.paymentservice_default is possible.
However, the service app (1st) conflicts with app (2nd):
root@6db86687229c:/app# ping app.paymentservice_default
PING app.paymentservice_default (192.168.80.6) 56(84) bytes of data.
root@6db86687229c:/app# ping app
PING app (192.168.80.6) 56(84) bytes of data.
In general, from the 2nd compose project's perspective, the hosts app and app.paymentservice_default share the same IP, which makes app.paymentservice_default undiscoverable.
The question here is: is my configuration correct, and can the conflict be avoided without changing the service name app? Why this constraint? Consider that every docker-compose configuration may be shared across projects and developed independently in a micro-services world.
$ docker-compose --version
docker-compose version 1.17.1, build unknown
$ docker --version
Docker version 19.03.4, build 9013bf583a
Thank you.
I use the following configuration on Docker Playground
paymentservice.docker-compose.yml
version: '3.4'
services:
app:
image: busybox
# keep container running
command: tail -f /dev/null
rabbit:
image: rabbitmq
networks:
default:
driver: bridge
other.docker-compose.yml
version: '3.4'
services:
app:
image: busybox
# keep container running
command: tail -f /dev/null
networks:
- paymentservice_default
- default
networks:
paymentservice_default:
external: true
Run both projects
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml up -d
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml up -d
Show Docker IPs
$ docker ps -q | xargs -n 1 docker inspect --format '{{ .Name }} {{range .NetworkSettings.Networks}} {{.IPAddress}}{{end}}' | sed 's#^/##';
I got
other_app_1 172.20.0.2 172.19.0.4
paymentservice_app_1 172.19.0.3
paymentservice_rabbit_1 172.19.0.2
and I pinged paymentservice_app_1 (172.19.0.3) from other_app_1 using app.paymentservice_default
$ docker exec -it other_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.258 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.258/0.258/0.258 ms
and I pinged other_app_1 (172.20.0.2) from other_app_1 using app
$ docker exec -it other_app_1 ping -c 1 app
PING app (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.054 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.054/0.054 ms
As you can see, I can access the 1st app (of paymentservice.docker-compose.yml) from the 2nd app (of other.docker-compose.yml).
The same works in the other direction. I pinged other_app_1 (172.19.0.4) from paymentservice_app_1 using app.paymentservice_default
$ docker exec -it paymentservice_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.198 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.198/0.198/0.198 ms
I pinged paymentservice_app_1 (172.19.0.3) from paymentservice_app_1 using app
$ docker exec -it paymentservice_app_1 ping -c 1 app
PING app (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.057 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.057/0.057 ms
As you can see, I can access app service of both projects. If I like to access the service of the same project, I use the default network of the project. If I'd like to access the service of another project, I use the external network shared between both projects.
Note: I would recommend making this more explicit by creating the shared network outside of the projects on the command line
docker network create shared-between-paymentservice-and-other
and declaring it as external in both projects.
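A minimal sketch of what that declaration could look like in each project's compose file (abbreviated; the service stanza is shortened, and the network key simply references the pre-created network):
services:
  app:
    # ...
    networks:
      - default
      - shared-between-paymentservice-and-other
networks:
  shared-between-paymentservice-and-other:
    external: true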
Note: There is still the limitation that service discovery may not work if you have 3 projects with the same service name (e.g. app) in the same (external) network (sort of a namespace). In that case, it might be a better idea to rename your services, use multiple external networks, define aliases or use a totally different approach to discover/identify the Docker containers.
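For the alias approach, a hedged sketch (payment-app is a hypothetical alias; each project would pick its own unique name on the shared network, so the generic service name app no longer has to be resolved across projects):
services:
  app:
    # ...
    networks:
      default:
      paymentservice_default:
        aliases:
          - payment-app   # hypothetical, project-specific name
networks:
  paymentservice_default:
    external: true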
Afterword
Was that the requirement? I tried to reproduce your issue, but I'm not sure if I did the same as you. For example, I'm not sure where you are running ping. Is root@6db86687229c the Docker host or a Docker container? Which container? I assumed it is the Docker container of the app service of other.docker-compose.yml. Please comment if I'm missing something or have misinterpreted your question and I will update my answer. Then I can explain in more detail or suggest another approach to service discovery between multiple Docker Compose projects.
Appendix
Cleanup
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml down
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml down
Versions
$ docker --version
Docker version 20.10.0, build 7287ab3
$ docker-compose --version
docker-compose version 1.26.0, build unknown
Is it possible to create a swarm with the following node setup:
Host machine (Docker Desktop in Windows containers mode): Manager, Worker
VM on Hyper-V, same machine: Worker
I tried the following:
docker swarm init --advertise-addr 172.18.69.65
docker-machine create --driver hyperv --hyperv-virtual-switch 'Default Switch' --hyperv-disable-dynamic-memory --hyperv-memory 2048 --hyperv-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v19.03.2/boot2docker.iso docker-worker-linux
docker-machine ssh docker-worker-linux
docker@docker-worker-linux:~$ ping 172.18.69.65
PING 172.18.69.65 (172.18.69.65): 56 data bytes
64 bytes from 172.18.69.65: seq=0 ttl=128 time=0.294 ms
64 bytes from 172.18.69.65: seq=1 ttl=128 time=0.264 ms
docker@docker-worker-linux:~$ docker swarm join --token bla-bla-bla 172.18.69.65:2377
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
docker@docker-worker-linux:~$ docker info
Swarm: error
NodeID:
Error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Is Manager: false
Node Address: 172.18.69.66
Should this be possible at all: joining the host Windows machine running Docker Desktop and a Linux VM as nodes of the same swarm?
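(A hedged aside on my troubleshooting so far: ping only proves ICMP reachability, while the join needs the swarm ports, in particular TCP 2377, reachable from the VM; assuming nc is available in the boot2docker VM, a quick check could look like this:)
nc -zv 172.18.69.65 2377   # swarm manager listen port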
I am a beginner with Docker, so please correct me if anything is wrong.
As shown in this Docker swarm tutorial https://www.youtube.com/watch?v=nGSNULpHHZc, I am trying to set up a multi-host environment for my Hyperledger Fabric application.
I am using two Oracle Linux servers, namely server 1 and server 2.
I connected both servers as swarm managers and created an overlay network called my-net.
I followed the syntax given in the above-mentioned tutorial and created the service as shown below.
docker service create --name myservice --network my-net --replicas 2 alpine sleep 1d
As expected, it created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.4 and the server 2 container IP is 10.0.0.5.
Now, I ping from the second server's container to the first server's container as shown below, and the ping works.
# docker exec -it ContainerID sh
/ # ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.082 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.067 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.070/0.082 ms
Now, I create my second service (myservice1) using the syntax shown below.
docker service create --name myservice1 --network my-net --replicas 2 hyperledger/fabric-peer sleep 1d
As expected, this also created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.6 and the server 2 container IP is 10.0.0.7.
Now, I try to ping from the second server's container to the first server's container as shown below.
This time I get a "ping: not found" error:
# docker exec -it ContainerID sh
# ping 10.0.0.6
sh: 1: ping: not found
Can anyone please help me understand what the problem is with the second service, myservice1?
The Fabric Docker images are based on a bare-bones Ubuntu base image and do not include utilities like ping. Once you "exec" into the peer containers, you can use "apt" to install ping:
apt-get update
apt-get install inetutils-ping
Expanding on Gari Singh's answer: on a Fabric network I spun up this week, inetutils has been split into different packages:
# apt-cache search inetutils
inetutils-ftp - File Transfer Protocol client
inetutils-ftpd - File Transfer Protocol server
inetutils-inetd - internet super server
inetutils-ping - ICMP echo tool
inetutils-syslogd - system logging daemon
inetutils-talk - talk to another user
inetutils-talkd - remote user communication server
inetutils-telnet - telnet client
inetutils-telnetd - telnet server
inetutils-tools - base networking utilities (experimental pac
so to install e.g. ping, the correct command has become:
# apt-get install inetutils-ping
The Ubuntu version of the peer is:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
I am trying to add a cluster with replicas using docker-compose scale graylog-es-slave=2, but with a version 3 compose file, unlike Docker compose and hostname.
What I am trying to do is figure out how to reach a specific node in the replica set.
Here is what I have tried:
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave -c 2
PING graylog-es-slave (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: icmp_seq=0 ttl=64 time=0.067 ms
64 bytes from 172.19.0.4: icmp_seq=1 ttl=64 time=0.104 ms
--- graylog-es-slave ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.067/0.085/0.104/0.000 ms
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave.1 -c 2
ping: unknown host
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave_1 -c 2
ping: unknown host
The docker-compose.yml:
version: '3'
services:
  graylog-es-slave:
    image: elasticsearch:2
    command: "elasticsearch -Des.cluster.name='graylog'"
    environment:
      ES_HEAP_SIZE: 2g
    deploy:
      replicas: 2 # <-- this is ignored by docker-compose; just putting it here for completeness
Instead of ., use _ (underscore), and add the project name as a prefix (the project name is the directory that holds your docker-compose.yml; I assume it is liberty-docker):
ping liberty-docker_graylog-es-slave_1
You can see this by running docker network ls, searching for the right network, and then running docker network inspect network_id.
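A hedged example of that lookup (liberty-docker_default is an assumed network name based on the project directory; adjust it to whatever docker network ls actually shows):
docker network ls
docker network inspect liberty-docker_default \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'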
I am looking for a solution to ping a Docker container using its hostname, from another Docker container.
I tried as follows:
Starting the first Docker container:
docker run --rm -ti --hostname=repohost --name=repo repo
Starting the second Docker container, linking it to the first, and starting bash:
docker run --rm -ti --hostname=repo2host --link repo:rp repo2 /bin/bash
In the bash session started on repo2:
ping repohost
it remains pending, without any result.
Can someone tell me if there is a solution for this?
You should be able to ping using the alias you gave in the link command (the part after the :); in your case, ping rp should work.
The following works for me, given a running container called furious_turing:
$ docker run -it --link furious_turing:ft debian /bin/bash
root@06b18931d80b:/# ping ft
PING ft (172.17.0.3): 48 data bytes
56 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.136 ms
56 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.091 ms
56 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.092 ms
^C--- ft ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.091/0.106/0.136/0.000 ms
root@06b18931d80b:/#
If you need to ping on another name, you can add entries to /etc/hosts with the --add-host argument to docker run.
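For example (a sketch only; 172.17.0.3 is a placeholder for repo's actual address, which you would have to look up first):
docker run --rm -ti --link repo:rp --add-host repohost:172.17.0.3 repo2 /bin/bash
# inside repo2, "repohost" now resolves to the address supplied above
ping repohost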
One way to achieve what you need would be with WeaveDNS.