I would like to know how to make one container discoverable to another container running on a different host but connected over a LAN. Basically, I want to run two containers on different hosts and have them communicate; this way, I suppose, I can implement distributed training on TensorFlow.
Is there any possible way to accomplish this?
There are multiple options to do that:
You can use Weave.
You can set up a Docker overlay network (a sketch follows below).
You can use Docker Swarm.
You can create a macvlan Docker network.
You may also use a special script called pipework, which will automatically do the job:
Assign a static macvlan IP
Assign a dynamic IP using a DHCP client
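For example, a minimal sketch of the overlay-network approach, assuming both hosts can reach each other over the LAN; the network, image, and container names here are placeholders:

```
# On host A (will act as the swarm manager):
docker swarm init --advertise-addr <HOST_A_LAN_IP>

# On host B, join using the token printed by "swarm init":
docker swarm join --token <TOKEN> <HOST_A_LAN_IP>:2377

# On the manager, create an overlay network that standalone containers may attach to:
docker network create --driver overlay --attachable training-net

# On each host, run a container attached to that network
# (alpine is just for the connectivity demo; use your TensorFlow image in practice):
docker run -d --name worker1 --network training-net alpine sleep 1d   # on host A
docker run -d --name worker2 --network training-net alpine sleep 1d   # on host B

# worker1 and worker2 can now reach each other by name across the two hosts:
docker exec worker1 ping -c 1 worker2
```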
I'm working with Docker containers, and I find it strange that the default network prevents containers from communicating with each other by name.
Thanks for any hint.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
From the official Docker documentation.
Technically, there is nothing stopping Docker from resolving container names on the default bridge network. I think it is simply a decision made by the Docker team to force users to create bridge networks consciously, so that they know what they are doing and can use them securely in production.
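As a quick illustration (the network and container names below are arbitrary):

```
# Name resolution does NOT work on the default bridge:
docker run -d --name web nginx
docker run --rm busybox ping -c 1 web        # fails: "bad address 'web'"

# On a user-defined bridge network it does:
docker network create mynet
docker run -d --name web2 --network mynet nginx
docker run --rm --network mynet busybox ping -c 1 web2   # resolves and replies
```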
I have two instances of Keycloak, each running in a container on a separate node.
The nodes are bare-metal nodes inside my company network.
Keycloak uses TCPPING as its discovery protocol.
Since the two containers are running on different nodes, and each instance is pinging inside the Docker default network, they are not able to find each other.
I say the Docker default network because I didn't specify a special network for the two containers.
Any idea how I can make the two instances discover each other in this architecture?
I was also thinking about Docker Swarm as a solution.
Assuming the two nodes are on the same network and are able to connect to each other, you can get the two containers to discover each other using Docker host networking.
It would be as easy as docker run --net=host.
Docker host networking makes the container use the host node's network stack, so the container shares the host's IP address (typically assigned by the LAN's DHCP server) and, for all practical purposes, looks like just another host on that network.
This allows the two containers to discover each other using TCPPING.
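A rough sketch of what that could look like. The image name and the JGROUPS_* environment variables are assumptions based on the jboss/keycloak image; adjust them to the image you actually use:

```
# On node A (e.g. 10.0.0.11):
docker run -d --net=host \
  -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
  -e JGROUPS_DISCOVERY_PROPERTIES="initial_hosts=10.0.0.11[7600],10.0.0.12[7600]" \
  jboss/keycloak

# On node B (e.g. 10.0.0.12), run the same command.
# With host networking, each Keycloak instance binds directly to its node's LAN IP,
# so TCPPING can reach the other instance on port 7600.
```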
Docker Swarm would also enable this. Docker Swarm basically abstracts multiple host nodes so that you can run containers on them as if you were running Docker on a single host. But that would require a whole new setup.
I have been trying to understand Docker and swarm mode. I have also read the Docker networking tutorials.
I have tried Docker swarm mode. If swarm mode is initialised and we execute docker network ls, it shows a network with the name ingress.
My question is: do I need to explicitly create an overlay network, or should swarm mode work fine without explicitly creating one?
No, you don't need to; however, it is recommended that you create a custom overlay network for the applications you deploy to the swarm. The ingress overlay network handles control and data traffic related to swarm services. From the official documentation:
Use the default overlay network demonstrates how to use the default overlay network that Docker sets up for you automatically when you initialize or join a swarm. This network is not the best choice for production systems.
If you need communication between containers on different Docker Swarm nodes, you need an overlay network.
If you just use docker run, the container is attached only to the local bridge network of the host you run the command on, so it cannot reach containers on the other nodes.
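As a sketch, creating and using a custom overlay network could look like this (the network and service names are illustrative):

```
# On a manager node, create a user-defined overlay network:
docker network create --driver overlay my-app-net

# Deploy services attached to it; their tasks can reach each other
# across nodes and resolve each other by service name:
docker service create --name api --network my-app-net --replicas 2 nginx
docker service create --name worker --network my-app-net --replicas 2 alpine sleep 1d
```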
If I know a Docker container's IP address, I can easily communicate with it from another container, but only as long as they are on the same network.
My question is: how can I communicate with containers on another network, and why can't I access a local IP that is on the same machine? I am interested in a networking explanation of why I can access 172.19.0.1 from 172.19.0.2 but cannot access 172.20.0.1 from 172.19.0.2.
What are possible workarounds to make a Docker container on one network communicate with a Docker container on another network?
You can publish a port and then access that port via the host (localhost, or 0.0.0.0 for troubleshooting).
Other than that, you could use an alternative to Docker networks, like linking or other things, but I wouldn't suggest that. If you want two containers to communicate with each other, and not with the public, just create a new network for those two containers.
You can mark that network as external so the containers can join it even from different Compose files.
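For instance, a minimal sketch of the shared-network approach (network and container names are arbitrary):

```
# Create a dedicated network for the two containers:
docker network create shared-net

# Attach each container to it, even if they already belong to other networks:
docker network connect shared-net container_a
docker network connect shared-net container_b

# container_a can now reach container_b by name over shared-net:
docker exec container_a ping -c 1 container_b

# In Compose, declare the same network in each file and mark it "external: true".
```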
Elasticsearch is designed to run in cluster mode; all I have to do is define the relevant node IPs in the cluster via an environment variable, and as long as network connectivity is available, it will connect to and join the other nodes in the cluster.
I have 3 nodes: one is acting as the Docker Swarm manager and the other two are workers. I have initialized the manager and joined the worker nodes, and everything looks OK from that standpoint.
Now I'm trying to run the Elasticsearch container in a way that will allow me to join all nodes to the same Elasticsearch cluster. However, I want the nodes to join using their overlay network interface, which means that I need to know the containers' internal IP addresses at the time of running the docker service create command. How can I do this? Do I have to use something like Consul to achieve this?
Some clarifications:
I need to know, at the time of service creation, the IP addresses (or DNS names) of all Elasticsearch participants so I can start the cluster correctly. This has to be at creation time, not afterwards. Also, as I understand it, I could expose ports 9200/9300 for all services, work with the external machine IPs, and get it to work, but I would like the overlay network to carry all of this communication (I thought this is what swarm mode is for).
Only a partial solution here.
So, when attaching your services to a custom overlay network, you do indeed have access to Docker's service discovery features. I'll detail the networking features of Docker Swarm mode before trying to tie them to your problem.
I'll be using the distinct terms service and task, where a service could be elasticsearch, whereas a task is a single instance of that elasticsearch service.
Docker networking
The idea is that for each service you create, Docker assigns a virtual IP (VIP) and a custom DNS alias. You can retrieve this VIP using the docker service inspect myservice command.
There are two modes for attaching a service to an overlay network: dnsrr and vip. You can select between them using the --endpoint-mode option of docker service create.
VIP mode (I believe it is the default, or at least the most commonly used) assigns the virtual IP to the service's DNS alias. This means that doing an nslookup servicename returns a single VIP which, behind the scenes, is load-balanced across your containers in a round-robin fashion. But there is also a special DNS alias that lets you retrieve all of your instances' IPs (all of your tasks' IPs): tasks.myservice.
So in VIP mode you can retrieve all of your tasks' IPs with a simple nslookup tasks.myservice, where myservice is the service name.
The other mode is dnsrr. This mode simply gets rid of the VIP and attaches the DNS alias directly to the different tasks (i.e., the service instances) in a round-robin way. This way, you simply do an nslookup myservice to retrieve the IPs of the different service instances.
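As a quick illustration (myservice is a placeholder for your own service name, and the lookups must be run from a container attached to the same overlay network):

```
# With the default VIP endpoint mode:
nslookup myservice          # returns the single virtual IP of the service
nslookup tasks.myservice    # returns one A record per running task (container)

# With --endpoint-mode dnsrr, there is no VIP:
nslookup myservice          # returns the task IPs directly, round-robin
```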
Elasticsearch clustering
OK, so first of all, I'm not really familiar with the way Elasticsearch handles clustering. From what I understood from your question, when running the elasticsearch binary you need to give it, as a parameter, the addresses of all the other nodes it needs to cluster with.
So what I would do is create a custom Elasticsearch image, probably based on the official one, to which I would add a custom entrypoint that first runs a script to retrieve the other tasks' IPs.
I believe that staying in VIP mode suits you, since the tasks.myservice DNS alias is available. You'll then need to parse the output to retrieve the task IPs (and probably remove your own). Then you can save them in a config file or an environment variable, or use them as a runtime option for your elasticsearch binary.
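A rough entrypoint sketch along those lines; everything here is illustrative: the service name es, and the discovery setting, which depends on your Elasticsearch version:

```
#!/bin/sh
# Resolve the IPs of all tasks of the "es" service on the shared overlay network,
# drop our own IP, and hand the rest to Elasticsearch as the unicast host list.
MY_IP=$(hostname -i | awk '{print $1}')
PEERS=$(getent hosts tasks.es | awk '{print $1}' | grep -v "^$MY_IP$" | paste -sd, -)

# The setting name varies by version; older releases use
# discovery.zen.ping.unicast.hosts, newer ones use discovery.seed_hosts.
exec elasticsearch -E discovery.zen.ping.unicast.hosts="$PEERS"
```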
Edit: to create a custom overlay network, you will need to use the docker network create command, and then use the --network option of docker service create.
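For instance (the network name and the custom image name are illustrative):

```
docker network create --driver overlay es-net
docker service create --name es --network es-net --replicas 3 my-custom-elasticsearch
```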
This answer is mainly based on the Swarm mode networking documentation.