I have a dockerized Dropwizard service deployed on Marathon. I am using Hazelcast as a distributed cache, which I start as part of my Dropwizard service. I have placed a constraint to ensure that each container is started on a unique host.
"constraints": [
[
"hostname",
"UNIQUE"
]
],
I have exposed 2 ports on my docker container: 10012 for my service and 10013 for Hazelcast. I am using Zookeeper for my Dropwizard service discovery. Thus when I start up my Hazelcast instance I have access to the hostnames of all the machines on which my docker containers are running, and I add all of them as below.
TcpIpConfig tcpIpConfig = join.getTcpIpConfig();
// "finder" is a handle to the service discovery service; the following gets
// all the hosts on which my docker containers will run.
List<ServiceNode<ShardInfo>> nodes = finder.getAllNodes();
nodes.stream()
     .peek(serviceNode -> log.info("Adding " + serviceNode.representation() + " to hazelcast."))
     .map(ServiceNode::getHost)
     .forEach(tcpIpConfig::addMember);
tcpIpConfig.setRequiredMember(null).setEnabled(true);
Now, the issues:
If I use network type BRIDGE when deploying on Marathon, then the containers don't know the docker host they run on, and thus my 2 docker containers don't know about each other. It looks something like this:
ip-10-200-2-219.ap-southeast-1.compute.internal (docker host) - 172.12.1.18 (docker container ip)
ip-10-200-2-220.ap-southeast-1.compute.internal (docker host) - 172.12.1.20 (docker container ip)
From zookeeper I get the docker host IPs but not the docker container IPs.
If I use network type HOST then everything works, but the issue is that I then have to make sure that ports 10012 and 10013 are always available on whichever hosts my docker containers run. (With BRIDGE, the docker container ports are bound to random host ports.)
Analysis:
The two docker containers are inside their own networks, localized to the slaves. They need to recognise each other using the public IP of the slave and the bridged host port to which 5701 (or whatever Hazelcast port you are using) is mapped.
Solution
In the TCP/IP configuration, set the public address and port when starting the instance. All instances do this, and they then talk to each other using the Marathon slave IP and the randomized host port mapped to the Hazelcast port.
Use the HOST and PORT_5701 variables that Marathon provides inside the container to do this.
Config hzConfig = new Config();
// HOST is the Mesos slave's address; PORT_5701 is the host port that
// Marathon mapped to container port 5701.
hzConfig.getNetworkConfig().setPublicAddress(
        String.format("%s:%s",
                System.getenv("HOST"),
                System.getenv("PORT_5701")));
Refer to the Hazelcast network configuration documentation to understand a bit more about the public address option.
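Putting the pieces together, a minimal sketch of the full startup (memberHosts is assumed to come from your service discovery, like the Zookeeper-backed finder in the question; HOST and PORT_5701 are the Marathon-provided variables):
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.config.TcpIpConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.List;

public class MarathonHazelcastStarter {
    public static HazelcastInstance start(List<String> memberHosts) {
        Config config = new Config();
        // Advertise the slave address and the host port that BRIDGE
        // networking mapped to the Hazelcast container port.
        config.getNetworkConfig().setPublicAddress(
                String.format("%s:%s",
                        System.getenv("HOST"),
                        System.getenv("PORT_5701")));
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        TcpIpConfig tcpIp = join.getTcpIpConfig().setEnabled(true);
        memberHosts.forEach(tcpIp::addMember);
        return Hazelcast.newHazelcastInstance(config);
    }
}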
You can use the TCP/IP discovery mechanism and make sure the Hazelcast nodes bind to the public IP of the docker container. Although this solution only helps if you know the docker container IPs before your deployment.
<hazelcast>
  ...
  <network>
    ...
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>docker-host1</member>
        <member>docker-host2</member>
        <member>172.12.1.20</member>
        <member>192.168.1.21</member>
      </tcp-ip>
      ...
    </join>
    ...
  </network>
  ...
</hazelcast>
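If you prefer the programmatic API over hazelcast.xml, the equivalent join configuration would look roughly like this:
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;

public class TcpIpJoinExample {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false); // multicast off, as in the XML
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("docker-host1")
            .addMember("docker-host2")
            .addMember("172.12.1.20")
            .addMember("192.168.1.21");
        Hazelcast.newHazelcastInstance(config);
    }
}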
Getting Hazelcast members to discover each other and form a cluster is tricky. I ended up following the advice of @bitsofinfo -
https://github.com/hazelcast/hazelcast/issues/9219
@santanu's answer is correct. The public address needs to be set properly for Hazelcast members to be able to discover each other. Here's a parameterized way of doing this: https://github.com/gagangoku/hazelcast-docker
Related
I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping the ports of one of these services in the docker-compose file with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000, and via the peers' IPs(?).
When I inspect the network created, I can see the local IPs of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
I am running these docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I do docker-machine ip I get "host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in virtualbox. Now it works!]
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, if you communicate between containers, you should use the container/service name.
And for your other problem you probably want a reverse proxy like nginx or traefik.
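For example, on the same overlay network the service name resolves through Docker's embedded DNS, so a plain HTTP call works. A minimal sketch, assuming a service named web listening on 8000 with a /health endpoint:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ServiceNameDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // "web" is a hypothetical service name from the stack file; Docker's
        // DNS resolves it to the service's virtual IP on the overlay network.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://web:8000/health"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}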
I have 6 microservices packed in docker containers. On every swarm node I have installed a consul agent, bound to the host IP, with the client listening on 0.0.0.0.
All the microservices are in a docker-compose file, which I am running from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the consul agent endpoint. The possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the consul agent is on the host, not on the container's localhost.
- ${HOSTIP}: in the compose file I have to supply this env var, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a consul cluster: 3 managers and 3 nodes. On each manager and node I have a consul agent started (as a docker container). No matter what type of networking I use, I am not able to start up the microservices. I started consul with --net=host and --net=bridge, but neither works.
Is there anyone with an idea?
Thanks in advance.
So you are running consul in containers too, right? Is it possible in your setup to link containers? Then you could start the consul containers as "consul" on each host and link your microservices to them. Linked containers get a hosts entry, so the consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can configure the client address as 0.0.0.0; this should make the consul API available to the other containers running on the host.
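To illustrate, once the agent is reachable under a stable name, a service can call the standard Consul HTTP API directly. A minimal sketch, assuming the agent container is reachable as consul and a service named my-service is registered:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulCatalogLookup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // /v1/catalog/service/<name> is a standard Consul HTTP API endpoint
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://consul:8500/v1/catalog/service/my-service"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of registered instances
    }
}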
Let me answer my own question: this is not the way we want to do it. We cannot put some things inside Swarm and some things outside Swarm and expect them to work together. They will not. Consul as service discovery cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If someone is using Swarm, everything should be in overlay networks (rabbit, redis, elk and so on)...
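In other words, keep everything inside Swarm and let its built-in DNS do the discovery. A rough sketch with hypothetical service and image names:
docker network create --driver overlay backend
docker service create --name rabbit --network backend rabbitmq:3
docker service create --name my-service --network backend my-service-image
my-service can now reach RabbitMQ at rabbit:5672 through Swarm's built-in DNS.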
Does anyone know what configuration needs to be done in hazelcast.xml if we want to form a Hazelcast cluster between instances running in multiple docker containers? Should we provide 127.0.0.1 as the member address, or should the address be that of the docker host? Also, does the local.localAddress property need to point to the docker host address?
Edit :
We made some changes, setting the public IP, and were able to form the cluster, but with a limitation: when defining the port mapping in the docker run command, the host port must be the same as the container port. If we set a different host port and map it to the Hazelcast port, e.g. 8047:5701, it doesn't work; it has to be 5701:5701. Any idea why it behaves this way?
You can set the public-address property in the Hazelcast config to the IP of the host machine. This will allow the nodes to join the cluster.
<network>
  <public-address>host-machine-ip</public-address>
</network>
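If the host IP is only known at deploy time, one option is to inject it as an environment variable and set the public address programmatically instead of hardcoding it in hazelcast.xml. A sketch, where HOST_IP is a hypothetical variable you pass in yourself:
Config config = new Config();
// HOST_IP is assumed to be injected at container start, e.g.
//   docker run -e HOST_IP=<host-machine-ip> -p 5701:5701 my-hazelcast-image
config.getNetworkConfig().setPublicAddress(System.getenv("HOST_IP"));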
I am having a weird scenario in my project.
I am running "Supervisor" application in one of docker container.
Using this supervisor I am running two "web applications" in docker containers and both are using one micro service; again installed in another docker container.
Now, I can able to access my application from "Supervisor's container". But obviously it is not accessible from my machine.
How can I able to access my applications "Web App1" or "Web App2" from my machine?
I have little knowledge of docker networking.
Please help.
You can map the ports of Web App1 and Web App2 to the host machine, and using the host's IP address and those ports you can access the containers from your machine. A better way is to add hostnames for your containers and map the ports, so you don't have to remember IP addresses, since they are regenerated every time a container is recreated.
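A rough sketch of what that could look like (image names, hostnames and host ports here are hypothetical):
docker run -d --name webapp1 --hostname webapp1 -p 8081:8080 webapp1-image
docker run -d --name webapp2 --hostname webapp2 -p 8082:8080 webapp2-image
Web App1 is then reachable from your machine at http://<host-ip>:8081, and Web App2 at http://<host-ip>:8082.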
Docker manages the network traffic between the host machine and containers. In this case you have many dockers on different layers. On each layer you have to expose the ports of the internal containers to the "docker host" on the next layer, and so on.
This is a solution using ports:
The "Supervisor" on 172.17.42.1 must expose the ports of all the internal containers (172.17.0.2-4) as its own ports. So for the "Supervisor" you need a -p docker parameter for each port of every container inside it.
Expose the network:
Configure the local machine to send any network packet for 172.17.*.* to 172.17.42.1. Then configure 172.17.42.1 to send network packets for IPs 172.17.0.* to its network adapter docker0 (the default docker network adapter). The exact implementation depends on your distribution.
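On a typical Linux setup that could look roughly like this (a sketch only; subnets follow the example above, and the exact commands vary by distribution):
# On the local machine: route the inner container range via the Supervisor
ip route add 172.17.0.0/24 via 172.17.42.1
# On the Supervisor host: enable forwarding so packets reach docker0
sysctl -w net.ipv4.ip_forward=1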
Another solution:
Skip your Supervisor container and use docker-compose to arrange and manage your internal containers.
An application server is running in one Docker container and a database is running in another container. The IP address of the database server is obtained with:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the container that needs it.
One simple way to solve this would be to use Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am on the Weave engineering team.
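With classic Weave, that setup is only a few commands. A sketch, assuming the weave script is installed on both hosts (names and addresses are placeholders):
# on host1
weave launch
# on host2, pointing at host1 so the routers connect
weave launch <host1-ip>
# route new containers through weave so they join the overlay network
eval $(weave env)
docker run -d --name db my-db-image
Other weave-attached containers can then reach this one by its name, db, via weaveDNS.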
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your DB is a container on docker server 2? If so, treat them like ordinary remote hosts: your DB port needs to be published on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host docker subnetwork is a private network. It's perhaps possible to make those addresses routable, but it would be painful, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in your Dockerfile and -p in your docker run). Then you just do host-to-host communication. You can resolve hosts by IP, environment variables, or good old DNS.
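A sketch of that host-to-host setup, assuming a MySQL database (image name, password and env var names here are illustrative):
# On docker server 2: publish the DB port to the host
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql
# On docker server 1: pass the DB's host address into the app container
docker run -d --name app -e DB_HOST=<server2-ip> -e DB_PORT=3306 my-app-image
The application then builds its JDBC URL from those variables, e.g. jdbc:mysql://$DB_HOST:$DB_PORT/mydb.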
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was resolved using "systemctl stop firewalld; systemctl disable firewalld" (a narrower alternative is sketched at the end of this answer).
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
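As an aside: instead of stopping the firewall altogether, a narrower fix is to open only the port the containers actually need. A sketch assuming firewalld and the MySQL port:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload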