I am trying to set up a two-node Apache Ignite cluster based on Docker containers running on two different hosts.
After several tries, the only way I found to get the nodes communicating was to use "--net=host".
But we are using user namespaces on these hosts, so it's not a solution I can deploy.
Is there a workaround? I have read about BasicAddressResolver, but with no results so far; maybe it's not the right approach.
And overlay networks seem a bit cumbersome for our needs.
Thanks for any help, maybe just a working config file I could adapt.
Regards
BAD
docker run -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" -p "10800:10800" -p "11211:11211" -p "47100-47199:47100-47199" -p "47500-47599:47500-47599" -p "49112:49112" apacheignite/ignite:latest
WORKS
docker run --net=host -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" -p "10800:10800" -p "11211:11211" -p "47100-47199:47100-47199" -p "47500-47599:47500-47599" -p "49112:49112" apacheignite/ignite:latest
(of course I could drop the port mappings in the --net=host case)
"For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network." (source)
"Routing at the OS level" I assume means --net=host, so according to Docker the answer is an overlay network. Other options appear to be available, but they would need extra software.
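For what it's worth, here is a minimal sketch of that overlay approach on a current Docker with swarm mode, which needs no separate key-value store. The network name ignite-net, the address 192.168.1.101 and the worker token are placeholders, not values from a tested setup:
host1$ docker swarm init --advertise-addr 192.168.1.101
host1$ docker network create -d overlay --attachable ignite-net
host2$ docker swarm join --token <worker-token> 192.168.1.101:2377
# on both hosts, attach the Ignite container to the overlay network
$ docker run -d --net ignite-net -v "/tmp/apache_ignite_node.xml:/opt/ignite/apache-ignite/config/default-config.xml" apacheignite/ignite:latest
With both containers on the same overlay network, the discovery ports (47500-47599) and communication ports (47100-47199) are reachable container-to-container without publishing them on the hosts; if multicast discovery does not cross the overlay, listing the peer addresses in a static TcpDiscoveryVmIpFinder in the XML config is the usual workaround.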
Related
Suppose I want to connect a container to another container, where the two Docker containers are running on different machines. How do I do that? Hopefully the attached picture will help explain what I need. Thanks.
This works exactly the same way as if neither process was running in Docker: connect to the other system's IP address and the port you published when you launched the container.
machine02$ docker run --name m2-c1 -p 12345:80 image1
machine01$ docker run --name m1-c5 \
> -e CONTAINER_1_URL=http://192.168.1.102:12345 \
> image5
If you find yourself doing this often, a clustered setup like Kubernetes or Docker Swarm is built for this sort of environment. They have a piece called an overlay network that would allow all 10 containers to share a single "network", so you can directly call c1 as a host name and reach either copy of it. A non-Docker service discovery system, like Hashicorp's Consul, can also help remember what service is running on which node.
This question is probably addressed to all Docker gurus. But let me give some background first. I faced DNS resolution problems (on Docker's default network "bridge") until I read the following in the documentation at https://docs.docker.com/engine/userguide/networking/
The docker network inspect command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option.
As the --link option is deprecated, makes any docker run command hang, and finally crashes the Docker daemon process (locally), I tried using a different user-defined bridge network and pinned dummy instances to it.
docker network create -d bridge --subnet=172.15.0.0/16 \
  --gateway=172.15.0.1 \
  -o com.docker.network.bridge.default_bridge=false \
  -o com.docker.network.bridge.enable_icc=true \
  -o com.docker.network.bridge.enable_ip_masquerade=true \
  -o com.docker.network.driver.mtu=1500 \
  -o com.docker.network.bridge.name=docker1 \
  -o com.docker.network.bridge.host_binding_ipv4=0.0.0.0 a
docker run --name db1 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker run --name db2 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker network connect --ip 172.15.0.40 a db1
docker network connect --ip 172.15.0.41 a db2
Now the resolution of the container names given via --name works fine using ping, but here is the question:
Why is service/container name resolution not possible on the default bridge network?
It would be great if any Docker network guru could give a hint. Regards.
Why is service/container name resolution not possible on the default bridge network?
There's no technical reason this would not be possible; it was a decision made to preserve backward compatibility.
The default ("bridge") network never supported service discovery through a built-in DNS, and when the feature was under development, maintainers of some projects raised concerns that they did not want this added to the default network, as it would block alternative implementations.
In addition, custom networks are designed to explicitly allow containers to communicate. On the default network, that is achieved by disabling "inter-container communication" (--icc=false) and using --link to establish a link between containers. Having automatic discovery for any container connected to the default network would make this a lot more complicated to use.
So: create a custom network, and attach containers to that network if they should be able to communicate with each other.
Note that in many cases, not all of the options you specified are needed; simply running docker network create foo should work for most use cases.
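For reference, a stripped-down version of the setup from the question, without the extra -o options; name resolution between containers works out of the box on the user-defined network (busybox is used here only as a throwaway container to run ping from):
docker network create foo
docker run -d --name db1 --network foo -e MYSQL_ROOT_PASSWORD=a mysql:5.7.16
docker run --rm --network foo busybox ping -c 1 db1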
I'm starting out with container-based architectures using Docker, and I have a question that may be nonsense.
Does it make sense to link Docker containers that are running on different hosts?
Say we have two containers:
barDatabase
fooService
If both were on the same host, we would link barDatabase to fooService, thereby giving fooService a hostname it can use to reach the database.
But if they are running on different machines:
barDatabase -> machine1.company.local
fooService -> machine2.company.local
Would it still be necessary to link them? Couldn't we just use the machine's hostname without linking them?
Thanks.
Yes and no. Newer versions of Docker have docker network, which requires a bit of extra configuration, for example an etcd instance to manage the network state.
In doing so, you can then:
docker network create somenetname
docker run -d --net somenetname --name barDatabase yourimage
And on your other host:
docker run -d -p 8080:8080 --net somenetname --name fooService service_image
You'll then be able to ping barDatabase from fooService as if it were a hostname. And fooService will attach to the external net and act as a gateway.
This works on my 1.9.1 Docker, and not on my 1.8.2, on CentOS. (So I would assume it's a 1.9+ feature, but I can't find a direct source.)
More detail:
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
It requires a bit more faff to set up though, because you do have to configure etcd (or another key-value store).
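Roughly, the extra setup looks like this (a sketch based on the guide above; the etcd address 192.168.1.10:2379 is a placeholder, and on 1.9-era Docker the daemon is started as docker daemon rather than dockerd):
# on every host, start the daemon pointed at the shared key-value store
docker daemon --cluster-store=etcd://192.168.1.10:2379 --cluster-advertise=eth0:2376
# on one host, create the overlay network; it becomes visible to all the daemons
docker network create -d overlay somenetname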
I've been using this to put a multi-node elasticsearch instance on a private network, which I would assume is similar to your use case. (3 es nodes on 3 hosts, with logstash feeding in, and kibana acting as a gateway, along with an nginx admin proxy that does some security/rewrite)
In this case you'd have to publish the ports you want to access on the database container on machine1, and then on machine2 you'd just point at machine1 on the published port, as you expected. There's no need (and AFAIK no way) to directly link containers running on different machines.
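A minimal sketch of that, reusing the names from the question; the image names, the port 3306 and the DATABASE_URL variable are just placeholders for whatever your database and service actually use:
machine1$ docker run -d --name barDatabase -p 3306:3306 bar-database-image
machine2$ docker run -d --name fooService -e DATABASE_URL=machine1.company.local:3306 foo-service-image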
I just started playing around with Docker and was able to set up a Docker image using Ubuntu 14.03 / LXDE / VNC, which works fine since I can connect to the VNC server from outside.
Now I am trying to understand Docker networking, but it seems I am completely lost. Since I already had to forward the port for VNC, does that mean no further ports can be forwarded?
Assuming I have an application running under Wine which requires several port ranges, how do I achieve that? Does it mean I would need to create a further container running the Wine application on top of the base image?
You can specify the -p option as often as you want:
e.g. -p 8080-8085:8080-8085 -p 1234:1234 -p 9000-9005:9000-9005
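So a single container can publish the VNC port plus all the ranges the Wine application needs, with no second container required. A sketch, assuming the VNC server listens on 5901 and my-wine-vnc-image is your existing image:
docker run -d -p 5901:5901 -p 8080-8085:8080-8085 -p 1234:1234 -p 9000-9005:9000-9005 my-wine-vnc-image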
I have two Docker containers which run on the same host (a CentOS 6 server).
container 1 >> my web application (ports mapped to some random port of the host)
container 2 >> Python Selenium test scripts (runs headless Firefox)
My test cases fail saying "problem loading page".
Basically the issue is that the second container, or any other container residing on the same host, is not able to access my web application.
But my web app is accessible to the outside world.
I linked both containers and I am still facing the problem.
I tried replicating the same setup on my laptop (Ubuntu) and it's working fine!
Any help appreciated!
Thanks in advance
I think order matters when linking containers. You should start container 1 (the web application) first, and then link container 2 to it.
You need to change your Selenium scripts to use the Docker link alias as the hostname.
For example if you did:
$ sudo docker run -d --name webapp my/webapp
$ sudo docker run -d -P --name selenium --link webapp:webapp my/selenium
then your selenium scripts should point to http://webapp/
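To confirm the alias is visible from the selenium container before touching the scripts, a quick check is to look at its hosts file; the legacy --link writes a webapp entry there:
$ sudo docker exec selenium cat /etc/hosts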
I had this problem on Fedora (22), for some containers (not all). Upon inspection, it turned out there is a special DOCKER chain in iptables that can drop some connections. Appending an accept rule to that chain made things work:
sudo iptables -A DOCKER -p tcp -j ACCEPT
(While searching for the problem before hitting this question, I found suggestions that this also occurs on CentOS and RHEL.)
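To see what is already in that chain before appending the rule, something like this lists the existing rules with their packet counters:
sudo iptables -L DOCKER -n -v --line-numbers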
Yes, the order of container launch does matter, but I am launching my web application container through Jenkins.
Jenkins is configured in container 2.
So I cannot launch my web application (container 1) manually.
Is there any other solution for this, something like bidirectional linking?