Can Consul be run inside a Docker container using Docker for Windows?

I am trying to make Consul work inside a Docker container, but using Docker for Windows and Linux containers. I am using the official Consul Docker image. The documentation states that the container must use --net=host for Consul's consensus and gossip protocols.
The problem is, as far as I can tell, that Docker for Windows uses a Linux VM under the hood, and the "host" of the container is not the actual host machine, but that VM. I could not find a combination of -bind, -client and -advertise parameters (IP addresses), so that:
Other Consul agents on other hosts can connect to the local agent using the host machine's IP address.
Other containerized services on the same host can query the local agent's REST interface.
Whenever I pass the host machine's IP address on the LAN through -advertise, I get these errors inside the container:
2018/04/03 15:15:55 [WARN] consul: error getting server health from "linuxkit-00155d02430b": rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->10.241.2.67:8300: connect: invalid argument
2018/04/03 15:15:56 [WARN] consul: error getting server health from "linuxkit-00155d02430b": context deadline exceeded
Also, other agents on other hosts cannot connect to that agent.
Using -bind with that address fails; my guess is that, since the container is inside the Linux VM, the host machine's address is not an address of the container's host (the VM) and therefore cannot be bound.
I have tried various combinations of -bind, -client and -advertise, using addresses like 0.0.0.0, 127.0.0.1, 10.0.75.2 (the address on the Docker virtual switch) and the host machine's IP, but to no avail.
I am now wondering whether this is achievable at all. I have been trying this for quite some time, and I am despairing. Any advice would be appreciated!
I have tried the whole process without using --net=host, and everything works fine. I can connect agents across hosts, and I can query the local agent's REST interface from other containerized applications... Is --net=host really crucial to the functioning of Consul?
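For what it's worth, a rough sketch of that port-publishing approach (without --net=host) might look like the following; the single-node flags are assumptions for illustration only, and 10.241.2.67 is the host's LAN address taken from the log above:
# Publish Consul's ports on the Windows host instead of using --net=host.
# 8500 = HTTP API, 8600 = DNS, 8300 = server RPC, 8301/8302 = LAN/WAN gossip.
docker run -d --name consul-server \
  -p 8500:8500 -p 8600:8600/udp \
  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
  -p 8302:8302 -p 8302:8302/udp \
  consul agent -server -bootstrap-expect=1 \
  -client=0.0.0.0 -bind=0.0.0.0 -advertise=10.241.2.67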

Related

Sharing VirtualBox VM and Docker Container network

I have a headless server running VirtualBox. It runs multiple virtual machines. One of them is a web proxy that redirects external access to the right VM based on the subdomain. The VMs communicate with each other over an internal network (intnet).
I would like to add some Docker containers to this configuration. How can I create a network shared between my Docker containers and this proxy VM?
I tried to create a bridge network with docker network create my_net and then connected the VM via an additional network card in 'bridged' mode.
With this configuration, ping works but the actual connection does not: it is not possible to display the web page in a browser.
Am I missing some configuration here? Also, is it good practice to connect a VM to a Docker network?
Run the containers on one of the VMs. Use a totally normal Docker setup here: create a network for inter-container communication but don't configure it, and completely ignore the container-private network details and IP addresses.
When you use the docker run -p option, that will publish a container's port on its VM's network interface(s). From that point, other VMs can call the published port using that VM's IP address, just as if it were a non-container process running on the VM. Conversely, containers should be able to make outbound calls to the other VMs without special setup.
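A rough sketch of that arrangement (app_net, web and the nginx image are placeholder names, not from the question):
# On the VM that hosts the containers: a user-defined network for
# container-to-container traffic, plus a published port for everything else.
docker network create app_net
docker run -d --name web --net app_net -p 8080:80 nginx
# The proxy VM (and any other VM) can now reach this service at <container-host-VM-IP>:8080,
# exactly as if it were a non-container process listening on that VM.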

docker can connect to localhost of host machine, but not to a local ip of other machine in host network

I have two EC2 instances; they can talk to each other normally through curl.
The EC2 instance named dolphin has a Docker container on it, and all security groups and firewall ports are set up correctly.
To connect from the container to localhost of dolphin, I use "host.docker.internal" instead of "localhost", because localhost refers to the Docker container itself.
My question: how can I make the container talk not only to localhost of its host, but also connect to the other machine at IP 172.30.2.194?
--network host is not working, because I have another container linked... this is the error:
/usr/bin/docker-current: Error response from daemon: Conflicting options: host type networking can't be used with links. This would result in undefined behavior.
Docker containers internally use the 172.x.x.x IP range. You can't connect to the other EC2 instance because the IP ranges clash, and the network stack routes the packets within the Docker network instead of out to the "external" VPC.
A solution would be to change the address range of the VPC which holds your EC2 machines. You could use 192.168.x.x or 10.x.x.x.
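If you want to confirm the overlap, something along these lines should list each Docker network with its subnet (the format string is an assumption on my part, adjust as needed):
# Print every Docker network together with the subnet(s) it uses,
# so you can compare them against the VPC's 172.30.x.x range.
docker network ls -q | xargs docker network inspect \
  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'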

docker-compose networking and publishing ports

I'm trying to better understand docker networking, but I'm confused by the following:
I spin up 2 containers via docker-compose (client, api). When I do this, a new network is created, myapp_default, and each container joins this network. The network is a bridge network, and it's at 172.18.0.1. The client is at 172.18.0.2 and the api is at 172.18.0.3.
I can now access the client at 172.18.0.2:8080 and the api at 172.18.0.3:3000 -- this makes total sense. I'm confused when I publish ports in docker-compose: 8080:8080 on the client, and 3000:3000 on the api.
Now I can access the containers from:
Client at 172.18.0.1:8080, 172.18.0.2:8080, and on the docker0 network at 172.17.0.1:8080
API at 172.18.0.1:3000, 172.18.0.3:3000, and on the docker0 network at 172.17.0.1:3000
1) Why can I access the client and api via the docker0 network when I publish ports?
2) Why can I connect to containers via 172.17.0.1 and 172.18.0.1 at all?
You can only access the container-private IP addresses because you're on the same native-Linux host as the Docker daemon. This doesn't work in any other environment (different hosts, MacOS or Windows hosts, environments like Docker Toolbox where Docker is in a VM) and even using docker inspect to find these IP addresses usually isn't a best practice.
When you publish ports they are accessible on the host at those ports. This does work in every environment (in Docker Toolbox "the host" is the VM) and is the recommended way to access your containers from outside Docker space. Unless you bind to a specific address, the containers are accessible on every host interface and every host IP address; that includes the artificial 172.17.0.1 etc. that get created with Docker bridge networks.
Publishing ports is in addition to the other networking-related setup Docker does; it doesn't prevent you from reaching the containers by other paths.
If you haven't yet, you should also read Networking in Compose in the Docker documentation. Whether you publish ports or not, you can use the names in the docker-compose.yml file, like client and api, as host names, connecting to the (unmapped) port the actual server process is listening on. Between this functionality and what you get from publishing ports, you never actually need to know the container-private IP addresses directly.
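For example (the service names client and api come from the question; the /health path and the presence of curl in the images are assumptions):
# Container-to-container: use the Compose service name and the real listening port.
docker-compose exec client curl http://api:3000/health
# Host-to-container: use the published port on the host.
curl http://localhost:3000/health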

Access Dask scheduler from an external docker container

I have started a dask-scheduler on host A. Host A has Docker Engine installed, so it has multiple network interfaces:
192.168.10.250 (default IP for host A)
172.17.0.1 (host A IP address in bridge network (i.e., docker0))
I tested a simple client, from within host A, against both IP addresses, and it works well.
Now, I started a Docker container on the same host A without specifying any networks, so the container connects to the default bridge network and receives the IP address 172.17.0.2. Within the container, I try to start a client that connects to the Dask scheduler on host A as follows:
from dask.distributed import Client
client = Client('172.17.0.1:8786')
but each time I receive the following error:
IOError: Timed out trying to connect to 'tcp://172.17.0.1:8786' after 10 s: connect() didn't finish in time
I tried to change the network driver for the container to "host" instead of "bridge", but then I receive the following error:
distributed.comm.core.CommClosedError: in : Stream is closed
please help
Regards
Thanks guys. Problem solved.
I realized the problem was that Python 2.7 was being used inside the Docker image. When I used Python 3.6, it worked (even without --net host).
Regards
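For reference, a sketch of the working setup under those constraints; the scheduler option and port shown are the documented defaults, and the exact commands are an assumption rather than what the author actually ran:
# On host A: listen on all interfaces so the scheduler is reachable on
# 192.168.10.250 as well as on docker0's 172.17.0.1 (default port 8786).
dask-scheduler --host 0.0.0.0
# Inside a Python 3 container on the default bridge network:
python3 -c "from dask.distributed import Client; print(Client('172.17.0.1:8786'))"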

Cross container communication with Docker

An application server is running in one Docker container and a database is running in another container. The IP address of the database container is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the address of the host machine as the destination IP in the container that needs it.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am part of the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your db is a container on docker server 2? If so, you treat it like ordinary remote hosts. Your DB port needs to be exposed on docker server 2 and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It is perhaps possible to make these addresses routable, but it would be a lot of pain, and it's further complicated by the fact that container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in the Dockerfile and -p in your docker run). Then you just do host-to-host communication. You can resolve hosts by IP, environment variables, or good old DNS.
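A minimal sketch of that host-to-host pattern (the mysql image and the DB_HOST/DB_PORT variable names are placeholders; use whatever your application server actually reads):
# On the database host: publish the database port on the host's interfaces.
docker run -d --name db -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql
# On the application host: point the app at the database host's IP, not a container IP.
docker run -d --name app -e DB_HOST=<db-host-ip> -e DB_PORT=3306 my-app-image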
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was removed using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
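As a hypothetical illustration of the first fix (the jboss/wildfly image and the standalone.sh path are assumptions, not taken from the post):
# Start WildFly bound to all interfaces so other containers and hosts can reach it.
docker run -d -p 8080:8080 jboss/wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0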

Resources