Why does Docker prevent attaching a container to both host and user-defined bridge networks?

Why is it that Docker prohibits attaching a container to both the host network and a user-defined bridge network?
Secondly, for deployments that require disabling IP forwarding on the host machine, does Docker recommend deploying containers with host networking only? Based on what I understand, that seems to be the only option left.
Any insights on the above two?
Thanks

Why is it that Docker prohibits attaching a container to both the host network and a user-defined bridge network?
Because there's no way to "attach" networks when a container is running in the host network namespace.
Docker attaches networks by adding virtual interfaces to a container's isolated network namespace. When running in the global network namespace, there's no sane way to do this: any new interfaces wouldn't be restricted to the container, and would potentially disrupt host networking.
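For illustration, here's a minimal sketch of the failure (the container and network names are made up, and the exact error text varies by Docker version):

    docker network create mynet
    docker run -d --name web --network host nginx
    # trying to attach a user-defined bridge to a host-networked container
    docker network connect mynet web
    # fails with an error like: "container sharing network namespace with
    # another container or host cannot be connected to any other network"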
Secondly, for deployments that require disabling IP forwarding on the host machine, does Docker recommend deploying containers with host networking only? Based on what I understand, that seems to be the only option left.
That's probably the only easy option.
You could run a proxy service on the host to expose services in Docker containers. You could potentially even automate that by monitoring Docker for events and reading information about published ports. Otherwise you would need to implement the appropriate configuration manually.
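As a minimal sketch of such a host-side proxy, using socat to forward a host port to a container's bridge IP (myapp and both port numbers are hypothetical):

    # look up the container's IP on the default bridge
    CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' myapp)
    # forward host port 8080 to the container's port 80
    socat TCP-LISTEN:8080,fork,reuseaddr TCP:"$CONTAINER_IP":80

Automating this could start from watching docker events --filter event=start and re-running the lookup whenever a container comes up.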

Related

What is the practical use case for --net=host argument in docker?

When running a container we can specify --net=host to enable host networking, which lets the container share the host's networking namespace. But what is the practical use case for this?
I've found it useful in two situations:
You have a server process that listens on a very large number of ports, or does not use a consistent port, so the docker run -p option is impractical or impossible.
You have a process that needs to examine or manage the host network environment. (For example, its wire protocol somehow depends on sending the host's IP address, or it's a service-discovery system and you want it to advertise both Docker and non-Docker services running on the host.)
Host networking disables one of Docker's important isolation systems. If you run a container with host networking, you can't use features like port remapping and you can't accept inbound connections from other containers using the container name as a host name. In both of these cases, running the server outside Docker might be more appropriate.
In SO questions I frequently see --net host suggested as a hack to get around programs that have 127.0.0.1 hard-coded as the location of a database or another external resource. This usually isn't necessary; adding a layer of configuration (environment variables work well) and using the standard Docker networking setup is better practice.
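A sketch of what that looks like instead of --net host (appnet, db, myapp, and DATABASE_HOST are all hypothetical names):

    # create a user-defined bridge network; containers on it can reach
    # each other by container name
    docker network create appnet
    docker run -d --net appnet --name db postgres
    # the application reads its database location from an environment
    # variable instead of a hard-coded 127.0.0.1
    docker run -d --net appnet -e DATABASE_HOST=db myapp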

Figuring out the IP address of a service for dockerized Consul

I am building a microservices based application and would like to use Consul as service registry. All in all I have three scenarios:
All the services run on the host.
All the services run on the host, but Consul runs in Docker.
All the services and Consul run in Docker.
Now I have the problem of how to register the services with an IP address that is reachable by Consul (e.g., for the health checks):
If everything runs on the same host, it's pretty easy: Simply use 127.0.0.1, and you're done.
If everything (including Consul) runs in Docker, I could use hostname -i from within the Docker containers to figure out their external IP and hand it over to Consul. This works, but I wonder if there is a better way to solve this? (Ideally, the solution should also work in the same way on Kubernetes.)
If the services run on the host, but Consul runs in Docker, right now I have no idea at all. Basically, Consul requires the host's IP address to be able to talk to the services, but I can only detect this from within the Consul container (by resolving host.docker.internal). First, this does not work from outside the container, and second, it only works on Docker for Mac / Windows, not e.g. with Kubernetes.
How could I solve these issues?
PS: I would like to avoid using a container such as registrator by Gliderlabs, since I have doubts how well this works on Kubernetes, and also it won't help with the mixed Docker / host scenario.
If you're using Kubernetes, you might start by checking whether its built-in service registry meets your needs. There's generally not a direct path to reach a pod via its node's IP address, so the setup you describe won't really work well. (I might consider Consul for a key/value store, but I wouldn't reach for it as a service registry in Kubernetes land.)
In plain multi-host Docker land, this is one of the few situations I've found where host networking is appropriate. Start Consul with --net host or an equivalent option in Docker Compose or another orchestration tool. Then Consul will believe "its" IP address is the host's, and if you have automated TCP probes of well-known ports, you can scan every service that's running on the host and discover, e.g., a MySQL service on port 3306, whether it's running in a container or natively on the host.
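A minimal sketch of that setup (the agent flags are standard Consul flags; the image name may be hashicorp/consul on newer setups, and -bind uses Consul's go-sockaddr template syntax):

    # run a single-server Consul agent in the host's network namespace so
    # it advertises the host's IP rather than a bridge IP
    docker run -d --name consul --net host \
      consul agent -server -bootstrap-expect 1 \
      -bind '{{ GetPrivateIP }}' -client 0.0.0.0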
With this setup, servicename.service.consul will resolve to some physical-host IP address. If a Docker container points at its current host for DNS service, a lookup will route a service to some host, maybe the same one; this has worked reliably for me in the past.
Note that the relevant hostnames will be different in different environments: servicename.service.consul for a Consul-based setup, servicename.namespacename.svc.cluster.local in Kubernetes, maybe localhost in a developer-desktop environment. You need to make sure this is configurable, most straightforwardly via an environment variable.

Keycloak docker containers are unable to discover each others

I have two instances of Keycloak, each running in a container on its own node.
The nodes are bare-metal machines inside my company network.
Keycloak uses TCPPING as its discovery protocol.
Since the two containers run on different nodes, and each instance is pinging inside Docker's default network, they are not able to find each other.
I say Docker's default network because I didn't specify a special network for the two containers.
Any idea how I can make the two instances discover each other in this architecture?
I was also thinking about Docker Swarm as a solution.
Assuming the two nodes are on the same network and can connect to each other, you can get the two containers to discover each other using Docker host networking.
It would be as easy as docker run --net=host.
Docker host networking makes the container use the networking stack of the host node; the container shares the host's IP address and, for all practical purposes, looks like just another host in that network.
This allows the two containers to discover each other using TCPPING.
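A sketch of that, assuming the legacy jboss/keycloak image and its JGROUPS_DISCOVERY_* environment variables (node1/node2 and the port are hypothetical; 7600 is the usual JGroups TCP port in the standalone-ha config):

    # run on each node; host networking lets TCPPING address the peers
    # by the nodes' real IPs
    docker run -d --net host \
      -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
      -e JGROUPS_DISCOVERY_PROPERTIES='initial_hosts="node1[7600],node2[7600]"' \
      jboss/keycloak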
Docker Swarm would also enable this. Swarm basically abstracts multiple host nodes so that you can run containers on them as if you were running Docker on a single host. But that would require docker-machine and a whole new setup.

How to kill networking to a docker container?

What approach can I use to kill networking to a docker container (ie: make it unreachable from the host OS)? A typical approach for a non-container would be to alter iptables, but for Docker I'm not sure how to go about this.
It's mostly this way by default. If you don't expose any ports and don't run network services in the OS (usually you just run your application), there's nothing to reach in the container.
You might clarify precisely what you mean by "reachable". Reachable from where, and for what purpose? If you don't expose any ports, your container is not reachable from any other host. Your container may still be "reachable" from other containers within the Docker network on the host, so if your concern is other containers on the same Docker host, Docker provides the --icc=false daemon flag to disable inter-container communication, which is enabled by default. More details are in the Docker docs.
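If you do need to cut off host-to-container traffic explicitly, a sketch with iptables (mycontainer is a hypothetical name; note that host-originated traffic to the bridge goes through the OUTPUT chain, not FORWARD):

    # find the container's IP on the default bridge
    CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycontainer)
    # drop packets from the host OS to that container
    sudo iptables -I OUTPUT -d "$CONTAINER_IP" -j DROP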

Cross container communication with Docker

An application server is running in one Docker container and a database is running in another container. The IP address of the database container is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between them. If you don't need a private network, just expose the ports using the -p switch and configure the address of the host machine as the destination IP in the required Docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
Disclosure: I am on the Weave engineering team.
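A sketch with the classic Weave Net CLI (the peer IP is hypothetical):

    # on host 1
    weave launch
    # on host 2, peering with host 1
    weave launch 192.168.1.10
    # point the docker CLI at Weave's proxy so new containers join the overlay
    eval $(weave env)
    docker run -d --name db mysql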
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your db is a container on docker server 2? If so, you treat them like ordinary remote hosts. Your DB port needs to be exposed on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It's perhaps possible to make those addresses routable, but it would be a lot of pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in your Dockerfile and -p in your docker run), then communicate host-to-host. You can resolve hosts by IP, environment variables, or good old DNS.
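A sketch of that host-to-host approach (the hostname, ports, and the DB_HOST / DB_PORT variables are hypothetical):

    # on docker server 2: publish the database port on the host
    docker run -d -p 3306:3306 --name db mysql
    # on docker server 1: point the application at docker server 2's address
    docker run -d -e DB_HOST=dockerserver2.example.com -e DB_PORT=3306 myapp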
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was only accepting requests on localhost. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was resolved using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
