Docker's embedded DNS on the default bridge network

This question is probably addressed to all Docker gurus, but let me give some background first. I faced DNS resolution problems (on Docker's default "bridge" network) until I read the following in the documentation at https://docs.docker.com/engine/userguide/networking/
The docker network inspect command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option.
As the --link option is deprecated (and, in my case, made any docker run command hang and finally crashed the local Docker daemon process), I tried using a different user-defined bridge network and pinned dummy instances to it.
docker network create -d bridge --subnet=172.15.0.0/16 \
  --gateway=172.15.0.1 \
  -o com.docker.network.bridge.default_bridge=false \
  -o com.docker.network.bridge.enable_icc=true \
  -o com.docker.network.bridge.enable_ip_masquerade=true \
  -o com.docker.network.driver.mtu=1500 \
  -o com.docker.network.bridge.name=docker1 \
  -o com.docker.network.bridge.host_binding_ipv4=0.0.0.0 a
docker run --name db1 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker run --name db2 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker network connect --ip 172.15.0.40 a db1
docker network connect --ip 172.15.0.41 a db2
Now resolution of container names assigned via --name works fine (verified with ping), but here is the question:
Why is service/container name resolution not possible on the default bridge network?
It would be great if a Docker network guru could give a hint. Regards.

Why is service/container name resolution not possible on the default bridge network?
There's no technical reason this would not be possible; it was a decision to keep backward compatibility.
The default ("bridge") network never supported service discovery through a built-in DNS, and when the feature was under development, maintainers of some projects raised concerns that they did not want this added to the default network, as it would block alternative implementations.
In addition, custom networks are designed to explicitly allow containers to communicate. On the default network, this is achieved by disabling inter-container communication (--icc=false) and using --link to establish a link between containers. Having automatic discovery for any container connected to the default network would make this a lot more complicated to use.
So: create a custom network, and attach containers to that network if they should be able to communicate with each other.
Note that in many cases, not all of the options you specified are needed; simply running docker network create foo should work for most use cases.
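For example, a minimal sketch (the network name and the ping test are arbitrary; the images are the ones from your question):
docker network create foo
docker run -d --name db1 --network foo -e MYSQL_ROOT_PASSWORD=a mysql:5.7.16
docker run -d --name db2 --network foo -e MYSQL_ROOT_PASSWORD=a mysql:5.7.16
# name resolution is handled by the embedded DNS server on user-defined networks
docker run --rm --network foo busybox ping -c 1 db1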

Related

How does Docker handle communication between different containers on the default bridge on the same host?

Here is my situation:
First, I ran a MySQL container (IP: 172.17.0.2) on CentOS.
Then I ran a Nacos container on the same host, with the MySQL container above as its specified datasource, but I didn't use the IP of the MySQL container; instead I used the IP of the bridge gateway (172.17.0.1) (both containers are attached to the default bridge).
What surprised me was that Nacos works well; it can query config data from the MySQL container normally.
How did this happen? I have read some documentation but didn't get the answer. It really confused me.
On modern Docker installations, try to avoid using the default bridge network. docker network create a network (it doesn't need any special options, but it does need to be created) and then launch your containers on --net that network. If you're using Compose, it creates a user-defined bridge network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, inside a container (on the default bridge network), it sees that same IP address as the normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
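As a minimal sketch of that recommended setup (the image tags, the Nacos image name, and how exactly you configure the datasource are assumptions, not taken from your setup):
docker network create nacos-net
docker run -d --name mysql --network nacos-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# configure Nacos to use host "mysql", port 3306, as its datasource, instead of 172.17.0.1
docker run -d --name nacos --network nacos-net nacos/nacos-server
Then Nacos reaches the database at mysql:3306 by name, with no dependence on published ports or the docker0 gateway address.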

How can I connect to a VPN in docker not using VPN images?

Good morning!
I'm using Check Point Mobile to connect to my client's VPN, and I have two containers in Docker, mysql and karaf, both sharing the network I created using the command docker network create --subnet=vpnAddress mynet
I passed --network=mynet when running the containers.
Up to here it's all OK: I can connect via PuTTY/SSH to karaf, install the KAR, and all bundles are OK.
But when calling the services I realized that the container is not connected to the VPN, even though I created a network with the VPN address. I need to be connected to the VPN in order to call the services.
I'm connected externally (outside Docker) to the VPN using Check Point Mobile, but I need Docker to connect to the VPN as well.
I'm using Windows 10 (running Docker with Linux containers). I tried to go to C:\ProgramData\DockerDesktop\tmp-d4w, edit the host.docker.internal file, and change the IP to my VPN address, but nothing works.
I searched a lot, and I saw people talking about Docker VPN images such as NordVPN or OpenVPN, but I can't use those.
I have been told I need to add the VPN network to Docker, but I'm green at networking, I don't know how to do it, and what I tried didn't work.
Hope you can help me. Thanks!
Edit: in the Docker engine config I added "bip": "vpnAddress/24".
The bridge network now uses the VPN address, and I tried --network=bridge on both the karaf and mysql containers, but then karaf can't connect to mysql. If I instead use docker network create mynet and run the two containers on that network, they can talk to each other, but there is still no luck with the VPN this way.
I haven't used Docker on Windows, but a quick look at some VPN containers shows that, on *nix at least, they use --device /dev/net/tun --cap-add=NET_ADMIN to expose the VPN "device" to the container. Other containers then use Docker networking or links to connect to this VPN container, so looking at how the VPN containers do it might be helpful.
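On Linux that pattern usually looks something like this (a sketch only; some-vpn-image and my-karaf-image are placeholders, and I realize you said VPN images aren't an option):
docker run -d --name vpn --device /dev/net/tun --cap-add=NET_ADMIN some-vpn-image
# share the VPN container's network namespace, so karaf's traffic goes through the tunnel
docker run -d --name karaf --network container:vpn my-karaf-image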
One suggestion for Mac seems to be using extra_hosts like so:
extra_hosts:
- "vpn.company.com:172.21.1.1"
You might be able to hack it with something like that (or by physically adding 172.21.1.1 vpn.company.com to /etc/hosts in the container). Also check for IP address conflicts between the Docker daemon and your host machine.
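For the hosts-file variant, docker run has an equivalent of extra_hosts (same example IP and hostname as above; my-image is a placeholder):
docker run --add-host vpn.company.com:172.21.1.1 my-image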
Windows docs seem to suggest they don't support network interfaces as "devices", so you probably need to either create a very specific Docker network or modify host networking settings, starting with getting the Docker daemon to recognize the VPN network.
See the Configure Advanced Networking section for some examples. I'd try creating a network associated with the VPN device first, then look into flags like --subnet and --gateway.
docker network create -d transparent \
-o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet2
And this creates a network with a particular subnet and gateway, then runs a container with a statically-assigned IP on that network:
C:\> docker network create -d transparent \
--subnet=10.123.174.0/23 \
--gateway=10.123.174.1 MyTransparentNet
C:\> docker run -it --network=MyTransparentNet \
--ip=10.123.174.105 windowsservercore cmd
Good luck!

Multiple Docker host machine communication

Suppose I want to connect a container to another container, where the two containers are running on different machines. How do I do that? Hopefully the attached picture will help explain what I need. Thanks.
This works exactly the same way as if neither process was running in Docker: connect to the other system's IP address and the port you published when you launched the container.
machine02$ docker run --name m2-c1 -p 12345:80 image1
machine01$ docker run --name m1-c5 \
> -e CONTAINER_1_URL=http://192.168.1.102:12345 \
> image5
If you find yourself doing this often, a clustered setup like Kubernetes or Docker Swarm is built for this sort of environment. They have a piece called an overlay network that would allow all 10 containers to share a single "network", so you can directly call c1 as a host name and reach either copy of it. A non-Docker service discovery system, like Hashicorp's Consul, can also help remember what service is running on which node.
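As a sketch of the Swarm variant (the addresses and the join token are placeholders), an attachable overlay network lets containers on different hosts resolve each other by name:
machine01$ docker swarm init
machine02$ docker swarm join --token <token> 192.168.1.101:2377
machine01$ docker network create -d overlay --attachable mynet
machine02$ docker run -d --name c1 --network mynet image1
machine01$ docker run -d --network mynet image5   # can now reach c1 directly, e.g. http://c1/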

Understanding why ports need to be exposed for inter container communication on docker0

I was going through docker official docs to understand the difference between user-defined and default bridge. Link to specific page - https://docs.docker.com/network/bridge/
In first point of section "Differences between user-defined bridges and the default bridge", it is stated that
If you run the same application stack on the default bridge network,
you need to open both the web port and the database port, using the -p
or --publish flag for each.
I don't understand this specific text: why is it necessary to explicitly publish (-p) the required port of the database container when it will be used only by some other container connected to the same bridge?
My existing understanding is that, unless explicitly blocked, containers connected to the docker0 can freely communicate with each other.
So this extract has confused me. Can somebody help?
If you take away one thing from that page, it's that you should always docker network create a network and then docker run --net containers on that network, if you're using plain Docker commands. (Docker Compose does this automatically for you; Kubernetes's networking model is fundamentally different.)
If you docker run a container without a --net option then you wind up using a backwards-compatibility networking mode (the "default bridge network" from the page you cite). In this mode, containers cannot communicate with each other by default. Your two options are for the server to publish a port (docker run -p) and the client to connect to the published port on the host, or for the server to expose a port (almost always done with an EXPOSE directive in the Dockerfile) and the client to --link to it.
There's no real reason to be using this "default" mode at this point, and in practice the paragraph you cite shouldn't matter except for fairly old scripted Docker setups.
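To make the contrast concrete (a sketch; the image names are placeholders): on a user-defined network the database publishes nothing, and only the web tier needs -p, for traffic arriving from outside Docker:
docker network create mynet
docker run -d --net mynet --name db mydb-image
docker run -d --net mynet -p 8080:8080 --name web myweb-image
# web reaches the database at db:<port> directly; no -p on the db container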

Link Docker containers running on different hosts?

I'm starting out with container-based architectures in Docker, and I have a doubt that is maybe nonsense.
Does it make sense to link Docker containers that are running on different hosts?
Say we have two containers:
barDatabase
fooService
If both are on the same host, we would link barDatabase to fooService, thereby giving fooService a hostname with which to reach the database.
But if they are running on different machines:
barDatabase -> machine1.company.local
fooService -> machine2.company.local
Would it still be necessary to link them? Couldn't we use the original hostname without linking them?
Thanks.
Yes and no. Newer versions of Docker support multi-host networking via docker network; this requires a bit of extra configuration, for example an etcd instance to manage the network state.
In doing so, you can then:
docker network create somenetname
docker run -d --net somenetname --name barDatabase yourimage
And on your other host:
docker run -d -p 8080:8080 --net somenetname --name fooService service_image
You'll then be able to ping barDatabase from fooService as if it were a hostname. And fooService will attach to the external network and act as a gateway.
This works on my 1.9.1 Docker and not on my 1.8.2, on CentOS. (So I would assume it's a 1.9+ feature, but I can't find a direct source.)
More detail:
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
Requires a bit more faff to set up though, because you do have to configure etcd (or another key value store)
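Roughly, for the pre-Swarm-mode overlay driver of that era (the etcd address below is a placeholder), each host's daemon is pointed at the key-value store, after which the overlay network is created once and is usable from every host:
docker daemon --cluster-store=etcd://192.168.1.100:2379 --cluster-advertise=eth0:2376
docker network create -d overlay somenetname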
I've been using this to put a multi-node elasticsearch instance on a private network, which I would assume is similar to your use case. (3 es nodes on 3 hosts, with logstash feeding in, and kibana acting as a gateway, along with an nginx admin proxy that does some security/rewrite)
In this case you'd have to publish the ports you want to access on the database container on machine1, and then on machine2 you'd just point at machine1's published port, as you expected. There's no need (and AFAIK no way) to directly link containers across different machines.
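In command form (a sketch; the image names, port, and environment variable are illustrative):
machine1$ docker run -d --name barDatabase -p 5432:5432 bar-db-image
machine2$ docker run -d --name fooService \
> -e DATABASE_URL=machine1.company.local:5432 \
> foo-service-image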
