How does Docker manage containers' IP addresses?

When containers are created inside a user-defined bridge network without specifying an IP address, they are given addresses starting from the beginning of the network's IP range. When a container goes down, its IP address becomes available again and can later be used by another container. Docker also detects duplicate IPs and raises errors when invalid addresses are supplied. As far as my research goes, the Docker daemon does not depend on any DHCP service. So how does Docker actually figure out which IP addresses are in use or available for a new container? Furthermore, how can a Docker network plugin (such as docker-go-plugin) do the same thing?
I think one of the keywords here is IPAM, but I don't know anything beyond that. I'd appreciate any information that points me in the right direction.

Docker runs as a service. Whenever you start a container, the client asks the Docker service to do all the necessary work. The IP addresses are defined whenever you create a Docker network, and Docker creates networks itself if you don't. From what I've seen, it uses addresses in the 172.16.0.0 – 172.31.255.255 range, which are all private IP addresses; by default, user-defined networks seem to start around 172.19.0.0. You can also create your own networks with whatever IP range you'd like, then add containers to that network, and the next available IP will be used. Whenever you kill a container, its IP address becomes available again so the Docker service can return it to that pool.
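If you want to control the range yourself, a minimal sketch looks like this (the network name, subnet, and images are placeholders, and the exact addresses handed out may differ on your machine):

docker network create --driver bridge --subnet 10.10.0.0/24 --gateway 10.10.0.1 mynet
docker run -d --network mynet --name web1 nginx                 # typically receives 10.10.0.2
docker run -d --network mynet --name web2 nginx                 # typically receives 10.10.0.3
docker network inspect -f '{{json .Containers}}' mynet          # lists the addresses currently in use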
This Docker documentation says that you can think of this mechanism as similar to a DHCP server, except that the Docker service takes care of the assignments itself.
I do not know how it's implemented. Probably a list, although they could be using a bitmap. For a /16 network with 65,536 IPs, such a map only needs 65,536 bits / 8 = 8 KB, so it's very small; each bit then tells you whether the corresponding IP is in use or not. However, if they have to support IPv6, such a map would not be practical - it would be far too large. They could also check the list of existing containers and assign the smallest IP that is not currently in use.
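Whatever the internal data structure is, you can observe the reuse behaviour from the question with a few throwaway containers (this assumes the mynet network from the sketch above; the exact address chosen for the new container may vary by Docker version):

docker run -d --network mynet --name a alpine sleep 600      # e.g. 10.10.0.4
docker run -d --network mynet --name b alpine sleep 600      # e.g. 10.10.0.5
docker rm -f a                                               # a's address is released back to the pool
docker run -d --network mynet --name c alpine sleep 600      # may be handed the freed address
docker inspect -f '{{.NetworkSettings.Networks.mynet.IPAddress}}' b c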

Related

Why do I see 172.22.0.3 in docker networking examples?

This page and this answer both reference the IP address 172.22.0.3. By RFC 1918, that is within the private networking range 172.16.0.0-172.31.255.255. It is also in my own code (running in docker), but I've forgotten why.
Is it a Docker default? Can you find a reference?
As an implementation detail, Docker allocates part of that IP address range for each Docker network that gets created. Within that address range, typically the .1 address is the host and then addresses are allocated sequentially for each container that's attached to the network.
In the answer you link to, for example, it's very possible that Docker will assign 172.22.0.0/16 to the network listed in the docker-compose.yml file. Then 172.22.0.1 would be the host, 172.22.0.2 would be the first container in the docker-compose.yml, and 172.22.0.3 would be the second (db). When there's an error, say, connecting to db:27017, the resolved address might print out in the error message.
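To see which address a service name actually resolves to inside a Compose network, you can look it up from another container; the service names web and db below are only examples, and this assumes the image ships a lookup tool such as getent:

docker-compose exec web getent hosts db      # prints something like: 172.22.0.3   db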
If a Docker-internal IP address is hard-coded in your application somewhere that's probably a mistake. There's no guarantee that the same address or even the same network will be in use if you restart your container somewhere else. These addresses are also unreachable from other hosts; from outside the VM, if Docker is running in a VM; and even from non-container processes on the same host, except on native Linux systems.
Networking in Compose in the Docker documentation is a good practical reference. Note that it doesn't describe this IP allocation at all, since it's mostly an implementation detail. The examples in Networking with standalone containers show low-level diagnostic commands that dump out Docker network details; while you rarely need these in day-to-day use, they do show Docker allocating /16 networks in that 172.{16...31} reserved range.
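To reproduce that on your own machine (the output naturally varies per host), list the networks and print the subnet Docker's IPAM has assigned to the one you care about:

docker network ls
docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}' bridge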

Read host's ifconfig in the running Docker container

I would like to read the host's ifconfig output while a Docker container is running, so that I can parse it, get the OpenVPN interface (tap0) IP address, and process it within my application.
Unfortunately, propagating this value via an environment variable doesn't work for my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to pick up a new value.
My current working solution is a cron job on the host that writes the IP into a file on a shared volume, which the container then reads - but I am looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with network: host, which can see the host's interfaces - it works, but it also looks like a workaround, since it involves many steps and probably raises security issues.
Is there any valid, cleaner way to achieve my goal - reading the host's ifconfig from a Docker container in real time?
A specific design goal of Docker is that containers can't directly access the host's network configuration. The workarounds you've identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
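For completeness, here is a minimal sketch of the shared-file workaround the question already describes (the paths, interface name, and image name are illustrative):

# On the host, run this periodically (e.g. from cron) to record the tap0 address:
mkdir -p /var/run/host-net
ip -4 addr show tap0 | awk '/inet /{print $2}' > /var/run/host-net/tap0.addr

# Start the application container with the directory bind-mounted read-only;
# the application then polls /host-net/tap0.addr for changes:
docker run -d -v /var/run/host-net:/host-net:ro --name myapp myimage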

container and node IP addresses in Docker Swarm

I am going through the Docker tutorials and I'm a bit confused about why containers might have different IP addresses from the nodes running them in a swarm. My confusion is based on the diagram below, from this page in the tutorial.
The bigger green boxes are the nodes in the swarm; they each have their own IP and load balancer, and externally they're listening on port 8080. I believe the yellow boxes are containers/tasks in the my-web service. They are listening on port 80, and I guess the service is set up to map port 80 on each container to port 8080 externally.
That much I understand more or less, but I don't see why the container/task would have/need a different IP address from the node that it is running on. Can anybody explain this?
If I had to guess, I would say it's because each container is basically a VM, VMs need their own IP addresses, and no two VMs can have the same IP address, so a container cannot have the same IP as its node. But I'm not sure if that explanation is correct.
I'm still fairly new to Docker and containers myself, but my understanding is that you're looking at internal versus external IPs. The 192.168.99.100-102 addresses are externally addressable (i.e. reachable from outside the swarm), whereas the 10.0.0.1-2 addresses are for internal addressing only.
The reason for the internal addressing is so that you have a larger pool of IP addresses to work with for your containers, which is why the 10.0.0.0/8 address space is used. These containers/tasks still need to be addressable so that your load balancer can distribute the load correctly. According to the Wikipedia entry for private networks, that block gives you 16,777,216 available IPs, which lets your swarm scale to a very large number of containers if you need it, whereas you only have a limited number of external IP addresses on which your services can be reached.
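To make that concrete, here is a rough sketch of the kind of setup the diagram shows (names and replica count are illustrative, and the commands must run on a swarm manager): each task gets its own address on an overlay network, while the published port is reachable on every node's own IP.

docker network create -d overlay my-overlay
docker service create --name my-web --network my-overlay --replicas 3 -p 8080:80 nginx
docker service ps my-web                 # one task per replica, spread across the nodes
docker network inspect my-overlay        # shows the overlay subnet and the tasks attached on this node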

How to assign a specific IP for all outgoing (and incoming) traffic on a Windows Docker Container

I found this: Assign static IP to Docker container, and a few blog articles that sort of do this on Linux. However, I cannot find any documentation on this for Windows.
Basically, I want to force Docker to send all outgoing traffic (and responses) from the containers I specify through a specific IP assigned to the host computer (not the default one). This traffic needs to be routed properly by our router, so I need to control the IP address.
I also don't want incoming traffic bound to all IPs or to the default IP; I want to bind it to that specific IP address.
I cannot find any documentation (especially for Windows) on how to configure which IP a Docker container should use. Ports are obviously straightforward to map, but IPs don't appear to be.
Does anyone have any suggestions on how to force Docker (Community or EE, on Windows Server 2019 beta) to bind to a specific IP?
Update: It appears that this might be possible using docker network create; however, I can't make Docker CE with Linux containers create a transparent or l2bridge network, because it says the plugin isn't found. How does one convince it to create a transparent network with Linux containers in Docker for Windows?
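Two mechanisms are usually involved here, sketched below with placeholder addresses, names, and images. Publishing a port against one specific host address controls where incoming traffic is accepted (it does not change the source IP of outbound traffic on its own), and the transparent/l2bridge drivers come from the Windows HNS and are used with Windows containers, which would explain the "plugin isn't found" error when Linux containers are selected.

docker run -d -p 192.0.2.10:8080:80 --name web myimage                 # publish the port only on 192.0.2.10
docker network create -d transparent --subnet 192.0.2.0/24 --gateway 192.0.2.1 my-transparent
docker run -d --network my-transparent --ip 192.0.2.50 myimage         # container gets its own routable address (Windows containers)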

Difference between of intermediate_ip_address and private_ip_address in bluemix container groups

This question relates to IBM's docker container group service, which allows load balancing across multiple docker containers created using a common docker image.
After a bluemix docker container group is created, you can inspect its metadata using the cf ic inspect <container id> command.
A subsection of the output of this command reads as follows:
"Loadbalancer": {
"intermediate_ip_address": "an ip address",
"private_ip_address": "a different ip address"
},
It would seem that the intention is that at least one of these addresses can be used as the load balancer endpoint in the sense that sending requests to such an address will spread the requests on the members of the docker container group.
The specific question is, what is the distinction between these addresses? What is the intended use for each?
The private IP is the address of the load balancer within the private network subnet for that container space. That's the one to use to access the group via the load balancer from other containers in the same space; it's effectively a direct connection within the subnet.
The intermediate IP address is the translation address used for secure routing by the gorouter (by way of isolation firewalls and translation tables) to access the group. It will work from within the space, but it requires additional lookups and hops (i.e. added latency).
Found a picture: https://console.ng.bluemix.net/docs/containers/container_planning_org_ov.html
The private ip (shown in that picture in the box marked "Container Group Load Balancer") is usable within your space. The intermediate ip (not shown there) is really meant for the line between the "Private Network Gateway" box and the "Go-Router/reverse proxy" box.
