I have a server with several virtual machines running. I am starting a container with a Jira installation, and I need the container to be assigned its own address from the DHCP server rather than using the host's IP address. I am a newbie, so please explain.
The technique suggested in ad22's answer requires a custom build of the Docker engine that uses a fork of libnetwork. Now, more than four years after that hack was developed, the DHCP feature has still not been merged into the standard Docker engine, and the fork has fallen far behind the mainline code.
Since late 2019, it has been possible to assign IP addresses to Docker containers with DHCP using devplayer0's docker-net-dhcp plugin, which works with the standard Docker engine. When you create a new container, this plugin starts a Busybox udhcpc client to obtain a DHCP lease, then runs udhcpc (in a process outside the container's PID namespace) to renew the lease as needed.
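Roughly, from memory of the plugin's README, usage looks like the sketch below. The plugin tag, the --ipam-driver setting, and the bridge option name are assumptions on my part, and br0 must be an existing Linux bridge attached to your LAN, so check the project's README for the exact invocation:
# install the plugin (the tag here is an assumption; check the project's releases)
docker plugin install ghcr.io/devplayer0/docker-net-dhcp:release-linux-amd64
# create a network that obtains leases over an existing Linux bridge br0
docker network create -d ghcr.io/devplayer0/docker-net-dhcp:release-linux-amd64 \
    --ipam-driver null -o bridge=br0 my-dhcp-net
# any container started with --net=my-dhcp-net should then get its address from your LAN's DHCP server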
As noted in the other answer, the macvlan driver by itself will not let the container obtain an address from DHCP. There is experimental functionality for obtaining addresses from DHCP, created by someone associated with the Docker libnetwork project:
https://gist.github.com/nerdalert/3d2b891d41e0fa8d688c
The gist suggests compiling the changes into the Docker binary and then running:
docker network create -d macvlan \
--ipam-driver=dhcp \
-o parent=eth0 \
--ipam-opt dhcp_interface=eth0 mcv0
Since this requires re-compiling the binary, an alternative is to
assign static IP addresses to all your containers using the "--ip" option to docker run / docker compose, create a DNS entry for your hostname pointing to that IP, and make sure the address is excluded from the DHCP pool so it can never be handed out to another machine.
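A minimal sketch of that static-IP route follows. The subnet, gateway, address, network name, and image tag are placeholders for your own values, and 192.168.1.250 would need to be reserved or excluded in your DHCP server:
# macvlan network matching the LAN that the DHCP server manages
docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 lan_static
# pin the container to an address the DHCP server will never hand out
docker run -d --net=lan_static --ip=192.168.1.250 --name jira atlassian/jira-software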
You can achieve this using the docker network macvlan driver. According to the docs:
...you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network.
So essentially, the virtual network interface uses the physical network interface on the host to advertise its own virtual MAC address, which is visible to the LAN on which the DHCP server is operating, and the virtual interface is assigned an IP on that LAN's subnet. (Note that with a plain macvlan network the address comes from Docker's IPAM, out of the range you pass to docker network create, rather than being leased from the DHCP server itself.)
The steps to get it going are:
Create a docker network which uses the macvlan driver:
docker network create \
--driver macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
--opt parent=eth0 lan_net
The subnet and gateway would be those of your LAN network (on which the DHCP server resides). The parent option specifies the physical interface on the host through which you would like your virtual interface to be exposed to the LAN network.
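If you are unsure of those values, you can read them off the host itself (eth0 being the same interface used as parent above); the addresses shown in the comments are only illustrative:
ip -4 addr show eth0      # host address and prefix, e.g. 172.16.86.5/24
ip route show default     # default gateway, e.g. default via 172.16.86.1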
Run your container using the newly created network:
docker run -it --rm --net=lan_net alpine
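To see which address the container actually received, you can run a throwaway command on the same network, for example:
docker run -it --rm --net=lan_net alpine ip addr show eth0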
Related
I'm trying to containerize a program that uses the Vimba SDK to control a GigE camera, which is connected by Ethernet from a secondary network interface on the host system.
Vimba seems to want direct control over the camera network interface. For example, it brings the interface up and configures its MTU. It works to run the container with --network host to give it access to the eth0 interface on the host.
However I was wondering if it would be possible to use Docker networking to:
Allow the hosts on the LAN to access a web service running in the container (i.e., --publish 8080:8080/tcp).
Give the Vimba SDK a network interface it can control.
My attempt was to create a bridge network called net-1-lan and a macvlan network called net-2-cam. (Aside: The names are because when a container is in multiple networks, the network whose name comes first lexicographically gets chosen as the default route.)
docker network create net-1-lan # bridge network
docker network create \
--driver macvlan \
--opt parent=eth0 \
--subnet=169.254.1.0/24 \
net-2-cam
After I start my container in the bridge network, I use docker network connect to add the container to the macvlan network. However, the Vimba SDK does not detect the camera.
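For reference, the sequence looks roughly like this (the image name and the published port are placeholders for my actual application):
docker run -d --name cam-app -p 8080:8080 --net=net-1-lan my-vimba-image
docker network connect net-2-cam cam-app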
One thing I note, but am unsure is relevant, is that the container gets IP address 169.254.1.2 on the macvlan interface. When running in --network host mode, the interface claims 169.254.1.1.
Here is my situation:
First, I run a MySQL container (IP: 172.17.0.2) on CentOS.
Then I run a Nacos container on the same host with its datasource pointed at the MySQL above, but I didn't use the IP of the MySQL container; instead I used the IP of the bridge gateway (172.17.0.1). Both containers are attached to the default bridge.
What surprised me is that Nacos works fine: it can query config data from the MySQL container normally.
How does this work? I have read some documentation but didn't find the answer, and it really confuses me.
On modern Docker installations, try to avoid using the default bridge network. Use docker network create to create a network (it doesn't need any special options, but it does need to be created) and then launch your containers with --net pointing at that network. If you're using Compose, it creates a ("user bridge") network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, a container on the default bridge network sees that same IP address as its normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
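A minimal sketch of the named-network approach (the container names, credentials, and the exact datasource configuration are illustrative; the point is that Nacos can reach the database by the name mysql instead of 172.17.0.1):
docker network create app_net
docker run -d --name mysql --net app_net -e MYSQL_ROOT_PASSWORD=secret mysql:8
# configure the Nacos datasource host as "mysql"; Docker's DNS resolves that name on app_net
docker run -d --name nacos --net app_net nacos/nacos-server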
TL;DR: I cannot ping my Docker containers from other clients on my network. Only after a container has actively pinged the gateway am I able to reach it from those clients.
On my home network (192.168.0.0/24) I run a gateway, 192.168.0.1, which hosts a DNS server and also routes the internet traffic. My Docker host (192.168.0.100) has a macvlan network, created with
docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.100 -o parent=eth0 dockernet
My containers now do get static IPs, like 192.168.0.200. The containers can actively ping other physical hosts on the network, so that works fine.
But if I spin up a new container, it cannot be pinged from my physical network. Not from the docker host (which is expected as this seems to be a limitation of the macvlan network), nor from the gateway or any other client.
Once the container has actively pinged the gateway, it also becomes reachable for other clients.
So I guess some routing needs to be done and there I need your help.
The clients run Debian Buster, and I use an unmanaged switch to connect them.
The missing information above was that I am running Docker on Raspbian.
So this question is actually a duplicate of Docker MACVLAN only works Outbound.
Running sudo rpi-update on the host made it work.
TL;DR - Why does Docker call its default networking "bridge" networking when it seems to be a lot more like a NAT network?
Let's start by looking at how -
1) VMware or VirtualBox handles networking for virtual machines. Say the host IP is some random 152.50.60.21 and the network CIDR is 152.50.60.0/24.
Bridge Network - Any VM connected through this interface can have any free IP on the network the host is connected to. So if the IP 152.50.60.30 is free, a VM can bind to it. Similarly, a second VM can have the IP 152.50.60.32 if that one is free.
The bridge network connects the VMs to the same network the host is connected to. Any machine on the internet can reach the VMs, and the VMs can reach the internet directly (provided, of course, that the host network is connected to the internet).
NAT Network - The NAT network is separate from the network the host is connected to, and VMware can accept any valid CIDR for it (to keep things simple I will refer only to the private reserved blocks, though, if I am right, any CIDR is fine). This new NAT network, created on the host and accessible only from the host, can safely have the CIDR 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 (or any subnet of these). I am picking 10.0.0.0/8.
So two VMs spun up on the host and connected through the NAT network can have the IPs 10.0.3.3 and 10.0.3.6.
Being on the NAT network, the VMs are not visible to the outside world beyond the host, i.e. the VMs are not reachable from outside (except with DNAT/port-forwarding configuration on the host). But the VMs can access the outside world/internet/intranet through SNAT provided by the host, i.e. the IPs of the VMs are never exposed to the outside world.
VMware docs reference: Understanding Common Networking Configurations
Next, let's look at the Docker side -
Docker's default networking
When an image is run on the host (whose IP from above is 152.50.60.21) using Docker's default networking (which it calls a bridge network), the new container can get an IP of, say, 172.17.0.13 from the 172.16.0.0/12 range (at least in my environment). Similarly, a second container can get the IP 172.17.0.23. To access the internet, these containers rely on SNAT provided by the host, and no machine on the internet/intranet can reach the containers except through port forwarding provided by the host. So the containers are not visible to the world, only to the host.
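For example (nginx here is just an arbitrary image to make the point), publishing a port is the only way anything outside the host reaches the container:
docker run -d -p 8080:80 nginx
# reachable from the LAN as 152.50.60.21:8080, never as 172.17.0.13:80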
Looking at this, I would say that the default networking provided by Docker is a NAT network, yet Docker likes to call it a bridge network.
So, could anyone say where I have things mixed up, or how I should be looking at bridge/NAT networks?
Docker's equivalent of a VMware or VirtualBox bridge network is macvlan.
From the docs:
...you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network. In this case, you need to designate a physical interface on your Docker host to use for the macvlan, as well as the subnet and gateway of the macvlan. You can even isolate your macvlan networks using different physical network interfaces.
When you create a macvlan network, it can either be in bridge mode or 802.1q trunk bridge mode.
In bridge mode, macvlan traffic goes through a physical device on the host.
In the simple bridge example, your traffic flows through eth0 and Docker routes traffic to your container using its MAC address. To network devices on your network, your container appears to be physically attached to the network.
Example of macvlan bridge mode:
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
my-macvlan-net
This command creates a macvlan network named my-macvlan-net on top of eth0.
In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface which Docker creates on the fly. This allows you to control routing and filtering at a more granular level.
In the 802.1q trunked bridge example, your traffic flows through a sub-interface of eth0 (called eth0.10 in the example below) and Docker routes traffic to your container using its MAC address. To network devices on your network, your container appears to be physically attached to the network.
Example of macvlan 802.1q trunk bridge mode:
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0.10 \
my-8021q-macvlan-net
This creates a macvlan network named my-8021q-macvlan-net whose parent is the sub-interface eth0.10.
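Once the network exists (or, at the latest, once a container is attached to it), you can confirm on the host that Docker created the 802.1q sub-interface; the -d flag makes ip print the VLAN details:
ip -d link show eth0.10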
The naming can be confusing to someone coming from VMware or VirtualBox, but the bridge-style networking you are thinking of does exist; Docker just calls it macvlan.
You can see another tutorial of macvlan (includes assigning IP addr to the container) here.
Adding this as an answer because it is too long to be a comment.
After hours of googling and reading docs, since I could not find any concrete answer, my theory is that the difference exists because there are no standards defining these virtual interfaces, unlike physical interfaces, which are defined by the IEEE and IETF.
There is a list of Linux virtual network interfaces published by Red Hat. Most of the network interface types that you see in Docker and VMware/VirtualBox are a subset of this list, with a few exceptions. The overlay network, for example, exists only in Docker, serving its special purpose of connecting containers on different hosts to a single network.
Back to the point: these virtual interfaces are not defined by Cisco or the IETF or the IEEE or anyone else. Physical network interfaces are implemented at the OS/network-device level and follow defined standards; other products, be it a virtualization platform (VMware/VirtualBox) or a containerization platform (Docker), have to implement their own networking solution. At that point they can implement their own interfaces and give them some fancy name, or they can call them the same thing they are called at the OS level.
I am certain the Docker devs know the difference between NAT and bridge; they probably knew that the network they call a bridge is a NAT network, yet decided to call it a bridge for some reason. And since there is no centralized definition of a bridge or NAT network, you can't say that Docker is wrong. They wanted to call this network a bridge, so they call it a bridge. Yes, it can indeed be confusing to people coming from a virtualization/Linux networking background, but it is what it is.
Christophorus Reyhan already wrote an amazing answer about the bridge equivalent in Docker and how to create it. I am only adding this because I was not satisfied with the WHY of the question: why is it not the same? Yes, my answer is based on some assumptions, but this is the closest I could come to a reason. There is no way to confirm the hypothesis unless one day some Docker dev sees this and decides to reply.
One more thing: for VMware/VirtualBox, the host is the physical machine itself, so in bridge mode the guest connects to the host's network. For containers, the "host" is not the physical machine but the Docker host/engine, so in bridge mode containers attach to the host's network, except that "host" here means the Docker host. So it is still a bridge network, but the hierarchy is different this time: container -> Docker host -> host OS, rather than guest OS -> host OS. This could be another possibility as well.
From the docker documentation:
macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
"rather than through the Docker host's network stack" and
Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
which indicates that networking in the world of Docker is not the same as in VMs, even though they may sometimes share some similarities or the same names.
I deployed a demo web API project on port 8086. I am able to open it in my local browser using localhost:8086/api/controllername and also using the local machine's IP address, for example 192.0.0.0:8086/api/controllername. I tried accessing the URL from another machine on the same LAN and I was able to access it.
But now I want to access it from machines on other networks (publicly).
How can I assign a static IP so that I can use the API from any machine, irrespective of network? I created a network using the commands below:
docker network create --driver bridge --subnet 172.18.0.0/16 --gateway=172.18.0.1 IPStatic
and
docker network connect --ip 172.18.0.2 IPStatic Containerid.
But I am unable to access the API using 172.18.0.2:8086/api. Am I missing something? I am using an ASP.NET Core web API and I am fairly new to Docker.
You always use the host IP address for this, the same way as if you were running the service outside of Docker. The container-private IP addresses are unreachable from other hosts (and on some platforms aren't even reachable from outside Docker on the same host); it's usually wrong to manually set them or to try to look them up.
If it's specifically important that this service have its own IP address, you need to ask your network administrator to assign an additional address to the host. The docker run -p option can bind a service to only specific network interfaces or addresses. On a Linux host I might run
# Assign the alias address
ifconfig eth0:0 192.0.0.2
# Run the service bound to only this interface
docker run -p 192.0.0.2:80:8080 ...
You might need to reconfigure other services to not listen on this new interface. For Docker services you'd use the same docker run -p option to bind to only the host's primary interface and to localhost (127.0.0.1); configuration for non-Docker services is specific to the service.
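For example, another Docker-hosted service could be kept off the alias address like this (the primary address 192.0.0.1, the port, and the image name are assumptions; substitute your own values):
# publish only on the host's primary address and on localhost
docker run -d -p 192.0.0.1:9000:9000 -p 127.0.0.1:9000:9000 other-image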