Docker host networking - specify network interface

I am trying to run a Docker container using the --network host option on an embedded Linux system. There are two Ethernet interfaces, eth0 and eth1. The eth0 interface is used for Ethernet-to-serial and should not be used for anything else.
Whenever I run my container with the --network host option, Docker uses the IP address of eth0. I need Docker to run the container using the host IP address of eth1 instead.
The only thing I have tried to get this to work is bringing the eth0 interface down. That won't work because I need both interfaces up, and I need to make this work with host networking. Is there some way to specify the interface to use with docker run? If not, is there some way to set a constraint in Linux so that Docker uses eth1 instead of eth0?

Related

difference between docker BRIDGE and HOST driver?

Can you give me a guide or diagram to help understand the difference?
The reason I ask is that I can't open the website with the following method:
docker network create -d bridge mybridge
docker run -d --net mybridge --name db redis
docker run -d --net mybridge -e DB=db -p 8000:5000 --name web chrch/web
But I can open the website with the following method:
docker run --rm -d --network host --name my_nginx nginx
I am using a Google Cloud Platform VM instance and installed Docker myself.
According to the Docker documentation about bridge networking:
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network.
According to the Docker documentation about host networking:
If you use the host network driver for a container, that container’s network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application will be available on port 80 on the host’s IP address.
If you want to deploy multiple containers connected between them with a private internal network use bridge networking. If you want to deploy a container connected to the same network stack as the host (and access the same networks as the host) use host networking. If you simply want to publish some ports, run the container with the --publish or -p option, such as -p 8080:80.
In your first example I'd expect the application to be reachable on the host's IP address at port 8000 (the remapped port); in the second, the container's ports are used as-is (there is no remapping with host networking), so nginx is reachable on its default port 80. If there's some sort of configuration or firewalling issue preventing the first setup from working, you should address that rather than hack around it with --net host.
Bridge networking is Docker's standard networking mode. You should prefer it if at all possible. There are, confusingly, two different modes of it, but the form you show with an explicit docker network create is a best practice and you should use it if at all possible. Host networking completely disables Docker's network isolation. It means containers see and use exactly the same network interfaces the host has available, without an intermediate NAT layer.
With bridge networking, you need the docker run -p option to make specific ports visible outside of Docker. As an operator you can remap ports, bind to specific interfaces on a multi-homed system, or simply decline to make a service visible to other hosts at all. The explicit docker network create form lets containers connect to each other using their docker run --name as host names. If you're running multiple application stacks on the same host, they can be partially isolated from each other by using separate networks. Each container has its own separate network space, and localhost means "this container". This mode is also an easy step to the networking models in multi-host systems like Docker Swarm or Kubernetes.
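For example, here are hedged sketches of those operator choices (the nginx image and the interface address 192.168.0.5 are placeholders, not values from the question):
$ docker run -d -p 8080:80 nginx              # remap: host port 8080 -> container port 80
$ docker run -d -p 192.168.0.5:8080:80 nginx  # bind only to one interface on a multi-homed host
$ docker run -d -p 127.0.0.1:8080:80 nginx    # loopback only: not visible to other hosts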
With host networking, none of the above works at all; you cannot use docker run --net host -p ... and you have no choice about where or how ports are exposed. You can't reach other containers, unless they're configured to publish ports themselves. Since you're using the host's network, localhost means the host's view of itself.
For all that it's frequently recommended in SO answers, --net host is rarely necessary. The two cases I can think of offhand are a service that needs to interrogate the host's network stack (for instance, a service-discovery system like Consul needs to know every port the host is listening on so it can advertise them) or a service that uses a large or inconsistent set of ports. If you're using --net host because you've hard-coded localhost in your application, you're better off making that configurable.
Feature comparison between the bridge and host drivers:
Driver: the bridge network is provided by the bridge driver; the host network is provided by the host driver.
Default: bridge is the default network, provided by the bridge driver; host is not a default.
Connectivity: the bridge driver provides inter-container connectivity for all containers running on the same machine; the host driver instructs Docker not to create any special networking namespace or resources for attached containers.

Relationship between docker0, Docker Bridge Driver and Containers

I was watching a YouTube video on Docker networking and saw this slide:
And I'm trying to make sense of it. From the docker0 docs:
"By default, the Docker server creates and configures the host system’s docker0 a network interface called docker0, which is an ethernet bridge device. If you don’t specify a different network when starting a container, the container is connected to the bridge and all traffic coming from and going to the container flows over the bridge to the Docker daemon, which handles routing on behalf of the container."
But I'm still a little confused on the flow of traffic here. Let's say I install Docker on a new host. I assume docker0 is created & configured at installation time. So now my host has this docker0 ethernet bridge on it.
Now let's say I start a container on my new Docker host:
docker run -it -p 9200:9200 -d --name myapp myapp
Since I didn't specify a network driver, bridge is selected for me by default. According to the blurb in the docs above, the container should now be sending and receiving traffic over that docker0 bridge. However, the diagram indicates that there's no traffic flowing between docker0 and the bridge-based containers (C4, C5, C6), and I'm wondering: why?! Any ideas? Thanks in advance!
You are right, that diagram doesn't exactly match what is happening. I haven't seen the video; maybe that picture is a snapshot of one particular moment. We'd probably need to watch the video to understand the context.
Anyway, when Docker creates the docker0 interface, some iptables rules are created using new chains (DOCKER and DOCKER-ISOLATION). By default, Docker containers are only accessible from your host. Then, using the -p option on the docker run command, you map ports from your host to the container directly, so that reaching a certain port on your host actually reaches the container. You can check the NAT table before and after running the container using iptables -t nat -L. You'll see the difference and the rule for the mapping.
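For example, a minimal sketch of that before/after check (myapp and port 9200 are borrowed from the question above; the exact rule output varies by Docker version):
$ sudo iptables -t nat -L DOCKER     # before: no rule for the container
$ docker run -d -p 9200:9200 --name myapp myapp
$ sudo iptables -t nat -L DOCKER     # after: a DNAT rule appears, roughly
# DNAT  tcp  --  anywhere  anywhere  tcp dpt:9200 to:172.17.0.2:9200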
And yes, the containers are created on the same network and can communicate with each other on that network. By default, the network range used for Docker is 172.17.0.0/16, so your first container will be 172.17.0.2, the second will be 172.17.0.3, and so on (172.17.0.1 is your docker0 IP).
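You can confirm the address Docker picked with docker inspect; for example, using the myapp container from above:
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' myapp
# prints e.g. 172.17.0.2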

How to assign specific IP to container and make that accessible outside of VM host?

I wish to make two of my containers available outside of the VM host on their separate, specific IP addresses (192.168.0.222, 192.168.0.227), without port mapping. That means I wish to access any port directly on the containers by using its IP. I already have machines running in the network outside of the VM host in the range 192.168.0.1–192.168.0.221.
Is this now possible with Docker 1.10.0, and if so, how?
I'm on OS X 10.11 with docker version 1.10.0, build 590d5108 and docker-machine version 0.6.0, build e27fb87, using boot2docker/VirtualBox driver.
I have been trying to figure this out for some while, without luck, and I've read the following questions and answers:
How to assign static public IP to docker container
How to expose docker container's ip and port to outside docker host without port mapping?
How can I make other machines on my network access my Docker containers (using port mapping)?
According to Jessie Frazelle, this should now be possible.
See "IPs for all the Things"
This is so cool I can hardly stand it.
In Docker 1.10, the awesome libnetwork team added the ability to specify a specific IP for a container. If you want to see the pull request it’s here: docker/docker#19001.
# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
# BOOM golden
That illustrates the new docker run --ip option, which you now also see in docker network connect.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
The "making accessible" part would involve, as usual, port forwarding.

Docker: Connectivity between Physical Machine - VM - Docker container

I have just started experimenting with Docker.
On my Windows host I have a virtual machine which holds a Docker container. I want to have communication between the host and the container, or maybe between other VMs and this container.
Host ip is 192.168.2.10 with subnet mask 255.255.255.0
VM ip is 192.168.254.130 with subnet mask 255.255.255.0
Container gets an ip 172.17.0.13
I have seen few blogs talking about bridging but I am still not sure about it and how to do that. I am not very much into networking stuff.
A little guidance will help.
Thanks
EDIT:
I followed this bridge-building guide but could not understand what IP range to give the bridge, so I gave it 192.168.254.1/24. The command ip addr show bridge0 shows state UNKNOWN.
The normal way to do this is just to publish a port on the container and use the IP of the VM, e.g.:
docker run -d -p 80:80 nginx
Then visit the IP of the VM in a browser running on your host and you should get the webpage.
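Equivalently, you can test from a shell on the host; a quick sketch using the VM IP from the question (for docker-machine setups, substitute the output of docker-machine ip):
$ curl http://192.168.254.130/
# should return the nginx welcome page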
I'll assume you are using Docker on Windows with the Linux host running on VirtualBox. Note that by default docker-machine creates a NAT adapter (with a port forward) and a host-only adapter; sometimes it is tricky to get different machines to talk to the correct IP.
As answered by Adrian, you typically "publish" ports by port forwarding, but if your container has to communicate over many ports and you are only running one such container per host, it can be easier to start the container via docker run --net host ...; this way the host's ethernet adapters are directly visible within the container (as I discovered here).
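A quick way to see that effect (a sketch; alpine is just a convenient image whose busybox build includes the ip tool):
$ docker run --rm --net host alpine ip addr
# lists the host's own interfaces (eth0, docker0, ...), not a container-private eth0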

how to properly specify an IP for a docker container

I'm trying to explicitly specify an IP address for my docker container in the following way:
sudo docker run -it -p 172.17.0.2:10000:10000 -p 9000:9000 -p 9090:9090 -v /home/eugene/dev/shared:/opt/shared -d eugene/dev_img_1.3
I'm getting the following error:
Error response from daemon: Cannot start container b2242e5da6e1b701ba4880f25fa8d465d5f008787b49898ad9e46eb26e417e48: port has already been allocated
I really do not care about port 10000. My goal is to have a specific container IP of my choosing, as well as to have ports 9000 and 9090 exposed to the host.
I have looked at some other questions, but did not see a clear syntax to do this
The -p argument is used to forward ports from the container to the host, not for assigning IPs.
There is no easy way to assign a fixed IP to a Docker container and I would strongly advise you not to try. Instead re-architect your system so that it isn't dependent on a fixed IP. If this really isn't possible, I think you can choose an IP by using the LXC execution driver and various flags, but I would strongly recommend against this.
You can assign a fixed IP using pipework, but it's not "the Docker way". I would agree with Adrian: re-design away from fixed IPs.
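For reference, a pipework invocation looks roughly like this (a hedged sketch based on the pipework README; the bridge name, container name, and address are examples):
$ sudo pipework br1 my_container 192.168.0.222/24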
This can be done in different ways.
You can edit your system-wide Docker daemon settings (by editing DOCKER_OPTS in /etc/default/docker on Ubuntu), add the option --ip=IP_ADDRESS, and then restart the daemon. If you are running only one Docker container and want it to have the same IP address as your host, start the container with the --net=host flag.
Another way is to have these options configured at daemon startup (again by editing DOCKER_OPTS in /etc/default/docker):
--bip=CIDR: supply a specific IP address and netmask for the docker0 bridge, using standard notation like 192.168.1.8/23.
--fixed-cidr=CIDR: restrict the range container IPs are allocated from. For example, with --fixed-cidr=192.168.1.0/25, IPs for your containers will be chosen from the first half of the 192.168.1.0/24 subnet. These docker0 bridge settings are used every time you create a new container. Note that the -p flag you are using binds a container's ports to the host; it will not help you assign an IP address to the container.
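A sketch of what that file might contain (the values are illustrative only; --fixed-cidr must fall inside the --bip subnet):
# /etc/default/docker
DOCKER_OPTS="--bip=192.168.1.8/23 --fixed-cidr=192.168.1.0/25"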
Another way is to assign an IP address from a particular range (example: 172.30.1.21/30): stop the Docker daemon (e.g. service docker stop), then use the ip link and ip addr commands to set up a bridge br0, and start Docker with docker -d -b br0.
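A minimal sketch of that bridge setup (addresses are illustrative; docker -d -b is the old daemon syntax from that era, newer releases use dockerd --bridge=br0):
$ sudo service docker stop
$ sudo ip link add name br0 type bridge
$ sudo ip addr add 172.30.1.21/30 dev br0
$ sudo ip link set dev br0 up
$ sudo docker -d -b br0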
