What does the --net=host option in the docker run command really do?

I'm a bit of a beginner with Docker. I couldn't find a clear, in-depth description of what this option does in the docker run command, and I'm confused about it.
Can we use it to access applications running in Docker containers without specifying a port? For example, if I run a web app deployed via a Docker image on port 8080 using the -p 8080:8080 option in the docker run command, I know I will have to access it on port 8080 at the Docker container's IP /theWebAppName. But I cannot really figure out how the --net=host option works.

After installing Docker you have three networks by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f3be8b1ef7ce        bridge              bridge              local
fbff927877c1        host                host                local
023bb5940080        none                null                local
I'm trying to keep this simple: if you start a container, by default it will be created inside the bridge (docker0) network.
$ docker run -d jenkins
1498e581cdba        jenkins             "/bin/tini -- /usr..."   3 minutes ago       Up 3 minutes        8080/tcp, 50000/tcp   friendly_bell
In the Dockerfile of jenkins, ports 8080 and 50000 are exposed. Those ports are opened for the container on its bridge network, so everything inside that bridge network can reach the container on ports 8080 and 50000. Everything in the bridge network is in the private range "Subnet": "172.17.0.0/16". If you want to access the container from the outside, you have to map the ports with -p 8080:8080. This maps the port of your container to a port on your real server (the host network), so accessing your server on 8080 will route to your bridge network on port 8080.
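For example, publishing the Jenkins UI port from the paragraph above would look roughly like this (a sketch based on the image and port already mentioned):
$ docker run -d -p 8080:8080 jenkins
$ curl http://localhost:8080    # the container's port 8080 is now reachable via the host's own address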
Now you also have the host network, which does not containerize the container's networking. If you start a container on the host network, it will look like this (it's the first one):
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
1efd834949b2        jenkins             "/bin/tini -- /usr..."   6 minutes ago       Up 6 minutes                              eloquent_panini
1498e581cdba        jenkins             "/bin/tini -- /usr..."   10 minutes ago      Up 10 minutes       8080/tcp, 50000/tcp   friendly_bell
The difference is in the ports column. Your container is now inside your host network, so if you open port 8080 on your host you will access the container immediately.
$ sudo iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
I've opened port 8080 in my firewall, and when I now access my server on port 8080 I'm accessing my Jenkins. I think this blog is also useful to understand it better.
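For reference, the host-network container in the listing above would have been started with something like this (the exact command isn't shown in the output, so treat it as a sketch):
$ docker run -d --net=host jenkins
$ curl http://localhost:8080    # Jenkins answers directly on the host's own interfaces, no -p needed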

The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.
Normally you have to forward ports from the host machine into a container, but when the containers share the host's network, any network activity happens directly on the host machine - just as it would if the program was running locally on the host instead of inside a container.
While this does mean you no longer have to publish ports and map them to container ports, it also means you have to edit your Dockerfiles (or configuration) to adjust the port each container listens on in order to avoid conflicts, since you can't have two containers listening on the same host port. However, the real reason for this option is running apps that need network access which is difficult to forward to a container at the port level.
For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container as --net=host.
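A rough sketch of such a run (the image name my-dhcp-server is made up here, and a DHCP daemon typically also needs extra privileges such as NET_ADMIN):
$ docker run -d --net=host --cap-add=NET_ADMIN my-dhcp-server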
Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.
Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine as you only forward the single expected port, however if you use --net=host then you'll get all the container's ports listening on the host, even those that aren't listed in the Dockerfile. This means you will need to check the container closely (especially if it's not yours, e.g. an official one provided by a software project) to make sure you don't inadvertently expose extra services on the machine.
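One way to check what actually ends up listening on the host is plain Linux tooling rather than anything Docker-specific:
$ sudo ss -tlnp    # lists every TCP port listening on the host and the process that owns it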

One more point to remember: the host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

You can also create your own network with docker network create anyname and run containers on it with --net=anyname. This is done to isolate the services of different containers. Suppose the same service runs in different containers but the port mapping stays the same: the first container starts fine, but the same service in the second container will fail because the host port is already taken. So to avoid this, either change the port mappings or create a network.
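A minimal sketch of that approach (the network and container names are made up for illustration):
$ docker network create mynet
$ docker run -d --net=mynet --name web1 nginx
$ docker run -d --net=mynet --name web2 nginx
# Both containers listen on port 80 internally without clashing; other containers
# on mynet can reach them by name, e.g. http://web1 and http://web2.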

Related

How to communicate with a running Docker container on host X from another host Y (not from a container on host Y)

I am experimenting with Docker networking and have set up the following scenario:
I installed Docker on host X, which is connected to a network (host X IP: 60.0.0.28), and ran a basic Ubuntu container on it. The container is connected only to the default Docker bridge network, i.e. 172.17.0.0/16, and its IP is 172.17.0.2. Now I am trying to communicate with that running container from another host Y on the same network (host Y IP: 60.0.0.40), on which no Docker is installed.
I added a basic route on host Y: "ip route add 172.17.0.0/16 via 60.0.0.28 dev ens3".
From the container I am able to ping host Y, but in the reverse case I am only able to ping the Docker gateway 172.17.0.1 from host Y; I am not able to reach the container.
There are a wide variety of situations where the Docker-internal IP addresses just aren't useful; calling from a different host is one of them. You should totally ignore those as an implementation detail.
If you take Docker out of the picture, and run the process directly on the host, this should be straightforward: from host Y, you can call the process on host X given its DNS name and the port the server is running on.
hostY$ curl http://hostX:12345/
If the process is actually running in a Docker container, you need to make sure you've started the container with a published port. This doesn't necessarily need to match the port the process is listening on.
hostX$ docker run -p 12345:12345 imagename
Once you've done this, the process can be reached via the host's DNS name or IP address, and the published port, the same way as with a non-container server.
In normal circumstances you should not need to think about the Docker-internal IP addresses; you do not need manual ip route setup commands like the one you show, and you shouldn't need docker inspect to find this detail or docker run --ip to set it.
Let’s assume you want to start Dockerized nginx on host X.
You’d run:
docker run --detach -p 8080:80 nginx
Then you could access your nginx instance using http://60.0.0.28:8080.

Communicate with a service inside a Docker container from the host without using its IP

I have a process running on a host that needs to communicate with a Docker container, and I want it to address the container by some parameter that can't change (like the container name or hostname) rather than by IP (I'd prefer not to make the container's IP static or install extra containers for this).
I'm aware that containers can resolve each other by name on a user-defined network, and that's what I want, but between a process running on the host and a container rather than between containers.
I couldn't find a solution. Can it be done?
Edit:
I'm not allowed to use the host network or to open additional ports on the host, for security reasons.
You're welcome to choose the way which fits your needs better.
Option 1. Use the host's networking. In this case Docker does not create a separate network for the container, and you connect to the container's services as if they were running on your host:
docker run --network=host <image_name>
The drawback of this approach is low isolation and thus weaker security. You don't need to expose any ports here: if the service listens on 8080, just open localhost:8080 and enjoy.
The second approach is more correct: you expose (forward) internal container ports and map them onto ports on the host.
docker run -p 8080:80 <image_name>
This maps port 80 from the container to port 8080 on the host. As in the previous example, you still connect using localhost, e.g. localhost:8080.
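Given the question's constraint about not opening extra ports externally, note that a published port can also be bound to the loopback interface only (the nginx image here is just an example):
$ docker run -d -p 127.0.0.1:8080:80 nginx
# reachable from the host at http://localhost:8080, but not from other machines on the network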

Docker: how to open ports to the host machine?

What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT     STATE  SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IP wasn't correct. There was nothing blocking the ports and the services were running fine too; ping and nmap showed the IP as online, but for some reason it wasn't the host system.
Lesson learned: don't rely on route inside the container to return the correct host address. Instead, check ifconfig or ipconfig on the Linux or Windows host respectively and pass that IP to the container via environment variables.
Right now I'm transitioning to using docker-compose and have put all required services into containers, so the host system doesn't need to get involved and I can simply rely on Docker's DNS. This is much more satisfying.
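A minimal sketch of that workaround, assuming a Linux host and a container that reads the address from an environment variable (DB_HOST is a made-up variable name):
$ HOST_IP=$(hostname -I | awk '{print $1}')    # the host's LAN address, not the docker0 one
$ docker run -e DB_HOST=$HOST_IP -it alpine sh
# inside the container, the MySQL server on the host is reachable at $DB_HOST:3306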

Exposing Docker Container Ports

I understand that to expose ports in a docker container, you can use the -p flag (e.g. -p 1-100:1-100). But is there a nice way to expose a large percentage of possible ports from the container to the host machine? For instance if I am running a router of sorts in a container that lives in a VM, and I would like to expose all ports in the container from 32768 upwards to 65535, is there a nice way to do this? As it stands I've tried using the -p flag and it complains about memory allocation errors.
Never mind, I figured out my misunderstanding. -P is what I want: I want to expose the ports, not explicitly map them.
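For reference, a small sketch of that approach (the image and container names are illustrative): -P publishes every port the image EXPOSEs onto random high host ports, which docker port can then list.
$ docker run -d -P --name router myrouterimage
$ docker port router    # shows which host port each exposed container port was published to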
tl;dr
docker run --net=host ...
Docker offers different networking modes for containers. By default the networking mode is bridge which implies the need to expose ports.
If you run a container with networking mode host then you won't need to expose/forward ports as both the docker host and the container will share the very same network interface.
In the container, localhost will refer to the docker host. Any port opened in the container is in fact opened on the docker host network interface.
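As a quick illustration of that last point (a sketch, assuming some service on the docker host is listening, e.g. on port 8080):
$ docker run --net=host -it alpine sh
# inside the container, localhost is the docker host, so that service is reachable at localhost:8080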

How to make a container visible to the outside network, and handle IP addresses in production

I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new IP address on the docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the IP address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container IP addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
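As a small sketch of how this plays out in production (the container name and restart policy here are suggestions, not part of the answer above):
$ docker run -d --restart unless-stopped --name web -p 80:80 nginx
# external clients always use the Ubuntu server's own IP and port 80;
# the container's changing docker0 address never matters to them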
