Docker Networking with two interfaces

I am trying to set up my Docker server, which has two network interfaces: eth0 going to my LAN and eth1 going to an internal network for my VPN tunnel. Right now all my containers are available through both interfaces, but I want to decide which containers are available through each interface.
I'm using docker-compose to start my containers, and I have tried creating some Docker networks and assigning them, but I couldn't solve it that way. I also found something about macvlan networks, but that seemed a bit too much for me. So I am wondering: is there another way, or did I maybe misconfigure something? Or is macvlan still the easiest way to fix this (if it is possible that way)?

After doing some more digging I found another way which is easier to set up. I had totally forgotten this worked, but you can specify which interface a port should bind to by giving the IP address of that interface when publishing the port.
Like this when using docker run:
-p 192.168.1.100:8080:80/tcp
Or like this in docker compose:
ports:
- "192.168.1.100:8080:80/tcp"

Related

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add a link to another container. This is a legacy feature of Docker and may eventually be removed.
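To make the distinction concrete, a quick sketch with docker run (nginx is only a placeholder image):

docker run -d -p 8080:80 nginx      # publishes: host port 8080 is forwarded to the container
docker run -d --expose 9000 nginx   # exposes only: pure metadata, nothing new is reachable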
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is then exposed to all the networks that container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address, and setting the port in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, something like 80:10.8.0.3:80 (HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container-to-container networking in Docker is one-size-fits-many. When two containers are on the same network, and ICC (inter-container communication) has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled in other projects like Kubernetes by offloading the networking to a CNI plugin, where various vendors support network policies. The implementation may be iptables rules, eBPF code, some kind of sidecar proxy, etc. But it has to be done as the container networking is set up, and Docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created. The application could also be configured to listen only on the specific IP address of the network it trusts, but this requires injecting the trusted subnet and then looking up your container IP in your entrypoint; that is non-trivial to script up, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components that need to be on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
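A minimal sketch of that docker events idea, assuming a hypothetical trusted subnet (10.0.2.0/24) and port (5432); the rules go into the DOCKER-USER chain, which Docker leaves free for user rules:

#!/bin/sh
# React to container starts and restrict access to a sensitive port.
docker events --filter 'event=start' --format '{{.ID}}' |
while read -r id; do
  # Look up the container's IP address on its network(s).
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$id")
  [ -n "$ip" ] || continue
  # Drop traffic to this container's port 5432 unless it comes from the trusted subnet.
  iptables -I DOCKER-USER -d "$ip" -p tcp --dport 5432 ! -s 10.0.2.0/24 -j DROP
done

Note that whether bridge-local container-to-container traffic actually traverses these rules depends on br_netfilter being enabled, so treat this as a starting point, not a guarantee.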
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by user-defined networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this for me was to remove the -p option, as that binds the container to the host and makes it available outside the host.
If you don't specify any -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems that -p is what forces the container onto the host, by binding it to the specified host port.
In your example, if you don't use -p when starting container1, it would be available to the internet, email and database networks on all its ports, but not outside the host.
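A minimal compose sketch of that setup (image names are placeholders): container1 publishes nothing, so its port 80 is reachable only from containers sharing one of its networks, such as db-client:

services:
  container1:
    image: nginx              # placeholder; listens on port 80
    networks:                 # no ports: section, so nothing reaches the host
      - internet
      - email
      - database
  db-client:
    image: alpine             # placeholder peer on the database network
    command: sleep infinity
    networks:
      - database
networks:
  internet:
  email:
  database:

From db-client, wget -qO- http://container1 should answer; from outside the host, nothing is published.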

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside of a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or --network=host when running my container, but none of these methods seem to work.
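For reference, the first attempt has this shape (a sketch; the image matches the one listed below):

docker run --rm -it --add-host host.docker.internal:host-gateway ubuntu:20.04 bash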
I have a simple hello-world webserver up on my machine, and I can see its contents with curl localhost:8000 from my host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (depending on the solution I used to make localhost available there), but none of them work and I get a Connection refused error every time.
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
Temporary solution
This does not completely solve the problem for production purposes, but at least to get localhost working, adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi to serve Java REST endpoints, with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm; do not use this if you need to keep some containers, otherwise use docker container rm container_id for individual ones) and building again with docker-compose up --build.
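As a concrete sequence (a sketch; assumes you are in the project directory and none of the stopped containers need to be kept):

docker-compose stop           # stop the running containers
docker-compose rm -f          # remove the stopped containers
docker-compose up --build     # rebuild the images and start again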
In this case, I needed to use a different localhost IP to reach my service from the browser (NiFi started on another IP, 127.0.1.1, but it works fine as well).
Searching for the problem / deeper into ubuntu-docker networking
First, here are some commands that may be useful for tracking down the docker-ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN with docker0)
ifconfig - list all interfaces
brctl show - ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - lists docker networks: names, drivers, scope...
docker network inspect bridge - I can see the docker0 bridge has no attached containers; it is an empty, unused bridge
(useful link for ubuntu-docker networking explanation)
I guess the problem lies with the veth pair (see the link above): when docker-compose runs, a new bridge is created (not docker0) and connected to the veth pair in my case, and docker0 is not used. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow in Ubuntu networking docker0 is not used as the default bridge, and this maybe should be changed.
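One way to see what host-gateway actually injects (a sketch; alpine is just a small test image) is to print the container's hosts file; the entry normally points at the bridge gateway, typically 172.17.0.1 on a default install:

docker run --rm --add-host host.docker.internal:host-gateway alpine cat /etc/hosts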
I don't have much time left, actually, so I suppose someone can use this information and resolve the core of the problem later on.

Why do we need a custom bridge to communicate with other Docker containers by name? Why can't the default bridge do it?

I'm working with Docker containers and I find it strange that the default network prevents containers from communicating with each other by name.
Thanks for any hint.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
From the official Docker documentation.
Technically, there is nothing stopping Docker from resolving container names on the default bridge network. I think it is just a decision made by the Docker team to force users to create bridge networks consciously, so that they know what they are doing and use them securely in production.
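A quick way to see the difference (a sketch; the names and images are placeholders):

# user-defined bridge: DNS-based name resolution works
docker network create mynet
docker run -d --name web --network mynet nginx
docker run --rm --network mynet alpine ping -c 1 web    # resolves "web"

# default bridge: the same lookup fails without legacy --link
docker run -d --name web2 nginx
docker run --rm alpine ping -c 1 web2                   # "bad address 'web2'"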

Docker-Compose multiple networks, make ports available to outside the host

I am currently deploying a Docker network with a backend and a frontend.
All containers are part of a network called basic, and one container should be accessible from outside the host machine.
When using docker-toolbox on Windows, it works fine: I can access all containers with forwarded ports from outside the host machine.
ports:
- 8080:8080
My problem is that on Red Hat 7 I haven't found a way so far to make it accessible without manipulating the iptables. I can access all containers with mapped ports from inside my host machine, but to make them accessible from outside the host machine, I need to do:
sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
I think there should be an easier way to use docker networks to do this, right?
There was an external setting which was continuously resetting the forwarding.
It was nothing directly related to Docker(-Compose).
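If something keeps resetting it, one way to make the setting stick (a sketch, assuming a systemd-based distribution like RHEL 7; the file name is arbitrary):

# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.conf.all.forwarding = 1

sudo sysctl --system    # reload all sysctl configuration files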

Syn flood and net.ipv4.tcp_syncookies

I am trying to configure a Docker container, running Tengine on Ubuntu 14, to use SYN cookies. However, I am facing some issues.
The host has net.ipv4.tcp_syncookies=1 enabled, and syncookies work directly on the host, but the container on the same host does not use syncookies.
Does anyone know a way of getting the container to use syncookies?
Thanks in advance :).
I suspect the default bridge is missing a lot of the customizations you made on the host network interface. Bypass the bridge completely and attach the container directly to the host network (not a good general practice, but your use case is atypical) with:
docker run --network host ...
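Since --network host shares the host's network namespace, you can check that the setting is visible from inside a container (a sketch; alpine is just a small test image):

docker run --rm --network host alpine cat /proc/sys/net/ipv4/tcp_syncookies    # should print 1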
