Docker-Compose multiple networks, make ports available outside the host

I am currently deploying a Docker network with a backend and a frontend.
All containers are part of a network named basic, and one container should be accessible from outside the host machine.
When using Docker Toolbox on Windows, this works fine: I can access all containers with forwarded ports from outside the host machine:
ports:
- 8080:8080
My problem is that on Red Hat 7 I have so far not found a way to make it accessible without manipulating iptables. I can access all containers with mapped ports from inside my host machine, but to make them accessible from outside the host machine, I need to run:
sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
I think there should be an easier way to use Docker networks for this, right?
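For reference, a minimal compose sketch of the setup described above (service and image names are illustrative; only the basic network and the published port come from the question):
services:
  frontend:
    image: my-frontend-image     # illustrative
    networks:
      - basic
    ports:
      - "8080:8080"              # published: should be reachable from outside the host
  backend:
    image: my-backend-image      # illustrative; no published ports
    networks:
      - basic

networks:
  basic: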

It turned out there was an external setting that was continuously resetting the forwarding.
It was nothing directly related to Docker (or Docker Compose).
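If something on the host keeps resetting it, one way to persist the forwarding setting on RHEL 7 is a sysctl drop-in file (the file name is arbitrary; the FORWARD policy would still have to be persisted separately, e.g. via iptables-services):
# persist IP forwarding across reboots and external sysctl reloads
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system   # re-apply all sysctl configuration files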

Related

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add link to another container. Is a legacy feature of Docker. It may eventually be removed.
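In practice, the difference between the first two looks roughly like this (the nginx image is just an example):
# --publish / -p: forwards host port 8080 to container port 80
docker run -d -p 8080:80 nginx

# --expose: documentation only; nothing is opened on the host
docker run -d --expose 9000 nginx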
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is then exposed to all the networks that container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, take this docker-compose file where container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried creating a custom overlay network, giving the container a static IPv4 address and setting the port in that form in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only be done against a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple thing in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container-to-container networking in Docker is one-size-fits-many: when two containers are on the same network and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI plugin, where various vendors support network policies. The implementation may be iptables rules, eBPF code, some kind of sidecar proxy, etc. But it has to be done as the container networking is set up, and Docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands for containers after they've been created. The application could also be configured to listen only on the specific IP address of the network it trusts, but this requires injecting the subnet you trust and then looking up your container's IP in your entrypoint, which is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components which need to be on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
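A rough sketch of that docker events idea, with the actual iptables rule left as a placeholder because it depends entirely on your subnets (illustrative, not a tested solution):
#!/bin/sh
# react to container start events and print each container's IPs;
# a real implementation would insert iptables rules (e.g. into DOCKER-USER) here
docker events --filter event=start --format '{{.Actor.Attributes.name}}' |
while read -r name; do
  ips=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' "$name")
  echo "container $name started with IP(s): $ips"
  # sudo iptables -I DOCKER-USER -d <container-ip> ! -s <trusted-subnet> -j DROP
done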
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this for me was to remove the -p option, as that binds the container port to the host and makes it available outside the host.
If you don't specify any -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems the -p flag is what binds the container's port to the host on the port specified.
In your example, if you don't use -p when starting container1, then container1 is available to the internet, email and database networks on all its ports, but not outside the host.
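A minimal compose sketch of that point: with no ports: section, container1 is reachable by other containers on its shared networks, but not from outside the host (image names are illustrative):
services:
  container1:
    image: my-app          # illustrative; no "ports:", so nothing is published on the host
    networks:
      - internet
      - email
      - database
  db-client:
    image: my-client       # anything on the shared "database" network can still reach container1
    networks:
      - database

networks:
  internet:
  email:
  database: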

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside of a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or --network=host when running my container, but none of these methods seems to work.
I have a simple hello world webserver up on my machine, and I can see its contents with curl localhost:8000 from my host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (based on the solution I used to make localhost available there), but none of them seems to work and I get a Connection refused error every time.
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
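For reference, the two approaches mentioned above are usually invoked like this (host-gateway requires Docker 20.10+; port 8000 matches the example web server):
# map host.docker.internal to the host's gateway IP, then curl host.docker.internal:8000 inside
docker run --rm -it --add-host host.docker.internal:host-gateway ubuntu:20.04 bash

# or share the host's network namespace, then curl localhost:8000 inside
docker run --rm -it --network host ubuntu:20.04 bash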
Temporary solution
This does not completely solve the problem for production purposes, but at least to get localhost working for now, adding these lines to docker-compose.yml solved my issue (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi for Java REST endpoints with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm; do not use it if you need to keep some containers, in which case use docker container rm container_id instead) and building again with docker-compose up --build.
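Roughly, for the nifi example that boils down to:
docker-compose rm -s -f nifi    # stop and remove just this service's container
docker-compose up --build       # rebuild the images and recreate the containers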
In this case, I needed to use a different loopback IP to reach my service from a browser (NiFi started on another IP, 127.0.1.1, but works fine as well).
Searching for the problem / deeper into ubuntu-docker networking
First, here are some commands that may be useful for tracking down the Docker-Ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN with docker0)
ifconfig - list all interfaces
brctl show - ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - lists Docker networks - names, drivers, scope...
docker network inspect bridge - shows that the docker0 bridge has no attached containers; it is an empty, unused bridge
(useful link for ubuntu-docker networking explanation)
I guess the problem lies with the veth pair (see the link above): when docker-compose runs, a new bridge is created (not docker0) and connected to a veth pair in my case, and docker0 is not used. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow, in Ubuntu networking, docker0 is not used as the default bridge, and maybe this should be changed.
I don't have much time left for this, but I hope someone can use this information to resolve the core of the problem later on.

How does Docker handle communication between different containers on the default bridge on the same host?

Here is my situation:
First, I run a MySQL container (IP: 172.17.0.2) on CentOS.
Then I run a Nacos container on the same host with its datasource pointed at the MySQL above, but I didn't use the IP of the MySQL container; instead I used the IP of the bridge gateway (172.17.0.1) (both containers are attached to the default bridge).
What surprised me was that Nacos works well; it can query config data from the MySQL container normally.
How did this happen? I have read some documentation but didn't find the answer. It really confuses me.
On modern Docker installations, try to avoid using the default bridge network. Use docker network create to create a network (it doesn't need any special options, but it does need to be created) and then launch your containers with --net set to that network. If you're using Compose, it creates a ("user bridge") network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, inside a container (on the default bridge network), it sees that same IP address as the normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
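A minimal sketch of the named-network approach for this setup (the network name is arbitrary, and the Nacos datasource configuration is omitted):
# create a user-defined bridge network
docker network create app-net

# attach both containers to it; they can then reach each other by container name
docker run -d --name mysql --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name nacos --network app-net nacos/nacos-server
# inside the nacos container, the datasource host can then simply be mysql:3306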

How to use confluent/cp-kafka image in docker compose with advertising on localhost and my network container name kafka?

How to use confluent/cp-kafka image in docker compose with exposing on localhost and my network container name kafka?
Please do not mark this as a duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect both on localhost and on the Docker network.
In the configure script of confluent/cp-kafka they do this annoying thing:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
  export KAFKA_LISTENERS
  KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always rewrites whatever hosts I put in KAFKA_ADVERTISED_LISTENERS to 0.0.0.0! For example, using the Docker network and setting
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092, plus some Docker IP such as PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the Docker network).
Currently I can get only one or the other to work. Using localhost, it only works on the host system, and no Docker containers can access it. Using kafka, it only works in the Docker network, and no host applications can access it. I want it to work with both. I am using docker compose so that I can have zookeeper, kafka, redis, and my application start up together. I have other applications that will start up without Docker.
Update
So when I set PLAINTEXT://localhost:9092 I can access Kafka running in Docker from outside of Docker.
When I set PLAINTEXT://kafka:9092 I cannot access Kafka running in Docker from outside of Docker.
This is expected. However, with PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093 I would expect to be able to access Kafka running in Docker both inside and outside of Docker. Instead, the confluent/cp-kafka image wipes out localhost and kafka, sets them both to 0.0.0.0, and then throws an error that I set two different ports on the same IP...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't?) use the same listener "protocol" in different networks.
Use advertised.listeners; there is no need to edit the listeners:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is loaded inside the container, you need to add a port mapping for 9093, which should be self-explanatory; then you connect to localhost:9093 and it should work.
Then, if you also have PLAINTEXT://kafka:9092, that will only work within the Docker Compose network, not externally, because your host's DNS servers know nothing about the kafka name; that's how Docker networking works. You should be able to run other applications as part of that Docker network with the --network flag, or connect containers using Docker Compose.
Keep in mind that if you're running on Mac, the recommended way (as per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using the --net=host flag of Docker. However, using the blog above, it all works fine on a Mac outside a VM.
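Putting that together, a compose sketch of the dual-listener setup (following the Confluent documentation pattern; the confluentinc/cp-* image names, the zookeeper service and the replication factor are assumptions, not taken from the question):
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"    # only the host-facing listener needs to be published
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Containers on the compose network then use kafka:9092, while applications on the host use localhost:29092.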

Docker container linking via port forwarding?

It seems that the preferred way to expose services to other Docker containers is container linking, which sets some environment variables that you then have to use in your application code to look up host names and port numbers:
psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT
Is there a reason this is not done via port forwarding in a way that is transparent to the application? So that in the same way that I can just run my web server inside the container on standard port 80 and have Docker figure out what actual port to use, I could just be doing
psql -h 0.0.0.0 # no -p necessary, we use the default port
The port forwarding would be set up when I start docker, just like with server ports.
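For context, the linking mechanism the question refers to looks roughly like this (container and alias names are illustrative):
docker run -d --name pg -e POSTGRES_PASSWORD=secret postgres
docker run --rm --link pg:pg busybox env
# among other variables, this prints PG_PORT_5432_TCP_ADDR and PG_PORT_5432_TCP_PORT,
# which the application then has to read itself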
This is possible! It has actually been proposed by the CoreOS team; you can read more in the following blog post:
http://coreos.com/blog/Jumpers-and-the-software-defined-localhost/
Docker will soon allow you to start a container sharing the network namespace of another container; it will help with those scenarios (and in the short term, it will let you do what you suggest very easily).
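That feature now exists as --network container:<name>; a small sketch of the idea (container names and the password are illustrative):
docker run -d --name pg -e POSTGRES_PASSWORD=secret postgres
# share pg's network namespace: 127.0.0.1 inside this container is the database container
docker run --rm -it --network container:pg postgres \
  psql -h 127.0.0.1 -U postgres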
Project Atomic is also following this approach:
http://www.projectatomic.io/docs/inter-container-networking/
Geard uses iptables to enable containers to connect to each other. Network namespaces allow adding iptables rules to the network namespace of a container. The basic idea is to make remote endpoints appear as if they were local to a container. For example, the database container could be made to appear to be running locally inside the application container.
