How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag: publish a container's port(s) to the host.
--expose: expose a port or a range of ports.
--link: add a link to another container. This is a legacy feature of Docker and may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is exposed to all the networks the container is connected to. After a lot of testing and reading, I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific host IP address. I also tried making a custom overlay network, giving the container a static IPv4 address and setting the port in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, something like 80:10.8.0.3:80 (HOST_IP:HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795

No, container-to-container networking in Docker is one-size-fits-many. When two containers are on the same network and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort toward changing this.
This is handled in other projects like Kubernetes by offloading the networking to a CNI plugin, where various vendors support network policies. These may be implemented with iptables rules, eBPF code, some kind of sidecar proxy, etc. But it has to be done as the container networking is set up, and Docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created (see the sketch below). The application could also be configured to listen on the specific IP address of the network it trusts, but this requires injecting the trusted subnet and then looking up your container's IP in your entrypoint, which is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
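To make the docker events idea concrete, here is a rough sketch (the network name database, trusted subnet 10.8.0.0/24, and port 5432 are assumptions for illustration, and the DOCKER-USER chain only filters traffic that crosses the host's FORWARD chain):

# Sketch: on every container start, look up its IP on the "database"
# network and drop traffic to its port from outside the trusted subnet.
docker events --filter event=start --format '{{.ID}}' |
while read -r cid; do
  ip=$(docker inspect -f \
    '{{(index .NetworkSettings.Networks "database").IPAddress}}' "$cid" 2>/dev/null)
  [ -n "$ip" ] || continue
  iptables -I DOCKER-USER ! -s 10.8.0.0/24 -d "$ip" -p tcp --dport 5432 -j DROP
done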
Things that won't help:
Removing exposed ports: this won't help, since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and the host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: this doesn't impact container-to-container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
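For contrast, the Kubernetes-style network policy mentioned above looks roughly like this (pod labels and the port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-only-from-backend
spec:
  podSelector:
    matchLabels:
      role: db              # applies to pods labelled role=db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: backend # only these pods may connect
      ports:
        - protocol: TCP
          port: 5432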

The answer to this for me was to remove the -p option, as that binds the container's port to the host and makes it available outside the host.
If you don't specify any -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems the -p flag forces the port onto the host, binding the container's port to the host port specified.
In your example, if you don't use -p when starting container1, container1 would be available to the networks internet, email and database on all its ports, but not outside the host.
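A minimal illustration (the image name is made up; docker run can attach only one network at start, so the rest are connected afterwards):

# no -p/-P: nothing is reachable from outside the host, but other
# containers on these three networks can reach container1's ports
docker run -d --name container1 --network internet my-image:latest
docker network connect email container1
docker network connect database container1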

Related

Docker networking: why (in what scenarios) would you not just use "--network host" for "Host" mode networking?

This is a follow-up to an earlier question that I had asked: https://stackoverflow.com/questions/72046646/does-docker-persist-the-resolv-conf-from-the-physical-etc-resolv-conf-in-the-co
I've been testing with containers on two different machines using "--network host", and from that earlier thread, in that case the container is using a default host-mode network named "host" (?).
Since with "host" mode networking the container and the app inside it are basically on the same IP as the physical host where the container is running, under what (example) scenarios would you actually want to create a named "host" mode network and then have a container use it?
What would the advantages/differences be between using the custom/named "host" mode network vs. just using "--network host"?
It seems like both situations (using "--network host" vs. doing "docker network create xyz" where xyz is a named host network, and then running the container with "docker run --network xyz") would functionally be the same?
Sorry for the newbie question :( and thanks again in advance.
Jim
I don't think you can create a host-mode named network, and if you did, there'd be no reason to use it. If you need host networking – and you almost certainly don't – use docker run --net host or Compose network_mode: host.
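For reference, the Compose form is just this (a sketch; the service and image names are made up):

services:
  my-service:
    image: my-image:latest
    network_mode: host   # no ports: section; the app binds host ports directly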
But really, you don't need host networking.
With standard Docker networking, you can use docker run -p to publish individual ports out to the host. You can choose not to publish a given port, and you can remap ports. This also means that if, for example, you're running three services each with their own PostgreSQL server, there's no conflict over the single port 5432 (see the sketch below).
The cases where you actually need host networking are pretty limited. If an application listens on a very large number of ports, or doesn't listen on a predictable port, the docker run -p mechanism doesn't work well. If it needs to actively manage the host network, it needs to be given access to it (and it might be better run outside a container). If you've hard-coded localhost in your application, then in Docker your database usually isn't there (configuration via environment variables would be better).
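For instance, a sketch of that three-PostgreSQL case (the image tag and host ports are arbitrary choices):

services:
  db-a:
    image: postgres:15
    ports:
      - "5001:5432"   # host port 5001 -> container port 5432
  db-b:
    image: postgres:15
    ports:
      - "5002:5432"
  db-c:
    image: postgres:15
    ports:
      - "5003:5432"   # three servers, one container port, no conflict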

Disable IP forwarding if there is no port mapping definition in the docker-compose.yml file

I am learning Docker networking. I created a simple docker-compose file that starts two tomcat containers:
version: '3'
services:
  tomcat-server-1:
    container_name: tomcat-server-1
    image: .../apache-tomcat-10.0:1.0
    ports:
      - "8001:8080"
  tomcat-server-2:
    container_name: tomcat-server-2
    image: .../apache-tomcat-10.0:1.0
After I start the containers with docker-compose up, I can see that tomcat-server-1 responds on http://localhost:8001. At first glance, tomcat-server-2 is not available from localhost. That's great; this is what I need.
When I inspect the two running containers I can see that they use the following internal IPs:
tomcat-server-1: 172.18.0.2
tomcat-server-2: 172.18.0.3
I see that tomcat-server-1 is available from the host machine via http://172.18.0.2:8080 as well.
Then the following surprised me:
tomcat-server-2 is also available from the host machine via http://172.18.0.3:8080, even though no port mapping is defined for this container in the docker-compose.yml file.
What I would like to reach is the following:
The two tomcat servers must see each other in the internal docker network via hostnames.
Tomcat must be available from the host machine ONLY if the port mapping is defined in the docker-compose file, eg.: "8001:8080".
If no port mapping is defined, the container should NOT be available, either from localhost or via its internal IP, e.g. 172.18.0.3.
I have tried different network configurations: bridge, none, and host mode. No success.
Of course, host mode cannot work, because both tomcat containers use the same port 8080 internally. So if I am correct, only bridge or none mode can be considered.
Is it possible to configure the Docker network this way?
It would be great to solve this via the docker-compose file only, without any external docker, iptables, etc. manipulation.
Without additional firewalling setup, you can't prevent a Linux-native host from reaching the container-private IP addresses.
That having been said, the container-private IP addresses are extremely limited. You can't reach them from other hosts. If Docker is running in a Linux VM (as the Docker Desktop application provides on macOS or Windows), then the host outside the VM can't reach them either. In most cases I would recommend against looking up the container-private IP addresses at all, since they're not especially useful.
I wouldn't worry about this case too much. If your security requirements need you to prevent non-Docker host processes from contacting the containers, then you probably also have pretty strict controls over what's actually running on the host and who can log in; you shouldn't have unexpected host processes that might be trying to connect to the containers.
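If your requirements really did demand it, the "additional firewalling" would be a host-side rule of roughly this shape (a sketch only; 172.18.0.3 is the address from the question, and container IPs are not stable across restarts, which is part of why this is fragile):

# refuse host-originated traffic to the unmapped container's private IP
sudo iptables -I OUTPUT -d 172.18.0.3 -p tcp --dport 8080 -j REJECT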

Multiple docker containers with same container port connected to the same network

I have an app that depends on multiple Docker containers. I use docker compose so that all of them are on the same network for inter-container communication. However, two of my containers listen on the same port 8080 inside their respective containers, but are mapped to different ports on the host: 8072 and 8073. Since inter-container communication uses the container's port, will this cause problems?
Constraints:
I need both containers for my app to run, so I cannot isolate the container with the same internal port onto a different network.
All containers should run on the same host.
I am new to docker and I am not sure how to solve this.
Thanks
IIUC see the documentation here:
https://docs.docker.com/compose/networking
You need not expose each of the service's ports on the host unless you wish to access them from the host, i.e. outside of the docker-compose created network.
Ports must be unique per host but each service in your docker-compose created network can use the same port with impunity and is referenced by <service-name>:<port>.
In the Docker example, there could be two Postgres services. Each would need a unique name: db1 and db2, but both could use the same container port, 5432, and be uniquely addressable from the service called web (and from each other) as db1:5432 and db2:5432.
Each service effectively corresponds to a different host. So, as long as the ports are unique per service|host, you're good. And, as long as any ports you publish on the host are unique, you're good too.
Extending the example, db1 could publish port 9432:5432, but then db2 would need a different host port, perhaps 9433:5432.
Within the docker-compose created network, you would access db1 as db1:5432 and db2 as db2:5432.
From the host (outside the docker-compose created network), you would access db1 as localhost:9432 and db2 as localhost:9433.
NB It's likely good practice to publish service ports to the host only when those services must be accessible from outside (e.g. web probably must be exposed, but dbX probably need not be). You may wish to be more liberal in exposing service ports while debugging.
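A sketch of that db1/db2/web layout as a compose file (the image names and the web port are assumptions):

services:
  web:
    image: my-web-app:latest   # hypothetical image
    ports:
      - "8000:8000"            # only web must be published to the host
  db1:
    image: postgres:15
    ports:
      - "9432:5432"            # optional, e.g. for debugging from the host
  db2:
    image: postgres:15
    ports:
      - "9433:5432"
# within the network, web reaches them as db1:5432 and db2:5432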

I am migrating from local to remote Docker, can I discover the daemon's public IP?

I am using Docker Compose to deploy my applications. In my docker-compose.yml I have a container my-frontend which must know the public IP of the backend my-backend. The my-frontend image is a Node.js application which runs in the client's browser.
Before I did this:
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND=http://localhost:81
This works fine when I deploy to a local Docker daemon and when the client runs locally.
I am now migrating to a remote Docker daemon. In this situation, the client does not run on the same host as the Docker daemon any more. Hence, I need to alter the environment variable BACKEND in my-frontend:
environment:
  - BACKEND=http://<ip-of-daemon>:81
When I hardcode <ip-of-daemon> to the actual ip of the Docker daemon, everything is working fine. But I am wondering if there is a way to dynamically fill this in? So I can use the same docker-compose.yml for any remote Docker daemon.
With Docker Compose, your Docker containers will all appear on the same machine. Perhaps you are using tools like Swarm or Kubernetes in order to distribute your containers on different hosts, which would mean that your backend and frontend containers would indeed be accessible via different public IP addresses.
The usual way of dealing with this is to use a frontend proxy like Traefik on a single entry point. This means that from the browser's perspective, the IP address for your frontend and backend is the same. Internally, the proxy will use filtering rules to direct traffic to the correct LAN name. The usual approach is to use a URL path prefix like /backend/.
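As a sketch of that single-entry-point idea with Traefik (mirroring Traefik's own quickstart; the router rules and version are assumptions, not tested against your setup):

services:
  proxy:
    image: traefik:v2.11
    command: --providers.docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  my-backend:
    image: my-backend:latest
    labels:
      - traefik.http.routers.backend.rule=PathPrefix(`/backend`)
  my-frontend:
    image: my-frontend:latest
    labels:
      - traefik.http.routers.frontend.rule=PathPrefix(`/`)
# the browser sees one address; Traefik routes /backend to the backend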
You correctly mentioned in the comments that, assuming your frontend container is accessible on a static public IP, you could just internally proxy from there to your backend, using NginX. That should work just fine.
Either of these approaches will allow a single IP to appear to "share" ports - this resolves the problem of wanting to listen on the same IP on 80/443 in more than one container. You need to try to avoid non-standard ports for backend calls, since some networks can block them (e.g. mobile networks, corporate firewalled environments).
I am not sure what an alternative would be to those approaches. You can certainly obtain a machine's public IP if you can run code on the host, but if your container orchestration is sending containers to machines, the only code that will run is inside each container, and I don't believe public IP information is exposed there.
Update based on your use-case
I had initially assumed from your question that you were expecting your containers to spin up on arbitrary hosts in a Docker farm. In fact, your current approach confirmed in the comments is a number of non-connected Docker hosts, so whenever you deploy, your containers are guaranteed to share a public IP. I understand the purpose behind your question a bit better now - you were wanting to specify a base URL for your backend, including a fully-qualified domain, non-standard port, and URL path prefix.
As I indicated in the discussion, this is probably not necessary, since you are able to put a proxy URL path prefix (/backend) in your frontend NginX. This negates the need for a non-standard port.
If you wanted to specify a custom backend prefix (e.g. /backend/v1 to version your API) then you could do that in env vars in your Docker Compose config.
If you need to refer to the backend's fully-qualified address in your JavaScript for the purposes of connecting to AJAX/WebSocket servers, you can just derive this from window.location.host. In your dev env this will be a bare IP address, and in your remote envs, it sounds like you have a domain.
Addendum
Some of the confusion on this question was about what sort of IP addresses we are referring to. For example:
I believe that the public IP of my-backend is equal to the docker daemon's IP
Well, your Docker host has several IP addresses, and the public address is just one of them. For example, the virtual network interface docker0 is the LAN IP of your Docker host, and if you ask for the IP of your Docker host, that would indeed be a correct answer (though of course it is not accessible on the public internet).
In fact, I would say the LAN address belongs to the daemon (since Docker sets it up) and the public IP does not (it is a feature of the box, not Docker).
On any of your Docker hosts, try this command:
ifconfig docker0
That will give you some information about your host's IP, and is useful if a Docker container wishes to contact the host (e.g. if you want to connect to a service that is not running in a container). It is quite useful to pass this IP into a container as an env var, in order to allow this connection to take place.
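For example, a sketch of capturing that address and handing it to a container (the variable and image names are made up):

# grab the docker0 address and pass it into a container as an env var
HOST_IP=$(ip -4 addr show docker0 | awk '/inet /{print $2}' | cut -d/ -f1)
docker run -e HOST_IP="$HOST_IP" my-image:latest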
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND="${BACKEND_ENV}"
where BACKEND_ENV is an environment variable set to the Docker daemon's IP.
On the machine where docker-compose is executed, set the environment variable first:
export BACKEND_ENV="http://remoteip..."
Or just start the frontend pointing to the remote address:
docker run -p 80:80 -e BACKEND='http://remote_backend_ip:81' my-frontend:latest

Docker-Compose multiple networks, make ports available outside the host

I am currently deploying a Docker network with a backend and a frontend.
All containers are part of a network named basic, and one container should be accessible from outside the host machine.
When using docker-toolbox on Windows, it works fine: I can access all containers with forwarded ports from outside the host machine:
ports:
  - 8080:8080
My problem is that on Red Hat 7 I have so far not found a way to make them accessible without manipulating iptables. I can access all containers with mapped ports from inside my host machine, but to make them accessible from outside the host machine, I need to do:
sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
I think there should be an easier way to use Docker networks for this, right?
There was an external setting which was continuously resetting the forwarding.
It was nothing directly related to Docker (Compose).
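If something on the host keeps resetting it, persisting the sysctl is the usual fix (a sketch; the file name under /etc/sysctl.d is arbitrary):

# make the forwarding setting survive whatever keeps resetting it
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system   # re-apply all sysctl configuration files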
