How to access a host port from a Docker container when the port is bound to localhost

I have services defined in my own docker-compose.yaml file, and they
have their own bridged network to communicate with each other.
One of these services needs access to services running on the host machine.
According to this answer I added the following lines to my service within the docker-compose.yaml file:
extra_hosts:
  - "host.docker.internal:host-gateway"
This works, but only if the services running on the host bind to 0.0.0.0. If they bind to localhost, I'm not able to access them, yet I don't want to expose the port to anyone else.
Is there a way to achieve this with bridged network mode?
I'm using the following versions:
Docker version 20.10.5, build 55c4c88
docker-compose version 1.28.5, build unknown
and I'm running on aarch64
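For context, here is a minimal sketch of the setup described above; the service, image and network names are placeholders, not taken from the original file:

services:
  myservice:                # hypothetical service name
    image: myimage          # placeholder image
    networks:
      - mynet
    extra_hosts:
      - "host.docker.internal:host-gateway"

networks:
  mynet:                    # the user-defined bridge network
    driver: bridge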

The solution was just a misunderstanding on my part from other readings, e.g.:
another SO article
a Baeldung article
Since I explicitly defined an additional bridged network within my docker-compose.yaml file, I assumed that I had to bind the service on the host to the IP address of that particular interface (I checked the IP address of the container and then looked up the corresponding address in the host's interface list), which was 172.20.0.1.
But docker0 was 172.17.0.1 (which should be the default one).
After binding the service on the host to the docker0 IP address, and adding
extra_hosts:
  - "host.docker.internal:host-gateway"
to docker-compose.yaml, I was able to access it, while it remained blocked for everyone else.
The explanation why this works is probably, as explained here, that the IP route within each Docker container includes the docker0 IP address, even if you have your own network set up.
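As an illustration of the working arrangement, here is a minimal sketch assuming docker0 has its default address 172.17.0.1 and using a throwaway Python web server as the host-side service:

# on the host: bind a test service to the docker0 address only
python3 -m http.server 8000 --bind 172.17.0.1

# inside the container (with the extra_hosts entry from above):
curl http://host.docker.internal:8000

This lines up with the explanation above: host-gateway resolves to the default bridge gateway (172.17.0.1 by default), which is exactly the address the host service is bound to, so the port is reachable from containers but not from other machines.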
Please correct me in case I mixed something up.


How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add link to another container. Is a legacy feature of Docker. It may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is exposed to all the networks the container is connected to. Even after a lot of testing and reading, I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried creating a custom overlay network, giving the container a static IPv4 address and setting the port in ports: in that format, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_IP:HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container-to-container networking in Docker is one-size-fits-many. When two containers are on the same network, and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI, where various vendors support network policies. These may be implemented with iptables rules, eBPF code, some kind of sidecar proxy, etc. But it has to be done as the container networking is set up, and Docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created. The application could also be configured to listen on the specific IP address of the network it trusts, but this requires injecting the subnet you trust and then looking up your container's IP in your entrypoint; that is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components on less secure networks are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
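As a rough sketch of that entrypoint idea, under stated assumptions: the trusted subnet prefix is injected via a TRUSTED_SUBNET environment variable, and myserver with its --listen flag is a hypothetical stand-in for the real application:

#!/bin/sh
# Hypothetical entrypoint: select this container's IP on the trusted subnet
# (e.g. TRUSTED_SUBNET=172.21.0.) and listen only on that address.
BIND_IP=$(hostname -i | tr ' ' '\n' | grep "^${TRUSTED_SUBNET}" | head -n 1)
# myserver and --listen are placeholders for the actual application.
exec myserver --listen "${BIND_IP}:80"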
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
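One related knob that does exist: a published port can be bound to a specific host address, e.g. loopback, so it is not reachable from other machines (the service name and ports here are placeholders):

services:
  container1:
    ports:
      # forwarded only on the host's loopback interface
      - "127.0.0.1:8080:80"

This restricts the host-side port forward only; containers on a shared network can still reach each other regardless.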
The answer to this for me was to remove the -p option, as that binds the container's port to the host and makes it available outside the host.
If you don't specify any -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems that -p forces the container's port onto the host, binding it to the port specified.
In your example, if you don't use -p when starting container1, then container1 would be available to the networks internet, email and database with all its ports, but not outside the host.
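A quick way to see this, assuming a second container (here called container2, hypothetical) attached to one of the same networks, curl available inside it, and an application listening on port 80 in container1:

# container-to-container: works over a shared network without -p
docker exec container2 curl -s http://container1/

# from the host, with no published port this connection fails
curl -s http://localhost:80/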

Disable IP forwarding if no port mapping is defined in the docker-compose.yml file

I am learning Docker networking. I created a simple docker-compose file that starts two Tomcat containers:
version: '3'
services:
  tomcat-server-1:
    container_name: tomcat-server-1
    image: .../apache-tomcat-10.0:1.0
    ports:
      - "8001:8080"
  tomcat-server-2:
    container_name: tomcat-server-2
    image: .../apache-tomcat-10.0:1.0
After I start the containers with docker-compose up, I can see that tomcat-server-1 responds on http://localhost:8001. At first glance, tomcat-server-2 is not available from localhost. That's great; this is what I need.
When I inspect the two running containers I can see that they use the following internal IPs:
tomcat-server-1: 172.18.0.2
tomcat-server-2: 172.18.0.3
I see that the tomcat-server-1 is available from the host machine via http://172.18.0.2:8080 as well.
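For reference, one way to look these addresses up is with docker inspect and a Go template that prints each network's IP for the container:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tomcat-server-1
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tomcat-server-2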
Then the following surprised me:
tomcat-server-2 is also available from the host machine via http://172.18.0.3:8080, even though no port mapping is defined for this container in the docker-compose.yml file.
What I would like to achieve is the following:
The two Tomcat servers must see each other in the internal Docker network via hostnames.
Tomcat must be available from the host machine ONLY if a port mapping is defined in the docker-compose file, e.g.: "8001:8080".
If no port mapping is defined, then the container must NOT be available, either from localhost or via its internal IP, e.g.: 172.18.0.3.
I have tried different network configurations like bridge, none, and host mode, with no success.
Of course, host mode cannot work because both Tomcat containers internally use the same port 8080. So if I am correct, only bridge or none mode can be considered.
Is it possible to configure the Docker network this way?
It would be great to solve this via the docker-compose file only, without any external Docker, iptables, etc. manipulation.
Without additional firewalling setup, you can't prevent a Linux-native host from reaching the container-private IP addresses.
That having been said, the container-private IP addresses are extremely limited. You can't reach them from other hosts. If Docker is running in a Linux VM (as the Docker Desktop application provides on macOS or Windows) then the host outside the VM can't reach them either. In most cases I would recommend against looking up the container-private IP addresses at all, since they're not especially useful.
I wouldn't worry about this case too much. If your security requirements need you to prevent non-Docker host processes from contacting the containers, then you probably also have pretty strict controls over what's actually running on the host and who can log in; you shouldn't have unexpected host processes that might be trying to connect to the containers.

Private addressable IP for Docker-Compose like Vagrant

Problem
I'm using Docker-Compose and want to set a locally-addressable IP (such as 10.1.1.100) for one of the containers. This IP is not on my host machine's subnet.
Vagrant style
In a similar Vagrant project, there's a line:
config.vm.network :private_network, ip: "10.1.2.100"
This works great in that project. I can target the machine at 10.1.2.100 as if it's an available IP on my network. I don't even have to create a subnet.
Question
I've been looking for how I'd setup a container with a locally-addressable IP with Docker (specifically Docker-Compose), but haven't been able to get it working.
Failed configurations
I've tried adding networks and assigning a static IP with ipv4_address: 10.1.1.100. Sadly, it seems this entire network is only accessible via Docker itself, not from the host machine.
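For clarity, a sketch of the kind of configuration attempted here; the service name, network name and subnet are assumptions, only the ipv4_address comes from the text:

services:
  myservice:                    # hypothetical service name
    networks:
      mynet:
        ipv4_address: 10.1.1.100

networks:
  mynet:                        # hypothetical network name
    ipam:
      config:
        - subnet: 10.1.1.0/24   # assumed subnet containing the address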
If I try to use ports to expose an IP as 10.1.1.100:80:80, I get this error instead:
Cannot start service SERVICE_NAME: Ports are not available: listen tcp 10.1.1.100:80: bind: can't assign requested address.
But this works fine if I simply put 80:80. So it must be the IP binding that causes this issue.
I also tried setting network_mode on only this service and neither host nor bridge worked correctly.
Lastly, I found I could add to driver_opts:
com.docker.network.bridge.host_binding_ipv4: "10.1.1.100"
This made it impossible to start the container, failing with an error similar to the one I received when using the ports method.

How to assign the host IP to a service that is running with docker compose

I have several services specified inside a docker-compose file that are communicating with each other via links. Now I want one of these services to talk to the outside world and fetch some data from another server on the host network. But the Docker service uses its internally assigned IP address, which leads to the firewall of the host network blocking its requests. How can I tell this Docker service to use the IP address of the host instead?
EDIT:
I got a step further: what I'm looking for is the network_mode option with the value host. But the problem is that network_mode: "host" cannot be mixed with links. So I guess I have to change the configuration of all the Docker services to not use links. I will try out how this works.
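For reference, a sketch of what that change might look like; the service and image names are placeholders:

services:
  fetcher:                  # hypothetical name for the outward-facing service
    image: myimage          # placeholder image
    network_mode: host      # shares the host's network stack and IP
  # note: a host-mode service cannot also use links or user-defined
  # networks, so the other services must drop their links to it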
You should map a port to that service, like:
ports:
  - "8000:8000"
The 8000 on the left is the host port and the 8000 on the right is the container port.

How to set up container using Docker Compose to use a static IP and be accessible outside of VM host?

I'd like to specify a static IP address for my container in my docker-compose.yml, so that I can access it e.g. using https://<ip-address> or ssh user@<ip-address> from outside the host VM.
That is, making it possible for other machines on my company network to access the Docker container directly, on a specific static IP address. I do not wish to map specific ports, I wish to be able to access the container directly.
A starting point for the docker-compose.yml:
master:
  image: centos:7
  container_name: master
  hostname: master
Is this possible?
I'm using the Virtualbox driver, as I'm on OS X.
So far it is not possible; it will be in Docker v1.10 (which should be released in a couple of weeks from now).
Edit:
See the PR on GH.
I believe an extra_hosts entry is a solution.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
See extra_hosts.
Edit:
As pointed out by M. Auzias in the comments, I misunderstood the question. This answer is incorrect.
You could specify the IP address of the container with the --ip parameter when running it; that way the IP would always be the same for the container. After that you could ssh to your host VM, and then attach to the container.
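For illustration, --ip only works on a user-defined network with an explicit subnet; the network name and subnet below are assumptions:

# create a user-defined network with a known subnet (names assumed)
docker network create --subnet 10.1.2.0/24 mynet
# run the container with a fixed address on that network
docker run -d --name master --net mynet --ip 10.1.2.100 centos:7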
Otherwise, I'm not sure... Maybe try to run the container with --net=host.
From https://docs.docker.com/engine/userguide/networking/dockernetworks/
The host network adds a container on the host's network stack. You'll find the network configuration inside the container is identical to the host.
