disable IP forwarding if no port mapping definition in docker-compose.yml file - docker

I am learning docker network. I created a simple docker-compose file that starts two tomcat containers:
version: '3'
services:
  tomcat-server-1:
    container_name: tomcat-server-1
    image: .../apache-tomcat-10.0:1.0
    ports:
      - "8001:8080"
  tomcat-server-2:
    container_name: tomcat-server-2
    image: .../apache-tomcat-10.0:1.0
After I start the containers with docker-compose up, I can see that tomcat-server-1 responds on http://localhost:8001. At first glance, tomcat-server-2 is not available from localhost. That's great; this is what I need.
When I inspect the two running containers I can see that they use the following internal IPs:
tomcat-server-1: 172.18.0.2
tomcat-server-2: 172.18.0.3
I see that tomcat-server-1 is available from the host machine via http://172.18.0.2:8080 as well.
Then the following surprised me:
tomcat-server-2 is also available from the host machine via http://172.18.0.3:8080, despite no port mapping being defined for this container in the docker-compose.yml file.
What I would like to reach is the following:
The two tomcat servers must see each other in the internal Docker network via hostnames.
Tomcat must be available from the host machine ONLY if the port mapping is defined in the docker-compose file, e.g. "8001:8080".
If no port mapping is defined, then the container must NOT be available, either from localhost or via its internal IP (e.g. 172.18.0.3).
I have tried to use different network configurations like the bridge, none, and host mode. No success.
Of course, host mode cannot work here because both tomcat containers use the same port 8080 internally. So if I am correct, only bridge or none mode can be considered.
Is that possible to configure the docker network this way?
That would be great to solve this via only the docker-compose file without any external docker, iptable, etc. manipulation.

Without additional firewalling setup, you can't prevent a Linux-native host from reaching the container-private IP addresses.
That having been said, the container-private IP addresses are extremely limited. You can't reach them from other hosts. If Docker is running in a Linux VM (as the Docker Desktop application provides on macOS or Windows), then the host outside the VM can't reach them either. In most cases I would recommend against looking up the container-private IP addresses at all, since they're not especially useful.
I wouldn't worry about this case too much. If your security requirements need you to prevent non-Docker host processes from contacting the containers, then you probably also have pretty strict controls over what's actually running on the host and who can log in; you shouldn't have unexpected host processes that might be trying to connect to the containers.
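If you do decide to add that additional firewalling anyway, it happens on the Linux host, outside of Compose. The sketch below is hypothetical and untested: the subnet is the one shown above (yours may differ; check docker network inspect), and host-side filtering interacts with Docker's own NAT rules, so the host's access to its published ports can be affected too. Test carefully before relying on anything like this.

```shell
# Hypothetical sketch: drop new connections from host processes straight to
# the compose network's subnet (assumed here to be 172.18.0.0/16).
# Caveat: host traffic to published ports is DNATed to the same subnet before
# this filter runs, so this rule can also block localhost:8001 on the host.
iptables -I OUTPUT -d 172.18.0.0/16 -m conntrack --ctstate NEW -j DROP
```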

Related

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add a link to another container. This is a legacy feature of Docker and may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is exposed to all the networks the container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, it is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address, and setting the ports in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, maybe something like 10.8.0.3:80 on the container side (HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container to container networking in docker is one-size-fits-many. When two containers are on the same network, and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI where various vendors support networking policies. This may be iptables rules, eBPF code, some kind of sidecar proxy, etc to implement it. But it has to be done as the container networking is setup, and docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run iptables commands against containers after they've been created. The application could also be configured to listen only on the IP address of the network it trusts, but this requires injecting the trusted subnet and then looking up your container's IP in your entrypoint; that is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
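As a sketch of that docker events idea (hypothetical and untested; the actual firewall rule is entirely policy-specific and is left as a placeholder):

```shell
# Watch for container start events and react to each new container.
docker events --filter event=start --format '{{.ID}}' |
while read -r id; do
  # Look up the container's IP address(es) across its networks.
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' "$id")
  echo "container $id started with IP(s): $ip"
  # iptables -I DOCKER-USER ...   # placeholder: rules depend on your policy
done
```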
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the host file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this for me was to remove the -p option, as that is what binds a container port to the host and makes it available outside the host.
If you don't specify -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
In your example, if you don't use -p when starting container1, it will be available to the internet, email and database networks on all its ports, but not outside the host.
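To illustrate that distinction in compose terms, here is a sketch (image names are placeholders):

```yaml
services:
  container1:
    image: my-app:latest          # placeholder image
    networks: [internet, email, database]
    # no "ports:" entry: reachable from these three networks only,
    # on whatever ports the application listens on
  container2:
    image: my-app:latest          # placeholder image
    networks: [database]
    ports:
      - "8080:80"                 # published: also reachable from the host
networks:
  internet:
  email:
  database:
```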

How to access host port from docker container which is bound to localhost

I have services defined in my own docker-compose.yaml file, and they
have their own bridged network to communicate with each other.
One of this services needs access to services running on the host machine.
According to this answer I added the following lines to my service within the docker-compose.yaml file:
extra_hosts:
  - "host.docker.internal:host-gateway"
This works, although the services running on the host need to bind to 0.0.0.0. If I bind to localhost, I'm not able to access them. But I don't want to expose the port to anyone else.
Is there a way to achieve this with bridged network mode?
I'm using the following versions:
Docker version 20.10.5, build 55c4c88
docker-compose version 1.28.5, build unknown
and I'm running on aarch64
The solution was just a misunderstanding from other readings, e.g.
another SO article
a baeldung article
As I explicitly defined an additional bridged network within my docker-compose.yaml file, I assumed that I had to bind the service on the host to the IP address of that particular interface (I checked the IP address of the container and then looked up the address in the host's interface list), which was 172.20.0.1.
But docker0 was 172.17.0.1 (which should be the default one).
After binding the service on the host to the docker0 IP address, and adding
extra_hosts:
  - "host.docker.internal:host-gateway"
to `docker-compose.yaml`, I was able to access it, while it remained blocked for anyone else.
The explanation for why this works is probably, as explained here, that the IP route within each Docker container includes the docker0 IP address, even if you have your own network set up.
Please correct me in case I mixed something up.
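For anyone trying to reproduce this, the host-side steps look roughly like the following sketch (172.17.0.1 is only the usual default for docker0, and the Python test server is a stand-in for the real host service):

```shell
# Find the docker0 address on the host (commonly 172.17.0.1):
ip -4 addr show docker0

# Bind the host service to that address only; here a stand-in test server:
python3 -m http.server 8000 --bind 172.17.0.1
# Containers can now reach it as host.docker.internal:8000 (given the
# extra_hosts entry), while the service is not bound on other interfaces.
```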

I am migrating from local to remote Docker, can I discover the daemon's public IP?

I am using Docker Compose to deploy my applications. In my docker-compose.yml I have a container my-frontend which must know the public IP of the backend my-backend. The image my-frontend is NodeJS application which runs in the client's browser.
Before I did this:
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND=http://localhost:81
This works fine when I deploy to a local Docker daemon and when the client runs locally.
I am now migrating to a remote Docker daemon. In this situation, the client does not run on the same host as the Docker daemon any more. Hence, I need to alter the environment variable BACKEND in my-frontend:
environment:
  - BACKEND=http://<ip-of-daemon>:81
When I hardcode <ip-of-daemon> to the actual ip of the Docker daemon, everything is working fine. But I am wondering if there is a way to dynamically fill this in? So I can use the same docker-compose.yml for any remote Docker daemon.
With Docker Compose, your Docker containers will all appear on the same machine. Perhaps you are using tools like Swarm or Kubernetes in order to distribute your containers on different hosts, which would mean that your backend and frontend containers would indeed be accessible via different public IP addresses.
The usual way of dealing with this is to use a frontend proxy like Traefik on a single entry point. This means that from the browser's perspective, the IP address for your frontend and backend is the same. Internally, the proxy will use filtering rules to direct traffic to the correct LAN name. The usual approach is to use a URL path prefix like /backend/.
You correctly mentioned in the comments that, assuming your frontend container is accessible on a static public IP, you could just internally proxy from there to your backend, using NginX. That should work just fine.
Either of these approaches will allow a single IP to appear to "share" ports - this resolves the problem of wanting to listen on the same IP on 80/443 in more than one container. You need to try to avoid non-standard ports for backend calls, since some networks can block them (e.g. mobile networks, corporate firewalled environments).
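In compose terms, the single-entry-point idea looks roughly like this sketch (it assumes the nginx inside my-frontend is configured to proxy /backend/ to http://my-backend:80):

```yaml
services:
  my-frontend:
    image: my-frontend:latest
    ports:
      - "80:80"      # the only published port; nginx inside proxies /backend/
  my-backend:
    image: my-backend:latest
    # no published port: reached only via the compose network as "my-backend"
```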
I am not sure what an alternative would be to those approaches. You can certainly obtain a machine's public IP if you can run code on the host, but if your container orchestration is sending containers to machines, the only code that will run is inside each container, and I don't believe public IP information is exposed there.
Update based on your use-case
I had initially assumed from your question that you were expecting your containers to spin up on arbitrary hosts in a Docker farm. In fact, your current approach confirmed in the comments is a number of non-connected Docker hosts, so whenever you deploy, your containers are guaranteed to share a public IP. I understand the purpose behind your question a bit better now - you were wanting to specify a base URL for your backend, including a fully-qualified domain, non-standard port, and URL path prefix.
As I indicated in the discussion, this is probably not necessary, since you are able to put a proxy URL path prefix (/backend) in your frontend NginX. This negates the need for a non-standard port.
If you wanted to specify a custom backend prefix (e.g. /backend/v1 to version your API) then you could do that in env vars in your Docker Compose config.
If you need to refer to the backend's fully-qualified address in your JavaScript for the purposes of connecting to AJAX/WebSocket servers, you can just derive this from window.location.host. In your dev env this will be a bare IP address, and in your remote envs, it sounds like you have a domain.
Addendum
Some of the confusion on this question was about what sort of IP addresses we are referring to. For example:
I believe that the public IP of my-backend is equal to the docker daemon's IP
Well, your Docker host has several IP addresses, and the public address is just one of them. For example, the virtual network interface docker0 is the LAN IP of your Docker host, and if you ask for the IP of your Docker host, that would indeed be a correct answer (though of course it is not accessible on the public internet).
In fact, I would say the LAN address belongs to the daemon (since Docker sets it up) and the public IP does not (it is a feature of the box, not Docker).
In any of your Docker hosts, try this command:
ifconfig docker0
That will give you some information about your host's IP, and is useful if a Docker container wishes to contact the host (e.g. if you want to connect to a service that is not running in a container). It is quite useful to pass the IP herein into a container as an env var, in order to allow this connection to take place.
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND="${BACKEND_ENV}"
where BACKEND_ENV is an environment variable set to the Docker daemon's IP.
On the machine where docker-compose is executed, set the environment variable beforehand:
export BACKEND_ENV="http://remoteip..."
Or just start the frontend pointing at the remote address:
docker run -p 80:80 -e BACKEND='http://remote_backend_ip:81' my-frontend:latest

Make docker machine available under host name in Windows

I'm trying to make a docker machine available to my Windows by a host name. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a docker container that exposes the port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address then it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the Docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the --expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker run. This tells Docker which ports to open on the container's network interface. When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime. You cannot specify the port to map to on the host machine when you build the image (in the Dockerfile), because there is no way to guarantee that the port will be available on the host machine where you run the image.
I also suggest reviewing the -P flag as it differs from the -p one.
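To make the -p / -P distinction concrete, here is a sketch using the stock nginx image:

```shell
# -p maps an explicit host port to a container port:
docker run -d --name web1 -p 8080:80 nginx    # host 8080 -> container 80

# -P publishes every EXPOSEd port onto a random high host port:
docker run -d --name web2 -P nginx

# Inspect what was actually mapped:
docker port web2
```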
Also, I suggest you try "Kitematic" for Windows or Mac, https://kitematic.com/. It's much simpler (but don't forget to commit after any changes!).
Now, concerning the network in your company: it has nothing to do with Docker. As long as you're using Docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network config in order to expose things to your local host; it all comes by default if you're using VirtualBox (adapter 1 is NAT and adapter 2 is host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it route to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain and all developers are on the same network and you are able to control DHCP/DNS, you can setup your own DNS server to include a private route back to 127.0.0.1. Similar concept to the Public DNS option, but a little more brittle since you might allow your devs to work remote, outside of a controlled network.
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on Windows, you run containers with Hyper-V, and networking automatically maps ports to localhost so you can connect to http://localhost. This won't work with docker-machine, since it spins up VirtualBox VMs without the localhost mapping.
If you don't want to configure your hosts file, DNS, and can't use a newer version of docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, along with your IP address, back to that same IP address. This lets you use things like a hostname based reverse proxy to connect to multiple containers inside of docker behind the same port.
One last option is to run your docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the host file once, create a DNS entry for every dev, or use the same xip.io URL, to access the containers each time.
If you're on a machine with Multicasting DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine vbox. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different Virtualbox VMs for different projects for my work, which keeps a nice separation of concerns (prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.)
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file :
X.X.X.X mymachine # Replace X.X.X.X with the IP of your docker machine
You could also set up a DNS server on your local network if your app is meant to be reachable from your coworkers at your workplace and if your windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could also set up a Linux server to avoid using docker-machine on Windows, but you would still have to set up a static IP for this server to ensure that your domain name resolution works.)
You could also buy your own domain name (or get a free one) and assign it your docker-machine's IP if you don't have rights to write in your hosts file.
But these solutions may not work anymore after some time if the app host doesn't have a static IP and your docker-machine IP changes. Not setting up a static IP doesn't imply it will automatically change, though; there should be some persistence if you don't erase the machine to create a new one, but that wouldn't be guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machine to use this one.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer to the same port (e.g. multiple web-services).
nginx-proxy runs separately from your service and listens to docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be redirected to your service. Therefore you either need to start nginx-proxy with the DEFAULT_HOST flag or send the desired host as a header param with the request.
As I am running this only with plain Docker, I don't know if it works with docker-machine, though.
If you go for this option, you can decide for a certain domain (e.g. .docker) to be completely resolved to localhost. This can be either done company-wide by DNS, locally with hosts file or an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker nginx-proxy will route to the container that has then ENV VIRTUAL_HOST=service1.docker. This is really convenient, because it only needs one-time setup and is from then on dynamic.
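A minimal compose sketch of that setup (it assumes the commonly used jwilder/nginx-proxy image; the service image and VIRTUAL_HOST value are placeholders):

```yaml
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy watch docker events
  service1:
    image: my-service1:latest                      # placeholder image
    environment:
      - VIRTUAL_HOST=service1.docker               # requests for this host route here
```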

How to assign the host ip to a service that is running with docker compose

I have several services specified inside a docker compose file that are communicating with each other via links. Now I want one of these services to talk to the outside world and fetch some data from another server in the host network. But the Docker service uses its internally assigned IP address, which leads to the firewall of the host network blocking its requests. How can I tell this Docker service to use the IP address of the host instead?
EDIT:
I got a step further. What I'm looking for is the network_mode option with the value host. But the problem is that network_mode: "host" cannot be mixed with links. So I guess I have to change the configuration of all the Docker services to not use links. I will try how this works out.
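Replacing links with a user-defined network is usually mechanical; a sketch (service and image names are placeholders):

```yaml
# Instead of "links:", attach both services to a shared network;
# compose service names then resolve via the network's built-in DNS.
services:
  app:
    image: my-app:latest       # placeholder image
    networks: [backend]
  db:
    image: postgres:15         # example image
    networks: [backend]
networks:
  backend:
```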
You should publish a port for that service:
ports:
  - 8000:8000
The 8000 on the left is the host port and the 8000 on the right is the container port.
