Using two host IPs with docker-compose

I've googled this to death.
I want one container bound to one host IP/port, the other container on a different one, for various reasons.
I have (snipped out some info)
container1:
  network_mode: host
  ports:
    - 192.168.1.224:80:80
container2:
  network_mode: host
  ports:
    - 192.168.1.225:80:80
Docker actually starts up, but when I visit each IP in the browser, both URLs return container1's content.
Has anyone done this? All I can find online is mostly related to docker and not docker-compose (starting docker with some arguments), or people arguing it should be done another way.

Delete the network_mode: host setting: it's getting in your way here.
Specifying network_mode: host bypasses all of Docker's normal networking setup. The ports: setting has no effect. Each process here sees both of your host interfaces, and presumably tries to bind to both of them. If you use the default network_mode: bridge, each container gets an isolated network stack, and you can use ports: as you've done to selectively expose containers to specific interfaces.
network_mode: host is really only appropriate in a couple of specific cases: if your server process listens on thousands of ports, or its port is unpredictable, or if you actually need to inspect the host's network setup but can't run your process directly on the host.
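A minimal sketch of what that could look like (the image names are placeholders for the details snipped out of the question), using the default bridge networking and publishing each container on a single host address:
services:
  container1:
    image: example/app1   # placeholder for the snipped image
    ports:
      - 192.168.1.224:80:80   # published only on the first host IP
  container2:
    image: example/app2   # placeholder for the snipped image
    ports:
      - 192.168.1.225:80:80   # published only on the second host IP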

Related

How to map extra host in docker-compose to container gateway?

I have a case where I need to call an external API from a docker container, but I can only reach it by its URL. To do that I'm mapping this URL to the container gateway, and that works, but I need it to be dynamic, because I need to run this docker-compose on different devices and, from what I see, the gateways differ.
version: '3'
services:
  pdf-service:
    image: $IMAGE:latest
    container_name: pdf-$LOCALE
    environment:
      - NODE_ENV=production
      - LOCALE=${LOCALE}
      - API_URL=${API_URL}
    extra_hosts:
      - ${API_HOST}:172.24.0.1
    tty: true
    restart: always
    ports:
      - ${PORT}:8124
At the moment I've hardcoded it, as you can see - 172.24.0.1, which is the container's gateway. I've found out about something like host.gateway, but have no idea how to use it correctly. Also I've read that it doesn't work in production? My production environment is Debian 10 with Docker v18.09.1 and docker-compose v1.21.0.
The IP 172.24.0.1 is internal to Docker. When you add an extra host you need to map a name to a public IP.
From your host run ping <api_host> to get the public IP. Use that IP in your docker-compose.yml instead of 172.24.0.1.
If the service you want to reach also runs in Docker, then it must publish a port on the host in order for you to reach it. In that case the IP you need is the local network IP of your host (or the public one, if for some reason you need that).
If you had to reach an IP that is Docker-internal then you wouldn't have to declare an extra_host at all. You would just put the containers/services on the same network and have them refer to each other by name.
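If you want to keep it dynamic across devices, one possible approach (just a sketch; API_HOST_IP is a made-up variable, and it assumes the machine running docker-compose can resolve the API host itself) is to look the address up at deploy time and pass it through the environment:
# resolve the public IP of the API host on the machine running docker-compose
export API_HOST_IP=$(getent hosts "$API_HOST" | awk '{ print $1 }')
docker-compose up -d
and then reference it in docker-compose.yml instead of the hardcoded gateway:
extra_hosts:
  - ${API_HOST}:${API_HOST_IP}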
I think you can use networks: a container can join many networks, and if two containers are on the same network, they can find each other by hostname (the service name).

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add link to another container. Is a legacy feature of Docker. It may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is then exposed to all the networks that container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address and setting the ports in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, I think because the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_IP:HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container to container networking in docker is one-size-fits-many. When two containers are on the same network, and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI where various vendors support networking policies. This may be iptables rules, eBPF code, some kind of sidecar proxy, etc to implement it. But it has to be done as the container networking is setup, and docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands for containers after they've been created. The application could also be configured to listen on the specific IP address for the network it trusts, but this requires injecting the subnet you trust and then looking up your container IP in your entrypoint, non-trivial to script up, and I'm not even sure it would work. Otherwise, this is solved by either restructuring the application so components that need to be on a less secure network are minimized, by hardening the sensitive ports, or switching the runtime over to something like Kubernetes with a network policy.
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the host file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this, for me, was to remove the -p option, as that binds the container to the host and makes it available outside the host.
If you don't specify -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems -p forces the container onto the host and binds it to the specified port.
In your example, if you don't use -p when starting container1, it will be available to the internet, email and database networks on all of its ports, but not outside the host.
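To make that concrete, a minimal sketch (the image name is a placeholder): with no ports: entry, nothing is published on the host, but any container on a shared network can still reach every port the application listens on.
services:
  container1:
    image: example/app   # placeholder image
    # no ports: section, so nothing is reachable from outside the host
    networks:
      - internet
      - email
      - database
networks:
  internet:
  email:
  database: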

Do I have to expose the port if I am using the ports config?

Do I have to expose the port if I am using the ports config?
In the docker-compose.yml below, do I have to keep expose 2022 or can I remove it? Is there a difference between them?
myproject-app:
  build: ../myproject-app
  container_name: myproject-app
  image: myproject-app
  expose:
    - 2022
  ports:
    - 2022:2022
  volumes:
    - ../myproject-app/:/home/myproject/myproject-app/
    - /home/myproject/myproject-app/node_modules
Exposed ports are intended for other services: if you want to connect two services inside Docker, the service with the exposed port is meant to be reachable inside the Docker network.
The ports: property makes a service port available on your host machine, so you can connect to it from your OS network.
So you can delete the expose: property.
"Expose" means basically nothing in modern Docker. There is pretty much no reason to put an expose: line in your docker-compose.yml file. It's considered polite to include an EXPOSE line in your Dockerfile to document what port(s) your service will listen on, but it's not strictly necessary.
In modern Docker, with named networks, any container can connect to any port of any other container on the same network, provided a process is listening there. Before there were named networks, one container had to explicitly "link" to another to be able to call it, and then it could only reach the exposed ports of the target container. This setup is considered obsolete now (you never need links: either).
Plain Docker has an option (docker run -P, with a capital P) to publish all exposed ports on random host ports. Compose doesn't have an equivalent option. Ports that are exposed but not published also show up in the docker ps output. But those are really the only things "expose" does at all.
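So the same service could simply drop the expose: block (a sketch based on the file above; behaviour is unchanged):
myproject-app:
  build: ../myproject-app
  container_name: myproject-app
  image: myproject-app
  ports:
    - 2022:2022
  volumes:
    - ../myproject-app/:/home/myproject/myproject-app/
    - /home/myproject/myproject-app/node_modules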

Docker-compose: Docker containers can't connect using service names

I have 3 containers. One is a lighttpd server serving static content (front). I have 2 flask servers handling the backend (back and model)
This is my docker-compose.yml
version: "3"
services:
front:
image: ecd3:latest
ports:
- 4200:80
tty: true
links:
- "back"
depends_on:
- back
networks:
- mynet
back:
image: esd3:latest
ports:
- 5000:5000
links:
- "model"
depends_on:
- model
networks:
- mynet
model:
image: mok:latest
ports:
- 5001:5001
networks:
- mynet
networks:
mynet:
I'm trying to send an http request to my flask server (back) from my frontend (front). I have bound the flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint)
Trying to curl the flask server inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the flask server from inside the frontend container works. This means that the connection must have been established.
Why can't I connect to my flask server from my frontend?
We discovered several things in the comments. Firstly, that you had a proxy problem that prevented one container from using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, and so front and back are not resolvable there. Even if they were (e.g. if manual entries were made in the hosts file) those IPs would not work, since there is no direct networking route from the host to the containers.
Instead, those containers are exposed by a Docker proxy that forwards connections from a host port down to the containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may have some CORS issues with that though, since browsers will see these as different servers.
Moreover, if you go live, you may have some problems with mobile networks and corporate firewalls - you will probably want your frontend app to sit on port 443, but since the API is a separate server, you will either need a different IP address for it so it can also go on 443, or you will need to use another port. A clean solution for this is to put a frontend proxy in front of both containers, and then only expose the proxy in Docker. It will route HTTP requests from the outside to the correct container, depending on filtering criteria you define. I recommend Traefik for this, but there are undoubtedly several other approaches.
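For illustration, a rough sketch of that proxy approach using Traefik v2 (the image tag, router names and the /api path split are assumptions, not something from the question):
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - 80:80   # the only port published on the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.front.rule=PathPrefix(`/`)
      - traefik.http.services.front.loadbalancer.server.port=80
  back:
    image: esd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.back.rule=PathPrefix(`/api`)
      - traefik.http.services.back.loadbalancer.server.port=5000
With this, the frontend can call the API on the same origin (e.g. /api/endpoint), which also sidesteps the CORS problem mentioned above.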

How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?

In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup is getting its own IP range, so that in theory I could call my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces)
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It'd be great to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose knows both which network it created and its IP, so this shouldn't be a problem; however, I couldn't find an easy way to do it. Is there a way, or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, exposed as "8080:8080" and "3306:3306" respectively. Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is published on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on 172.18.0.0/16. In this case I can reach my application server on all of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I now start the second application, it creates a second network app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there were a way to restrict each application to its own network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change the port mappings to 8081 and 3307 respectively to run both applications at the same time.
In your service configuration, in docker-compose.yml:
ports:
- "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above runs nginx on the loopback interface. You can use a similar port mapping inside of a docker-compose.yml file. e.g.:
ports:
- "127.0.0.1:80:80"
docker-compose doesn't have any special abilities to infer which network interface to use based on the docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to be for a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty of implementing your goal, most people either map different host ports to the different containers, e.g. 13307:3307 for container a, 23307:3307 for container b, 33307:3307 for container c, or whatever numbering scheme makes sense for them. And when dealing with HTTP traffic, a reverse proxy like Traefik often makes the most sense.
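As a sketch of that per-project numbering (service names, images and port numbers are just examples), each project's compose file publishes non-overlapping host ports:
# my-project
services:
  app:
    image: example/app
    ports:
      - 18080:8080   # host 18080 -> container 8080
  db:
    image: mysql
    ports:
      - 13306:3306   # host 13306 -> container 3306
# my-other-project
services:
  app:
    image: example/app
    ports:
      - 28080:8080
  db:
    image: mysql
    ports:
      - 23306:3306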
It can be achieved by configuring a network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to pick a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which address the service you are connecting to binds on, which is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml:
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note that if you have another service binding to *:80 (for instance Apache running on the host), it will also bind on the docker-compose networks' interfaces, and you will not be able to use this port.
To run above two services:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
