I want my container to use a specific Ethernet card. How do I define this in a docker-compose file so that the container uses a specific IP?
For inbound connections, there's an optional bind-address component in the ports: mapping:
version: '3'
services:
  postgres:
    image: 'postgres:9.3'
    ports:
      - '10.20.30.40:5432:5432' # only listens on this host address
For outbound connections (as with processes running directly on the host) this is harder to control directly, and it is subject to the host's usual routing rules.
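Compose has no direct equivalent of ports: for the outbound side; the source interface is whatever the host's routing table picks when the container's traffic is NATed out. A quick way to see which device and source address the host would choose for a given destination (the destination here is only illustrative):

ip route get 8.8.8.8   # prints the outgoing device and the src address the host will use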
Related
I have a case where I need to call an external API from a docker container, but I can only do that by its URL. In order to do that I'm mapping this URL to the container's gateway, and that works, but I need it to be dynamic, because I need to run this docker-compose on different devices and, from what I see, the gateways differ.
version: '3'
services:
  pdf-service:
    image: $IMAGE:latest
    container_name: pdf-$LOCALE
    environment:
      - NODE_ENV=production
      - LOCALE=${LOCALE}
      - API_URL=${API_URL}
    extra_hosts:
      - ${API_HOST}:172.24.0.1
    tty: true
    restart: always
    ports:
      - ${PORT}:8124
At the moment I've hardcoded it, as you can see: 172.24.0.1, which is the container's gateway. I've found out about something like host-gateway, but have no idea how to use it correctly. Also, I've read that it doesn't work in production? My production environment is Debian 10 with Docker v18.09.1 and docker-compose v1.21.0.
The IP 172.24.0.1 is internal to Docker. When you add an extra host you need to map a name to a public IP.
From your host run ping <api_host> to get the public IP. Use that IP in your docker-compose.yml instead of 172.24.0.1.
If the service you want to reach also runs in Docker, then it must expose a port on the host in order for you to reach it. So the IP you need is the local network IP of your host (or the public one, if for some reason you need that).
If you had to reach an IP that is internal to Docker, then you wouldn't have to declare an extra_host at all. You would just put the containers/services on the same network and have them refer to each other by name.
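If the Docker engine is 20.10 or newer, the special string host-gateway can be used in extra_hosts so the entry follows the host's gateway automatically; on older engines (such as the Docker 18.09 mentioned in the question), a per-device variable is the portable fallback. A minimal sketch, assuming a variable named API_IP that each device sets to the address obtained from the ping above:

services:
  pdf-service:
    extra_hosts:
      # Docker 20.10+ only: resolves to the host gateway automatically
      # - "${API_HOST}:host-gateway"
      # older engines: each device supplies its own address (API_IP is an assumed variable)
      - "${API_HOST}:${API_IP}"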
I think you can use networks: a container can join many networks, and if two containers are on the same network, they can find each other by hostname (the service name).
I run pihole on my RPi behind an nginx reverse proxy, along with several other proxied containers. I want to:
map the port 80 of the pihole container to an internal-only network (that nginx proxies to public port 80)
map the port 53 (DNS) to the default network (so that it's publicly available).
By default all ports are published on all networks the container is part of, which I'm trying to avoid. In essence I'd like to do this:
version: '3'
services:
  pihole:
    container_name: pihole
    hostname: pihole
    image: pihole/pihole:latest
    networks:
      - default
      - intraonly
    ports:
      - default:53:53/tcp
      - default:53:53/udp
      - intraonly:80/tcp
      - intraonly:443/tcp
  [...nginx & other services definitions follow...]
networks:
  intraonly:
    driver: bridge
    internal: true
The above obviously fails, because the documentation clearly says it expects only an IP address in the port definition:
Specify the host IP address to bind to AND both ports (the default is 0.0.0.0, meaning all interfaces): (IPADDR:HOSTPORT:CONTAINERPORT).
That seems crazy, however, as the IP address changes every time I rebuild the container. In other places the documentation suggests avoiding addressing other containers by IP address and choosing the symbolic service names (published via DNS) instead.
What am I missing? What is the right/robust way to expose a port on a specific interface without hardcoding an IP address? (I'm aware I could achieve internal-only ports by using the expose syntax, but the question of binding ports to specific custom networks still stands.)
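For reference, a minimal sketch of the expose-based workaround mentioned in the question, assuming nginx reaches pihole by service name over the intraonly network; only DNS is published on the host, while port 80 stays container-to-container:

services:
  pihole:
    image: pihole/pihole:latest
    networks: [default, intraonly]
    ports:
      - "53:53/tcp"
      - "53:53/udp"     # only DNS is published on the host
    expose:
      - "80"            # informational; reachable as pihole:80 from other containers only
  nginx:
    image: nginx
    networks: [default, intraonly]
    ports:
      - "80:80"         # nginx proxies public port 80 to http://pihole:80
networks:
  intraonly:
    internal: true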
Context: I am using docker-compose.yml to set up a container for MongoDB, where the network is set up as follows:
...
services:
  mongo:
    networks:
      mongodb_net:
        ipv4_address: 192.168.178.23
networks:
  mongodb_net:
    ipam:
      config:
        - subnet: 192.168.178.0/24
...
which is exactly the same as the IP address of my WiFi connection.
Question:
After the setting above, why are some websites no longer accessible in my browser (e.g. ping doesn't return any packets)?
When I change the YAML file to a different IP address, the problem goes away, but I want to understand the reason. Is it because the Docker service occupies the same IP range as the WiFi, thereby interrupting normal internet access?
Docker defines its own network setup. You can see some details of this on Linux by running ifconfig and looking at iptables output. If you manually configure a Docker network to have the same CIDR block as your external network, you can wind up in a sequence where:
I want to call 8.8.8.8.
It's not on any of my local networks, so I'll route to the default gateway 192.168.178.1.
That address is on the docker1 network 192.168.178.0/24.
...and the outbound packets never actually leave your host.
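A quick way to spot this kind of overlap is to compare the host's routes with the subnets Docker has allocated; the network name mongodb_net is taken from the compose file above (Compose usually prefixes it with the project name):

ip route            # host routes, including the WiFi subnet
docker network ls   # Docker networks created by Compose
docker network inspect mongodb_net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'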
You should almost never need to manually configure IP addresses or networks in Docker. It has its own internal network setup and handles this for you. In a Compose context, Compose will also do some additional setup that you generally need, like creating a default network; Networking in Compose has more details.
To get access to a container from outside of Docker space, you need to publish ports: out of that container, and then it will be reachable on your host's IP address at the published port.
services:
  mongo:
    ports: ['27017:27017']
    # no networks: or manual IP configuration; just use the `default` network
In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup gets its own IP range, so that in theory I could reach my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces).
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It'd be great to be able to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose should know both which network it created and its IP, so this shouldn't be a problem; however, I couldn't find an easy way to do it. Is there a way, or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, exposed as "8080:8080" and "3306:3306" respectively. Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is exposed on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on 172.18.0.0/16. In this case I can reach my application server on all of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I start the second application now, it starts a second network app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there had been a possibility to restrict each application to its network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change the port mappings to 8081 and 3307 respectively to run both applications at the same time.
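To make the conflict concrete, a minimal sketch of the two compose files described above (project and service names are illustrative):

# app1/docker-compose.yml
services:
  web:
    image: nginx
    ports:
      - "8080:8080"   # binds 0.0.0.0:8080 on the host

# app2/docker-compose.yml
services:
  web:
    image: nginx
    ports:
      - "8080:8080"   # fails to start: port 8080 is already allocated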
In your service configuration, in docker-compose.yml:
ports:
  - "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above publishes nginx's port only on the loopback interface. You can use a similar port mapping inside a docker-compose.yml file, e.g.:
ports:
  - "127.0.0.1:80:80"
docker-compose doesn't have any special abilities to infer which network interface to use based on the docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to be for a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty of implementing your goal, most people would either map different host ports to different containers, e.g. 13306:3306 for container a, 23306:3306 for container b, 33306:3306 for container c, or whatever numbering scheme makes sense for you. And when dealing with HTTP traffic, using a reverse proxy like Traefik often makes the most sense.
It can be achieved by configuring the network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to specify a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which service you are connecting to; this is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml:
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note that if you have another service binding to *:80, for instance Apache running on the host, it will also bind on the docker-compose networks' interfaces and you will not be able to use this port.
To run the above two services:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
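If this works as described, each nginx should then answer only on its own project's gateway address, e.g.:

curl http://172.20.0.1/   # nginx from my-project.yaml
curl http://172.21.0.1/   # nginx from my-other-project.yaml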
In the legacy docker-compose yml format, if you linked a service it used to create an environment variable servicename_PORT which you could use to discover the port of a linked container. In the new v2 format we have user-defined networks, which add the service name to the internal DNS so we can connect to linked services, but how do we find the port a linked service exposes? The only way I can think of is to create an environment variable for each linked service where I can put the port, but then I will have the same port twice in the docker-compose file: once in the expose section of the service itself and once as an environment variable in the service that connects to it. Is there a more DRY way of discovering the exposed port?
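For reference, a minimal sketch of the duplication described above (image and variable names are illustrative): the port is stated once under expose: and again in the consumer's environment.

services:
  api:
    image: my-api           # illustrative image
    expose:
      - "8080"              # the port, stated here...
  web:
    image: my-web           # illustrative image
    environment:
      - API_URL=http://api:8080   # ...and repeated here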
For this, you usually use a registrator plus service discovery, i.e. a service like https://www.consul.io together with registrator.
Basically, this adds an API for you to either watch a KV store for your service definitions (port / IP), which can then be random, or even use the DNS included in Consul. The latter won't help with ports; that's what you use the registry for.
If you want to dodge this best-practice way, mount the docker socket and use docker inspect <servicename> to find the port.
services:
  other:
    container_name: foo
    image: YYYY
  theonedoingthelookup:
    image: ZZZZ
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
You will need to have the docker CLI tool installed in the container; then run this inside the ZZZZ container:
docker inspect YYYY
Use some grep / awk / filters to extract the information you need
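As a sketch, docker inspect's built-in Go-template formatting can pull the exposed ports directly instead of piping through grep/awk:

docker inspect --format '{{json .Config.ExposedPorts}}' YYYY   # e.g. {"8080/tcp":{}}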
It's whatever port the service is listening on inside the container. Port mappings don't apply to container <-> container communication, only to host <-> container communication.
For example:
version: '2'
services:
  a:
    ...
    networks:
      - my-net
  b:
    ...
    networks:
      - my-net
networks:
  my-net:
Let's say a is running a webserver on port 8080; b would be able to hit it by sending a request to a:8080.
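For instance, from inside container b (assuming an image with curl available), the request would simply be:

curl http://a:8080/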