How to set a bind9 Docker container as the DNS server for other containers

I'm trying to set up SSL for my home network. I've set up a bind9 container with a custom domain that points to my Unraid server; so far so good. I've also set up a private step-ca certificate authority, which needs its DNS set to the bind9 container so that it knows about my private domain. The setup works if I set the DNS of the step container to the internal Docker IP address of the bind container, but since those IP addresses are ephemeral I can't rely on that, which is why I'm binding the bind9 address to something within 192.168.0.0/24 and accessing it there. This works if I set the DNS server of my PC to the bind9 container, but for some reason I am unable to do the same for other Docker containers.
In short, step-ca and my proxy Traefik need their DNS set to bind9, which I want to give a static IP address on the 192.168.0.0/24 subnet. Traefik also needs to be able to talk to containers on the bridge network br0, otherwise it won't be able to proxy requests to them.

The addresses of your containers don't need to be ephemeral. We can set up a custom network using the networks top-level element that defines a static range for the network using the ipam option, and then we can assign our containers static addresses on this network.
We can use the dns option to configure containers to use the bind9 container for name resolution.
Here's an example docker-compose.yaml that sets up a bind9 container and a couple of additional containers that will use it for DNS:
version: "3"
services:
bind9:
image: docker.io/internetsystemsconsortium/bind9:9.19
volumes:
- "./bind:/etc/bind"
- bind9_cache:/var/cache/bind
- bind9_log:/var/log
- bind9_lib:/var/lib/bind
networks:
bind9:
ipv4_address: 192.168.133.10
web1:
image: docker.io/alpinelinux/darkhttpd:latest
networks:
bind9:
ipv4_address: 192.168.133.20
dns: 192.168.133.10
web2:
image: docker.io/alpinelinux/darkhttpd:latest
networks:
bind9:
ipv4_address: 192.168.133.21
dns: 192.168.133.10
networks:
bind9:
ipam:
driver: default
config:
- subnet: 192.168.133.0/24
gateway: 192.168.133.1
volumes:
bind9_cache:
bind9_lib:
bind9_log:
In the bind directory, I have BIND configured to serve the following zone file:
$TTL    604800
@       IN      SOA     docker.example. root.docker.example. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      ns.docker.example.
ns      IN      A       192.168.133.10
web1    IN      A       192.168.133.20
web2    IN      A       192.168.133.21
web     IN      A       192.168.133.20
web     IN      A       192.168.133.21
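For completeness, the zone itself is wired in through a zone declaration in the BIND configuration inside ./bind. The file name below (db.docker.example) is just an assumption for illustration; use whatever your bind directory actually contains:

zone "docker.example" {
    type master;
    file "/etc/bind/db.docker.example";
};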
From either the web1 or web2 containers, we can confirm that they are using our bind instance for name resolution:
/ $ wget -O- web1.docker.example:8080
Connecting to web1.docker.example:8080 (192.168.133.20:8080)
writing to stdout
<html>
.
.
.
</html>
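Because the zone defines web with two A records, querying that name should return both addresses. As a quick sketch, assuming the busybox nslookup applet is available in the image, the round-robin entry can be checked with:

/ $ nslookup web.docker.example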
Recall that docker-compose is essentially a convenience wrapper around docker run, so you can accomplish the same thing without using docker-compose (although Compose of course makes life much easier); a rough docker CLI equivalent is sketched below.
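A rough sketch of that CLI equivalent, with the cache/log volumes trimmed for brevity (names match the compose file above):

# create the user-defined network with a fixed subnet
docker network create --subnet 192.168.133.0/24 --gateway 192.168.133.1 bind9

# run bind9 with a static address on that network
docker run -d --name bind9 --network bind9 --ip 192.168.133.10 \
    -v "$PWD/bind:/etc/bind" docker.io/internetsystemsconsortium/bind9:9.19

# run a web container that uses bind9 for DNS
docker run -d --name web1 --network bind9 --ip 192.168.133.20 \
    --dns 192.168.133.10 docker.io/alpinelinux/darkhttpd:latest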
If you need to access the bind9 container, you would of course just publish the appropriate ports on your host by adding the necessary ports section to the compose configuration (or by using the --publish/-p option on the docker run command line):
  bind9:
    image: docker.io/internetsystemsconsortium/bind9:9.19
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - "./bind:/etc/bind"
      - bind9_cache:/var/cache/bind
      - bind9_log:/var/log
      - bind9_lib:/var/lib/bind
    networks:
      bind9:
        ipv4_address: 192.168.133.10
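With port 53 published, the host (or anything on your LAN that can reach the host) can query bind directly. Assuming dig is installed on the host, a quick check might look like:

dig @127.0.0.1 web1.docker.example +short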

Related

Assign an IP address in Docker with a bridge network that has no IPAM configuration

I want to assign a static IP to the docker container.
I can assign an IP address if there is the IPAM configuration in the network.
version: "2.1"
services:
nginx:
image: ghcr.io/linuxserver/nginx
container_name: nginx
volumes:
- ./config:/config
ports:
- 443:443
restart: always
networks:
br-uplink:
ipv4_address: 192.168.1.2
networks:
br-uplink:
driver: bridge
name: br-uplink
ipam:
config:
- subnet: "192.168.1.0/24"
gateway: "192.168.1.1"
but if there is no IPAM, then this does not work.
$ docker compose up -d
[+] Running 1/2
⠿ Network br-uplink Created
⠋ Container nginx Creating
Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets
So if I remove ipv4_address and the IPAM configuration, a random address is assigned.
And if I assign the address manually within the container, it works:
docker compose exec nginx bash
ip add flush dev eth0
ip add add 192.168.11.2/24 dev eth0
How can I make this possible automatically?
I don't want to create my own Dockerfile for this; I would be happy if this could be done within the docker-compose.yml.
Any ideas?

Expose one container port but keep the other unreachable from the host machine

I do not know how to achieve that. Now all the ports are exposed to the host machine but I just want to expose one container port (80), not the other (8080). Here is the docker-compose file:
---
version: "3.9"

services:
  app:
    image: sandbox/app
    container_name: app
    volumes:
      - ./src/app:/app/
    expose:
      - "8080"
    restart: unless-stopped
    networks:
      custom-net:
        ipv4_address: 10.0.0.7

  web_server:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    networks:
      custom-net:
        ipv4_address: 10.0.0.6

networks:
  custom-net:
    name: custom-net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.0.0.0/8
If I run nmap 10.0.0.6 from the local machine, it shows port 80 as open. That is the exposure I want for this container. But when I run nmap 10.0.0.7, it also shows port 8080 as open; how can that be? Checking some Stack Overflow threads, ports is defined like this:
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
and expose:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Am I missing some networking concepts, or is my docker-compose file wrong?
You must be on a native-Linux host. If you happen to know the Docker-internal IP addresses, and you're on a native-Linux host, then you can always connect to a container using those addresses; you can't prevent this (without iptables magic) but it's also not usually harmful. This trick doesn't work in other environments (on macOS or Windows hosts, or if Docker is in a Linux VM, or from a different host from the container) and it's much more portable to connect only to containers' published ports:.
You should be able to use a much simpler Compose file. Delete all of the networks: blocks and the expose: blocks. You also do not need container_name:, and you should not need to inject code using volumes:. Trimming out all of the unnecessary options leaves you with
version: '3.8'  # last version supported by standalone docker-compose tool

services:
  app:
    image: sandbox/app  # may want `build: .` _instead of_ this line
    restart: unless-stopped

  web_server:
    image: nginx:latest  # needs some custom configuration?
    ports:
      - "80:80"
That should literally be the entire file.
From outside Docker but on the same machine, http://localhost:80 matches the first port number in the web_server container's ports:, so the connection is forwarded to the second port number, on which the Nginx server is listening. The Nginx configuration should include a line like proxy_pass http://app:8080, which will forward to the application container; a minimal sketch follows.
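A minimal sketch of such an Nginx server block, assuming the application really listens on 8080 (adjust names and ports to your setup):

server {
    listen 80;

    location / {
        # "app" is the Compose service name; Docker's internal DNS resolves it
        proxy_pass http://app:8080;
    }
}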
Compared to your original file:
expose: is an artifact of first-generation Docker networking. In a Compose file it does absolutely nothing at all and it's always safe to delete it.
Connections between containers (where web_server uses app as a host name) connect directly to the specified port; they do not use or require expose: or ports: settings, and they ignore ports: if they're present.
Compose assigns container names on its own, and there are docker-compose CLI equivalents to almost all Docker commands that can figure out the right mapping. You don't need to manually specify container_name:.
Docker automatically assigns IP addresses to containers. These are usually an internal implementation detail; it's useful to know that containers do have their own IP addresses (and so you can have multiple containers that internally listen on the same port) but you never need to know these addresses, look them up, or manually specify them.
Compose automatically creates a network named default for you and attaches containers to it, so in most common cases you don't need networks: at all.
Networking in Compose in the Docker documentation describes how to make connections between containers (again, you do not need to know the container-private IP addresses). Container networking discusses these concepts separately from Compose.
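As a quick way to see service-name resolution in action, you could run a command in the proxy container that talks to the app by name (a sketch; it assumes curl is present in the nginx image, which may not be the case for every tag):

$ docker compose exec web_server curl -s http://app:8080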

Expose docker-compose Windows containers to the Windows host network

I'm fairly new to Docker and Docker Compose.
I have a simple scenario based on three applications (app1, app2, app3) that I want to connect to my host's network. The purpose is to also have an internet connection inside the containers.
Here is my docker-compose file:
version: '3.9'

services:
  app1container:
    image: app1img
    build: ./app1
    networks:
      network_comp:
        ipv4_address: 192.168.1.1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080

  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    networks:
      network_comp:
        ipv4_address: 192.168.1.2
    ports:
      - 3100:3100

  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    networks:
      network_comp:
        ipv4_address: 192.168.1.3
    ports:
      - 9080:9080

networks:
  network_comp:
    driver: ""
    ipam:
      driver: ""
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.1.254
I have already read the docker-compose documentation, which says that there is no bridge driver for Windows. Is there any solution to this issue?
You shouldn't usually need to do any special setup for this to work. When your Compose service has ports:, that makes a port available on the host's IP address. The essential rules for this are:
The service inside the container must listen on the special 0.0.0.0 "all interfaces" address (not 127.0.0.1 "this container only"), on some (usually fixed) port.
The container must be started with Compose ports: (or docker run -p). You choose the first port number; the second port number must match the port the service listens on inside the container.
The service can be reached via the host's IP address on the first port number (or, if you're using the older Docker Toolbox setup, on the docker-machine ip address).
http://host.example.com:12345 (from other hosts)
|
v
ports: ['12345:8080'] (in the `docker-compose.yml`)
|
v
./my_server -bind 0.0.0.0:8080 (the main container command)
You can remove all of the manual networks: configuration in this file. In particular, it's problematic if you try to specify the Docker network to have the same IP address range as the host network, since these are two separate networks. Compose automatically provides a network named default that should work for most practical applications.
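A trimmed sketch of your file under those rules; everything here comes from your original file except that the networks: blocks are gone:

version: '3.9'

services:
  app1container:
    image: app1img
    build: ./app1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080

  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    ports:
      - 3100:3100

  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    ports:
      - 9080:9080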

What is the difference between hostname and service name?

To let other services, systems, or Docker containers talk to my container, should I give them the Docker service name, or must I define a hostname?
Here is the sample docker-compose file:
version: '3'

networks:
  test:

services:
  testservicename:
    networks:
      - test
    image: test.thedevcloud.net:8000/test/app:1.2
    container_name: testcontainername
    hostname: testhostname
    ports:
      - "8100:8100"
The hostname only affects the internal hostname within your container; the container name / service name itself can be used to connect to your service from other applications and is the actual DNS hostname.
So the hostname only affects the 'inside' of your container and changes nothing about the networking or connection options.
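For example, another container attached to the same test network could reach this service by its service name (a sketch; it assumes the application actually listens on 8100 and that curl exists in that other container's image):

$ curl http://testservicename:8100

Using http://testhostname:8100 from another container would not work, because the hostname: value is not registered in Docker's internal DNS.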

Docker PostgreSQL access from another container

I have a docker-compose file which, broadly, looks like this:
version: '2'

services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22

  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23

  ...

networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my PostgreSQL and application in subnetworks to avoid the ports being exposed outside my computer/server.
From within the app container, I can't connect to 192.168.22.23. I installed net-tools to use ifconfig/netstat, and it doesn't seem the containers are able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to postgres and access the application).
Do you have any advice? The goal is to access the ports of another container in order to communicate with it, without removing the use of static IPs (on app at least); here, that means connecting to PostgreSQL from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
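For example, if the host's private interface had the address 10.0.0.15 (an address made up purely for illustration), you could publish the port on that interface only:

services:
  db:
    ports:
      - '10.0.0.15:6432:5432'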
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.
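Putting those pieces together, a trimmed sketch of the original file might look like this (the image names come from the question; everything else follows the defaults described above):

version: '2'

services:
  app:
    image: myimage
    ports:
      - "80:80"
    environment:
      - PGHOST=db
      - PGPORT=5432

  db:
    image: postgres:9.5
    ports:
      - "127.0.0.1:6432:5432"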
