Docker routing into ipvlan from host network

Goal:
I'd like to be able to ping and access the docker clients from my host network. If possible, I'd also like to have as much as possible configured in my docker-compose.yml.
Remark:
ICMP (ping) is just used for simplicity. In fact, I'd like to reach ssh on port 22 and some other ports as well. Mapping ports is my current solution, but since I have many docker client containers it becomes messy.
 ___________        ___________        ___________
|   host    |      |  docker   |      |  docker   |
|  client   |      |   host    |      |  client   |
|  ..16.50  | <--> |  ..16.10  |      |           |
|           |      |  ..20.1   | <--> |  ..20.5   |
|           |      |           |      |           |
|           | <---- not working ----> |           |
Problem:
I am able to ping my docker host from the docker clients and the host clients, but not the docker clients from the host clients.
This is my configuration on Ubuntu 22.04:
docker host: 192.168.16.10/24
client host network: 192.168.16.50/24
default gw host network: 192.168.16.1/24
docker client (container): 192.168.20.5/24
docker-compose.yml
version: '3'
networks:
  ipvlan20:
    name: ipvlan20
    driver: ipvlan
    driver_opts:
      parent: enp3s0.20
      com.docker.network.bridge.name: br-ipvlan20
      ipvlan-mode: l3
    ipam:
      config:
        - subnet: "192.168.20.0/24"
          gateway: "192.168.20.1"
services:
  portainer:
    image: alpine
    hostname: ipvlan20
    container_name: ipvlan20
    restart: always
    command: ["sleep","infinity"]
    dns: 192.168.16.1
    networks:
      ipvlan20:
        ipv4_address: 192.168.20.5
On my docker host, I added the following link with the vlan gateway IP.
ip link add myipvlan20 link enp3s0.20 type ipvlan mode l3
ip addr add 192.168.20.1/24 dev myipvlan20
ip link set myipvlan20 up
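These ip commands are lost on reboot. As a sketch of one way to persist them with systemd-networkd (this assumes systemd-networkd manages the docker host's interfaces; the file names are illustrative):

```ini
# /etc/systemd/network/20-myipvlan20.netdev  (illustrative name)
[NetDev]
Name=myipvlan20
Kind=ipvlan

[IPVLAN]
Mode=L3

# /etc/systemd/network/20-myipvlan20.network
[Match]
Name=myipvlan20

[Network]
Address=192.168.20.1/24

# /etc/systemd/network/10-enp3s0.20.network -- binds the ipvlan to its parent
[Match]
Name=enp3s0.20

[Network]
IPVLAN=myipvlan20
```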
And on my host client, I added a route to the docker host for the docker client network.
ip route add 192.168.20.0/24 via 192.168.16.10
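This route is also lost on reboot. A minimal sketch of making it persistent with netplan on the Ubuntu client (assuming the client uses netplan and that its NIC is named eth0; both are assumptions):

```yaml
# /etc/netplan/60-docker-clients.yaml  (illustrative path)
network:
  version: 2
  ethernets:
    eth0:
      routes:
        # reach the container subnet via the docker host
        - to: 192.168.20.0/24
          via: 192.168.16.10
```

Apply with sudo netplan apply.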
What I also tried:
Do I have to use macvlan? I tried that, but without success.
Do I have to use L3 mode? I also tried L2, but that was unsuccessful as well.

Related

Docker Compose: services unable to resolve each other's hostnames

Consider a simple Docker Compose file.
version: "3.0"
networks:
  basic:
services:
  srv:
    image: alpine
    networks:
      basic:
        aliases:
          - server.nowhere.fake
    domainname: server.nowhere.fake
    entrypoint: tail -f
  cli:
    image: alpine
    networks:
      basic:
        aliases:
          - client.nowhere.fake
    domainname: client.nowhere.fake
    entrypoint: nslookup server.nowhere.fake
Successful DNS resolution is easily demonstrated.
$ docker-compose up
Creating network "tmp_basic" with the default driver
Creating tmp_srv_1 ... done
Creating tmp_cli_1 ... done
Attaching to tmp_srv_1, tmp_cli_1
cli_1 | Server: 127.0.0.11
cli_1 | Address: 127.0.0.11:53
cli_1 |
cli_1 | Non-authoritative answer:
cli_1 |
cli_1 | Non-authoritative answer:
cli_1 | Name: server.nowhere.fake
cli_1 | Address: 192.168.192.2
cli_1 |
tmp_cli_1 exited with code 0
However, a more manual approach yields less productive results.
$ docker-compose run -d srv
Creating network "tmp_basic" with the default driver
tmp_srv_run_8ff7ac6b8cc8
$
$ docker-compose run cli
Server: 127.0.0.11
Address: 127.0.0.11:53
** server can't find server.nowhere.fake: NXDOMAIN
** server can't find server.nowhere.fake: NXDOMAIN
In fact, it seems irrelevant whether the server is running, as its address is not resolved.
For some scenarios, finer control is required, as when using run for single services instead of up for all, such as in cases of terminal interaction.
In my case, I am seeking to test terminal I/O using a tool that simulates a human, by providing prescribed responses to various prompts.
Why does the lookup fail when the container is started in a separate operation? What solution is available?
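One detail worth noting here: docker-compose run creates a one-off container and, unlike up, does not attach the service's network aliases unless explicitly asked to. A sketch of the same experiment with aliases enabled (the --use-aliases flag exists in docker-compose 1.21 and later):

```shell
# start the server as a one-off container, keeping its network aliases
docker-compose run -d --use-aliases srv

# the client one-off can now resolve server.nowhere.fake
docker-compose run --use-aliases cli
```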

How to connect to Docker bridge network from host?

For a test project, I want to have the following network connectivity
+---------------------------+
| redis-service-1 in docker |
+-----------+ B +-----------+
            | R |
+------+    | I |
| Host |--->| D |
+------+    | G |
            | E |
+-----------+   +-----------+
| redis-service-2 in docker |
+---------------------------+
What I want to achieve: my app running on the host should be able to connect to both of the Redis services running in the two docker containers, using the DNS names provided by docker.
docker-compose.yml looks like the following:
version: '3.6'
services:
  redis-service-1:
    image: redis:5.0
    networks:
      - overlay
  redis-service-2:
    image: redis:5.0
    networks:
      - overlay
Ideally, after docker-compose up, I want to be able to ping both of the containers from the host as follows:
> redis-cli -h redis-service-1 ping
> redis-cli -h redis-service-2 ping
But I am unable to connect to these containers and Redis inside them from the host machine.
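Since Docker's embedded DNS server (127.0.0.11) is only reachable from inside containers, a common workaround is to publish each Redis on a distinct host port and map the service names in /etc/hosts. A sketch (the host port numbers are arbitrary choices):

```yaml
version: '3.6'
services:
  redis-service-1:
    image: redis:5.0
    ports:
      - "6379:6379"   # host port 6379 -> container port 6379
  redis-service-2:
    image: redis:5.0
    ports:
      - "6380:6379"   # host port 6380 -> container port 6379
```

With 127.0.0.1 redis-service-1 and 127.0.0.1 redis-service-2 added to /etc/hosts, redis-cli -h redis-service-1 -p 6379 ping and redis-cli -h redis-service-2 -p 6380 ping both work from the host.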

Docker, java.net.UnknownHostException: docker-desktop: docker-desktop: Name does not resolve

I am running docker containers successfully on Ubuntu machines, but I'm having trouble running the same containers on Mac machines.
I've tried on two Macs, and the error messages are the same.
spark-worker_1 | java.net.UnknownHostException: docker-desktop: docker-desktop: Name does not resolve
spark-worker_1 |     at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
spark-worker_1 |     at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:946)
spark-worker_1 |     at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:939)
spark-worker_1 |     at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:939)
spark-worker_1 |     at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
spark-worker_1 |     at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
spark-worker_1 |     at scala.Option.getOrElse(Option.scala:121)
spark-worker_1 |     at org.apache.spark.util.Utils$.localHostName(Utils.scala:1003)
spark-worker_1 |     at org.apache.spark.deploy.worker.WorkerArguments.<init>(WorkerArguments.scala:31)
spark-worker_1 |     at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:778)
spark-worker_1 |     at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
spark-worker_1 | Caused by: java.net.UnknownHostException: docker-desktop: Name does not resolve
spark-worker_1 |     at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
spark-worker_1 |     at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
spark-worker_1 |     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
spark-worker_1 |     at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
spark-worker_1 |     ... 10 more
docker_spark-worker_1 exited with code 51
Here is my docker-compose.yml file:
services:
  spark-master:
    build:
      context: ../../
      dockerfile: ./danalysis/docker/spark/Dockerfile
    image: spark:latest
    container_name: spark-master
    hostname: node-master
    ports:
      - "7077:7077"
    network_mode: host
    environment:
      - "SPARK_LOCAL_IP=node-master"
      - "SPARK_MASTER_PORT=7077"
      - "SPARK_MASTER_WEBUI_PORT=10080"
    command: "/start-master.sh"
    dns:
      - 192.168.1.1 # IP needed to reach a database instance external to the server the container runs on
  spark-worker:
    image: spark:latest
    environment:
      - "SPARK_MASTER=spark://node-master:7077"
      - "SPARK_WORKER_WEBUI_PORT=8080"
    command: "/start-worker.sh"
    ports:
      - 8080
    network_mode: host
    depends_on:
      - spark-master
    dns:
      - 192.168.1.1 # IP needed to reach a database instance external to the server the container runs on
** edit **
So I found a way to make it work by commenting a few lines out. Why are those two lines a problem?
And even though the container now runs fine and connects to the spark-master, it is using some internal IP: the 172.18.0.2 is not what we normally see on our network. I think the IP belongs to the docker container, not the host.
    # network_mode: host
    depends_on:
      - spark-master
    # dns:
    #   - 192.168.1.1 # IP needed to reach a database instance external to the server the container runs on
Try changing the docker network type to macvlan in the docker compose file. This should attach the container directly to your network (making it appear as another physical machine) with an IP on the same subnet as the host. You can also try adding an entry to your /etc/hosts.
The proper way to run containers on different machines would be to use an overlay network connecting the docker daemons on these machines.
Or create a docker swarm cluster using the laptops.
https://docs.docker.com/network/
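As a sketch of that macvlan suggestion (the parent interface name and addresses below are assumptions that must match your own LAN):

```yaml
version: '3'
services:
  spark-worker:
    image: spark:latest
    networks:
      spark-net:
        ipv4_address: 192.168.1.50   # a free address on the LAN
networks:
  spark-net:
    driver: macvlan
    driver_opts:
      parent: eth0                   # the host NIC, adjust to your machine
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```

Note that Docker Desktop on macOS runs containers inside a VM, so macvlan addresses are generally not reachable from the Mac's LAN; this approach works as described on Linux hosts.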

How to serve two provider backends with traefik under the same https domain url

[With Siyu's helpful comment I could fix the problems; additionally I needed to set an entrypoint in the labels. I have added my corrected docker-compose.yaml, which was all I needed.]
I have currently reconfigured my Synology workstation to handle HTTPS traffic with traefik.
I want to serve docker containers with traefik while still providing the web interface of the Synology workstation over HTTP (using traefik as an SSL offloader as well). Traefik now has to handle two provider backends: the "original" Synology webserver and the docker containers, which come and go.
The current setup works for serving "test.com" (the Synology DSM web interface). But if I try to access a container via "/dashboard" it just gives me a 404.
How can I set this up so that both backends (docker + webserver outside docker) are served?
Datapoints
The docker interface is recognized and the labels (see below) are read by traefik (this can be seen in the logs)
The Synology nginx runs outside of docker (not as a container!)
The whole Synology workstation serves in an IPv4/IPv6 environment (both)
The Synology nginx was modified not to serve on the standard http/https ports (where it only redirects to ports 5000/5001, as I can see in the nginx configuration)
Intended setup which should be served
Notice that the original Synology is a catch-all domain (/*)
+-----------------------------------------------------------------------
| Synology Workstation
|
|            +--------------------------------------------------------+
|            | Docker                                                 |
|            |           +---------+          +-------------------+   |
|-->HTTPS--->|-->HTTPS-->| Traefik |-->HTTP-->| test.com/dashboard|   |
|   443:443  |           |         |          |                   |   |
|            |           +---------+--+       +-------------------+   |
|            |                        |                               |
|            |                        |       +------------------+    |
|            |                        +-HTTP->| test.com/stats   |    |
|            |                        |       +------------------+    |
|            |                        |                               |
|            +------------------------|-------------------------------+
|                                     |       +-------------------+
|                                     +-HTTP->| test.com/*        |
|                                             | (nginx of Synology)|
|                                             +-------------------+
+-----------------------------------------------------------------------
The traefik.toml looks like this:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]

[traefikLog]
filePath = "/etc/traefik/traefik.log"

[accessLog]
filePath = "/etc/traefik/access.log"

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/pki/tls/certs/test.com.crt"
      keyFile = "/etc/pki/tls/private/test.com.key"

[backends]
  [backends.wbackend]
    [backends.wbackend.servers.server]
    url = "http://workstation.test.com:5000"
    #weight = 10

[frontends]
  [frontends.workstation]
  backend = "wbackend"
  passHostHeader = true
  entrypoints = ["https"]
    [frontends.workstation.routes.route1]
    rule = "Host:workstation.test.com"

# You MUST add [file], otherwise traefik does not parse the frontend rules
[file]

[docker]
endpoint = "unix:///var/run/docker.sock"
Docker-compose snippet (see the labels, which map the domain):
---
version: '2'
services:
  traefik:
    # Check latest version: https://hub.docker.com/r/library/traefik/tags/
    image: traefik:1.7.6
    restart: unless-stopped
    container_name: traefik
    mem_limit: 300m
    #network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /volume1/container/traefik/etc/pki/tls/certs/workstation.test.com.crt:/etc/pki/tls/certs/workstation.test.com.crt
      - /volume1/container/traefik/etc/pki/tls/private/workstation.test.com.key:/etc/pki/tls/private/workstation.test.com.key
      - /volume1/container/traefik/etc/traefik:/etc/traefik
    ports:
      - "80:80"
      - "443:443"
    labels:
      - traefik.stat.frontend.rule=Host:workstation.test.com;Path:/dashboard
      - traefik.stat.backend=traefik
      - traefik.stat.frontend.entryPoints=https
      - traefik.stat.frontend.rule=Host:workstation.test.com;PathPrefixStrip:/dashboard
      - traefik.stat.port=8080
A few problems with your config:
your toml is not passed in
the api is not enabled
the backend is missing in the labels
you should use PathPrefixStrip
Try

volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /path/to/traefik.toml:/etc/traefik/traefik.toml
command: --api
ports:
  - "80:80"
  - "443:443"
  - "8080:8080" # helps you debug
labels:
  - traefik.backend=traefik
  - "traefik.frontend.rule=PathPrefixStrip:/dashboard/;Host:test.io"
  - traefik.port=8080

Redis sentinel failover configuration in docker swarm

Description
I'm trying to create a Redis cluster in a docker swarm, using the bitnami-redis docker image for my containers. Going through the Bitnami documentation, they always suggest using one master node, whereas the Redis documentation states there should be at least three master nodes, which is why I'm confused as to which is right. Given that all Bitnami slaves are read-only by default: if I set up only a single master on one of the swarm leader nodes and it fails, I believe Sentinel will try to promote a different slave Redis instance to master, but since it is read-only, all write operations will fail. If I instead make the master Redis service global, meaning it is created on every node available in the swarm, do I need Sentinel at all? Also, if the setup below is a good one, is there a reason to introduce a load balancer?
Setup
+------------------+ +------------------+ +------------------+ +------------------+
| Node-1 | | Node-2 | | Node-3 | | Node-4 |
| Leader | | Worker | | Leader | | Worker |
+------------------+ +------------------+ +------------------+ +------------------+
| M1 | | M2 | | M3 | | M4 |
| R1 | | R2 | | R3 | | R4 |
| S1 | | S2 | | S3 | | S4 |
| | | | | | | |
+------------------+ +------------------+ +------------------+ +------------------+
Legends -
Masters are called M1, M2, M3, ..., Mn
Slaves are called R1, R2, R3, ..., Rn (R stands for replica).
Sentinels are called S1, S2, S3, ..., Sn
Docker
version: '3'
services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    volumes:
      - 'redis-master-volume:/bitnami'
    deploy:
      mode: global
  redis-slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    volumes:
      - 'redis-slave-volume:/bitnami'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=laSQL2019
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    deploy:
      mode: replicated
      replicas: 4
  redis-sentinel:
    image: 'bitnami/redis:latest'
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-slave
    volumes:
      - 'redis-sentinel-volume:/bitnami'
    entrypoint: |
      bash -c 'bash -s <<EOF
      "/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node laSQL2019
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      EOF"
      "/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
      EOF'
    deploy:
      mode: global
volumes:
  redis-master-volume:
    driver: local
  redis-slave-volume:
    driver: local
  redis-sentinel-volume:
    driver: local
The Bitnami solution is a failover solution, hence it has one master node.
Sentinel is an HA solution, i.e. automatic failover; it does not provide scalability in terms of distributing data across multiple nodes. You would need to set up Redis Cluster if you want sharding in addition to HA.
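To check what Sentinel currently reports for the setup above, the standard commands look like this (using the master name master-node and password from the compose file above; on Redis older than 5, "sentinel slaves" replaces "sentinel replicas"):

```shell
# ask a sentinel for the address of the current master
redis-cli -h redis-sentinel -p 16379 -a laSQL2019 sentinel get-master-addr-by-name master-node

# list the replicas known for that master
redis-cli -h redis-sentinel -p 16379 -a laSQL2019 sentinel replicas master-node
```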
