Description
I'm trying to create a Redis cluster in a Docker swarm, using the bitnami-redis-docker image for my containers. Going through the Bitnami documentation, they always suggest using one master node, whereas the Redis documentation states that there should be at least three master nodes, which is why I'm confused about which one is right. Given that all Bitnami slaves are read-only by default, if I set up only a single master on one of the swarm leader nodes and it fails, I believe Sentinel will try to promote a different slave Redis instance as master, but since that slave is read-only all write operations will fail. If I instead make the master Redis service global, meaning it will be created on all of the nodes available in the swarm, do I need Sentinel at all? Also, if the setup below is a good one, is there a reason to introduce a load balancer?
Setup
+------------------+ +------------------+ +------------------+ +------------------+
| Node-1 | | Node-2 | | Node-3 | | Node-4 |
| Leader | | Worker | | Leader | | Worker |
+------------------+ +------------------+ +------------------+ +------------------+
| M1 | | M2 | | M3 | | M4 |
| R1 | | R2 | | R3 | | R4 |
| S1 | | S2 | | S3 | | S4 |
| | | | | | | |
+------------------+ +------------------+ +------------------+ +------------------+
Legend:
Masters are called M1, M2, M3, ..., Mn
Slaves are called R1, R2, R3, ..., Rn (R stands for replica).
Sentinels are called S1, S2, S3, ..., Sn
Docker
version: '3'
services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    volumes:
      - 'redis-master-volume:/bitnami'
    deploy:
      mode: global
  redis-slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    volumes:
      - 'redis-slave-volume:/bitnami'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=laSQL2019
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    deploy:
      mode: replicated
      replicas: 4
  redis-sentinel:
    image: 'bitnami/redis:latest'
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-slave
    volumes:
      - 'redis-sentinel-volume:/bitnami'
    entrypoint: |
      bash -c 'bash -s <<EOF
      "/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node laSQL2019
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      EOF"
      "/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
      EOF'
    deploy:
      mode: global
volumes:
  redis-master-volume:
    driver: local
  redis-slave-volume:
    driver: local
  redis-sentinel-volume:
    driver: local
The Bitnami solution is a failover solution, hence it has one master node.
Sentinel is an HA solution, i.e. automatic failover. It does not provide scalability in terms of distributing data across multiple nodes; you would need to set up clustering if you want sharding in addition to HA.
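If sharding is what you are after, Bitnami also publishes a separate bitnami/redis-cluster image that forms a real Redis Cluster (at least three masters). A minimal sketch is below; the environment-variable names (REDIS_NODES, REDIS_CLUSTER_REPLICAS, REDIS_CLUSTER_CREATOR) are taken from the bitnami/redis-cluster README, so verify them against the image version you actually use:

```yaml
version: '3'
services:
  redis-node-0:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - REDIS_PASSWORD=laSQL2019
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
  # redis-node-1 .. redis-node-4 are defined identically to redis-node-0
  redis-node-5:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - REDIS_PASSWORD=laSQL2019
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
      - REDIS_CLUSTER_REPLICAS=1   # one replica per master -> 3 masters + 3 replicas
      - REDIS_CLUSTER_CREATOR=yes  # this node runs the one-off cluster-create step
```

With a cluster you do not need Sentinel: the cluster itself handles failover of a master via its replicas.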
Related
Goal:
I'd like to be able to ping and access the docker clients from my host network. If possible, I'd like to have as much as possible configured in my docker-compose.yml.
Remark:
ICMP (ping) is just used for simplification. Actually, I'd like to access ssh on port 22 and some other ports. Mapping ports is my current solution, but since I have many docker client containers it becomes messy.
___________ ___________ ___________
| host | | docker | | docker |
| client | | host | | client |
| ..16.50 | <--> | ..16.10 | | |
| | | ..20.1 | <--> | ..20.5 |
| | | |
| | <----- not working ----> | |
Problem:
I am able to ping my docker host from docker clients and host clients, but not the docker clients from host clients.
This is my configuration on Ubuntu 22.04:
docker host: 192.168.16.10/24
client host network: 192.168.16.50/24
default gw host network: 192.168.16.1
docker client (container): 192.168.20.5/24
docker-compose.yml
version: '3'
networks:
  ipvlan20:
    name: ipvlan20
    driver: ipvlan
    driver_opts:
      parent: enp3s0.20
      com.docker.network.bridge.name: br-ipvlan20
      ipvlan-mode: l3
    ipam:
      config:
        - subnet: "192.168.20.0/24"
          gateway: "192.168.20.1"
services:
  portainer:
    image: alpine
    hostname: ipvlan20
    container_name: ipvlan20
    restart: always
    command: ["sleep", "infinity"]
    dns: 192.168.16.1
    networks:
      ipvlan20:
        ipv4_address: 192.168.20.5
On my docker host, I added the following link with the vlan gateway IP.
ip link add myipvlan20 link enp3s0.20 type ipvlan mode l3
ip addr add 192.168.20.1/24 dev myipvlan20
ip link set myipvlan20 up
And on my host client, I added a route to the docker host for the docker client network.
ip route add 192.168.20.0/24 via 192.168.16.10
I also tried:
Do I have to use macvlan? I tried that, but also unsuccessfully.
Do I have to use l3 mode? I also tried l2, but that was unsuccessful as well.
For a test project, I want to have the following network connectivity:
+---------------------------+
| redis-service-1 in docker |
+-----------+ B +-----------+
| R |
+--------+ | I |
| Host | ---> | D |
+--------+ | G |
| E |
+-----------+ +-----------+
| redis-service-2 in docker |
+---------------------------+
What I want to achieve: my app running on the Host should be able to connect to both of the Redis services running in the two docker containers, using the DNS names provided by docker.
docker-compose.yml looks like the following:
version: '3.6'
services:
  redis-service-1:
    image: redis:5.0
    networks:
      - overlay
  redis-service-2:
    image: redis:5.0
    networks:
      - overlay
Ideally, after docker-compose up, I want to be able to reach both of the containers from the host as follows:
> redis-cli -h redis-service-1 ping
> redis-cli -h redis-service-2 ping
But I am unable to connect to these containers and Redis inside them from the host machine.
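Note that the host itself is not attached to the compose network, so Docker's service names (redis-service-1, redis-service-2) are only resolvable between containers, not from the host. A common workaround, sketched here under the assumption that host ports 6379 and 6380 are free, is to publish each Redis on a distinct host port and address them via localhost instead of DNS:

```yaml
version: '3.6'
services:
  redis-service-1:
    image: redis:5.0
    ports:
      - '6379:6379'   # host:6379 -> container:6379
  redis-service-2:
    image: redis:5.0
    ports:
      - '6380:6379'   # host:6380 -> container:6379
```

Then `redis-cli -p 6379 ping` and `redis-cli -p 6380 ping` from the host reach the two instances. Resolving the service names themselves from the host would require extra machinery (e.g. entries in /etc/hosts or a local DNS resolver).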
I have a server working well with the following docker-compose.yml. In the container I can find /etc/letsencrypt/live/v2.10studio.tech/fullchain.pem and /etc/letsencrypt/live/v2.10studio.tech/privkey.pem.
version: "3"
services:
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ports:
      - 80:8080/tcp
      - 443:443/tcp
    environment:
      CERTBOT_EMAIL: owner@company.com
    volumes:
      - ./conf.d:/etc/nginx/user.conf.d:ro
      - letsencrypt:/etc/letsencrypt
  10studio:
    image: bitnami/nginx:1.16
    restart: always
    volumes:
      - ./build:/app
      - ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
      - ./configs/config.prod.js:/app/lib/config.js
    depends_on:
      - frontend
volumes:
  letsencrypt:
networks:
  default:
    external:
      name: 10studio
I tried to create another server with the same settings, but I could not find live under /etc/letsencrypt in the container.
Does anyone know what's wrong? Where do the files under /etc/letsencrypt/live come from?
Edit 1:
I have one file, conf.d/.conf. I tried to rebuild and got the following message:
root#iZj6cikgrkjzogdi7x6rdoZ:~/10Studio/pfw# docker-compose up --build --force-recreate --no-deps
Creating pfw_pfw_1 ... done
Creating pfw_10studio_1 ... done
Attaching to pfw_pfw_1, pfw_10studio_1
10studio_1 | 11:25:33.60
10studio_1 | 11:25:33.60 Welcome to the Bitnami nginx container
pfw_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
pfw_1 | Substituting variables
pfw_1 | -> /etc/nginx/user.conf.d/*.conf
pfw_1 | /scripts/util.sh: line 116: /etc/nginx/user.conf.d/*.conf: No such file or directory
pfw_1 | Done with startup
pfw_1 | Run certbot
pfw_1 | ++ parse_domains
pfw_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | ++ xargs echo
pfw_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + auto_enable_configs
pfw_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + return 0
pfw_1 | + '[' conf = nokey ']'
pfw_1 | + set +x
10studio_1 | 11:25:33.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
10studio_1 | 11:25:33.61 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
10studio_1 | 11:25:33.61 Send us your feedback at containers@bitnami.com
10studio_1 | 11:25:33.61
10studio_1 | 11:25:33.62 INFO ==> ** Starting NGINX setup **
10studio_1 | 11:25:33.64 INFO ==> Validating settings in NGINX_* env vars...
10studio_1 | 11:25:33.64 INFO ==> Initializing NGINX...
10studio_1 | 11:25:33.65 INFO ==> ** NGINX setup finished! **
10studio_1 |
10studio_1 | 11:25:33.66 INFO ==> ** Starting NGINX **
If I do docker-compose up -d --build, I still cannot find /etc/letsencrypt/live in the container.
Please go through the original site of this image, staticfloat/nginx-certbot: it will create and automatically renew website SSL certificates.
With the configuration file under ./conf.d:
Create a config directory for your custom configs:
$ mkdir conf.d
And a .conf in that directory:
server {
    listen 443 ssl;
    server_name server.company.com;

    ssl_certificate /etc/letsencrypt/live/server.company.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server.company.com/privkey.pem;

    location / {
        ...
    }
}
because /etc/letsencrypt is mounted from a persistent volume letsencrypt
services:
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ...
    volumes:
      ...
      - letsencrypt:/etc/letsencrypt
volumes:
  letsencrypt:
If you need to reference /etc/letsencrypt/live, you need to mount the same volume letsencrypt into your new application as well.
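For example (a sketch based on the compose file above; the :ro read-only flag is my addition, since the second service only needs to read the certificates):

```yaml
services:
  10studio:
    image: bitnami/nginx:1.16
    volumes:
      - letsencrypt:/etc/letsencrypt:ro   # same named volume as the frontend service
volumes:
  letsencrypt:
```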
It works after changing ports: - 80:8080/tcp to ports: - 80:80/tcp.
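That fits with how the image likely works: the nginx inside staticfloat/nginx-certbot listens on container port 80, and Let's Encrypt's HTTP-01 challenge must reach it via host port 80, so the mapping needs to be (check the image's README for the ports it actually exposes):

```yaml
ports:
  - 80:80/tcp     # HTTP-01 challenge + redirect to HTTPS
  - 443:443/tcp
```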
As /etc/letsencrypt is a mounted volume that is persisted over restarts of your container, I would assume that some process added these files to the volume. According to a quick search using my favorite search engine, /etc/letsencrypt/live is filled with files after certificates are created.
[With the helpful comment of Siyu I could fix the problems; additionally, I needed to set an entrypoint in labels. I have added my corrected docker-compose.yaml, which was all I needed to fix it.]
Currently I have reconfigured my Synology workstation to handle https traffic with traefik.
I want to serve docker containers with traefik and still provide the web interface of the Synology workstation via http (using traefik also as an SSL offloader). Traefik now has the problem of handling two provider backends: one being the "original" Synology webserver and one the docker containers, which come and go.
The current setup works for serving "test.com" (the Synology DSM webinterface). But if I try to access a container with "/dashboard", it just gives me a 404.
How can I set this up so that both backends (docker + the webserver outside docker) are served?
Datapoints
The docker interface is recognized and the labels (see below) are read by traefik (this can be seen in the logs)
The Synology nginx runs outside of docker (not as a container!)
The whole Synology workstation serves in an IPv4/IPv6 environment (both)
The Synology nginx was modified not to serve on the standard http/https ports (where it only redirects to ports 5000/5001, as I can see in the nginx configuration)
Intended setup which should be served
Notice that the original Synology webserver is a catch-all domain (/*).
+-----------------------------------------------------------------------
| Synology Workstation
|
| +--------------------------------------------------------+
| | Docker |
| | +---------+ +-------------------+ |
|-->HTTPS-->|-->HTTPS-->| Traefik |-->HTTP-->| test.com/dashboard| |
| 443:443 | | | | | |
| | +---------+--+ +-------------------+ |
| | | | |
| | | | +------------------+ |
| | | +--HTTP-->| test.com/stats | |
| | | +------------------- |
| | | |
| +----------------|----------------------------------------
| | +-------------------+
| +--HTTP-->|test.com/* |
| |(nginx of synology)|
| +-------------------+
+--------------------------------------------------------------------
The traefik.toml looks like this:
debug = true
logLevel = "DEBUG"
[traefikLog]
  filePath = "/etc/traefik/traefik.log"
[accessLog]
  filePath = "/etc/traefik/access.log"
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/pki/tls/certs/test.com.crt"
        keyFile = "/etc/pki/tls/private/test.com.key"
[backends]
  [backends.wbackend]
    [backends.wbackend.servers.server]
      url = "http://workstation.test.com:5000"
      #weight = 10
[frontends]
  [frontends.workstation]
    backend = "wbackend"
    passHostHeader = true
    entrypoints = ["https"]
    [frontends.workstation.routes.route1]
      rule = "Host:workstation.test.com"
# You MUST ADD file otherwise traefik does not parse the frontend rules
[file]
[docker]
  endpoint = "unix:///var/run/docker.sock"
Docker-compose snippet (see the labels, which map the domain):
---
version: '2'
services:
  traefik:
    # Check latest version: https://hub.docker.com/r/library/traefik/tags/
    image: traefik:1.7.6
    restart: unless-stopped
    container_name: traefik
    mem_limit: 300m
    #network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /volume1/container/traefik/etc/pki/tls/certs/workstation.test.com.crt:/etc/pki/tls/certs/workstation.test.com.crt
      - /volume1/container/traefik/etc/pki/tls/private/workstation.test.com.key:/etc/pki/tls/private/workstation.test.com.key
      - /volume1/container/traefik/etc/traefik:/etc/traefik
    ports:
      - "80:80"
      - "443:443"
    labels:
      - traefik.stat.frontend.rule=Host:workstation.test.com;Path:/dashboard
      - traefik.stat.backend=traefik
      - traefik.stat.frontend.entryPoints=https
      - traefik.stat.frontend.rule=Host:workstation.test.com;PathPrefixStrip:/dashboard
      - traefik.stat.port=8080
A few problems with your config:
your toml is not passed in
api is not enabled
missing backend in labels
should use PathPrefixStrip
Try
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /path/to/traefik.toml:/etc/traefik/traefik.toml
command: --api
ports:
  - "80:80"
  - "443:443"
  - "8080:8080"  # helps you debug
labels:
  - traefik.backend=traefik
  - "traefik.frontend.rule=PathPrefixStrip:/dashboard/;Host:test.io"
  - traefik.port=8080
I'm installing a Gitlab instance with docker-compose on a server machine on my local network, and I'd like to access my Gitlab instance from anywhere on my local network by visiting, for example, "https://my-hostname".
I follow this.
I'm running:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '7780:80'
    - '7443:443'
    - '7722:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
Now I have very (very) limited network knowledge, so basically: how do I access my running gitlab instance? When I go to the local network IP of my host, my browser tells me that it can't connect.
Here is what I'm hoping to achieve:
LOCAL NETWORK
+--------------------------------------------------------------------------+
| |
| +--------------------+ |
| | My_Server | |
| | | |
| | +----------------+ | |
| | | | | "https://my-hostname" +-------------------+ |
| | | Docker: Gitlab | <------------------------+ My_Client | |
| | | | | +-------------------+ |
| | +----------------+ | |
| | | |
| +--------------------+ |
| |
+--------------------------------------------------------------------------+
The ports part of your configuration maps the host's ports to the container's ports.
So if you have
ports:
  - '7780:80'
  - '7443:443'
  - '7722:22'
that redirects port 7780 on your host to port 80 in your container, and so forth. With this knowledge you should be able to access your container's services via the host's local IP address (and then via its hostname, once local DNS resolves it).
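Concretely, assuming the host's LAN address is 192.168.1.20 (a placeholder, substitute your server's IP), the mappings above translate to:

```yaml
ports:
  - '7780:80'    # browse   http://192.168.1.20:7780
  - '7443:443'   # browse   https://192.168.1.20:7443
  - '7722:22'    # git/ssh: ssh -p 7722 git@192.168.1.20
```

Note that the https URL will only behave as expected if the external_url in GITLAB_OMNIBUS_CONFIG agrees with how you actually reach the instance.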