Dockerized Redis cluster nodes - docker

I am trying to set up a dockerized Redis cluster spanning multiple host machines.
In my current setup I have two hosts with public IP addresses, and I start a similar configuration on both. This config consists of a compose.yml:
services:
  redis-cluster:
    container_name: node-redis
    build:
      context: ../../
      dockerfile: deployment/node/cluster-dockerfile
    restart: always
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7002:7002"
    networks:
      node_net:
        ipv4_address: 10.20.0.6

networks:
  node_net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.20.0.0/16
          gateway: 10.20.0.1
which is identical on both hosts.
The Dockerfile uses supervisord to start 3 redis instances (on ports 7000, 7001 and 7002) as such:
FROM ubuntu:20.04

RUN apt update && \
    DEBIAN_FRONTEND=noninteractive apt install -y redis-server supervisor

COPY ./deployment/production-node/cluster-files/node1 /app/cluster-files
COPY ./deployment/production-node/cluster-files/node1/supervisord.conf /etc/supervisor/supervisord.conf

CMD supervisord -c /etc/supervisor/supervisord.conf && \
    sleep infinity
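The supervisord.conf itself is not reproduced here; roughly, it contains one [program] entry per instance along these lines (the file names under /app/cluster-files are placeholders, not the actual names):

[supervisord]
nodaemon=false

[program:redis-7000]
command=redis-server /app/cluster-files/redis-7000.conf
autorestart=true

[program:redis-7001]
command=redis-server /app/cluster-files/redis-7001.conf
autorestart=true

[program:redis-7002]
command=redis-server /app/cluster-files/redis-7002.conf
autorestart=true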
Each redis instance is configured as such:
port <port number>
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
masterauth pass
requirepass pass
protected-mode no
bind 0.0.0.0
unixsocket /tmp/redis.sock
loglevel debug
logfile "serverlog.7000.txt"
cluster-config-file nodes7000.conf
cluster-announce-ip <public ip of host machine>
cluster-announce-port <port number>
After running docker compose up on both hosts and seeing the Redis instances start correctly, I try to use
redis-cli to create the cluster as such:
redis-cli -a pass --cluster create <host1-ip>:7000 <host1-ip>:7001 \
<host1-ip>:7002 <host2-ip>:7000 <host2-ip>:7001 <host2-ip>:7002 \
--cluster-replicas 1
This results in waiting infinitely for the cluster to join.
After some consideration I figured that this may be caused by not exposing the proper cluster bus ports in Docker. To solve this I changed the compose file to list additional ports:
- "7000:7000"
- "7001:7001"
- "7002:7002"
- "17000:17000"
- "17001:17001"
- "17002:17002"
And added this line to each redis.conf file:
cluster-port 17000 <and 17001, 17002 respectively, matching the client port used by each instance>
After those changes I am not even able to connect to a single instance and get an instant connection refused when trying to create the cluster.
As of now I am not sure how to solve this problem and would be grateful for any hints on how to properly configure this kind of Redis cluster without starting the containers in host network mode.
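For reference, one detail that may matter here (an assumption, not something verified in this setup): cluster-port only exists in Redis 7.0+, while the redis-server package on Ubuntu 20.04 is 5.x and exits on an unknown directive, which would explain the instant connection refused after that change. On Redis 5 the bus port is always the client port + 10000, so the announce directives for the instance on port 7000 would presumably look something like this (7001/7002 analogous, using the port mappings listed above):

port 7000
cluster-enabled yes
cluster-announce-ip <public ip of host machine>
cluster-announce-port 7000
# bus port is fixed at client port + 10000 on Redis 5, so publish and announce 17000
cluster-announce-bus-port 17000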

Related

Cannot connect to docker container (redis) in host mode

This is probably just related to WSL in general, but Redis is my use case.
This works fine and I can connect like:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error when the container isn't running, so not sure if it's a docker issue or WSL?
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host

  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no

  # Five more the same as the above
According to the provided docker-compose.yml file, container ports are not exposed, so they are unreachable from the outside (your Windows/WSL host). Check here for the official reference. More about docker and ports here
As an example for redis-1 service, you should add the following to the definition.
...
redis-1:
  ports:
    - 7001:7001
  ...
...
The docker exec ... is working because the port is reachable from inside the container.
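With the port published, a quick way to confirm reachability from the Windows side (assuming redis-cli is installed there; any Redis client will do) is:

redis-cli -h host.docker.internal -p 7001 -a Password123 ping

A PONG reply means the port mapping works; a 10061 error at that point would indicate something other than the missing ports section.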

Private "host" for docker compose network

Given a docker-compose file something like this
version: "3.8"
services:
service-one:
ports:
- "8881:8080"
image: service-one:latest
service-one:
ports:
- "8882:8080"
image: service-two:latest
what happens is that service-one is exposed to the host network on port 8881 and service-two would be exposed on the host network at port 8882.
What I'd like to arrange is that, within the network created for the docker-compose project, there is a "private host" on which service-one is exposed at port 8881 and service-two at port 8882, so that any container in the docker-compose network can connect to that "private host" and reach the services on their configured HOST_PORT, but not on the actual Docker host. That is, whatever network configuration usually bridges from the CONTAINER_PORT to the HOST_PORT should happen privately within the docker-compose network, without the possibility of port conflicts on the actual host network.
I tweaked this to fit your case. The idea is to run socat in a gateway container so that neither the containers nor the images need to change (just the service names). So, from service-X-backend you are able to connect to:
service-one on port 8881, and
service-two on port 8882
Tested with nginx containers.
If you wish to make some ports public, you need to publish them from the gateway itself.
version: "3.8"
services:
service-one-backend:
image: service-one:latest
networks:
- gw
service-two-backend:
image: service-two:latest
networks:
- gw
gateway:
image: debian
networks:
gw:
aliases:
- service-one
- service-two
depends_on:
- service-one-backend
- service-two-backend
command: sh -c "apt-get update
&& apt-get install -y socat
&& nohup bash -c \"socat TCP-LISTEN: 8881,fork TCP:service-one-backend:8080 2>&1 &\"
&& socat TCP-LISTEN: 8882,fork TCP:service-two-backend:8080"
networks:
gw:
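For a quick sanity check from one of the backends (assuming curl is available in the backend images; otherwise install it or use wget):

docker compose exec service-one-backend curl -s http://service-two:8882/
docker compose exec service-two-backend curl -s http://service-one:8881/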

Issue while connecting to redis master node running in sentinel mode in docker containers

I am running redis in docker containers and I am using redis sentinel mode.
I have setup the following configuration -
3 redis sentinels nodes
1 redis master node
2 redis slave nodes
I am running all these in my local machine. So in total 6 docker containers running through docker-compose in bridge networking mode.
All containers have port mappings to the outside.
All containers can access each other as they are in the same bridge Docker network created when running docker-compose up.
I have created a Java client using Redisson library to access redis.
Configured the client to use redis-sentinel mode as follows -
Config config = new Config();
config.useSentinelServers()
    .setMasterName("redis-master")
    .addSentinelAddress("redis://127.0.0.1:26379")
    .addSentinelAddress("redis://127.0.0.1:26380")
    .addSentinelAddress("redis://127.0.0.1:26381");
RedissonClient client = Redisson.create(config);
This is where I am facing the issue.
Whenever I try to run some commands on redis through this client, the request goes through sentinel nodes which gives me current redis master node address.
But my Java client cannot communicate with the Redis master directly, as the IP returned by the sentinel node is the internal Docker network IP of the master node, which is not accessible outside the Docker network, and it fails with an exception similar to the one below:
Exception in thread "main" org.redisson.client.RedisConnectionException: Unable to connect to Redis server: 172.21.0.2/172.21.0.2:6379
at org.redisson.connection.pool.ConnectionPool$2$1.operationComplete(ConnectionPool.java:161)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
How to fix this issue?
Do I need to run it in some different network mode?
Or is there some way to translate this internal Docker IP to the actual IP of the machine running the Docker containers?
I ran into this issue today as well, attempting to get this set up for a test instance. My initial compose file was based on https://blog.alexseifert.com/2016/11/14/using-redis-sentinel-with-docker-compose/ with modifications to suit my needs. I was able to find a workaround by 1) binding the ports to my host machine, 2) setting the depends_on flag in docker-compose, and 3) setting my sentinel.conf to point to the hostname that my master was running on:
https://docs.docker.com/compose/startup-order/
It's a little hard to explain, but I'll try my best:
Your master/replica will look something like this in docker-compose:
redis-master:
  image: redis:5.0.4-alpine
  volumes:
    - <mounted-data-directory>
    - "<local-master-config-directory>/redis.conf:/usr/local/etc/redis/redis.conf"
  ports:
    - "6379:6379"
  command:
    - redis-server
    - /usr/local/etc/redis/redis.conf

redis-replica:
  image: redis:5.0.4-alpine
  links:
    - redis-master
  volumes:
    - <mounted-data-directory>
    - "<local-replica-config-directory>:/usr/local/etc/redis/redis.conf"
  ports:
    - "6380:6380"
  depends_on:
    - redis-master
  command:
    - redis-server
    - /usr/local/etc/redis/redis.conf
    - --slaveof redis-master 6379
For my sentinels I gave each one a Dockerfile and a sentinel.conf (each having different ports):
Dockerfile:
FROM redis:5.0.4-alpine
RUN mkdir -p /redis
WORKDIR /redis
COPY sentinel.conf .
RUN chown redis:redis /redis/*
ENTRYPOINT ["redis-server", "/redis/sentinel.conf", "--sentinel"]
sentinel.conf
port 26379
dir /tmp
bind 0.0.0.0
sentinel monitor mymaster <hostname> 6379 2
sentinel down-after-milliseconds mymaster 1000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 10000
It's worth noting that I attempted to do this with 127.0.0.1 and localhost and I don't think either worked, so I used the hostname of the machine I ran this on. I was kind of trying anything and everything at that point.
Each sentinel (I had three) had a separate entry referencing their build contexts and mapping the port in sentinel.conf to the local port. So in docker-compose my sentinels looked like this:
# Instance 1
redis-sentinel:
  build:
    context: <path-to-context>
  links:
    - redis-master
  ports:
    - "26379:26379"
  depends_on:
    - redis-replica
What I did was definitely a hack and I wouldn't do it in production. I'm pretty sure there's a much better networking solution for docker, I just didn't want to go too far down the rabbit hole for what I needed to test. Hope this helps.
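A less hack-ish direction worth mentioning (a sketch based on standard Redis options, not part of the setup above): sentinels and replicas can be told which address to advertise, so clients outside the Docker network are never handed the internal 172.x address. Assuming the port mappings above and a host reachable as <host-ip>, the relevant directives would be roughly:

# sentinel.conf (one per sentinel, matching its mapped port)
sentinel announce-ip <host-ip>
sentinel announce-port 26379

# redis.conf on the master/replica containers
replica-announce-ip <host-ip>
replica-announce-port 6379

The Redisson client should then receive <host-ip>:6379 from the sentinels instead of the internal Docker IP.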

Use docker container as SSH tunnel inside network

I am trying to use a Docker container to set up an SSH tunnel to a remote database that is only reachable via SSH. I have a Docker network with several containers and want to make the database available to all the containers in the network.
The Dockerfile for the SSH container looks like this:
FROM debian:stable
RUN apt-get update && apt-get -y --force-yes install openssh-client autossh postgresql-client
COPY .ssh /root/.ssh
RUN chown root:root /root/.ssh/config
EXPOSE 12345
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "12345:localhost:1234", "user#remotedb" ]
Inside the .ssh directory are my keys and the config file, which looks like this:
Host remotedb
StrictHostKeyChecking no
ServerAliveInterval 30
ServerAliveCountMax 3
The tunnel itself works on this container, meaning I can access the db from inside it as localhost:12345.
Now I want to access it also from other containers in the same network.
My docker-compose.yml looks like this (I commented out some trials):
version: '2'

networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.12.0.0/16
          gateway: 10.12.0.1

services:
  service_1:
    image: my/image:alias
    volumes:
      - somevolume
    # links:
    #   - my_ssh
    ports:
      - "8080"
    environment:
      ENV1: blabla
    networks:
      my_network:
        ipv4_address: 10.12.0.12

  my_ssh:
    build:
      context: ./dir_with_Dockerfile
    # ports:
    #   - "23456:12345"
    expose:
      - "12345"
    networks:
      my_network:
        ipv4_address: 10.12.0.13
I've tried to access the remote database from inside service_1 with hostnames 'my_ssh', the ipv4_address, 'localhost', and with ports 12345 and 23456. None of these combinations have worked. Where could I go wrong?
Or how else could I achieve a permanent connection from my containers to the remote database?
More of a suggestion than an answer; setting up OpenVPN on your database network and your docker swarm would allow you to connect the two networks together. It would also make it easier for you to configure more hosts in the future.
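One more detail worth checking in the tunnel container itself (an observation on the question's setup, not part of the answer above): ssh -L binds the forwarded port to 127.0.0.1 inside the container by default, so sibling containers cannot reach it even when it is exposed. Binding to all interfaces should make my_ssh:12345 reachable from service_1; a sketch of the adjusted ENTRYPOINT, keeping the original ports and host alias:

# forward 0.0.0.0:12345 in this container to port 1234 on the remote database host
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "0.0.0.0:12345:localhost:1234", "user@remotedb"]

Other containers on my_network could then point at my_ssh:12345 (or 10.12.0.13:12345) as the database address.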

How to deploy an IPv6 container with Docker Swarm Mode or Docker Compose

In the end I'd like to have a pure IPv6 network deployed via compose or swarm mode. For now, I'd just like to have a single container deployed with IPv6 (only). I am not currently interested in routing (just container to container connectivity).
My setup:
OS: Centos 7
dockerd --ipv6 --fixed-cidr-v6=2001:db8:1::/64 --iptables=true --ip-masq=true --mtu=1600 --experimental=true
docker-engine-17.05.0.ce-1.el7.centos.x86_64.rpm
Host has IPv4 and IPv6 addresses. Forwarding is on for both (not that it matters for me).
I've tried what seems to be every combination (I'm only listing a couple)
Self-contained Docker stack with container and network:
version: '3'

networks:
  app_net:
    driver: overlay
    driver_opts:
      com.docker.network.enable_ipv6: "true"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
        - subnet: 2001:3984:3989::/64

services:
  app:
    image: alpine
    command: sleep 600
    networks:
      app_net:
        ipv4_address: 0.0.0.0
        ipv6_address: 2001:3984:3989::10
Result: Only IPv4 address in container, 0.0.0.0 is ignored.
Externally pre-created network
(as per https://stackoverflow.com/a/39818953/1735931)
docker network create --driver overlay --ipv6 \
    --subnet=2001:3984:3989::/64 --attachable ext_net
version: '3'

networks:
  ext_net:
    external:
      name: ext_net

services:
  app:
    image: alpine
    command: ifconfig eth0 0.0.0.0 ; sleep 600
    cap_add:
      - NET_ADMIN
    networks:
      ext_net:
        ipv4_address: 0.0.0.0
        ipv6_address: 2001:3984:3989::10
Result: Both IPv4 and IPv6 addresses in container, but cap_add is ignored (not supported in Swarm Mode), and thus the ifconfig disable ipv4 attempt above does not work.
I don't currently have docker-compose installed, and will probably try that next, but is there a way to run pure IPv6 containers in Docker Swarm Mode?
Note: I am able to run and configure a few IPv6-only containers manually without swarm/compose:
(Create network as above or even just use the default bridge)
$ docker run --cap-add=NET_ADMIN --rm -it alpine
$$ ifconfig eth0 0.0.0.0
$$ ping6 other-container-ipv6-address # WORKS!
or shorthand:
$ docker run --cap-add=NET_ADMIN --rm -it alpine sh -c "/sbin/ifconfig eth0 0.0.0.0 ; sh"
I was able to hack it with docker-compose via severe ugliness. If you're desperate, here it is. (This method can never work for Swarm Mode due to privilege escalation).
The Plan
Grant containers rights to manage IP's
Remove IPv4 IP address from within each container on startup.
Use a volume to improvise a hosts file in place of DNS (DNS is IPv4-only in docker).
Steps
Enable IPv6 in Docker daemon.
Create a docker-compose.yml file that creates an ipv6 network, a volume for shared files, and two containers
Run an entrypoint script in each container that performs the aforementioned steps.
Files
docker-compose.yml
# Note: enable_ipv6 does not work in version 3!
version: '2.1'

networks:
  app_net:
    enable_ipv6: true
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
        - subnet: 2001:3984:3989::/64

services:
  app1:
    build: ./server
    hostname: server1
    command: blablabla  # example of arg passing to ipv6.sh
    cap_add:
      - NET_ADMIN
    volumes:
      - ipv6stuff:/ipv6stuff
    networks:
      - app_net

  app2:
    build: ./server
    hostname: server2
    command: SOMETHING  # example of arg passing to ipv6.sh
    cap_add:
      - NET_ADMIN
    volumes:
      - ipv6stuff:/ipv6stuff
    networks:
      - app_net

volumes:
  ipv6stuff:
server/Dockerfile
FROM alpine:latest
ADD files /
RUN apk --update add bash #simpler scripts
# Has to be an array for parameters to work via command: x in compose file, if needed
ENTRYPOINT ["/ipv6.sh"]
server/files/ipv6.sh
#!/bin/bash
# Optionally conditional logic based on parameters here...
# (for example, conditionally leave ipv4 address alone in some containers)
#
# Remove ipv4
ifconfig eth0 0.0.0.0
IP6=$(ip addr show eth0 | grep inet6 | grep global | awk '{print $2}' | cut -d / -f 1)
echo "Host $HOSTNAME has ipv6 ip $IP6"
# Store our entry in the shared volume
echo "$IP6 $HOSTNAME" > /ipv6stuff/hosts.$HOSTNAME
# Remove existing ipv4 line from /etc/hosts just to be thorough
# Docker does not allow removal of this file and thus simple sed -i isn't going to work.
cp /etc/hosts /tmp/1 ; sed -i "s/^.*\s$HOSTNAME//" /tmp/1 ; cat /tmp/1 > /etc/hosts
# Wait for all containers to start
sleep 2
# Put everyone's entries in our hosts file.
cat /ipv6stuff/hosts.* >> /etc/hosts
echo "My hosts file:"
cat /etc/hosts
# test connectivity (hardcoded)
ping6 -c 3 server1
ping6 -c 3 server2
