Cannot get Redis Cluster to run on Docker Compose

I'm trying to figure out how to run a Redis cluster using Docker Compose.
However, I'm getting the following error:
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:6380 -> 0.0.0.0:0: listen tcp 0.0.0.0:6380: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
version: '3.9'
services:
  dynamodb-local:
    container_name: dynamodb-local
    image: amazon/dynamodb-local:latest
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath ./data
    ports:
      - 8000:8000
    volumes:
      - ./docker/dynamodb:/home/dynamodblocal/data
    working_dir: /home/dynamodblocal
    networks:
      - webnet
  redis-master: # Setting up master node
    image: bitnami/redis:latest
    ports:
      - 6379:6379
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=my_master_password
    volumes:
      - ./docker/redis:/bitnami/redis/data # Redis master data volume
      - ./docker/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
    networks:
      - webnet
  redis-replica:
    image: bitnami/redis:latest
    ports:
      - 6380-6382:6379
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=my_master_password
      - REDIS_PASSWORD=my_replica_password
    deploy:
      replicas: 3
    networks:
      - webnet
networks:
  webnet:
    driver: bridge
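(A note on the bind error itself: Windows is reporting that something already holds port 6380, often a leftover container from a previous run or another local Redis. Two quick checks, the second using docker ps's publish filter:)
> netstat -ano | findstr :6380
> docker ps --filter "publish=6380"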

Related

multiple docker compose files with traefik (v2.1) and database networks

I would like to build a docker landscape. I use a container with a traefik (v2.1) image and a mysql container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
  traefik:
    image: "traefik:v2.1"
    container_name: "traefik"
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api=true"
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=proxy"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.traefik-dashboard.address=:8080"
      - "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
      - "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
      #- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.devnik-resolver.acme.email=####"
      - "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "./data:/etc/traefik"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks:
      - "proxy"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
      - "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
      - "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
      #basic auth
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.routers.traefik.middlewares=auth"
      - "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
  proxy:
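(Side note: the .htpasswd file referenced by the basicauth middleware, mounted from ./data, can be generated with the htpasswd tool from apache2-utils; the user name and password here are placeholders:)
$ htpasswd -nb admin changeme > ./data/.htpasswd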
database/docker-compose.yml
version: "3.3"
services:
  #MySQL Service
  mysql:
    image: mysql:5.7
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    volumes:
      #persist data
      - ./mysqldata/:/var/lib/mysql/
      - ./init:/docker-entrypoint-initdb.d
    networks:
      - "mysql"
    environment:
      MYSQL_ROOT_PASSWORD: ####
      TZ: Europe/Berlin
#Docker Networks
networks:
  mysql:
    driver: bridge
For the overall structure, I want to control all projects via multiple docker-compose files. These containers should run on the same network as the traefik container, and some also with the mysql container.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
  backend:
    image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
    container_name: "dev-releases-backend"
    restart: always
    volumes:
      #laravel logs
      - "./logs/backend:/app/storage/logs"
      #cron logs
      - "./logs/backend/cron.log:/var/log/cron.log"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
      - "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
      - "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
    networks:
      - proxy
      - mysql
    environment:
      TZ: Europe/Berlin
#Docker Networks
networks:
  proxy:
    external:
      name: "traefik_proxy"
  mysql:
    external:
      name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the mysql network (networks: #- mysql) and restart the docker-compose in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the 'mysql' network, but it should not be reachable from the whole traefik network.
Let me know if you need more information.
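(One quick way to see which networks a container has actually joined while debugging this kind of setup:)
$ docker inspect -f '{{json .NetworkSettings.Networks}}' dev-releases-backend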
EDIT (26.03.2020)
I got it running. I put all my containers into one network, "proxy". It seems mysql also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
  proxy:
    external:
      name: "traefik_proxy"
And removed the database_mysql network from dev-releases/docker-compose.yml.
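(You can confirm that mysql really joined the shared network with something like:)
$ docker network inspect traefik_proxy --format '{{range .Containers}}{{.Name}} {{end}}'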
Based on the directory names, your mysql network should be database_mysql.
You can verify this by executing:
$ docker network ls
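With the directory layout above you would see entries along these lines (IDs illustrative):
NETWORK ID     NAME             DRIVER    SCOPE
3f0a1c2d4e5b   database_mysql   bridge    local
9b8c7d6e5f4a   traefik_proxy    bridge    local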
You are also missing a couple of labels for your services, such as:
traefik command line:
  - '--providers.docker.watch=true'
  - '--providers.docker.swarmMode=true'
labels:
  - traefik.docker.network=proxy
  - traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
  - traefik.http.routers.dev-releases-backend.service=dev-releases-backend
You can check this for more info

How to create redis-cluster in docker based environment

I want to create a Redis cluster in my docker-based environment. Any docker base image that supports replication and allows me to create a cluster using docker-compose would be helpful.
Here is my working .yml file
version: '3.7'
services:
  fix-redis-volume-ownership: # This service is to authorise redis-master with ownership permissions
    image: 'bitnami/redis:latest'
    user: root
    command: chown -R 1001:1001 /bitnami
    volumes:
      - ./data/redis:/bitnami
      - ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf
  redis-master: # Setting up master node
    image: 'bitnami/redis:latest'
    ports:
      - '6329:6379' # Port 6329 will be exposed to handle connections from outside server
    environment:
      - REDIS_REPLICATION_MODE=master # Assigning the node as a master
      - ALLOW_EMPTY_PASSWORD=yes # No password authentication required/ provide password if needed
    volumes:
      - ./data/redis:/bitnami # Redis master data volume
      - ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
  redis-replica: # Setting up slave node
    image: 'bitnami/redis:latest'
    ports:
      - '6379' # No port is exposed
    depends_on:
      - redis-master # will only start after the master has booted completely
    environment:
      - REDIS_REPLICATION_MODE=slave # Assigning the node as slave
      - REDIS_MASTER_HOST=redis-master # Host for the slave node is the redis-master node
      - REDIS_MASTER_PORT_NUMBER=6379 # Port number for local
      - ALLOW_EMPTY_PASSWORD=yes # No password required to connect to node
You can use bitnami-docker-redis.
With Docker Compose the master/replica mode can be set up using:
version: '2'
services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=my_master_password
    volumes:
      - '/path/to/redis-persistence:/bitnami'
  redis-replica:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=my_master_password
      - REDIS_PASSWORD=my_replica_password
Scale the number of replicas using:
$ docker-compose up --detach --scale redis-master=1 --scale redis-replica=3
The above command scales the number of replicas up to 3. You can scale
down in the same way.
Note: You should not scale the number of master nodes up or down. Always
have only one master node running.
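(A quick way to confirm replication is wired up, reusing the master password from the compose file above:)
$ docker-compose exec redis-master redis-cli -a my_master_password INFO replication
Look for role:master and connected_slaves:3 in the output.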
bitnami-docker-redis-cluster
You can use this to create replication with a master and a slave node:
version: '3'
services:
  redis:
    image: redis:5.0.0
    container_name: master
    ports:
      - "6379:6379"
    networks:
      - redis-replication
  redis-slave:
    image: redis:5.0.0
    container_name: slave
    ports:
      - "6380:6379"
    command: redis-server --slaveof master 6379
    depends_on:
      - redis
    networks:
      - redis-replication
networks:
  redis-replication:
    driver: bridge
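(To sanity-check this pair, the master should report one connected slave:)
$ docker exec master redis-cli INFO replication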
Or you can use this with redislabs/redismod:
redis:
  image: redislabs/redismod:latest
  ports:
    - "6329:6329"
  command:
    [
      "--loadmodule",
      "/usr/lib/redis/modules/redisai.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisearch.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgraph.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redistimeseries.so",
      "--loadmodule",
      "/usr/lib/redis/modules/rejson.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisbloom.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgears.so",
      "Plugin",
      "/var/opt/redislabs/modules/rg/plugin/gears_python.so",
      "--port",
      "6329"
    ]
redis-slave:
  image: redislabs/redismod:latest
  ports:
    - "6380:6379"
  command:
    [
      "--loadmodule",
      "/usr/lib/redis/modules/redisai.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisearch.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgraph.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redistimeseries.so",
      "--loadmodule",
      "/usr/lib/redis/modules/rejson.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisbloom.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgears.so",
      "Plugin",
      "/var/opt/redislabs/modules/rg/plugin/gears_python.so",
      "--REPLICAOF",
      "redis",
      "6329"
    ]
  depends_on:
    - redis
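(If you have redis-cli installed on the host, you can check that the replica came up with its modules loaded and is tracking the master; 6380 is the host port published above:)
$ redis-cli -p 6380 MODULE LIST
$ redis-cli -p 6380 INFO replication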

Traefik not detecting containers running in network_mode=host (192.168.99.x)

I am running a traefik container in Docker Toolbox with the default bridge network, and one more container running with network_mode=host, but traefik is detecting the service with 127.0.0.1 instead of the Docker host IP (192.168.99.x).
Can anyone help me with this?
version: '3.7'
services:
  reverse_proxy:
    image: traefik
    command: --api --docker --docker.domain=docker.localhost --logLevel=DEBUG
    ports:
      - "81:80"
      - "8081:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - backend
  whoami:
    image: containous/whoami
    labels:
      - "traefik.frontend.rule=Host:whoami.localhost"
      - "traefik.enable=true"
      - "traefik.backend=whoami"
      - "traefik.port=80"
    network_mode: host
    restart: always
networks:
  backend:
    driver: bridge
NOTE: using Docker Toolbox on Windows 10.

jupyter fails to open a directory to run a docker container

Docker is running, and I want to run a container on Windows 10. When I run docker-compose from Windows PowerShell, some download jobs complete, then an error occurs and the container cannot run. It seems that jupyter fails to build or open a directory. Could anyone help me with this problem? The command line and the error are as follows:
PS C:\Users\mmva> cd C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose
PS C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose> docker-compose up
Building jupyter
Step 1/19 : FROM jupyter/jupyterhub
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Extracting [==================================================>] 51.34MB/51.34MB
a3ed95caeb02: Download complete
298ffe4c3e52: Download complete
758b472747c8: Download complete
8b9809a68afc: Download complete
93b253b5483d: Download complete
ef8136abb53c: Download complete
ERROR: Service 'jupyter' failed to build: failed to register layer: re-exec error: exit status 1: output: Failed to OpenForBackup failed in Win32: open \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz: The filename, directory name, or volume label syntax is incorrect. (0x1f) \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz
My docker version is 17.09.0-ce-win33 (13620).
I think the docker-compose file format version is 3.
The content of the docker-compose file:
version: '3'
# IPTABLES RULES IF NECESSARY
#-A INPUT -i br+ -j ACCEPT
#-A INPUT -i docker0 -j ACCEPT
#-A OUTPUT -o br+ -j ACCEPT
#-A OUTPUT -o docker0 -j ACCEPT
# The .env file is for production use with server-specific configurations
services:
  # Frontend web proxy for accessing services and providing TLS encryption
  nginx:
    build: ./nginx
    container_name: md2k-nginx
    restart: always
    volumes:
      - ./nginx/site:/var/www
      - ./nginx/nginx-selfsigned.crt:/etc/ssh/certs/ssl-cert.crt
      - ./nginx/nginx-selfsigned.key:/etc/ssh/certs/ssl-cert.key
    ports:
      - "443:443"
      - "80:80"
    links:
      - apiserver
      - grafana
      - jupyter
  apiserver:
    build: ../CerebralCortex-APIServer
    container_name: md2k-api-server
    restart: always
    expose:
      - 80
    links:
      - mysql
      - kafka
      - minio
    depends_on:
      - mysql
    environment:
      - MINIO_HOST=${MINIO_HOST:-minio}
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - MYSQL_HOST=${MYSQL:-mysql}
      - MYSQL_DB_USER=${MYSQL_ROOT_USER:-root}
      - MYSQL_DB_PASS=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - KAFKA_HOST=${KAFKA_HOST:-kafka}
      - JWT_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - FLASK_HOST=${FLASK_HOST:-0.0.0.0}
      - FLASK_PORT=${FLASK_PORT:-80}
      - FLASK_DEBUG=${FLASK_DEBUG:-False}
    volumes:
      - ./data:/data
  # Data visualizations
  grafana:
    image: "grafana/grafana"
    container_name: md2k-grafana
    restart: always
    ports:
      - "3000:3000"
    links:
      - influxdb
    environment:
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:%(http_port)s/grafana/
      # - GF_INSTALL_PLUGINS=raintank-worldping-app,grafana-clock-panel,grafana-simple-json-datasource
    volumes:
      - timeseries-storage:/var/lib/grafana
      # - timeseries-storage:/etc/grafana
  influxdb:
    image: "influxdb:alpine"
    container_name: md2k-influxdb
    restart: always
    ports:
      - "8086:8086"
    volumes:
      - timeseries-storage:/var/lib/influxdb
  # Data Science Dashboard Interface
  jupyter:
    build: ./jupyterhub
    container_name: md2k-jupyterhub
    ports:
      - 8000
    restart: always
    network_mode: "host"
    pid: "host"
    environment:
      TINI_SUBREAPER: 'true'
    volumes:
      - ./jupyterhub/conf:/srv/jupyterhub/conf
    command: jupyterhub --no-ssl --config /srv/jupyterhub/conf/jupyterhub_config.py
  # Cerebral Cortex backend
  kafka:
    image: wurstmeister/kafka:0.10.2.0
    container_name: md2k-kafka
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: ${MACHINE_IP:-10.0.0.1}
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_CREATE_TOPICS: "filequeue:4:1,processed_stream:16:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - data-storage:/kafka
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: md2k-zookeeper
    restart: always
    ports:
      - "2181:2181"
  mysql:
    image: "mysql:5.7"
    container_name: md2k-mysql
    restart: always
    ports:
      - 3306:3306 # Default mysql port
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-cerebralcortex}
      - MYSQL_USER=${MYSQL_USER:-cerebralcortex}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-cerebralcortex_pass}
    volumes:
      - ./mysql/initdb.d:/docker-entrypoint-initdb.d
      - metadata-storage:/var/lib/mysql
  minio:
    image: "minio/minio"
    container_name: md2k-minio
    restart: always
    ports:
      - 9000:9000 # Default minio port
    environment:
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
    command: server /export
    volumes:
      - object-storage:/export
  cassandra:
    build: ./cassandra
    container_name: md2k-cassandra
    restart: always
    ports:
      - 9160:9160 # Thrift client API
      - 9042:9042 # CQL native transport
    environment:
      - CASSANDRA_CLUSTER_NAME=cerebralcortex
    volumes:
      - data-storage:/var/lib/cassandra
volumes:
  object-storage:
  metadata-storage:
  data-storage:
  temp-storage:
  timeseries-storage:
  user-storage:
  log-storage:

Unable to connect docker container to logstash via gelf driver

Hi guys, I'm having trouble sending my server container logs to my ELK stack. No input reaches Logstash, so I'm unable to set a Kibana index for collecting logs. I think my problem is in the port settings.
Here is the docker-compose yml for the LAMP stack (only the server service):
version: '3'
services:
  server:
    build: ./docker/apache
    links:
      - fpm
    ports:
      - 80:80 # HTTP
      - 443:443 # HTTPS
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://127.0.0.1:5000"
        tag: "server"
And here is the docker-compose yml for the ELK stack, based on the deviantony/docker-elk GitHub project:
version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
I've found the mistake: I have to specify the UDP protocol in the logstash service port definition, since Compose port mappings default to TCP while the gelf driver sends UDP datagrams.
logstash:
  build: logstash/
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - "5000:5000/udp"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
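(Note that the changed port mapping only takes effect once the service is recreated, e.g.:)
$ docker-compose up -d --force-recreate logstash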
You need to use the gelf input plugin. Here is an example of a functioning compose file:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "127.0.0.1:12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1 ; done'
and checking docker logs on the logstash container.
