I want to create a Redis cluster in my Docker-based environment. Any Docker base image that supports replication and allows me to create a cluster using docker-compose would be helpful.
Here is my working .yml file:
version: '3.7'
services:
  fix-redis-volume-ownership: # This service grants redis-master ownership permissions over the data volume
    image: 'bitnami/redis:latest'
    user: root
    command: chown -R 1001:1001 /bitnami
    volumes:
      - ./data/redis:/bitnami
      - ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf
  redis-master: # Setting up the master node
    image: 'bitnami/redis:latest'
    ports:
      - '6329:6379' # Port 6329 is exposed to handle connections from outside the server
    environment:
      - REDIS_REPLICATION_MODE=master # Assigning the node as master
      - ALLOW_EMPTY_PASSWORD=yes # No password authentication required; provide a password if needed
    volumes:
      - ./data/redis:/bitnami # Redis master data volume
      - ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
  redis-replica: # Setting up the replica node
    image: 'bitnami/redis:latest'
    ports:
      - '6379' # No host port is published
    depends_on:
      - redis-master # Will only start after the master has booted completely
    environment:
      - REDIS_REPLICATION_MODE=slave # Assigning the node as a replica
      - REDIS_MASTER_HOST=redis-master # Host for the replica node is the redis-master node
      - REDIS_MASTER_PORT_NUMBER=6379 # Port the master listens on
      - ALLOW_EMPTY_PASSWORD=yes # No password required to connect to the node
You can use bitnami-docker-redis.
With Docker Compose, the master/replica mode can be set up using:
version: '2'
services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=my_master_password
    volumes:
      - '/path/to/redis-persistence:/bitnami'
  redis-replica:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=my_master_password
      - REDIS_PASSWORD=my_replica_password
Scale the number of replicas using:
$ docker-compose up --detach --scale redis-master=1 --scale redis-replica=3
The above command scales the number of replicas up to 3 (the service name in the --scale flag must match the one in the compose file, redis-replica here). You can scale down in the same way.
Note: You should not scale the number of master nodes up or down. Always have exactly one master node running.
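To confirm the replicas actually attached, you can query the master's replication state; a quick check, assuming the service name and password used above:
$ docker-compose exec redis-master redis-cli -a my_master_password info replication
# expect role:master and connected_slaves:3 once all replicas are up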
You can also use bitnami-docker-redis-cluster for an actual Redis Cluster (a sketch follows the example below), or create a master/slave replication pair with the official Redis image:
version: '3'
services:
  redis:
    image: redis:5.0.0
    container_name: master
    ports:
      - "6379:6379"
    networks:
      - redis-replication
  redis-slave:
    image: redis:5.0.0
    container_name: slave
    ports:
      - "6380:6379"
    command: redis-server --slaveof master 6379
    depends_on:
      - redis
    networks:
      - redis-replication
networks:
  redis-replication:
    driver: bridge
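As for bitnami-docker-redis-cluster: it targets true Redis Cluster mode (sharding, not just replication). A minimal sketch, assuming the bitnami/redis-cluster image and its documented REDIS_NODES/REDIS_CLUSTER_CREATOR environment variables (check the image's README for the exact names in your version):
version: '3.7'
services:
  redis-node-0:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
  redis-node-1:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
  redis-node-2:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
      # the creator node initializes the cluster once all nodes are up
      - REDIS_CLUSTER_CREATOR=yes
      - REDIS_CLUSTER_REPLICAS=0
Note that Redis Cluster requires at least three master nodes, which is why the sketch uses three nodes with zero replicas.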
Or you can use redislabs/redismod:
redis:
  image: redislabs/redismod:latest
  ports:
    - "6329:6329"
  command:
    [
      "--loadmodule",
      "/usr/lib/redis/modules/redisai.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisearch.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgraph.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redistimeseries.so",
      "--loadmodule",
      "/usr/lib/redis/modules/rejson.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisbloom.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgears.so",
      "Plugin",
      "/var/opt/redislabs/modules/rg/plugin/gears_python.so",
      "--port",
      "6329"
    ]
redis-slave:
  image: redislabs/redismod:latest
  ports:
    - "6380:6379"
  command:
    [
      "--loadmodule",
      "/usr/lib/redis/modules/redisai.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisearch.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgraph.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redistimeseries.so",
      "--loadmodule",
      "/usr/lib/redis/modules/rejson.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisbloom.so",
      "--loadmodule",
      "/usr/lib/redis/modules/redisgears.so",
      "Plugin",
      "/var/opt/redislabs/modules/rg/plugin/gears_python.so",
      "--replicaof",
      "redis",
      "6329"
    ]
  depends_on:
    - redis
Related
I'm trying to figure out how to run a Redis cluster using Docker Compose. However, I'm getting the following error:
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:6380 -> 0.0.0.0:0: listen tcp 0.0.0.0:6380: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
version: '3.9'
services:
  dynamodb-local:
    container_name: dynamodb-local
    image: amazon/dynamodb-local:latest
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath ./data
    ports:
      - 8000:8000
    volumes:
      - ./docker/dynamodb:/home/dynamodblocal/data
    working_dir: /home/dynamodblocal
    networks:
      - webnet
  redis-master: # Setting up the master node
    image: bitnami/redis:latest
    ports:
      - 6379:6379
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=my_master_password
    volumes:
      - ./docker/redis:/bitnami/redis/data # Redis master data volume
      - ./docker/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
    networks:
      - webnet
  redis-replica:
    image: bitnami/redis:latest
    ports:
      - 6380-6382:6379
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=my_master_password
      - REDIS_PASSWORD=my_replica_password
    deploy:
      replicas: 3
    networks:
      - webnet
networks:
  webnet:
    driver: bridge
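That "Ports are not available" error means something on the host is already listening on 6380 — often another container or a local Redis. A couple of quick diagnostics (assuming Linux/macOS):
$ docker ps --format '{{.Names}} {{.Ports}}' | grep 6380    # another container publishing the port?
$ sudo lsof -iTCP:6380 -sTCP:LISTEN                         # a host process holding it?
If the port is taken, either stop the conflicting process or publish a different host port range, as the 6380-6382 mapping above does.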
How do you set up login credentials for the Kibana GUI with Docker ELK stack containers? What arguments and environment variables must be passed in the docker-compose.yaml file to get this working?
To set Kibana user credentials for the Docker ELK stack, we have to set xpack.security.enabled: true, either in elasticsearch.yml or as an environment variable in the docker-compose.yml file.
Pass the username & password as environment variables in docker-compose.yml like below:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_USERNAME: "elastic"
      ELASTIC_PASSWORD: "MyPw123"
      http.cors.enabled: "true"
      http.cors.allow-origin: "*"
      xpack.security.enabled: "true"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.1
    ports:
      - "5044:5044"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml:rw
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      xpack.monitoring.elasticsearch.url: "elasticsearch:9200"
      xpack.monitoring.elasticsearch.username: "elastic"
      xpack.monitoring.elasticsearch.password: "MyPw123"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
configs:
  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml
networks:
  elk:
    driver: overlay
Then add the following lines to kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "MyPw123"
I did not manage to get it working without adding the XPACK_MONITORING & SECURITY flags to Kibana's container, and there was no need for a config file.
However, I was not able to use the kibana user, even after logging in with the elastic user and changing kibana's password through the UI.
NOTE: it looks like you can't set up built-in users other than the elastic superuser through docker-compose's environment. I've tried several times with kibana and kibana_system with no success.
version: "3.7"
services:
elasticsearch:
image: elasticsearch:7.4.0
restart: always
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=123456
kibana:
image: kibana:7.4.0
restart: always
ports:
- 5601:5601
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_COLLECTION_ENABLED=true
- XPACK_SECURITY_ENABLED=true
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD="123456"
depends_on:
- elasticsearch
NOTE: it looks like this won't work with 8.5.3; Kibana won't accept the elastic superuser.
Update
I was able to set up 8.5.3, but with a couple of twists. I would build the whole environment, then run the password setup inside Elasticsearch's container:
bin/elasticsearch-setup-passwords auto
Grab the auto-generated password for the kibana_system user, replace it in docker-compose, then restart only Kibana's container.
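If you prefer not to shell into the container, the same tool can be invoked through compose (assuming the service is named elasticsearch):
$ docker-compose exec elasticsearch bin/elasticsearch-setup-passwords auto
# prints generated passwords for elastic, kibana_system, logstash_system, ...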
Kibana 8.5.3 with environment variables:
kibana:
  image: kibana:8.5.3
  restart: always
  ports:
    - 5601:5601
  environment:
    - ELASTICSEARCH_USERNAME=kibana_system
    - ELASTICSEARCH_PASSWORD=sVUurmsWYEwnliUxp3pX
Restart kibana's container:
docker-compose up -d --build --force-recreate --no-deps kibana
NOTE: make sure to use the --no-deps flag, otherwise it will also recreate the elasticsearch container, since kibana depends on it.
I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by internal name.
I spent a couple of days on this but still can't make it stable:
1. Very often some services just don't get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address: in the container info I have one address (correct), in Consul another (incorrect), so when I try to reach a service at an address like myservice.service.consul I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. Not sure if it's the correct solution, but it works.
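In compose terms that just means extending registrator's command; a sketch against the file above (-resync <seconds> makes registrator periodically re-register all running containers):
registrator:
  image: gliderlabs/registrator:master
  command: -internal -resync 15 consul://consul:8500
  depends_on:
    - consul
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - backend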
I started using Docker and have created a basic docker-compose.yml file. The problem I am currently facing is that I have multiple containers that need to work together as one. My docker-compose.yml file:
version: '3.7'
services:
  redis:
    container_name: redis
    image: redis
    ports:
      - "6379:6379"
  mongodb:
    container_name: mongodb
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: devmyoy123
    ports:
      - "27017:27017"
  node:
    container_name: node
    image: node
    volumes:
      - ./node/:/var/app/node
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
    ports:
      - "9000:9000"
  httpd:
    container_name: httpd
    image: php:7.2-apache
    volumes:
      - ./api/:/usr/local/apache2/htdocs
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
      - node
    ports:
      - "80:80"
      - "443:443"
I need a way to use Node inside the httpd container, and I also need the MongoDB and Redis connection parameters inside the httpd and node containers. Please help me with the docker-compose file.
Right now I'm using Solr in a Dockerfile, but I need to change it to SolrCloud. I need two Solr instances, an internal ZooKeeper, and Docker (local).
This is an example of the docker-compose file I have:
version: "3"
services:
mongo:
image: mongo:latest
container_name: mongo
hostname: mongo
networks:
- gsec
ports:
- 27018:27017
sqlserver:
image: microsoft/mssql-server-linux:latest
hostname: sqlserver
container_name: sqlserver
environment:
SA_PASSWORD: "#Password123!"
ACCEPT_EULA: "Y"
networks:
- gsec
ports:
- 1403:1433
solr:
image: solr
container_name: solr
ports:
- "8983:8983"
networks:
- gsec
volumes:
- data:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- mycore
volumes:
data:
networks:
gsec:
driver: bridge
Thank you in advance.
The Solr Docker image has a ZooKeeper server embedded into it. You just have to start Solr with the right parameters and add the ZooKeeper port mapping 9983:9983 in the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
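To get the second Solr instance the question asks for, you can point it at the first node's embedded ZooKeeper, which listens on the Solr port plus 1000 (9983 here). A sketch, assuming the official image's ZK_HOST variable and the solr service above:
solr2:
  image: solr
  container_name: solr2
  ports:
    - "8984:8983"
  networks:
    - gsec
  environment:
    - ZK_HOST=solr:9983
  depends_on:
    - solr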
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works, or because you're preparing your application (locally?) to deploy in a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents, and your cluster has to scale horizontally.
A version to use with an external ZooKeeper; '-t' changes the data dir in the container. To see other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"
I use this setup locally to test three instances of Solr and three instances of ZooKeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
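Once everything is up, you can sanity-check the ensemble by creating a collection that spans all three Solr nodes (the collection name test is just an example):
$ docker-compose exec solr-1 solr create -c test -shards 3 -replicationFactor 3
$ docker-compose exec solr-1 solr status    # should report cloud mode and the zoo-1/zoo-2/zoo-3 ensemble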