I am trying to set up a Redis cluster. The configuration is the bare minimum cluster setup. I am using docker-compose to run my application, but it always throws the following error. However, when I connect to Redis from an external tool, it connects successfully.
ClusterAllFailedError: Failed to refresh slots cache
const db = new Redis.Cluster([{
  host: "redis",
  port: 6379
}])
I can see the Redis master and replica instances running as Docker containers:
master
container ID - 8a7f4d9fc877
image - bitnami/redis:latest
ports - 0.0.0.0:32862->6379/tcp
name - mogus_redis_1
slave
container ID - f04433e04de5
image - bitnami/redis:latest
ports - 0.0.0.0:32863->6379/tcp
name - mogus_redis-replica_1
docker-compose.yml:
redis:
  image: "bitnami/redis:latest"
  ports:
    - 6379
  environment:
    REDIS_REPLICATION_MODE: master
    ALLOW_EMPTY_PASSWORD: "yes"
  volumes:
    - redis-data:/bitnami
redis-replica:
  image: "bitnami/redis:latest"
  ports:
    - 6379
  depends_on:
    - redis
  environment:
    REDIS_REPLICATION_MODE: slave
    REDIS_MASTER_HOST: redis
    REDIS_MASTER_PORT_NUMBER: 6379
    ALLOW_EMPTY_PASSWORD: "yes"
You are not using a Redis Cluster, just a single master and a replica. That being the case, your app should use the single-instance class, which I assume is something like this:
const db = new Redis({
  host: "redis",
  port: 6379
})
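As a quick check that the single-instance connection works, something like the following should log PONG (a minimal sketch, assuming the ioredis client created above):
db.ping()
  .then((reply) => console.log(reply)) // "PONG" when the connection is healthy
  .catch((err) => console.error("Redis unreachable:", err))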
Related
My objective is to set up 3x Redis Servers and 3x Redis Sentinels on a single Docker VM using Docker Compose, and to expose each of the Redis Servers and Sentinels to the local network.
The static IP of my Docker host is 192.168.2.90.
I've given the Redis Servers ports numbered 6379, 6380, 6381 and exposed those ports through Docker.
My local network is 192.168.2.0/24
My Docker machine's internal network is 172.16.0.0/12.
Everything works great within the Docker containers themselves. The problem comes when I try to connect to Redis using a different machine on my local network.
My Python test script successfully connects to Redis Sentinel. The problem is that when it discovers the master and slaves, the addresses are all on the 172.16.0.0/12 subnet.
Redis master:
('172.18.3.1', 6379)
Redis slaves:
[('172.18.3.3', 6381), ('192.168.2.90', 6379), ('172.18.3.2', 6380)]
When I telnet into the master and run INFO, it likewise gives me the 172.16.0.0/12 addresses.
role:master
connected_slaves:2
slave0:ip=172.18.3.3,port=6381,state=online,offset=252692195,lag=0
slave1:ip=172.18.3.2,port=6380,state=online,offset=252692195,lag=0
I cannot figure out how to get Redis Server and Redis Sentinel to report the 192.168.2.0/24 subnet.
I've defined my Redis Server containers as follows:
redis-a-1:
  container_name: redis-a-1
  hostname: redis-a-1
  image: redis
  command: "redis-server --port 6379 --bind-source-addr 192.168.2.90"
  environment:
    - REDIS_HOST=192.168.2.90
  ports:
    - "6379:6379"
  restart: unless-stopped
  networks:
    blue-green-network:
      ipv4_address: 172.18.3.1
redis-a-2:
  container_name: redis-a-2
  hostname: redis-a-2
  image: redis
  command: "redis-server --port 6380 --bind-source-addr 192.168.2.90 --slaveof 192.168.2.90 6379"
  environment:
    - REDIS_HOST=192.168.2.90
  ports:
    - "6380:6380"
  restart: unless-stopped
  networks:
    blue-green-network:
      ipv4_address: 172.18.3.2
redis-a-3:
  container_name: redis-a-3
  hostname: redis-a-3
  image: redis
  command: "redis-server --port 6381 --bind-source-addr 192.168.2.90 --slaveof 192.168.2.90 6379"
  environment:
    - REDIS_HOST=192.168.2.90
  ports:
    - "6381:6381"
  restart: unless-stopped
  networks:
    blue-green-network:
      ipv4_address: 172.18.3.3
I've tried a bunch of things to get each of the Redis Servers to report the Docker host's IP address to each other instead of their container addresses, but with no success.
Any thoughts on how I can do this, or am I on a fool's errand?
Many thanks.
Note: I am aware that the machine itself is a single point of failure.
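For what it's worth, Redis and Sentinel both provide announce directives aimed at exactly this NAT/port-mapping situation; below is a minimal sketch against the setup above (the directives exist, but the exact values and placement are assumptions, not a tested configuration):
# on each replica, announce the host-side address and published port
command: "redis-server --port 6380 --bind-source-addr 192.168.2.90 --slaveof 192.168.2.90 6379 --replica-announce-ip 192.168.2.90 --replica-announce-port 6380"

# in each Sentinel's sentinel.conf, likewise announce the host-side address
sentinel announce-ip 192.168.2.90
sentinel announce-port 26379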
How do you launch Postgres from Docker, using docker-compose?
My docker-compose.yml looks like:
version: "3.6"
services:
db:
container_name: db
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
ports:
- "5432:5432"
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
web:
container_name: web
build:
context: ..
dockerfile: test_tools/Dockerfile
shm_size: '2gb'
volumes:
- /dev/shm:/dev/shm
depends_on:
- db
This is a simple test environment to mimic a web server and a database server.
Yet when I build this, it fails with:
Creating db ... error
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint db (bdaebf844ee8ddd593b6bc75733d8aa6196112b62f7909be060017a9a33b3c34): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use
Why is my Postgres container trying to allocate a port on the host?
I do have Postgres running on port 5432 of the host, but why would this be interfering? These are just test containers that only need to talk to each other, and should not be accessible to the host, much less allocate host ports.
I've confirmed with docker ps -a that there are no other containers that might also be consuming port 5432.
ports:
  - 5432
will start your Postgres, but on a random (free) host port.
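You can then discover which host port was chosen with docker port db 5432 (db being the container_name set above).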
Try mapping Postgres to a different port on the host, for example:
ports:
  - "15432:5432"
will make your db reachable on port 15432 on your host.
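Alternatively, since you say these containers only need to talk to each other, you can drop the ports: section from db entirely; services on the same Compose network reach each other by service name without publishing any host port.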
I'm a beginner with Docker. I wrote a simple docker-compose.yml file to run two service containers: one for a Node app and one for Redis. The issue is that my app server is unable to connect to the Redis container. Here is my code:
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - test
  app_server:
    image: app_server
    depends_on:
      - redis
    links:
      - redis
    ports:
      - "4004:4004"
    networks:
      - test
networks:
  test:
Output:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
Looks like your web app is connecting to 127.0.0.1/localhost instead of redis. So this is not a Docker issue, but a programming issue within your web app. You could add an environment variable to your web app (something like REDIS_HOST) and then set that parameter in the compose file. This of course requires your web application to read the Redis host from the environment variable.
Example environment variable assignment in compose:
webapp:
  image: my_web_app
  environment:
    - REDIS_HOST=redis
Again, this requires that your web app actually reads the REDIS_HOST environment variable in its code.
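A minimal sketch of what that looks like on the app side, assuming the ioredis client (the library choice and variable names are assumptions):
// read the Redis host from the environment instead of hardcoding localhost
const Redis = require("ioredis");

const client = new Redis({
  host: process.env.REDIS_HOST || "127.0.0.1", // resolves to "redis" under the compose file above
  port: 6379,
});

client.on("error", (err) => console.error("Redis error:", err));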
127.0.0.1:6379 connects to the current container's localhost, not to the redis container.
With your docker-compose file, you connect to Redis via the Redis container's name, because docker-compose automatically creates a Docker bridge network, which allows you to reach another container by its name.
Run docker inspect to see the Redis container name; for example, if the current Redis container name is redis_abc, you can connect to Redis via redis_abc:6379. Or, more simply, add container_name: redis_server to the docker-compose file to get a fixed container name.
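A minimal sketch of pinning the name in the compose file (redis_server is the example name from above; the rest is assumed):
redis:
  image: redis
  container_name: redis_server
  ports:
    - "6379:6379"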
https://docs.docker.com/network/bridge/
I am trying to use Docker Swarm to create a simple Node.js service that sits behind HAProxy and connects to MySQL. So, I created this docker-compose file:
And I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1; I managed to connect using the database container's private IP (10.0.1.4).
The backend tries to connect to the database too soon even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But, as mentioned above, I have these issues.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?
The backend service can't connect to the database using localhost or 127.0.0.1; I managed to connect using the database container's private IP (10.0.1.4).
Don't use IP addresses for connections; use the DNS name.
So you must change the connection to DATABASE_HOST=db, because db is the service name you've defined.
localhost is wrong, because the service is running in a different container than your test service.
The backend tries to connect to the database too soon even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: the depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is your HAProxy configuration that forwards requests for /api/test to http://test:3000?
Regarding DATABASE_HOST=localhost: here localhost means "my local container". You need to use the name of the service where the db is hosted. localhost is a special DNS name that always points to the application host; when using containers, that is the container itself. In container-based development, forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and you lose load balancing), and simply use service names.
As for readiness: Docker has no way of knowing whether the application you started in a container is ready. You need to make the service tolerant of database unavailability and implement some polling/retry mechanism.
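A minimal sketch of such a retry loop on the Node side, assuming the mysql2 driver (the library, attempt count, and delay are assumptions):
// keep retrying until the database accepts connections, then hand back the connection
const mysql = require("mysql2/promise");

async function connectWithRetry(retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await mysql.createConnection({
        host: process.env.DATABASE_HOST, // "db", the service name
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
      });
    } catch (err) {
      console.log(`DB not ready (attempt ${attempt}/${retries}): ${err.code}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("Database never became reachable");
}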
Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and db is set up with the proper password, database, etc. (you usually set these things as environment vars in compose, based on the image's Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of all distributed computing: anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm service task each time.
If you can put this together with public Docker Hub images, I can try to test it for you.
Note that in Swarm, it's likely easier to use Traefik for the proxy (see the Traefik on Swarm Mode Guide), which will auto-update and route incoming requests to the correct container based on the hostname you set in the labels. But you should first test just the app and db; once you know that works, add in the proxy layer.
Also, in Swarm, all your networks should be overlay; you don't need to specify that, as it's the default in stacks.
Below is a sample using Traefik with your settings above. I didn't give the test service a specific Traefik hostname, so it should accept all traffic coming in on port 80 and forward it to port 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:
I am running into an issue and I am not sure how to resolve it.
My Redis Sentinel ecosystem is as follows:
A 3-Sentinel cluster managing 1 master and 2 slaves, deployed with docker-compose.
I have created a Docker overlay network for the ecosystem and I am using docker stack deploy to run the docker-compose YML. redis-cli on each node displays the correct INFO configuration. However, external clients are running into an issue.
When I supply the Sentinel address to the client application (in my case a Spring Redis app), I get back the overlay network's internal IP address for the Redis master. This is not resolvable by the client, so it fails. How can I get an IP address that can be resolved externally? Secondly, is that even possible, since Docker Swarm manages the IP addresses on the overlay network? Is using Docker Swarm even the right approach here? Any feedback would be greatly appreciated.
version: '3'
services:
  redis-master:
    image: redis:latest
    volumes:
      - "/docker-service-data/master:/data"
      - /redis-docker/redis.conf:/etc/redis.conf
    command: redis-server /etc/redis.conf
    ports:
      - 6379:6379
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
    networks:
      - rev_proxy
  redis-slave:
    image: redis:latest
    volumes:
      - "/docker-service-data/slave:/data"
      - /redis-docker/redis.conf:/etc/redis.conf
    command: redis-server /etc/redis.conf --slaveof redis-master 6379
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints: [node.role == worker]
    networks:
      - rev_proxy
  sentinel_1:
    image: <private-registry>/redis-sentinel:1
    deploy:
      mode: replicated
      replicas: 3
    ports:
      - 26379:26379
    depends_on:
      - redis-master
    networks:
      - rev_proxy
networks:
  rev_proxy:
    external:
      name: rev_proxy_net
redis.conf:
I have commented out the bind statement so that the replica listens on all interfaces.
protected-mode is set to no.
There is no authentication at this point.
sentinel.conf:
sentinel monitor master redis-master 6379 2
sentinel down-after-milliseconds master 1000
sentinel parallel-syncs master 1
sentinel failover-timeout master 1000
You might need version: '3.3' and the endpoint_mode: vip option.
Refer to https://docs.docker.com/compose/compose-file/#endpoint_mode
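A minimal sketch of where that option sits in the stack file (which service needs it is an assumption; vip is the default virtual-IP mode, dnsrr is the DNS round-robin alternative):
version: '3.3'
services:
  redis-master:
    image: redis:latest
    deploy:
      endpoint_mode: vip # or dnsrr to resolve to individual task IPs instead of the virtual IP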