Redis Sentinel docker setup issue

I'm trying to set up Redis Sentinel in Docker using the Bitnami images.
https://hub.docker.com/r/bitnami/redis-sentinel
Here is my docker-compose file:
version: '2'
networks:
  app-tier:
    driver: bridge
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - app-tier
  slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6380:6380'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_MASTER_HOST=127.0.0.1
    networks:
      - app-tier
  redis-sentinel:
    image: 'bitnami/redis-sentinel:latest'
    environment:
      - REDIS_MASTER_HOST=127.0.0.1
    ports:
      - '26379:26379'
    networks:
      - app-tier
But in this case my slave is not able to sync up with the master, and the master's info shows 0 slaves connected.
Another alternative I tried is:
version: '2'
networks:
  app-tier:
    driver: bridge
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - REDIS_REPLICATION_MODE=master
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - app-tier
  slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6380:6380'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis
    depends_on:
      - redis
    networks:
      - app-tier
  redis-sentinel:
    image: 'bitnami/redis-sentinel:latest'
    environment:
      - REDIS_MASTER_HOST=redis
    depends_on:
      - redis
    ports:
      - '26379:26379'
    networks:
      - app-tier
In this case the sentinel and the slave are running on the virtual network, which I am not able to access from my local machine. My goal here is to monitor the slave node so that I can see its logs.
Can anyone help?
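One detail worth checking in the second file, sketched as an assumption rather than a confirmed fix: the Bitnami Redis container still listens on its default port 6379 internally, so the mapping '6380:6380' publishes a host port that nothing inside the slave container listens on. Mapping the host port onto the container's 6379 would make the replica reachable locally:

  slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6380:6379' # host 6380 -> container 6379 (the Redis default); '6380:6380' points at an unused port
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis
    depends_on:
      - redis
    networks:
      - app-tier

With that mapping, redis-cli -p 6380 on the host reaches the replica; its logs are also visible via docker-compose logs slave either way.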

Related

Logging to Elasticsearch is not working when pointing to the docker url in the Api docker image

Logging to Elasticsearch works fine when debugging locally, with the app connecting to Elasticsearch at https://localhost:9200, but it fails to connect from the docker image of dotnet.monitoring.api to the docker image of Elasticsearch at http://elasticsearch:9200.
Below is the docker compose file.
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"
volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge
Your API image is not in the same network. Make es-net the default:
networks:
  default:
    name: es-net
    external: true
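Alternatively, the API service can be attached to es-net explicitly instead of redefining the default network; a minimal sketch against the compose file above, with the other keys unchanged:

  dotnet.monitoring.api:
    # image, environment, ports, depends_on as before
    networks:
      - es-net

Either way, the API container ends up on the same bridge network as elasticsearch, so the http://elasticsearch:9200 alias resolves.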

The docker container of Caddy is in restarting state

This is the docker-compose file that starts the containers; all are working fine except Caddy.
version: '3'
services:
  db:
    image: postgres:latest
    restart: always
    expose:
      - "5555"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=chiefonboarding
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - global
  web:
    image: chiefonboarding/chiefonboarding:latest
    restart: always
    expose:
      - "9000"
    environment:
      - SECRET_KEY=somethingsupersecret
      - BASE_URL=https://on.hr.gravesfoods.com
      - DATABASE_URL=postgres://postgres:postgres@db:5432/chiefonboarding
      - ALLOWED_HOSTS=on.hr.gravesfoods.com
      - DEFAULT_FROM_EMAIL=hello@gravesfoods.com
    depends_on:
      - db
    networks:
      - global
  caddy:
    image: caddy:2.3.0-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - global
volumes:
  pgdata:
  caddy_data:
  caddy_config:
networks:
  global:
Also, these are the logs it is generating:
{"level":"info","ts":1656425557.6256478,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
run: adapting config using caddyfile: server block 0, key 0 (https://on.hr.gravesfoods.com:80): determining listener address: [https://on.hr.gravesfoods.com:80] scheme and port violate convention
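The last line is the actual failure: the Caddyfile's site address pairs the https:// scheme with port 80, which Caddy rejects by convention. A minimal sketch of a site block that avoids the conflict, assuming the intent is to serve the domain over HTTPS and proxy to the web service on its exposed port 9000 (the question doesn't show the Caddyfile, so this is illustrative only):

on.hr.gravesfoods.com {
    # no explicit scheme/port: Caddy serves HTTPS on 443 and redirects port 80
    reverse_proxy web:9000
}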

Docker compose containers cannot connect to each other through the network bridge

I am trying to run this docker compose file, but my microservices cannot connect to the eureka server through the docker network bridge. Does anyone know where the problem is? This is the docker compose file I am running:
version: '3'
services:
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
  eureka-server:
    image: shaslan/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
  api-gw:
    image: shaslan/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
  customer:
    image: shaslan/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  fraud:
    image: shaslan/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  notification:
    image: shaslan/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
networks:
  spring:
    driver: bridge
When I open up my eureka server, it doesn't discover any microservices. Any help would be appreciated.
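One common cause worth ruling out, since the Spring configuration isn't shown: if the docker profile doesn't override the Eureka URL, each client defaults to localhost inside its own container and never reaches eureka-server. A sketch of the standard Spring Cloud property, placed in a hypothetical application-docker.yml in each client microservice:

# application-docker.yml (hypothetical file name) in each client microservice
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-server:8761/eureka

With all services on the same spring bridge network, the eureka-server container name resolves from every client.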

Saleor backend not available under droplet IP from DigitalOcean

I'd like to deploy my saleor-shop application completely via docker.
So I've built the respective images for saleor backend, storefront & dashboard.
Running the app locally works fine.
Backend is available on localhost:8000/graphql
Storefront runs at localhost:3000
Dashboard runs at localhost:9000
If I run the app on the droplet IP, I get issues with the saleor backend.
As of now, trying to access XXX.XX.XXX.XXX:8000 results in "This site can't be reached".
The storefront and dashboard are accessible on XXX.XX.XXX.XXX:3000 and XXX.XX.XXX.XXX:9000, but without any interaction with the backend, because it's not available. That's why the graphql calls on the storefront are not functioning, and logging into the dashboard does not work either. I think I'm missing something here and would appreciate any help.
Within my droplet I'm using the following docker-compose.yml file to get my docker containers up:
services:
  api:
    ports:
      - 8000:8000
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    env_file: common.env
    environment:
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/
  storefront:
    image: XXX/murukku-storefront
    ports:
      - 3000:80
    restart: unless-stopped
  dashboard:
    image: XXX/murukku-dashboard
    ports:
      - 9000:80
    restart: unless-stopped
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
I was testing Saleor in a docker setup like you, and I found a solution! You have to set more env variables; they are all explained on the GitHub pages of the storefront and the dashboard.
Here is my config if you want:
version: '2'
services:
  api:
    ports:
      - 8000:8000
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    volumes:
      - ./saleor/saleor/:/app/saleor:Z
      - ./saleor/templates/:/app/templates:Z
      - ./saleor/tests/:/app/tests
      # shared volume between worker and api for media
      - saleor-media:/app/media
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    environment:
      # - DEFAULT_CURRENCY=EUR
      # - DEFAULT_COUNTRY=
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,192.168.0.50
      - ALLOWED_HOSTS=localhost,192.168.0.50
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://192.168.0.50:3000/
      - DASHBOARD_URL=http://192.168.0.50:9000/
  storefront:
    build:
      context: ./saleor-storefront
      dockerfile: ./Dockerfile.dev
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./saleor-storefront/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - NEXT_PUBLIC_API_URI=http://192.168.0.50:8000/graphql/
      - API_URI=http://192.168.0.50:8000/graphql/
  dashboard:
    build:
      context: ./saleor-dashboard
      dockerfile: ./Dockerfile.dev
    ports:
      - 9000:9000
    restart: unless-stopped
    volumes:
      - ./saleor-dashboard/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - API_URI=http://192.168.0.50:8000/graphql/
      - APP_MOUNT_URI=/dashboard/
      - STATIC_URL=http://192.168.0.50:9000/
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor --app=saleor.celeryconf:app worker --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    volumes:
      - ./saleor/saleor/:/app/saleor:Z,cached
      - ./saleor/templates/:/app/templates:Z,cached
      # shared volume between worker and api for media
      - saleor-media:/app/media
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
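Adapted to the droplet scenario from the question, the relevant api-service variables would presumably carry the droplet IP in place of 192.168.0.50 (shown here with the question's XXX.XX.XXX.XXX placeholder; an untested sketch):

  api:
    environment:
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,XXX.XX.XXX.XXX
      - ALLOWED_HOSTS=localhost,XXX.XX.XXX.XXX
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/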
PS: It's my first answer on Stack Overflow :D Don't forget to tick me as the answer if I solved your problem ;)

Docker elastic stack cannot receive connection error

I have a docker-compose file that looks like the one below:
version: '3'
services:
  redis:
    build: ./docker/redis
  postgresql:
    build: ./docker/postgresql
    ports:
      - "5433:5432"
    env_file:
      - .env
  graphql:
    build: .
    command: npm run start
    volumes:
      - ./logs/:/usr/app/logs/
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      - "redis"
      - "postgresql"
    links:
      - "redis"
      - "postgresql"
  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    links:
      - "kibana"
  kibana:
    build: ./docker/kibana
    ports:
      - "5601:5601"
    depends_on:
      - "graphql"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  metricbeat:
    build: ./docker/metricbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  packetbeat:
    build: ./docker/packetbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  logstash:
    build: ./docker/logstash
    ports:
      - "9600:9600"
    volumes:
      - ./logs:/usr/logs
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
networks:
  elastic:
    driver: bridge
When I run docker-compose build and docker-compose up, I get "unable to revive connection: http://elasticsearch:9200" from every container. I don't think any of the containers are able to talk to each other right now. However, it really feels like everything should work, because I have exposed all the ports for the elastic components, linked them with the same network, and the URL points to the correct alias. What am I doing wrong?
The Dockerfile settings are all correct, as each container runs correctly in isolation; they are just not able to talk to each other at all.
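One detail stands out in the compose file above: the elasticsearch service never joins the elastic network (it has no networks: key, so it lands on the default network), which means kibana, metricbeat, packetbeat and logstash, attached only to elastic, cannot resolve the elasticsearch alias at all. A sketch of the likely fix, assuming every service is meant to share elastic:

  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    networks:
      - elastic

The same networks: key would be needed on graphql, redis and postgresql if they are supposed to talk to the stack too; alternatively, dropping the custom network everywhere puts every container on the compose default network, where all the service names resolve.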
