Docker Elastic Stack: "unable to revive connection" error between containers

I have a docker-compose file that looks like this:
version: '3'
services:
  redis:
    build: ./docker/redis
  postgresql:
    build: ./docker/postgresql
    ports:
      - "5433:5432"
    env_file:
      - .env
  graphql:
    build: .
    command: npm run start
    volumes:
      - ./logs/:/usr/app/logs/
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      - "redis"
      - "postgresql"
    links:
      - "redis"
      - "postgresql"
  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    links:
      - "kibana"
  kibana:
    build: ./docker/kibana
    ports:
      - "5601:5601"
    depends_on:
      - "graphql"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  metricbeat:
    build: ./docker/metricbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  packetbeat:
    build: ./docker/packetbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  logstash:
    build: ./docker/logstash
    ports:
      - "9600:9600"
    volumes:
      - ./logs:/usr/logs
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
networks:
  elastic:
    driver: bridge
When I run docker-compose build and docker-compose up, I get "unable to revive connection: http://elasticsearch:9200" from every container. I don't think any of the containers are able to talk to each other right now. However, it feels like everything should work: I have exposed all the ports for the Elastic components, connected them to the same network, and the URL points to the correct alias. What am I doing wrong?
The Dockerfile settings are all correct, since each container runs correctly in isolation; they just aren't able to talk to each other at all.
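One detail worth noting in the compose file above (an observation, not part of the original post): kibana, metricbeat, packetbeat, and logstash are attached to the elastic network, but the elasticsearch service defines no networks key, so it only joins the default network. Assuming that is the cause, a minimal sketch of the elasticsearch service attached to the same network would look like:

  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    networks:
      - elastic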

Related

Adguard Home docker compose config and db missing

I'm trying to run AdGuard with Docker Compose. I've created many other containers with Docker Compose, but this one is not creating any files in the mapped folder.
I tried to reproduce the docker command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are deleted.
Any ideas?
This is my compose file:
version: "3"
volumes:
homematic_data:
external: true
networks:
homematic:
services:
samba:
image: dperson/samba
container_name: samba
restart: always
ports:
- "137:137/udp"
- "138:138/udp"
- "139:139/tcp"
- "445:445/tcp"
healthcheck:
disable: true
environment:
- TZ='Europe/Berlin'
- WORKGROUP=workgroup
- RECYCLE=false
- USER1=pi;PASSWORD;1000
- SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
volumes:
- /home/pi:/shares/homematic_docker
networks:
- homematic
promtail:
image: grafana/promtail:latest
container_name: promtail
volumes:
- /var/log:/var/log
- ./promtail:/etc/promtail
restart: unless-stopped
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- homematic
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /:/host:ro,rslave
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
networks:
- homematic
restart: always
###################### portainer
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer:/data
ports:
- 9000:9000
adguard:
image: adguard/adguardhome
container_name: adguard
restart: unless-stopped
ports:
- 53:53/tcp
- 53:53/udp
- 67:67/udp
- 69:68/udp
- 80:80/tcp
- 443:443/tcp
- 443:443/udp
- 3000:3000/tcp
- 853:853/tcp
- 784:784/udp
- 853:853/udp
- 8853:8853/udp
- 5443:5443/tcp
- 5443:5443/udp
# environment:
# - TZ=Europe/Berlin
volumes:
- /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
- /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
# network_mode: host
raspberrymatic:
image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
container_name: homematic
hostname: homematic-raspi
privileged: true
restart: unless-stopped
stop_grace_period: 30s
volumes:
- homematic_data:/usr/local:rw
- /lib/modules:/lib/modules:ro
- /run/udev/control:/run/udev/control
ports:
- "8080:80"
- "2001:2001"
- "2010:2010"
- "9292:9292"
- "8181:8181"
networks:
- homematic
Within the folder "/opt/adguardhome/work" I see a data folder with a database inside. After finishing the setup, the conf folder inside the container also contains a YAML file.
Unfortunately, I had copied the backslashes from the docker command into the volume mapping; that was the reason I didn't get any data. Thank you Mike!
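For reference, the corrected volume mapping without the trailing backslashes would be (based on the paths already used above):

    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf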

The Docker container for caddy is stuck in a restarting state

This is the docker-compose file that starts the containers; all of them are working fine except caddy.
version: '3'
services:
  db:
    image: postgres:latest
    restart: always
    expose:
      - "5555"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=chiefonboarding
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - global
  web:
    image: chiefonboarding/chiefonboarding:latest
    restart: always
    expose:
      - "9000"
    environment:
      - SECRET_KEY=somethingsupersecret
      - BASE_URL=https://on.hr.gravesfoods.com
      - DATABASE_URL=postgres://postgres:postgres@db:5432/chiefonboarding
      - ALLOWED_HOSTS=on.hr.gravesfoods.com
      - DEFAULT_FROM_EMAIL=hello@gravesfoods.com
    depends_on:
      - db
    networks:
      - global
  caddy:
    image: caddy:2.3.0-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - global
volumes:
  pgdata:
  caddy_data:
  caddy_config:
networks:
  global:
Also these are the logs it is generating:
{"level":"info","ts":1656425557.6256478,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
run: adapting config using caddyfile: server block 0, key 0 (https://on.hr.gravesfoods.com:80): determining listener address: [https://on.hr.gravesfoods.com:80] scheme and port violate convention
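The error means a site address in the Caddyfile pairs the https scheme with port 80, which Caddy refuses. The Caddyfile itself is not shown in the question, but a minimal sketch of a site block that avoids the conflict (letting Caddy default to port 443 for HTTPS and proxying to the web container on its exposed port) might look like this:

on.hr.gravesfoods.com {
    reverse_proxy web:9000
}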

Why am I unable to route to my API backend with Traefik

I have two containers: frontend (nginx :80) and backend (nodejs :3000).
I'm trying to route all paths to my frontend: localhost/* goes to the frontend,
except one path for my backend API: localhost/v1/* goes to the backend.
I secure my database container (mongodb) by allowing communication only with my backend.
Here is my docker-compose.yml (this is all I'm using):
version: '3'
services:
  traefik:
    image: traefik:v2.3
    container_name: traefik
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "8080:8080"
      - "443:443"
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  frontend:
    image: registry.gitlab.com/test/frontend
    container_name: frontend
    build:
      context: ../frontend/.
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend.rule=PathPrefix(`/`)
      - traefik.http.routers.frontend.entrypoints=web
    networks:
      - traefik-network
  backend:
    image: registry.gitlab.com/test/backend
    container_name: backend
    build:
      context: ../backend/.
    labels:
      - traefik.enable=true
      - traefik.http.routers.backend.rule=PathPrefix(`/v1`)
      - traefik.http.routers.backend.service=backend
      - traefik.http.routers.backend.entrypoints=web
      - traefik.http.services.backend.loadbalancer.server.port=3000
    command: yarn start
    environment:
      - MONGODB_URL=mongodb://mongodb:27017/backend
    depends_on:
      - mongodb
    volumes:
      - ../backend/.:/usr/src/backend
    networks:
      - traefik-network
      - backend-network
  mongodb:
    image: mongo:4.2.1-bionic
    container_name: mongodb
    ports:
      - 27017:27017
    volumes:
      - dbdata:/data/db
    networks:
      - backend-network
volumes:
  dbdata:
networks:
  backend-network:
  traefik-network:
The problem is:
If the frontend (with backend and traefik too) is turned on,
the paths under localhost/* work (this is what I want),
but the paths under localhost/v1/* don't work (problem here!).
If the frontend is turned off but traefik and backend are turned on,
the paths under localhost/* don't work (of course, that's expected),
but the paths under localhost/v1/* do work (of course, this is what I want).
I've tried a lot of solutions, but nothing seems to work the way I want it to.
What did I misunderstand?
Thanks for helping,
Have a nice day
Try adding the following label to the backend service:
- "traefik.http.routers.backend.rule=Host(`servicex.me`) && Path(`/v1`)"
and this one to the frontend:
- traefik.http.routers.frontend.rule=Host(`servicex.me`)
You also need to add this line to your /etc/hosts:
127.0.0.1 servicex.me
and make sure that you stop and start the services.
Complete example:
version: '3'
services:
  traefik:
    image: traefik:v2.3
    container_name: traefik
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "8080:8080"
      - "443:443"
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  frontend:
    image: registry.gitlab.com/test/frontend
    container_name: frontend
    build:
      context: ../frontend/.
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend.rule=Host(`servicex.me`)
      - traefik.http.routers.frontend.entrypoints=web
      - traefik.http.routers.frontend.service=frontend
      - traefik.http.services.frontend.loadbalancer.server.port=80
    networks:
      - traefik-network
  backend:
    image: registry.gitlab.com/test/backend
    container_name: backend
    build:
      context: ../backend/.
    labels:
      - traefik.enable=true
      - "traefik.http.routers.backend.rule=Host(`servicex.me`) && Path(`/v1`)"
      - traefik.http.routers.backend.service=backend
      - traefik.http.routers.backend.entrypoints=web
      - traefik.http.services.backend.loadbalancer.server.port=3000
    command: yarn start
    environment:
      - MONGODB_URL=mongodb://mongodb:27017/backend
    depends_on:
      - mongodb
    volumes:
      - ../backend/.:/usr/src/backend
    networks:
      - traefik-network
      - backend-network
  mongodb:
    image: mongo:4.2.1-bionic
    container_name: mongodb
    ports:
      - 27017:27017
    volumes:
      - dbdata:/data/db
    networks:
      - backend-network
volumes:
  dbdata:
networks:
  backend-network:
  traefik-network:
BTW, why do you need both Traefik and nginx? They are doing the same job; it would be better to replace one with the other.
I added this label to my containers:
traefik.docker.network=traefik-network
It works fine now.
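For context, that label goes into the labels section of each routed service so Traefik knows which Docker network to use when reaching the container; a sketch based on the backend service above (not part of the original post):

  backend:
    labels:
      - traefik.enable=true
      - traefik.docker.network=traefik-network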

Saleor backend not available under droplet IP from DigitalOcean

I'd like to deploy my saleor-shop application completely via docker.
So I've built the respective images for saleor backend, storefront & dashboard.
Running the app locally works fine.
Backend is available on localhost:8000/graphql
Storefront runs at localhost:3000
Dashboard runs at localhost:9000
If I run the app on the droplet IP, I get issues with the Saleor backend.
As of now, trying to access XXX.XX.XXX.XXX:8000 results in "This site can't be reached".
The storefront and dashboard are accessible on XXX.XX.XXX.XXX:3000 and XXX.XX.XXX.XXX:9000, but without any interaction with the backend, because it is not available. That's why the GraphQL calls are not working on the storefront, and logging into the dashboard does not work either, since the backend is unreachable. I think I'm missing something here and would appreciate any help.
Within my droplet I'm using the following docker-compose.yml file to get my docker containers up:
services:
  api:
    ports:
      - 8000:8000
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    env_file: common.env
    environment:
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/
  storefront:
    image: XXX/murukku-storefront
    ports:
      - 3000:80
    restart: unless-stopped
  dashboard:
    image: XXX/murukku-dashboard
    ports:
      - 9000:80
    restart: unless-stopped
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
I was testing Saleor in a Docker setup like you, and I've found a solution! You have to set more environment variables; they are all explained on the GitHub pages of the storefront and the dashboard.
Here is my config if you want:
version: '2'
services:
  api:
    ports:
      - 8000:8000
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    volumes:
      - ./saleor/saleor/:/app/saleor:Z
      - ./saleor/templates/:/app/templates:Z
      - ./saleor/tests/:/app/tests
      # shared volume between worker and api for media
      - saleor-media:/app/media
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    environment:
      # - DEFAULT_CURRENCY=EUR
      # - DEFAULT_COUNTRY=
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,192.168.0.50
      - ALLOWED_HOSTS=localhost,192.168.0.50
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://192.168.0.50:3000/
      - DASHBOARD_URL=http://192.168.0.50:9000/
  storefront:
    build:
      context: ./saleor-storefront
      dockerfile: ./Dockerfile.dev
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./saleor-storefront/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - NEXT_PUBLIC_API_URI=http://192.168.0.50:8000/graphql/
      - API_URI=http://192.168.0.50:8000/graphql/
  dashboard:
    build:
      context: ./saleor-dashboard
      dockerfile: ./Dockerfile.dev
    ports:
      - 9000:9000
    restart: unless-stopped
    volumes:
      - ./saleor-dashboard/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - API_URI=http://192.168.0.50:8000/graphql/
      - APP_MOUNT_URI=/dashboard/
      - STATIC_URL=http://192.168.0.50:9000/
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor --app=saleor.celeryconf:app worker --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    volumes:
      - ./saleor/saleor/:/app/saleor:Z,cached
      - ./saleor/templates/:/app/templates:Z,cached
      # shared volume between worker and api for media
      - saleor-media:/app/media
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
PS: It's my first answer on Stack Overflow :D Don't forget to accept the answer if it solved your problem ;)
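Applied to the original question, the relevant additions for the api service are the host-related variables, with the droplet's public IP in place of 192.168.0.50 (a sketch derived from the config above, not from the original post):

    environment:
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,XXX.XX.XXX.XXX
      - ALLOWED_HOSTS=localhost,XXX.XX.XXX.XXX
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/

The storefront and dashboard images would likewise need API_URI pointing at http://XXX.XX.XXX.XXX:8000/graphql/, as in the config above.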

Setting up docker auto build to use docker-compose file

I am trying to set up automated builds using Docker Cloud / Docker Hub. It always looks for a Dockerfile, but I have a docker-compose.yml, and I can't find any option to change this. Is this simply not possible, or am I missing something?
This is my docker-compose.yml
version: '3'
services:
  reverse-proxy:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
      - "${TRAEFIK_DASHBOARD_PORT}:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - ./traefik/certs/journal.crt:/certs/journal.crt
      - ./traefik/certs/journal.key:/certs/journal.key
    networks:
      - web
  prisma:
    image: prismagraphql/prisma:1.8
    restart: always
    ports:
      - "${PRISMA_PORT}"
    networks:
      - web
    environment:
      PRISMA_CONFIG: |
        port: ${PRISMA_PORT}
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: postgres
            host: ${PRISMA_DB_HOST}
            port: ${PRISMA_DB_PORT}
            database: ${PRISMA_DB}
            user: ${PRISMA_DB_USER}
            password: ${PRISMA_DB_PASSWORD}
            migrations: ${PRISMA_ENABLE_MIGRATION}
  graphql-server:
    build:
      context: ./graphql-server/
      args:
        - PORT=${GRAPHQL_SERVER_PORT}
    networks:
      - web
    ports:
      - "${GRAPHQL_SERVER_PORT}"
    volumes:
      - ./graphql-server:/usr/src/app
    depends_on:
      - prisma
    command: ["./wait-for-it.sh", "prisma:${PRISMA_PORT}", "--", "./bootstrap.sh"]
    environment:
      - PRISMA_SERVICE_NAME=prisma
      - PRISMA_PORT
      - GRAPHQL_SERVER_PORT
      - APOLLO_ENGINE_KEY
      - PRISMA_ENDPOINT
      - PRISMA_MANAGEMENT_API_SECRET
    labels:
      - "traefik.backend=graphql"
      - "traefik.frontend.rule=Host:api.journal.com"
      - "traefik.enable=true"
      - "traefik.port=8080"
      - "traefik.docker.network=web"
  react-client:
    build:
      context: ./react-client/
      args:
        - PORT=${REACT_CLIENT_PORT}
    ports:
      - "${REACT_CLIENT_PORT}"
    volumes:
      - ./react-client:/usr/src/app
    depends_on:
      - graphql-server
    environment:
      - GRAPHQL_SERVER_PORT
      - REACT_CLIENT_PORT
    networks:
      - web
networks:
  web:
    external: true
Both Docker Hub and Docker Cloud try to pick up only a Dockerfile, not a docker-compose file. I also saw a post mentioning that docker-compose should be used only for running and not for building, so I'm not sure whether I'm doing something wrong.
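One possible workaround (not from the original post, and assuming each service keeps its own Dockerfile): Docker Hub automated builds operate on a single Dockerfile per build rule, but they allow custom build-phase hooks. A hooks/build script placed next to the Dockerfile can override the default build command, for example:

#!/bin/bash
# hooks/build -- Docker Hub autobuild override (a sketch; adjust paths to your layout)
# $IMAGE_NAME and $DOCKERFILE_PATH are provided by the autobuild environment.
docker build -t "$IMAGE_NAME" -f "$DOCKERFILE_PATH" ./graphql-server

In this setup docker-compose stays a runtime/orchestration concern; each image is still built from its own Dockerfile.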
