I'm trying to run AdGuard Home with Docker Compose. I've created plenty of other containers with Docker Compose, but this one is not creating any files in the mapped folder.
I tried to reproduce the docker run command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are gone.
Any ideas?
This is my compose file:
version: "3"
volumes:
homematic_data:
external: true
networks:
homematic:
services:
samba:
image: dperson/samba
container_name: samba
restart: always
ports:
- "137:137/udp"
- "138:138/udp"
- "139:139/tcp"
- "445:445/tcp"
healthcheck:
disable: true
environment:
- TZ='Europe/Berlin'
- WORKGROUP=workgroup
- RECYCLE=false
- USER1=pi;PASSWORD;1000
- SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
volumes:
- /home/pi:/shares/homematic_docker
networks:
- homematic
promtail:
image: grafana/promtail:latest
container_name: promtail
volumes:
- /var/log:/var/log
- ./promtail:/etc/promtail
restart: unless-stopped
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- homematic
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /:/host:ro,rslave
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
networks:
- homematic
restart: always
###################### portainer
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer:/data
ports:
- 9000:9000
adguard:
image: adguard/adguardhome
container_name: adguard
restart: unless-stopped
ports:
- 53:53/tcp
- 53:53/udp
- 67:67/udp
- 69:68/udp
- 80:80/tcp
- 443:443/tcp
- 443:443/udp
- 3000:3000/tcp
- 853:853/tcp
- 784:784/udp
- 853:853/udp
- 8853:8853/udp
- 5443:5443/tcp
- 5443:5443/udp
# environment:
# - TZ=Europe/Berlin
volumes:
- /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
- /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
# network_mode: host
raspberrymatic:
image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
container_name: homematic
hostname: homematic-raspi
privileged: true
restart: unless-stopped
stop_grace_period: 30s
volumes:
- homematic_data:/usr/local:rw
- /lib/modules:/lib/modules:ro
- /run/udev/control:/run/udev/control
ports:
- "8080:80"
- "2001:2001"
- "2010:2010"
- "9292:9292"
- "8181:8181"
networks:
- homematic
Within the folder "/opt/adguardhome/work" I see a data folder with a database inside. After I finished the setup, the conf folder inside the container also contains a YAML file.
Unfortunately, I had copied the backslashes of the docker run command into the volume mappings; that was the problem and why I didn't get any data. Thank you Mike!
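For anyone comparing against the file above: the fixed mappings are the same two lines without the trailing line-continuation backslashes, which YAML otherwise treats as a literal part of the container path:
volumes:
  - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
  - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf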
I want to deploy a service that will allow me to use Spark and MongoDB in a Jupyter notebook.
I use docker-compose to build up the service, and it is as follows:
version: "3.3"
volumes:
shared-workspace:
networks:
spark-net:
driver: bridge
services:
spark-master:
image: uqteaching/cloudcomputing:spark-master-v1
container_name: spark-master
networks:
- "spark-net"
ports:
- "8080:8080"
- "7077:7077"
environment:
- INIT_DAEMON_STEP=setup_spark
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-1:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-1
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8081:8081"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-2:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-2
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8082:8082"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
mongo:
image: mongo
container_name: 'mongo'
networks:
- "spark-net"
ports:
- "27017:27017"
mongo_admin:
image: mongo-express
container_name: 'mongoadmin'
networks:
- "spark-net"
depends_on:
- mongo
links:
- mongo
ports:
- "8091:8091"
jupyter-notebook:
container_name: jupyternb
image: jupyter/all-spark-notebook:42f4c82a07ff
depends_on:
- mongo
- spark-master
links:
- mongo
expose:
- "8888"
networks:
- "spark-net"
ports:
- "8888:8888"
volumes:
- ./nbs:/home/jovyan/work/nbs
- ./events:/tmp/spark-events
environment:
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
command: "start-notebook.sh \
--ip=0.0.0.0 \
--allow-root \
--no-browser \
--notebook-dir=/home/jovyan/work/nbs \
--NotebookApp.token='' \
--NotebookApp.password=''
"
And the result is that spark-worker-1 and mongoadmin both crash. I don't know why, even though I set these two services to listen on different ports.
They are both using 8081/tcp at the same time, which caused them both to crash.
I want to solve this.
mongo-express needs port 8081 internally, so use another external port to be able to log in to the web UI.
Mapped to http://localhost:8092, it would then look something like this:
mongo_admin:
image: mongo-express
container_name: 'mongoadmin'
networks:
- "spark-net"
depends_on:
- mongo
links:
- mongo
ports:
- "8092:8091"
Maybe somebody has had this specific problem... There are two web applications in different directories, both in Docker containers, on Linux (CentOS). When I run the first application (docker-compose up -d) everything works fine. But if I then launch the second application from another directory, the first application's containers go down. Why? The container names are different, and the ports forwarded by Docker are also different.
First app config docker-compose.yml
services:
web:
container_name: myapp-nginx
image: nginx:latest
ports:
- "8000:80"
- "443:443"
volumes:
- ./:/myapp
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
links:
- php
php:
build: .
container_name: myapp-php-fpm
image: php:7.4-fpm
volumes:
- ./:/myapp
- ./logs:/myapp/logs
- ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
links:
- mysql:db
mysql:
image: mariadb:latest
container_name: myapp-mysql
volumes:
- /opt/myapp/data:/var/lib/mysql
env_file:
- mysql.env
restart: unless-stopped
ports:
- "3306:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: myapp-phpmyadmin
environment:
- MAX_EXECUTION_TIME=600
- UPLOAD_LIMIT=800M
- PMA_HOST=localhost
- PMA_PORT=3306
- PMA_ARBITRARY=1
ports:
- "80:80"
links:
- mysql:db
Second app docker-compose.yml
version: '3'
services:
web:
container_name: client-nginx
image: nginx:latest
ports:
- "20203:81"
volumes:
- ./:/myapp_client
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
links:
- php
php:
build: .
container_name: client-php-fpm
image: php:7.4-fpm
volumes:
- ./:/myapp_client
- ./logs:/myapp_client/logs
- ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
links:
- mysql:db
mysql:
image: mariadb:latest
container_name: client-mysql
volumes:
- /opt/myapp_client/data:/var/lib/mysql
env_file:
- mysql.env
restart: unless-stopped
ports:
- "20202:3307"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: client-phpmyadmin
environment:
- MAX_EXECUTION_TIME=600
- UPLOAD_LIMIT=800M
- PMA_HOST=localhost
- PMA_PORT=3307
- PMA_ARBITRARY=1
ports:
- "20204:82"
links:
- mysql:db
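One likely explanation, offered as a guess since the question doesn't show how the stacks are started: Compose derives its project name from the basename of the directory containing docker-compose.yml, and it reconciles containers per project. If both app directories happen to share the same basename, running docker-compose up -d for the second app makes Compose treat the first app's containers as outdated members of the same project and stop or recreate them. Giving each stack an explicit, distinct project name rules this out:
docker-compose -p myapp up -d          # run inside the first app's directory
docker-compose -p myapp-client up -d   # run inside the second app's directory
Alternatively, set COMPOSE_PROJECT_NAME in an .env file next to each docker-compose.yml.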
I'd like to deploy my saleor-shop application completely via docker.
So I've built the respective images for saleor backend, storefront & dashboard.
Running the app locally works fine.
Backend is available on localhost:8000/graphql
Storefront runs at localhost:3000
Dashboard runs at localhost:9000
If I try to run the app on the droplet IP, I get issues with the Saleor backend.
As of now, trying to access XXX.XX.XXX.XXX:8000 results in "This site can't be reached".
The storefront and dashboard are accessible on XXX.XX.XXX.XXX:3000 and XXX.XX.XXX.XXX:9000, but without any interaction with the backend, because it's not reachable. That's why the GraphQL calls are not working on the storefront, and logging in to the dashboard does not work either. I think I'm missing something here and would appreciate any help.
On my droplet I'm using the following docker-compose.yml file to bring my Docker containers up:
services:
api:
ports:
- 8000:8000
image: XXX/murukku-shop
restart: unless-stopped
networks:
- saleor-backend-tier
depends_on:
- db
- redis
- jaeger
env_file: common.env
environment:
- JAEGER_AGENT_HOST=jaeger
- STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
- DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/
storefront:
image: XXX/murukku-storefront
ports:
- 3000:80
restart: unless-stopped
dashboard:
image: XXX/murukku-dashboard
ports:
- 9000:80
restart: unless-stopped
db:
image: library/postgres:11.1-alpine
ports:
- 5432:5432
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
- saleor-db:/var/lib/postgresql/data
environment:
- POSTGRES_USER=saleor
- POSTGRES_PASSWORD=saleor
redis:
image: library/redis:5.0-alpine
ports:
- 6379:6379
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
- saleor-redis:/data
worker:
image: XXX/murukku-shop
restart: unless-stopped
networks:
- saleor-backend-tier
env_file: common.env
depends_on:
- redis
- mailhog
environment:
- EMAIL_URL=smtp://mailhog:1025
jaeger:
image: jaegertracing/all-in-one
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
- "16686:16686"
- "14268:14268"
- "9411:9411"
restart: unless-stopped
networks:
- saleor-backend-tier
mailhog:
image: mailhog/mailhog
ports:
- 1025:1025 # smtp server
- 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
saleor-db:
driver: local
saleor-redis:
driver: local
saleor-media:
networks:
saleor-backend-tier:
driver: bridge
I was testing Saleor in a Docker setup like you, and I've found a solution! You have to set more env variables; they are all explained on the GitHub pages of the storefront and the dashboard.
Here is my config if you want:
version: '2'
services:
api:
ports:
- 8000:8000
build:
context: ./saleor
dockerfile: ./Dockerfile
args:
STATIC_URL: '/static/'
restart: unless-stopped
networks:
- saleor-backend-tier
depends_on:
- db
- redis
- jaeger
volumes:
- ./saleor/saleor/:/app/saleor:Z
- ./saleor/templates/:/app/templates:Z
- ./saleor/tests/:/app/tests
# shared volume between worker and api for media
- saleor-media:/app/media
command: python manage.py runserver 0.0.0.0:8000
env_file: common.env
environment:
# - DEFAULT_CURRENCY=EUR
#- DEFAULT_COUNTRY=
- ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,192.168.0.50
- ALLOWED_HOSTS=localhost,192.168.0.50
- JAEGER_AGENT_HOST=jaeger
- STOREFRONT_URL=http://192.168.0.50:3000/
- DASHBOARD_URL=http://192.168.0.50:9000/
storefront:
build:
context: ./saleor-storefront
dockerfile: ./Dockerfile.dev
ports:
- 3000:3000
restart: unless-stopped
volumes:
- ./saleor-storefront/:/app:cached
- /app/node_modules/
command: npm start -- --host 0.0.0.0
environment:
- NEXT_PUBLIC_API_URI=http://192.168.0.50:8000/graphql/
- API_URI=http://192.168.0.50:8000/graphql/
dashboard:
build:
context: ./saleor-dashboard
dockerfile: ./Dockerfile.dev
ports:
- 9000:9000
restart: unless-stopped
volumes:
- ./saleor-dashboard/:/app:cached
- /app/node_modules/
command: npm start -- --host 0.0.0.0
environment:
- API_URI=http://192.168.0.50:8000/graphql/
- APP_MOUNT_URI=/dashboard/
- STATIC_URL=http://192.168.0.50:9000/
db:
image: library/postgres:11.1-alpine
ports:
- 5432:5432
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
- saleor-db:/var/lib/postgresql/data
environment:
- POSTGRES_USER=saleor
- POSTGRES_PASSWORD=saleor
redis:
image: library/redis:5.0-alpine
ports:
- 6379:6379
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
- saleor-redis:/data
worker:
build:
context: ./saleor
dockerfile: ./Dockerfile
args:
STATIC_URL: '/static/'
command: celery -A saleor --app=saleor.celeryconf:app worker --loglevel=info
restart: unless-stopped
networks:
- saleor-backend-tier
env_file: common.env
depends_on:
- redis
- mailhog
volumes:
- ./saleor/saleor/:/app/saleor:Z,cached
- ./saleor/templates/:/app/templates:Z,cached
# shared volume between worker and api for media
- saleor-media:/app/media
environment:
- EMAIL_URL=smtp://mailhog:1025
jaeger:
image: jaegertracing/all-in-one
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
- "16686:16686"
- "14268:14268"
- "9411:9411"
restart: unless-stopped
networks:
- saleor-backend-tier
mailhog:
image: mailhog/mailhog
ports:
- 1025:1025 # smtp server
- 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
restart: unless-stopped
networks:
- saleor-backend-tier
volumes:
saleor-db:
driver: local
saleor-redis:
driver: local
saleor-media:
networks:
saleor-backend-tier:
driver: bridge
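Since my config targets a LAN host, one hedged note for your droplet: the same variables should point at the droplet's public IP instead of 192.168.0.50 (kept below as the question's XXX.XX.XXX.XXX placeholder), presumably along these lines:
- ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,XXX.XX.XXX.XXX
- ALLOWED_HOSTS=localhost,XXX.XX.XXX.XXX
- API_URI=http://XXX.XX.XXX.XXX:8000/graphql/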
PS: It's my first answer on Stack Overflow :D Don't forget to mark this as the answer if it solved your problem ;)
I'm trying to run a Nextcloud instance on my Raspberry Pi 3B+ using a docker-compose file from this source: https://blog.ssdnodes.com/blog/installing-nextcloud-docker/
This works out of the box without any issues on a Ubuntu Server.
I've replaced the following images to be compatible with the arm infrastructure of the Pi:
jwilder/nginx-proxy:alpine with braingamer/nginx-proxy-arm or budry/jwilder-nginx-proxy-arm (I tried both)
jrcs/letsencrypt-nginx-proxy-companion with budry/jrcs-letsencrypt-nginx-proxy-companion-arm
mariadb with linuxserver/mariadb
nextcloud:latest with linuxserver/nextcloud
Unfortunately this doesn't work on the Pi: it first returns a 502 Bad Gateway, then after some time the error ERR_TOO_MANY_REDIRECTS.
What am I doing wrong?
Thanks
My docker-compose.yml:
version: '3'
services:
proxy:
image: braingamer/nginx-proxy-arm
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
container_name: nextcloud-proxy
networks:
- nextcloud_network
ports:
- 80:80
- 443:443
volumes:
- ./proxy/conf.d:/etc/nginx/conf.d:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- ./proxy/certs:/etc/nginx/certs:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: unless-stopped
letsencrypt:
image: budry/jrcs-letsencrypt-nginx-proxy-companion-arm
container_name: nextcloud-letsencrypt
depends_on:
- proxy
networks:
- nextcloud_network
volumes:
- ./proxy/certs:/etc/nginx/certs:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
db:
image: linuxserver/mariadb
container_name: nextcloud-mariadb
networks:
- nextcloud_network
volumes:
- db:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
environment:
- PUID=1000
- PGID=1000
- MYSQL_ROOT_PASSWORD=***PASSWORD***
- MYSQL_PASSWORD=***PASSWORD***
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
ports:
- 3306:3306
restart: unless-stopped
app:
image: linuxserver/nextcloud
container_name: nextcloud-app
networks:
- nextcloud_network
depends_on:
- letsencrypt
- proxy
- db
volumes:
- nextcloud:/var/www/html
- ./app/config:/var/www/html/config
- ./app/custom_apps:/var/www/html/custom_apps
- ./app/data:/var/www/html/data
- ./app/themes:/var/www/html/themes
- /etc/localtime:/etc/localtime:ro
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
- VIRTUAL_HOST=nextcloud.domain.tld
- LETSENCRYPT_HOST=nextcloud.domain.tld
- LETSENCRYPT_EMAIL=mail@nextcloud.domain.tld
volumes:
nextcloud:
db:
networks:
nextcloud_network:
The tutorial uses an Nginx reverse proxy and Let's Encrypt; for the latter you need a valid domain. If you look at your compose file, under environment for linuxserver/nextcloud it asks for a domain in VIRTUAL_HOST, LETSENCRYPT_HOST and LETSENCRYPT_EMAIL. It then tries to create an SSL certificate for the specified domain (nextcloud.domain.tld), which is not valid, so it doesn't work.
This was the case for me, so I just removed the proxy and SSL from my compose file, and Nextcloud works now :)
Here is my current working compose file:
version: '3'
services:
db:
image: tobi312/rpi-mariadb:10.5
container_name: nextcloud-mariadb
networks:
- nextcloud_network
volumes:
- db:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
environment:
- MYSQL_ROOT_PASSWORD=very_secure_password
- MYSQL_PASSWORD=very_secure_password
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
restart: unless-stopped
app:
image: nextcloud:latest
container_name: nextcloud-app
networks:
- nextcloud_network
depends_on:
- db
volumes:
- nextcloud:/var/www/html
- ./app/config:/var/www/html/config
- ./app/custom_apps:/var/www/html/custom_apps
- ./app/data:/var/www/html/data
- ./app/themes:/var/www/html/themes
- /etc/localtime:/etc/localtime:ro
restart: unless-stopped
ports:
- 80:80
volumes:
nextcloud:
db:
networks:
nextcloud_network:
driver: bridge
Hope it helps.
I installed a fully dockerized Nextcloud server on Ubuntu 20.04 LTS.
Right now it is accessible via nginx from the subdomain I assigned to it, with an SSL certificate from Let's Encrypt.
I would like to be able to access it from a local IP within the network on port 8140.
I tried adding the ports to the docker-compose.yml file with:
ports:
- "8140:8140"
But the ports get bound to 0.0.0.0 instead of the machine's IP address.
Does anyone know how to expose the container to the local network?
Here's an example of the docker-compose.yml I used:
version: '3'
services:
proxy:
image: jwilder/nginx-proxy:alpine
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
container_name: nextcloud-proxy
networks:
- nextcloud_network
ports:
- 80:80
- 443:443
- "8140:8140"
volumes:
- ./proxy/conf.d:/etc/nginx/conf.d:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- ./proxy/certs:/etc/nginx/certs:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: unless-stopped
letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: nextcloud-letsencrypt
depends_on:
- proxy
networks:
- nextcloud_network
volumes:
- ./proxy/certs:/etc/nginx/certs:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
db:
image: mariadb
container_name: nextcloud-mariadb
networks:
- nextcloud_network
volumes:
- db:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
environment:
- MYSQL_ROOT_PASSWORD=toor
- MYSQL_PASSWORD=mysql
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
restart: unless-stopped
app:
image: nextcloud:latest
container_name: nextcloud-app
networks:
- nextcloud_network
depends_on:
- letsencrypt
- proxy
- db
volumes:
- nextcloud:/var/www/html
- ./app/config:/var/www/html/config
- ./app/custom_apps:/var/www/html/custom_apps
- ./app/data:/var/www/html/data
- ./app/themes:/var/www/html/themes
- /etc/localtime:/etc/localtime:ro
environment:
- VIRTUAL_HOST=nextcloud.YOUR-DOMAIN
- LETSENCRYPT_HOST=nextcloud.YOUR-DOMAIN
- LETSENCRYPT_EMAIL=YOUR-EMAIL
restart: unless-stopped
volumes:
nextcloud:
db:
networks:
nextcloud_network:
As far as I know, you prefix the mapping with the IP address you want to bind to locally, as follows:
ports:
- 192.168.0.254:80:80
- 192.168.0.254:443:443
- "192.168.0.254:8140:8140"