Docker: how to mount local folder inside container?

I need to share a folder from my OSX machine with a running Docker container, but I can't find how to do it.
Here's a working Docker-compose file:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.3'
    environment:
      - MARIADB_ROOT_PASSWORD=bitnami
      - MARIADB_USER=bn_moodle
      - MARIADB_DATABASE=bitnami_moodle
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
  phpmyadmin:
    image: 'bitnami/phpmyadmin:4'
    ports:
      - '8081:80'
      - '4430:443'
    depends_on:
      - mariadb
    volumes:
      - 'phpmyadmin_data:/bitnami'
  moodle:
    image: 'bitnami/moodle:3'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - MOODLE_DATABASE_USER=bn_moodle
      - MOODLE_DATABASE_NAME=bitnami_moodle
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - 'moodle_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  phpmyadmin_data:
    driver: local
  moodle_data:
    driver: local
This file correctly starts three Docker containers: one for Moodle, one for MariaDB, and one for phpMyAdmin.
What I need to do now is share the content of a local folder with a folder inside the Moodle container, but I can't figure out how to change the volumes key to reflect that. I tried a mapping like:
moodle_data:
  - moodle_data:/Users/macbook/Code/Php/moodle-docker/moodle:/Users/macbook/Code/Php/moodle-docker/moodle
But it didn't work. What am I doing wrong here? Thanks in advance to anybody who can help!

You need to map your host_folder to your container_folder using host_folder:container_folder, as mentioned in the comments:
moodle:
  image: 'bitnami/moodle:3'
  environment:
    - MARIADB_HOST=mariadb
    - MARIADB_PORT_NUMBER=3306
    - MOODLE_DATABASE_USER=bn_moodle
    - MOODLE_DATABASE_NAME=bitnami_moodle
    - ALLOW_EMPTY_PASSWORD=yes
  ports:
    - '80:80'
    - '443:443'
  volumes:
    - /Users/macbook/Code/Php/moodle-docker/moodle:/bitnami/gatto
    - moodle_data:/bitnami
  depends_on:
    - mariadb
Remember: your host_folder must be accessible to the Docker daemon (on macOS it has to be within Docker Desktop's shared file-system paths; /Users is shared by default).
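To double-check the bind mount once the stack is up, something along these lines should show the host folder's contents inside the container (service name and paths taken from the compose file above):

docker-compose up -d moodle
# the mounted folder should mirror /Users/macbook/Code/Php/moodle-docker/moodle on the host
docker-compose exec moodle ls /bitnami/gatto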

Related

Adguard Home docker compose config and db missing

I'm trying to run AdGuard Home with Docker Compose. I have created a lot of other containers with Docker Compose, but this one does not create any files in the mapped folder.
I tried to rebuild the docker command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are gone.
Any ideas?
This is my compose file:
version: "3"
volumes:
homematic_data:
external: true
networks:
homematic:
services:
samba:
image: dperson/samba
container_name: samba
restart: always
ports:
- "137:137/udp"
- "138:138/udp"
- "139:139/tcp"
- "445:445/tcp"
healthcheck:
disable: true
environment:
- TZ='Europe/Berlin'
- WORKGROUP=workgroup
- RECYCLE=false
- USER1=pi;PASSWORD;1000
- SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
volumes:
- /home/pi:/shares/homematic_docker
networks:
- homematic
promtail:
image: grafana/promtail:latest
container_name: promtail
volumes:
- /var/log:/var/log
- ./promtail:/etc/promtail
restart: unless-stopped
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- homematic
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /:/host:ro,rslave
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
networks:
- homematic
restart: always
###################### portainer
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer:/data
ports:
- 9000:9000
adguard:
image: adguard/adguardhome
container_name: adguard
restart: unless-stopped
ports:
- 53:53/tcp
- 53:53/udp
- 67:67/udp
- 69:68/udp
- 80:80/tcp
- 443:443/tcp
- 443:443/udp
- 3000:3000/tcp
- 853:853/tcp
- 784:784/udp
- 853:853/udp
- 8853:8853/udp
- 5443:5443/tcp
- 5443:5443/udp
# environment:
# - TZ=Europe/Berlin
volumes:
- /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
- /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
# network_mode: host
raspberrymatic:
image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
container_name: homematic
hostname: homematic-raspi
privileged: true
restart: unless-stopped
stop_grace_period: 30s
volumes:
- homematic_data:/usr/local:rw
- /lib/modules:/lib/modules:ro
- /run/udev/control:/run/udev/control
ports:
- "8080:80"
- "2001:2001"
- "2010:2010"
- "9292:9292"
- "8181:8181"
networks:
- homematic
Within the folder "/opt/adguardhome/work" I see a folder data with a database inside. After i finished the setup also the folder conf inside the container has a yaml file.
Unfortunately i copied the backslashes of the docker command into the volume mapping, thats was the problem why i didnt get any data. Thank you Mike!
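For reference, the corrected volume mapping without the stray backslashes is simply:

    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf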

Docker container can not resolve .test domain running on localhost

I have Magento running in a Docker container using this tutorial (https://github.com/markshust/docker-magento).
The Docker container is accessed via https://magento.test and this works fine in the browser. We have a script in Magento that tries to connect to https://magento.test from within the container, but this fails with Could not resolve host: magento.test.
Basically the host can access magento.test and connect to the Docker container, but the Docker container cannot connect to itself.
I have tried adding extra hosts to the docker-compose.yml (see below), but this has not worked. I am guessing the IP 127.0.0.1 is incorrect.
version: "3"
services:
app:
image: markoshust/magento-nginx:1.18-8
ports:
- "80:8000"
- "443:8443"
volumes: &appvolumes
- ~/.composer:/var/www/.composer:cached
- ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
- ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
- appdata:/var/www/html
- sockdata:/sock
- ssldata:/etc/nginx/certs
extra_hosts: &appextrahosts
## Selenium support, replace "magento.test" with URL of your site
- "magento.test:127.0.0.1"
phpfpm:
image: markoshust/magento-php:7.4-fpm-15
volumes: *appvolumes
env_file: env/phpfpm.env
#extra_hosts: *appextrahosts
db:
image: mariadb:10.4
command:
--max_allowed_packet=64M
--optimizer_use_condition_selectivity=1
--optimizer_switch="rowid_filter=off"
ports:
- "3306:3306"
env_file: env/db.env
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:6.2-alpine
ports:
- "6379:6379"
elasticsearch:
image: markoshust/magento-elasticsearch:7.16-0
ports:
- "9200:9200"
- "9300:9300"
environment:
- "discovery.type=single-node"
## Set custom heap size to avoid memory errors
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
## Avoid test failures due to small disks
## More info at https://github.com/markshust/docker-magento/issues/488
- "cluster.routing.allocation.disk.threshold_enabled=false"
- "index.blocks.read_only_allow_delete"
rabbitmq:
image: markoshust/magento-rabbitmq:3.9-0
ports:
- "15672:15672"
- "5672:5672"
volumes:
- rabbitmqdata:/var/lib/rabbitmq
env_file: env/rabbitmq.env
mailcatcher:
image: sj26/mailcatcher
ports:
- "1080:1080"
## Blackfire support, uncomment to enable
#blackfire:
# image: blackfire/blackfire:2
# ports:
# - "8307"
# env_file: env/blackfire.env
## Selenium support, uncomment to enable
#selenium:
# image: selenium/standalone-chrome-debug:3.8.1
# ports:
# - "5900:5900"
# extra_hosts: *appextrahosts
volumes:
appdata:
dbdata:
rabbitmqdata:
sockdata:
ssldata:
Any help would be greatly appreciated, thanks!
Does using the host network solve your problem?
services:
  app:
    image: markoshust/magento-nginx:1.18-8
    network_mode: "host" # share host network
    ports:
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:127.0.0.1"

Issue with Collabora CODE, Nextcloud & Nginx using docker-compose

I am trying to get a docker-compose.yaml together to run Nextcloud and Collabora CODE via Nginx Proxy Manager.
So far I have Nginx and Nextcloud working really nicely with persistent volumes so that my configs survive kill & rm. My issue is that I cannot get my Collabora CODE instance to link to Nextcloud. There are multiple bits that I might have got wrong so I'll dump as much info as I can here.
I have the following subdomains all pointing at my server and set up as proxy hosts in NPM: collabora.domain.tld, nextcloud.domain.tld, nginx.domain.tld.
As I mentioned, the Nginx and Nextcloud setups are great. When I point my browser at collabora.domain.tld I see the OK message. I can also access the admin page at collabora.domain.tld/loleaflet/dist/admin/admin.html
The NPM entry for collabora.domain.tld is below:
My docker-compose.yaml has gone through several iterations in an attempt to get this working, but my current attempt is below:
version: '3'
volumes:
  nextcloud-data:
  nextcloud-db:
  npm-data:
  npm-ssl:
  npm-db:
networks:
  frontend:
  backend:
services:
  code:
    image: collabora/code:latest
    restart: always
    environment:
      - password=${COLLABORA_PASSWORD:?Not defined!}
      - username=${COLLABORA_USERNAME:?Not defined!}
      - domain=${COLLABORA_DOMAIN:?Not defined!}
    expose:
      - "9980"
    networks:
      - frontend
      - backend
  nextcloud-app:
    image: nextcloud:stable
    restart: always
    volumes:
      - nextcloud-data:/var/www/html
    environment:
      - MYSQL_PASSWORD=${NC_MYSQL_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud-user
      - MYSQL_HOST=nextcloud-db
    networks:
      - frontend
      - backend
  nextcloud-db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
    volumes:
      - nextcloud-db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${NC_MYSQL_ROOT_PASSWORD:?Not defined!}
      - MYSQL_PASSWORD=${NC_MYSQL_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud-user
    networks:
      - backend
  npm-app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      - DB_MYSQL_HOST=npm-db
      - DB_MYSQL_PORT=3306
      - DB_MYSQL_USER=npm-user
      - DB_MYSQL_PASSWORD=${NPM_MYSQL_PASSWORD:?Not defined!}
      - DB_MYSQL_NAME=npm
    volumes:
      - npm-data:/data
      - npm-ssl:/etc/letsencrypt
    networks:
      - frontend
      - backend
  npm-db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=${NPM_MYSQL_ROOT_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=npm
      - MYSQL_USER=npm-user
      - MYSQL_PASSWORD=${NPM_MYSQL_PASSWORD:?Not defined!}
    volumes:
      - npm-db:/var/lib/mysql
    networks:
      - backend
$COLLABORA_DOMAIN is set to nextcloud.domain.tld.
Any ideas what I have done wrong, and how to get my Nextcloud connected to CODE?
What do your custom locations look like? See, e.g. https://www.collaboraoffice.com/code/nginx-reverse-proxy/
I have a very similar setup, except the Collabora instance is not in Docker Compose, as that never worked for me.
Make sure you have specified your domain environment variable correctly (dot escaping etc.), as sketched below.
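For example, the domain value for the collabora/code image is matched as a regular expression, so the dots typically need escaping. With the ${COLLABORA_DOMAIN} variable from the compose file above, that would mean something like this in the .env file (a hedged sketch; depending on how the value is quoted you may need single or double backslashes):

# .env (hypothetical): dots escaped because collabora/code treats "domain" as a regex
COLLABORA_DOMAIN=nextcloud\\.domain\\.tld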

ElasticSearch container won't start up in Docker

I'm attempting to run this script in Win10 to configure everything.
All containers except the elastic container initialize correctly; Elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script, in which I didn't touch anything except the Windows-side ports (see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows-side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN admin
      - ACTIVEMQ_ADMIN_PASSWORD admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have logs from the elasticsearch container to share? Without them it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly, though, is the vm.max_map_count setting: the default in Docker is too low for Elasticsearch to function, so it's a good first thing to check (see the commands below).
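A quick way to check and raise it on the Docker host (a rough sketch; on Windows the commands have to be run inside the VM that Docker Desktop or Docker Toolbox uses, e.g. via docker-machine ssh):

# check the current value on the Docker host
sysctl vm.max_map_count
# Elasticsearch needs at least 262144
sudo sysctl -w vm.max_map_count=262144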

Jwilder nginx proxy - 503 after docker compose structure update

I'm using jwilder/nginx-proxy with a separate docker-compose.yaml. It looks like this:
proxy:
  image: jwilder/nginx-proxy
  restart: always
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./nginx/conf.d/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro
    - /Users/marcin/Docker/local_share/certificates:/etc/nginx/certs:ro
  ports:
    - "80:80"
    - "443:443"
  container_name: proxy
I've been using it for quite a long time and it works fine when my project docker-compose.yaml looks like this:
web:
  build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
  volumes:
    - /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
    - /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
    - /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
    - /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
    - /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
    - /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
    - /Users/marcin/Docker/local_share/:/root/.local_share/
    - /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
  working_dir: /usr/share/nginx/html/
  links:
    - db
  container_name: test.php
  hostname: test.local
  ports:
    - "336:22"
    - "8081:80"
    - "18080:443"
  environment:
    - VIRTUAL_HOST=test.local
    - CERT_NAME=default
    - HTTPS_METHOD=noredirect
db:
  build: /Users/marcin/Docker/definitions/mysql/5.7
  environment:
    - MYSQL_ROOT_PASSWORD=pass
    - MYSQL_DATABASE=
    - MYSQL_USER=
    - MYSQL_PASSWORD=
  expose:
    - 3306
  volumes:
    - /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
    - /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
    - /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
  ports:
    - "33060:3306"
  container_name: test.db
  hostname: test.local
I can access the site without any problem using http://test.local or https://test.local, which is expected.
However, I had to update my file to the newer structure:
version: "3.2"
services:
web:
build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
volumes:
- /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
- /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
- /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
- /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
- /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
- /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
- /Users/marcin/Docker/local_share/:/root/.local_share/
- /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
working_dir: /usr/share/nginx/html/
links:
- db
container_name: test.php
hostname: test.local
ports:
- "336:22"
- "8081:80"
- "18080:443"
environment:
- VIRTUAL_HOST=test.local
- CERT_NAME=default
- HTTPS_METHOD=noredirect
db:
build: /Users/marcin/Docker/definitions/mysql/5.7
environment:
- MYSQL_ROOT_PASSWORD=pass
- MYSQL_DATABASE=
- MYSQL_USER=
- MYSQL_PASSWORD=
expose:
- 3306
volumes:
- /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
- /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
- /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
ports:
- "33060:3306"
container_name: test.db
hostname: test.local
and after that it no longer works. I can still access the site using IP and port without a problem, but I can no longer use the domain to reach it. When I try, I get:
503 Service Temporarily Unavailable
nginx/1.13.8
And this is definitely coming from the jwilder nginx (and not the nginx in the project).
So the question is: where should I put the environment variables to make this work? It seems that, placed as they are at the moment, they are not read by the proxy.
The 503 indicates that the nginx-proxy container can see your container running in docker and it has the configuration needed for nginx to route traffic to it, but it is unable to connect to that container over the docker network. For container-to-container networking to work, you need to have a common docker network defined. You should first run the following to create a network:
docker network create proxy
Then update your nginx-proxy compose file to use the network (this should also be upgraded to at least v2 syntax; I've gone with 3.2 to match your other file):
version: "3.2"
networks:
proxy:
external: true
services:
proxy:
image: jwilder/nginx-proxy
restart: always
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx/conf.d/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro
- /Users/marcin/Docker/local_share/certificates:/etc/nginx/certs:ro
ports:
- "80:80"
- "443:443"
container_name: proxy
networks:
- proxy
And then do something similar for your application:
version: "3.2"
networks:
proxy:
external: true
services:
web:
build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
volumes:
- /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
- /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
- /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
- /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
- /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
- /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
- /Users/marcin/Docker/local_share/:/root/.local_share/
- /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
working_dir: /usr/share/nginx/html/
links:
- db
container_name: test.php
hostname: test.local
ports:
- "336:22"
- "8081:80"
- "18080:443"
environment:
- VIRTUAL_HOST=test.local
- CERT_NAME=default
- HTTPS_METHOD=noredirect
networks:
- proxy
- default
db:
build: /Users/marcin/Docker/definitions/mysql/5.7
environment:
- MYSQL_ROOT_PASSWORD=pass
- MYSQL_DATABASE=
- MYSQL_USER=
- MYSQL_PASSWORD=
expose:
- 3306
volumes:
- /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
- /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
- /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
ports:
- "33060:3306"
container_name: test.db
hostname: test.local
If you were upgrading from v1 syntax (without a version defined), you will find that Docker switches from running everything on the same network without DNS to running each Compose project or stack on a dedicated network with DNS. To run your apps on other networks, you need to configure that explicitly. In the example above, only the web container was placed on the proxy network, and both containers are on the default network created for this project or stack.
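To sanity-check the result after bringing both stacks up, inspecting the shared network should list both containers (container names taken from the files above):

docker network create proxy      # one-time, before starting either stack
docker-compose up -d             # run in the nginx-proxy project and in the application project
docker network inspect proxy     # the "Containers" section should list both "proxy" and "test.php"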
