I'm running Docker with Apache2. When doing docker-compose up -d it needs 777 permissions on the /var/lib directory. If I give it 777 permissions, Docker starts, but at the same moment other applications like Skype and Sublime can no longer start and give an error like:
cannot open cookie file /var/lib/snapd/cookie/snap.sublime-text
/var/lib/snapd has 'other' write 40777
So the problem here is that Sublime needs 755 permissions while Docker needs 777 permissions.
Also, one of Docker's snap files is present inside /var/lib/snapd/snaps.
Because of this, I'm not able to use Docker and the other applications at the same time.
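For reference, the conflicting modes can be compared like this (paths taken from the error above):
ls -ld /var/lib /var/lib/snapd
# snapd refuses to start snaps while /var/lib/snapd is world-writable (mode 777); it expects 755 (drwxr-xr-x)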
My docker-compose.yml
version: "3"
services:
app:
image: markoshust/magento-nginx:1.13
ports:
- 80:8000
links:
- db
- phpfpm
- redis
- elasticsearch
volumes:
- ./.docker/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
phpfpm:
image: markoshust/magento-php:7.1-fpm
links:
- db
volumes:
- ./.docker/php.ini:/usr/local/etc/php/php.ini
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
db:
image: percona:5.7
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
- MYSQL_USER=test
- MYSQL_PASSWORD=test
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:3.0
elasticsearch:
image: elasticsearch:5.2
volumes:
- esdata:/usr/share/elasticsearch/data
volumes:
dbdata:
sockdata:
esdata:
# Mark Shust's Docker Configuration for Magento (https://github.com/markoshust/docker-magento)
# Version 12.0.0
Related
I'm trying to run Grafana with Prometheus using Docker Compose.
However, I keep getting the following error from the Grafana container:
service init failed: html/template: pattern matches no files: /usr/share/grafana/public/emails/*.html, emails/*.txt
Here's the content of docker-compose.yml:
version: "3.3"
volumes:
prometheus_data: {}
grafana_data: {}
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
expose:
- 9090
volumes:
- ./infrastructure/config/prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.retention.time=1y'
graphana:
image: grafana/grafana:latest
user: '472'
volumes:
- grafana_data:/var/lib/grafana
- ./infrastructure/config/grafana/grafana.ini:/etc/grafana/grafana.ini
- ./infrastructure/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
ports:
- 3000:3000
links:
- prometheus
As for the content of the grafana.ini and datasource.yml files, I'm using the default Grafana configuration files provided in its official GitHub repository.
The answer here suggests that it can be resolved by setting the correct permissions on the Grafana config folder. However, I tried giving full permissions (with chmod -R 777) to the ./infrastructure/config/grafana folder and it didn't resolve the issue.
If anyone can provide any help on how to solve this problem, it'd be greatly appreciated!
Use this in your docker-compose.yml:
grafana:
  hostname: 'grafana'
  image: grafana/grafana:latest
  restart: always
  tmpfs:
    - /run
  volumes:
    - grafana_data:/var/lib/grafana
    - ./infrastructure/config/grafana/grafana.ini:/etc/grafana/grafana.ini
    - ./infrastructure/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
  ports:
    - "3000:3000"
I have several PostgreSQL services, and some other services that are useful in my case (for creating an HA PostgreSQL cluster). The cluster is described in the docker-compose file below:
version: '3.3'
services:
  haproxy:
    image: haproxy:alpine
    ports:
      - "5000:5000"
      - "5001:5001"
      - "8008:8008"
    configs:
      - haproxy_cfg
    networks:
      - dbs
    command: haproxy -f /haproxy_cfg
  etcd:
    image: quay.io/coreos/etcd:v3.1.2
    configs:
      - etcd_cfg
    networks:
      - dbs
    command: /bin/sh /etcd_cfg
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode2:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode2
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode2
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode2:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode2:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode3:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode3
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode3
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode3:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode3:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
networks:
  dbs:
    external: true
configs:
  haproxy_cfg:
    file: config/haproxy.cfg
  etcd_cfg:
    file: config/etcd.sh
secrets:
  patroni.yml:
    file: patroni.test.yml
I took this YAML from https://github.com/seocahill/ha-postgres-docker-stack.git. I deploy the services in Docker Swarm with docker network create -d overlay --attachable dbs && docker stack deploy -c docker-stack.test.yml test_pg_cluster. But if I create some databases, insert some data, and then restart the services, my data is lost.
I know that I need to use a volume to persist the data on the host.
I created a volume with docker volume create pgdata (using the default Docker volume directory) and mounted it like this:
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    volumes:
      - pgdata:/data/dbnode1
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
volumes:
  pgdata:
When the container starts, it has its own configs in the data directory data/dbnode1 inside the container. If I mount the pgdata volume to store the data on the host, I can't connect to the database, and the data/dbnode1 directory inside the container is empty. How can I create a persistent data volume so that changed PostgreSQL data is saved?
It is way easier to create volumes by adding the path directly. Check this example.
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    volumes:
      - /opt/dbnode1/:/data/dbnode1
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
Note the lines
    volumes:
      - /opt/dbnode1/:/data/dbnode1
where I am using the path /opt/dbnode1/ to store the filesystem from the container at /data/dbnode1.
Also, it is important to note that Docker Swarm does not create host folders for you, so you have to create the folder before starting the service. Run mkdir -p /opt/dbnode1 to do so, on every node where the service can be scheduled.
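If you would rather keep the named volume created with docker volume create pgdata (as in the question), the service mount needs a leading dash and the volume needs a top-level declaration. A rough, untested sketch:
  dbnode1:
    image: seocahill/patroni:1.2.5
    volumes:
      - pgdata:/data/dbnode1   # named volume, not a host path
    # ... rest of the service definition as above
volumes:
  pgdata:
    external: true   # reuse the volume created with docker volume create pgdata
Keep in mind that with the default local driver the data still lives only on the node where the container happens to run.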
I designed a docker-compose.yml file that is also supposed to work with individual volumes.
I created a RAID drive which is mounted as /dataraid on my system. I can read from and write to it, but when I use it in my compose file I get read-only file system error messages.
If I point the volumes at another path, like /home/myname/test, the compose file works.
I have no idea what makes /dataraid "read-only" for Docker.
What permission settings does a compose file need?
error message:
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
compose:
version: '3'
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - /dataraid/nextcloud/mariadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORD
    env_file:
      - db.env
  redis:
    image: redis
    restart: always
  app:
    image: nextcloud:fpm
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    environment:
      - MYSQL_HOST=db
    env_file:
      - db.env
    depends_on:
      - db
      - redis
  web:
    build: ./web
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=name.de
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x#y.de
    depends_on:
      - app
    ports:
      - 4080:80
    networks:
      - proxy-tier
      - default
  collabora:
    image: collabora/code
    expose:
      - 9980
    cap_add:
      - MKNOD
    environment:
      - domain=name.de
      - VIRTUAL_HOST=name.de
      - VIRTUAL_PORT=9980
      - VIRTUAL_PROTO=https
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x#y.de
      - username= #optional
      - password= #optional
    networks:
      - proxy-tier
    restart: always
  cron:
    build: ./app
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
  proxy:
    build: ./proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    environment:
      - VIRTUAL_PROTO=https
      - VIRTUAL_PORT=443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs:ro
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /dataraid/nextcloud/nginx-conf.d:/etc/nginx/conf.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier
  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy
networks:
  proxy-tier:
See the error messages:
bernd#sys-dock:/dataraid/Docker-Configs/nextcloud$ docker-compose up -d
Creating network "nextcloud_default" with the default driver
Creating network "nextcloud_proxy-tier" with the default driver
Creating nextcloud_db_1 ...
Creating nextcloud_proxy_1 ... error
Creating nextcloud_db_1 ... error
Creating nextcloud_collabora_1 ...
ERROR: for nextcloud_proxy_1 Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
Creating nextcloud_redis_1 ... done
Creating nextcloud_collabora_1 ... done
ERROR: for proxy Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
ERROR: Encountered errors while bringing up the project.
If Docker starts before the filesystem gets mounted, you could be seeing issues with the Docker engine trying to write to the parent filesystem. You can restart the Docker daemon to rule this out (systemctl restart docker in systemd-based environments).
If restarting the daemon helps, then you can add a dependency between the Docker engine and the external filesystem mounts. In systemd, that involves an After= clause in the unit file. E.g. you could create a /etc/systemd/system/docker.service.d/override.conf file containing:
[Unit]
After=nfs-client.target
(Note that I'm not sure nfs-client.target is the correct unit for your filesystem; you'll want to check where and how it gets mounted.)
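To find out which unit actually provides the mount, something like this can help (illustrative commands, not from the original answer):
findmnt --target /dataraid -o SOURCE,TARGET,FSTYPE,OPTIONS
systemctl list-units --type=mount | grep -i dataraid
# after writing the override file, reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker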
Another issue I've seen people encounter recently is Snap-based Docker installs, which run Docker inside another container technology and prevent access to paths not explicitly configured in the Snap.
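A quick way to check whether the engine is the Snap package (assuming snapd is installed):
snap list | grep -i docker   # a docker entry here means the Snap-packaged engine is in use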
I've created a PrestaShop store on a server. Is there any way to use Docker for my store and migrate it to another server using Docker? I know that I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin.
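The files and the database can be pulled off the old server roughly like this (host name and paths are only placeholders):
# export the database (or use the phpMyAdmin export mentioned above)
mysqldump -u root -p prestashop > backup.sql
# copy the shop files into ./src next to docker-compose.yml
rsync -a user@old-server:/var/www/html/ ./src/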
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      # mount the dump as a single file so the MariaDB entrypoint imports it on first start
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest gotcha is the IP when using docker-machine. Keep in mind that if you are using Docker Toolbox the VM has the IP 192.168.99.100, but with Docker for Windows the services are reachable on localhost.
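With Docker Toolbox the VM's IP can be confirmed like this (assuming the default machine name):
docker-machine ip default   # typically prints 192.168.99.100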
You can use this docker-compose.yml:
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword with your own values.
Then run docker-compose up (see the note below about creating the network first).
Web URL: localhost:82
phpMyAdmin URL: localhost:1235
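Because mycustomnetwork is declared as external, it has to exist before the stack is started:
docker network create mycustomnetwork
docker-compose up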
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. docker-compose will be used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
You should also include a bridge network in your compose file; some examples that might work are at https://runnable.com/docker/docker-compose-networking.
That way the db can be configured so that it is reachable only by PrestaShop on the local Docker network, without being exposed outside. The PrestaShop DB_SERVER can also point at the service name of the running container, in case your IP changes. All you would leave exposed is port 80 on the app, as in the sketch below.
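A minimal sketch of that idea (the network name and the trimmed-down services are illustrative, not a complete file):
version: "3"
services:
  prestashop:
    image: prestashop/prestashop
    ports:
      - 80:80              # only the app is published on the host
    environment:
      - DB_SERVER=mariadb  # the service name resolves on the internal network
    networks:
      - internal
  mariadb:
    image: mariadb
    networks:
      - internal           # no ports section, so the DB is not reachable from outside
networks:
  internal:
    driver: bridge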
I'm using docker-compose to spawn two containers. I would like to share the /tmp directory between these two containers (but not with the host's /tmp, if possible). This is because I'm uploading some files through Flask to /tmp and want to process those files from Celery.
flask:
  build: .
  command: "gulp"
  ports:
    - '3000:3000'
    - '5000:5000'
  links:
    - celery
    - redis
  volumes:
    - .:/usr/src/app:rw
celery:
  build: .
  command: "celery -A web.tasks worker --autoreload --loglevel=info"
  environment:
    - C_FORCE_ROOT="true"
  links:
    - redis
    - neo4j
  volumes:
    - .:/usr/src/app:ro
You can use a named volume:
flask:
  build: .
  command: "gulp"
  ports:
    - '3000:3000'
    - '5000:5000'
  links:
    - celery
    - redis
  volumes:
    - .:/usr/src/app:rw
    - tmp:/tmp
celery:
  build: .
  command: "celery -A web.tasks worker --autoreload --loglevel=info"
  environment:
    - C_FORCE_ROOT="true"
  links:
    - redis
    - neo4j
  volumes:
    - .:/usr/src/app:ro
    - tmp:/tmp
When Compose creates the volume for the first container, it is initialized with the contents of /tmp from the image. After that, it persists until deleted with docker-compose down -v.
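Note that the example above keeps the original version 1 file format. If you move to the version 2+ format, the named volume also needs a top-level declaration, roughly like this (only the relevant parts shown):
version: "2"
services:
  flask:
    build: .
    volumes:
      - .:/usr/src/app:rw
      - tmp:/tmp
  celery:
    build: .
    volumes:
      - .:/usr/src/app:ro
      - tmp:/tmp
volumes:
  tmp:   # declared here so both services share the same named volume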