docker stack not finding volumes on worker node

I am moving from docker-compose to docker stack. I am trying to launch my database on my worker node from my manager node, so I am using a docker-stack.yml file.
On the manager I run the command: docker stack deploy -c docker-stack.yml mr
I get these errors:
* error decoding 'Volumes[0]': invalid spec: :/docker-entrypoint-initdb.d: empty section between colons
* error decoding 'Volumes[1]': invalid spec: db_data:: empty section between colons
* error decoding 'Volumes[2]': invalid spec: db_logs:: empty section between colons
Is there a way in docker stacks to tell it to look for those volumes locally on the worker node? And for .env files as well?
Here is my docker-stack.yaml:
version: "3.8"
services:
nginx:
container_name: nginx
image: "${NGINX_IMAGE}"
build: build/nginx
deploy:
placement:
constraints: [node.role == manager]
restart: always
env_file: .env
ports:
- "80:80"
- "443:443"
volumes:
- "${APP_HOST_DIR}/public:/var/www/app/public:ro"
- "${APP_HOST_LETSENCRYPT}:${APP_CONTAINER_LETSENCRYPT}"
- "${APP_HOST_NGINX_CONF}:${APP_CONTAINER_NGINX_CONF}"
networks:
- central_mr
depends_on:
- app
app:
container_name: app
image: "${APP_IMAGE}"
deploy:
placement:
constraints: [node.role == manager]
restart: always
build: build/app
env_file: .env
networks:
- central_mr
volumes:
- "${APP_HOST_DIR}:${APP_CONTAINER_DIR}"
dbmr:
container_name: database
image: "${MARIADB_VERSION}"
restart: always
deploy:
placement:
constraints: [node.role == worker]
env_file: .env
volumes:
- "${SQL_INIT}:/docker-entrypoint-initdb.d"
- "db_data:${MARIADB_DATA_DIR}"
- "db_logs:${MARIADB_LOG_DIR}"
environment:
MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
MYSQL_DATABASE: "${MYSQL_DATABASE}"
MYSQL_USER: "${MYSQL_USER}"
MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
ports:
- "3306:3306"
networks:
- central_mr
volumes:
db_data:
db_logs:
networks:
central_mr:

The .env is on the worker node; I have a different .env on my manager. I need the service to look for the .env on the machine it is running on. Same for the volumes.
This isn't supported. First, I'm not sure the .env file will be parsed (it wasn't last time I checked). So to expand variables inside the compose file, you need to source those variables yourself where you run the stack deploy command:
set -a; . ./.env; set +a
docker stack deploy -c docker-compose.yml stack_name
That does not expand the values from the files on the workers. There will only be one state for the service and containers deployed on workers, with one exception.
A few of the fields support service templates, which let you set things like an env variable or a volume mount using placeholders such as {{.Node.Hostname}}.
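For example, a minimal sketch of the template approach (the mariadb:10.5 tag and the per-node host path /srv/db are assumptions for illustration; templates are only honored in the env, hostname, and mount fields):
services:
  dbmr:
    image: mariadb:10.5
    hostname: "db-{{.Node.Hostname}}"
    environment:
      # resolved per task, so each node's container sees its own value
      NODE_NAME: "{{.Node.Hostname}}"
    volumes:
      # bind mount a per-node host directory; it must already exist on whichever node runs the task
      - "/srv/db/{{.Node.Hostname}}:/var/lib/mysql"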

Related

Installing Magento 2 using Docker: the server already has a docker-compose.yml, should I write a separate one for Magento 2 in the Magento folder?

I have a docker-compose.yml in the VPS server root:
version: '3'
services:
  mysql:
    image: mariadb:10.3.17
    command: --max_allowed_packet=256M --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./data/db:/var/lib/mysql:delegated"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    restart: always
  litespeed:
    image: litespeedtech/litespeed:${LSWS_VERSION}-${PHP_VERSION}
    env_file:
      - .env
    volumes:
      - ./lsws/conf:/usr/local/lsws/conf
      - ./lsws/admin/conf:/usr/local/lsws/admin/conf
      - ./bin/container:/usr/local/bin
      - ./sites:/var/www/vhosts/
      - ./acme:/root/.acme.sh/
      - ./logs:/usr/local/lsws/logs/
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
      - 7080:7080
    restart: always
    environment:
      TZ: ${TimeZone}
  phpmyadmin:
    image: bitnami/phpmyadmin:5.0.2-debian-10-r72
    ports:
      - 8080:80
      - 8443:443
    environment:
      DATABASE_HOST: mysql
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
    restart: always
volumes:
  esdata:
The file above holds the server configuration. Should I write my Magento 2 configuration, shown below, in the same file?
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - /path/to/magento:/app:cached
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
If not, then what should the scenario be?
1- Should I create a new docker-compose-magento.yml in the root or inside the magento folder?
2- If I write a docker-compose.yml inside the magento folder, how can I connect it with the docker setup in the server root so that I can also use Elasticsearch?
First, you need to know which application is already running with the existing docker-compose file. You can check that in the existing virtual host configuration, which you will find inside the "sites" directory that is mapped to the LiteSpeed web server's virtual host path "/var/www/vhosts" in the volume mapping.
If an application is definitely running from that docker-compose file, then you have to create a separate docker-compose file for Magento. In that case a separate Docker network will be created for all the Magento 2 docker-compose services, and you cannot reach a service (Elasticsearch) that lives on another network (in the separate docker-compose project). You would have to add Elasticsearch to the Magento 2 docker-compose file as well.
If nothing is running from the existing docker-compose file, you can merge both files as your requirements dictate, or simply use only your new Magento 2 docker-compose file.
So the main thing here is the use of two different networks, and Docker containers can only talk to other containers on the same network.
Also, LiteSpeed is a web server that uses the same port numbers as the Apache image (webdevops/php-apache-dev:ubuntu-16.04), so there will be a port conflict if you create a new docker-compose file and try to run both simultaneously. You would need to manage that by using different host ports, which is not an option on a production server, because people are not going to access web URLs on non-default port numbers.
The solution for this is Kubernetes, where you can run multiple applications that all use the same public ports without conflict, since in Kubernetes you divide your single physical server into multiple virtual machines and hence avoid port conflicts.
See this article for a Kubernetes setup: https://technicallysound.in/how-to-setup-a-static-site-on-kubernetes/
See this article for a Magento setup on Docker: https://technicallysound.in/how-to-setup-magento-2-on-docker-for-development/

Hey guys, is there a way to add an init.sql file to my Docker Swarm stack?

Another question: how do I set the name of the volume in my stack? When I run the following command: docker stack deploy --compose-file=docker-compose.yml mysql
it automatically adds the mysql_ prefix to the name of the volume.
version: '3.7'
services:
  db:
    image: "mysql:5.7"
    deploy:
      replicas: 2
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - mysql-data:/var/lib/mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/my_file_secret
    secrets:
      - my_file_secret
secrets:
  my_file_secret:
    file: ./my_file_secret.txt
volumes:
  mysql-data:
    driver: local
You can set a volume name by adding name:
volumes:
  data:
    name: VOLUME_NAME
The name is used as is and will not be scoped with the stack name
see: https://docs.docker.com/compose/compose-file/#name
But as suggested in the first comment - it doesn't really matter.
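Applied to the stack in the question (the name field needs compose file version 3.4 or newer), a minimal sketch:
volumes:
  mysql-data:
    driver: local
    name: mysql-data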

Services don't start on docker swarm nodes

I want to deploy HA PostgreSQL with failover using Patroni, and HAProxy as a single entrypoint, in Docker Swarm.
I have this docker-compose.yml:
version: "3.7"
services:
etcd1:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd1
hostname: etcd1
command: etcd -name etcd1 -initial-advertise-peer-urls http://etcd1:2380
etcd2:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd2
hostname: etcd2
command: etcd -name etcd2 -initial-advertise-peer-urls http://etcd2:2380
etcd3:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd3
hostname: etcd3
command: etcd -name etcd3 -initial-advertise-peer-urls http://etcd3:2380
patroni1:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni1
container_name: test-patroni1
environment:
PATRONI_NAME: patroni1
deploy:
placement:
constraints: [node.role == manager]
# - node.labels.type == primary
# - node.role == manager
patroni2:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni2
container_name: test-patroni2
environment:
PATRONI_NAME: patroni2
deploy:
placement:
constraints: [node.role == worker]
# - node.labels.type != primary
# - node.role == worker
patroni3:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni3
container_name: test-patroni3
environment:
PATRONI_NAME: patroni3
deploy:
placement:
constraints: [node.role == worker]
# - node.labels.type != primary
# - node.role == worker
haproxy:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: haproxy
container_name: test-haproxy
ports:
- "5000:5000"
- "5001:5001"
command: haproxy
networks:
test:
driver: overlay
attachable: true
And I deploy these services in Docker Swarm with this command:
docker stack deploy --compose-file docker-compose.yml test
When I use this command, my services are created, but patroni2 and patroni3 do not start on the other nodes, whose role is worker. They don't start at all!
I want to see my services deployed on all the nodes (3 - one manager and two workers) that exist in the Docker Swarm.
But if I delete the constraints, all my services start on one node when I deploy the docker-compose.yml in the swarm.
Maybe these services can't see my network, although I created it following the official Docker documentation.
With different service names, docker will not attempt to spread containers across multiple nodes, and will fall back to the least used node that satisfies the requirements, where least used is measured by the number of scheduled containers.
You could attempt to solve this by using the same service name and 3 replicas. This would require that they be defined identically. To make this work, you can leverage a few features, the first being that etcd.tasks will resolve to the individual IP addresses of each etcd service container. The second is service templates, which can be used to inject values like {{.Task.Slot}} into the settings for hostname, volume mounts, and env variables. The challenge is that the list at the end will likely not give you what you want, which is a way to uniquely address each replica from the other replicas. Hostname seems like it would work, but it unfortunately does not resolve in docker's DNS implementation (and wouldn't be easy to implement since it's possible to create a container with the capabilities to change the hostname after docker has deployed it).
The option you are left with is configuring constraints on each service to run on specific nodes. That's less than ideal, and reduces the fault tolerance of these services. If you have lots of nodes that can be separated into 3 groups then using node labels would solve the issue.
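A minimal sketch of that node-label approach, reusing the commented-out node.labels.type constraint from the compose file above (the label is added to a node beforehand with docker node update --label-add):
services:
  patroni2:
    image: patroni
    deploy:
      placement:
        constraints:
          # label added up front with: docker node update --label-add type=primary <manager-node>
          - node.labels.type != primary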

docker-stack.yml invalid volume type bind

This is my docker-stack.yml file
version: "3"
services:
mysql:
image: mysql:latest
deploy:
replicas: 1
update_config:
parallelism: 1
restart_policy:
condition: on-failure
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: <Censored>
MYSQL_USER: <Censored>
MYSQL_PASSWORD: <Censored>
volumes:
- ./db/data:/var/lib/mysql
- ./db/logs:/var/log/mysql
- ./db/config:/etc/mysql/conf.d
php:
image: wiput1999/php
volumes:
- ./web:/web
nginx:
image: nginx:latest
ports:
- "80:80"
- "443:443"
volumes:
- ./code:/code:ro
- ./site.conf:/etc/nginx/conf.d/default.conf
- /etc/letsencrypt:/etc/letsencrypt
- ./nginx/log:/var/log/nginx
When I run this stack, the mysql and nginx services fail with this error:
"invalid mount config for type "bind": bind source path does not exist"
I have no idea what is wrong with my code.
bind is a type of mount that is used to mount a directory (or a file) on the host into the container. All of your volumes are set up like that, so one of your source directories (or files) does not exist on the host. Check each of these:
./db/data
./db/logs
./db/config
./web
./code
./site.conf
/etc/letsencrypt
./nginx/log
You could execute ls -ld ./db/data ./db/logs ./db/config ./web ./code ./site.conf /etc/letsencrypt ./nginx/log >/dev/null and look at the error message to find out which one.
Please consider using docker configs and docker secrets in place of volumes.
version: "3.3"
services:
nginx:
configs:
- source: nginx_vhost
target: /etc/nginx/conf.d/default.conf
secrets:
- ssl_private_key
...
configs:
nginx_vhost:
file: ./site.conf
secrets:
ssl_private_key:
file: /etc/letsencrypt/private.key
https://docs.docker.com/engine/swarm/configs/ and https://docs.docker.com/compose/compose-file/#configs

Docker-Compose persistent data MySQL

I can't seem to get MySQL data to persist if I run $ docker-compose down with the following .yml
version: '2'
services:
  # other services
  data:
    container_name: flask_data
    image: mysql:latest
    volumes:
      - /var/lib/mysql
    command: "true"
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
      MYSQL_USER: 'test'
      MYSQL_PASS: 'pass'
    volumes_from:
      - data
    ports:
      - "3306:3306"
My understanding is that using volumes: - /var/lib/mysql in my data container maps it to the directory on my local machine where MySQL stores data, and that because of this mapping the data should persist even if the containers are destroyed. The mysql container is just a client interface into the db and can see the local directory because of volumes_from: - data.
I attempted this answer and it did not work: Docker-Compose Persistent Data Trouble
EDIT
I changed my .yml as shown below and created the dir ./data, but now when I run docker-compose up --build the mysql container won't start and throws the error shown after the file:
data:
  container_name: flask_data
  image: mysql:latest
  volumes:
    - ./data:/var/lib/mysql
  command: "true"
mysql:
  container_name: flask_mysql
  restart: always
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
    MYSQL_USER: 'test'
    MYSQL_PASS: 'pass'
  volumes_from:
    - data
  ports:
    - "3306:3306"
flask_mysql | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
flask_mysql | 2016-08-26T22:29:21.182144Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
flask_mysql | 2016-08-26T22:29:21.185392Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
The data container is a superfluous workaround. Data-volumes would do the trick for you. Alter your docker-compose.yml to:
version: '2'
services:
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
      MYSQL_USER: 'test'
      MYSQL_PASS: 'pass'
    volumes:
      - my-datavolume:/var/lib/mysql
volumes:
  my-datavolume:
Docker will create the volume for you in the /var/lib/docker/volumes folder. This volume persists as long as you do not run docker-compose down -v.
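To check where that data ends up on the host, a quick sketch (the flask_ prefix is an assumption; Compose prefixes the volume name with the project/directory name):
docker volume ls
docker volume inspect flask_my-datavolume --format '{{ .Mountpoint }}'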
There are 3 ways:
First way
You need to specify the directory where the MySQL data is stored on your host machine. You can then remove the data container; your MySQL data will be saved on your local filesystem.
The mysql container definition must look like this:
mysql:
  container_name: flask_mysql
  restart: always
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
    MYSQL_USER: 'test'
    MYSQL_PASS: 'pass'
  volumes:
    - /opt/mysql_data:/var/lib/mysql
  ports:
    - "3306:3306"
Second way
Would be to commit the data container before typing docker-compose down:
docker commit my_data_container
docker-compose down
Third way
You can also use docker-compose stop instead of docker-compose down (then you don't need to commit the container).
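A quick sketch of that workflow:
docker-compose stop    # stops the containers but keeps them and their volumes
docker-compose start   # brings the same containers back with their data intact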
First, delete all the old MySQL data using
docker-compose down -v
After that, add these lines to your docker-compose.yml:
volumes:
  - mysql-data:/var/lib/mysql
and
volumes:
  mysql-data:
Your final docker-compose.yml will look like this:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will persist and will not be deleted even after running this command:
docker-compose down
Extra: if you want to delete all the data, use
docker-compose down -v
You have to create a separate volume for mysql data.
So it will look like this:
volumes_from:
  - data
volumes:
  - ./mysql-data:/var/lib/mysql
And no, /var/lib/mysql is a path inside your mysql container and has nothing to do with a path on your host machine. Your host machine may not even have MySQL installed at all. So the goal is to persist an internal folder from the mysql container.
Adding on to the answer from #Ohmen, you could also add an external flag to create the data volume outside of docker compose. This way docker compose would not attempt to create it. Also you wouldn't have to worry about losing the data inside the data-volume in the event of $ docker-compose down -v.
The below example is from the official page.
version: "3.8"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
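Because the volume is declared external, Compose will not create it and expects it to exist already, so create it once beforehand:
docker volume create data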
This is a path, and you have to give a valid path for it to work. If your data directory is in the current directory, then instead of my-data you should write ./my-data, otherwise you will get that error in mysql and mariadb as well.
volumes:
  - ./my-data:/var/lib/mysql
Feasible bind mount solution:
mariadb:
  image: mariadb:latest
  restart: unless-stopped
  environment:
    - MARIADB_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - type: bind
      source: /host/dir
      target: /var/lib/mysql
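As with the bind-mount errors discussed earlier, the source directory must exist on the host before the service starts; /host/dir is just the placeholder path from the snippet above:
mkdir -p /host/dir
docker-compose up -d mariadb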
