I am using docker-compose for my open-source web application. In the process of publishing the project on GitHub I wanted to make my docker-compose.yaml file easier to understand and adapt. I'm still a beginner with Docker, but the file works as intended; I just want to improve the readability and changeability of the volumes used by the containers. The values /a/large/directory/or/disk:/var/lib/postgresql/data and /another/large/disk/:/something will most likely need to be adapted to the system the user runs my application on. Can I introduce variables for these? How can I make it more obvious that these values are meant to be changed by the person running my application?
My current docker-compose.yaml file
version: '3'
services:
  postgres:
    image: postgres:latest
    restart: always
    expose:
      - 5432
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'sample'
    volumes:
      - /a/large/directory/or/disk:/var/lib/postgresql/data
    networks:
      - mynetwork
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'db'
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    expose:
      - 3306
    ports:
      - 3306:3306
    volumes:
      - ~/data/mysql:/var/lib/mysql
    networks:
      - mynetwork
    depends_on:
      - postgres
  core:
    restart: always
    build: core/
    environment:
      SPRING_APPLICATION_JSON: '{
        "database.postgres.url": "postgres:5432/sample",
        "database.postgres.user": "postgres",
        "database.postgres.password": "password",
        "database.mysql.host": "mysql",
        "database.mysql.user": "root",
        "database.mysql.password": "password"
      }'
    volumes:
      - ~/data/core:/var
      - /another/large/disk/:/something
    networks:
      - mynetwork
    depends_on:
      - mysql
    ports:
      - 8080:8080
  web:
    restart: always
    build: web/
    networks:
      - mynetwork
    depends_on:
      - core
    ports:
      - 3000:3000
networks:
  mynetwork:
    driver: bridge
volumes:
  myvolume:
(I'd also appreciate any other suggestions for improvements to my file!)
Docker Compose supports variable interpolation, so you can introduce variables for those paths. The trade-off is that you then need to document those values, and people might just run docker compose up and expect it to work without extra setup.
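A minimal sketch of what that could look like (the variable names POSTGRES_DATA_DIR and CORE_DATA_DIR are just examples). Compose reads a .env file next to the compose file and substitutes ${...} references; the ${VAR:?message} form aborts with that message if the variable is unset, which makes it obvious the value must be provided:

# .env, next to docker-compose.yaml
POSTGRES_DATA_DIR=/a/large/directory/or/disk
CORE_DATA_DIR=/another/large/disk

# docker-compose.yaml (excerpt)
services:
  postgres:
    volumes:
      - ${POSTGRES_DATA_DIR:?set this to a large directory or disk}:/var/lib/postgresql/data
  core:
    volumes:
      - ${CORE_DATA_DIR:?set this to a large directory or disk}:/something

Committing a .env.example with placeholder values (and keeping the real .env out of version control) is a common way to signal which values the user is expected to change.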
Compose typically isn't used for production deployment, so you wouldn't normally need a dedicated disk for the data. That being said, you can simply use directories relative to the compose file (./data/app:/mount) instead of home-folder or absolute paths, or a Docker-managed named volume.
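A sketch of the named-volume variant for the Postgres data, which lets Docker manage where the data actually lives (the volume name pgdata is only illustrative):

services:
  postgres:
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: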
When I use Docker Swarm with MySQL I run into a problem: if the MySQL container is re-created, the data disappears.
I spent a lot of time searching for a fix. I use a volume to store the data and GlusterFS to share that volume across hosts, but I'm experiencing database data inconsistency.
Is my method right, or can someone tell me how to fix this problem?
Finally, this is my example yaml file:
version: '3.8'
services:
  www:
    image: httpd:latest
    ports:
      - "8001:80"
    volumes:
      - /usr/papertest/src:/var/www/html/
  db:
    image: mariadb:latest
    restart: always
    volumes:
      - /usr/test/src:/docker-entrypoint-initdb.d
      - /etc/timezone:/etc/timezone:ro
      - /usr/test/backup:/var/lib/mysql  # /usr/test/backup is glusterfs mount place
    environment:
      MYSQL_ROOT_PASSWORD: MySQL_PASSWORD
      MYSQL_DATABASE: MYSQL_DATABASE
      #MYSQL_USER: MYSQL_USER
      #MYSQL_PASSWORD: MYSQL_PASSWD
  phpmyadmin:
    image: phpmyadmin
    #restart: always
    ports:
      - 4567:80
    environment:
      - PMA_ARBITRARY=1
I'm trying to set up a docker-compose file for running Apache Guacamole.
The compose file has 3 services, 2 for guacamole itself and 1 database image. The problem is that the database has to be initialized before the guacamole container can use it, but the files to initialize the database are in the guacamole image. The solution I came up with is this:
version: "3"
services:
init:
image: guacamole/guacamole:latest
command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/" ]
volumes:
- dbinit:/init
database:
image: postgres:latest
restart: unless-stopped
volumes:
- dbinit:/docker-entrypoint-initdb.d
- dbdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- init
guacd:
image: guacamole/guacd:latest
restart: unless-stopped
guacamole:
image: guacamole/guacamole:latest
restart: unless-stopped
ports:
- "8080:8080"
environment:
GUACD_HOSTNAME: guacd
POSTGRES_HOSTNAME: database
POSTGRES_DATABASE: guac
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- database
- guacd
volumes:
dbinit:
dbdata:
So I have one container whose job is to copy the database initialization files into a volume, and then I mount that volume into the database container. The problem is that this creates a race condition, and it's ugly. Is there a more elegant solution? Is it possible to mount the files from the guacamole image directly into the database container? I would rather avoid shipping an extra SQL file alongside the docker-compose file.
Thanks in advance!
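One way to at least make the ordering explicit in the init-container approach above — a sketch only, assuming a Compose release recent enough to support depends_on conditions — is to have the database wait until the init container has exited successfully instead of merely having been started:

services:
  init:
    image: guacamole/guacamole:latest
    command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/"]
    restart: "no"          # one-shot job; it should exit after copying
    volumes:
      - dbinit:/init
  database:
    image: postgres:latest
    depends_on:
      init:
        condition: service_completed_successfully  # wait for the copy to finish
    volumes:
      - dbinit:/docker-entrypoint-initdb.d
      - dbdata:/var/lib/postgresql/data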
I'm trying to learn about containers, but I have a problem with my docker-compose.yml file. After I run docker compose up, I always get the same error:
"ERROR: yaml.scanner.ScannerError: mapping values are not allowed
here"
Even if I change the mount path to a Docker volume, I get the same error. This is my yml file:
version: "3"
services:
database:
image: mariadb
ports:
- "3260:3260"
volumes:
- /home/randy/Desktop/Latihan/wordpress-mariadb/mariadb:var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
wordpress:
image: wordpress
ports:
- "2000:80"
volumes:
- /home/randy/Desktop/Latihan/wordpress-mariadb/wordpress:/var/www/html
environment:
WORDPRESS_DB_PASSWORD: root
depends_on:
- database
links:
- database
It appears that your YAML is invalid. When I face these types of issues, I use a site called http://www.yamllint.com/, which will validate the syntax for you.
This YAML, based on your example, is valid:
Note: you can use 4 spaces for indentation (or 2, which I prefer), but never use tabs.
version: "3"
services:
database:
environment:
MYSQL_ROOT_PASSWORD: root
image: mariadb
ports:
- "3260:3260"
volumes:
- "/home/randy/Desktop/Latihan/wordpress-mariadb/mariadb:var/lib/mysql"
wordpress:
image: wordpress
ports:
- "2000:80"
volumes:
- /home/randy/Desktop/Latihan/wordpress-mariadb/wordpress:/var/www/html
environment:
WORDPRESS_DB_PASSWORD: root
depends_on:
- database
links:
- database
I have a basic Symfony 4 setup and an issue with loading speed.
Currently I'm trying the docker-sync tool; it's up and running, but it doesn't seem to do anything, and the speed stays the same.
Here is my current setup:
docker-compose.yml:
version: '3'
services:
  apache:
    build: .docker/apache
    container_name: sf4_apache
    ports:
      - 80:80
    volumes:
      - .docker/config/vhosts:/etc/apache2/sites-enabled
      - .:/home/wwwroot/sf4
    depends_on:
      - php
  mysql:
    image: mysql
    command: "--default-authentication-plugin=mysql_native_password"
    container_name: sf4_mysql
    restart: always
    volumes:
      - ./data/db/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: sf4
      MYSQL_USER: sf4
      MYSQL_PASSWORD: sf4
  php:
    build: .docker/php
    container_name: sf4_php
    volumes:
      - .:/home/wwwroot/sf4
    environment:
      - maildev_host=sf4_maildev
    depends_on:
      - mysql
    links:
      - mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: sf4_phpmyadmin
    environment:
      PMA_HOST: mysql
      PMA_PORT: 3306
    ports:
      - 8080:80
    links:
      - mysql
docker-sync.yml
version: "2"
options:
verbose: true
syncs:
appcode-native-osx-sync: # tip: add -sync and you keep consistent names as a convention
src: './'
# sync_strategy: 'native_osx' # not needed, this is the default now
sync_excludes: ['ignored_folder', '.ignored_dot_folder']
In this case I synced my entire Symfony application folder, but it doesn't help. docker-sync is up and running, as are all my containers, but performance is still slow. Any other ideas what I could do? I found that one suggested solution is to move the vendor folder out of the shared files. How would I do that?
Problem solved. More info about performance on macOS is available at https://docs.docker.com/docker-for-mac/osxfs-caching/.
So I mounted my volumes using the delegated flag, like this:
apache:
  build: .docker/apache
  container_name: sf4_apache
  ports:
    - 80:80
  volumes:
    - .docker/config/vhosts:/etc/apache2/sites-enabled:delegated
    - .:/home/wwwroot/sf4:delegated
  depends_on:
    - php
The default skeleton Symfony 4 project now loads in less than 1 second. It's still kinda slow, but it's ten times faster than before. :)
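Regarding moving the vendor folder out of the shared files: a common pattern — sketched here under the assumption that vendor/ lives at /home/wwwroot/sf4/vendor inside the container — is to overlay a named volume on that path so it is not synced with the host at all:

php:
  build: .docker/php
  container_name: sf4_php
  volumes:
    - .:/home/wwwroot/sf4:delegated
    - sf4_vendor:/home/wwwroot/sf4/vendor  # shadows the host's vendor directory

# at the top level of the compose file
volumes:
  sf4_vendor:

With that in place, composer install has to be run inside the container, since the host's vendor directory is no longer visible there.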
Instead of the mysql container link below, I want to link to an AWS MySQL RDS instance in my docker yml file. Is that possible?
mysql_db:
  image: mysql:5.6
  container_name: shishir_db
  environment:
    MYSQL_ROOT_PASSWORD: "xxxxxxxx"
    #MYSQL_USER: "shishir"
    #MYSQL_DATABASE: "shishir1"
    MYSQL_PASSWORD: "xxxxxxxx"
  ports:
    - "3306:3306"
There are a couple of ways that you can link to an AWS RDS MySQL instance from your docker-compose.yml.
The first, and perhaps simplest, way is to set environment variables on the containers that need access to the RDS MySQL instance. For example, you could update your OperationEngine service definition to look something like this:
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    # mapping form; the list form would need KEY=value entries instead
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
    MYSQL_HOST: "your-mysql-cname.rds.amazon.com"
    MYSQL_USER: "username"
    MYSQL_PASSWORD: "password"
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
You can then update the configuration in that service to read the database connection details from the environment, e.g. ${MYSQL_HOST}.
The obvious downside to this approach is that you have connection details stored as plain text in your docker-compose.yml, which is not great, but may be acceptable depending on your requirements.
The second approach (and the one I tend to favour) is to bind-mount the database configuration into the running container.
Most applications support reading database connection details from a properties file. As an example, let's say that on start-up your application reads from /config/database.properties and requires the following properties to connect to the database:
config.db.host=your-mysql-cname.rds.amazon.com
config.db.user=foo
config.db.password=bar
I would set up my environment so that, at runtime, I bind-mount a properties file that provides all of the required values to the container:
OperationEngine:
  volumes:
    - /secure/config/database.properties:/config/database.properties
The /secure/config directory is part of the filesystem on your Docker host. How that directory gets created and populated is up to you. Typically I approach this by having my environment setup scripts create the directory and then clone a private Git repository into it that contains the correct configuration for that environment. Naturally, only those with the required permission levels can view the Git repositories that contain sensitive configuration details, e.g. for production systems.
Hope that helps.
No, I have not created networking in my yml file. Below is the entire content of my yml file, so what changes would be required?
mysql_db:
  image: mysql:5.6
  container_name: shishir_db
  environment:
    MYSQL_ROOT_PASSWORD: "xxxxxxxx"
    #MYSQL_USER: "shishir"
    #MYSQL_DATABASE: "shishir1"
    MYSQL_PASSWORD: "xxxxxxxx"
  ports:
    - "3306:3306"
zookeeper:
  image: wurstmeister/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
kafka:
  image: shishir/kafkaengine:0.10
  container_name: kafka
  links:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
    KAFKA_ZOOKEEPER_CONNECT: ${DOCKER_HOST_IP}:2181
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_CREATE_TOPICS: ${KAFKA_TOPICS}
    KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS: 12000
    KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 12000
flywaydb:
  image: shishir/flywaydb
  container_name: flywaydb
  links:
    - mysql_db
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  links:
    - mysql_db
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
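For completeness, a sketch of how the file above might change when pointing at RDS, following the environment-variable suggestion from the answer: the mysql_db service and the links to it are removed, and each service that talks to the database reads the RDS endpoint from the environment. The hostname, credentials, and variable names are placeholders, and the images (e.g. shishir/flywaydb) would need to actually read these variables:

OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
    MYSQL_HOST: "your-mysql-cname.rds.amazon.com"  # placeholder RDS endpoint
    MYSQL_USER: "username"                         # placeholder
    MYSQL_PASSWORD: "password"                     # placeholder
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/

flywaydb would need the same kind of environment entries so its migrations target the RDS endpoint instead of the removed mysql_db container.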