Why is this volume mounted as read only? - docker

From this link https://hub.docker.com/_/wordpress I got a docker-compose.yml file like this:
version: '3.1'

services:

  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql

volumes:
  wordpress:
  db:
But the volume is not getting mounted on the wordpress folder, and if I remove "wordpress" so the volume mounts onto the local folder, it does get mounted, but only as read-only. I don't have a Dockerfile; am I just using the default image, and is that the problem?

I use docker all the time for wordpress, this is my workflow.. (on a Mac)
Create a project folder and add a docker-compose.yml file with this..
version: '3.7'

services:

  # here is our mysql container
  db:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql:delegated
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  # here is our wordpress container
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      # our persistent local data re-routing
      - .:/var/www/html/wp-content/themes/testing:delegated
      - ./plugins:/var/www/html/wp-content/plugins
      - ./uploads:/var/www/html/wp-content/uploads
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    ports:
      - "80:80"
    restart: always
    environment:
      # our local dev environment
      WORDPRESS_DEBUG: 1
      DEVELOPMENT: 1
      # docker wp config settings
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_AUTH_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_AUTH_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_CONFIG_EXTRA: |
        /* Development parameters */
        define('WP_CACHE', false);
        define('ENVIRONMENT', 'local');
        /* Configure mailhog server */
        define('WORDPRESS_SMTP_AUTH', false);
        define('WORDPRESS_SMTP_SECURE', '');
        define('WORDPRESS_SMTP_HOST', 'mailhog');
        define('WORDPRESS_SMTP_PORT', '1025');
        define('WORDPRESS_SMTP_USERNAME', null);
        define('WORDPRESS_SMTP_PASSWORD', null);
        define('WORDPRESS_SMTP_FROM', 'whoever@example.com');
        define('WORDPRESS_SMTP_FROM_NAME', 'Whoever');
        /* other wp-config defines here */

  # here is our mailhog container
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "8025:8025"
Then in this project folder run this terminal command (with docker app running)..
docker-compose up -d
..and bam you will have a wordpress site at http://localhost/
This docker-compose.yml is configured so you will have..
Wordpress site at http://localhost/ or http://localhost:80
Mailhog server that captures all outgoing mail at http://localhost:8025/
Persistent local database, plugins, uploads and uploads.ini
All persistent data is mapped to this project folder, which is your theme folder. This keeps things nice and collected for each project..
After the initial build, before you do anything else, replace the uploads.ini folder with an actual uploads.ini file (docker creates uploads.ini as an empty folder on the first run because the file doesn't exist yet)..
file_uploads = On
memory_limit = 2000M
upload_max_filesize = 2000M
post_max_size = 2000M
max_execution_time = 600
So your project (theme) folder should look like this..
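Based on the volume mappings above, the project (theme) folder ends up containing roughly the following (the theme files such as style.css and index.php are whatever you add yourself):

demo/                  <- project root, which is also your theme folder
  db/                  <- persistent mysql data (created by the db container)
  plugins/             <- persistent wp plugins
  uploads/             <- persistent wp uploads
  uploads.ini          <- php upload settings (see above)
  docker-compose.yml
  style.css            <- your theme files sit at the root
  index.php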
Now you can start building your theme in this folder like this, keeping everything associated with this local environment here.
Every time you run this command to shut down the environment..
docker-compose down
Next time you run this command on this project..
docker-compose up -d
..all your previous database, plugins and uploads will be exactly where you left off, awesome for wordpress local dev!
A few considerations to take into account..
Using GIT with this local wordpress project
If you're using Git to save this project as a repository, make sure your .gitignore file excludes these files/folders..
/vendor
/uploads
/db
/dist
/node_modules
/plugins
It's worth having an alternative cloud backup (Google Drive, for example) set up for your /db, /plugins and /uploads folders, just in case of hard drive failure or something similar.
Deploying to server, staging and production
In your IDE settings for this project you may want to exclude certain files and folders from being deployed to staging and production server environments; this is what I normally exclude..
/Users/joshmoto/Sites/demo/db
/Users/joshmoto/Sites/demo/node_modules
/Users/joshmoto/Sites/demo/src
/Users/joshmoto/Sites/demo/uploads
/Users/joshmoto/Sites/demo/composer.json
/Users/joshmoto/Sites/demo/composer.lock
/Users/joshmoto/Sites/demo/docker-compose.yml
/Users/joshmoto/Sites/demo/package-lock.json
/Users/joshmoto/Sites/demo/package.json
/Users/joshmoto/Sites/demo/uploads.ini
/Users/joshmoto/Sites/demo/webpack.mix.js
/Users/joshmoto/Sites/demo/plugins
/Users/joshmoto/Sites/demo/.gitignore
/Users/joshmoto/Sites/demo/vendor
/plugins I manage manually by uploading through cPanel or the wordpress dashboard, because the indexing is so heavy when syncing to the remote.
/vendor is generated by Composer and has the same problem; the indexing is crazy, so it's best to zip it up manually and upload it through cPanel or whatever server admin tool you use.
If you have problems uploading media and plugins locally via the wordpress dashboard
You may sometimes face permission problems with the docker wordpress container's wp-content folder. It is an easy fix. While your project is up and running in docker, run these docker commands in the project to change the permissions on the wp-content folder..
Show running docker containers command..
docker ps
Returned result..
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4052536e574a wordpress:latest "docker-entrypoint.s…" 1 hour ago Up 1 hour 0.0.0.0:80->80/tcp demo_wordpress_1
b0fbb0a8f9e6 mysql:5.7 "docker-entrypoint.s…" 1 hour ago Up 1 hour 0.0.0.0:3306->3306/tcp, 33060/tcp demo_db_1
769262e584e7 mailhog/mailhog:latest "MailHog" 1 hour ago Up 1 hour 1025/tcp, 0.0.0.0:8025->8025/tcp demo_mailhog_1
Copy and take note of the wordpress:latest container ID; in this example my container ID is..
4052536e574a
Now run this command, using the wordpress container ID, to get a shell inside the container..
docker exec -it 4052536e574a /bin/bash
Returned result..
root@4052536e574a:/var/www/html#
Now that we are in the container's web root directory, you can run this command, which will recursively change the permissions on wp-content and fix the upload permissions issue in the local wordpress dashboard.
chown -R www-data:www-data wp-content
You may have to run the docker-compose down and docker-compose up -d commands on the project folder for these changes to take effect, but I don't think you need to.
If you want a local custom URL instead of http://localhost
In terminal (not in your project folder this time) run this command to edit your hosts file..
sudo nano /etc/hosts
Then add your custom url to your hosts file via terminal, and WriteOut (Ctrl+O) before you Exit (Ctrl+X)..
127.0.0.1 local.demo.com
Now that your redirect mask is saved in your hosts file, you will need to add this to your docker-compose.yml below /* other wp-config defines here */..
define('WP_HOME','http://local.demo.com');
define('WP_SITEURL','http://local.demo.com');
Unfortunately you can't use https, you have to use http.
You will need to docker-compose down and docker-compose up -d for these newly defined configs to take effect in your local wp-config.php.
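In context, the end of the WORDPRESS_CONFIG_EXTRA block in docker-compose.yml then looks something like this (using the example host from above):

      WORDPRESS_CONFIG_EXTRA: |
        /* ...development and mailhog defines from above... */
        /* other wp-config defines here */
        define('WP_HOME','http://local.demo.com');
        define('WP_SITEURL','http://local.demo.com');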
Handling your Staging and Production wp-config
Your server environments should be installed independently, with their own unique salts etc; you should not take anything from the docker-compose.yml config. Really, you only want to add these extra defines initially..
Staging wp-config.php
define('ENVIRONMENT', 'staging');
define('WP_DEBUG', true);
define('WP_HOME','https://staging.demo.com');
define('WP_SITEURL','https://staging.demo.com');
Production wp-config.php
define('ENVIRONMENT', 'production');
define('WP_DEBUG', false);
define('WP_HOME','https://www.demo.com');
define('WP_SITEURL','https://www.demo.com');
Updating dockers wordpress image
You can't update your local wordpress site via the wordpress dashboard. In my docker-compose.yml example I am defining image: wordpress:latest which gets the latest wordpress image installed on your docker app.
This means if you docker-compose up -d on a project that was using an older version of wordpress at the time, this project would update to the latest wordpress image installed on docker.
I've not personally defined wordpress image versions in docker-compose.yml but you can if you do not want certain projects to use the latest image when you next docker-compose up -d a project.
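If you did want to pin a project, it is just a case of swapping the tag on the image line in docker-compose.yml, for example (the tag shown is only an illustration, use whichever release your project needs):

  wordpress:
    # pin this project to a specific wordpress release instead of latest
    image: wordpress:5.5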
You have to update the wordpress docker image manually using this command..
docker pull wordpress
And you will have to docker-compose down and docker-compose up -d for your project to use the latest update, which sometimes requires a wordpress database update when accessing the site for the first time through your browser.
Permission issues when uploading / deleting plugins via dashboard
To change the permissions on the wp-content folder so uploads and plugins can be written via the dashboard, make sure you run the commands below while the containers are running...
Show containers command...docker ps
Make a note of the container ID for the running wordpress image
Execute in container command (replace [container_id] with your ID with no brackets)...docker exec -it [container_id] /bin/bash
Change permissions on wp-content folder...chown -R www-data:www-data wp-content
This should resolve any problems uploading and deleting files via the dashboard for uploads and plugins locally with docker compose.
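If you prefer not to open a shell in the container, the same fix can be run as a single command (same container ID as before, shown here as a placeholder):

docker exec [container_id] chown -R www-data:www-data /var/www/html/wp-content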

Related

How to Add a shared folder location to my application (Docker)

I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
  airflow:
    build: ./airflow
    image: digitalImage/airflow
    container_name: di-airflow
    environment:
      AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
      POSTGRES_USER: 'airflowStuff'
      POSTGRES_PASSWORD: 'postgresCreds'
      POSTGRES_HOST: 'host-postgres'
      POSTGRES_PORT: '5432'
      POSTGRES_DB: 'postgres-db'
      DATE_VALUE: '1 DEC 2020 00:00:00'
    volumes:
      - ./airflow/released_dags:/usr/local/airflow/dags
      - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
    networks:
      - di-airflowStuff
    ports:
      - 8081:8080
    depends_on:
      - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
p.s. I can access this shared folder location from my file explorer and python without a problem.
You don't need docker-compose to mount an external volume to your container, just configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
both directories must exist
Microsoft recommends mapping shares to network drives (if you're running docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
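For example, if the share were mapped to a drive letter on the Windows host (Z: here is only an assumption), the bind mount in the compose file could point at the drive instead of the UNC path:

    volumes:
      - ./airflow/released_dags:/usr/local/airflow/dags
      # share mapped to a network drive on the Windows host
      - Z:\temp:/usr/local/airflow/dags/inboundFiles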

Build and deploy my own docker image to the production server?

I probably have missed something. So, I have installed docker on the production server and I have a running application locally. It starts and runs with docker-compose.
So I feel like I am almost ready for deployment.
Now I'd like to build a docker image and deploy it to the production server.
But when I try to build the image from the docker compose files like this
docker-compose -f docker-compose.base.yml -f docker-compose.production.yml build myapp
I keep getting ERROR: No such service: app
I haven't found any documentation on the docker site where the image build procedure from multiple docker-compose files is described. Maybe it's there, but then I have missed it.
My other problem, i.e. question, is: where would I place the image tar(.gz) files on the target server?
Can I specify the target location for the images in either the /etc/docker/daemon.json or some other configuration file?
I simply don't know where to dump the image file on the production server.
Again, maybe there is some documentation somewhere on the docker web site. But if so, then I've missed that too.
Addendum source files
I was asked to add my docker-compose files to provide a running example. So, here's the entire code for development and production environments.
For the sake of simplicity, I've kept this example quite basic:
Dockerfile.base
# joint settings for development, production and staging
FROM mariadb:10.3
Dockerfile.production
# syntax = edrevo/dockerfile-plus
INCLUDE+ Dockerfile.base
# add more production specific settings when needed
Dockerfile.development
# syntax = edrevo/dockerfile-plus
INCLUDE+ Dockerfile.base
# add more development specific settings when needed
docker-compose.base.yml
version: "3.8"
services:
mariadb:
container_name: my_mariadb
build:
context: .
volumes:
- ./../../databases/mysql-my:/var/lib/mysql
- ~/.ssh:/root/.ssh
logging:
driver: local
ports:
- "3306:3306"
restart: on-failure
networks:
- backend
networks:
backend:
driver: $DOCKER_NETWORKS_DRIVER
docker-compose.production.yml
services:
  mariadb:
    build:
      dockerfile: Dockerfile.production
    environment:
      PRODUCTION: "true"
      DEBUG: "true"
      MYSQL_ROOT_PASSWORD: $DBPASSWORD_PRODUCTION
      MYSQL_DATABASE: $DBNAME_PRODUCTION
      MYSQL_USER: $DBUSER_PRODUCTION
      MYSQL_PASSWORD: $DBPASSWORD_PRODUCTION
docker-compose.development.yml
services:
  mariadb:
    build:
      dockerfile: Dockerfile.development
    environment:
      DEVELOPMENT: "true"
      INFO: "true"
      MYSQL_ROOT_PASSWORD: $DBPASSWORD_DEVELOPMENT
      MYSQL_DATABASE: $DBNAME_DEVELOPMENT
      MYSQL_USER: $DBUSER_DEVELOPMENT
      MYSQL_PASSWORD: $DBPASSWORD_DEVELOPMENT
This starts up properly on my development machine when running:
docker-compose -f docker-compose.base.yml \
-f docker-compose.development.yml \
up
But how do I get from here to there, i.e. how can I turn this into a self-contained image which I can upload to my docker production host?
And where do I put it on the server so the docker production server can find it?
I think I should be able to upload and run the built image without running compose again on the production host, or shouldn't I?
For the time being, the question remains:
Why does this build command
docker-compose -f docker-compose.base.yml \
-f docker-compose.production.yml \
build app
return ERROR: No such service: app?
I see two separate issues in your question:
How to deploy an image to the production server
To "deploy" / start a new image on a production server it needs to be downloaded with docker pull, from a registry where it was uploaded with docker push - i.e. you need a registry that is reachable from the production server and your build environment.
If you do not want to use a registry (public or private) you can use docker save to export an image as a tarball, which you can then manually upload to the production server and make available there with docker load.
Check docker image ls to see if a specific image is available on your host.
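A rough sketch of both routes, with placeholder image and host names:

# option 1: through a registry that both machines can reach
docker build -t registry.example.com/myapp-mariadb:1.0 .
docker push registry.example.com/myapp-mariadb:1.0
# then on the production server:
docker pull registry.example.com/myapp-mariadb:1.0

# option 2: no registry, move the image as a tarball
docker save -o myapp-mariadb.tar registry.example.com/myapp-mariadb:1.0
scp myapp-mariadb.tar user@production-server:/tmp/
# then on the production server:
docker load -i /tmp/myapp-mariadb.tar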
But in your case I think it would be easiest to just upload your docker-compose.yml and related files to the production server and build the images directly there.
Why does this build command return ERROR: No such service: app?
docker-compose -f docker-compose.base.yml \
-f docker-compose.production.yml \
build app
Because there is no service app! At least in the files you provided in your question there is only a mariadb service defined and no app service.
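With the files shown, the service to build is mariadb, so the working command would presumably be:

docker-compose -f docker-compose.base.yml \
               -f docker-compose.production.yml \
               build mariadb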
But starting / building the services should be the same on your local dev host and the production server.

Using volumes in Docker to access container filesystem

I'm new to using Docker and am attempting to get a volume working, but something isn't quite right.
I've installed Wordpress on Docker as explained in Docker's tutorial: https://docs.docker.com/compose/wordpress/
Using Docker Desktop I've managed to get the image up and running and followed through Wordpress's installation wizard. So far so good.
Now, I want to be able to access the Wordpress files on my local machine so I can edit them in vscode.
I've found several articles, e.g. Access running docker container filesystem, which all allude to the same thing: If you set the volumes: property in docker-compose.yml to map your local project directory to the webroot in the container it should "just work". But it doesn't.
My local directory on my Mac is on my Desktop and is called wordpress-docker. Inside this directory lives my docker-compose.yml.
I've edited docker-compose.yml so it has this:
wordpress:
  volumes:
    - ~/:/var/www/html
I know /var/www/html is correct because if I ssh into the container through Docker Desktop and then run pwd it gives me that directory. I can also see that's exactly where the Wordpress files are after running ls -l:
All I'm trying to do is get the files listed above accessible in my wordpress-docker directory on my Desktop. Given that docker-compose.yml is in the same directory I've tried both ~/ and ./ as the part preceding /var/www/html.
No error messages are being shown when restarting the container. But the only file that I have inside wordpress-docker on my Desktop is the docker-compose.yml. It is not showing any of the files on the screenshot above, which is what I want and expect.
Why is it not showing the files that are in the container? All tutorials I've found seem to suggest it's this simple - you map a local directory to the one in the container and then it just works. In my case it does not.
Docker engine 20.10.0, Docker Desktop 3.0.0 on macOS Catalina.
I find that the rule of thumb with wordpress dev is that wp core files should never be changed, or even accessible, when using docker local environments.
Every time you docker-compose up -d a docker wordpress environment, docker will rebuild the currently installed docker wordpress version.
So rather than mapping the entire wordpress volume, why not use persistent mapping only for the files and data that are dynamic to your current local project?
Essentially you will have a folder for each local wordpress project on your mac which you can docker-compose up and down at any time. And you commit each folder as a repo to git.
Make sure you .gitignore uploads, plugins, database, node_modules etc...
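A minimal .gitignore for such a project folder could simply be (matching the volume mappings in the compose file below):

/db
/plugins
/uploads
/node_modules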
For example, create a project folder anywhere on your mac and add a docker-compose.yml into the folder with this config code below...
version: '3.7'

networks:
  wordpress:
    ipam:
      config:
        - subnet: 172.25.0.0/16

services:

  # here is our mysql container
  db:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql:delegated
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpress

  # here is our wordpress container
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      # our persistent local data re-routing
      - .:/var/www/html/wp-content/themes/testing:delegated
      - ./plugins:/var/www/html/wp-content/plugins
      - ./uploads:/var/www/html/wp-content/uploads
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    ports:
      - "80:80"
    restart: always
    networks:
      - wordpress
    environment:
      # our local dev environment
      WORDPRESS_DEBUG: 1
      DEVELOPMENT: 1
      # docker wp config settings
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_AUTH_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_AUTH_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_CONFIG_EXTRA: |
        /* development parameters */
        define('WP_CACHE', false);
        define('ENVIRONMENT', 'local');
        define('WP_DEBUG', true);
        /* configure mail server */
        define('WORDPRESS_SMTP_AUTH', false);
        define('WORDPRESS_SMTP_SECURE', '');
        define('WORDPRESS_SMTP_HOST', 'mailhog');
        define('WORDPRESS_SMTP_PORT', '1025');
        define('WORDPRESS_SMTP_USERNAME', null);
        define('WORDPRESS_SMTP_PASSWORD', null);
        define('WORDPRESS_SMTP_FROM', 'whoever@example.com');
        define('WORDPRESS_SMTP_FROM_NAME', 'Whoever');
        /* add anymore custom configs here... */

  # here is our mail hog container
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "8025:8025"
    networks:
      - wordpress
Now in this folder, run the command docker-compose up -d which will build a brand new installation which looks like this in your ide...
Before accessing and setting up this local wordpress install via http://localhost you need to replace the uploads.ini folder with a true uploads.ini config file. Replace the folder with this .ini config code...
file_uploads = On
memory_limit = 2000M
upload_max_filesize = 2000M
post_max_size = 2000M
max_execution_time = 600
So you will now have...
Before you access the site again it's probably a good idea to put some basic theme files in so your theme will run when you visit the site and admin...
For mailhog to work and receive outgoing mail from your local wordpress environment you will need to add this to your functions.php...
// log failed mail attempts so failures are visible locally
// (minimal callback assumed here; the original snippet only registered it)
function action_wp_mail_failed($wp_error) {
    error_log(print_r($wp_error, true));
}

// add the action
add_action('wp_mail_failed', 'action_wp_mail_failed', 10, 1);

// configure PHPMailer to send through SMTP using the WORDPRESS_SMTP_* constants
// defined in WORDPRESS_CONFIG_EXTRA above
add_action('phpmailer_init', function ($phpmailer) {
    $phpmailer->isSMTP();
    // host details
    $phpmailer->SMTPAuth = WORDPRESS_SMTP_AUTH;
    $phpmailer->SMTPSecure = WORDPRESS_SMTP_SECURE;
    $phpmailer->SMTPAutoTLS = false;
    $phpmailer->Host = WORDPRESS_SMTP_HOST;
    $phpmailer->Port = WORDPRESS_SMTP_PORT;
    // from details
    $phpmailer->From = WORDPRESS_SMTP_FROM;
    $phpmailer->FromName = WORDPRESS_SMTP_FROM_NAME;
    // login details
    $phpmailer->Username = WORDPRESS_SMTP_USERNAME;
    $phpmailer->Password = WORDPRESS_SMTP_PASSWORD;
});
Now these urls below will work...
http://localhost - your site
http://localhost/admin - your site wp admin (once the site is set up)
http://localhost:8025 - mailhog for viewing outgoing mail from site
So every time you docker-compose down and docker-compose up -d on this project, your environment will boot up exactly where you left off.
You can add database data, uploads and plugins and all this data will be persistent without the need to edit core files.
You can add any extra wp-config.php settings to this docker-compose.yml file. But you need to docker-compose down and up -d for these configs to take effect.
This theme folder is called testing in the docker container files, but if you modify the style.css file with the code below, this activated theme will display in the admin with whatever you set...
/*
Theme Name: Project Name
Author: joshmoto
Description: Project Description
Version: 1.0
License: Private
*/
As per your comment, you can try persistently mapping the whole wp-content folder from your volumes by using this in your existing docker-compose.yml...
  # here is our mysql container
  db:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql:delegated

  # here is our wordpress container
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - ./wp-content:/var/www/html/wp-content
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
You will probably still need to use the persistent db mapping and uploads.ini if you want your local environment to restart with the same theme settings and data from the previous session.
Change your docker compose volumes section to -
wordpress:
  volumes:
    - ~/Desktop/wordpress-docker:/var/www/html
Whatever folder you've mapped from your local machine into the container is where the files will go. So wordpress-docker will not have a folder called /var; it will directly contain index.php and the other files present inside /var/www/html of your container.

How to deploy a docker app to production without using Docker compose?

I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker compose on production with bind mounts. Then pull the latest changes from github and it appears live in production without the need to rebuild. But others say that you need to COPY . . for production and rebuild.
But how does this work? Because in docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server to run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'

services:

  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network

  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network

  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
    depends_on:
      - mysql-server

# Networks
networks:
  app-network:
    driver: bridge
How do you build the docker images? I assume you don't plan on using a registry, therefore you'll have to:
give an image name to all services
build the docker images somewhere (a CI/CD server, locally, it does not really matter)
save the images in a file
zip the file
export the zipped file remotely
on the server, unzip and load
I'd create a script for this. Something like this:
#!/bin/bash
set -e

# build all images defined in docker-compose.yml
docker-compose build

# save every image referenced in the compose file into one tar archive
# (unquoted substitution so each image name becomes its own argument)
docker save -o images.tar $(grep "image: .*" docker-compose.yml | awk '{ print $2 }')
gzip images.tar

# copy the archive to the server and load it there
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh

On myserver, the load_images.sh would look like this:

#!/bin/bash
if [ ! -f images.tar.gz ] ; then
  echo "no file"
  exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to create the docker commands to emulate the docker-compose configuration (I won't go there since it's nothing difficult, but it's boring and I'm not feeling like writing that). How do you simulate the depends_on? Well, you'll have to start each container individually, so you'll either prepare another script or do it manually.
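As a rough idea only, using the db service from the compose file above and a placeholder name for the php image, the manual equivalent would look something like this:

# create the shared network once
docker network create app-network

# start the database first...
docker run -d --name mysql-server --network app-network \
  -e MYSQL_ROOT_PASSWORD=BcGH2Gj41J5VF1 -e MYSQL_DATABASE=todo \
  mysql:5.7.22

# ...then the container that depends on it
docker run -d --name php-server --network app-network my-php-image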
About using docker-compose on production:
There's not really a big issue with using docker-compose in production as long as you do it properly. E.g. some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml. E.g.
the docker-compose.dev.yml would look like this:
version: '3'

services:

  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules

  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf

  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d

  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, docker-compose.prd.yml will have only the strictly required port subsections, will define a production environment file where the required passwords are stored (that file lives only on the production server, not in git), etc.
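A minimal sketch of what that docker-compose.prd.yml could look like (the env file name is just an assumption):

version: '3'
services:
  nginx-server:
    ports:
      - "80:80"
  mysql-server:
    env_file:
      - production.env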
Actually, you have so many different approaches you can take.
Generally, docker-compose is used as a container-orchestration tool on development. There are several other production-grade container orchestration tools available on most of the popular hosting services like GCP and AWS. Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose, it is advisable not to use it directly in production. Running a mysql container can lead to issues with data loss, as containers are meant to be temporary. It is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better set up with whatever reverse-proxy/load-balancer service your hosting provider offers.
When it comes to building the images, you can utilise your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the image and deploy it with the container-orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at more involved tools, that have advantages like supporting multiple-host clusters and zero-downtime deployments, but they are much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
  NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
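As an illustration (the base image is an assumption, not taken from the question), a php-server Dockerfile that bakes the code in rather than bind-mounting it might look like:

# assumed base image; the real ./docker/php-server/Dockerfile is not shown in the question
FROM php:7.4-fpm-alpine
WORKDIR /var/www
# copy the application code into the image instead of bind-mounting ./server
COPY server/ /var/www/
# copy the php settings that were previously bind-mounted
COPY docker/php-server/local.ini /usr/local/etc/php/conf.d/local.ini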
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.

How to maintain linux/mac permissions on bind mount with docker for windows?

I'm currently developing a Drupal application on my Mac. Drupal needs some permissions configuration in its subfolders (in linux) and all are already set up. I've got two containers: database (db) and apache-php (web). All my code is in git and I can get it with git clone or pull...
I'm just introducing the issue; don't worry about drupal or php, since this problem could happen with Python or any other scripting language.
I use a bind mount (--volume) directly onto the code from git, this way (docker-compose.yml):
version: "2"
services:
web:
image: yoshz/apache-php:5.5
links:
- db:database
volumes:
- ./docker/phpconf/apache2:/etc/php5/apache2
- ./www:/var/www
- ./docker/sites-avaliable/:/etc/apache2/sites-available/
ports:
- "8600:80"
db:
extends:
file: docker-common-configs.yaml
service: db
environment:
MYSQL_ROOT_PASSWORD: 1234
MYSQL_DATABASE: drupal
MYSQL_USER: user
MYSQL_PASSWORD: password
A colleague cloned my repository on Windows, ran docker-compose up -d, and the application failed.
The problem is that, as the volume resides on the Windows host, the permissions in ./www are not set.
I'd like the code to be accessible to the Visual Studio Code application on the Windows host, so it can be changed quickly without a deployment.
The solution of forcing permissions this way:
- ./www:/var/www:rw
won't work, because each subdirectory has its own permissions.
Does anyone have an imaginative solution to face this problem?
