Using volumes in Docker to access the container filesystem
I'm new to using Docker and am attempting to get a volume working, but something isn't quite right.
I've installed Wordpress on Docker as explained in Docker's tutorial: https://docs.docker.com/compose/wordpress/
Using Docker Desktop I've managed to get the image up and running and followed through Wordpress's installation wizard. So far so good.
Now, I want to be able to access the Wordpress files on my local machine so I can edit them in vscode.
I've found several articles, e.g. Access running docker container filesystem, which all allude to the same thing: If you set the volumes: property in docker-compose.yml to map your local project directory to the webroot in the container it should "just work". But it doesn't.
My local directory on my Mac is on my Desktop and is called wordpress-docker. Inside this directory lives my docker-compose.yml.
I've edited docker-compose.yml so it has this:
wordpress:
  volumes:
    - ~/:/var/www/html
I know /var/www/html is correct because if I open a shell into the container through Docker Desktop and run pwd it gives me that directory. I can also see that's exactly where the Wordpress files are after running ls -l:
All I'm trying to do is get the files listed above accessible in my wordpress-docker directory on my Desktop. Given that docker-compose.yml is in the same directory I've tried both ~/ and ./ as the part preceding /var/www/html.
No error messages are shown when restarting the container, but the only file inside wordpress-docker on my Desktop is the docker-compose.yml. It is not showing any of the files from the listing above, which is what I want and expect to see.
Why is it not showing the files that are in the container? All tutorials I've found seem to suggest it's this simple - you map a local directory to the one in the container and then it just works. In my case it does not.
Docker engine 20.10.0, Docker Desktop 3.0.0 on macOS Catalina.
A rule of thumb I find with wordpress dev is that the wp core files should never be changed or accessible when using docker local environments. Every time you docker-compose up -d a docker wordpress environment, docker will re-build the currently installed docker wordpress version.
So rather than mapping the entire wordpress volume, why not use persistent mapping only for the files and data that are dynamic in your current local project?
Essentially you will have a folder for each local wordpress project on your mac, which you can docker up and down at any time, and you can commit each folder to git as a repo.
Make sure your .gitignore excludes uploads, plugins, database, node_modules etc...
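A minimal .gitignore sketch for such a project folder (the folder names are assumptions that match the volume mappings in the compose file below):

```gitignore
# persistent local data and build artefacts - keep out of the repo
/db
/plugins
/uploads
/node_modules
/vendor
```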
For example, create a project folder anywhere on your mac and add a docker-compose.yml into the folder with this config code below...
version: '3.7'

networks:
  wordpress:
    ipam:
      config:
        - subnet: 172.25.0.0/16

services:

  # here is our mysql container
  db:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql:delegated
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpress

  # here is our wordpress container
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      # our persistent local data re-routing
      - .:/var/www/html/wp-content/themes/testing:delegated
      - ./plugins:/var/www/html/wp-content/plugins
      - ./uploads:/var/www/html/wp-content/uploads
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    ports:
      - "80:80"
    restart: always
    networks:
      - wordpress
    environment:
      # our local dev environment
      WORDPRESS_DEBUG: 1
      DEVELOPMENT: 1
      # docker wp config settings
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_AUTH_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_AUTH_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_KEY: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_AUTH_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_SECURE_AUTH_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_LOGGED_IN_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_NONCE_SALT: 5f6ede1b94d25a2294e29eeba929a8c80a5ac0fb
      WORDPRESS_CONFIG_EXTRA: |
        /* development parameters */
        define('WP_CACHE', false);
        define('ENVIRONMENT', 'local');
        define('WP_DEBUG', true);
        /* configure mail server */
        define('WORDPRESS_SMTP_AUTH', false);
        define('WORDPRESS_SMTP_SECURE', '');
        define('WORDPRESS_SMTP_HOST', 'mailhog');
        define('WORDPRESS_SMTP_PORT', '1025');
        define('WORDPRESS_SMTP_USERNAME', null);
        define('WORDPRESS_SMTP_PASSWORD', null);
        define('WORDPRESS_SMTP_FROM', 'whoever@example.com');
        define('WORDPRESS_SMTP_FROM_NAME', 'Whoever');
        /* add any more custom configs here... */

  # here is our mailhog container
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "8025:8025"
    networks:
      - wordpress
Now in this folder, run the command docker-compose up -d which will build a brand new installation which looks like this in your ide...
Before accessing and setting up this local wordpress install via http://localhost you need to replace the uploads.ini folder with a real uploads.ini config file. Replace the folder with this .ini config code...
file_uploads = On
memory_limit = 2000M
upload_max_filesize = 2000M
post_max_size = 2000M
max_execution_time = 600
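One way to do that swap from the terminal (a sketch; it assumes you are inside the project folder and that docker created uploads.ini as an empty directory on first boot):

```shell
# remove the directory docker created, then write a real uploads.ini file
rm -rf uploads.ini
cat > uploads.ini <<'EOF'
file_uploads = On
memory_limit = 2000M
upload_max_filesize = 2000M
post_max_size = 2000M
max_execution_time = 600
EOF
```

You will likely need to docker-compose down and docker-compose up -d again so the file, rather than the old directory, gets mounted.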
So you will now have...
Before you access the site again it's probably a good idea to put some basic theme files in so your theme will run when you visit the site and admin...
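For example, a bare-bones theme can be sketched with just two files in the project folder (the file contents here are placeholders, not from the original answer):

```shell
# minimal files WordPress needs to recognise a theme in this folder
cat > style.css <<'EOF'
/*
Theme Name: Testing
*/
EOF
cat > index.php <<'EOF'
<?php get_header(); ?>
<p>Theme scaffold - replace with real templates.</p>
<?php get_footer(); ?>
EOF
```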
For mailhog to work and receive outgoing mail from your local wordpress environment you will need to add this to your functions.php...
// log failed outgoing mail to the debug log
add_action('wp_mail_failed', function (WP_Error $error) {
    error_log($error->get_error_message());
}, 10, 1);

// configure PHPMailer to send through SMTP
add_action('phpmailer_init', function ($phpmailer) {
    $phpmailer->isSMTP();
    // host details
    $phpmailer->SMTPAuth = WORDPRESS_SMTP_AUTH;
    $phpmailer->SMTPSecure = WORDPRESS_SMTP_SECURE;
    $phpmailer->SMTPAutoTLS = false;
    $phpmailer->Host = WORDPRESS_SMTP_HOST;
    $phpmailer->Port = WORDPRESS_SMTP_PORT;
    // from details
    $phpmailer->From = WORDPRESS_SMTP_FROM;
    $phpmailer->FromName = WORDPRESS_SMTP_FROM_NAME;
    // login details
    $phpmailer->Username = WORDPRESS_SMTP_USERNAME;
    $phpmailer->Password = WORDPRESS_SMTP_PASSWORD;
});
Now these urls below will work...
http://localhost - your site
http://localhost/admin - your site wp admin (once the site is set up)
http://localhost:8025 - mailhog for viewing outgoing mail from the site
So every time you docker-compose down and docker-compose up -d on this project, your environment will boot up exactly where you left off.
You can add database data, uploads and plugins and all this data will be persistent without the need to edit core files.
You can add any extra wp-config.php settings to this docker-compose.yml file. But you need to docker-compose down and up -d for these configs to take effect.
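For instance, appending to the WORDPRESS_CONFIG_EXTRA block in docker-compose.yml might look like this (the extra defines are hypothetical examples, using standard wp-config constants):

```yaml
      WORDPRESS_CONFIG_EXTRA: |
        /* development parameters */
        define('WP_CACHE', false);
        /* hypothetical extras */
        define('WP_HOME', 'http://localhost');
        define('WP_SITEURL', 'http://localhost');
```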
This theme folder is called testing in the docker container files, but if you modify the style.css file with the code below, the activated theme will display in the admin with whatever you set...
/*
Theme Name: Project Name
Author: joshmoto
Description: Project Description
Version: 1.0
License: Private
*/
As per your comment, you can try persistently mapping the whole wp-content folder from your volumes by using this in your existing docker-compose.yml...
  # here is our mysql container
  db:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql:delegated

  # here is our wordpress container
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - ./wp-content:/var/www/html/wp-content
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
You will probably still need the persistent db mapping and uploads.ini if you want your local environment to restart with the same theme settings and data from the previous session.
Change your docker compose volumes section to:

wordpress:
  volumes:
    - ~/Desktop/wordpress-docker:/var/www/html
Whatever folder you've mapped from your local machine to the container is where the files will go. So wordpress-docker will not contain a folder called /var; it will directly contain index.php and the other files present inside /var/www/html of your container.
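If you only need the files to persist, rather than to live in your project folder, a named volume is an alternative sketch (this fragment is an assumption, not the asker's original file): Docker pre-populates a new named volume with the image's existing files, though on macOS the data then lives inside Docker's VM rather than on your Desktop, so it is less convenient for editing in vscode.

```yaml
services:
  wordpress:
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html

volumes:
  wordpress_data:
```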