I'm setting up a WordPress stack and running into cross-container file permission problems.
When using docker-compose up, permissions don't seem to be a problem, but when deploying to a local Docker Swarm with docker stack deploy, nginx gives me a 403 due to file permission errors. When inspecting both the nginx:alpine container and the wordpress:php7.1-fpm-alpine container, I do indeed see that the containers report different ownership: on the nginx side the files inside /var/www/html are marked as owned by user and group ID 82, while on the php7.1 side they are owned by www-data.
How can I make sure permissions are consistent across containers? The files are bind mounted from the host.
```
version: '3'

services:
  nginx:
    depends_on:
      - wordpress
    image: private/nginx
    build:
      context: ./stack/nginx
    volumes:
      - "wordpress:/var/www/html"
    ports:
      - 80:80

  wordpress:
    depends_on:
      - mysql
    image: private/wordpress
    build:
      context: ./stack/wordpress
    volumes:
      - "wordpress:/var/www/html"
    environment:
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: password

  mysql:
    image: mariadb
    volumes:
      - "mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: password

volumes:
  wordpress:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/wordpress"
  mysql:
```
For anyone else who comes across this question and has no solution, this is how I approached the situation.
I added the following to my nginx Dockerfile; it creates a new www-data group and user with the same IDs (UID 82, GID 82) that the www-data user and group have in the php-fpm image:
```
RUN set -x ; \
    addgroup -g 82 -S www-data ; \
    adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
```
In my nginx.conf I then set the nginx worker user to the newly created www-data user:
```
user www-data;
```
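To double-check the fix, you can compare the numeric owner each container reports for the bind-mounted files. The container names below are placeholders; substitute whatever docker ps shows for your stack:
```
# Both containers should now report 82:82 for the shared files
docker exec <nginx-container> stat -c '%u:%g %n' /var/www/html/index.php
docker exec <wordpress-container> stat -c '%u:%g %n' /var/www/html/index.php
```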
Related
I am trying to develop an automatic Drupal installer that creates an already-configured Drupal Docker container, ready to use with a single command.
In order to achieve this, I have this docker-compose.yml:
```
version: "3"

services:
  # Database
  drupal_db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    container_name: drupal_db
    ports:
      - "33061:3306"
    restart: unless-stopped
    volumes:
      - drupal_db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: drupal
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
    networks:
      - drupal

  # Drupal
  drupal:
    image: drupal:9-php7.4-apache
    container_name: drupal
    ports:
      - "8080:80"
    restart: unless-stopped
    volumes:
      - ./drupal/d_modules:/var/www/html/modules
      - ./drupal/d_profiles:/var/www/html/profiles
      - ./drupal/d_sites:/var/www/html/sites
      - ./drupal/d_sites/default/files/translations:/var/www/html/sites/default/files/translations
      - ./drupal/d_themes:/var/www/html/themes
      - ./scripts/drush:/opt/drupal/scripts
    depends_on:
      - drupal_db
    env_file:
      - drupal-install.env
    links:
      - drupal_db:mysql
    networks:
      - drupal

volumes:
  drupal_db_data: {}
  drupal_data: {}

networks:
  drupal:
```
Together with this Makefile:
```
clear:
	docker-compose down -v

autoinstall:
	docker-compose up -d
	docker exec drupal composer require drush/drush
	docker exec drupal bash -c '/opt/drupal/scripts/autoinstall.sh'
```
autoinstall.sh is a script mounted through one of the Drupal container's volumes; it runs the following:
```
#!/bin/bash
drush site-install ${DRUPAL_PROFILE} \
  --locale=${LOCALE} \
  --db-url=${DB_URL} \
  --site-name=${SITE_NAME} \
  --site-mail=${SITE_MAIL} \
  --account-name=${ACCOUNT_NAME} \
  --account-mail=${ACCOUNT_MAIL} \
  --account-pass=${ACCOUNT_PASS} \
  --yes
```
This uses environment variables, which are passed to the container through the env_file entry drupal-install.env in docker-compose.yml:
```
HOST=drupal_db:33061
DBASE=drupal
USER=drupal
PASS=drupal
DATABASE_HOST=drupal_db:33061
DRUPAL_PROFILE=standard
LOCALE=en
DB_URL=mysql://drupal:drupal@drupal_db:3306/drupal
SITE_NAME=NewSite
SITE_MAIL=newsite@test.com
ACCOUNT_NAME=admin
ACCOUNT_MAIL=admin@test.com
ACCOUNT_PASS=admin
```
However, when running make autoinstall, the first two commands run with no issues, but the last one throws this error:
```
Database settings:<br /><br />Resolve all issues below to continue the installation. For help
configuring your database server, see the <a href="https://www.drupal.org/docs/8/install">installation
handbook</a>, or contact your hosting provider.<div class="item-list"><ul><li>Failed to connect to
your database server. The server reports the following message: <em class="placeholder">SQLSTATE[HY000]
[2002] Connection refused</em>.<ul><li>Is the database server running?</li><li>Does the database exist
or does the database user have sufficient privileges to create the database?</li><li>Have you entered
the correct database name?</li><li>Have you entered the correct username and password?</li><li>Have
you entered the correct database hostname and port number?</li></ul></li></ul></div>
```
If I manually run:
```
docker-compose up -d
docker exec -it drupal bash
composer require drush/drush
/opt/drupal/scripts/autoinstall.sh
```
Everything works perfectly, but the Makefile target doesn't. Something really weird happens: if I run make autoinstall twice, the first run throws this error but the second run actually works. I can't find a solution, and I would rather not have to run the command twice.
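This looks like a startup-order race: the first time around, drush site-install runs before the MySQL container has finished initializing, which is why a second run succeeds. One common workaround, not part of the original post, is to poll the database port before calling the install script. A minimal sketch, using the service names from the compose file above, that could be run inside the drupal container in place of the last Makefile command:
```
#!/bin/bash
# Hypothetical wait-for-db wrapper, run inside the drupal container.
# Keep retrying until drupal_db accepts TCP connections on port 3306.
until (echo > /dev/tcp/drupal_db/3306) 2>/dev/null; do
  echo "Waiting for drupal_db to accept connections..."
  sleep 2
done
exec /opt/drupal/scripts/autoinstall.sh
```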
I am new to Docker. Here is my configuration.
Folder structure:
Test:
- docker-compose.yml
- Dockerfile
- www
  - index.html
docker-compose.yml
```
version: "3"

services:
  www:
    build: .
    ports:
      - "8001:80"
    volumes:
      - ./www:/var/www/html/
    links:
      - db
    networks:
      - default

  db:
    image: mysql:8.0.16
    command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci', '--default-authentication-plugin=mysql_native_password']
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: myDb
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
      - persistent:/var/lib/mysql
    networks:
      - default

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    links:
      - db:db
    ports:
      - 8000:80
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test

volumes:
  persistent:
```
Dockerfile
```
FROM php:7.2.6-apache
RUN docker-php-ext-install mysqli pdo pdo_mysql gd curl
RUN a2enmod rewrite
RUN chmod -R 775 /var/www/html
```
The phpMyAdmin dashboard works correctly, but when I open the web URL it shows a 403 Forbidden error. When I check the log, it shows an error like this:
```
[Mon Sep 02 12:00:44.290707 2019] [autoindex:error] [pid 18] [client 192.168.99.1:52312] AH01276: Cannot serve directory /var/www/html/: No matching DirectoryIndex (index.php,index.html) found, and server-generated directory index forbidden by Options directive
192.168.99.1 - - [02/Sep/2019:12:00:44 +0000] "GET / HTTP/1.1" 403 508 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36"
```
Also, my /var/www/html directory inside the container was empty. How can I fix it?
Update
I created an index.php file from a bash shell inside the container and it worked, but I can't locate that index.php file anywhere on my host file system.
Please help
If you need any additional info feel free to ask :).
Thanks
I finally figured out the issue: I am using Docker Toolbox, which only mounts the C:\Users directory by default, but my project folder is on the D: drive, so I had to mount my D:\projects directory as a VM shared folder. I followed the steps below:
1. In VirtualBox, under 'Settings' -> 'Shared Folders', I added 'projects' and pointed it to the location I want to mount. In my case this is 'D:\projects' (Auto-mount and Make Permanent enabled).
2. Start the Docker Quickstart Terminal.
3. Type 'docker-machine ssh default' (the VirtualBox VM that Docker uses is called 'default').
4. Go to the root of the VM filesystem: 'cd /'.
5. Switch to the root user: 'sudo su'.
6. Create the directory you want to use as a mount point, which in my case has the same name as the shared folder in VirtualBox: 'mkdir projects'.
7. Mount the VirtualBox shared folder: 'mount -t vboxsf -o uid=1000,gid=50 projects /projects' (the first 'projects' is the VirtualBox shared folder name, the second '/projects' is the directory I just created and want to use as the mount point). Steps 3-7 are summarized after this list.
8. Now I can add a volume to my docker-compose file like this: '- /projects/test/www/build/:/var/www/html/' (the left side is the /projects mount in my VM, the right side is the directory to mount in my Docker container).
9. Run 'docker-compose up' to start using the mount (to be clear: run this command via the Docker Quickstart Terminal outside of your SSH session, on your local file system where your docker-compose.yml file is located).
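In shell form, the VM-side part (steps 3-7) boils down to roughly this, assuming the shared folder is named projects as above:
```
docker-machine ssh default    # SSH into the 'default' VirtualBox VM
cd /
sudo su
mkdir projects                # mount point, same name as the VirtualBox shared folder
mount -t vboxsf -o uid=1000,gid=50 projects /projects
```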
And I changed the docker-compose.yml like this:
```
version: "3"

services:
  www:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8001:80"
    volumes:
      - /projects/test/www:/var/www/html/
    links:
      - db
    networks:
      - default

  db:
    image: mysql:8.0.16
    command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci', '--default-authentication-plugin=mysql_native_password']
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: myDb
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
      - persistent:/var/lib/mysql
    networks:
      - default

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    links:
      - db:db
    ports:
      - 8000:80
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test

volumes:
  persistent:
```
I also updated the Oracle VM.
I found this solution here: https://github.com/moby/moby/issues/22430#issuecomment-215974613
Thanks bro :)
You need to modify your Dockerfile:
```
FROM php:7.2.6-apache
RUN docker-php-ext-install pdo_mysql
RUN a2enmod rewrite
COPY www/ /var/www/html
RUN chmod -R 775 /var/www/html
```
This copies your www directory into /var/www/html inside the container, so your web service has something to serve.
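If you take this approach, the image has to be rebuilt whenever the contents of www change; assuming the service is named www as in the compose file above, that is roughly:
```
docker-compose build www   # rebuild so COPY picks up the current www/ contents
docker-compose up -d www   # recreate the container from the freshly built image
```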
I'm trying to run MySQL in a container with the MySQL parameters I defined in my docker-compose.yml file, but I get an access denied error when I run:
```
mysql -utest -ptest
```
I'm only able to connect with mysql -uroot -proot.
Help me please.
Thanks.
```
mysql:
  container_name: mysql
  image: mysql
  restart: always
  volumes:
    - .docker/data/db:/var/lib/mysql
  environment:
    MYSQL_DATABASE: app
    MYSQL_ROOT_PASSWORD: test
    MYSQL_USER: test
    MYSQL_PASSWORD: test
```
Try launching the client with the database name specified, like this:
```
mysql -utest -ptest app
```
Explanation:
MYSQL_USER, MYSQL_PASSWORD
These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the MYSQL_DATABASE variable. Both variables are required for a user to be created.
(From the MySQL Docker Hub page.)
Permissions are granted only for the database specified by the MYSQL_DATABASE environment variable. When you log in without selecting a database, the test user has no permissions on the default database; it only has them on the app database.
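To verify this, assuming the container is named mysql as in the compose file, you can list the grants the test user actually has:
```
# Expect ALL PRIVILEGES on `app`.* (plus USAGE on *.*), and nothing else
docker exec mysql mysql -utest -ptest app -e "SHOW GRANTS FOR CURRENT_USER();"
```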
My complete docker-compose file.
```
version: '3.2'

services:
  apache:
    container_name: apache
    build: .docker/apache/
    restart: always
    volumes:
      - .:/var/www/html/app/
    ports:
      - 80:80
    depends_on:
      - php
      - mysql
    links:
      - mysql:mysql

  php:
    container_name: php
    build: .docker/php/
    restart: always
    volumes:
      - .:/var/www/html/app/
    working_dir: /var/www/html/app/

  mysql:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - .docker/data/db:/var/lib/mysql
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: test
      MYSQL_USER: test
      MYSQL_PASSWORD: test
```
Maybe you could try attaching an interactive bash process to the already running container by following these steps (summarized in the sketch below):
1. Get the container ID or name by running docker container ls in your terminal (I'm talking about the MySQL container, which should be named mysql according to your docker-compose.yml file).
2. Run docker exec -it mysql bash to attach an interactive bash process to the running container.
3. Now, from inside the container's filesystem, run mysql --user=test --password=test and you should be able to get on with your work.
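In shell form, roughly (the mysql container name comes from the compose file, and the app database name from the other answer is optional here):
```
docker container ls                     # find the MySQL container's ID or name
docker exec -it mysql bash              # attach an interactive bash process to it
mysql --user=test --password=test app   # inside the container; 'app' is the MYSQL_DATABASE value
```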
I am trying to set up an extensible docker production environment for a few projects on a virtual machine.
My setup is as follows:
Front end (this works as expected; thanks to Tevin Jeffery for this):
```
# ~/proxy/docker-compose.yml
version: '2'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/etc/nginx/certs:/etc/nginx/certs:ro'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
    networks:
      - nginx

  letsencrypt-nginx-proxy:
    container_name: letsencrypt-nginx-proxy
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    volumes:
      - '/etc/nginx/certs:/etc/nginx/certs'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - nginx-proxy
    networks:
      - nginx

networks:
  nginx:
    driver: bridge
```
Database: (planning to add postgres to support rails apps as well)
```
# ~/mysql/docker-compose.yml
version: '2'

services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
    # ports:
    #   - 3036:3036
    networks:
      - db

networks:
  db:
    driver: bridge
```
And finally, a WordPress blog to test whether everything works:
```
# ~/wp/docker-compose.yml
version: '2'

services:
  wordpress:
    image: wordpress
    # external_links:
    #   - mysql_db_1:mysql
    ports:
      - 8080:80
    networks:
      - proxy_nginx
      - mysql_db
    environment:
      # for nginx and dockergen
      VIRTUAL_HOST: gizmotronic.ca
      # wordpress setup
      WORDPRESS_DB_HOST: mysql_db_1
      # WORDPRESS_DB_HOST: mysql_db_1:3036
      # WORDPRESS_DB_HOST: mysql
      # WORDPRESS_DB_HOST: mysql:3036
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wordpress

networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true
```
My problem is that the WordPress container cannot connect to the database. I get the following error when I try to start the WordPress container with docker-compose up:
```
wordpress_1  | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 22
wordpress_1  |
wordpress_1  | MySQL Connection Error: (2002) Connection refused
wp_wordpress_1 exited with code 1
```
UPDATE:
I was finally able to get this working. My main problem was relying on the container defaults for the environment variables. This created an automatic data volume without a database or user for WordPress. After I added explicit environment variables to the mysql and WordPress compose files, I removed the data volume and restarted both containers. This forced the mysql container to recreate the database and user.
To ~/mysql/docker-compose.yml:
```
environment:
  MYSQL_ROOT_PASSWORD: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: wordpress
  MYSQL_DATABASE: wordpress
```
and to ~/wp/docker-compose.yml:
```
environment:
  # for nginx and dockergen
  VIRTUAL_HOST: gizmotronic.ca
  # wordpress setup
  WORDPRESS_DB_HOST: mysql_db_1
  WORDPRESS_DB_USER: wordpress
  WORDPRESS_DB_PASSWORD: wordpress
  WORDPRESS_DB_NAME: wordpress
```
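For completeness, the "removed the data volume and restarted both containers" step looked roughly like the sketch below; this assumes the directory layout above, and note that the -v flag deletes the existing database data:
```
cd ~/wp && docker-compose down          # stop WordPress first
cd ~/mysql && docker-compose down -v    # -v also drops the old data volume (deletes existing DB data)
docker-compose up -d                    # mariadb re-initializes with the new env vars
cd ~/wp && docker-compose up -d         # start WordPress again
```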
One problem with docker-compose is that, even though your application is linked to your database, the application will NOT wait for your database to be up and ready. Here is the official Docker page on the topic:
https://docs.docker.com/compose/startup-order/
I've faced a similar problem where my test application would fail because it couldn't connect to the server, simply because the server wasn't up and running yet. I used a workaround similar to the one in the article linked above: a shell script that polls the DB's address until it is available. This script should be the last thing run by your application's CMD.
```
#!/bin/bash
# Keep checking until a TCP connection to the MySQL port succeeds
# (MySQL does not speak HTTP, so we only test that the port accepts connections;
#  the /dev/tcp redirection requires bash)
until (echo > /dev/tcp/YOUR_MYSQL_DATABASE/3306) 2>/dev/null; do
  sleep 2
  echo "MySQL is not ready yet.. retrying..."
done

# Once we know the server's up, we can start our application
# enter code to start your application here
```
I'm not 100% sure this is the problem you're having. Another way to debug it is to run docker-compose in detached mode with the -d flag and then run docker ps to see if your database container is even running. If it is running, run docker logs $YOUR_DB_CONTAINER_ID to see if MySQL reports any errors when starting.
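For example, with a placeholder container ID:
```
docker-compose up -d                  # start the stack in the background
docker ps                             # check that the database container is actually running
docker logs $YOUR_DB_CONTAINER_ID     # placeholder -- use the ID or name shown by docker ps
```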
I'm trying to set up a Drupal site template, but I've run into an issue. This is my current docker-compose file:
```
version: '2'

services:
  database:
    image: mysql
    container_name: database
    command: mysqld --user=root --verbose
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: "db"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "pass"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    restart: always

  site:
    image: drupal
    container_name: site
    ports:
      - "555:80"
    volumes:
      - ./drupal:/var/www/html
    links:
      - database:database
    working_dir: /app
    restart: always

volumes:
  db:
```
Now if I do that, the site doesn't work: no files end up in the /var/www/html directory and the site 404s on everything. However, if I remove the volume from the site container, it works perfectly and I can start setting it up as if it were a regular site.
What am I missing?
When you don't map a volume in the site service, that means Drupal is using whatever's already in /var/www/html from the drupal image. When you map the volume, you're overwriting /var/www/html with whatever's in ./drupal on the host machine. The results you're seeing imply there may be something wrong with the contents of ./drupal. To start with, I would run the service without mapping a volume and then copy the exact contents of /var/www/html into your local folder:
```
docker cp site:/var/www/html/. ./drupal
```
(The container is named site here because of the container_name: site setting in your compose file.)
Then try running the service again, this time with the volume mapped and see if that works. If it works, that tells you the problem was with the contents of ./drupal.
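Putting the whole sequence together, a rough sketch:
```
docker-compose up -d site                 # run once with the ./drupal volume mapping removed
docker cp site:/var/www/html/. ./drupal   # copy the image's document root out to the host
docker-compose down                       # stop; then re-enable the ./drupal volume in the yml
docker-compose up -d                      # start again with ./drupal now populated
```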