I have a MySQL Docker image running in a Docker container on an Ubuntu VPS. I bring MySQL up with the docker-compose up -d command via the following docker-compose.yml file:
version: "3"
services:
mysql_server:
image: mysql:8.0.21
restart: always
container_name: mysql_server
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
MYSQL_ROOT_PASSWORD: root_password
volumes:
- mysql_server_data:/var/lib/mysql
- /mysql/files/conf.d:/etc/mysql/conf.d
I am having some performance issues and would like to do the following in an attempt to improve performance.
I want the data in the mysql_server_data volume to be mounted on /mysql/data without losing any data, as this instance is running in production.
I also want to mount the MySQL config file on /mysql/files so I can change the instance configuration to increase performance.
Questions
How can I change the data location of the volume from mysql_server_data to /mysql/data?
Also, how can I mount MySQL's config file on /mysql/files/conf.d to allow me to update the settings?
I tried to mount the config file like this:
volumes:
  - /mysql/files/conf.d:/etc/mysql/conf.d
But that created a directory /mysql/files/conf.d with no config file.
To move the data:
Shut down the container with docker-compose down, then, on the local file system, copy the data from mysql_server_data to /mysql/data. Then change the compose file to reflect the new location. Finally, restart the container with docker-compose up.
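A minimal sketch of those steps, assuming the default local volume driver, so the data currently lives under /var/lib/docker/volumes/<project>_mysql_server_data/_data (the <project> prefix depends on your compose project name; check it with docker volume ls):

# stop the stack so mysqld is no longer writing to the files
docker-compose down

# copy the volume contents to the new host path, preserving ownership and permissions
mkdir -p /mysql/data
cp -a /var/lib/docker/volumes/<project>_mysql_server_data/_data/. /mysql/data/

# in docker-compose.yml, replace the named volume with the host path:
#   - /mysql/data:/var/lib/mysql

# bring the stack back up and verify the data before deleting the old volume
docker-compose up -d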
To mount the config files, as per the Docker Hub documentation for MySQL: if /my/custom/config-file.cnf is the path and name of your custom configuration file, then your volume mapping is:
/my/custom:/etc/mysql/conf.d
Note that mapping the volume into the container does not bring the data from your container to your local file system, but the other way around. So if you want to have the file in the container, you must first create it locally.
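For example, to end up with a custom config inside the container (a sketch; the file name my-custom.cnf and the settings in it are placeholders, not values from the question):

# on the host, create the directory and the config file first
mkdir -p /mysql/files/conf.d
cat > /mysql/files/conf.d/my-custom.cnf <<'EOF'
[mysqld]
# example tuning values -- adjust to your workload
innodb_buffer_pool_size = 1G
max_connections = 200
EOF

# then map the directory in docker-compose.yml:
#   - /mysql/files/conf.d:/etc/mysql/conf.d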
Use the trick suggested by Docker maintainer Sebastiaan van Stijn at https://github.com/moby/moby/issues/31417 to send the tar over stdout:
docker run --rm -v vol_name:/vol_path img_name sh -c 'tar -cOzf - /vol_path' > volume-export.tgz
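To load that archive back into a volume on the target host, the reverse direction is roughly (a sketch, assuming the archive was produced by the command above, so its entries are stored as vol_path/...):

# create the target volume, then unpack the archive into it
docker volume create vol_name
docker run --rm -i -v vol_name:/vol_path alpine sh -c 'tar -xzf - -C /' < volume-export.tgz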
Related
I want to add my aws credentials file to a docker container, so it can access AWS apis.
The credentials file exists in my host machine at /home/user/.aws/credentials
When running the container from the command line, I can do
docker run --rm -d -v /home/user/.aws/:/.aws:ro -d \
--env AWS_CREDENTIAL_PROFILES_FILE=/.aws/credentials proj:latest
In docker compose, I can mount the .aws directory with volumes property like so:
services:
  proj:
    volumes:
      - aws_credentials:/.aws:ro
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials
volumes:
  aws_credentials:
    external: true
My question is, how to populate the external aws_credentials volume with data?
Approaches that do not work:
Use secrets instead of volumes. I am not using Docker swarm
Use config instead of volumes. I am not using Docker swarm
Use a bind mount instead of a volume. The docker-compose file gets checked into source control, and I do not want directories checked in.
services:
  proj:
    volumes:
      - /home/user/.aws/:/.aws:ro #<-- DO NOT WANT THIS IN SOURCE CONTROL
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials
One answer I came up with is using environment variables like so:
services:
  proj:
    secrets:
      - aws_credentials
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /run/secrets/aws_credentials
secrets:
  aws_credentials:
    file: ${awscredfile}
and making sure awscredfile is either set in the environment of the parent process of docker compose, or passed in an env file with the --env-file parameter to docker compose.
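For instance (a sketch; the variable name is the one used above, and the path is only an example):

# .env -- kept out of source control
awscredfile=/home/user/.aws/credentials

# let compose substitute ${awscredfile} from that file
docker compose --env-file .env up -d

# or export it in the parent shell instead
export awscredfile=/home/user/.aws/credentials
docker compose up -d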
I am currently trying to use Redmine with a Postgres database, but I'm running into some issues when setting up the environment.
Let's say I have the following compose file:
db-redmine:
  image: 'bitnami/postgresql:11'
  user: root
  container_name: 'orchestrator_redmine_pg'
  environment:
    - POSTGRESQL_USERNAME=${POSTGRES_USER}
    - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRESQL_DATABASE=${POSTGRES_DB}
    - BITNAMI_DEBUG=true
  volumes:
    - 'postgresql_data:/bitnami/postgresql'
redmine:
  image: 'bitnami/redmine:4'
  container_name: 'orchestrator_redmine'
  ports:
    - '3000:3000'
  environment:
    - REDMINE_DB_POSTGRES=orchestrator_redmine_pg
    - REDMINE_DB_USERNAME=${POSTGRES_USER}
    - REDMINE_DB_PASSWORD=${POSTGRES_PASSWORD}
    - REDMINE_DB_NAME=${POSTGRES_DB}
    - REDMINE_USERNAME=${REDMINE_USERNAME}
    - REDMINE_PASSWORD=${REDMINE_PASSWORD}
  volumes:
    - 'redmine_data:/bitnami'
  depends_on:
    - db-redmine
volumes:
  postgresql_data:
    driver: local
  redmine_data:
    driver: local
This generates the postgres database for redmine and creates the redmine instance.
Once the containers are up, I enter the Redmine instance and configure the application: this means creating custom fields, adding trackers, issue types, etc. Quite a lot of setup is required, so I don't want to do this every time I deploy these containers.
I figured that, because all of the setup data is going to the volumes, I could export those volumes and then, on a new machine, import them. This way, when both apps start on the new machine, they will have all the necessary information from the previous setup.
This sounded simple enough, but I'm struggling with the export then import phase.
From what I have seen, I am able to export postgresql_data to a .tar file by doing the following:
docker export postgresql_data --output="postgres_data.tar"
But how can I import the newly generated .tar file on a new machine? If I'm not mistaken, by importing the .tar file into a volume called postgresql_data on the new machine, the data from the template will be used when generating the new container.
Is there a way to do this? Is this the correct way of duplicating a setup between two hosts?
Is doing something like docker volume create postgresql_data and then copying the files to the volume directory the way to go?
My suggestion is to use the pg_dump and pg_restore tools instead of copying the volume.
You can add a mount path to the Postgres container, e.g.:
db-redmine:
  image: 'bitnami/postgresql:11'
  user: root
  container_name: 'orchestrator_redmine_pg'
  environment:
    - POSTGRESQL_USERNAME=${POSTGRES_USER}
    - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRESQL_DATABASE=${POSTGRES_DB}
    - BITNAMI_DEBUG=true
  volumes:
    - 'postgresql_data:/bitnami/postgresql'
    - /dump-files:/dump-files
Now log in to the container and run pg_dump -Fc "your-db-name" -f /dump-files/dump (the -Fc custom format is what pg_restore can read; a plain-text dump would have to be restored with psql instead).
Now copy this file to any newly created container, and inside the new container run: pg_restore -d "your-db-name" /dump-files/dump
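Roughly, the same thing without an interactive shell looks like this (a sketch; your_db_user/your_db_name stand for whatever ${POSTGRES_USER}/${POSTGRES_DB} expand to, new_pg_container is a placeholder for the container on the target machine, and you may be prompted for the password from POSTGRESQL_PASSWORD):

# dump from the running container into the bind-mounted /dump-files directory
docker exec -it orchestrator_redmine_pg pg_dump -U your_db_user -Fc -f /dump-files/dump your_db_name

# copy /dump-files/dump to the new host (mounted the same way), then restore inside the new container
docker exec -it new_pg_container pg_restore -U your_db_user -d your_db_name /dump-files/dump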
First, note that docker export has nothing to do with volumes; that command exports a container filesystem as a tar archive. That won't include anything from mounted volumes.
The easiest way to copy the contents of a volume is probably something like:
docker run -v postgresql_data:/data alpine tar -C /data -cf- . |
ssh remotehost docker run -i -v postgresql_data:/data alpine tar -C /data -xf-
This tars up everything from your existing volume and pipes it over an ssh connection to a remote host, where we create a new postgresql_data volume and populate it by extracting the tar archive.
For postgres in particular you could do a pg_dump and pg_restore instead of using tar.
You will have to fiddle with the volume name, because docker-compose created it and prefixed it with the project name.
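For example (a sketch; myproject stands for your actual compose project name):

# compose names volumes <project>_<volume>, e.g. myproject_postgresql_data
docker volume ls
# use that full name in place of postgresql_data in the tar pipeline above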
Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a named volume or a host bind mount, the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - ./:/var/www/html
You would have to be willing to drop back to docker-compose version 2 and use data containers with the volumes_from directive.
Which is equivalent to --volumes-from on a docker run command.
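A rough sketch of what that can look like with the version 2 file format (reusing the image names from the question; not a drop-in, tested configuration):

version: "2"
services:
  app:
    image: someimage:fpm-alpine
    # declare /var/www/html as an anonymous volume owned by this container
    volumes:
      - /var/www/html
  web:
    image: nginx:alpine
    # mount all volumes from the app container, including /var/www/html
    volumes_from:
      - app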
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a 2nd terminal, docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container, as it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, then echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, you can cd /vol and type ls -gAlFh, and you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will actually disable caching completely and may solve the problem (dev only).
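Something like this inside the nginx server config (a minimal sketch, dev only):

location / {
    # tell clients not to cache, so edits in the shared volume show up immediately
    expires -1;
}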
I'm using Docker Toolbox on Windows 10
I can access the PHP part successfully via http://192.168.99.100:8000. I have been working on the MariaDB part but am still having several problems.
I have an SQL file at /mariadb/initdb/abc.sql, so it should be copied into /docker-entrypoint-initdb.d. After the container is created, I use docker-compose exec mariadb to access the container; the file is there as /docker-entrypoint-initdb.d/abc.sql, but it never gets executed. I have also tested importing the SQL file into the container manually, and that was successful, so the SQL file is valid.
I don't quite understand the data folder mapping, or what to do to keep the folder in sync with the container. I always get the following warning when I recreate the container using docker-compose up -d:
WARNING: Service "mariadb" is using volume "/var/lib/mysql" from the previous container. Host mapping "/.../mariadb/data" has no effect. Remove the existing containers (with docker-compose rm mariadb) to use the Recreating db ... done
Questions
How to get the sql file in /docker-entrypoint-initdb.d to be executed ?
What is the right way to map the data folder with the mariadb container ?
Please guide
Thanks
This is my docker-compose.yml
version: "3.2"
services:
php:
image: php:7.1-apache
container_name: web
restart: always
volumes:
- /.../php:/var/www/html
ports:
- "8000:80"
mariadb:
image: mariadb:latest
container_name: db
restart: always
environment:
- MYSQL_ROOT_PASSWORD=12345
volumes:
- /.../mariadb/initdb:/docker-entrypoint-initdb.d
- /.../mariadb/data:/var/lib/mysql
ports:
- "3306:3306"
For me the issue was the fact that Docker didn't clean up my mounted volumes from previous runs.
Doing a:
docker volume ls
will list any volumes; if volumes from previous runs exist, run docker volume rm <name> on them to remove them.
As stated in the Docker MySQL docs, scripts in the /docker-entrypoint-initdb.d folder are only evaluated the first time the container runs with an empty data directory; if a previous volume remains, the scripts won't run.
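For example (a sketch; the exact volume name depends on your compose project and file):

# stop the stack and see what volumes are left over
docker-compose down
docker volume ls

# remove the stale database volume so the init scripts run again on the next start
docker volume rm <project>_<volume>

# or remove everything the compose file created, including its volumes
docker-compose down -v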
As for the mapping, you simply need to mount your script folder to the '/docker-entrypoint-initdb.d' folder in the image:
volumes:
  - ./db/:/docker-entrypoint-initdb.d
I have a single script file in a folder named db, relative to my docker-compose file.
In your Dockerfile for creating MariaDB, at the end, add the abc.sql file to your Docker entrypoint directory like so:
COPY abc.sql /docker-entrypoint-initdb.d/
Remove the - /.../mariadb/initdb:/docker-entrypoint-initdb.d mapping, as any file copied into the entrypoint directory will be executed.
Note: Windows containers do not execute anything in docker-entrypoint-initdb.d/
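A minimal sketch of that Dockerfile (assuming abc.sql sits next to it in the build context):

# Dockerfile
FROM mariadb:latest
# anything copied here runs once, when the data directory is first initialized
COPY abc.sql /docker-entrypoint-initdb.d/

Then point the service at it with build: . instead of image: mariadb:latest in docker-compose.yml.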
docker-compose:
mysql:
  image: mysql:5.7.16
  container_name: f_mysql
  volumes:
    - ./db:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: sheep
  expose:
    - '3306'
and I use docker exec to enter this container,
and when I type echo $MYSQL_ROOT_PASSWORD I get sheep,
but the MySQL root password is still '':
when I type mysql -uroot, I can log in to MySQL.
For me the issue was that I'd created the db volume with the random password option set, then disabled that, but hadn't cleared the volume. So no matter what changes I made to the docker-compose file, the old volume with the old login information was still there.
I had to docker volume ls to find the volume then docker volume rm <name> to remove it. After re-upping, everything worked.
Regarding other answers on this page: the format for specifying env variables is correct; you can use either
environment:
  MYSQL_ROOT_PASSWORD: a_password
OR
environment:
  - MYSQL_ROOT_PASSWORD=a_password
The image entrypoint script will never make changes to an existing database. If you mount an existing data directory into /var/lib/mysql, then MYSQL_ROOT_PASSWORD will have no effect.
Workaround
Remove all unused volumes: docker volume prune
Remove the volume from your database service: docker volume rm <db_data>
Down containers, remove volumes: docker-compose down --volumes
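In practice that boils down to something like the following, which throws away the existing database data, so only do it if you can afford to lose it (a sketch):

# remove the containers and any named/anonymous volumes, then recreate from scratch
docker-compose down --volumes
# with a host path like ./db:/var/lib/mysql you have to clear that directory instead,
# e.g. rm -rf ./db
docker-compose up -d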
You need to fix your docker-compose file:
environment:
  - MYSQL_ROOT_PASSWORD=sheep
The following is the full docker-compose that achieves what you want:
version: '2'
services:
  mysql:
    image: mysql:5.7.16
    container_name: f_mysql
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=sheep
    expose:
      - '3306'
Then, after docker exec -it f_mysql /bin/bash and, inside the container, mysql -u root -p, using sheep as the password will be the only way to connect to the MySQL server.
This happens when the host directory backing your volume has the wrong permissions.
You can fix this by letting Docker create the directory itself.
If you have existing data, you can compare the new directory with the previous one in order to apply the correct permissions, because this depends on whether docker/your user is part of the root group.
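A quick way to check and fix that (a sketch; 999:999 is the uid/gid the official mysql image typically runs as, but verify it for your image):

# compare ownership of the host directory with what the container expects
ls -ln ./db
docker exec f_mysql id mysql

# then align the host directory with that uid/gid
sudo chown -R 999:999 ./db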
Please note that, according to the official Docker image: "none of those variables will have any effect if you start the container with a data directory that already contains a database". In fact, in this case you already have a "mysql.user" table, and you should use the user info already set there. The same thing happens when you try to restore a full dump.
This happened when the mount directory has extended attributes (ea) on Mac.
It is better to delete the directory once and recreate it, or check the attributes with the xattr command.
$ ls -l ./db
$ xattr -l ./db
I had the same problem, and after a lot of trying I found my solution. When I ran docker-compose for the first time, I left everything at the original settings, like this:
environment:
  MYSQL_ROOT_PASSWORD: MYSQL_ROOT_PASSWORD
Then I changed the password and ran docker-compose up, but the password was still MYSQL_ROOT_PASSWORD.
My solution was to delete the "mysql" Docker image from my disk. After that, Docker downloaded everything again BUT also set my root password as well. Maybe this is not the best approach, but I am also a beginner with Docker.
So, in a nutshell, a simple docker-compose up is not enough.