Export docker volume to another machine - docker

I am currently trying to use redmine with a postgres database but I'm running into some issues when setting up the environment.
Let's say I have the following compose file:
db-redmine:
  image: 'bitnami/postgresql:11'
  user: root
  container_name: 'orchestrator_redmine_pg'
  environment:
    - POSTGRESQL_USERNAME=${POSTGRES_USER}
    - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRESQL_DATABASE=${POSTGRES_DB}
    - BITNAMI_DEBUG=true
  volumes:
    - 'postgresql_data:/bitnami/postgresql'
redmine:
  image: 'bitnami/redmine:4'
  container_name: 'orchestrator_redmine'
  ports:
    - '3000:3000'
  environment:
    - REDMINE_DB_POSTGRES=orchestrator_redmine_pg
    - REDMINE_DB_USERNAME=${POSTGRES_USER}
    - REDMINE_DB_PASSWORD=${POSTGRES_PASSWORD}
    - REDMINE_DB_NAME=${POSTGRES_DB}
    - REDMINE_USERNAME=${REDMINE_USERNAME}
    - REDMINE_PASSWORD=${REDMINE_PASSWORD}
  volumes:
    - 'redmine_data:/bitnami'
  depends_on:
    - db-redmine
volumes:
  postgresql_data:
    driver: local
  redmine_data:
    driver: local
This generates the postgres database for redmine and creates the redmine instance.
Once the containers are up, I enter the Redmine instance and configure the application: creating custom fields, adding trackers, issue types, etc. Quite a lot of setup is required, so I don't want to do this every time I deploy these containers.
I figured that, because all of the setup data goes to the volumes, I could export those volumes and then, on a new machine, import them. This way, when both apps start on the new machine, they will have all the necessary information from the previous setup.
This sounded simple enough, but I'm struggling with the export then import phase.
From what I have seen, I am able to export postgresql_data to a .tar file by doing the following:
docker export postgresql_data --output="postgres_data.tar"
But how can I import the newly generated .tar file on a new machine? If I'm not mistaken, by importing the .tar file into a volume called postgresql_data on the new machine, the data from the template will be used when generating the new container.
Is there a way to do this? Is this the correct way of duplicating a setup between two hosts?
Is doing something like docker volume create postgresql_data and then copying the files into the volume directory the way to go?

My suggestion is to use the pg_dump and pg_restore tools instead of copying the volume.
You can add a mount path to the postgres container, e.g.:
db-redmine:
  image: 'bitnami/postgresql:11'
  user: root
  container_name: 'orchestrator_redmine_pg'
  environment:
    - POSTGRESQL_USERNAME=${POSTGRES_USER}
    - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRESQL_DATABASE=${POSTGRES_DB}
    - BITNAMI_DEBUG=true
  volumes:
    - 'postgresql_data:/bitnami/postgresql'
    - /dump-files:/dump-files
Now log in to the container and run pg_dump "your-db-name" > /dump-files/dump.
Then copy this file to any newly created container and, inside the new container, run pg_restore -d "your-db-name" /dump-files/dump.
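For example, a minimal sketch of the full round trip, run from the host shell (the container name orchestrator_redmine_pg and the POSTGRES_* variables come from the compose file above and are assumed to be set in your shell; new_redmine_pg is a hypothetical container on the target machine). Note that pg_restore expects a custom-format archive, so the dump is taken with -Fc; a plain SQL dump would be restored with psql instead:

# dump the database in custom format from the running container
docker exec orchestrator_redmine_pg \
  pg_dump -U "$POSTGRES_USER" -Fc "$POSTGRES_DB" > /dump-files/redmine.dump

# copy /dump-files/redmine.dump to the new machine (scp, rsync, ...),
# then restore it into the freshly created database there
docker exec -i new_redmine_pg \
  pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB" < /dump-files/redmine.dump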

First, note that docker export has nothing to do with volumes; that command exports a container filesystem as a tar archive. That won't include anything from mounted volumes.
The easiest way to copy the contents of a volume is probably something like:
docker run -v postgresql_data:/data alpine tar -C /data -cf- . |
ssh remotehost docker run -i -v postgresql_data:/data alpine tar -C /data -xf-
This tars up everything from your existing volume and pipes it over an ssh connection to a remote host, where we create a new postgresql_data volume and populate it by extracting the tar archive.
For postgres in particular you could do a pg_dump and pg_restore instead of using tar.
You will have to fiddle with the volume names, because docker-compose prefixes them with the project name when it creates them.
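Since Compose prefixes volume names with the project (directory) name, it can help to check the actual name first and substitute it into the command above; the filter just matches a substring:

docker volume ls --filter name=postgresql_data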

Related

Sharing data between docker containers without making data persistent

Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a volume or a host bind the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - ./:/var/www/html
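For reference, a truly anonymous volume is declared with only a container path and no host side (the mapping above is actually a bind mount of the current directory); a minimal sketch:

volumes:
  - /var/www/html   # anonymous volume: Docker generates the name; removed by `docker-compose down -v`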
You would have to be willing to drop back to docker-compose version 2 and use data containers with the volumes_from directive, which is equivalent to --volumes-from on a docker run command.
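A minimal sketch of sharing the app container's volume with nginx via volumes_from (version 2 file format, since the directive was removed in version 3):

version: "2"
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html          # anonymous volume created by the app container
  web:
    image: nginx:alpine
    volumes_from:
      - app                    # web mounts every volume defined by app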
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container, since it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, run echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will actually disable caching completely and may solve the problem (dev only).
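A minimal sketch of that directive (dev only; expires -1 makes nginx send Cache-Control: no-cache so the browser revalidates on every request):

location / {
    expires -1;    # disable expiry-based caching while developing
}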

How to move docker volume to disk location?

I have a MySQL docker image running in a docker container on an Ubuntu VPS. I bring up MySQL using the docker-compose up -d command via the following docker-compose.yml file:
version: "3"
services:
mysql_server:
image: mysql:8.0.21
restart: always
container_name: mysql_server
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
MYSQL_ROOT_PASSWORD: root_password
volumes:
- mysql_server_data:/var/lib/mysql
- /mysql/files/conf.d:/etc/mysql/conf.d
I am having some performance issues and would like to do the following in an attempt to improve performance.
I want the data in the mysql_server_data volume to be mounted on /mysql/data without losing any data, as this instance is running in production.
I also want to mount the MySQL config file on /mysql/files so I can change the instance configuration to increase performance.
Questions
How can I change the data location of the volume from mysql_server_data to /mysql/data?
Also, how can I mount MySQL's config file on /mysql/files/conf.d to allow me to update the settings?
I tried to mount config file like this
volumes:
  - /mysql/files/conf.d:/etc/mysql/conf.d
But that created a directory /mysql/files/conf.d with no config file.
To move the data:
Shut down the containers with docker-compose down, then, on the local file system, copy the data from mysql_server_data to /mysql/data. Then change the compose file to reflect the new location. Finally, restart the containers with docker-compose up.
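A sketch of those steps, assuming the default Docker data root /var/lib/docker and a compose project name of myproject (check the real path with docker volume inspect first):

docker-compose down
# find the volume's location on disk (look at "Mountpoint")
docker volume inspect myproject_mysql_server_data
# copy the data, preserving ownership and permissions
sudo mkdir -p /mysql/data
sudo cp -a /var/lib/docker/volumes/myproject_mysql_server_data/_data/. /mysql/data/
# change the compose file to use `- /mysql/data:/var/lib/mysql` and restart
docker-compose up -d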
To mount the config files, as per the Docker Hub documentation for MySQL: if /my/custom/config-file.cnf is the path and name of your custom configuration file, then your volume mapping is:
/my/custom:/etc/mysql/conf.d
Note that mapping the volume into the container does not bring the data from the container to your local machine, but the other way around. So if you want to have the file in the container, you must first create it on your local machine.
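So, for the layout from the question, create the config file on the host before starting the container; something like this (the file name perf.cnf and the setting are only examples):

mkdir -p /mysql/files/conf.d
cat > /mysql/files/conf.d/perf.cnf <<'EOF'
[mysqld]
# example tuning value; adjust to your machine
innodb_buffer_pool_size = 1G
EOF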
Use the trick suggested by Docker maintainer Sebastiaan van Stijn at https://github.com/moby/moby/issues/31417 to send the tar over stdout:
docker run --rm -v vol_name:/vol_path img_name sh -c 'tar -cOzf - /vol_path' > volume-export.tgz

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it's slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it like two weeks ago to help someone else on StackOverflow.
(Screenshot: files from the container visible on my machine, the host.)
You can read more about Docker volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host (./site), which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh if bash is not available). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see if the staged test.txt file is now inside the container; this gives you definitive evidence that when you use volumes, the data on your host overwrites the data in the container.
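That test could look something like this (the container name my_laravel comes from the compose file in the question):

mkdir -p site && echo "hello" > site/test.txt   # pre-stage a file on the host
docker-compose up -d
docker exec -it my_laravel ls -a /var/www/site  # test.txt is there; the files built into the image are gone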
With that being said, doing something like this to share, say, a log directory will work: the volume path specified on the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config files or app files.
Hope this helps.

File in docker-entrypoint-initdb.d never get executed when using docker compose

I'm using Docker Toolbox on Windows 10
I can access the php part successfully via http://192.168.99.100:8000. I have been working on the mariadb part but am still having several problems.
I have an SQL file at /mariadb/initdb/abc.sql, so it should be copied into /docker-entrypoint-initdb.d. After the container is created I use docker-compose exec mariadb to access the container; the file is there as /docker-entrypoint-initdb.d/abc.sql, but it never gets executed. I have also tested importing the SQL file into the container manually, which was successful, so the SQL file is valid.
I don't quite understand the data folder mapping, or what to do to keep the folder in sync with the container. I always get this warning when I recreate the container using docker-compose up -d:
WARNING: Service "mariadb" is using volume "/var/lib/mysql" from the previous container. Host mapping "/.../mariadb/data" has no effect. Remove the existing containers (with docker-compose rm mariadb) to use the host volume mapping.
Recreating db ... done
Questions
How do I get the SQL file in /docker-entrypoint-initdb.d to be executed?
What is the right way to map the data folder with the mariadb container?
Please guide
Thanks
This is my docker-compose.yml
version: "3.2"
services:
php:
image: php:7.1-apache
container_name: web
restart: always
volumes:
- /.../php:/var/www/html
ports:
- "8000:80"
mariadb:
image: mariadb:latest
container_name: db
restart: always
environment:
- MYSQL_ROOT_PASSWORD=12345
volumes:
- /.../mariadb/initdb:/docker-entrypoint-initdb.d
- /.../mariadb/data:/var/lib/mysql
ports:
- "3306:3306"
For me the issue was the fact that Docker didn't clean up my mounted volumes from previous runs.
Doing a:
docker volume ls
will list any volumes; if one from a previous run exists, run docker volume rm on it to remove it.
As stated in the Docker MySQL docs, scripts in the /docker-entrypoint-initdb.d folder are only evaluated the first time the container runs; if a previous data volume remains, the scripts won't run.
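In other words, a sketch of that cleanup (the volume names are whatever docker volume ls shows for your project):

docker-compose down -v   # also removes anonymous volumes left over from previous runs
docker volume ls         # verify nothing from a previous run remains; otherwise `docker volume rm <name>`
docker-compose up -d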
As for the mapping, you simply need to mount your script folder to the '/docker-entrypoint-initdb.d' folder in the image:
volumes:
  - ./db/:/docker-entrypoint-initdb.d
I have a single script file in a folder named db, relative to my docker-compose file.
In your Dockerfile for building the mariaDB image, add the abc.sql file to the Docker entrypoint directory at the end, like so:
COPY abc.sql /docker-entrypoint-initdb.d/
Remove the - /.../mariadb/initdb:/docker-entrypoint-initdb.d mapping, as any file copied into the entrypoint directory will be executed.
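A minimal sketch of that Dockerfile (the file name mariadb.dockerfile is only illustrative; the compose service would then use build: pointing at it instead of image: mariadb:latest):

# mariadb.dockerfile
FROM mariadb:latest
# files copied here are executed on the first start against an empty data directory
COPY abc.sql /docker-entrypoint-initdb.d/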
Note: Windows containers do not execute anything in docker-entrypoint-initdb.d/

How to sync code between container and host using docker-compose?

Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use docker and docker-compose to create a mariaDB, NGINX and a project container for easy developing and deploying.
When developing I want my code directory on the host machine to be synchronised with the docker container. I know that could be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container
in the cli as stated here, but I want to do that within a docker-compose v2 file.
I have got as far as having a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www
volumes:
  myCodeVolume:
How can I synchronise my /var/www directory in the container with my host machine (Ubuntu desktop, macOS or Windows machine)?
Thank you for your help.
It is pretty much the same way: you do the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
  ...
  myProj:
    ...
    volumes:
      - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds the top-level volumes section, it first tries to docker volume create the keys under it before creating any other container. Those volumes can then be used to hold the data you want to persist across containers.
So, if I take your file for an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj
