Restoring a volume from backup using docker-compose - docker

I've been trying to migrate a volume from a container on one host to the same container on a different host, following the method in the Docker docs: Restore Volume from Backup. However, the project I am working on starts the container using docker-compose instead of docker run. Does anyone know if I can change the .yaml file somehow to decompress a tarball (similar to the docker run method)?
The docker run command for restoring from a backup looks like this:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

If you can extract an SQL dump from the tarball, you can have the official mysql image import it on first start by mounting it into /docker-entrypoint-initdb.d in your docker-compose.yaml file:
mysql:
  image: mysql:5.7
  hostname: mysql
  container_name: mysql
  restart: always
  expose:
    - '3306'
  ports:
    - '3306:3306'
  environment:
    - 'MYSQL_ROOT_PASSWORD=something'
  volumes:
    - mysql_db:/var/lib/mysql
    - ./your-backup.sql:/docker-entrypoint-initdb.d/your-backup.sql
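If the backup only exists as a tarball, a minimal sketch of the extra step (file names here are placeholders) is to extract the dump on the host first and then start the service:
# extract the SQL dump from the backup tarball
tar xvf backup.tar your-backup.sql
# the mysql entrypoint only imports init scripts when the data volume is empty
docker-compose up -d mysql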

So, I followed the docs linked in my question. The reason it wasn't working originally was that I needed to make sure the original volume AND container were removed before mounting the backup volume.
Essentially:
1. Back up the volume as per the Docker documentation
2. Remove the original container and volume
3. Restore the volume as per the documentation
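As a concrete sketch of those three steps (the container and volume names below are assumptions; with compose they are usually prefixed with the project name):
# 1. back up the data volume to a tarball in the current directory
docker run --rm --volumes-from myproject_db_1 -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
# 2. remove the original container and its volume
docker-compose down
docker volume rm myproject_dbdata
# 3. recreate the stack (compose recreates the named volume, now empty) and untar into it
docker-compose up -d
docker run --rm --volumes-from myproject_db_1 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"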

Related

Sharing data between docker containers without making data persistent

Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a volume or a host bind the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - /var/www/html
You would have to be willing to drop back to docker-compose file format version 2 and use data containers with the volumes_from directive, which is the equivalent of --volumes-from on a docker run command.
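A minimal sketch of what that could look like (the data-container setup and paths are assumptions, not from the original answer):
version: "2"
services:
  dbstore:
    image: ubuntu
    volumes:
      - /var/www/html    # anonymous volume owned by the data container
  app:
    image: someimage:fpm-alpine
    volumes_from:
      - dbstore          # mounts dbstore's volumes at the same paths
  web:
    image: nginx:alpine
    volumes_from:
      - dbstore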
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container, it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, run echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will actually disable caching completely and may solve the problem (dev only).
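For reference, a minimal sketch of that kind of location block (dev only; the location path is an assumption):
location / {
    # Expires: -1 tells browsers to treat the content as already expired,
    # so they revalidate on every request instead of serving a cached copy
    expires -1;
}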

How to find volume files from host while inside docker container?

In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, - ./services:/var/www/html finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were on my host machine in ./services directory, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command - docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v $(pwd)/services:/var/www/html -v $(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf php bash
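If the compose stack is already up, you can also attach to the existing container instead of starting a new one:
# opens a shell in the already-running "php" service container
docker-compose exec php bash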

Gitlab docker backup and restore

I am using GitLab via docker on an intranet disconnected from the internet.
I run the GitLab docker image using docker-compose with the following yml file:
web:
  image: 'gitlab/gitlab-ee:latest'
  restart: always
  hostname: 'myowngit.com'
  ports:
    - 8880:80
    - 8443:443
  volumes:
    - /srv/gitlab/config:/etc/gitlab
    - /srv/gitlab/logs:/var/log/gitlab
    - /srv/gitlab/data:/var/opt/gitlab
Then the free space for these volumes was not enough, so I moved the data to '/mnt/mydata' and modified the docker-compose.yml file accordingly:
...
volumes:
  - /mnt/mydata/gitlab/config:/etc/gitlab
  - /mnt/mydata/gitlab/logs:/var/log/gitlab
  - /mnt/mydata/gitlab/data:/var/opt/gitlab
To start the GitLab service I run sudo docker-compose up -d.
After the GitLab service is running I try to browse a project repository, but the repository is not found (HTTP response 404 or 503).
What is the reason?
How to move GitLab docker volume directory?
It should work unless, as shown in docker-gitlab issue 562, the move was done without preserving ownership:
It should be okay to move the files from /data1/data to /data2/data, but you should take a little care while copying the files to the new location, i.e. either of these should be fine:
cp -a /data1/data /data2/data
rsync --progress -av /data1/data /data2/data
Simply doing cp -r /data1/data /data2/data will not preserve the ownership of the files which will cause issues.
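Applied to the paths in the question, one possible sequence (a sketch, not tested against that exact setup) would be:
# stop GitLab so nothing writes to the volumes during the copy
docker-compose down
# copy everything, preserving ownership and permissions
mkdir -p /mnt/mydata/gitlab
cp -a /srv/gitlab/. /mnt/mydata/gitlab/
# after pointing the volumes in docker-compose.yml at /mnt/mydata/gitlab/...
docker-compose up -d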

Docker not mapping changes from local project to the container in windows

I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project to get a new filesystem snapshot.
The following seemed to work for some people, so I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?

How to add files in docker container and make them accessible from other containers?

Short version:
I want to add files to a docker container in docker-compose or the Dockerfile, and I want to make them accessible from the other containers defined in the docker-compose file. How can I do that?
Long version:
I have a Python app in a container that uses a .csv file to generate a POJO machine learning model.
I also have a Java app in a container that uses the POJO machine learning model and appends to the .csv file. The Java app has a fileWatcher() method implemented.
The containers are built from the docker-compose file, which calls a Dockerfile for each of them. So I want to add the files this way and not with manual docker commands.
You can add the same named volume to different containers:
docker volume create --name volume_data
docker run -t -i -v volume_data:/public debian:jessie /bin/bash
docker run -t -i -v volume_data:/public2 debian:jessie /bin/bash
or in docker-compose.yml:
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:/public/assets
  proxy:
    image: nginx
    volumes:
      - assets:/public/assets
volumes:
  assets:
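Relating that back to the question, a minimal sketch (the service names, build contexts, and the /data path are assumptions) would give both apps the same named volume, so the Python container can write the .csv/POJO files and the Java fileWatcher() sees the changes:
services:
  python-ml:
    build: ./python-app
    volumes:
      - shared-data:/data    # writes the POJO model and updates the .csv here
  java-app:
    build: ./java-app
    volumes:
      - shared-data:/data    # fileWatcher() watches this directory
volumes:
  shared-data: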
