Gitlab docker backup and restore - docker

I am using GitLab via Docker on an intranet disconnected from the internet.
I run the GitLab Docker container using docker-compose with the following yml file:
web:
  image: 'gitlab/gitlab-ee:latest'
  restart: always
  hostname: 'myowngit.com'
  ports:
    - 8880:80
    - 8443:443
  volumes:
    - /srv/gitlab/config:/etc/gitlab
    - /srv/gitlab/logs:/var/log/gitlab
    - /srv/gitlab/data:/var/opt/gitlab
Then the free space for these volumes was not enough, so I moved the data to '/mnt/mydata' and modified the docker-compose.yml file:
... ... ...
volumes:
  - /mnt/mydata/gitlab/config:/etc/gitlab
  - /mnt/mydata/gitlab/logs:/var/log/gitlab
  - /mnt/mydata/gitlab/data:/var/opt/gitlab
To start the GitLab service I run sudo docker-compose up -d.
After running the GitLab service I try to explore a project repository, but the repository is not found (HTTP response 404 or 503).
What is the reason?
How to move GitLab docker volume directory?

It should work unless, as shown in docker-gitlab issue 562, the move was done with a different ownership.
It should be okay to move the files from /data1/data to /data2/data, but you should take a little care while copying the files to the new location, i.e. either of these should be fine:
cp -a /data1/data /data2/data
rsync --progress -av /data1/data /data2/data
Simply doing cp -r /data1/data /data2/data will not preserve the ownership of the files, which will cause issues.
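Applied to the GitLab move in the question, the whole procedure might look like this (a sketch; paths are taken from the question, and the stack should be stopped first so nothing writes during the copy):

```shell
# Stop the stack so GitLab is not writing while we copy
docker-compose down

# -a preserves ownership, permissions, timestamps and symlinks
sudo mkdir -p /mnt/mydata/gitlab
sudo cp -a /srv/gitlab/. /mnt/mydata/gitlab/

# Spot-check that numeric UIDs/GIDs match on both sides
sudo ls -ln /srv/gitlab/data | head
sudo ls -ln /mnt/mydata/gitlab/data | head

# Point docker-compose.yml at /mnt/mydata/gitlab/... and start again
docker-compose up -d
```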

Related

Dockerized Solr not updating configsets

I have a Dockerized Solr and I have to update a configset. The file I modified is under this path: solrhome/server/solr/configsets/myconfig/conf and is called DIH_ContentIndex.xml. After that, I deleted my Docker images and containers with these commands:
docker container stop <solr_container_id>
docker container rm <solr_container_id>
docker image rm <solr_img_id>
docker volume rm <solr_volume>
I rebuilt everything, but Solr is not picking up the changes, as I can see when I go to the Files section. So I decided to add a new configset, which we will call newconfig, with my changes at the same level as the other one. I redid everything and restarted, but nothing. So I decided to enter the container with docker exec -it --user root <solr_container_id> /bin/bash and change the files manually (just to test). I stopped and restarted the container, but still nothing. Even after deleting everything Docker-related again, I can still see my changes from inside the container. At this point, I think either I'm not deleting everything or I'm not placing my new config in the right directory. What else do I need to do for a clean build?
Here is the fragment of docker-compose I'm trying to launch, just in case this is the fault.
solr:
  container_name: "MySolr"
  build: ./solr
  restart: always
  hostname: solr
  ports:
    - "8983:8983"
  networks:
    - my-network
  volumes:
    - vol_solr:/opt/solr
  depends_on:
    - configdb
    - zookeeper
  environment:
    ZK_HOST: zookeeper:2181
Of course, everything else is running fine, so there is no error with the dependencies.
It is not a problem of browser cache. I already tried clearing the cache and using a different browser.
Some updates: it actually copies my config inside the freshly built image, but I still can't select it from the frontend. Clearly, I'm placing my config files in the wrong path.
Thank you in advance!
Solved! All I had to do was:
Enter the Solr container with this command:
docker exec -it --user root <solr_container_id> /bin/bash
(entering as root to be able to install nano)
Copy my pre-existing config somewhere (in the same path as bin for convenience) and modify the file DIH_ContentIndex.xml:
apt update
apt install nano
nano DIH_ContentIndex.xml
Go to solr/bin and upload to ZooKeeper, using this command:
solr zk upconfig -n config_name -d /path/to/config_folder
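Put together, the fix might look like this; the container id, the configset name myconfig, and the /opt/solr paths are assumptions based on the question (its compose file mounts vol_solr at /opt/solr and points ZK_HOST at zookeeper:2181):

```shell
# Open a root shell in the Solr container (root so apt works)
docker exec -it --user root <solr_container_id> /bin/bash

# --- inside the container ---
apt update && apt install -y nano

# Copy the configset next to bin and edit the data-import config
cp -r /opt/solr/server/solr/configsets/myconfig /opt/solr/myconfig-edit
nano /opt/solr/myconfig-edit/conf/DIH_ContentIndex.xml

# Upload the edited configset to ZooKeeper so collections can use it
cd /opt/solr/bin
./solr zk upconfig -n myconfig -d /opt/solr/myconfig-edit -z zookeeper:2181
```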

Restoring a volume from backup using docker-compose

I've been trying to migrate a volume from one container to the same container on a different host, just by testing out the method in the Docker docs: Restore Volume from Backup. However, the project I am working on starts the container using docker-compose instead of docker run. Does anyone know if I can change the .yaml file somehow to decompress a tarball (similar to the docker run method)?
The docker run command for restoring from a backup looks like this:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
If you can decompress the tarball file, you can use this in your docker-compose.yaml file:
mysql:
  image: mysql:5.7
  hostname: mysql
  container_name: mysql
  restart: always
  expose:
    - '3306'
  ports:
    - '3306:3306'
  environment:
    - 'MYSQL_ROOT_PASSWORD=something'
  volumes:
    - mysql_db:/var/lib/mysql
    - ./your-backup.sql:/docker-entrypoint-initdb.d/your-backup.sql
So, I followed the docs linked in my question. The reason it wasn't working originally is that I needed to double-check that both the original volume AND container were removed before mounting the backup volume.
Essentially:
1. Back up the volume as per the Docker documentation
2. Remove the original container and volume
3. Restore the volume as per the documentation
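The steps above can be sketched by adapting the docs' method to the compose file's named volume (mysql_db here; note that Compose usually prefixes volume names with the project name, so check docker volume ls first):

```shell
# 1. Back up the named volume to a tarball in the current directory
docker run --rm -v mysql_db:/dbdata -v "$(pwd)":/backup ubuntu \
  tar cvf /backup/backup.tar -C / dbdata

# 2. On the target host: remove the old container and volume
docker-compose down
docker volume rm mysql_db

# 3. Recreate the volume and unpack the backup into it
docker volume create mysql_db
docker run --rm -v mysql_db:/dbdata -v "$(pwd)":/backup ubuntu \
  bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

docker-compose up -d
```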

Sharing data between docker containers without making data persistent

Let's say I have a docker-compose file with two containers:
version: "3"
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - myvolume:/var/www/html
  web:
    image: nginx:alpine
    volumes:
      - myvolume:/var/www/html
volumes:
  myvolume:
The app container contains the application code in the /var/www/html directory, which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a volume or a host bind, the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - /var/www/html
(a mapping with no host side, like the one above, is an anonymous volume; ./:/var/www/html would be a host bind mount, which persists)
You would have to be willing to drop back to docker-compose version 2 and use data containers with the volumes_from directive.
Which is equivalent to --volumes-from on a docker run command.
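A minimal sketch of that data-sharing approach in compose version 2 (service names and images assumed from the question's compose file):

```yaml
version: "2"
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html          # anonymous volume, tied to this container
  web:
    image: nginx:alpine
    volumes_from:
      - app                    # nginx sees exactly what app has in /var/www/html
```

Because the volume is anonymous, recreating the app container with docker-compose up -d -V (--renew-anon-volumes) should give nginx the fresh code from the new image.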
This should work fine. The problem isn't with Docker; you can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
  one:
    image: ubuntu
    command: sleep 100000
    volumes:
      - vol:/vol
  two:
    image: ubuntu
    command: sleep 100000
    volumes:
      - vol:/vol
volumes:
  vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container; it can change). You'll find yourself in a bash shell in the container. Change to the /vol directory with cd /vol, then run echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, you can cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't Docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will disable caching completely and may solve the problem (dev only).
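As a sketch, that dev-only tweak might sit in the nginx site config like this (the listen port and root path are placeholders, not from the question):

```nginx
server {
    listen 80;
    root /var/www/html;

    location / {
        # Dev only: expires -1 sends "no caching" headers,
        # so the browser refetches instead of serving stale files
        expires -1;
    }
}
```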

Docker not mapping changes from local project to the container in windows

I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project for a new filesystem snapshot.
The following seemed to work for some people, therefore:
I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have changed/updated, but I still don't understand why I have to restart the container to see the changes in the browser. Shouldn't they be apparent after just refreshing the tab?
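One possible explanation (an assumption, not confirmed in the question) is that the dev server inside the container waits for inotify events, which don't propagate across Windows bind mounts; chokidar-based watchers (CRA, webpack) can be switched to polling instead, e.g.:

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
    environment:
      # Hypothetical fix: poll the filesystem instead of
      # relying on inotify events that the mount never delivers
      - CHOKIDAR_USEPOLLING=true
```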

Docker Compose Continuous Deployment setup

I am looking for a way to deploy docker-compose images and/or builds to a remote server, specifically but not limited to a DigitalOcean VPS.
docker-compose currently runs on the CircleCI Continuous Integration service, where it automatically verifies that tests pass. But it should also deploy automatically on success.
My docker-compose.yml is looking like this:
version: '2'
services:
  web:
    image: name/repo:latest
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo
    command: --smallfiles
    volumes:
      - ./data/mongodb:/data/db
  redis:
    image: redis
    volumes:
      - ./data/redis:/data
docker-compose.override.yml:
version: '2'
services:
  web:
    build: .
circle.yml relevant part:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
Your docker-compose and circle configurations are already looking pretty good.
Your docker-compose.yml is already set up to pull the image from Docker Hub, which is uploaded after tests have passed. We will use this image on the remote server: instead of building the image every time (which takes a long time), we'll use the already prepared one.
You did well to separate build: . into a docker-compose.override.yml file, as priority issues can arise if we use a docker-compose.prod.yml file.
Let's get started with the deployment:
There are various ways of getting your deployment done. The most popular ones are probably SSH and Webhooks.
We'll use SSH.
Edit your circle.yml config to take an additional step, which loads our .scripts/deploy.sh bash file:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
      - .scripts/deploy.sh
deploy.sh will contain a few instructions to connect to our remote server through SSH, update both the repository and the Docker images, and reload the Docker Compose services.
Prior executing it, you should have a remote server that contains your project folder (i.e. git clone https://github.com/zurfyx/my-project), and both Docker and Docker Compose installed.
deploy.sh
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
(
  cd "$DIR/.." # Go to project dir.
  ssh $SSH_USERNAME@$SSH_HOSTNAME -o StrictHostKeyChecking=no <<-EOF
    cd $SSH_PROJECT_FOLDER
    git pull
    docker-compose pull
    docker-compose stop
    docker-compose rm -f
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
EOF
)
Notice: last EOF is not indented. That's how bash HEREDOC works.
deploy.sh steps explained:
ssh $SSH_USERNAME@$SSH_HOSTNAME: connects to the remote host through SSH. -o StrictHostKeyChecking=no avoids SSH asking whether we trust the server.
cd $SSH_PROJECT_FOLDER: browses to the project folder (the one you did gather through git clone ...)
git pull: updates project folder. That's important to keep docker-compose / Dockerfile updated, as well as any shared volume that depends on some source code file.
docker-compose stop: our remote dependencies have just been downloaded. Stop the docker-compose services which are currently running.
docker-compose rm -f: Remove docker-compose services. This step is really important, otherwise we'll reuse old volumes.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. Execute your docker-compose.prod.yml which extends docker-compose.yml in detached mode.
On your CI you will need to fill in the following environment variables (that the deployment script uses):
$SSH_USERNAME: your SSH username (i.e. root)
$SSH_HOSTNAME: your SSH hostname (i.e. stackoverflow.com)
$SSH_PROJECT_FOLDER: the folder where the project is stored, either relative or absolute to where the $SSH_USERNAME lands on login (i.e. my-project/)
What about the SSH password? CircleCI in this case offers a way to store SSH keys, so a password is no longer needed when logging in through SSH.
Otherwise, simply edit the deploy.sh SSH connection to something like this:
sshpass -p your_password ssh user@hostname
More about SSH password here.
In conclusion, all we had to do was to create a script that connected with our remote server to let it know that the source code had been updated. Well, and to perform the appropriate upgrading steps.
FYI, that's similar to how the alternative Webhooks method works.
WatchTower solves this for you.
https://github.com/v2tec/watchtower
Your CI just needs to build the images and push to the registry. Then WatchTower polls the registry every N seconds and automagically restarts your services using the latest and greatest images. It's as simple as adding this code to your compose yaml:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /root/.docker/config.json:/config.json
  command: --interval 30
