How do I automate backing up a MySQL database container - docker

I have a MySQL database container running on a CentOS server. How do I automate backing up the database outside the container?

As mentioned in the comments, make sure you use volumes for the data folder.
For backing up:
Create a bash script on the host machine and make it executable:
#!/bin/bash
# Timestamp used as the backup file name
DATE=$(date '+%Y-%m-%d_%H-%M-%S')
# Dump all databases from inside the container to a file on the host
docker exec <container name> /usr/bin/mysqldump -u <putdatabaseusername here> -p<PutyourpasswordHere> --all-databases > /<path to desired backup location>/$DATE.sql
# Prune backups older than 10 days, but only if the dump succeeded
if [[ $? == 0 ]]; then
  find /<path to desired backup location>/ -mtime +10 -exec rm {} \;
fi
Change the following:
<container name> to the actual db container name
<putdatabaseusername here> to the db user
<PutyourpasswordHere> to the db password
Create a directory for backup files and replace /<path to desired backup location>/ with the actual path.
Create a cron job on the host machine that executes the script at the desired time/period (see the example after this list).
Note that this script retains backups for 10 days; change the number to reflect your needs.
Important note: this script stores the password in the file; use a secure way to provide it in production.
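For example, a crontab entry like the following (the script path and log file here are hypothetical; adjust the schedule to your needs) runs the backup every night at 02:00:
# m h dom mon dow  command
0 2 * * * /opt/scripts/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
To keep the password off the command line, you can also put the credentials in a file and pass it to mysqldump via --defaults-extra-file instead of -p<PutyourpasswordHere>.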

Related

Copy file from localhost to docker container on remote server

I have a large file on my laptop (localhost). I would like to copy this file to a docker container which is located on a remote server. I know how to do it in two steps, i.e. I first copy the file to my remote server and then I copy the file from remote server to the docker container. But, for obvious reasons, I want to avoid this.
A similar question which has a complicated answer is covered here: Copy file from remote docker container
However, in that question the direction is reversed: the file is copied from the remote container to localhost.
Additional request: is it possible that this upload can be done piece-wise or that in case of a network failure I can resume the upload from where it stopped, instead of having to upload the entire file again? I ask because the file is fairly large, ~13GB.
From https://docs.docker.com/engine/reference/commandline/cp/#corner-cases and https://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/ you would just do:
tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker exec -i CONTAINER tar Cxf DEST_PATH -
or
tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker cp - CONTAINER:DEST_PATH
Or untested, no idea if this works:
DOCKER_HOST=ssh://you#host docker cp SRC_PATH CONTAINER:DEST_PATH
This will work if you are running a *nix server and a docker container with an SSH server in it.
You can create a local tunnel on the remote server by following these steps:
mkfifo host_to_docker
netcat -lkp your_public_port < host_to_docker | nc docker_ip_address 22 > host_to_docker &
The first command creates a named pipe, which you can check with file host_to_docker.
The second uses netcat, the greatest network utility of all time: it accepts a TCP connection and forwards it to another netcat instance, receiving and forwarding the underlying SSH messages to the SSH server running in the docker container and writing its responses to the pipe we created.
The last step is:
scp -P your_public_port payload.tar.gz user#remote_host:/dest/folder
You can use the DOCKER_HOST environment variable and rsync to achieve your goal.
First, you set DOCKER_HOST, which causes your docker client (i.e., the docker CLI util) to connect to the remote server's docker daemon over SSH. This probably requires you to create an ssh-config entry for the destination server (a sketch follows below).
export DOCKER_HOST="ssh://<your-host-name>"
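A minimal ~/.ssh/config entry for this might look as follows (the hostname, user, and key path are assumptions; adjust to your server):
Host <your-host-name>
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/id_ed25519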
Next, you can use docker exec in conjunction with rsync to copy your data into the target container. This requires you to obtain the container ID via, e.g., docker ps. Note that rsync must be installed in the container.
rsync -ar -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>
Since rsync is used, this also allows you to resume uploads later (if the appropriate flags are used).
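As a sketch (the container ID and paths are placeholders, and DOCKER_HOST is still exported as above), adding --partial keeps partially transferred files, so rerunning the same command after a network failure resumes roughly where it stopped:
# Resume-friendly upload of a large file into the container
rsync -a --partial --progress -e 'docker exec -i' ./payload.tar.gz 3f2a1b4c5d6e:/data/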

How can I specify a configuration file in ArangoDB docker image?

I'm trying to spin up an ArangoDB server via docker compose.
It all works out fine with the default configuration, but I'm struggling to make the server in the container pick up a custom configuration file with minimal setup.
I've tried with overriding the startup sequence with the following:
command: >
arangod --configuration /arango.conf
I've checked and the file is present in the container, but when I check the configuration via a query through arangosh, it still references the default settings and the arangod.conf file placed in the /tmp folder.
Any hints?
--config /tmp/arangod.conf is hardcoded in docker-entrypoint.sh at line 185.
It is also overwritten on every start of the container with /etc/arangodb3/arangod.conf at line 42.
So to run arangod with your custom.conf file, you need to map it as a volume to /etc/arangodb3/arangod.conf.
via docker compose
volumes:
  - /custom.conf:/etc/arangodb3/arangod.conf
via docker cli
-v /custom.conf:/etc/arangodb3/arangod.conf
UPDATE: as per the comments,
custom.conf has to contain the defaults from /etc/arangodb3/arangod.conf.
The easiest way is to save the default config to custom.conf via the docker CLI and then add/update options in that file:
docker run --rm arangodb/arangodb:3.7.5 sh -c 'cat /etc/arangodb3/arangod.conf' > custom.conf
You will also need to track updates to the default options in /etc/arangodb3/arangod.conf in new versions of ArangoDB and reflect them in your custom.conf.
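Putting it together, a minimal docker-compose.yml sketch (the image tag, password, and host path are assumptions):
version: "3"
services:
  arangodb:
    image: arangodb/arangodb:3.7.5
    environment:
      - ARANGO_ROOT_PASSWORD=changeme
    ports:
      - "8529:8529"
    volumes:
      # custom.conf must be a complete config (start from the image default)
      - ./custom.conf:/etc/arangodb3/arangod.conf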

Do I need to set up NEO4J_IMPORT environment variable?

cp -f data/*.csv ${NEO4J_IMPORT}/
cd "${NEO4J_HOME}"
time bin/cypher-shell < $tmp_dir/create_graph.cypher
I am seeing a script to create a neo4j database, but running into a problem:
cp: /person.csv: Read-only file system
Connection refused
I am on Mac and can echo the NEO4J_HOME variable, but no NEO4J_IMPORT. Should I set my own NEO4J_IMPORT environment variable when using cypher-shell to create a graph? Where to set the NEO4J_IMPORT environment variable, if it is a must?
NEO4J_IMPORT does not need to be set in the environment.
Probably the Neo4j instance is down or running on a non-standard port. Try this one:
Run it from the Neo4j home path and make sure the file has sufficient permissions.
cat query.cypher | bin/cypher-shell -u neo4j -p neo4j -a localhost:7687 --format plain
https://neo4j.com/docs/operations-manual/3.5/tools/cypher-shell/#cypher-shell-syntax
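If the script you are following does expect NEO4J_IMPORT, you can set it yourself to Neo4j's standard import directory (a sketch; the exact path depends on your installation):
# Point NEO4J_IMPORT at the import directory under NEO4J_HOME
export NEO4J_IMPORT="${NEO4J_HOME}/import"
cp -f data/*.csv "${NEO4J_IMPORT}/"
The "cp: /person.csv: Read-only file system" error in the question is what you get when NEO4J_IMPORT is empty: cp then tries to write into /.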

How to persist configuration & analytics across container invocations in Sonarqube docker image

The official Sonarqube docker image does not persist any configuration changes such as creating users, changing the root password, or even installing new plugins.
Once the container is restarted, all configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart.
How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine, which is not recommended for production and does not persist across container restarts.
We need to setup a database of our own and point it to Sonarqube at the time of starting the container.
The Sonarqube docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile.
Since we want to persist the data across invocations, we need to make sure that a production-grade database is set up and linked to Sonarqube, and that the extensions directory is created and mounted as a volume on the host machine, so that all the downloaded plugins are available across container invocations and can be used by multiple containers (if required).
Database Setup:
create database sonar;
grant all on sonar.* to 'sonar'@'%' identified by 'SOME_PASSWORD';
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the sonarqube host IP.
It is not necessary to create tables; Sonarqube creates them if it doesn't find them.
Starting up Sonarqube container:
# create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
Hi @VanagaS and others landing here.
I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.
Notice this line SONARQUBE_HOME in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and other folders under that path and store data there as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
- name: SONARQUBE_HOME
value: /sonarqube-data
...
...
volumeMounts:
- name: app-volume
mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
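For example, a minimal volumes section backing that mount might look like this (the PVC name is an assumption):
volumes:
  - name: app-volume
    persistentVolumeClaim:
      claimName: sonarqube-pvc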
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf, and so forth folders, then save data therein.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Since Sonarqube v7.9, MySQL is not supported; one needs to use PostgreSQL. Install PostgreSQL and configure it to listen on the host IP rather than localhost (a private IP is preferred).
Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04
postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;
create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
Start the container
docker run -d \
  --name sonarqube \
  -p 9000:9000 \
  -e SONARQUBE_JDBC_USERNAME=sonar \
  -e SONARQUBE_JDBC_PASSWORD=mypass \
  -e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
  -v /server_data/sonarqube/data:/opt/sonarqube/data \
  -v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
  sonarqube
You may face this error when you run docker logs <container_id>:
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This is the fix, run on your host
sysctl -w vm.max_map_count=262144
To make PostgreSQL listen on the host IP, edit /etc/postgresql/10/main/postgresql.conf.
To add docker as a client for postgres, edit /etc/postgresql/10/main/pg_hba.conf.
(10 is the PostgreSQL version used here; see the sketch below.)
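As a sketch of those edits (the private IP and docker subnet are assumptions; adjust to your network):
# /etc/postgresql/10/main/postgresql.conf: listen on the private IP as well
listen_addresses = 'localhost,10.0.0.5'
# /etc/postgresql/10/main/pg_hba.conf: allow the docker bridge network to connect
host    sonar    sonar    172.17.0.0/16    md5
Reload PostgreSQL afterwards, e.g. sudo systemctl reload postgresql.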

How to mount local volumes in docker machine

I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:
web:
build: .
command: ./run_web.sh
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
When running docker-compose up -d all goes well until trying to execute the command and an error is produced:
Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./run_web.sh": stat ./run_web.sh: no such file or directory
Local volumes are not mounted to the remote machine. What's the recommended strategy for mounting the local volumes with the webapp's code?
Docker-machine automounts the user's directory... But sometimes that just isn't enough.
I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine
Add Virtual Machine Mount Point (part 1)
CLI: (Only works when machine is stopped)
VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount
So an example in windows would be
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount
GUI: (does NOT require the machine be stopped)
Start "Oracle VM VirtualBox Manager"
Right-Click <machine name> (default)
Settings...
Shared Folders
The Folder+ Icon on the Right (Add Share)
Folder Path: <host dir> (e:)
Folder Name: <mount name> (e)
Check on "Auto-mount" and "Make Permanent" (Read only if you want...) (The auto-mount is sort of pointless currently...)
Mounting in boot2docker (part 2)
Manually mount in boot2docker:
There are various ways to log in: use "Show" in "Oracle VM VirtualBox Manager", or ssh/putty into the machine by IP address (docker-machine ip default), etc...
sudo mkdir -p <local_dir>
sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
But this is only good until you restart the machine, and then the mount is lost...
Adding an automount to boot2docker:
While logged into the machine
Edit/create (as root) /mnt/sda1/var/lib/boot2docker/bootlocal.sh, sda1 may be different for you...
Add
mkdir -p <local_dir>
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.
Old method: Less recommended, but left as an alternative
Edit (as root) /mnt/sda1/var/lib/boot2docker/profile, sda1 may be different for you...
Add
add_mount() {
if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
fi
}
add_mount <local dir> <mount name>
As a last resort, you can take the slightly more tedious alternative, and you can just modify the boot image.
git -c core.autocrlf=false clone https://github.com/boot2docker/boot2docker.git
cd boot2docker
git -c core.autocrlf=false checkout v1.8.1 #or your appropriate version
Edit rootfs/etc/rc.d/automount-shares
Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example:
try_mount_share /e e
Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc...
docker build -t boot2docker . #This will take about an hour the first time :(
docker run --rm boot2docker > boot2docker.iso
Backup the old boot2docker.iso and copy your new one in its place, in ~/.docker/machine/machines/
This does work, it's just long and complicated
docker version 1.8.1, docker-machine version 0.4.0
Also ran into this issue, and it looks like local volumes are not mounted when using docker-machine. A hacky solution is to:
Get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd
Use a command line tool like rsync to copy the folder to the remote system:
rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>
The default pwd is /root, so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root
NB: you would need to supply the password for the remote system. You can quickly create one by ssh-ing into the remote system and creating a password.
Change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app
run docker-compose up -d
NB: when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
It's not perfect, but it works. An issue is ongoing at https://github.com/docker/machine/issues/179
Other projects that attempt to solve this include docker-rsync.
At the moment I can't really see any way to mount volumes on machines, so the approach by now would be to somehow copy or sync the files you need into the machine.
There are conversations on how to solve this issue on docker-machine's github repo. Someone made a pull request implementing scp on docker-machine, and it has already been merged on master, so it's very likely that the next release will include it.
Since it's not yet released, for now I would recommend that, if you have your code hosted on github, you just clone your repo before running the app:
web:
build: .
command: sh -c "git clone https://github.com/my/repo.git; ./repo/run_web.sh"
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
Update: Looking further, I found that the feature is already available in the latest binaries; when you get them, you'll be able to copy your local project by running a command like this:
docker-machine scp -r . dev:/home/docker/project
This being the general form:
docker-machine scp [machine:][path] [machine:][path]
So you can copy files from, to and between machines.
Cheers!
Since October 2017 there is a new command for docker-machine that does the trick, but make sure there is nothing in the directory before executing it, otherwise it might get lost:
docker-machine mount <machine-name>:<guest-path> <host-path>
Check the docs for more information: https://docs.docker.com/machine/reference/mount/
PR with the change: https://github.com/docker/machine/pull/4018
If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename> command like this:
rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_from> :<host_directory_to_sync_to>
It uses this command format of rsync, leaving HOST blank:
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
(http://linuxcommand.org/man_pages/rsync1.html)
Finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working by adding a shared folder in Oracle VM VirtualBox manager and disabling path conversion. If you have Windows 10+ then you're best to use the newer Docker for Windows.
First, the upgrade pain:
Uninstall VirtualBox first.
Yep that may break stuff in other tools like Android Studio. Thanks Docker :(
Install new version of Docker Toolbox.
Redis Database Example:
redis:
image: redis:alpine
container_name: redis
ports:
- "6379"
volumes:
- "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
run docker-machine stop default - ensure the VM is halted
In Oracle VM VirtualBox Manager ...
Add a shared folder in the default VM via the GUI or the command line:
D:\Projects\MyProject\db => /var/db
In docker-compose.yml...
Mapped redis volume as: "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
run docker-machine start default to restart the VM.
cd D:\Projects\MyProject\
docker-compose up should work now.
Redis now creates its database in D:\Projects\MyProject\db\redis\dump.rdb
Why avoid relative host paths?
I avoided relative host paths for Windows Toolbox as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml but at least my fellow developers can easily do it even if their project folder is elsewhere without having to hack the docker-compose.yml file (bad for SCM).
Original Issue
FYI ... Here is the original error I got when I used nice clean relative paths that used to work just fine for older versions. My volume mapping used to be just "./db/redis:/data:rw"
ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data'
This breaks for two reasons:
It can't access D: drive
Volume paths can't include \ characters
docker-compose adds them and then blames you for it !!
Use COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.
I recommend documenting your additional VM shared folder mapping in your docker-compose.yml file (see the sketch below), as you may need to uninstall VirtualBox again and reset the shared folder, and anyway your fellow devs will love you for it.
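A sketch of such a comment block, using the paths from this example (adjust them to your project):
# NOTE: requires a VirtualBox shared folder on the 'default' machine:
#   D:\Projects\MyProject\db => /var/db
# Recreate it (Settings -> Shared Folders) after any VirtualBox reinstall.
redis:
  image: redis:alpine
  volumes:
    - "/var/db/redis:/data:rw"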
All other answers were good for the time but now (Docker Toolbox v18.09.3) all works out of the box. You just need to add a shared folder into VirtualBox VM.
Docker Toolbox automatically adds C:\Users as the shared folder /c/Users under the virtual linux machine (using the VirtualBox shared folders feature), so if your docker-compose.yml file is located somewhere under this path and you mount host machine directories only under this path, all should work out of the box.
For example:
C:\Users\username\my-project\docker-compose.yml:
...
volumes:
- .:/app
...
The . path will be automatically converted to absolute path C:\Users\username\my-project and then to /c/Users/username/my-project. And this is exactly how this path is seen from the point of view of linux virtual machine (you can check it: docker-machine ssh and then ls /c/Users/username/my-project). So, the final mount will be /c/Users/username/my-project:/app.
All works transparently for you.
But this doesn't work if your host mount path is not under C:\Users path. For example, if you put the same docker-compose.yml under D:\dev\my-project.
This can be fixed easily though.
Stop the virtual machine (docker-machine stop).
Open Virtual Box GUI, open Settings of Virtual Machine named default, open Shared Folders section and add the new shared folder:
Folder Path: D:\dev
Folder Name: d/dev
Press OK twice and close Virtual Box GUI.
Start the virtual machine (docker-machine start).
That's all. All paths of host machine under D:\dev should work now in docker-compose.yml mounts.
It can be done with a combination of three tools:
docker-machine mount, rsync, inotifywait
TL;DR
Script based on all below is here
Let's say you have your docker-compose.yml and run_web.sh in /home/jdcaballerov/web.
Mount the directory from the docker machine (which has the same path as on your host) to a temporary directory on your host: docker-machine mount machine:/home/jdcaballerov/web /tmp/some_random_dir
Synchronize the mounted directory with the directory on your host: rsync -r /home/jdcaballerov/web /tmp/some_random_dir
Synchronize on every change of files in your directory:
inotifywait -r -m -e close_write --format '%w%f' /home/jdcaballerov/web | while read CHANGED_FILE
do
rsync /home/jdcaballerov/web /tmp/some_random_dir
done
BE AWARE - there are two directories which have the same path: one is on your local (host) machine, the second is on the docker machine.
I assume the run_web.sh file is in the same directory as your docker-compose.yml file. Then the command should be command: /app/run_web.sh,
unless the Dockerfile (which you are not disclosing) takes care of putting the run_web.sh file into the Docker image.
To summarize the posts here, attached is an updated script to create an additional host mount point and automount it when VirtualBox restarts. The working environment, in brief:
- Windows 7
- docker-machine.exe version 0.7.0
- VirtualBox 5.0.22
#!/usr/bin/env bash
: ${NAME:=default}
: ${SHARE:=c/Proj}
: ${MOUNT:=/c/Proj}
: ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
SCRIPT=/mnt/sda1/var/lib/boot2docker/bootlocal.sh
## set -x
docker-machine stop $NAME
"$VBOXMGR" sharedfolder add $NAME --name c/Proj --hostpath 'c:\' --automount 2>/dev/null || :
docker-machine start $NAME
docker-machine env $NAME
docker-machine ssh $NAME 'echo "mkdir -p $MOUNT" | sudo tee $SCRIPT'
docker-machine ssh $NAME 'echo "sudo mount -t vboxsf -o rw,user $SHARE $MOUNT" | sudo tee -a $SCRIPT'
docker-machine ssh $NAME 'sudo chmod +x /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
docker-machine ssh $NAME 'sudo /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
#docker-machine ssh $NAME 'ls $MOUNT'
I am using docker-machine 0.12.2 with the virtualbox driver on my local machine. I found that there is a directory /hosthome/$(user name) from which you have access to local files.
Just thought I'd mention I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, once you change your password docker will start mounting the volumes within containers as blank.
It gives no indication that what is actually happening is that it is now failing to access the share with the old cached credentials.
The solution in this scenario is to reset the credentials either through the UI (Settings -> Shared drives) or to disable and then re-enable drive sharing and enter the new password.
It would be useful if docker-compose gave an error in these situations.
