MariaDB configs not used by Docker container

I want to customize some MariaDB server configs (wait_timeout, etc.). I followed the instructions on the official Docker image and created two .cnf files inside my host dir config/mariadb which is mounted in the container via these volumes:
volumes:
  - /config/mariadb:/etc/mysql/conf.d:ro
  - /config/mariadb:/etc/mysql/mariadb.conf.d:ro
root@server:~# ls -l /config/mariadb
total 12
-rwxrwx--- 1 root root 263 Jun 8 13:43 mariadb-finetuning.cnf
-rwxrwx--- 1 root root 367 Jun 8 13:43 mariadb-inno.cnf
These are the config files (mariadb-inno.cnf first, then mariadb-finetuning.cnf):
[mysqld]
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# default_storage_engine = InnoDB
# innodb_buffer_pool_size = 256M
# innodb_log_buffer_size = 8M
# innodb_file_per_table = 1
# innodb_open_files = 400
# innodb_io_capacity = 400
innodb_flush_method = fsync
[mysqld]
# * Fine Tuning
#
# max_connections = 100
# connect_timeout = 5
wait_timeout = 3600
# max_allowed_packet = 16M
# thread_cache_size = 128
# sort_buffer_size = 4M
# bulk_insert_buffer_size = 16M
# tmp_table_size = 32M
# max_heap_table_size = 32M
Somehow the options have no effect and I don't know why:
show variables where variable_name = 'innodb_flush_method'
# Variable_name Value
# innodb_flush_method O_DIRECT
Is there a better way to check why MariaDB does not pick up these configs?
Update:
Manually editing my.cnf inside the container works but isn't what I want to do.
Running mysqld --print-defaults prints out ... --innodb_flush_method=fsync as part of the result. It seems to me that this may be due to the entrypoint script /docker-entrypoint.sh?

If your MariaDB can run with your settings, use 'docker exec' to run a bash inside your MariaDB container and check whether your volume mount shows the expected files in the expected locations.
Then also check that the configuration or startup of MariaDB inside the container reads these files at all. You could do that by providing empty files, or files with gibberish inside.
Only then start looking at the content of the files. Once you reach this point, you have control over the containerized database engine.
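A rough sketch of those checks run from the host (the container name mariadb is just a placeholder):
# do the mounted files show up where MariaDB looks for extra config?
docker exec -it mariadb ls -l /etc/mysql/conf.d /etc/mysql/mariadb.conf.d
# would the server actually pick the options up?
docker exec -it mariadb mysqld --print-defaults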

The problem was file permissions.
The config files must be readable by the Linux user running the database.
There may be several solutions; in my case the simplest was to apply 774 file permissions so that the files are readable by every user:
root@server:~# ls -l /config/mariadb
total 12
-rwxrwxr-- 1 root root 263 Jun 8 13:43 mariadb-finetuning.cnf
-rwxrwxr-- 1 root root 367 Jun 8 13:43 mariadb-inno.cnf
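For reference, those permissions can be set with something like this (using the host path from the question); 644 would work just as well, as long as the user running mysqld inside the container can read the files:
chmod 774 /config/mariadb/*.cnf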

Related

docker-compose bind mount volume file propagation to host

I've read in this tutorial that when you create a docker-compose.yml file and bind mount volumes, if you don't create the folder on your host, then running docker-compose up will automatically create the folder and populate it with the content of the container's folder.
Here is the quote:
Then you should volume bind two folders. /etc/nginx is where all your configuration files are stored, and /etc/ssl/private is where your SSL certificates are stored. It is VERY important that your config folder does NOT exist on your host first time you’re starting the container. When you start your container through docker-compose, it will automatically create the folder and populate it with the contents of the container. If you have created an empty config folder on your host, it will mount that, and the folder inside the container will be empty.
But it doesn't seem to work for me.
Here are a few things I've checked.
My Docker is not running as root. I created the docker group on my machine and added my user to it, so I don't need to run sudo docker <command>
I run Ubuntu Server 18 LTS
It doesn't matter whether I bind mount the volumes as read-only or not
When running docker-compose up it creates the folders, but they are owned by the root user
The created folders owned by the root user are empty
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    container_name: reverse_nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./html:/usr/share/nginx/html"
      - "./conf:/etc/nginx"
      - "./ssl:/etc/ssl/private"
    restart: unless-stopped
And here is what is created:
icare@icare:~/nginx
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Mar 20 15:46 conf
-rw-r--r-- 1 icare icare 269 Mar 20 15:27 docker-compose.yml
drwxr-xr-x 2 root root 4096 Mar 20 15:46 html
drwxr-xr-x 2 root root 4096 Mar 20 15:46 ssl
./conf:
total 0
./html:
total 0
./ssl:
total 0
I can't find a solution online. It seems that a person asked on GitHub and it was solved the hard way (copying files from the container, then bind-mounting everything), but I can't help thinking that there is another way. Or maybe the tutorial I'm following is outdated or wrong?
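For what it's worth, the "hard way" mentioned above would look roughly like this (nginx-tmp is just a throwaway container name): copy the defaults out of a plain container, then bind mount the now-populated folder as in the compose file above.
docker run -d --name nginx-tmp nginx
docker cp nginx-tmp:/etc/nginx ./conf
docker rm -f nginx-tmp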

Docker: Monitor disk writes to the container, i.e. by the overlay storage driver

I would like to monitor data written "inside" a Docker container, meaning data written to the backing filesystem by the overlay storage driver. Not data written to volumes, tmpfs or bind mounts. Typical monitoring tools, such as docker stats, seem to report the total amount of data written.
BLOCK I/O The amount of data the container has read to and written from [sic] block devices on the host
Source: docker stats
The idea is to keep containers as read-only as possible, by finding "write-heavy" files / folders and moving them to volumes or bind mounts. So an ideal solution would not (only) show the data currently written, but the total amount of data written since the container was started, ideally breaking it down to single files.
At the moment I'm simply using find -type f -mtime x from the container shell, where x is smaller than the image age, but there must be a better solution for this.
I'm using: Server Version: 18.06.1-ce, Storage Driver: overlay2, Backing Filesystem: extfs
Actually the docker storage driver itself provides the answer already.
Taking the overlay2 storage driver, which is the default driver on most distributions, as an example, we see that the container layer, where all data written to the container is stored, is kept in a separate folder.
(Source: "How the overlay driver works", Docker documentation.)
Total amount of data written to the container layer
For a complete overview of what has been written to the container, we only have to take a look at the upperdir, which is called diff on the backing (host) file system.
The path of the diff folder can be found with
docker container inspect <container_name> --format='{{.GraphDriver.Data.UpperDir}}' # or
docker container inspect <container_name> | grep UpperDir
With default settings, this path points to /var/lib/docker/overlay2/. Note that access to the "inner workings" of docker requires root access on the host, and it's a good idea not to do any writes to these folders.
Now that we have the folder on the backing file system, we can simply run du in as much detail as we want. As a test example, I've used an Alpine image that runs a script which writes a 10 MB dummy file every 10 seconds.
root@testbox:/var/lib/docker/overlay2/83a825d...# du -h -d 1
8.0K ./work
216M ./diff
216M .
root@testbox:/var/lib/docker/overlay2/83a825d...# ll diff/tmp
total 220164
drwxrwxrwt 2 root root 4096 Oct 21 22:57 ./
drwxr-xr-x 3 root root 4096 Oct 21 22:53 ../
-rw-r--r-- 1 root root 9266613 Oct 21 22:53 dummy0.tar.gz
-rw-r--r-- 1 root root 9266613 Oct 21 22:55 dummy10.tar.gz
-rw-r--r-- 1 root root 9266613 Oct 21 22:55 dummy11.tar.gz
[...]
Hence, seeing all the files and folders written to the container is as easy as with any other directory.
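Putting it together, a minimal sketch (run as root on the host; <container_name> is a placeholder):
UPPERDIR=$(docker container inspect <container_name> --format '{{.GraphDriver.Data.UpperDir}}')
du -h -d 1 "$UPPERDIR"    # total data written to the container layer
ls -l "$UPPERDIR"/tmp     # or drill down into any path of interest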

Can I use "mount" inside a Docker Alpine container?

I am Dockerising an old project. A feature in the project pulls in user-specified Git repos, and since the size of a repo could cause the filing system to be overwhelmed, I created a local filing system of a fixed size, and then mounted it. This was intended to prevent the web host from having its file system filled up.
The general approach is this:
IMAGE=filesystem/image.img
MOUNT_POINT=filesystem/mount
SIZE=20
PROJECT_ROOT=`pwd`
MOUNTCMD=/bin/mount   # command used to list current mounts in the check below
# Number of M to set aside for this filing system
dd if=/dev/zero of=$IMAGE bs=1M count=$SIZE &> /dev/null
# Format: the -F permits creation even though it's not a "block special device"
mkfs.ext3 -F -q $IMAGE
# Mount if the filing system is not already mounted
$MOUNTCMD | cut -d ' ' -f 3 | grep -q "^${PROJECT_ROOT}/${MOUNT_POINT}$"
if [ $? -ne 0 ]; then
    # -p Create all parent dirs as necessary
    mkdir -p $MOUNT_POINT
    /bin/mount -t ext3 $IMAGE $MOUNT_POINT
fi
This works fine in a local or remote Linux VM. However, I'd like to run this shell code, or something like it, inside a container. Part of the reason is to contain all the fiddly stuff inside a container, so that building a new host machine is kept as simple as possible (in my view, setting up custom mounts and cron-restart rules on the host works against that).
So, this command does not work inside a container ("filesystem" is an on-host Docker volume)
mount -t ext3 filesystem/image.img filesystem/mount
mount: can't setup loop device: No space left on device
It also does not work on a container folder ("filesystem2" is a container directory):
dd if=/dev/zero of=filesystem2/image.img bs=1M count=20
mount -t ext3 filesystem2/image.img filesystem2/mount
mount: can't setup loop device: No space left on device
I wonder whether containers just don't have the right internal machinery to do mounting, and thus whether I should change course. I'd prefer not to spend too much time on this (I'm just moving a project to a Docker-only server) which is why I would like to get mount working if I can.
Other options
If that's not possible, then a size-limited Docker volume that works with both Docker and Swarm may be an alternative I'd need to look into. There are conflicting reports on the web as to whether this actually works (see this question).
There is a suggestion here to say this is supported in Flocker. However, I am hesitant to use that, as it appears to be abandoned, presumably having been affected by ClusterHQ going bust.
This post indicates I can use --storage-opt size=120G with docker run. However, it does not look like it is supported by docker service create (unless perhaps the option has been renamed).
Update
As per the comment convo, I made some progress; I found that adding --privileged to the docker run enables mounting, at the cost of removing security isolation. A helpful commenter says that it is better to use the more fine-grained control of --cap-add SYS_ADMIN, allowing the container to retain some of its isolation.
However, Docker Swarm has not yet implemented either of these flags, so I can't use this solution. This lengthy feature request suggests to me that this feature is not going to be added in a hurry; it's been pending for two years already.
You won't be able to do this safely inside of a container. Docker removes the mount privilege from containers because with it you could mount the host filesystem and escape the container. However, you can do this outside of the container and mount the filesystem into the container as a volume using the default local driver. The size option isn't supported by most filesystems, tmpfs being one of the few exceptions. Most of them use the size of the underlying device, which you defined with the image file creation command:
dd if=/dev/zero of=filesystem/image.img bs=1M count=$SIZE
I had trouble getting docker to create the loop device dynamically, so here's the process to create it manually:
$ sudo losetup --find --show ./vol-image.img
/dev/loop0
$ sudo mkfs -t ext3 /dev/loop0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 10240 1k blocks and 2560 inodes
Filesystem UUID: 25c95fcd-6c78-4b8e-b923-f808517b28df
Superblock backups stored on blocks:
8193
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
When defining the volume, mount options are passed almost verbatim from the mount command you would run on the command line:
docker volume create --driver local --opt type=ext3 \
--opt device=filesystem/image.img app_vol
docker service create --mount type=volume,src=app_vol,dst=/filesystem/mount ...
or in a single service create command:
docker service create \
--mount type=volume,src=app_vol,dst=/filesystem/mount,volume-driver=local,volume-opt=type=ext3,volume-opt=device=filesystem/image.img ...
With docker run, the command looks like:
$ docker run -it --rm --mount type=volume,dst=/data,src=ext3vol,volume-driver=local,volume-opt=type=ext3,volume-opt=device=/dev/loop0 busybox /bin/sh
/ # ls -al /data
total 17
drwxr-xr-x 3 root root 1024 Sep 19 14:39 .
drwxr-xr-x 1 root root 4096 Sep 19 14:40 ..
drwx------ 2 root root 12288 Sep 19 14:39 lost+found
The only prerequisite is that you create this file and loop device before creating the service, and that this file is accessible wherever the service is scheduled. I would also suggest making all of the paths in these commands fully qualified rather than relative to the current directory. I'm pretty sure there are a few places that relative paths don't work.
I have found a size-limiting solution I am happy with, and it does not use the Linux mount command at all. I've not implemented it yet, but the tests documented below are satisfying enough. Readers may wish to note the minor warnings at the end.
I had not tried mounting Docker volumes prior to asking this question, since part of my research stumbled on a Stack Overflow poster casting doubt on whether Docker volumes can be made to respect a size limitation. My test indicates that they can, but you may wish to test this on your own platform to ensure it works for you.
Size limit on Docker container
The below commands have been cobbled together from various sources on the web.
To start with, I create a volume like so, with a 20m size limit:
docker volume create \
--driver local \
--opt o=size=20m \
--opt type=tmpfs \
--opt device=tmpfs \
hello-volume
I then create an Alpine Swarm service with a mount on this volume:
docker service create \
--mount source=hello-volume,target=/myvol \
alpine \
sleep 10000
We can check that the volume is mounted by getting a shell on the single container in this service:
docker exec -it amazing_feynman.1.lpsgoyv0jrju6fvb8skrybqap sh
/ # ls -l /myvol
total 0
OK, great. So, while remaining in this shell, let's try slowly overwhelming this disk, in 5m increments. We can see that it fails on the fifth try, which is what we would expect:
/ # cd /myvol
/myvol # ls
/myvol # dd if=/dev/zero of=image1 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image2 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 10240
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
/myvol # dd if=/dev/zero of=image3 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image4 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 20480
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image3
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image4
/myvol # dd if=/dev/zero of=image5 bs=1M count=5
dd: writing 'image5': No space left on device
1+0 records in
0+0 records out
/myvol #
Finally, let's see if we can get an error by overwhelming the disk in one go, in case the limitation only applies to newly opened file handles in a full disk:
/ # cd /myvol
/myvol # rm *
/myvol # dd if=/dev/zero of=image1 bs=1M count=21
dd: writing 'image1': No space left on device
21+0 records in
20+0 records out
It turns out we can, so that looks pretty robust to me.
Nota bene
The volume is created with a type and a device of "tmpfs", which sounded to me worryingly like a RAM disk. I've successfully checked that the volume remains connected and intact after a system reboot, so it looks good to me, at least for now.
However, I'd say that when it comes to organising your data persistence systems, don't just copy what I have. Make sure the volume is robust enough for your use case before you put it into production, and of course, make sure you include it in your back-up process.
(This is for Docker version 18.06.1-ce, build e68fc7a).

Recreate container on stop with docker-compose

I am trying to set up a multi-container service with docker-compose.
Some of the containers need to restart from a fresh state (e.g. the file system should be exactly as in the image) whenever they are restarted.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply, as it only recreates the containers when that command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general Docker question: what is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand new container from the same image. By restarted, I mean the docker command docker restart, or docker stop followed by docker start.
In Docker, immutability typically refers to the image layers. They are immutable, and any changes are written to a container-specific copy-on-write layer of the filesystem. That container-specific layer lasts for the lifetime of the container. So to have those files not persist, you have two options:
1. Recreate the container instead of just restarting it
2. Don't write the changes to the container filesystem, and don't write them to any persistent volumes
You cannot do #1 with a restart policy, by its very definition. A restart policy gives you the same container filesystem, with the application restarted. But if you use Docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result.
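As a sketch, running the same image as a swarm service looks something like this (service name and image are placeholders; --restart-condition any is the default anyway):
docker service create --name myapp --restart-condition any myimage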
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs volume that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line. Here's an example on the docker command line.
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough; it behaves like a normal container. Let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if you have content in the image at that location. I've tried combinations of named volumes and the mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container entrypoint/cmd by copying from another location.
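For completeness, a minimal compose sketch of the tmpfs line mentioned above (service name, image and path are examples, not from the original post):
version: '3'
services:
  app:
    image: busybox
    command: sleep 10000
    tmpfs:
      - /data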
In order not to persist any changes to your containers, it is enough that you don't map any directory from the host into the container.
That way, every time the container runs (with docker run or docker-compose up), it starts with a fresh file system.
docker-compose down also removes the containers, deleting any data.
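So a fresh start with compose looks something like this (a sketch):
docker-compose down
docker-compose up -d
# or, as mentioned in the question, recreate in one step:
docker-compose up -d --force-recreate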
The best solution I have found so far is for the container itself to make sure to clean up when starting or stopping. I solve this by cleaning up when starting.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
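For context, a hypothetical Dockerfile to pair with that entrypoint script might look like this (base image and paths are assumptions, not from the original post):
FROM alpine
# keep a pristine copy of the app inside the image
COPY ./app /srv/template
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]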

Having trouble setting up a persistent data volume for a Docker image

I've been looking into setting up a data volume for a Docker container that I'm running on my server. The container is from this FreePBX image https://hub.docker.com/r/jmar71n/freepbx/
Basically I want persistent data so I don't lose my VoIP extensions and settings if Docker stops. I've tried many guides, both here on Stack Overflow and in the Docker man pages, but I just can't quite get it to work.
Can anyone help me with what commands I need to run in order to attach a volume to the FreePBX image I linked above?
You can do this by running a container with the -v option and mapping to a host directory - you just need to know where the container's storing the data.
Looking at the Dockerfile for that image, I'm assuming that the data you're interested in is stored in MySQL. In the MySQL config, the data directory the container is using is /var/lib/mysql.
So you can start your container like this, mapping the MySql data directory to /docker/pbx-data on your host:
> docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx
20b45b8fb2eec63db3f4dcab05f89624ef7cb1ff067cae258e0f8a910762fb1a
Use docker inspect to confirm that the mount is mapped as expected:
> docker inspect --format '{{json .Mounts}}' 20b
[{"Source":"/docker/pbx-data",
"Destination":"/var/lib/mysql",
"Mode":"","RW":true,"Propagation":"rprivate"}]
When the container runs it bootstraps the database, so on the host you'll be able to see the contents of the MySql data directory the container is using:
> ls -l /docker/pbx-data
total 28684
-rw-r----- 1 103 root 2062 Sep 21 09:30 20b45b8fb2ee.err
-rw-rw---- 1 103 messagebus 18874368 Sep 21 09:30 ibdata1
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile0
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile1
drwx------ 2 103 root 4096 Sep 21 09:30 mysql
drwx------ 2 103 messagebus 4096 Sep 21 09:30 performance_schema
If you kill the container and run another one with the same volume mapping, it will have all the data files from the previous container, and your app state should be preserved.
I'm not familiar with FreePBX, but if there is state being stored in other directories, you can find the locations in config and map them to the host in the same way, with multiple -v options.
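As a quick sketch of that check, reusing the container ID and volume mapping from above:
docker rm -f 20b45b8fb2ee
docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx
# the new container starts with the data files left behind by the old one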
Hi Elton Stoneman and user3608260!
Yes, you are assuming correctly that the data (records, users, configs, etc.) is saved in MySQL.
But in Asterisk, all configuration is saved in '.conf' files and the like.
In this case, the files user3608260 is looking for are stored in '/etc/asterisk/*'.
Your answer works perfectly with one more option: -v /local_to_save:/etc/asterisk
The final docker command:
docker run -d -t -v /docker/pbx-data:/var/lib/mysql -v /docker/pbx-asterisk:/etc/asterisk jmar71n/freepbx
[Assuming /docker/pbx-asterisk is a host directory.]
