container mount succeeds, but directories are not the same - docker

I create a container with the following command. docker inspect shows the mounts succeeded,
but the contents of the host and container directories are not the same.
I have also tried mounting /l3/mysql/conf2 to /etc/mysql, but that doesn't work either.
Environment:
Lubuntu 22.04
Docker version 20.10.15, build fd82621
docker image: mysql:5.7
docker pull mysql:5.7
docker run -d -p 3306:3306 --privileged=true \
  -v /l3/mysql/log:/var/log/mysql \
  -v /l3/mysql/data:/var/lib/mysql \
  -v /l3/mysql/conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=123456 \
  --name=mysql \
  mysql:5.7
# directories are created on the host, but all of them are empty
cd /l3/mysql/conf
ls
# but the Mounts are created
docker inspect mysql
"Mounts": [
    {
        "Type": "bind",
        "Source": "/l3/mysql/conf",
        "Destination": "/etc/mysql/conf.d",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "bind",
        "Source": "/l3/mysql/data",
        "Destination": "/var/lib/mysql",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "bind",
        "Source": "/l3/mysql/log",
        "Destination": "/var/log/mysql",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
# in the container, there are three config files
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
When I create a new file test.txt on the host in /l3/mysql/conf/, no test.txt appears in /etc/mysql/conf.d in the container:
root@lezhang-Lubuntu:/l3/mysql/conf# ls
test.txt
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
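For reference, a minimal two-sided check (assuming the container name mysql from the run command above) shows whether the bind mount is live; on a working bind mount each side sees the other's file immediately:
# on the host
touch /l3/mysql/conf/host-side.txt
docker exec mysql ls /etc/mysql/conf.d
# from the container side
docker exec mysql touch /etc/mysql/conf.d/container-side.txt
ls /l3/mysql/conf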
Thanks in advance.

Related

Container can't find bind-mounted files

When I run this docker command, I get errors:
$ docker run -v $PWD:/tmp bobrik/curator --config /tmp/testconfig.yml /tmp/actions-daily.yml
Usage: curator [OPTIONS] ACTION_FILE
Error: Invalid value for "--config": Path "/tmp/testconfig.yml" does not exist.
For some reason, Docker cannot find this file path, even though the file exists in that directory and its permissions are set to 775. Furthermore, when I inspect the container, I can see this mount information:
"HostConfig": {
"Binds": [
"/cygdrive/c/myUbuntu18/rootfs/home/jdepaul/repos/curator/test/utils:/tmp"
],
and this further down:
"Mounts": [
{
"Type": "bind",
"Source": "/cygdrive/c/myUbuntu18/rootfs/home/jdepaul/repos/curator/test/utils",
"Destination": "/tmp",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
Run it on Windows, like this:
docker run -v C:\Users\ja006652\cure:/tmp bobrik/curator --config /tmp/testconfig.yml /tmp/daily-dev-action.yml --dry-run
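If you need to keep working from a Cygwin or Git Bash shell instead, one possible workaround (a sketch, assuming Cygwin's cygpath utility is available; path handling varies by shell) is to convert $PWD to a Windows path first:
docker run -v "$(cygpath -w "$PWD"):/tmp" bobrik/curator --config /tmp/testconfig.yml /tmp/actions-daily.yml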

Running systemd in a Docker managed plugin

How do you run systemd in a Docker managed plugin? With a normal container, I can run centos/systemd and serve Apache using their example Dockerfile:
FROM centos/systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
And running it as follows
docker build --rm --no-cache -t httpd .
docker run --privileged --name httpd -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 -d httpd
However, when I try to make a managed plugin, there are some issues with the cgroups.
I've tried putting the following in config.json:
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "rprivate"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "rbind",
        "ro",
        "rprivate"
    ]
}
I also tried the following, which damages the host's cgroups and may require a hard reboot to recover:
{
    "destination": "/sys/fs/cgroup/systemd",
    "source": "/sys/fs/cgroup/systemd",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
It looks to be something to do with how OpenContainers and Moby interact: https://github.com/moby/moby/issues/36861
This is how I did it in my plugin, https://github.com/trajano/docker-volume-plugins/tree/master/centos-mounted-volume-plugin
The key is to preserve /run/docker/plugins before systemd starts and wipes the /run folder, then make sure you create the socket in the new folder:
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]
then
mount --bind /run/docker/plugins /dockerplugins
fi
The other thing is that Docker managed plugins add an implicit /sys/fs/cgroup mount AFTER the mounts defined in config.json, so creating a read-only mount will not work unless it is rebound before starting up systemd:
mount --rbind /hostcgroup /sys/fs/cgroup
With the mount defined in config.json as
{
    "destination": "/hostcgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
Creating the socket needs to be customized, since the plugin helpers write to /run/docker/plugins:
// sockets is github.com/docker/go-connections/sockets;
// h is the plugin handler that would otherwise listen on /run/docker/plugins.
l, err := sockets.NewUnixSocket("/dockerplugins/osmounted.sock", 0)
if err != nil {
    log.Fatal(err)
}
h.Serve(l)
The following links show how I achieved this in my plugin:
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/init.sh
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/config.json
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/main.go#L113
You can run httpd in a CentOS container without systemd, at least for testing purposes, with the docker-systemctl-replacement script.
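For illustration, a minimal sketch of that approach (assuming systemctl.py from the gdraheim/docker-systemctl-replacement repository has been downloaded next to the Dockerfile; not the original poster's exact setup):
FROM centos:7
RUN yum -y install httpd; yum clean all
# systemctl.py stands in for systemd's systemctl (assumed downloaded beforehand)
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd.service
EXPOSE 80
# the replacement script also acts as a minimal init when run as PID 1
CMD ["/usr/bin/systemctl"]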

Docker mounting different paths with the same config on 2 different machines

My docker compose entry for the service is:
api:
  restart: always
  build: ./project/api
  volumes:
    - ./:/usr/src/app:ro
    - ~/data:/root/data:ro # /root is the ~ in the container
I run my containers using
sudo docker-compose down && sudo docker-compose up --build -d
I have 2 different machines, both with an admin user. On inspecting the containers created:
sudo docker inspect project_api_1 | grep Mounts -A20
On machine 1, the admin user is admin:x:1005:1001:Admin User,,,:/home/admin:/bin/zsh; on machine 2 it is admin:x:1000:1000:Debian:/home/admin:/bin/zsh.
Output on machine 1
"Mounts": [
{
"Type": "bind",
"Source": "/home/admin/project",
"Destination": "/usr/src/app",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/data",
"Destination": "/root/data",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
Output on machine 2
"Mounts": [
{
"Type": "bind",
"Source": "/home/admin/project",
"Destination": "/usr/src/app",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/root/data",
"Destination": "/root/data",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
On machine 1, /root/data in the container is backed by /home/admin/data, which is the desired behavior, but on machine 2 it is backed by /root/data on the host. I can fix this by using either the relative path ./data (which symlinks to ~/data) or the absolute path /home/admin/data. My Docker version is the same on both machines: Docker version 17.06.0-ce, build 02c1d87.
I'm curious why this differs between the two machines.
In your YAML file you're using ~/data, where ~ is shorthand for the home directory of the user who ran docker-compose. From that, I guess you're running docker-compose as admin on machine 1 and as root on machine 2.
Remember that the source directory is a directory on your host machine (not the container), so the user relevant to ~ is the user on your host machine.
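For example (a sketch based on the asker's own workaround), pinning the data volume to a path relative to the compose file makes both machines behave identically no matter which user runs docker-compose:
api:
  restart: always
  build: ./project/api
  volumes:
    - ./:/usr/src/app:ro
    - ./data:/root/data:ro # ./data symlinks to ~/data on the host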

docker couchdb data volume doesn't save to local filesystem

Docker newbie here...
I am trying to persist CouchDB data from docker run to my local filesystem, but when I run the command I don't see the db files being saved. I've researched this, but I seem to be doing everything right.
jubin@jubin-VirtualBox:~$ docker run -d -p 5984:5984 -v /home/jubin/data:/usr/local/var/lib/couchdb --name couchdb klaemo/couchdb
5e0d15b933d6344d3c6a28c26e1f2f59dba796697d47ff21b2c0971837c17e54
jubin@jubin-VirtualBox:~$ curl -X PUT http://172.17.0.2:5984/db
{"ok":true}
jubin@jubin-VirtualBox:~$ ls -ltr /home/jubin/data/
total 0
On inspect, it seems to be correctly configured:
"Mounts": [
{
"Type": "volume",
"Name": "ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02",
"Source": "/var/lib/docker/volumes/ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02/_data",
"Destination": "/opt/couchdb/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/home/jubin/data",
"Destination": "/usr/local/var/lib/couchdb",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
According to the image documentation, the CouchDB data are stored in /opt/couchdb/data and not in /usr/local/var/lib/couchdb in the latest version of the image (2.0.0/latest).
You can also confirm this by doing docker exec -it CONTAINER_ID bash and then locating the CouchDB data inside the container.
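So a run command that persists the data to the host would look something like this (a sketch, assuming the 2.0.0/latest image; only the container-side path changes from the original command):
docker run -d -p 5984:5984 -v /home/jubin/data:/opt/couchdb/data --name couchdb klaemo/couchdb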

Docker volume recreates a new target path every time

I have created the following Dockerfile:
FROM resin/rpi-raspbian:jessie-20160831
..
RUN mkdir -p /usr/bin/asmp
COPY src /usr/bin/asmp/
VOLUME /usr/bin/asmp/data
..
The COPY action copies a directory structure like this:
data
  db.sqlite3
  web
    ...
  worker
    ...
I then just start a container, using something like this:
docker run -p 8000:8000 asmp
When I do an inspect I see this:
"Mounts": [
{
"Name": "30ccc87580cd85108cb4948798612630640b5564f66de848a4e2f77db8148d3a",
"Source": "/var/lib/docker/volumes/30ccc87580cd85108cb4948798612630640b5564f66de848a4e2f77db8148d3a/_data",
"Destination": "/sys/fs/cgroup",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Name": "c4473031d209eb29d3f454be68325c6b1f33aa660185bf57e8abb91a56bb260e",
"Source": "/var/lib/docker/volumes/c4473031d209eb29d3f454be68325c6b1f33aa660185bf57e8abb91a56bb260e/_data",
"Destination": "/usr/bin/asmp/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
When I stop the container (by killing it) and then start it again, it creates a new volume at a different directory. So I am wondering how to handle this situation: am I starting/stopping the container wrong, or should I specify the volume differently? I know that you can specify a target path, but can I (and should I) specify this in the Dockerfile? I'd rather specify the volume settings in the Dockerfile, since the run command already has a lot of parameters redirecting ports and devices.
Any thoughts?
You must specify the volume's host path (the source) when you run the container. Read about volumes.
docker run -p 8000:8000 --volume=<path_on_host>:/usr/bin/asmp/data asmp
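Alternatively, a named volume keeps the same storage across container recreations without tying it to a host path (a sketch; asmp-data is an arbitrary name that Docker creates on first use):
docker run -p 8000:8000 --volume=asmp-data:/usr/bin/asmp/data asmp
Note also that docker stop followed by docker start on the same container reuses its existing anonymous volume; a fresh volume is created only when the container itself is recreated with docker run.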
