When I run this docker command, I get an error:
$ docker run -v $PWD:/tmp bobrik/curator --config /tmp/testconfig.yml /tmp/actions-daily.yml
Usage: curator [OPTIONS] ACTION_FILE
Error: Invalid value for "--config": Path "/tmp/testconfig.yml" does not exist.
For some reason, Docker cannot find this file path, even though the file exists in that directory and its permissions are set to 775. Furthermore, when I inspect the container, I can see this mount information:
"HostConfig": {
"Binds": [
"/cygdrive/c/myUbuntu18/rootfs/home/jdepaul/repos/curator/test/utils:/tmp"
],
and this further down:
"Mounts": [
{
"Type": "bind",
"Source": "/cygdrive/c/myUbuntu18/rootfs/home/jdepaul/repos/curator/test/utils",
"Destination": "/tmp",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
You are running Docker for Windows, so pass a Windows-style path instead of the Cygwin /cygdrive path that $PWD expands to. Run it like this:
docker run -v C:\Users\ja006652\cure:/tmp bobrik/curator --config /tmp/testconfig.yml /tmp/daily-dev-action.yml --dry-run
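If you want to keep deriving the host path from the current directory under Cygwin, a minimal variant (assuming the cygpath utility that ships with Cygwin) converts it to a Windows-style path that Docker for Windows can resolve:
docker run -v "$(cygpath -w "$PWD")":/tmp bobrik/curator --config /tmp/testconfig.yml /tmp/actions-daily.yml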
I create a container with the following command. docker inspect shows that the mounts succeeded, but the contents of the directories on the host and in the container are not the same. I have also tried mounting /l3/mysql/conf2 to /etc/mysql, but that doesn't work either.
Environment:
Lubuntu 22.04
Docker version 20.10.15, build fd82621
docker image: mysql:5.7
docker pull mysql:5.7
docker run -d -p 3306:3306 --privileged=true \
  -v /l3/mysql/log:/var/log/mysql \
  -v /l3/mysql/data:/var/lib/mysql \
  -v /l3/mysql/conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=123456 \
  --name=mysql \
  mysql:5.7
# directories are created on the host, but all of them are empty
cd /l3/mysql/conf
ls
# but the Mounts are created
docker inspect mysql
"Mounts": [
{
"Type": "bind",
"Source": "/l3/mysql/conf",
"Destination": "/ect/mysql/conf.d",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/l3/mysql/data",
"Destination": "/var/lib/mysql",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/l3/mysql/log",
"Destination": "/var/log/mysql",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
# in the container, there are three config files
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
When I create a new file test.txt on the host in /l3/mysql/conf/, no test.txt appears in the container's /etc/mysql/conf.d:
root@lezhang-Lubuntu:/l3/mysql/conf# ls
test.txt
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
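For completeness, this is how the two sides can be compared directly (the container is named mysql per the run command above):
# host side
ls /l3/mysql/conf
# container side
docker exec mysql ls /etc/mysql/conf.d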
Thanks in advance.
How do you run systemd in a Docker managed plugin? With a normal container, I can run centos/systemd and run an Apache server using their example Dockerfile:
FROM centos/systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
and running it as follows:
docker build --rm --no-cache -t httpd .
docker run --privileged --name httpd -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 -d httpd
However, when I try to make a managed plugin, there are some issues with the cgroups. I've tried putting each of the following variants in config.json:
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"rprivate"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"rbind",
"ro",
"rprivate"
]
}
I also tried the following, which damages the host's cgroups and may require a hard reboot to recover:
{
"destination": "/sys/fs/cgroup/systemd",
"source": "/sys/fs/cgroup/systemd",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
It looks to be something to do with how opencontainers and Moby interact: https://github.com/moby/moby/issues/36861
This is how I did it in my centos-mounted-volume-plugin: https://github.com/trajano/docker-volume-plugins/tree/master/centos-mounted-volume-plugin
The key thing is to preserve /run/docker/plugins before systemd gets started and wipes the /run folder, and then make sure you create the socket in the new folder:
# keep the plugin socket directory alive somewhere systemd will not wipe
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]
then
  mount --bind /run/docker/plugins /dockerplugins
fi
The other thing is that Docker managed plugins add an implicit /sys/fs/cgroup mount AFTER the mounts defined in config.json, so creating a read-only mount will not work unless it is rebound before starting up systemd:
mount --rbind /hostcgroup /sys/fs/cgroup
With the mount defined in config.json as
{
"destination": "/hostcgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
Creating the socket also needs to be customized, since the plugin helpers write to /run/docker/plugins by default:
// listen on the rebound directory instead of /run/docker/plugins
l, err := sockets.NewUnixSocket("/dockerplugins/osmounted.sock", 0)
if err != nil {
    log.Fatal(err)
}
h.Serve(l)
The following files show how I achieved this in my plugin:
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/init.sh
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/config.json
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/main.go#L113
You can run httpd in a CentOS container without systemd, at least according to the tests with the docker-systemctl-replacement script.
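A rough sketch of that approach, assuming systemctl.py from the docker-systemctl-replacement repository has been copied into the build context (the file name and base image here are illustrative):
FROM centos:7
RUN yum -y install httpd; yum clean all
# replace the real systemctl so 'enable' works at build time and services start without systemd as PID 1
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/bin/systemctl"]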
Docker newbie here...
I am trying to persist a CouchDB container's data on my local filesystem, but when I run the command I don't see the db files being saved. I tried researching, but I seem to be doing everything right.
jubin#jubin-VirtualBox:~$ docker run -d -p 5984:5984 -v /home/jubin/data:/usr/local/var/lib/couchdb --name couchdb klaemo/couchdb
5e0d15b933d6344d3c6a28c26e1f2f59dba796697d47ff21b2c0971837c17e54
jubin#jubin-VirtualBox:~$ curl -X PUT http://172.17.0.2:5984/db
{"ok":true}
jubin#jubin-VirtualBox:~$ ls -ltr /home/jubin/data/
total 0
On inspect, it seems to be correctly configured:
"Mounts": [
{
"Type": "volume",
"Name": "ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02",
"Source": "/var/lib/docker/volumes/ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02/_data",
"Destination": "/opt/couchdb/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/home/jubin/data",
"Destination": "/usr/local/var/lib/couchdb",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
According to the image documentation, the CouchDB data is stored in /opt/couchdb/data, not in /usr/local/var/lib/couchdb, in the latest version of the image (2.0.0/latest).
You can also confirm this by doing docker exec -it CONTAINER_ID bash and then locating the CouchDB data inside the container.
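Under that assumption, pointing the bind mount at the path the image actually writes to should make the files appear on the host:
docker run -d -p 5984:5984 -v /home/jubin/data:/opt/couchdb/data --name couchdb klaemo/couchdb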
I have created the following docker file
FROM resin/rpi-raspbian:jessie-20160831
..
RUN mkdir -p /usr/bin/asmp
COPY src /usr/bin/asmp/
VOLUME /usr/bin/asmp/data
..
The COPY action copies a directory structure like this:
data
db.sqlite3
web
...
worker
...
I then just start a container, using something like this:
docker run -p 8000:8000 asmp
When I do an inspect I see this:
"Mounts": [
{
"Name": "30ccc87580cd85108cb4948798612630640b5564f66de848a4e2f77db8148d3a",
"Source": "/var/lib/docker/volumes/30ccc87580cd85108cb4948798612630640b5564f66de848a4e2f77db8148d3a/_data",
"Destination": "/sys/fs/cgroup",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Name": "c4473031d209eb29d3f454be68325c6b1f33aa660185bf57e8abb91a56bb260e",
"Source": "/var/lib/docker/volumes/c4473031d209eb29d3f454be68325c6b1f33aa660185bf57e8abb91a56bb260e/_data",
"Destination": "/usr/bin/asmp/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
When I stop the container (by killing it) and then start it again, it creates a new volume at a different directory. So I am wondering how to handle this situation. Am I starting/stopping the container wrong? Or should I specify the volume differently? I know that you can specify a target path, but can I (and should I) specify this in the Dockerfile? I would rather specify the volume settings in the Dockerfile, since the run command already has a lot of parameters to redirect ports and devices.
Any thoughts?
You must specify the volume's host source when you run the container; otherwise Docker creates a new anonymous volume every time. Read about volumes:
docker run -p 8000:8000 --volume=<path_on_host>:/usr/bin/asmp/data asmp
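If you would rather not hard-code a host path, a named volume works as well (asmp-data is just an example name); Docker reuses the volume with that name on every run instead of generating a new anonymous one:
docker run -p 8000:8000 -v asmp-data:/usr/bin/asmp/data asmp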
I have two containers; one is set up as a data volume container. I can go inside the data container and explore the files that are mounted from a network share without any issues.
However, in the second container, when I go to the folder with the mounted volumes, the folder exists but all the files and directories that should be there are not visible.
This used to work, so I can only assume it is due to Docker 1.9. I am seeing this on both a Linux and a Mac box.
Any ideas as to the cause? Is this a bug, or is there something else I can investigate?
Output of inspect:
"Volumes": {
"/mnt/shared_app_data": {},
"/srv/shared_app_data": {}
},
"Mounts": [
{
"Name": "241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53",
"Source": "/var/lib/docker/volumes/241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53/_data",
"Destination": "/mnt/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
},
{
"Name": "061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57",
"Source": "/var/lib/docker/volumes/061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57/_data",
"Destination": "/srv/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
The mounts are set up in the Dockerfile in this manner:
RUN echo '/srv/path ipaddress/255.255.255.0(rw,no_root_squash,subtree_check,fsid=0)' >> /etc/exports
RUN echo 'ipaddress:/srv/path /srv/shared_app_data nfs defaults 0 0' >> /etc/fstab
RUN echo 'ipaddress:/srv/path /mnt/shared_app_data nfs defaults 0 0' >> /etc/fstab
and then, when the container starts, it runs:
service rpcbind start
mount -a
You need to be sure that the second container actually mounts the VOLUMEs declared in the first one:
docker run --volumes-from first_container second_container
Make sure the first container does have the right files: see "Locating a volume"
docker inspect first_container
# more precisely
sudo ls $(docker inspect -f '{{ (index .Mounts 0).Source }}' first_container)
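To list every mount at once rather than only the first, the same template syntax can dump the whole array:
docker inspect -f '{{ json .Mounts }}' first_container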