"Mounts": [
{
"Source": "/home/ec2-user/selenium-downloads",
"Destination": "/home/seluser/Downloads",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
How can I change Propagation? I need it to be empty so that my Docker container can write to the mounted directory.
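For what it's worth, `rprivate` concerns mount propagation between namespaces, not write access; `"RW": true` already permits writes. A sketch of both knobs (the image name and the `<uid>`/`<gid>` placeholders are illustrative only, not taken from your setup):

```shell
# bind-propagation is set per mount; rshared/rslave replace the
# rprivate default (this does not by itself grant write access):
docker run -d \
  --mount type=bind,source=/home/ec2-user/selenium-downloads,target=/home/seluser/Downloads,bind-propagation=rshared \
  selenium/standalone-chrome   # hypothetical image, for illustration

# If writes fail, the usual culprit is host-side ownership; find the
# container user's UID and grant it access on the host directory:
docker exec <container> id seluser
sudo chown -R <uid>:<gid> /home/ec2-user/selenium-downloads
```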
I create a container with the following command, and docker inspect shows the mounts were created successfully.
But the contents of the directories on the host and in the container are not the same.
I have tried mounting /l3/mysql/conf2 to /etc/mysql, but that doesn't work either.
environment:
Lubuntu 22.04
Docker version 20.10.15, build fd82621
docker image: mysql:5.7
docker pull mysql:5.7
docker run -d -p 3306:3306 --privileged=true \
  -v /l3/mysql/log:/var/log/mysql \
  -v /l3/mysql/data:/var/lib/mysql \
  -v /l3/mysql/conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=123456 \
  --name=mysql \
  mysql:5.7
# directories are created in host, but all of them are empty
cd /l3/mysql/conf
ls
# but the Mounts are created
docker inspect mysql
"Mounts": [
{
"Type": "bind",
"Source": "/l3/mysql/conf",
"Destination": "/ect/mysql/conf.d",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/l3/mysql/data",
"Destination": "/var/lib/mysql",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/l3/mysql/log",
"Destination": "/var/log/mysql",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
# in the container, there are three config files
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
When I create a new file test.txt on the host in /l3/mysql/conf/, there is no test.txt in /etc/mysql/conf.d:
root@lezhang-Lubuntu:/l3/mysql/conf# ls
test.txt
root@380a7767bae4:/etc/mysql/conf.d# ls
docker.cnf mysql.cnf mysqldump.cnf
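One way to narrow this down is to check, from inside the container, where the bind mount actually landed. Note that the inspect output above shows the destination as /ect/mysql/conf.d, not /etc/mysql/conf.d; if that is not just a transcription typo, the files are landing in a misspelled directory. A debugging sketch (container name `mysql` taken from the run command above):

```shell
# Show every mysql-related mount the container actually has:
docker exec mysql sh -c 'grep mysql /proc/mounts'

# Then look for the test file in both candidate locations:
docker exec mysql ls /etc/mysql/conf.d
docker exec mysql ls /ect/mysql/conf.d
```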
Thanks in advance.
When you run docker run, Docker automatically creates volumes for your container. When you only have one container on your host, it's easy to find which volumes have been created.
docker volume ls
Returns:
DRIVER VOLUME NAME
local 21625133cc5dde5eae34b5ee85c6c26c15d4c5bb0847f1fd3629a33faa5084ce
local ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a
local e425b2a143ecb1ca69aaf15ea25bdc232178bea9662dc77bbe0c2c630f452874
But when I ask docker inspect:
docker inspect --format='{{json .Mounts}}{{ .Name}}' $INSTANCE_ID | jq
I get:
[
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/etc/cobbler/settings",
"Destination": "/etc/cobbler/settings",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/var/www/cobbler/links",
"Destination": "/var/www/cobbler/links",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "21625133cc5dde5eae34b5ee85c6c26c15d4c5bb0847f1fd3629a33faa5084ce",
"Source": "/var/lib/docker/volumes/21625133cc5dde5eae34b5ee85c6c26c15d4c5bb0847f1fd3629a33faa5084ce/_data",
"Destination": "“/sys/fs/cgroup”",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/var/www/cobbler/ks_mirror",
"Destination": "/var/www/cobbler/ks_mirror",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "e425b2a143ecb1ca69aaf15ea25bdc232178bea9662dc77bbe0c2c630f452874",
"Source": "/var/lib/docker/volumes/e425b2a143ecb1ca69aaf15ea25bdc232178bea9662dc77bbe0c2c630f452874/_data",
"Destination": "]",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/sys/fs/cgroup",
"Destination": "/sys/fs/cgroup",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/dist/centos",
"Destination": "/mnt",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/var/lib/cobbler/config",
"Destination": "/var/lib/cobbler/config",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/var/lib/tftpboot",
"Destination": "/var/lib/tftpboot",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/etc/cobbler/dhcp.template",
"Destination": "/etc/cobbler/dhcp.template",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/cobbler-dkr/var/www/cobbler/images",
"Destination": "/var/www/cobbler/images",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a",
"Source": "/var/lib/docker/volumes/ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a/_data",
"Destination": "[",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
I tried to list the volume names thanks to range:
docker inspect -f '{{ range .Mounts }}{{if eq .Type "volume" }}{{ .Name}}{{end}}{{ end }}' 33623ebfece7
But I got something odd; no spaces seem to appear:
e425b2a143ecb1ca69aaf15ea25bdc232178bea9662dc77bbe0c2c630f452874ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a21625133cc5dde5eae34b5ee85c6c26c15d4c5bb0847f1fd3629a33faa5084ce
My last try was to grep the raw answer:
docker inspect --format='{{json .Mounts}}{{ .Name}}' 33623ebfece7 | jq | grep '"Name":'
Returned :
parse error: Invalid numeric literal at line 2, column 0
"Name": "e425b2a143ecb1ca69aaf15ea25bdc232178bea9662dc77bbe0c2c630f452874",
"Name": "ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a",
"Name": "21625133cc5dde5eae34b5ee85c6c26c15d4c5bb0847f1fd3629a33faa5084ce",
Now I got the names, but I don't think I'm taking the simplest way to get a list of the volumes deployed for the container.
The intention is to delete them after I delete my container, so I can start from a fresh instance.
I know there is prune, but I don't want to delete all unused Docker volumes, since I may need some of them later; I just want a clean way to tear down my container.
Thanks for any help.
If you're manually deleting the containers, docker rm has an option for this:
--volumes, -v: Remove anonymous volumes associated with the container
Similarly, if Docker Compose manages your application, docker-compose down -v will delete anonymous volumes (along with every other volume declared in the docker-compose.yml file).
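For example, with the container ID from the question:

```shell
# Deletes the container and its anonymous volumes in one step:
docker rm -v 33623ebfece7
```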
The docker inspect output you show suggests this is ultimately being caused by an incorrect line in your Dockerfile; perhaps a line like
# (1) This VOLUME is unnecessary; you can bind mount without it
# (2) Unicode quotes don't parse correctly
# (3) You shouldn't need to use this system directory at all
VOLUME [ “/sys/fs/cgroup” ]
Deleting that line will cause the anonymous volumes to not get generated, and won't affect the way you run the container.
If you really need to list out the volume IDs, you need to include some whitespace inside the template to get a space-separated list of IDs.
# Note the space after {{.Name}}
docker inspect \
-f '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}} {{end}}{{end}}' \
33623ebfece7
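If jq is available, an alternative sketch avoids the parse error entirely. The error above came from the stray {{ .Name}} appending the container name after the JSON array, which jq then choked on; emitting only the JSON lets jq filter volume-type mounts directly:

```shell
# Print only volume-type mount names, one per line:
#   docker inspect --format='{{json .Mounts}}' 33623ebfece7 \
#     | jq -r '.[] | select(.Type == "volume") | .Name'
# The same filter, demonstrated on a sample shaped like the output above:
printf '%s' '[{"Type":"bind","Source":"/sys/fs/cgroup"},{"Type":"volume","Name":"ad134a6d0130f11ad5b3af9340205eb712718397ece3cfaf8adf5b08abe0362a"}]' \
  | jq -r '.[] | select(.Type == "volume") | .Name'
```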
My docker compose entry for the service is:
api:
restart: always
build: ./project/api
volumes:
- ./:/usr/src/app:ro
- ~/data:/root/data:ro # /root is the ~ in container
I run my containers using
sudo docker-compose down && sudo docker-compose up --build -d
I have 2 different machines, both with admin user. On inspection of the containers created:
sudo docker inspect project_api_1 | grep Mounts -A20
On machine 1 the admin user is admin:x:1005:1001:Admin User,,,:/home/admin:/bin/zsh, and on machine 2 it is admin:x:1000:1000:Debian:/home/admin:/bin/zsh.
Output on machine 1
"Mounts": [
{
"Type": "bind",
"Source": "/home/admin/project",
"Destination": "/usr/src/app",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/admin/data",
"Destination": "/root/data",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
Output on machine 2
"Mounts": [
{
"Type": "bind",
"Source": "/home/admin/project",
"Destination": "/usr/src/app",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/root/data",
"Destination": "/root/data",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
On machine 1, /root/data points to /home/admin/data, which is the desired behavior, but on machine 2 it points to /root/data. I can fix this by using either the relative path ./data (which symlinks to ~/data) or the absolute path /home/admin/data. My Docker version is the same on both machines: Docker version 17.06.0-ce, build 02c1d87.
I'm curious about why is this different?
In your YAML file you're using ~/data, where ~ is shorthand for the home directory of the user who ran Docker. From that, I guess you're running Docker on machine 1 as admin, while on machine 2 you're running it as root.
Remember that the source directory is a directory on your host machine (not the container), so the user relevant to ~ is the user on your host machine.
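Both workarounds the question already found to work can be expressed in the compose file; a sketch:

```yaml
api:
  restart: always
  build: ./project/api
  volumes:
    - ./:/usr/src/app:ro
    - ./data:/root/data:ro       # relative to the compose file, not to ~
    # or spell out the absolute host path:
    # - /home/admin/data:/root/data:ro
```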
Docker newbie here...
I am trying to persist a docker run CouchDB instance on my local filesystem, but when I run the command I don't see the db files being saved. I have tried researching, but I seem to be doing everything right.
jubin@jubin-VirtualBox:~$ docker run -d -p 5984:5984 -v /home/jubin/data:/usr/local/var/lib/couchdb --name couchdb klaemo/couchdb
5e0d15b933d6344d3c6a28c26e1f2f59dba796697d47ff21b2c0971837c17e54
jubin@jubin-VirtualBox:~$ curl -X PUT http://172.17.0.2:5984/db
{"ok":true}
jubin@jubin-VirtualBox:~$ ls -ltr /home/jubin/data/
total 0
On inspect, it seems to be correctly configured:
"Mounts": [
{
"Type": "volume",
"Name": "ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02",
"Source": "/var/lib/docker/volumes/ea1ab54976ef583e2ca1222b4aeea420c657d48cb0987a0467a737ee3f68df02/_data",
"Destination": "/opt/couchdb/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/home/jubin/data",
"Destination": "/usr/local/var/lib/couchdb",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
According to the image documentation, the CouchDB data is stored in /opt/couchdb/data, not in /usr/local/var/lib/couchdb, in the latest version of the image (2.0.0/latest).
You can also confirm this by doing docker exec -it CONTAINER_ID bash and then locating the CouchDB data inside the container.
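Under that assumption, re-pointing the bind mount at the directory the image actually uses should be enough; a sketch using the paths and image from the question:

```shell
docker run -d -p 5984:5984 \
  -v /home/jubin/data:/opt/couchdb/data \
  --name couchdb klaemo/couchdb
```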
I want to map the logs directory of an nginx container so I don't have to keep connecting to it when I have an error. The volume mapping is half working, in that I can access the files in the root of the path I am mapping to, but it is not pulling in any subfolders.
Docker-compose.yml config
and this is the contents of the source folder within the running container,
which has two subfolders:
1. Supervisor
2. nginx
Both have files in them.
Do I have to create volume mappings for each of the subfolders as well, or is there a way to specify that the mapping should include all subfolders?
Thanks
UPDATED:
Here are the mounts as listed in docker inspect. There are loads, so let me know if you want to see anything else.
"Mounts": [
{
"Source": "/home/ubuntu/dockervel/sites",
"Destination": "/etc/nginx/sites-enabled",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Source": "/home/ubuntu/dockervel/logs",
"Destination": "/var/log",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Source": "/home/ubuntu/dockervel/www",
"Destination": "/var/www",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
Docker version is Docker version 1.10.3, build 20f81dd
You have to declare the volumes you will use in the Dockerfile.
Try extending the dydx image and adding the volumes you need.
Then change the .yml file to point to your image.
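A minimal sketch of that idea (the base image tag here is assumed; substitute the image your compose file currently uses):

```dockerfile
# Extend the existing image and declare the log directory as a volume
FROM dydx/nginx          # hypothetical tag -- use your actual image
VOLUME ["/var/log"]
```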