I am hosting some simple Docker containers. I have noticed that the container size is increasing quickly over time, and I cannot figure out why.
Size reported by Docker:
me#somewhere:~$ sudo docker ps -s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
02b30add1cb3 my-service "npm start" 23 hours ago Up 23 hours 3001/tcp, 0.0.0.0:9017->9017/tcp my-service-frontend 0 B (virtual 776.4 MB)
20a2be4931e7 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3007->3001/tcp my-service-5 6.144 kB (virtual 776.4 MB)
ba340ba08941 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3006->3001/tcp my-service-4 6.144 kB (virtual 776.4 MB)
7b5411d8a171 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3003->3001/tcp my-service-1 6.144 kB (virtual 776.4 MB)
b583a544b37d my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3001->3001/tcp my-service-0 6.144 kB (virtual 776.4 MB)
91373086e06e foo_bar "/bin/sh -c 'git pull" 47 hours ago Up 47 hours 0.0.0.0:12776->8080/tcp kickass_murdock 11.26 MB (virtual 1.081 GB)
Size reported by du on host:
me#somewhere:~$ sudo du -h -d 1 /var/lib/docker/containers
14G /var/lib/docker/containers/20a2be4931e7a10b2e29260b541e3c4d6581462650e47d59682f84626843752b
1,6G /var/lib/docker/containers/7b5411d8a171a35a3c937d62dbdea141fc0a9f3c4de25a2da3a0b94ea71a8f3d
9,6M /var/lib/docker/containers/02b30add1cb3ba6d5be1c36b2c9dd141d8d70cb88a021d2363af5684ef3c220f
480K /var/lib/docker/containers/91373086e06ea83269465e0b026cfe7ca0158a1315b0df04da9a1d1b4ee52823
13G /var/lib/docker/containers/b583a544b37db6144f17a4819ca2f636126b11d668caab3dcdbf4c3a33dedc65
13G /var/lib/docker/containers/ba340ba08941d47af45230be328ef7289c19b6bb6a0d120cf2098cbdd9983f65
40G /var/lib/docker/containers
Size reported by du for a container (similar output for all other containers):
me#somewhere:~$ sudo docker exec -it my-service-4 du -h -d1 -c /
58M /root
0 /dev
3.0M /etc
706M /usr
1.4M /tmp
14M /var
9.0M /bin
32M /lib
4.0K /home
8.0K /run
4.0K /mnt
4.0K /boot
0 /sys
4.0K /opt
4.0K /srv
4.0K /lib64
3.9M /sbin
du: cannot access '/proc/12642/task/12642/fd/3': No such file or directory
du: cannot access '/proc/12642/task/12642/fdinfo/3': No such file or directory
du: cannot access '/proc/12642/fd/3': No such file or directory
du: cannot access '/proc/12642/fdinfo/3': No such file or directory
0 /proc
4.0K /media
825M /
825M total
So: both the container itself and docker ps report disk usage below 1 GB, yet the actual on-disk size of some container directories is more than 10 GB. Can anybody tell me what is happening? I suspect something is going wrong inside my containers, but I do not know where to look. What should I do?
Docker puts one layer on top of another in its union filesystem whenever changes are stored. When you delete a container with docker rm -f CONTAINERID, you should see that /var/lib/docker/containers uses less space.
Changes to an image are stored when you build it. If you just use an image - run a container - the changed data is only held in the container's writable layer. So you should investigate what happens IN your container; in particular, look at what your PhantomJS server produces in the filesystem. ncdu would be a good tool for that.
Start with ncdu, and then have the produced files written to a folder mounted from your host's filesystem: docker run -it -v FULLPATHATHOST:FULLPATHWITHINCONTAINER IMAGE
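For instance, a hedged sketch of running ncdu inside one of the running containers (this assumes a Debian-based image where apt-get is available; adjust the package manager otherwise):
docker exec -it my-service-4 sh -c 'apt-get update && apt-get install -y ncdu && ncdu /'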
Test
Run a simple container, containing nothing more than the OS (in my case Alpine):
docker run -it stk/alpine:base sh
On the host, go to /var/lib/docker/aufs/diff/ and list the contained directories with ncdu. (Of course you can use any other program you like to determine the directories' sizes.)
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- /var/lib/docker/aufs/diff --------------------------------------------------
/..
286,4MiB [##########] /39f3e2ea0dfe17366b8cd7b0...bf3681b99a1081e33ad62a509f28
214,3MiB [####### ] /2de39307b9361cae12f0116e...d28056e4699b21b9a4d34f374461
207,7MiB [####### ] /92ec6d044cb3e39ae0050012...78d0591675f2231daafbf0877778
154,0MiB [##### ] /9f3806e6bedc8fb01929131b...e01aa1980aadba914fdd9d2f96ae
149,5MiB [##### ] /5f0ca2331640639507d85b83...693659438367311abb0c792b8a62
136,8MiB [#### ] /902b87aaaec929e805414868...1f529ad7f37ab300d4ef9f3a0dbf
136,2MiB [#### ] /222ba86561913d299deb9e0e...6b5f5ec117b01386a4156d092687
132,6MiB [#### ] /8b3a9a9eeaf8ed59f24f21a2...dfe8d033890a2fa44b445deb2e3c
128,5MiB [#### ] /72b3edf317a8d682466c1500...a5e2cad31c8305ed42c41cd61149
117,0MiB [#### ] /818e3763e72ef82b28b0552e...b9f163dc601d266e94e46fd26bb0
57,4MiB [## ] /eeffdfafed9f60771b5bf87a...e8bbd16b572f77899c8e689d174d
56,1MiB [# ] /6976ce3ed5fab37382d90467...37578332417ffcf35a1d499eba52
51,3MiB [# ] /a5a6e0549d247f1c8b81a350...c5071f46d17afe2f8988817360b3
Total disk usage: 2,7GiB Apparent size: 2,7GiB Items: 168658
Within the container execute something like
tr -dc A-Za-z0-9 </dev/urandom | head -c 409600000 > a.txt && ls a.txt -all -h
That will create a file with random data called a.txt. I chose the size 409600000 to be greater than 286.4 MiB - the largest folder in /var/lib/docker/aufs/diff/ - so that ncdu will show it at the top.
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- /var/lib/docker/aufs/diff --------------------------------------------------
/..
390,6MiB [##########] /0720a07653a57d938c861cf3...e61c81c29f12289759f0560aa38f
286,4MiB [####### ] /39f3e2ea0dfe17366b8cd7b0...bf3681b99a1081e33ad62a509f28
214,3MiB [##### ] /2de39307b9361cae12f0116e...d28056e4699b21b9a4d34f374461
207,7MiB [##### ] /92ec6d044cb3e39ae0050012...78d0591675f2231daafbf0877778
154,0MiB [### ] /9f3806e6bedc8fb01929131b...e01aa1980aadba914fdd9d2f96ae
149,5MiB [### ] /5f0ca2331640639507d85b83...693659438367311abb0c792b8a62
136,8MiB [### ] /902b87aaaec929e805414868...1f529ad7f37ab300d4ef9f3a0dbf
136,2MiB [### ] /222ba86561913d299deb9e0e...6b5f5ec117b01386a4156d092687
132,6MiB [### ] /8b3a9a9eeaf8ed59f24f21a2...dfe8d033890a2fa44b445deb2e3c
128,5MiB [### ] /72b3edf317a8d682466c1500...a5e2cad31c8305ed42c41cd61149
117,0MiB [## ] /818e3763e72ef82b28b0552e...b9f163dc601d266e94e46fd26bb0
57,4MiB [# ] /eeffdfafed9f60771b5bf87a...e8bbd16b572f77899c8e689d174d
56,1MiB [# ] /6976ce3ed5fab37382d90467...37578332417ffcf35a1d499eba52
Total disk usage: 3,0GiB Apparent size: 3,0GiB Items: 168678
Now I know that the directory starting with 0720a07653a57d9... is the one I have to look at. Go into it and list the contents:
root#T520:/var/lib/docker/aufs/diff# cd 0720a07653a57d938c861cf32f4bee87fa4be61c81c29f12289759f0560aa38f
root#T520:/var/lib/docker/aufs/diff/0720a07653a57d938c861cf32f4bee87fa4be61c81c29f12289759f0560aa38f# ls -all -h
total 391M
drwxr-xr-x 5 root root 4,0K Feb 23 10:55 .
drwxr-xr-x 674 root root 80K Feb 23 10:55 ..
-rw-r--r-- 1 root root 391M Feb 23 10:57 a.txt
drwx------ 2 root root 4,0K Feb 23 10:55 root
-r--r--r-- 1 root root 0 Feb 23 10:55 .wh..wh.aufs
drwx------ 2 root root 4,0K Feb 23 10:55 .wh..wh.orph
drwx------ 2 root root 4,0K Feb 23 10:55 .wh..wh.plnk
As you can see, the file a.txt is listed there.
Now rerun the procedure - the random file creation - and refresh the ncdu listing (just hit r in ncdu).
ncdu, as well as ls, should show that the directory size did not change. So the data within the Docker filesystem is simply overwritten. If you choose a smaller size, the directory gets smaller.
So how might this help you?
As I showed above, there is no growth when data within existing files merely changes, and you can find out which directory contains your container's filesystem and see the plain file structure of added/changed files within your container.
Hope this helps you find the files within your container.
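As an aside, on hosts that use the overlay2 storage driver (not the aufs driver shown above), you can ask Docker directly for a container's writable-layer directory instead of hunting by size; a hedged sketch:
docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' my-service-4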
If you exit your container and start it again with the same docker run command, a new container is created, with its own filesystem layer.
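If you want to keep using the same container (and its existing layer) instead, restart it with docker start, for example:
docker start -ai CONTAINERID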
You can find the IDs of your stopped containers with:
docker ps -a | grep Exited | grep stk/alpine:base | awk '{print $1 }'
To see what is found before deleting anything:
docker ps -a | grep Exited | grep stk/alpine:base
7b7d3f6e857a stk/alpine:base "sh" 22 minutes ago Exited (0) 2 minutes ago gigantic_swartz
2f51ea988a28 stk/alpine:base "sh" 23 minutes ago Exited (0) 22 minutes ago cranky_euler
4bfafbb034fe stk/alpine:base "sh" 34 minutes ago Exited (0) 25 minutes ago sick_williams
80cd5687fcd7 stk/alpine:base "sh" 44 minutes ago Exited (137) 37 minutes ago determined_panini
a2179a8dd543 stk/alpine:base "sh" 58 minutes ago Exited (130) 44 minutes ago agitated_shockley
8596cd310292 stk/alpine:base "sh" 3 days ago Exited (137) 3 days ago dreamy_murdock
33db61a7830b stk/alpine:base "sh" 3 days ago Exited (0) 3 days ago desperate_hodgkin
2f96c15dc8a1 stk/alpine:base "sh" 2 weeks ago Exited (0) 2 weeks ago determined_babbage
Append | xargs -r docker rm to delete them.
One line solution
docker ps -a | grep Exited | grep stk/alpine:base | awk '{print $1 }' | xargs -r docker rm
If you run docker rmi, Docker will check that the images are not used by other images or containers and will complain if they can't be removed. But in this case you want to delete the containers, not the images, so use rm instead of rmi (I updated the answer accordingly).
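To keep the two commands apart:
docker rm CONTAINERID    # removes a container and its writable layer
docker rmi IMAGEID       # removes an image (its read-only layers), and only if no container still uses it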
Enjoy
Related
Even though I have successfully (?) removed all Docker images and containers, the folder /var/lib/docker/overlay2 is still huge (152 GB). Why? How do I reduce the disk space used?
I have tried renaming the folder (in preparation for possibly removing it), but that caused subsequent docker pull commands to fail.
It seems unbelievable to me that Docker would need this vast amount of disk space just to be able to pull an image again later. Please enlighten me as to what is wrong, or why it has to be this way.
List of commands run which should show what I have tried and the current status:
$ docker image prune --force
Total reclaimed space: 0B
$ docker system prune --force
Total reclaimed space: 0B
$ docker image prune -a --force
Total reclaimed space: 0B
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ du -h --max-depth=1 /var/lib/docker/overlay2 | sort -rh | head -25
152G /var/lib/docker/overlay2
1.7G /var/lib/docker/overlay2/ys1nmeu2aewhduj0dfykrnw8m
1.7G /var/lib/docker/overlay2/ydqchhcaqokdokxzbh6htqa49
1.7G /var/lib/docker/overlay2/xmffou5nk3zkrldlfllopxcab
1.7G /var/lib/docker/overlay2/tjz58rjkote2c79veonb3s6qa
1.7G /var/lib/docker/overlay2/rlnr04hlcudgoh6ujobtsu2ck
1.7G /var/lib/docker/overlay2/r4ubwsmrorpr08k8o5rko9n98
1.7G /var/lib/docker/overlay2/q8x21c9enjhpitt365smkmn4e
1.7G /var/lib/docker/overlay2/ntr973uef37oweqlxr4kmaxps
1.7G /var/lib/docker/overlay2/mcyasqzo2gry5dvjxoao1opws
1.7G /var/lib/docker/overlay2/m2k4u58mob6e2db86qqu1e1f8
1.7G /var/lib/docker/overlay2/lizesless03kch8j7kpk89rcf
1.7G /var/lib/docker/overlay2/kmu7mjvsopr8o63onbsijb98j
1.7G /var/lib/docker/overlay2/khgjwqry5drdy0jbwf47gr2lb
1.7G /var/lib/docker/overlay2/gt70ur50vw3whq265vmpep7ay
1.7G /var/lib/docker/overlay2/c3tm1fcuekmdreowrfcso7nd4
1.7G /var/lib/docker/overlay2/7j93t64mt63arj6sewyyejwyo
1.7G /var/lib/docker/overlay2/3ftxvvg2xg02xuwcb3ut3dq89
1.7G /var/lib/docker/overlay2/0m3o3lw6b1ggs8m6z4uv6ueqf
1.4G /var/lib/docker/overlay2/r82rfxme096cq5pg1xz1z5arg
1.4G /var/lib/docker/overlay2/qric73hv1z3nx4k0zop3fvcm6
1.4G /var/lib/docker/overlay2/oyb0a5ab5h642y30s6hawj4r9
1.4G /var/lib/docker/overlay2/oqf9ltfoy36evnkuo8ga2uepl
1.4G /var/lib/docker/overlay2/ntuwvljxxzqs2oxhgg3enyo7x
1.4G /var/lib/docker/overlay2/l0oi2lxdrtg42hk2rznknqk0r
$ ls -l /var/lib/docker/overlay2
total 136
drwx------ 4 root root 72 Nov 20 13:03 00ep8i7v5bdmhqsxdoikslr19
drwx------ 4 root root 72 Feb 28 09:47 026x5e2xns6ui2acym19qfvl7
drwx------ 4 root root 72 Apr 2 19:20 032y8d31damevtfymq6yzkyi4
drwx------ 4 root root 72 Apr 23 13:42 03wwbyd4uge9u0auk94wwdlig
drwx------ 4 root root 72 Jan 15 12:46 04cy91a19owwqu9hyw6vruhzo
drwx------ 4 root root 72 Apr 2 14:44 051625a0f856b63ed67a3bc9c19f09fb1c90303b9536791dc88717cb7379ceeb
drwx------ 4 root root 72 Dec 3 19:56 059fk19uw70p6fqzei6wnj8s2
drwx------ 4 root root 72 Apr 21 15:03 059mddrhqegqhxv1ockejw9gs
drwx------ 4 root root 72 Nov 28 11:26 069dwkz92m8fao6whxnj4x9vp
drwx------ 4 root root 72 Feb 28 09:47 06h7qo5f70oyzaqgn1elbx5u8
drwx------ 4 root root 72 Dec 18 13:27 0756fd640036fa92499cfdcf4bcc3081d9ec16c25eebe5964d5e12d22beb9991
drwx------ 4 root root 72 Apr 20 11:32 09rk4gm6x2mcquc5cz0yvbawq
drwx------ 4 root root 72 Apr 2 19:55 09scfio3qvtewzgc5bdwgw4f6
drwx------ 4 root root 72 May 4 14:00 0ac2a09aa4a038981d37730e56dece4a3d28e80f261b68587c072b4012dc044a
drwx------ 4 root root 72 Feb 25 14:19 0c399f5c349ec61ac175525c52533b069a52028354c1055894466e9b5430fbc3
drwx------ 4 root root 72 May 4 14:00 0cac39b1382986a2d9a6690985792c04d03192337ea37ee06cb74f2f457b7bb7
drwx------ 4 root root 72 Mar 5 08:41 0czco1xx3148slgwf8imdrk33
drwx------ 4 root root 72 Apr 21 08:30 0gb2iqev9e7kr587l09u19eff
drwx------ 4 root root 72 Feb 20 18:03 0gknqh4pyg46uzi6asskbf8xk
drwx------ 4 root root 72 Jan 8 11:43 0gugiou3wqu53os4dageh77ty
drwx------ 4 root root 72 Jan 7 11:31 0i8fd5jet6ieajyl2uo1xj2ai
.
.
.
$ docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:27:04 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:25:42 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
You might have switched storage drivers somewhere along the way, so maybe Docker is only cleaning up the current driver's storage and leaving overlay2 as is (I still can't understand why pulling images would fail, though).
Let's try this: run docker info and check what your storage driver is:
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
<output truncated>
If it is not overlay2 (as it appears above), try switching to it, then prune Docker images again and check whether that cleaned up the folder.
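A minimal sketch of what that switch could look like in /etc/docker/daemon.json (note that images and containers stored under the old driver become invisible after switching, and the daemon must be restarted):
{
  "storage-driver": "overlay2"
}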
Another possible solution is mentioned in this thread: people comment that clearing the logs solves this problem, so try the following:
Remove all log files:
find /var/lib/docker/containers/ -type f -name "*.log" -delete
Restart docker daemon (or entire machine):
sudo systemctl restart docker
or
docker-compose down && docker-compose up -d
or
shutdown -r now
in preparation for a possible removal of the folder
If you are going to delete all data from the Docker directory anyway, it is safe to:
Stop Docker Daemon
Remove the /var/lib/docker directory entirely
Restart Docker Daemon
Docker will then recreate any needed data directories.
You can also add:
"log-driver": "json-file",
"log-opts": {"max-size": "20m", "max-file": "3"},
to your /etc/docker/daemon.json to restrict log size and growth in the future or set the log-driver to "journald" to eliminate log files entirely.
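Putting those fragments together, a complete /etc/docker/daemon.json might look like this (restart the daemon afterwards; existing containers keep the log settings they were created with):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "3"
  }
}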
Thanks for your input and suggestions!
I believe that I am still using overlay2 as storage driver:
$ docker info
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8
Storage Driver: overlay2
<output truncated>
I also cleared the logs and restarted the daemon, and in fact the entire machine, but the problem remained.
In the end I solved it by stopping the daemon, removing the entire Docker directory, and restarting the daemon, as suggested above:
df -h
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker_old
sudo systemctl start docker
sudo rm -rf /var/lib/docker_old
df -h
I fear, however, that this will not be a permanent solution and that the problem will come back, but it will hopefully last another year. :)
Try pruning everything, including volumes (this differs from the original poster's command):
$ docker system prune --volumes
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
That freed up a bunch of space for me and solved my issue. I think the build cache was one of the culprits for me.
Two things will fill up /var/lib/docker/overlay2:
Docker images: Clean these up with docker image prune -a. Note that any image not currently associated with a container will be deleted, which means pulling or building the image again if you need it later.
Container-specific changes: any write to the container filesystem that isn't going to another mount (like a volume) causes a copy-on-write that is stored in the container-specific layer. You can see these changes with docker diff on a container. Even a metadata change like file ownership, permissions, or a timestamp can trigger this copy-on-write of the entire file.
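For example (container_id is a placeholder), lines prefixed A, C, and D mark added, changed, and deleted paths in the container's writable layer:
docker diff container_id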
Things that are not included in this directory:
Volumes: Named volumes will be stored in /var/lib/docker/volumes by default. You can still prune these with docker volume prune but make sure you have backed up any important data first. A better cleanup is to remove unused anonymous volumes with a command like:
docker volume ls -qf dangling=true | egrep '^[a-z0-9]{64}$' | \
xargs --no-run-if-empty docker volume rm
Container Logs: Container logs will be written to /var/lib/docker/containers. If these are taking up space, it's best to have docker automatically rotate those. See this answer for details on rotating logs.
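With the default json-file logging driver, a quick way to see which container logs are the largest is:
du -sh /var/lib/docker/containers/*/*-json.log | sort -h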
I had the same problem: /var/lib/docker/overlay2 was using 17 GB even after removing every Docker image with docker image rm ...
When you stop a container, it is not automatically removed unless you started it with the --rm flag (prune-containers). Container size can be seen using this command: docker container ls -a -s
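For example, a throwaway container started with --rm leaves nothing behind when it exits:
docker run --rm -it alpine sh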
I managed to reclaim the space taken by stopped containers using this command: docker container prune.
Nick's answer above is perhaps even better, as it cleans up every unused Docker resource:
docker system prune --volumes
I couldn't start a container because of some issues with volumes, so I tried this to make sure I understand how volumes work. Something strange is happening here: two files should be present in the /data directory, but instead I see one folder named after one of the files on the source machine. I'm doing this on Windows 10.
PS C:\Users\Piotrek\source\repos\fluentd> dir
Directory: C:\Users\Piotrek\source\repos\fluentd
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 06.01.2019 18:50 7 abc.txt
-a---- 06.01.2019 18:50 80 test.conf
PS C:\Users\Piotrek\source\repos\fluentd> docker run -ti --rm -v ${PWD}:/data ubuntu ls -alR /data
/data:
total 4
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 .
drwxr-xr-x 1 root root 4096 Jan 6 17:53 ..
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 test.conf
/data/test.conf:
total 0
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 .
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 ..
Problem solved.
I went to the Docker settings and, under "Shared Drives", clicked Reset Credentials.
I had enabled drive sharing some time ago, but afterwards I changed my password to an empty one. It looks like Docker doesn't prompt you to re-enable drive sharing when the new password is empty; it does when you change to a non-empty password, but not to an empty one.
My real question is: if secrets are mounted as volumes in pods, can they be read if someone gains root access to the host OS?
For example, by accessing /var/lib/docker and drilling down to the volume.
If someone has root access to a host running containers, they can do pretty much whatever they want... Don't forget that pods are just a bunch of containers, which are in fact processes with PIDs. So for example, if I have a pod called sleeper:
kubectl get pods sleeper-546494588f-tx6pp -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
sleeper-546494588f-tx6pp 1/1 Running 1 21h 10.200.1.14 k8s-node-2 <none>
running on the node k8s-node-2. With root access to this node, I can check which PID this pod and its containers have (I am using containerd as the container engine, but the points below are very similar for Docker or any other container engine):
[root#k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock pods -name sleeper-546494588f-tx6pp -q
ec27f502f4edd42b85a93503ea77b6062a3504cbb7ac6d696f44e2849135c24e
[root#k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock ps -p ec27f502f4edd42b85a93503ea77b6062a3504cbb7ac6d696f44e2849135c24e
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
70ca6950de10b 8ac48589692a5 2 hours ago Running sleeper 1 ec27f502f4edd
[root#k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock inspect 70ca6950de10b | grep pid | head -n 1
"pid": 24180,
And then finally, with that information (the PID), I can access the "/" mountpoint of this process and check its contents, including secrets:
[root#k8s-node-2 /]# ll /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/
total 0
lrwxrwxrwx. 1 root root 13 Nov 14 13:57 ca.crt -> ..data/ca.crt
lrwxrwxrwx. 1 root root 16 Nov 14 13:57 namespace -> ..data/namespace
lrwxrwxrwx. 1 root root 12 Nov 14 13:57 token -> ..data/token
[root#k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/namespace ; echo
default
[root#k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/token | cut -d'.' -f 1 | base64 -d ;echo
{"alg":"RS256","kid":""}
[root#k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/token | cut -d'.' -f 2 | base64 -d 2>/dev/null ;echo
{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"default","kubernetes.io/serviceaccount/secret.name":"default-token-6sbz9","kubernetes.io/serviceaccount/service-account.name":"default","kubernetes.io/serviceaccount/service-account.uid":"42e7f596-e74e-11e8-af81-525400e6d25d","sub":"system:serviceaccount:default:default"}
This is one of the reasons why it is so important to properly secure access to your Kubernetes infrastructure.
Is there a way to associate existing Docker volumes (located in /var/lib/docker/volumes) with containers?
One way to do this is to use docker inspect container_id, but this assumes that the container exists. How can you find out which container a volume belonged to when the container no longer exists?
Check docker volumes
$ ls -l /var/lib/docker/volumes/
total 72
drwxr-xr-x 3 root root 4096 Nov 14 14:27 0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d
drwxr-xr-x 3 root root 4096 Nov 29 14:29 my-jenkins
For more info about your volume you can run docker volume inspect, but this tells you nothing about what's actually inside the volume. The only way to know that is by going into the volume folder and checking.
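For example, with the named volume listed above (the --filter only helps while a referencing container still exists):
docker volume inspect my-jenkins            # shows the driver and mountpoint, not the contents
docker ps -a --filter volume=my-jenkins     # lists containers that still reference the volume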
So I'll check the "unnamed" volume:
$ ls -l /var/lib/docker/volumes/0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d/_data
...
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_tblspc
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_twophase
drwx------ 3 999 ping 4096 Nov 14 14:27 pg_xlog
-rw------- 1 999 ping 88 Nov 14 14:27 postgresql.auto.conf
-rw------- 1 999 ping 20791 Nov 14 14:27 postgresql.conf
-rw------- 1 999 ping 37 Nov 14 14:27 postmaster.opts
Normally, this is how you can relate a volume to the old container that used it: go into the volume folder and check what's inside. There isn't a better way at the moment. This is actually the answer to your question, but I'll give some more explanation to make things easier in the future.
The best way is to create named volumes. After deleting your container, the volume remains easy to recognize:
docker volume create --name my-jenkins
So in /var/lib/docker/volumes you'll see my-jenkins.
Now I start my jenkins container and link it with my named volume.
Everything which is in /var/jenkins_home will be stored in the named volume.
docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
I'll create a job in jenkins with the name firstjob. You'll see this job in my named docker volume.
$ ls -l /var/lib/docker/volumes/my-jenkins/_data/jobs/
total 4
drwxr-xr-x 3 dockrema dockrema 4096 Nov 29 14:47 firstjob
Now I will delete my container (id = fa1003894dbc). The container is gone:
$ docker rm -fv fa1003894dbc
Some time later, I want to reuse the named Docker volume, which still exists, to start a new Jenkins container that will immediately contain the job "firstjob":
$ docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
If you have an unnamed docker volume (created automatically with name 0f8018x9cf76b04x163b6xdf) you can use
docker run -d -v 0f8018x9cf76b04x163b6xdf:/var/jenkins_home jenkins
Now your Jenkins will use everything inside that volume. (It's just not a named volume, which makes it harder to see what kind of container it was linked to. But by looking inside the volume folder, you can usually figure it out.)
This question is a minimal failing version of this other one:
How to get contents generated by a docker container on the local fileystem
I have the following files:
./test
-rw-r--r-- 1 miqueladell staff 114 Jan 21 15:24 Dockerfile
-rw-r--r-- 1 miqueladell staff 90 Jan 21 15:23 docker-compose.yml
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 html
./test/html:
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
Dockerfile
FROM php:7.0.2-apache
RUN touch /var/www/html/file_generated_inside_the_container
VOLUME /var/www/html/
docker-compose.yml
test:
image: test
volumes:
- ./html:/var/www/html/
After running a container built from the image defined in the Dockerfile, what I want to end up with is:
./html
-- file_from_local_filesystem
-- file_generated_inside_the_container
Instead of this I get the following:
build the image
$ docker build --no-cache -t test .
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM php:7.0.2-apache
---> 2f16964f48ba
Step 2 : RUN touch /var/www/html/file_generated_inside_the_container
---> Running in b957cc9d7345
---> 5579d3a2d3b2
Removing intermediate container b957cc9d7345
Step 3 : VOLUME /var/www/html/
---> Running in 6722ddba76cc
---> 4408967d2a98
Removing intermediate container 6722ddba76cc
Successfully built 4408967d2a98
run a container with previous image
$ docker-compose up -d
Creating test_test_1
list files on the local machine filesystem
$ ls -al html
total 0
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 .
drwxr-xr-x 5 miqueladell staff 170 Jan 21 14:20 ..
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
list files from the container
$ docker exec -i -t test_test_1 ls -alR /var/www/html
/var/www/html:
total 4
drwxr-xr-x 1 1000 staff 102 Jan 21 14:25 .
drwxr-xr-x 4 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 staff 0 Jan 21 14:22 file_from_local_filesystem
The volume from the local filesystem gets mounted over the container filesystem, replacing its contents.
This is contrary to what I understood from the section "Permissions and Ownership" of the guide Understanding volumes.
How could I get the desired output?
Thanks
EDIT: As the accepted answer says, I did not understand volumes when asking the question. Volumes, as mountpoints, replace the container content with the local filesystem that is mounted.
The solution I needed was to use ENTRYPOINT to run the necessary commands to initialize the contents of the mounted volume once the container is running.
The code that originated the question can be seen working here:
https://github.com/MiquelAdell/composed_wordpress/tree/1.0.0
This is from the guide you've pointed to
This won’t happen if you specify a host directory for the volume
Volumes shared from other containers or from the host filesystem replace the corresponding directories in the container.
If you need to add some files to a volume, you should do it after you start the container. For example, you can use an entrypoint that does the touch and then runs your main process.
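A minimal sketch of that entrypoint approach (the script name is illustrative, not from the original project):
FROM php:7.0.2-apache
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]
And docker-entrypoint.sh:
#!/bin/sh
# Populate the mounted volume at container start, then hand off to the main process.
touch /var/www/html/file_generated_inside_the_container
exec "$@"
This way the file is created at run time, after the host directory has been mounted over /var/www/html.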
Yep, pretty sure it should be the full path:
docker-compose.yml
test:
image: test
volumes:
- ./html:/var/www/html/
./html should be /path/to/html
Edit
Output after changing to full path and running test.sh:
$ docker exec -ti dockervolumetest_test_1 bash
root#c0bd7a722b63:/var/www/html# ls -la
total 8
drwxr-xr-x 2 1000 adm 4096 Jan 21 15:19 .
drwxr-xr-x 3 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 adm 0 Jan 21 15:19 file_from_local_filesystem
Edit 2
Sorry, I misunderstood the entire premise of the question :)
So you're trying to get file_generated_inside_the_container (which is created inside your docker image only) mounted to some location on your host machine - like a "reverse mount".
This isn't possible with any docker command, but if all you're after is access to your VOLUME's files on the host, you can find them under the Docker root directory (normally /var/lib/docker). To find the exact location of the files, you can use docker inspect [container_id], or in the latest versions use the Docker API.
See cpuguy's answer in this github issue: https://github.com/docker/docker/issues/12853#issuecomment-123953258 for more details.
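For example, a hedged sketch that prints where each of a container's mounts lives on the host (container_id is a placeholder):
docker inspect --format '{{ json .Mounts }}' container_id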