I would like to get the host mount path from inside a Docker container. I can only find docker inspect commands, which get that information from the host. Could anyone help with that? Thanks.
You can use an environment variable if it's just a path.
You can write the path information to a file, map the file into the container, and parse it by container id, e.g.:
# docker_id
head -1 /proc/self/cgroup|cut -d/ -f3
>>> ...
{
"docker_id1": {
"paths": [
"/test:/home",
"/test1:/home1"
]
}
}
Or generate that file from docker inspect output.
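A minimal sketch of the lookup described above, assuming a cgroup v1 host (the /proc/self/cgroup format differs under cgroup v2) and a hypothetical mapped file /config/paths.json:

```shell
# Extract the container id from the cgroup file (cgroup v1 layout;
# the third path component is the full container id).
container_id=$(head -1 /proc/self/cgroup | cut -d/ -f3)
echo "container id: $container_id"

# If jq is available in the image, the mapped JSON shown above could
# then be queried for this container's host paths, e.g.:
#   jq --arg id "$container_id" '.[$id].paths[]' /config/paths.json
```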
Or you can cat /proc/mounts inside the container; it contains the mount information:
cgroup /sys/fs/cgroup/freezer cgroup ro,nosuid,nodev,noexec,relatime,freezer 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
/dev/mapper/VolGroup00-LogVol03 /usr/share/elasticsearch/data xfs rw,relatime,attr2,inode64,noquota 0 0 # <-- here
proc /proc/bus proc ro,relatime 0 0
proc /proc/fs proc ro,relatime 0 0
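Note that /proc/mounts shows the backing device, not the original host directory of a bind mount. A small sketch for picking out the device of a known mount point (the elasticsearch path is the example from the listing above):

```shell
# Print the device (field 1) whose mount point (field 2) matches.
mountpoint="/usr/share/elasticsearch/data"   # example path from above
awk -v mp="$mountpoint" '$2 == mp { print $1 }' /proc/mounts
```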
I think one way could be to pass it as an environment variable when running the container:
docker run -e HOST_MOUNT_PATH=wanted_path -ti ubuntu:18.04 bash
Inside the container you can check it with:
echo $HOST_MOUNT_PATH
Related
I enter # docker ps -s and it shows:
a658 gitlab/gitlab-ce:15.0.2-ce.0 ... 2.62MB (virtual 2.49GB)
But, when I enter # du -h /var/lib/docker/containers/ --max-depth=1 it shows:
20G /var/lib/docker/containers/a658
The GitLab container mounts the /srv folder by default; the GitLab database lives there, but it also takes up little space:
# du -h /srv/gitlab/ --max-depth=1
184K /srv/gitlab/config
428M /srv/gitlab/logs
2.8G /srv/gitlab/data
3.2G /srv/gitlab/
What could it be? This worries me a lot, because the host system is running out of disk space and I cannot figure out what exactly is eating it all or where to look. At first I thought it was the GitLab logs, but the corresponding folders are incomparably small. How can I check this and clean up the excess?
Going inside the container, I see that it also takes up very little space:
# docker exec -it a658 bash
# du -h / --max-depth=1 | sort -h
0 /boot
0 /home
0 /media
0 /mnt
0 /proc
0 /srv
0 /sys
4.0K /tmp
12K /root
16K /run
28K /assets
1.5M /etc
7.7M /dev
90M /usr
2.5G /opt
3.2G /var
5.7G /
P.S. If it matters, host system on CentOS 7
I tried to find similar questions, but their answers did not help me.
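In cases like this, a common culprit is the container's json-file log: Docker's default logging driver writes to a &lt;id&gt;-json.log file inside /var/lib/docker/containers/&lt;id&gt;/ and it grows without bound unless a max-size log option is configured. A sketch for listing the largest entries in that directory on the host (the a658 prefix is the example container id from above):

```shell
# Show the five largest entries under the container's state directory
# on the host; if logging is the problem, the *-json.log file will
# dominate the list.
du -ah /var/lib/docker/containers/a658*/ 2>/dev/null | sort -h | tail -n 5
```

If the log turns out to be the problem, it can be truncated in place with truncate -s 0 on the file, and capped going forward with the json-file driver's max-size option in /etc/docker/daemon.json.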
I would like to start a VM on Google Cloud Console with more memory in /dev/shm. The only way I've figured out how to do this is by passing the --shm-size argument to the docker run command somewhere. But I don't know where to do this when creating a VM instance with a specific Docker image in Google Cloud Console. Any ideas? Would it be possible to resize /dev/shm while the container is running?
You can change the size of /dev/shm while your VM instance is running (after VM creation) with the command sudo mount -o remount,size=8G /dev/shm; you can also use a startup-script to apply this command on each boot. Please have a look at my steps below:
create a VM instance (optional):
gcloud compute instances create instance-1 --zone=europe-west3-a --machine-type=e2-medium --image=ubuntu-2004-focal-v20210223 --image-project=ubuntu-os-cloud
SSH into the VM instance:
gcloud compute ssh instance-1 --zone=europe-west3-a
change the size of /dev/shm:
instance-1:~$ df | grep shm
tmpfs 2014932 0 2014932 0% /dev/shm
instance-1:~$ sudo mount -o remount,size=8G /dev/shm
instance-1:~$ df | grep shm
tmpfs 8388608 0 8388608 0% /dev/shm
add a startup-script (optional):
#!/bin/bash
mount -o remount,size=8G /dev/shm
restart the VM instance, SSH and check /dev/shm (optional):
$ gcloud compute ssh instance-1 --zone=europe-west3-a
instance-1:~$ df | grep shm
tmpfs 8388608 0 8388608 0% /dev/shm
Alternatively, you can try changing /etc/fstab and creating a custom image.
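The /etc/fstab route mentioned above would look something like this (a sketch; the size is an example, and on systemd-based images an explicit /dev/shm entry typically overrides the default size at boot):

```
tmpfs /dev/shm tmpfs defaults,size=8G 0 0
```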
I have a server where I run some containers with volumes. All my volumes are in /var/lib/docker/volumes/ because docker is managing it. I use docker-compose to start my containers.
Recently, I tried to stop one of my containers but it was impossible:
$ docker-compose down
[17849] INTERNAL ERROR: cannot create temporary directory!
So, I checked how the data is mounted on the server:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,8G 0 7,8G 0% /dev
tmpfs 1,6G 1,9M 1,6G 1% /run
/dev/md3 20G 19G 0 100% /
tmpfs 7,9G 0 7,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup
/dev/md2 487M 147M 311M 33% /boot
/dev/md4 1,8T 1,7G 1,7T 1% /home
tmpfs 1,6G 0 1,6G 0% /run/user/1000
As you can see, / is only 20 GB, so it is full and I can't stop my containers using docker-compose.
My questions are:
Is there a simple solution to increase the available space on /, using /dev/md4?
Or can I move the volumes to another place without losing data?
This part of the Docker daemon is configurable. Best practice is to change the data folder; this could be done with OS-level Linux tricks like a symlink, but it's better to actually configure the Docker daemon to store its data elsewhere.
You can do that by editing the Docker command line (e.g. the systemd unit that starts the Docker daemon), or by changing /etc/docker/daemon.json.
The file should have this content:
{
"data-root": "/path/to/your/docker"
}
If you add a new hard drive, partition, or mount point, you can point this setting at it and Docker will store its data there.
I landed here as I had the very same issue. Even though some sources suggest you could do it with a symbolic link, this will cause all kinds of issues.
Depending on the OS and Docker version I had malformed images, weird errors or the docker-daemon refused to start.
Here is a solution, but it seems to vary a little from version to version. For me it was:
Open
/lib/systemd/system/docker.service
And change this line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to:
ExecStart=/usr/bin/dockerd -g /mnt/WHATEVERYOUR/PARTITIONIS/docker --containerd=/run/containerd/containerd.sock
I solved it by creating a symbolic link to a partition with a bigger size:
ln -s /scratch/docker_meta /var/lib/docker
/scratch/docker_meta is the folder that I have in a bigger partition.
Do a bind mount.
For example, moving /docker/volumes to /mnt/large.
Append this line to /etc/fstab:
/mnt/large /docker/volumes none bind 0 0
And then:
mv /docker/volumes/* /mnt/large/
mount /docker/volumes
Do not forget to chown and chmod /mnt/large first if you are using non-root Docker.
I changed Docker's storage base directory from /var/lib/docker to /home/docker by changing DOCKER_OPTIONS in /etc/default/docker as explained in this other question. After that, I rsynced the old /var/lib/docker to the new place.
Here is my Docker configuration file:
# Docker Upstart and SysVinit configuration file
# ....
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -g /home/docker"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
Everything was working fine after I rebooted. However, I started getting a "no space left on device" in my containers from time to time. When this error happens, if my container is up, I can't even do a mkdir. If the container is down and I try to start it, I get the following:
Error response from daemon: rpc error: code = 2 desc = "oci runtime
error: could not synchronise with container process: can't create
pivot_root dir , error mkdir .pivot_root: no space left on device"
However, I have space:
Filesystem Size Used Avail Use% Mounted on
udev 32G 4,0K 32G 1% /dev
tmpfs 6,3G 1,6M 6,3G 1% /run
/dev/sda1 92G 56G 32G 64% /
none 4,0K 0 4,0K 0% /sys/fs/cgroup
none 5,0M 0 5,0M 0% /run/lock
none 32G 472K 32G 1% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda5 1,6T 790G 762G 51% /home
I suspect that perhaps I didn't do the storage migration correctly. Does someone know what might be happening?
Running out of disk space can also include inode limits. You can check those with df -i. This post on Unix.SE walks you through the steps required to increase the number of inodes available. Short of that, you can delete files to free up the inodes.
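To narrow down where the inodes are going, a rough sketch (the starting path is an example; Docker's data directory is a frequent offender):

```shell
# Count the filesystem entries (inode consumers) under each
# subdirectory and sort ascending, so the heaviest is printed last.
for d in /var/lib/docker/*/; do
  printf '%8d %s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -n
```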
You can try cleaning up images that aren't in use. This fixed the problem for me:
docker images -aq -f 'dangling=true' | xargs docker rmi
As well as volumes. This will remove dangling volumes:
docker volume ls -q -f 'dangling=true' | xargs docker volume rm
https://success.docker.com/article/error-message-no-space-left-on-device-in-default-machine
I am running the docker release of openFOAM. While running openFOAM, I can't access any of the volumes that I have set up in /mnt. I can see them when I run:
bash-4.1$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 29.8G 0 disk
|-sda1 8:1 0 200M 0 part
|-sda2 8:2 0 500M 0 part
`-sda3 8:3 0 29.1G 0 part
`-luks-c551009c-5ab5-4526-85fa-45105a445734 (dm-0)
253:0 0 29.1G 0 crypt
|-korora_a00387863--6-root (dm-1) 253:1 0 26.1G 0 lvm /etc/passwd
`-korora_a00387863--6-swap (dm-2) 253:2 0 3G 0 lvm
sdb 8:16 0 465.8G 0 disk
|-sdb1 8:17 0 137.9G 0 part
|-sdb2 8:18 0 158.7G 0 part
`-sdb3 8:19 0 169.2G 0 part
sdg 8:96 1 15G 0 disk
loop0 7:0 0 100G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
loop1 7:1 0 2G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
However, none of these show up in /dev, so I don't know how to mount the volumes I want. It seems like there should be a better solution than manually mounting the volume each time I use openFOAM. Any ideas would be welcome; I don't understand the Docker documentation.
You haven't shown us exactly what you mean by "volumes set up in /mnt", so there will be a lot of guesswork in this answer as to what you're actually trying to do.
If you are trying to mount block devices on your host and make them available in your container, the normal way to go about this is:
Mount the device somewhere on your host (e.g., in /mnt)
Use the -v argument to docker run to expose that mountpoint inside a container, as in:
docker run -v /mnt/volume1:/volume1 alpine sh
The above command line would expose /mnt/volume1 on the host as /volume1 inside the container.
If you find that you are often running the same container with the same set of volumes, and you're tired of long command lines, just drop the docker run command into a shell script, or consider using something like docker-compose to help automate things.
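For example, a minimal docker-compose.yml equivalent of the command above (the service name and image are placeholders; substitute your openFOAM image):

```
services:
  openfoam:
    image: alpine                   # placeholder image
    volumes:
      - /mnt/volume1:/volume1       # host path : container path
```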