I am running the Docker release of OpenFOAM. While running OpenFOAM, I can't access any of the volumes that I have set up in /mnt. I can see them when I run:
bash-4.1$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 29.8G 0 disk
|-sda1 8:1 0 200M 0 part
|-sda2 8:2 0 500M 0 part
`-sda3 8:3 0 29.1G 0 part
`-luks-c551009c-5ab5-4526-85fa-45105a445734 (dm-0)
253:0 0 29.1G 0 crypt
|-korora_a00387863--6-root (dm-1) 253:1 0 26.1G 0 lvm /etc/passwd
`-korora_a00387863--6-swap (dm-2) 253:2 0 3G 0 lvm
sdb 8:16 0 465.8G 0 disk
|-sdb1 8:17 0 137.9G 0 part
|-sdb2 8:18 0 158.7G 0 part
`-sdb3 8:19 0 169.2G 0 part
sdg 8:96 1 15G 0 disk
loop0 7:0 0 100G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
loop1 7:1 0 2G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
However, none of these show up in /dev, so I don't know how to mount the volumes that I want. It seems like there is a better solution than manually mounting the volume each time I use openFOAM. Any ideas would be welcome, I don't understand the docker documentation.
You haven't shown us exactly what you mean by "volumes set up in /mnt", so there will be a lot of guesswork in this answer w/r/t what you're actually trying to do.
If you are trying to mount block devices on your host and make them available in your container, the normal way to go about this is:
Mount the device somewhere on your host (e.g., in /mnt)
Use the -v argument to docker run to expose that mountpoint inside a container, as in:
docker run -v /mnt/volume1:/volume1 alpine sh
The above command line would expose /mnt/volume1 on the host as /volume1 inside the container.
If you find that you are often running the same container with the same set of volumes, and you're tired of long command lines, just drop the docker run command into a shell script, or consider using something like docker-compose to help automate things.
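For example, a minimal docker-compose.yml sketch for the same bind mount might look like the following (the service name and image are placeholders, not anything from your setup):
version: "3"
services:
  openfoam:
    image: your-openfoam-image    # placeholder: whatever image you currently docker run
    volumes:
      - /mnt/volume1:/volume1     # host path : container path
With that in place, docker-compose up -d starts the container with the volume already attached, and you never have to retype the -v arguments.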
I want to resize my /dev/bcache0 to the full HDD size.
But when I use resize2fs /dev/bcache0 it tells me:
[localhost-PC ~]# resize2fs /dev/bcache0
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/bcache0
Couldn't find valid filesystem superblock.
I tried resizing the bcache backing partition /dev/sdb1 as well, with the same result:
[localhost-PC ~]# resize2fs /dev/sdb1
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/sdb1
Couldn't find valid filesystem superblock.
Below is my disk layout:
[localhost-PC ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 300M 0 part /boot/efi
├─sda2 8:2 0 2G 0 part /boot
├─sda3 8:3 0 17.2G 0 part [SWAP]
└─sda4 8:4 0 204.1G 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
sdb 8:16 0 3.6T 0 disk
└─sdb1 8:18 0 1.7T 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
Thank you
According to lsblk, /dev/bcache0 is an LVM physical volume within a volume group, hence, in order to resize the root filesystem and use all the space available on sdb, you must:
Grow sdb1 to 3.6T (see the parted sketch after this list; https://www.gnu.org/software/parted/manual/html_node/parted_31.html)
reboot
pvresize /dev/bcache0
lvextend -l +100%FREE /dev/VolumeGroup00/root /dev/bcache0
resize2fs /dev/VolumeGroup00/root
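For the first step, a minimal sketch using parted (assuming sdb1 is the only partition on /dev/sdb and should take the whole disk; double-check the partition number before running this):
parted /dev/sdb
(parted) resizepart 1 100%
(parted) quit
After the reboot, pvresize will then see the larger bcache device, and the remaining steps extend the LV and the filesystem into the new space.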
Cheers!
I would like to get the host mount path from inside a Docker container. I can only find "docker inspect" commands, which get that information from the host. Could anyone help with that? Thanks.
You can use environment variables if it's just a path.
You can also write the path information to a file, map the file into the container, and look up the entry for your container's ID.
e.g.:
# docker_id
head -1 /proc/self/cgroup|cut -d/ -f3
>>> ...
{
"docker_id1": {
"paths": [
"/test:/home",
"/test1:/home1"
]
}
}
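A minimal sketch of that lookup (assuming the JSON above is mapped into the container at /paths.json and that jq is available in the image; both are assumptions, not part of the question):
# resolve this container's ID from the cgroup file, then pull its paths out of the mapped JSON
DOCKER_ID=$(head -1 /proc/self/cgroup | cut -d/ -f3)
jq -r --arg id "$DOCKER_ID" '.[$id].paths[]' /paths.json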
Or use the output of docker inspect.
Or you can cat /proc/mounts inside the container; it contains the mount information:
cgroup /sys/fs/cgroup/freezer cgroup ro,nosuid,nodev,noexec,relatime,freezer 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
/dev/mapper/VolGroup00-LogVol03 /usr/share/elasticsearch/data xfs rw,relatime,attr2,inode64,noquota 0 0 # <-- here
proc /proc/bus proc ro,relatime 0 0
proc /proc/fs proc ro,relatime 0 0
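A small convenience filter, assuming the bind-mounted volume shows up with a /dev/... source as in the line marked above, is to keep only the device-backed entries:
grep '^/dev/' /proc/mounts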
I think one way could be to pass it as an environment variable when running the container:
docker run -e HOST_MOUNT_PATH=wanted_path -ti ubuntu:18.04 bash
Inside container you can check with
echo $HOST_MOUNT_PATH
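If you control the run command anyway, a small sketch (HOST_PATH is just an illustrative shell variable) is to use the same value both for the bind mount and for the variable, so the container always knows where its data lives on the host:
HOST_PATH=/srv/data
docker run -v "$HOST_PATH":/data -e HOST_MOUNT_PATH="$HOST_PATH" -ti ubuntu:18.04 bash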
I have a server where I run some containers with volumes. All my volumes are in /var/lib/docker/volumes/ because Docker manages them. I use docker-compose to start my containers.
Recently, I tried to stop one of my containers, but it was impossible:
$ docker-compose down
[17849] INTERNAL ERROR: cannot create temporary directory!
So I checked how the data is mounted on the server:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,8G 0 7,8G 0% /dev
tmpfs 1,6G 1,9M 1,6G 1% /run
/dev/md3 20G 19G 0 100% /
tmpfs 7,9G 0 7,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup
/dev/md2 487M 147M 311M 33% /boot
/dev/md4 1,8T 1,7G 1,7T 1% /home
tmpfs 1,6G 0 1,6G 0% /run/user/1000
As you can see, / is only 20 GB, so it is full and I can't stop my containers using docker-compose.
My questions are:
Is there a simple solution to increase the available space on /, using /dev/md4?
Or can I move the volumes to another place without losing data?
This part of the Docker daemon is configurable. Best practice is to change the data folder; this can be done with OS-level tricks like a symlink, but I would say it's better to actually configure the Docker daemon to store its data elsewhere!
You can do that by editing the Docker command line (e.g. the systemd unit that starts the Docker daemon), or by changing /etc/docker/daemon.json.
The file should have this content:
{
"data-root": "/path/to/your/docker"
}
If you add a new hard drive, partition, or mount point you can add it here and docker will store its data there.
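A hedged sketch of the full move (the target /home/docker-data is just an example; in your case it would sit on the large /dev/md4 filesystem mounted at /home):
sudo systemctl stop docker
sudo mkdir -p /home/docker-data
sudo rsync -aHAX /var/lib/docker/ /home/docker-data/   # copy existing images/volumes
# then put "data-root": "/home/docker-data" into /etc/docker/daemon.json
sudo systemctl start docker
Once everything looks good, the old /var/lib/docker can be removed to free up space on /.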
I landed here as I had the very same issue. Even though some sources suggest you could do it with a symbolic link, this caused all kinds of issues for me: depending on the OS and Docker version I had malformed images, weird errors, or the Docker daemon refused to start.
There is a solution, but it seems to vary a little from version to version. For me it was:
Open
/lib/systemd/system/docker.service
And change this line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to:
ExecStart=/usr/bin/dockerd -g /mnt/WHATEVERYOUR/PARTITIONIS/docker --containerd=/run/containerd/containerd.sock
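After editing the unit file, the change only takes effect once systemd re-reads it and Docker is restarted:
sudo systemctl daemon-reload
sudo systemctl restart docker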
I solved it by creating a symbolic link to a partition with a bigger size:
ln -s /scratch/docker_meta /var/lib/docker
/scratch/docker_meta is the folder that I have in a bigger partition.
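For completeness, a sketch of that approach (stop Docker first, and keep in mind the warning in the previous answer that symlinks can misbehave with some Docker versions; /scratch/docker_meta is the example path from above):
sudo systemctl stop docker
sudo mv /var/lib/docker /scratch/docker_meta    # move the existing data to the bigger partition
sudo ln -s /scratch/docker_meta /var/lib/docker
sudo systemctl start docker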
Do a bind mount.
For example, moving /docker/volumes to /mnt/large.
Append this line to /etc/fstab:
/mnt/large /docker/volumes none bind 0 0
And then:
mv /docker/volumes/* /mnt/large/
mount /docker/volumes
Do not forget chown and chmod of /mnt/large first, if you are using non-root docker.
I'm doing some testing in which I utilise iSCSI. Strange things are happening and I'm looking for an explanation. If anyone could suggest something, I'd be really grateful. So here we go:
There are two VMs running Debian 9. One is an iSCSI target (server), the other is an iSCSI initiator (client).
The server shares a disk (i.e. /dev/sdb) or a partition on that disk (i.e. /dev/sdb1) as an iSCSI LUN. The client connects to the server and properly recognizes the LUN as a new device (i.e. /dev/sdc). Then LVM is configured on /dev/sdc. Nothing out of the ordinary: PV on /dev/sdc, VG on the PV, LV on the VG, some data on the LV. It all works the way it should.
Then I shut down both machines and start them up again. All important services are set to autostart, both machines see each other, and the client creates a session (connects to the iSCSI server). But now the magic happens:
Despite the client being connected to the server, it no longer sees the LUN - so no /dev/sdc device or PV / VG / LV on the client.
The server still displays the target (LUN) as being shared, but the LUN size is shown as "0" and the backing store path as "none". The PV / VG / LV are now also displayed on the iSCSI server.
My first idea was that the LVM metadata gets copied to the iSCSI server, but there are no lvm2-related packages on the server. Since these machines will be used (once I straighten out the iSCSI issues) for cluster tests, the lvm locking_type is already set to 3 (clustered locking with clvmd) on the iSCSI client - not sure if that makes a difference here. I also checked whether sharing the /dev/sdb1 partition makes any difference compared to sharing the /dev/sdb device - no difference. So currently I'm out of ideas. Could anyone assist? Thanks in advance!
before restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 8589 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/sdb
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
before restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
lv_001 vg2 -wi-a----- 4,00m
lv_002 vg2 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sdc 8:32 0 8G 0 disk
├─vg2-lv_001 254:1 0 4M 0 lvm
└─vg2-lv_002 254:2 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
after restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
├─vg2-lv_001 254:0 0 4M 0 lvm
└─vg2-lv_002 254:1 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
after restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
The server is detecting the LVM metadata on the backing device and activating the volume group at boot. Later, when it tries to share /dev/sdb (or /dev/sdb1), it can't, because the device is in use - which is why the LUN shows up with size 0 and no backing store path.
You can prevent this with a filter in lvm.conf on the server. If you don't need LVM at all on the server, you can just tell it to reject (avoid scanning) all block devices:
filter = [ "r/.*/" ]
Source: https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
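If you do need LVM for other disks on the server, a narrower filter that only rejects the shared disk (assuming it really is /dev/sdb, as in your output) would be, in the devices section of /etc/lvm/lvm.conf:
filter = [ "r|^/dev/sdb.*|", "a|.*|" ]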
I would like to run a docker container that requires a lot of memory on a machine that doesn't have much RAM. I have been trying to increase the swap space available for the container to no avail. Here is the last command I tried:
docker run -d -m 1000M --memory-swap=10000M --name=my_container my_image
Following these tips on how to check memory metrics I found the following:
$ boot2docker ssh
docker@boot2docker:~$ cat /sys/fs/cgroup/memory/docker/35af5a072751c7af80ce7a255a01ab3c14b3ee0e3f15341f7bb22a777091c67b/memory.stat
cache 454656
rss 65015808
rss_huge 29360128
mapped_file 208896
writeback 0
swap 0
pgpgin 31532
pgpgout 22702
pgfault 49372
pgmajfault 0
inactive_anon 28672
active_anon 65183744
inactive_file 241664
active_file 16384
unevictable 0
hierarchical_memory_limit 1048576000
hierarchical_memsw_limit 10485760000
total_cache 454656
total_rss 65015808
total_rss_huge 29360128
total_mapped_file 208896
total_writeback 0
total_swap 0
total_pgpgin 31532
total_pgpgout 22702
total_pgfault 49372
total_pgmajfault 0
total_inactive_anon 28672
total_active_anon 65183744
total_inactive_file 241664
total_active_file 16384
total_unevictable 0
Is it possible to run a container that requires 5G of memory on a machine that only has 4G of physical memory?
This GitHub issue was very helpful in figuring out how to increase the swap space available in the boot2docker-vm. Adapting it to my situation I used the following commands to ssh into the boot2docker-vm and set up a new swapfile:
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
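One caveat: swapon only lasts until the boot2docker VM is restarted. If you want the swap re-enabled at boot, one option (treat the exact path as an assumption to verify on your boot2docker version) is to add the swapon call to the persistent boot hook:
echo 'swapon /mnt/sda1/swapfile' | sudo tee -a /var/lib/boot2docker/bootlocal.sh
sudo chmod +x /var/lib/boot2docker/bootlocal.sh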