LVM2: Failing to pvcreate a block device

I'm trying to make use of the LVM2 functionality in Linux (CentOS 6.0).
When trying to perform the first step of creating a PV on a specific block device, I get the following error message:
[root@localhost /]# pvcreate /dev/sdb
Can't open /dev/sdb exclusively. Mounted filesystem?
/dev/sdb is not mounted, and its partition table was deleted.
I should also mention that /dev/sdb used to represent a larger block device (about four times larger) and was shrunk by reconfiguring the hardware RAID (I split the disk into four in the RAID controller).
Has anyone encountered this error before and knows how to proceed from here?

Maybe device-mapper is 'stealing' this device. Try this:
[root@host ~]# dmsetup ls
sdb (253, 2)
VolGroup00-LogVol01 (253, 1)
VolGroup00-LogVol00 (253, 0)
If you find an sdb device listed as in the example above, remove it using dmsetup and then create the physical volume:
[root@host ~]# dmsetup remove sdb
[root@host ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
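If pvcreate still fails after the removal, repeat the listing to confirm the stale mapping is really gone (a quick sanity check; sdb should no longer appear):
[root@host ~]# dmsetup ls
VolGroup00-LogVol01 (253, 1)
VolGroup00-LogVol00 (253, 0)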

[root@localhost /]# pvcreate -vvvvv /dev/sdb
will output more details, and you can use lsof to check whether the block device is being held open by another process.
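Concretely, those checks might look like this (a sketch; /dev/sdb is the device from the question, and fuser is shown as an alternative in case lsof is unavailable):
[root@localhost /]# pvcreate -vvvvv /dev/sdb
[root@localhost /]# lsof /dev/sdb
[root@localhost /]# fuser -v /dev/sdb
If lsof or fuser prints a process, stop or kill it and retry pvcreate.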

Related

Cannot open vfio device in docker container as non-root user

I have enabled virtualization in the BIOS and enabled the IOMMU on kernel command line (intel_iommu=on).
I bound a Solarflare NIC to the vfio-pci driver and added a udev rule to ensure the vfio device is accessible by my non-root user (e.g., /etc/udev/rules.d/10-vfio-docker-users.rules):
SUBSYSTEM=="vfio", OWNER="myuser", GROUP=="myuser"
I've launched my container with -u 1000 and mapped /dev (-v /dev:/dev). Running in an interactive shell in the container, I am able to verify that the device is there with the permissions set by my udev rule:
bash-4.2$ whoami
whoami: unknown uid 1000
bash-4.2$ ls -al /dev/vfio/35
crw-rw---- 1 1000 1000 236, 0 Jan 25 00:23 /dev/vfio/35
However, if I try to open it (e.g., python -c "open('/dev/vfio/35', 'rb')"), I get IOError: [Errno 1] Operation not permitted: '/dev/vfio/35'. However, the same command works outside the container as the normal non-root user with user-id 1000!
It seems that there are additional security measures that are not allowing me to access the vfio device within the container. What am I missing?
Docker drops a number of privileges by default, including the ability to access most devices. You can explicitly grant access to a device using the --device flag, which would look something like:
docker run --device /dev/vfio/35 ...
Alternately, you can ask Docker not to drop any privileges:
docker run --privileged ...
You'll note that in both of the above examples it was not necessary to explicitly bind-mount /dev; in the first case, the device(s) you have exposed with --device will show up, and in the second case you see the host's /dev by default.
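Putting that together with the non-root user from the question, a run command might look like this (a sketch; the image name myimage is an assumption, and VFIO programs typically also need the container device /dev/vfio/vfio in addition to the group node):
docker run -u 1000 --device /dev/vfio/vfio --device /dev/vfio/35 myimage \
    python -c "open('/dev/vfio/35', 'rb')"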

How to use loop devices locally in Docker

I want to use loop devices in a Docker container locally. That means, when running a couple of containers, each of them should have, for instance, a /dev/loop0 connected to a file local to the container. I tried:
[root@600bbfb452d1 /]# mknod /dev/loop20 b 7 20
[root@600bbfb452d1 /]# dd if=/dev/random of=loopfile1 bs=1M count=2
[root@600bbfb452d1 /]# losetup -a | grep 20
/dev/loop20: [0049]:3553002 (/loopfile1)
So far so good. But going back to the host, I can see:
[loewe#linux-2 ~]$ losetup -a | grep 20
/dev/loop20: []: (/loopfile1)
The loop device /dev/loop20 was also created in the host's /dev - as I feared, because of the tmpfs mount - and, worse, the container-local file "loopfile1" is attached to the host's loop device.
I tried to unmount the /dev filesystem in the container but didn't succeed (device busy, but no process visible with lsof).
Any idea what I am doing wrong?
BTW: using iSCSI devices in a container should have the same problem.
Thanks Heiko

Docker not seeing usb /dev/ttyACM0 after unplugging and then replugging

I'm running an Ubuntu 18.04 Docker container which I use to compile code and flash IoT devices. I use this command to run the container:
docker run --privileged --device=/dev/ttyACM0 -it -v disc_vol1:/root/zephyr zephyr
This allows me to see the USB devices. However, if I for some reason need to unplug and replug the devices while the container is still running, Docker no longer sees them until I restart the container.
Is there a solution for this problem?
DMESG after unplugging and then replugging:
[388387.919792] usb 3-3: USB disconnect, device number 47
[388387.919796] usb 3-3.1: USB disconnect, device number 48
[388387.957792] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388406.517953] usb 3-1: new high-speed USB device number 51 using xhci_hcd
[388406.666047] usb 3-1: New USB device found, idVendor=0424, idProduct=2422
[388406.666051] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388406.666415] hub 3-1:1.0: USB hub found
[388406.666438] hub 3-1:1.0: 2 ports detected
[388407.881910] usb 3-1.1: new full-speed USB device number 52 using xhci_hcd
[388407.986919] usb 3-1.1: New USB device found, idVendor=0d28, idProduct=0204
[388407.986924] usb 3-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[388407.986927] usb 3-1.1: Product: DAPLink CMSIS-DAP
[388407.986929] usb 3-1.1: Manufacturer: ARM
[388407.986932] usb 3-1.1: SerialNumber: 1026000015afe1e800000000000000000000000097969902
[388407.987898] usb-storage 3-1.1:1.0: USB Mass Storage device detected
[388407.988131] scsi host10: usb-storage 3-1.1:1.0
[388407.991188] hid-generic 0003:0D28:0204.00A9: hiddev0,hidraw3: USB HID v1.00 Device [ARM DAPLink CMSIS-DAP] on usb-0000:00:14.0-1.1/input3
[388407.991926] cdc_acm 3-1.1:1.1: ttyACM0: USB ACM device
[388409.014753] scsi 10:0:0:0: Direct-Access MBED VFS 0.1 PQ: 0 ANSI: 2
[388409.015336] sd 10:0:0:0: Attached scsi generic sg2 type 0
[388409.015632] sd 10:0:0:0: [sdb] 131200 512-byte logical blocks: (67.2 MB/64.1 MiB)
[388409.015888] sd 10:0:0:0: [sdb] Write Protect is off
[388409.015892] sd 10:0:0:0: [sdb] Mode Sense: 03 00 00 00
[388409.016103] sd 10:0:0:0: [sdb] No Caching mode page found
[388409.016109] sd 10:0:0:0: [sdb] Assuming drive cache: write through
[388409.045555] sd 10:0:0:0: [sdb] Attached SCSI removable disk
[388482.439345] CIFS VFS: Free previous auth_key.response = 00000000df9e4b01
[388521.789341] CIFS VFS: Free previous auth_key.response = 0000000071020f34
[388554.099064] CIFS VFS: Free previous auth_key.response = 000000002a3aa60b
[388590.132004] CIFS VFS: Free previous auth_key.response = 000000009bed9fb5
[388606.372288] usb 3-1: USB disconnect, device number 51
[388606.372292] usb 3-1.1: USB disconnect, device number 52
[388606.415803] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388622.643954] usb 3-3: new high-speed USB device number 53 using xhci_hcd
[388622.792057] usb 3-3: New USB device found, idVendor=0424, idProduct=2422
[388622.792061] usb 3-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388622.792451] hub 3-3:1.0: USB hub found
[388622.792479] hub 3-3:1.0: 2 ports detected
And when I do ls /dev/ttyACM0 or ls /dev/ttyACM1, nothing changes whether the device is plugged in or unplugged. The problem is that I cannot flash or see the devices with, for example, pyocd: when I do pyocd list, the devices won't show up until I restart the container.
Problem
The problem lies in the device node creation mechanism.
As you can read in the LFS docs, in 9.3.2.2. Device Node Creation:
Device files are created by the kernel via the devtmpfs filesystem.
By comparing mount entries on the host:
$ mount
...
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=16259904k,nr_inodes=4064976,mode=755,inode64)
...
...and in the container:
# mount
...
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64)
...
...you can notice that the /dev filesystem in the container isn't the same thing as on the host.
It seems to me that a privileged Docker container recreates the /dev structure while starting. Later, the kernel does create the device node in devtmpfs, but as long as the container uses a separate filesystem for devices, the node isn't created there. As confirmation, you can notice that after unplugging a device (one that was connected before the container started), its node still persists inside the container but disappears from the host.
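As an illustration of that last point (a sketch; the device name is an example), after unplugging a device that existed when the container started, you might see:
$ ls -l /dev/ttyACM0                # on the host
ls: cannot access '/dev/ttyACM0': No such file or directory
# ls -l /dev/ttyACM0                # inside the container
crw-rw---- 1 root dialout 166, 0 Dec  5 14:25 /dev/ttyACM0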
Solution
You can work around it by creating the node manually. In this example I plugged in /dev/ttyUSB1 while the container was running.
On the host machine, find the major and minor device numbers:
$ ls -la /dev/ttyUSB*
crw-rw----+ 1 root plugdev 188, 0 Dec  5 15:25 /dev/ttyUSB0
crw-rw----+ 1 root plugdev 188, 1 Dec  5 15:26 /dev/ttyUSB1
# ^^^^^^ major and minor number
And create the corresponding node inside the container:
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
# mknod /dev/ttyUSB1 c 188 1
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
crw-r--r-- 1 root root 188, 1 Dec 5 15:16 /dev/ttyUSB1
The device should work.
Enhancement
You can also automate node creation by installing udev and writing some custom rules inside the container.
I found this repo that successfully sets up a udev instance inside a container - udevadm monitor correctly reflects udev events, matching the host.
The last thing is to write some udev rules that will automagically create the corresponding nodes inside the container:
ACTION=="add", RUN+="mknod %N c %M %m"
ACTION=="remove", RUN+="rm %N"
I haven't tested it yet, but I see no reason why it would not work.
Better enhancement
You don't need to install udev inside the container. You can run mknod there from a script that runs on the host machine (triggered by the host's udev), as described here. It would be good to handle removing nodes as well.
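A minimal sketch of that host-side approach (the rule file path, the script path, and the container name zephyr are all assumptions):
# /etc/udev/rules.d/99-container-tty.rules on the host:
ACTION=="add", SUBSYSTEM=="tty", RUN+="/usr/local/bin/mirror-node.sh add %k %M %m"
ACTION=="remove", SUBSYSTEM=="tty", RUN+="/usr/local/bin/mirror-node.sh remove %k"
# /usr/local/bin/mirror-node.sh, mirroring tty nodes into the running container:
#!/bin/sh
action="$1"; name="$2"; major="$3"; minor="$4"
if [ "$action" = "add" ]; then
    docker exec zephyr mknod "/dev/$name" c "$major" "$minor"
else
    docker exec zephyr rm -f "/dev/$name"
fi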

tar fills up my HDD

I'm trying to tar a pretty big folder (~11 GB), and while tarring, my VM crashes because its disk is full. But... I still have plenty of room available on all disks but /:
$ sudo df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 799M 9,3M 790M 2% /run
/dev/sda1 9,1G 3,1G 5,6G 36% /
/dev/sda2 69G 37G 29G 57% /home
/dev/sdb1 197G 87G 100G 47% /docker
I assume tar is buffering somewhere on / and fills it up before my OS crashes. However, I have no idea how to prevent this. Do you guys have any ideas?
Cheers,
Olivier
Tar normally builds the archive in the current directory, as a hidden file. Try cd'ing to one of your larger partitions' mount points and tarring from there to see if it makes a difference. You may also be running out of inodes:
No Space Left on Device, Running out of Inodes
I ran into a similar problem with a server because of too many small files. Even while you have plenty of free space left, you might still run into this issue.
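Along those lines, the simplest fix for the original problem is to write the archive onto one of the larger partitions explicitly, and to check inodes as well as blocks (a sketch; paths are examples):
$ cd /home                                # a partition with plenty of space
$ tar -czf backup.tar.gz /path/to/big-folder
$ df -i                                   # inode usage, as opposed to df -h's block usage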

Docker increase disk space

I have Docker running and it gives me a disk space warning. How can I increase the Docker disk space and start again? (With the same container.)
Let's say I want to give it something like 15 GB.
You can also increase the disk space available to Docker through the Docker GUI.
I assume you are talking about disk space to run your containers.
Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default location used by Docker. You can change it with the -g daemon option.
If you don't have enough space, you may have to repartition your OS drives so that you have over 15 GB. If you are using boot2docker or docker-machine, you will have to grow the volume on your virtual machine. It will vary depending on what you are using for virtualization (e.g., VirtualBox, VMware, etc.).
For example, if you are using VirtualBox and docker-machine, you can start with something like this for a 40 GB VM:
docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default
I ran into a similar problem with my docker-vm (which is Alpine Linux on VMware Fusion on OS X):
write error: no space left on device alpinevm:/mnt/hgfs
failed to build: .. no space left on device
.. eventually this guide helped me to resize/expand my docker volume.
TL;DR:
1 - Check size of partition containing /var/lib/docker
> df -h
/dev/sda3 17.6G 4.1G 12.6G 25% /var/lib/docker
look for '/dev/sdaN', where N is the partition holding '/var/lib/docker'; in my case it is /dev/sda3.
2 - Shut down your VM, open VM Settings > Hard Disk(s), change the size of your 'virtual_disk.vmdk' (or whatever your machine's virtual disk is called), then click Apply (see this guide).
3 - Install cfdisk and e2fsprogs-extra, which contains resize2fs:
> apk add cfdisk
> apk add e2fsprogs-extra
4 - Run cfdisk and resize/expand /dev/sda3
> cfdisk
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M 83 Linux
/dev/sda2 206848 4241407 4034560 1.9G 82 Linux swap / Solaris
/dev/sda3 4241408 83886079 79644672 12.6G 83 Linux
[Bootable] [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]
.. press down/up to select '/dev/sda3'
.. press left/right/enter to select 'Resize' -> 'Write' -> 'Quit'
5 - Run resize2fs to expand the file system of /dev/sda3
> resize2fs /dev/sda3
6 - Verify resized volume
> df -h
/dev/sda3 37.3G 4.1G 31.4G 12% /var/lib/docker
To increase the space available to Docker you will have to increase the size of your docker-pool. If you run
lvs
you will see the docker-pool logical volume and its size. If your docker-pool sits on a volume group that has free space, you can simply extend the docker-pool LV:
lvextend -l +100%FREE <path_to_lv>
# For example:
# lvextend -l +100%FREE /dev/VolGroup00/docker-pool
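Before extending, you can confirm the volume group actually has free space (a sketch; the VG name is an example):
vgs VolGroup00
# The VFree column shows how much space is left to extend into.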
You can check out more Docker disk space tips here.
Thanks
Docker stores all layers/images in its storage-driver format (e.g., aufs) under the default /var/lib/docker directory.
If you are getting a disk space warning because of Docker, then there are probably a lot of Docker images and you need to clean them up.
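A sketch of the cleanup (docker system prune is available in modern Docker; -a also removes all unused images rather than only dangling ones):
docker system prune        # remove stopped containers, unused networks, dangling images
docker system prune -a     # additionally remove all images not used by a container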
If you have the option to add disk space, you can create a separate, bigger partition and mount /var/lib/docker there, which will keep the root partition from filling up.
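A minimal sketch of that relocation (paths are examples; stop Docker first so the copy is consistent):
systemctl stop docker
rsync -aP /var/lib/docker/ /bigdisk/docker/    # copy the existing data
mv /var/lib/docker /var/lib/docker.old         # keep a backup until verified
ln -s /bigdisk/docker /var/lib/docker          # or bind-mount the new partition
systemctl start docker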
Some extra information on managing disk space for Docker can be found here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html
