LVM: my VG has no free PE but it has a lot of space, why?

everyone:
I use LVM on my Gentoo Linux machine, which has an HDD and an SSD, so I use the SSD as a cache for the HDD to speed things up. However, after several days, it seems even slower than using the HDD alone. I have tried to find the reason but so far I have failed. Here is the question that puzzles me, as written in the title:
As it shows below, my PV has no free PE to allocate:
lgl@pGentoo ~ $ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb5
VG Name pika
PV Size 150.00 GiB / not usable 1.69 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 38400
Free PE 0
Allocated PE 38400
PV UUID O1Db1I-zXss-5OLP-nlN6-OUFH-oqDf-8UjOFY
--- Physical volume ---
PV Name /dev/sda7
VG Name pika
PV Size 20.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5119
Free PE 235
Allocated PE 4884
PV UUID 4Hy6zL-gcpi-aRmI-GeUB-rEsw-Sa3R-Fd4Kpb
However, if I check my used space with df -h, I can see that only 21% of / is used (/ is on /dev/sdb5), while pvdisplay says that I have no free PE to allocate. Why?
lgl@pGentoo ~ $ sudo df -h
Filesystem Size Used Avail Use% Mounted on
none 3.8G 1.6M 3.8G 1% /run
udev 10M 0 10M 0% /dev
tmpfs 3.8G 116M 3.7G 3% /dev/shm
/dev/mapper/pika-data 148G 29G 112G 21% /
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
tmpfs 3.8G 1.9M 3.8G 1% /tmp
/dev/sda8 2.0G 43M 1.8G 3% /boot
/dev/sda1 496M 59M 438M 12% /boot/efi
/dev/sdb6 99G 4.5G 89G 5% /home
tmpfs 776M 20K 776M 1% /run/user/1000
thanks.

OK, I have found the answer to this question myself, thanks to Lone_Wolf's answer.
Allocatable yes (but full)
from pvdisplay means that you have assigned all the space on that PV to logical volumes.
df doesn't show LVM physical volumes; it only displays data about the filesystems on logical volumes.
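If you want to confirm where the extents went, LVM's reporting commands show the allocation at each layer (a generic check, not specific to this cache setup):
# List logical volumes, including hidden cache/pool volumes, and the PVs backing them
sudo lvs -a -o +devices
# Compare volume-group totals: VSize vs VFree
sudo vgs
df measures usage inside a filesystem on an LV, while Free PE measures how much of the PV is still unassigned to any LV, so a mostly empty filesystem can sit on a fully allocated PV.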

Related

Elasticsearch uses much more disk space than is shown in the indices stats

Elasticsearch (in Docker) uses much more disk space than is shown in the indices stats.
They show only 7.8gb:
curl 'localhost:9201/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open my_index LL319QgqSJ6FNDEh2ZQw8g 1 1 24896180 0 7.8gb 7.8gb
yellow open test u0IJ7cocSXSlRST_qCPhHg 1 1 8 0 5.5kb 5.5kb
but when I check the disk space inside the Docker container, I see the following:
docker exec -it es bash
df -h
Filesystem Size Used Avail Use% Mounted on
overlay 278G 51G 213G 20% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/vda1 278G 51G 213G 20% /etc/hosts
tmpfs 3.0G 0 3.0G 0% /proc/acpi
tmpfs 3.0G 0 3.0G 0% /sys/firmware
I've found this command:
curl 'localhost:9201/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
2 7.8gb 65gb 212.6gb 277.6gb 23 172.17.0.2 172.17.0.2 571fa89f411f
2 UNASSIGNED
But I can't understand what ES spends the 65 - 7.8 ≈ 57gb of disk space on.
As I understand it, disk.used shows 'Elasticsearch, including the translog and unassigned shards; the node's OS; any other applications or files on the node'. How can I decrease the disk consumption?
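One way to attribute the difference is to measure usage from inside the container; the data path below assumes the official image default of /usr/share/elasticsearch/data, so adjust it if yours differs:
# Space taken by the Elasticsearch data directory itself
docker exec es du -sh /usr/share/elasticsearch/data
# Top-level usage of the container's filesystem, largest last
docker exec es du -x -d1 -h / 2>/dev/null | sort -h
Whatever sits outside the data directory (logs, image layers, other files on the overlay) counts toward disk.used but not toward the indices stats.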

How to use disk space when memory is not enough to pull docker image

I am trying to pull a Docker image, and it fails with:
write /var/lib/docker/tmp/GetImageBlob375213140: no space left on device
Insight:
$ sudo docker info
Total Memory: 7.544GiB
[ec2-user@ip-172-31-93-184 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 444K 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 24G 7.9G 17G 33% /
tmpfs 773M 0 773M 0% /run/user/1000
[ec2-user@ip-172-31-93-184 ~]$ free -h
total used free shared buff/cache available
Mem: 7.5G 189M 7.1G 456K 309M 7.1G
Swap: 0B 0B 0B
How could I use disk space? I think it has something to do with swap.
Please let me know if any other info could help in understanding the issue. Thanks in advance.
Below are the commands that can be used to add swap space backed by the hard disk.
$ sudo fallocate -l 1g /mnt/1GiB.swap
$ sudo chmod 600 /mnt/1GiB.swap
$ sudo mkswap /mnt/1GiB.swap
# Setting up swapspace version 1, size = 1048576 kB
$ sudo swapon /mnt/1GiB.swap
Ref: https://help.ubuntu.com/community/SwapFaq
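If you also want the swap file to survive a reboot, the SwapFaq's standard follow-up is an /etc/fstab entry; a minimal sketch:
# Register the swap file permanently
echo '/mnt/1GiB.swap none swap sw 0 0' | sudo tee -a /etc/fstab
# Confirm it is active
swapon --show
Keep in mind that swap only relieves memory pressure; the 'no space left on device' error itself comes from the filesystem holding /var/lib/docker, which still needs free blocks.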

Training an object detector using Cloud Machine Learning Engine

I am trying to follow this protocol:
https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine
but after
gsutil cp pet_train.record ${YOUR_GCS_BUCKET}/data/pet_train.record
I get
IOError: [Errno 28] No space left on device
Then I ran df:
Filesystem 1K-blocks Used Available Use% Mounted on
none 25669948 15408340 8934608 64% /
tmpfs 304340 0 304340 0% /dev
tmpfs 304340 0 304340 0% /sys/fs/cgroup
/dev/sdb1 5028480 4749764 240 100% /home
/dev/sda1 25669948 15408340 8934608 64% /etc/hosts
shm 65536 0 65536 0% /dev/shm
Any idea what's going on there?
tyvm
Yes: you filled up your disk:
/dev/sdb1 5028480 4749764 240 100% /home
You've used roughly 5GB of disk space on /home. The training is trying to write more data, and the remaining 240KB is too small for the current write.
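Before deleting anything, it may help to see what is occupying /home; these are generic commands, not part of the tutorial:
# Largest directories directly under /home
sudo du -x -d1 -h /home | sort -h
# Files over 100 MB under /home
sudo find /home -xdev -type f -size +100M -exec ls -lh {} +
Then either free space there or point the job's output at a filesystem with room.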

Ambiguity in disk space allocation for docker containers

I have two physical machines with Docker 1.11.3 installed on Ubuntu. Here is the configuration of the machines:
1. Machine 1 - RAM 4 GB, hard disk 500 GB, quad core
2. Machine 2 - RAM 8 GB, hard disk 1 TB, octa core
I created containers on both machines. When I check the disk space of the individual containers, here are some stats which I am not able to understand the reason behind.
1. Container on Machine 1
root@e1t2j3k45432:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 37G 27G 8.2G 77% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda9 37G 27G 8.2G 77% /etc/hosts
shm 64M 0 64M 0% /dev/shm
I have nothing installed in the above container, yet it is showing
27 GB used.
How did this container get 37 GB of space?
2. Container on Machine 2
root@0af8ac09b89c:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 184G 11G 164G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 184G 11G 164G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Why is only 11 GB of disk space shown as used in this container, even
though it is also an empty container with no packages installed?
And how did this container get 184 GB of disk space?
The disk usage reported inside docker is the host disk usage of /var/lib/docker (my /var/lib/docker in the example below is symlinked to my /home where I have more disk space):
bash$ df -k /var/lib/docker/.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/... 720798904 311706176 372455240 46% /home
bash$ docker run --rm -it busybox df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 720798904 311706268 372455148 46% /
...
So if you run the df command in the same container on different hosts, a different result is expected.
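On Docker 1.13 and later (so newer than the 1.11.3 in the question), Docker can also report its own usage of that host space directly:
# Totals for images, containers, local volumes, and build cache
docker system df
# Per-item breakdown
docker system df -v
Either way, df inside a container keeps reflecting the host filesystem backing /var/lib/docker, which is why its size differs per machine.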

resize2fs: Bad magic number in super-block while trying to open

I am trying to resize a logical volume on CentOS 7 but am running into the following error:
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.
I have tried adding a new partition (using fdisk) and using vgextend to extend the volume group, then resizing.
The resize worked fine for the logical volume using lvextend, but it failed at resize2fs.
I have also tried deleting an existing partition (using fdisk) and recreating it with a larger end block, then resizing the physical volume using lvm pvresize, followed by a resize of the logical volume using lvm lvresize. Again, everything worked fine up to this point.
With both methods, once I tried to use resize2fs I received the exact same error.
Hopefully some of the following will shed some light.
fdisk -l
[root@server ~]# fdisk -l
Disk /dev/xvda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009323a
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 1026047 512000 83 Linux
/dev/xvda2 1026048 41943039 20458496 8e Linux LVM
/dev/xvda3 41943040 62914559 10485760 8e Linux LVM
Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-root: 29.5 GB, 29532094464 bytes, 57679872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
pvdisplay
[root@server ~]# pvdisplay
--- Physical volume ---
PV Name /dev/xvda2
VG Name centos
PV Size 19.51 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID 7bJOPh-OUK0-dGAs-2yqL-CAsV-TZeL-HfYzCt
--- Physical volume ---
PV Name /dev/xvda3
VG Name centos
PV Size 10.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID p0IClg-5mrh-5WlL-eJ1v-t6Tm-flVJ-gsJOK6
vgdisplay
[root@server ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 29.50 GiB
PE Size 4.00 MiB
Total PE 7553
Alloc PE / Size 7553 / 29.50 GiB
Free PE / Size 0 / 0
VG UUID FD7k1M-koJt-2veW-sizL-Srsq-Y6zt-GcCfz6
lvdisplay
[root@server ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID KyokrR-NGsp-6jVA-P92S-QE3X-hvdp-WAeACd
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID ugCOcT-sTDK-M8EV-3InM-hjIg-2nwS-KeAOnq
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 1
LV Size 27.50 GiB
Current LE 7041
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
I've probably done something stupid, so any help would be greatly appreciated!
After a bit of trial and error... as mentioned in the possible answers, it turned out to require xfs_growfs rather than resize2fs.
CentOS 7:
fdisk /dev/xvda
Create a new primary partition and set its type to Linux LVM:
n
p
3
t
8e
w
Create a new physical volume and extend the volume group onto it.
partprobe
pvcreate /dev/xvda3
vgextend /dev/centos /dev/xvda3
Check the volume group for free extents, then extend the logical volume with the free space.
vgdisplay -v
lvextend -l+288 /dev/centos/root
Finally, perform an online resize of the filesystem, then check the available space.
xfs_growfs /dev/centos/root
df -h
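As an aside, lvextend can grow the filesystem in the same step through its -r (--resizefs) option, which delegates to fsadm and picks resize2fs or xfs_growfs as appropriate; a sketch with the same extent count as above:
# Extend the LV and grow the filesystem on it in one command
lvextend -r -l +288 /dev/centos/root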
In CentOS 7 the default filesystem is XFS.
An XFS filesystem can only be extended, not reduced. So if you want to grow the filesystem, use xfs_growfs rather than resize2fs:
xfs_growfs /dev/root_vg/root
Note: for an ext4 filesystem, use
resize2fs /dev/root_vg/root
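If you are unsure which filesystem a volume carries, df -T or lsblk -f prints the type, which tells you which grow command applies:
# Show the filesystem type of the root mount
df -T /
# Or list types for all block devices
lsblk -f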
I ran into the same exact problem around noon today and finally found a solution here --> Trying to resize2fs EB volume fails
I skipped mounting, since the partition was already mounted.
Apparently CentOS 7 uses XFS as the default file system and as a result resize2fs will fail.
I took a look in /etc/fstab, and guess what, XFS was staring me in the face... Hope this helps.
The resize2fs command will not work for all filesystems.
First confirm the filesystem type of your instance (df -T shows it).
Then follow the steps for expanding a volume on the different filesystems in the official Amazon document:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
The default filesystem in CentOS is XFS; use the following command to grow an XFS filesystem:
sudo xfs_growfs -d /
then "df -h" to check.
On CentOS and Fedora this works with fsadm:
fsadm resize /dev/vg_name/root
CentOS 7 + VM.
I made it work with:
GParted Live (extend the volume), then:
pvresize -v /dev/sda2
lvresize -r -l+100%FREE centos/root
On CentOS 7, in answer to the original question where resize2fs fails with "bad magic number", try using fsadm as follows:
fsadm resize /dev/the-device-name-returned-by-df
Then:
df
... to confirm the size changes have worked.
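For context, fsadm is a wrapper that detects the filesystem on the device and calls the matching tool (resize2fs for ext2/3/4, xfs_growfs for XFS), which is why it avoids the bad-magic-number error; running it verbosely shows the commands it chooses:
# -v prints what fsadm executes under the hood
fsadm -v resize /dev/mapper/centos-root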
After reading about LVM and becoming familiar with PV -> VG -> LV, this worked for me:
0) # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 824K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 15G 2.1G 13G 14% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
1) # vgs
VG #PV #LV #SN Attr VSize VFree
fedora 1 2 0 wz--n- 231.88g 212.96g
2) # vgdisplay
--- Volume group ---
VG Name fedora
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 231.88 GiB
PE Size 4.00 MiB
Total PE 59361
Alloc PE / Size 4844 / 18.92 GiB
Free PE / Size 54517 / 212.96 GiB
VG UUID 9htamV-DveQ-Jiht-Yfth-OZp7-XUDC-tWh5Lv
3) # lvextend -l +100%FREE /dev/mapper/fedora-root
Size of logical volume fedora/root changed from 15.00 GiB (3840 extents) to 227.96 GiB (58357 extents).
Logical volume fedora/root successfully resized.
4) # lvdisplay
5) # df -h
6) # xfs_growfs /dev/mapper/fedora-root
meta-data=/dev/mapper/fedora-root isize=512 agcount=4, agsize=983040 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1 spinodes=0 rmapbt=0
= reflink=0
data = bsize=4096 blocks=3932160, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 3932160 to 59757568
7) #df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 828K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 228G 2.3G 226G 2% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
Best Regards,
OS: RHEL 7.
After GParted, # xfs_growfs /dev/mapper/rhel-root did the trick on a live system.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 47G 47G 20M 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
# lvresize -l +100%FREE /dev/mapper/rhel-root
Size of logical volume rhel/root changed from <47.00 GiB (12031 extents) to <77.00 GiB (19711 extents).
Logical volume rhel/root successfully resized.
# xfs_growfs /dev/mapper/rhel-root
meta-data=/dev/mapper/rhel-root isize=512 agcount=7, agsize=1900032 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=12319744, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=3711, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 12319744 to 20184064
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 77G 47G 31G 62% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
How to resize the root partition online:
1) [root@oel7 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root 5.0G 4.5G 548M 90% /
2) [root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
As this shows, there is no space left in the root_vg volume group, so first I need to extend the VG.
3)
[root@oel7 ~]# vgextend root_vg /dev/sdb5
Volume group "root_vg" successfully extended
4)
[root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
PV /dev/sdb5 VG root_vg lvm2 [2.00 GiB / 2.00 GiB free]
5) Now extend the logical volume:
[root@oel7 ~]# lvextend -L +1G /dev/root_vg/root
Size of logical volume root_vg/root changed from 5.00 GiB (1280 extents) to 6.00 GiB (1536 extents).
Logical volume root successfully resized
6) [root@oel7 ~]# resize2fs /dev/root_vg/root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/root_vg/root
Couldn't find valid filesystem superblock.
As the root partition is not an ext* partition, resize2fs will not work for you.
7) To check the filesystem type of a partition:
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
devtmpfs devtmpfs 481M 0 481M 0% /dev
tmpfs tmpfs 491M 80K 491M 1% /dev/shm
tmpfs tmpfs 491M 7.1M 484M 2% /run
tmpfs tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/mapper/data_vg-home xfs 3.5G 2.9G 620M 83% /home
/dev/sda1 xfs 497M 132M 365M 27% /boot
/dev/mapper/data_vg01-data_lv001 ext3 4.0G 2.4G 1.5G 62% /sybase
/dev/mapper/data_vg02-backup_lv01 ext3 4.0G 806M 3.0G 22% /backup
The above command shows that root is an XFS filesystem, so we know we need to use the xfs_growfs command to resize the partition.
8) [root@oel7 ~]# xfs_growfs /dev/root_vg/root
meta-data=/dev/mapper/root_vg-root isize=256 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1310720 to 1572864
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
To resize an existing mounted volume:
sudo mount -t xfs /dev/sdf /opt/data/
mount: /opt/data: /dev/nvme1n1 already mounted on /opt/data.
sudo xfs_growfs /opt/data/
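Since xfs_growfs takes a mount point, it can help to confirm which device actually backs it (the mount error above already hints that it is /dev/nvme1n1):
# Show the device mounted at /opt/data
findmnt /opt/data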
In my case I was able to fix the superblock issue with these commands:
yum install gdisk
parted -l /dev/mapper/centos-root
growpart /dev/mapper/centos-root 1
xfs_growfs /
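For reference, growpart normally takes a whole disk plus a partition number rather than a device-mapper path; a typical sequence on a disk such as /dev/xvda (device and LV names here are illustrative) would be:
# Grow partition 2, then propagate the new size up through LVM and XFS
growpart /dev/xvda 2
pvresize /dev/xvda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /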
