How to resize Bcache PV? - lvm

I want to resize my /dev/bcache0 to the full HDD size.
But when I run resize2fs /dev/bcache0 it tells me:
[localhost-PC ~]# resize2fs /dev/bcache0
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/bcache0
Couldn't find valid filesystem superblock.
I tried resizing the bcache backing device /dev/sdb1 directly and got the same error:
[localhost-PC ~]# resize2fs /dev/sdb1
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/sdb1
Couldn't find valid filesystem superblock.
Below is my disk layout:
[localhost-PC ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 300M 0 part /boot/efi
├─sda2 8:2 0 2G 0 part /boot
├─sda3 8:3 0 17.2G 0 part [SWAP]
└─sda4 8:4 0 204.1G 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
sdb 8:16 0 3.6T 0 disk
└─sdb1 8:18 0 1.7T 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
Thank you

According to lsblk, /dev/bcache0 is an LVM physical volume inside a volume group, which is why resize2fs fails on it: the device is held open by LVM and carries a PV label, not an ext filesystem superblock. To resize the root filesystem and use all the space available on sdb, you must (the full sequence is sketched below):
Grow sdb1 to 3.6T (https://www.gnu.org/software/parted/manual/html_node/parted_31.html)
reboot, so bcache0 picks up the new size of its backing device
pvresize /dev/bcache0
lvextend -l +100%FREE /dev/VolumeGroup00/root /dev/bcache0
resize2fs /dev/VolumeGroup00/root
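A minimal sketch of the whole sequence, using the device names from the lsblk output above (double-check each one against your own system before running):
# grow the backing partition to fill the 3.6T disk
parted /dev/sdb resizepart 1 100%
# reboot so /dev/bcache0 sees the larger backing partition
reboot
# grow the PV, then the LV, then the ext4 filesystem
pvresize /dev/bcache0
lvextend -l +100%FREE /dev/VolumeGroup00/root /dev/bcache0
resize2fs /dev/VolumeGroup00/root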
Cheers!

Related

Elasticsearch uses much more disk space than shown in info about indices stats

Elasticsearch (in docker) uses much more disk space than is shown in the indices stats info.
The stats show only 7.8gb:
curl 'localhost:9201/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open my_index LL319QgqSJ6FNDEh2ZQw8g 1 1 24896180 0 7.8gb 7.8gb
yellow open test u0IJ7cocSXSlRST_qCPhHg 1 1 8 0 5.5kb 5.5kb
but when I check the disk space inside the docker container I see the following:
docker exec -it es bash
df -h
Filesystem Size Used Avail Use% Mounted on
overlay 278G 51G 213G 20% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/vda1 278G 51G 213G 20% /etc/hosts
tmpfs 3.0G 0 3.0G 0% /proc/acpi
tmpfs 3.0G 0 3.0G 0% /sys/firmware
I've also found this command:
curl 'localhost:9201/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
2 7.8gb 65gb 212.6gb 277.6gb 23 172.17.0.2 172.17.0.2 571fa89f411f 2 UNASSIGNED
But I can't understand what ES spends the remaining 65 - 7.8 ≈ 57gb of disk space on.
As I understand it, disk.used shows 'Elasticsearch, including the translog and unassigned shards; the node's OS; any other applications or files on the node'. How can I decrease the disk consumption?
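A first step toward an answer is to measure what actually occupies the disk inside the container; a sketch (the data path assumes the official Elasticsearch image layout, so adjust if yours differs):
# survey top-level directories inside the container, largest last
docker exec es sh -c 'du -xsh /* 2>/dev/null | sort -h'
# then drill into the Elasticsearch data directory
docker exec es du -sh /usr/share/elasticsearch/data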

iSCSI LVM configuration does not survive restart

I'm doing some testing in which I utilise iSCSI. Strange things are happening and I'm looking for an explanation. If anyone could suggest something, I'd be really grateful. So here we go:
There are two VMs running Debian 9. One is an iSCSI target (server), the other is an iSCSI initiator (client).
The server shares a disk (e.g. /dev/sdb) or a partition on that disk (e.g. /dev/sdb1) as an iSCSI LUN. The client connects to the server and properly recognizes the LUN as a new device (e.g. /dev/sdc). Then LVM is configured on /dev/sdc. Nothing out of the ordinary: PV on /dev/sdc, VG on the PV, LV on the VG, some data on the LV. It all works the way it should.
Then I shut down both machines and start them up again. All important services are set to autostart, both machines see each other, and the client creates a session (connects to the iSCSI server). But now the magic happens:
Despite the client being connected to the server, it no longer sees the LUN - so no /dev/sdc device or PV / VG / LV on the client.
The server properly displays the target (LUN) as being shared, but the LUN size is displayed as "0" and the backing store path as "none". The PV / VG / LV are now displayed by the iSCSI server instead.
My first idea was that the LVM metadata gets copied to the iSCSI server, but there are no lvm2-related packages on the server. Since these machines will be used for cluster tests (once I straighten out the iSCSI issues), lvm locking_type is already set to 3 (clustered locking with clvmd) on the iSCSI client - not sure if that makes a difference here. I also checked whether sharing the /dev/sdb1 partition makes any difference compared to sharing the whole /dev/sdb device - no difference. So currently I'm out of ideas. Could anyone assist? Thanks in advance!
before restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 8589 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/sdb
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
before restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
lv_001 vg2 -wi-a----- 4,00m
lv_002 vg2 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sdc 8:32 0 8G 0 disk
├─vg2-lv_001 254:1 0 4M 0 lvm
└─vg2-lv_002 254:2 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
after restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
├─vg2-lv_001 254:0 0 4M 0 lvm
└─vg2-lv_002 254:1 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
after restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
The server is detecting the LVM and starting it up. Later, when it tries to share /dev/sdb1, it can't, because the device is in use.
You can prevent this with a filter in lvm.conf on the server. If you don't need LVM at all on the server, you can simply tell it to reject (ignore) all block devices:
filter = [ "r/.*/" ]
Source: https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
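If the server does need LVM for its own disks, a narrower filter that hides only the exported device should also work; a sketch, assuming the shared disk is /dev/sdb as in the lsblk output above:
# /etc/lvm/lvm.conf on the iSCSI server
filter = [ "r|^/dev/sdb.*|", "a|.*|" ]    # reject sdb and its partitions, accept everything else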

Training an object detector using Cloud Machine Learning Engine

I am trying to follow this protocol:
https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine
but after
gsutil cp pet_train.record ${YOUR_GCS_BUCKET}/data/pet_train.record
I get
IOError: [Errno 28] No space left on device
then I did df
Filesystem 1K-blocks Used Available Use% Mounted on
none 25669948 15408340 8934608 64% /
tmpfs 304340 0 304340 0% /dev
tmpfs 304340 0 304340 0% /sys/fs/cgroup
/dev/sdb1 5028480 4749764 240 100% /home
/dev/sda1 25669948 15408340 8934608 64% /etc/hosts
shm 65536 0 65536 0% /dev/shm
any idea what's going on there?
tyvm
Yes: you filled up your disk:
/dev/sdb1 5028480 4749764 240 100% /home
You've used roughly 5GB of disk space on /home. The training is trying to write more data, and the remaining 240K is too small for the current write.
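A quick way to see what is consuming /home before deleting or relocating data; a sketch, assuming the training artifacts live somewhere under your home directory:
# largest entries under /home, printed last
du -sh /home/* 2>/dev/null | sort -h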

How to mount volumes in docker release of openFOAM

I am running the docker release of openFOAM. While running openFOAM, I can't access any of the volumes that I have set up in /mnt. I can see them when I run:
bash-4.1$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 29.8G 0 disk
|-sda1 8:1 0 200M 0 part
|-sda2 8:2 0 500M 0 part
`-sda3 8:3 0 29.1G 0 part
`-luks-c551009c-5ab5-4526-85fa-45105a445734 (dm-0)
253:0 0 29.1G 0 crypt
|-korora_a00387863--6-root (dm-1) 253:1 0 26.1G 0 lvm /etc/passwd
`-korora_a00387863--6-swap (dm-2) 253:2 0 3G 0 lvm
sdb 8:16 0 465.8G 0 disk
|-sdb1 8:17 0 137.9G 0 part
|-sdb2 8:18 0 158.7G 0 part
`-sdb3 8:19 0 169.2G 0 part
sdg 8:96 1 15G 0 disk
loop0 7:0 0 100G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
loop1 7:1 0 2G 0 loop
`-docker-253:1-265037-pool (dm-3) 253:3 0 100G 0 dm
`-docker-253:1-265037-10f82f41512f788ec85215e8764cd3c5b0973d548fe4db2fcbcbaf50db6a4b9c (dm-4)
253:4 0 10G 0 dm /
However, none of these show up in /dev, so I don't know how to mount the volumes that I want. It seems like there should be a better solution than manually mounting the volume each time I use openFOAM. Any ideas would be welcome; I don't understand the docker documentation.
You haven't shown us exactly what you mean by "volumes set up in /mnt", so there will be a lot of guesswork in this answer as to what you're actually trying to do.
If you are trying to mount block devices on your host and make them available in your container, the normal way to go about this is:
Mount the device somewhere on your host (e.g., in /mnt)
Use the -v argument to docker run to expose that mountpoint inside a container, as in:
docker run -v /mnt/volume1:/volume1 alpine sh
The above command line would expose /mnt/volume1 on the host as /volume1 inside the container.
If you find that you are often running the same container with the same set of volumes, and you're tired of long command lines, just drop the docker run command into a shell script, or consider using something like docker-compose to help automate things.
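For instance, a minimal wrapper script along those lines (the image name and volume paths are placeholders; substitute your own openFOAM image and mounts):
#!/bin/sh
# run-openfoam.sh: wraps the long docker run invocation
docker run -it --rm \
  -v /mnt/volume1:/volume1 \
  -v /mnt/volume2:/volume2 \
  openfoam/openfoam-dev bash    # placeholder image name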

resize2fs: Bad magic number in super-block while trying to open

I am trying to resize a logical volume on CentOS7 but am running into the following error:
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.
I have tried adding a new partition (using fdisk) and using vgextend to extend the volume group, then resizing.
Resize worked fine for the logical volume using lvextend, but it failed at resize2fs.
I have also tried deleting an existing partition (using fdisk) and recreating it with a larger end block, then resizing the physical volume using lvm pvresize, followed by a resize of the logical volume using lvm lvresize. Again everything worked fine up to this point.
Once I tried to use resize2fs, using both methods as above, I received the exact same error.
Hopefully some of the following will shed some light.
fdisk -l
[root@server ~]# fdisk -l
Disk /dev/xvda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009323a
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 1026047 512000 83 Linux
/dev/xvda2 1026048 41943039 20458496 8e Linux LVM
/dev/xvda3 41943040 62914559 10485760 8e Linux LVM
Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-root: 29.5 GB, 29532094464 bytes, 57679872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
pvdisplay
[root@server ~]# pvdisplay
--- Physical volume ---
PV Name /dev/xvda2
VG Name centos
PV Size 19.51 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID 7bJOPh-OUK0-dGAs-2yqL-CAsV-TZeL-HfYzCt
--- Physical volume ---
PV Name /dev/xvda3
VG Name centos
PV Size 10.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID p0IClg-5mrh-5WlL-eJ1v-t6Tm-flVJ-gsJOK6
vgdisplay
[root@server ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 29.50 GiB
PE Size 4.00 MiB
Total PE 7553
Alloc PE / Size 7553 / 29.50 GiB
Free PE / Size 0 / 0
VG UUID FD7k1M-koJt-2veW-sizL-Srsq-Y6zt-GcCfz6
lvdisplay
[root@server ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID KyokrR-NGsp-6jVA-P92S-QE3X-hvdp-WAeACd
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID ugCOcT-sTDK-M8EV-3InM-hjIg-2nwS-KeAOnq
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 1
LV Size 27.50 GiB
Current LE 7041
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
I've probably done something stupid, so any help would be greatly appreciated!
After a bit of trial and error... as mentioned in the possible answers, it turned out to require xfs_growfs rather than resize2fs.
On CentOS 7:
fdisk /dev/xvda
Create a new primary partition and set its type to Linux LVM (the letters below are the interactive fdisk keystrokes):
n    # new partition
p    # primary
3    # partition number
t    # change a partition's type
8e   # hex code for Linux LVM
w    # write the table and exit
Create a new physical volume and extend the volume group onto it:
partprobe
pvcreate /dev/xvda3
vgextend /dev/centos /dev/xvda3
Check the volume group for free space, then extend the logical volume with the free space:
vgdisplay -v
lvextend -l+288 /dev/centos/root
Finally perform an online resize of the filesystem, then check the available space:
xfs_growfs /dev/centos/root
df -h
On CentOS 7 the default filesystem is XFS.
XFS supports only growing, not shrinking. So to resize the filesystem, use xfs_growfs rather than resize2fs:
xfs_growfs /dev/root_vg/root
Note: For ext4 filesystem use
resize2fs /dev/root_vg/root
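As an aside, recent LVM can grow the filesystem in the same step: the -r (--resizefs) flag makes lvextend call fsadm, which picks the right tool (xfs_growfs for XFS, resize2fs for ext*) automatically:
# grow the LV by all free space and resize its filesystem in one step
lvextend -r -l +100%FREE /dev/root_vg/root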
I ran into the same exact problem around noon today and finally found a solution here --> Trying to resize2fs EB volume fails
I skipped mounting, since the partition was already mounted.
Apparently CentOS 7 uses XFS as the default file system and as a result resize2fs will fail.
I took a look in /etc/fstab, and guess what, XFS was staring me in the face... Hope this helps.
The resize2fs command will not work for all filesystems.
First confirm the filesystem type of your instance (the original post omits the command; a common check is sketched after the link below).
Then follow the procedure for expanding a volume on the different filesystems in the official Amazon document:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
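A sketch of such a check (my assumption, since the post leaves the command out):
df -T /     # print the filesystem type of the root mount
lsblk -f    # list filesystem types for all block devices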
The default filesystem in CentOS is XFS; for an XFS filesystem, use the following command to grow it:
sudo xfs_growfs -d /
then "df -h" to check.
On CentOS and Fedora you can also work with fsadm:
fsadm resize /dev/vg_name/root
CentOS 7 + VM
I've made it work with:
Gparted-live: extend the partition
pvresize -v /dev/sda2
lvresize -r -l+100%FREE centos/root
(The -r flag makes lvresize grow the filesystem along with the LV, so no separate xfs_growfs step is needed.)
On CentOS 7, in answer to the original question where resize2fs fails with "bad magic number", try using fsadm as follows:
fsadm resize /dev/the-device-name-returned-by-df
Then:
df
... to confirm the size changes have worked.
After reading about LVM and getting familiar with PV -> VG -> LV, this worked for me:
0) # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 824K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 15G 2.1G 13G 14% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
1) # vgs
VG #PV #LV #SN Attr VSize VFree
fedora 1 2 0 wz--n- 231.88g 212.96g
2) # vgdisplay
--- Volume group ---
VG Name fedora
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 231.88 GiB
PE Size 4.00 MiB
Total PE 59361
Alloc PE / Size 4844 / 18.92 GiB
Free PE / Size 54517 / 212.96 GiB
VG UUID 9htamV-DveQ-Jiht-Yfth-OZp7-XUDC-tWh5Lv
3) # lvextend -l +100%FREE /dev/mapper/fedora-root
Size of logical volume fedora/root changed from 15.00 GiB (3840 extents) to 227.96 GiB (58357 extents).
Logical volume fedora/root successfully resized.
4) # lvdisplay
5) # df -h
6) # xfs_growfs /dev/mapper/fedora-root
meta-data=/dev/mapper/fedora-root isize=512 agcount=4, agsize=983040 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1 spinodes=0 rmapbt=0
= reflink=0
data = bsize=4096 blocks=3932160, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 3932160 to 59757568
7) #df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 828K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 228G 2.3G 226G 2% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
Best Regards,
OS: RHEL 7
After growing the partition with gparted, # xfs_growfs /dev/mapper/rhel-root did the trick on a live system:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 47G 47G 20M 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
# lvresize -l +100%FREE /dev/mapper/rhel-root
Size of logical volume rhel/root changed from <47.00 GiB (12031 extents) to <77.00 GiB (19711 extents).
Logical volume rhel/root successfully resized.
# xfs_growfs /dev/mapper/rhel-root
meta-data=/dev/mapper/rhel-root isize=512 agcount=7, agsize=1900032 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=12319744, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=3711, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 12319744 to 20184064
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 77G 47G 31G 62% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
How to resize the root partition online:
1) [root@oel7 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root 5.0G 4.5G 548M 90% /
2)
[root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
As this shows there is no space left in the root_vg volume group, I first need to extend the VG.
3)
[root@oel7 ~]# vgextend root_vg /dev/sdb5
Volume group "root_vg" successfully extended
4)
[root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
PV /dev/sdb5 VG root_vg lvm2 [2.00 GiB / 2.00 GiB free]
5) Now extend the logical volume
[root@oel7 ~]# lvextend -L +1G /dev/root_vg/root
Size of logical volume root_vg/root changed from 5.00 GiB (1280 extents) to 6.00 GiB (1536 extents).
Logical volume root successfully resized
6) [root@oel7 ~]# resize2fs /dev/root_vg/root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/root_vg/root
Couldn't find valid filesystem superblock.
As the root partition is not an ext* partition, resize2fs will not work for you.
7) To check the filesystem type of a partition:
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
devtmpfs devtmpfs 481M 0 481M 0% /dev
tmpfs tmpfs 491M 80K 491M 1% /dev/shm
tmpfs tmpfs 491M 7.1M 484M 2% /run
tmpfs tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/mapper/data_vg-home xfs 3.5G 2.9G 620M 83% /home
/dev/sda1 xfs 497M 132M 365M 27% /boot
/dev/mapper/data_vg01-data_lv001 ext3 4.0G 2.4G 1.5G 62% /sybase
/dev/mapper/data_vg02-backup_lv01 ext3 4.0G 806M 3.0G 22% /backup
The above command shows that root is an XFS filesystem, so we are sure we need the xfs_growfs command to resize the partition.
8) [root@oel7 ~]# xfs_growfs /dev/root_vg/root
meta-data=/dev/mapper/root_vg-root isize=256 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1310720 to 1572864
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
To resize an existing mounted volume:
sudo mount -t xfs /dev/sdf /opt/data/
mount: /opt/data: /dev/nvme1n1 already mounted on /opt/data.
sudo xfs_growfs /opt/data/
In my case I was able to fix the superblock location with these commands:
yum install gdisk
parted -l /dev/mapper/centos-root
growpart /dev/mapper/centos-root 1
xfs_growfs /
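For reference, growpart is normally pointed at the underlying disk plus a partition number rather than at a device-mapper path; a sketch, assuming the PV sits on the last partition of /dev/xvda (xvda3) as in the question above:
growpart /dev/xvda 3    # grow the last partition to fill the disk
pvresize /dev/xvda3     # let LVM see the larger partition
xfs_growfs /            # grow the mounted XFS root filesystem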
