I want to resize my /dev/bcache0 to the full HDD size.
But when I run resize2fs /dev/bcache0 it tells me:
[localhost-PC ~]# resize2fs /dev/bcache0
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/bcache0
Couldn't find valid filesystem superblock.
I tried resizing the bcache backing partition /dev/sdb1, with the same result:
[localhost-PC ~]# resize2fs /dev/sdb1
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Device or resource busy while trying to open /dev/sdb1
Couldn't find valid filesystem superblock.
Below is my disk layout:
[localhost-PC ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 300M 0 part /boot/efi
├─sda2 8:2 0 2G 0 part /boot
├─sda3 8:3 0 17.2G 0 part [SWAP]
└─sda4 8:4 0 204.1G 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
sdb 8:16 0 3.6T 0 disk
└─sdb1 8:18 0 1.7T 0 part
└─bcache0 254:0 0 1.7T 0 disk
└─VolumeGroup00-root 253:0 0 1.7T 0 lvm /
Thank you
According to lsblk, /dev/bcache0 is a physical volume within a volume group. Hence, in order to resize the root filesystem and use all the space available on sdb, you must:
Grow sdb1 to 3.6T (https://www.gnu.org/software/parted/manual/html_node/parted_31.html)
reboot
pvresize /dev/bcache0
lvextend -l +100%FREE /dev/VolumeGroup00/root /dev/bcache0
resize2fs /dev/VolumeGroup00/root
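For the parted step, a minimal sketch (assuming sdb1 is the last partition on the disk, so 100% runs it to the end of the disk):
parted /dev/sdb resizepart 1 100%
The reboot should let the kernel re-read the partition table so that bcache picks up the enlarged backing partition.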
Cheers!
I'm doing some testing in which I utilise iSCSI. Strange things are happening and I'm looking for an explanation. If anyone could suggest something, I'd be really grateful. So here we go:
There are two VMs running Debian 9. One is an iSCSI target (server), the other is an iSCSI initiator (client).
The server shares a disk (e.g. /dev/sdb) or a partition on that disk (e.g. /dev/sdb1) as an iSCSI LUN. The client connects to the server and properly recognizes the LUN as a new device (e.g. /dev/sdc). Then LVM is configured on /dev/sdc. Nothing out of the ordinary: a PV on /dev/sdc, a VG on the PV, an LV on the VG, some data on the LV. It all works the way it should.
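For concreteness, the client-side LVM setup was the usual sequence, something like the following (names and sizes inferred from the lvs/lsblk output below):
pvcreate /dev/sdc
vgcreate vg2 /dev/sdc
lvcreate -L 4M -n lv_001 vg2
lvcreate -L 2G -n lv_002 vg2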
Then I shut down both machines and start them up again. All important services are set to autostart, both machines see each other, and the client creates a session (connects to the iSCSI server). But now the magic happens:
Despite the client being connected to the server, it no longer sees the LUN - so there is no /dev/sdc device, nor the PV / VG / LV, on the client.
The server properly displays the target (LUN) as being shared, but the LUN size is displayed as "0" and the backing store path as "none". The PV / VG / LV are now also displayed by the iSCSI server.
My first idea was that the LVM metadata gets copied to the iSCSI server, but there are no lvm2-related packages on the server. Since these machines will be used for cluster tests (once I straighten out the iSCSI issues), the LVM locking_type is already set to 3 (clustered locking with clvmd) on the iSCSI client - not sure if that makes a difference here. I also checked whether sharing the /dev/sdb1 partition makes any difference compared to sharing the whole /dev/sdb device - no difference. So currently I'm out of ideas. Could anyone assist? Thanks in advance!
before restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 8589 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/sdb
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
before restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
lv_001 vg2 -wi-a----- 4,00m
lv_002 vg2 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sdc 8:32 0 8G 0 disk
├─vg2-lv_001 254:1 0 4M 0 lvm
└─vg2-lv_002 254:2 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
after restart, server:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 8G 0 disk
└─sdb1 8:17 0 8G 0 part
├─vg2-lv_001 254:0 0 4M 0 lvm
└─vg2-lv_002 254:1 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
# tgtadm --mode target --op show
Target 1: iqn.20181018:test
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
vgs-user-incoming
vgs-user-outcoming (outgoing)
ACL information:
192.168.106.171
after restart, client:
# lvs
WARNING: Not using lvmetad because locking_type is 3 (clustered).
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
virtualMachine1 vg1 -wi-a----- 2,00g
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 1022M 0 part [SWAP]
sdb 8:16 0 4G 0 disk
└─sdb1 8:17 0 4G 0 part
└─vg1-virtualMachine1 254:0 0 2G 0 lvm
sr0 11:0 1 1024M 0 rom
The server is detecting the LVM volumes and activating them at boot. Later, when it tries to share /dev/sdb1, it can't, because the device is in use.
You can prevent this with a filter in lvm.conf on the server. If you don't need LVM on the server at all, you can simply tell it to reject (ignore) all block devices:
filter = [ "r/.*/" ]
Source: https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
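If the server does need LVM for its own local disks, a more selective filter can accept those and reject everything else, for example (device names are illustrative):
filter = [ "a|^/dev/sda.*|", "r|.*|" ]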
I have an application which calls pvcreate each time it runs.
I can see the volumes in my VM as follows:
$ pvscan
PV /dev/vda5 VG ubuntu-vg lvm2 [99.52 GiB / 0 free]
Total: 1 [99.52 GiB] / in use: 1 [99.52 GiB] / in no VG: 0 [0 ]
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5'
Can't initialize physical volume "/dev/vda5" of volume group "ubuntu-vg" without -ff
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5' -ff
Really INITIALIZE physical volume "/dev/vda5" of volume group "ubuntu-vg" [y/n]? y
Can't open /dev/vda5 exclusively. Mounted filesystem?
I have also tried wipefs and observed the same result for the above commands:
$ wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
How can I execute pvcreate?
Does anything need to be changed on my VM?
It seems your disk (/dev/vda5) is already being used by your ubuntu-vg volume group. You cannot initialize the same partition as a new PV while it still belongs to a volume group, and since it holds the mounted root filesystem it cannot be opened exclusively either.
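For illustration only, the way to make pvcreate possible again would be to first take the partition out of the volume group, which cannot work here while ubuntu-vg holds the mounted root filesystem (destructive sketch, not something to run on this VM):
vgreduce ubuntu-vg /dev/vda5   # fails while the PV has allocated extents (and it is the VG's only PV)
pvremove /dev/vda5
pvcreate --metadatasize 128M --dataalignment 256K /dev/vda5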
Everyone:
I use LVM on my Gentoo Linux machine, which has an HDD and an SSD, so I use the SSD as a cache for the HDD to speed it up. However, after several days, I think it's even slower than using the HDD alone. I have tried to find the reason, but so far without success. Here is the question puzzling me, as in the title:
As it shows below, my PV has no free PE to allocate:
lgl@pGentoo ~ $ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb5
VG Name pika
PV Size 150.00 GiB / not usable 1.69 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 38400
Free PE 0
Allocated PE 38400
PV UUID O1Db1I-zXss-5OLP-nlN6-OUFH-oqDf-8UjOFY
--- Physical volume ---
PV Name /dev/sda7
VG Name pika
PV Size 20.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5119
Free PE 235
Allocated PE 4884
PV UUID 4Hy6zL-gcpi-aRmI-GeUB-rEsw-Sa3R-Fd4Kpb
However, if I check my space usage with df -h, I can see that only 21% is used on / (/ is on the LV backed by /dev/sdb5), while pvdisplay says that I have no free PE to allocate. Why?
lgl@pGentoo ~ $ sudo df -h
Filesystem Size Used Avail Use% Mounted on
none 3.8G 1.6M 3.8G 1% /run
udev 10M 0 10M 0% /dev
tmpfs 3.8G 116M 3.7G 3% /dev/shm
/dev/mapper/pika-data 148G 29G 112G 21% /
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
tmpfs 3.8G 1.9M 3.8G 1% /tmp
/dev/sda8 2.0G 43M 1.8G 3% /boot
/dev/sda1 496M 59M 438M 12% /boot/efi
/dev/sdb6 99G 4.5G 89G 5% /home
tmpfs 776M 20K 776M 1% /run/user/1000
thanks.
OK, I have found the answer to this question myself, thanks to Lone_Wolf's answer.
Allocatable yes (but full)
from pvdisplay means that all of the space on that PV has been assigned to logical volumes.
df doesn't show LVM physical volumes; it only displays data about the filesystems on logical volumes.
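To see where the extents actually went, list the logical volumes rather than the filesystems, e.g.:
sudo lvs -o lv_name,vg_name,lv_size,devices
A nearly empty filesystem can sit on a fully allocated PV: pvdisplay reports extents handed out to LVs, while df reports usage inside the filesystems on those LVs.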
I am trying to resize a logical volume on CentOS7 but am running into the following error:
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.
I have tried adding a new partition (using fdisk) and using vgextend to extend the volume group, then resizing.
Resize worked fine for the logical volume using lvextend, but it failed at resize2fs.
I have also tried deleting an existing partition (using fdisk) and recreating it with a larger end block, then resizing the physical volume using lvm pvresize, followed by a resize of the logical volume using lvm lvresize. Again everything worked fine up to this point.
Once I tried to use resize2fs, using both methods as above, I received the exact same error.
Hopefully some of the following will shed some light.
fdisk -l
[root@server ~]# fdisk -l
Disk /dev/xvda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009323a
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 1026047 512000 83 Linux
/dev/xvda2 1026048 41943039 20458496 8e Linux LVM
/dev/xvda3 41943040 62914559 10485760 8e Linux LVM
Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-root: 29.5 GB, 29532094464 bytes, 57679872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
pvdisplay
[root@server ~]# pvdisplay
--- Physical volume ---
PV Name /dev/xvda2
VG Name centos
PV Size 19.51 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID 7bJOPh-OUK0-dGAs-2yqL-CAsV-TZeL-HfYzCt
--- Physical volume ---
PV Name /dev/xvda3
VG Name centos
PV Size 10.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID p0IClg-5mrh-5WlL-eJ1v-t6Tm-flVJ-gsJOK6
vgdisplay
[root@server ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 29.50 GiB
PE Size 4.00 MiB
Total PE 7553
Alloc PE / Size 7553 / 29.50 GiB
Free PE / Size 0 / 0
VG UUID FD7k1M-koJt-2veW-sizL-Srsq-Y6zt-GcCfz6
lvdisplay
[root@server ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID KyokrR-NGsp-6jVA-P92S-QE3X-hvdp-WAeACd
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID ugCOcT-sTDK-M8EV-3InM-hjIg-2nwS-KeAOnq
LV Write Access read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status available
# open 1
LV Size 27.50 GiB
Current LE 7041
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
I've probably done something stupid, so any help would be greatly appreciated!
After a bit of trial and error... as mentioned in the possible answers, it turned out to require xfs_growfs rather than resize2fs.
CentOS 7,
fdisk /dev/xvda
Create a new primary partition and set its type to Linux LVM:
n
p
3
t
8e
w
Create a new physical volume and extend the volume group onto it.
partprobe
pvcreate /dev/xvda3
vgextend /dev/centos /dev/xvda3
Check the volume group for free space, then extend the logical volume with the free space.
vgdisplay -v
lvextend -l+288 /dev/centos/root
Finally, perform an online resize of the filesystem, then check the available space.
xfs_growfs /dev/centos/root
df -h
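If you would rather take all remaining free space instead of counting extents from vgdisplay, lvextend also accepts a percentage (same LV path as above):
lvextend -l +100%FREE /dev/centos/root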
In CentOS 7 the default filesystem is XFS.
XFS supports only growing, not shrinking. So to enlarge the filesystem, use xfs_growfs rather than resize2fs:
xfs_growfs /dev/root_vg/root
Note: for an ext4 filesystem, use
resize2fs /dev/root_vg/root
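If you are unsure which filesystem an LV uses, check before picking a tool (LV path as in this answer):
lsblk -f /dev/root_vg/root
or
df -T /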
I ran into the same exact problem around noon today and finally found a solution here --> Trying to resize2fs EB volume fails
I skipped mounting, since the partition was already mounted.
Apparently CentOS 7 uses XFS as the default file system and as a result resize2fs will fail.
I took a look in /etc/fstab, and guess what, XFS was staring me in the face... Hope this helps.
The resize2fs command will not work for all filesystems.
First confirm the filesystem type of your instance (for example, with df -T).
Then expand the volume by following the steps in the official Amazon documentation for the different filesystems:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
The default filesystem in CentOS is XFS; for an XFS filesystem, use the following command to grow it:
sudo xfs_growfs -d /
Then run "df -h" to check.
On CentOS and Fedora you can use fsadm, which detects the filesystem type and calls the appropriate tool (resize2fs for ext*, xfs_growfs for XFS):
fsadm resize /dev/vg_name/root
CentOS 7 in a VM.
I made it work with:
GParted Live to extend the partition
pvresize -v /dev/sda2
lvresize -r -l +100%FREE centos/root
(the -r flag makes lvresize grow the filesystem in the same step, so no separate xfs_growfs is needed)
On CentOS 7, in answer to the original question where resize2fs fails with "bad magic number", try using fsadm as follows:
fsadm resize /dev/the-device-name-returned-by-df
Then:
df
... to confirm the size changes have worked.
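With the device from the original question, that would be, for example:
fsadm resize /dev/mapper/centos-root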
After reading about LVM and becoming familiar with PV -> VG -> LV, this is what worked for me:
0) # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 824K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 15G 2.1G 13G 14% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
1) # vgs
VG #PV #LV #SN Attr VSize VFree
fedora 1 2 0 wz--n- 231.88g 212.96g
2) # vgdisplay
--- Volume group ---
VG Name fedora
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 231.88 GiB
PE Size 4.00 MiB
Total PE 59361
Alloc PE / Size 4844 / 18.92 GiB
Free PE / Size 54517 / 212.96 GiB
VG UUID 9htamV-DveQ-Jiht-Yfth-OZp7-XUDC-tWh5Lv
3) # lvextend -l +100%FREE /dev/mapper/fedora-root
Size of logical volume fedora/root changed from 15.00 GiB (3840 extents) to 227.96 GiB (58357 extents).
Logical volume fedora/root successfully resized.
4) # lvdisplay
5) # df -h
6) # xfs_growfs /dev/mapper/fedora-root
meta-data=/dev/mapper/fedora-root isize=512 agcount=4, agsize=983040 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1 spinodes=0 rmapbt=0
= reflink=0
data = bsize=4096 blocks=3932160, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 3932160 to 59757568
7) # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 828K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 228G 2.3G 226G 2% /
tmpfs 1.9G 0 1.9G 0% /tmp
/dev/md126p1 976M 119M 790M 14% /boot
tmpfs 388M 0 388M 0% /run/user/0
Best Regards,
OS: RHEL 7.
After GParted, # xfs_growfs /dev/mapper/rhel-root did the trick on a live system.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 47G 47G 20M 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
# lvresize -l +100%FREE /dev/mapper/rhel-root
Size of logical volume rhel/root changed from <47.00 GiB (12031 extents) to <77.00 GiB (19711 extents).
Logical volume rhel/root successfully resized.
# xfs_growfs /dev/mapper/rhel-root
meta-data=/dev/mapper/rhel-root isize=512 agcount=7, agsize=1900032 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=12319744, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=3711, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 12319744 to 20184064
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 77G 47G 31G 62% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 205M 810M 21% /boot
tmpfs 379M 8.0K 379M 1% /run/user/42
tmpfs 379M 0 379M 0% /run/user/1000
How to resize the root partition online:
1) [root@oel7 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root 5.0G 4.5G 548M 90% /
2) [root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
As this shows, there is no free space left in the root_vg volume group, so first I need to extend the VG.
3)
[root@oel7 ~]# vgextend root_vg /dev/sdb5
Volume group "root_vg" successfully extended
4)
[root@oel7 ~]# pvscan
PV /dev/sda2 VG root_vg lvm2 [6.00 GiB / 0 free]
PV /dev/sdb5 VG root_vg lvm2 [2.00 GiB / 2.00 GiB free]
5) Now extend the logical volume
[root@oel7 ~]# lvextend -L +1G /dev/root_vg/root
Size of logical volume root_vg/root changed from 5.00 GiB (1280 extents) to 6.00 GiB (1536 extents).
Logical volume root successfully resized
6) [root@oel7 ~]# resize2fs /dev/root_vg/root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/root_vg/root
Couldn't find valid filesystem superblock.
As the root partition is not an ext* filesystem, resize2fs will not work.
7) To check the filesystem type of a partition:
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
devtmpfs devtmpfs 481M 0 481M 0% /dev
tmpfs tmpfs 491M 80K 491M 1% /dev/shm
tmpfs tmpfs 491M 7.1M 484M 2% /run
tmpfs tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/mapper/data_vg-home xfs 3.5G 2.9G 620M 83% /home
/dev/sda1 xfs 497M 132M 365M 27% /boot
/dev/mapper/data_vg01-data_lv001 ext3 4.0G 2.4G 1.5G 62% /sybase
/dev/mapper/data_vg02-backup_lv01 ext3 4.0G 806M 3.0G 22% /backup
The above command shows that root is an XFS filesystem, so we know we need to use xfs_growfs to resize it.
8) [root@oel7 ~]# xfs_growfs /dev/root_vg/root
meta-data=/dev/mapper/root_vg-root isize=256 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1310720 to 1572864
[root@oel7 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/root_vg-root xfs 6.0G 4.5G 1.6G 75% /
To resize an existing mounted volume:
sudo mount -t xfs /dev/sdf /opt/data/
mount: /opt/data: /dev/nvme1n1 already mounted on /opt/data.
(the mount attempt above merely confirms the device is already mounted)
sudo xfs_growfs /opt/data/
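A before-and-after check around the grow (mount point as in this answer):
df -h /opt/data
sudo xfs_growfs /opt/data
df -h /opt/data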
In my case I was able to fix the superblock location with these commands:
yum install gdisk
parted -l /dev/mapper/centos-root
growpart /dev/mapper/centos-root 1
xfs_growfs /
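Note that growpart (from the cloud-utils-growpart package) normally takes the parent disk and a partition number rather than a device-mapper path, so on a typical single-disk CentOS VM the sequence would look more like this (disk name /dev/xvda and partition number 2 are assumptions):
yum install cloud-utils-growpart gdisk
growpart /dev/xvda 2       # grow the partition holding the PV
pvresize /dev/xvda2        # let LVM see the new space
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /               # grow the mounted XFS root filesystem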