XenServer NFS export share only 4 GB in size? - storage

I have managed to create an NFS server on my XenServer and mounted it on my CloudStack 4.4!
However, I realise the size of my primary storage and secondary storage is only 4 GB when I have assigned 250 GB to my XenServer VM (local storage).
May I know why, and how can I increase the space?
Picture link
http://115.66.5.90/manage/shares/Torrents/why%204gb%20size.png?__c=2533372089363723488
Edit on 6/8/2014:
Hello Miguel, I have followed your steps but I am still stuck. (Xen was given 100 GB.)
pvs
  PV         VG              Fmt  Attr PSize  PFree
  /dev/sda3  VG_XenStorage-  lvm2 a-   91.99G 91.98G
Then I ran gdisk /dev/sda3, as this 91 GB is the free storage I have after installing Xen on my VM.
I followed all the steps that you have written below.
This is the result when I run pvs again:
[root@xenserver-bpqbdmrk ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sda2       lvm2 a-   4.00G 4.00G
However, when I ran vgdisplay -c:
[root@xenserver-bpqbdmrk ~]# vgdisplay -c
  No volume groups found
fdisk -l
Disk /dev/sda: 107.3 GB, 107374182400 bytes
256 heads, 63 sectors/track, 13003 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       13004   104857599+  ee  EFI GPT
[root@xenserver-bpqbdmrk ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  2.0G  49% /
none                  381M   16K  381M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install
172.16.109.11:/export/primary/97cffd9a-acfe-0c71-91d5-b93e58f27462
                      4.0G  1.9G  2.0G  49% /var/run/sr-mount/97cffd9a-acfe-0c71-91d5-b93e58f27462
May I know why I do not have a volume group, even though I have a storage repository of 4 GB on my NFS?
And why does my /dev/sda2 have only 4 GB too?
More information about my test cloud:
I am running a VM of 100 GB.
I wanted primary storage and secondary storage totalling 91 GB combined.
Command (? for help): p
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7AE0B6EE-99F4-44F4-A9F0-5140B14DCC32
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 6042 sectors (3.0 MiB)
Number  Start (sector)    End (sector)   Size       Code  Name
   1            2048          8388641    4.0 GiB    0700
   2         8390656         16777249    4.0 GiB    0700
   3        16779264        209715166    92.0 GiB   8E00
Command (? for help):

When you log on to your XenServer management console you are actually logging on to a VM (the control domain, Dom0). This VM is the one that controls the whole hypervisor.
Only some of the resources you provided to your XenServer are used by the management VM in Dom0. The rest is used for the other VMs you might spin up on the XenServer.
That goes for CPU, memory and disk space.
You need to check whether the XenServer local storage volume group already contains the remaining space of your disk. To do that, type pvs in the terminal to list all LVM physical volumes. The entry you are looking for starts with "VG_XenStorage-".
You should see the disk partition that is attached to that physical volume, the total size and the free space.
If the local storage volume group doesn't already contain the extra space, you need to add it yourself, partitioning the space first if that hasn't been done. Assuming your disk device is /dev/sda, type gdisk /dev/sda, then at the prompt type p to print the partition table. If you have one partition too many (in relation to what is mounted), then you already have a partition available to use. If you have 2x 4 GB partitions and one larger one (taking the remaining space), the last is the one you want to use. If not, then you need to create one at the end of the disk. Still in gdisk (a sketch of the whole session follows this list):
type n to create a new partition, then choose a number for it (the next available integer),
push Enter twice to make it start at the next available disk block and end at the last,
type 8e00 to select the "Linux LVM" partition type,
type w to write the new partition table.
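A rough sketch of that gdisk session, assuming the disk is /dev/sda and the new partition ends up as number 3 (the exact prompts vary slightly between gdisk versions):
gdisk /dev/sda
Command (? for help): n
Partition number (3-128, default 3): <Enter>
First sector (...): <Enter>                  # default: next available block
Last sector (...): <Enter>                   # default: end of the disk
Hex code or GUID (L to show codes, Enter = 8300): 8e00    # Linux LVM
Command (? for help): w                      # write the table and quit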
At this point you've either created a new partition or you had one already available. I'm assuming /dev/sda3. Now you need to create a physical volume and add it to the volume group XenServer uses for local storage.
pvcreate /dev/sda3 to create a new physical volume
vgextend $(vgdisplay -c | cut -d : -f 1) /dev/sda3
The $(vgdisplay ...) bit is there to find out the name of the volume group you will attach the physical volume to.
If you run pvs again you should see that the local storage volume group now has more free space available.
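Putting those commands together, a minimal sketch of the whole sequence (assuming the new partition is /dev/sda3 and a single local-storage volume group):
# Turn the new partition into an LVM physical volume
pvcreate /dev/sda3
# Extend the XenServer local-storage volume group onto it
vgextend $(vgdisplay -c | cut -d : -f 1) /dev/sda3
# Check the result: PFree of the VG_XenStorage-* group should have grown
pvs
vgs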
Edit:
As mentioned before, XenServer can manage local storage for VMs using a Storage Repository (SR). When this is the case, there is no need to create a primary storage directory for holding the VMs' storage.
As for secondary storage, there will still be a need for it. Secondary storage is where CloudStack looks for the templates (disk images) that it uses to boot the System VMs. System VMs are the VMs CloudStack uses for managing the cloud environment (e.g. virtual routers or console proxies). The hypervisors under CloudStack (in this case a XenServer) must be able to reach the secondary storage, and one of the most common ways of achieving this is to make the secondary storage available via NFS. Whether the NFS export is available from the hypervisor itself or some other reachable machine, that doesn't really matter.
Getting back to the setup in the question, the disk of the XenServer would have to be partitioned in such a way that one partition is available for primary storage (managed by XenServer via an SR) and another one for secondary storage (with a file system, mounted locally and made available as an NFS export).
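As a rough illustration of the secondary-storage half only, a hedged sketch (the /export/secondary path and the 172.16.109.0/24 network are placeholders based on the paths and addresses in the question, and the NFS service name differs between distributions):
# Create and export a directory for CloudStack secondary storage
mkdir -p /export/secondary
echo '/export/secondary 172.16.109.0/24(rw,async,no_root_squash,no_subtree_check)' >> /etc/exports
exportfs -a
service nfs restart
# From any host in the pool, confirm the export is visible
showmount -e 172.16.109.11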

Related

Buildroot: Build Device Table with unknown Major Number

I have one driver that gets its major number from the Linux kernel (the major number is assigned dynamically).
To create a device node for my driver, I run the following steps manually:
insmod my_driver.ko
cat /proc/devices    # this is to find out which major number was assigned
mknod /dev/myDevName c Assigned_Major_Number 0
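For reference, this manual workflow is often wrapped in a small script like the hedged sketch below (it assumes the driver registers itself in /proc/devices under the name my_driver; adjust the paths and names to your module):
#!/bin/sh
# Load the module, look up its dynamically assigned major number,
# then create the device node by hand
insmod /lib/modules/my_driver.ko
major=$(awk '$2 == "my_driver" {print $1}' /proc/devices)
mknod /dev/myDevName c "$major" 0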
Eventually, I have to use Buildroot to build my file system, which should include my driver.
In Buildroot, you can use a device table file to create device nodes (instead of running mknod ... when the Linux system is up).
The missing part is how to specify the major number in the device table file, since I don't have it yet (it will be assigned later by the Linux kernel when the system is up)?!
Thanks for your help.
Let the /dev entries be created dynamically and automatically for you. A static table is too cumbersome when you have dynamic numbers.
There are several dynamic /dev management methods. From most complex and featureful to simplest:
use udev and systemd (like many desktop/server distributions do)
use udev (if your init system is not systemd)
use mdev from Busybox (like udev, but simpler and very lightweight)
mount a devtmpfs on /dev (no daemon needed, the kernel will do it for you)
Buildroot can set up whichever you prefer. Just enter make menuconfig -> System configuration -> /dev management. See the manual section /dev management for the details.
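For example, after picking one of these under System configuration -> /dev management, the choice shows up in the generated .config; the symbol names below come from recent Buildroot releases and may differ in older ones:
# Assuming "Dynamic using devtmpfs + mdev" was selected in menuconfig
grep '^BR2_ROOTFS_DEVICE_CREATION' .config
# Expect something like:
#   BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_MDEV=y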

Docker Images Report as Taking 5-10x Actual Space on BTRFS Filesystem

Today's day one with Docker, and I've been overjoyed (cough) to find that docker is taking 5-10 times as much space to store the images on my hard drive as the images themselves. A visual inspection in baobab shows very similar (though not perfectly identical) folder structure repeated among the five subfolders of /var/lib/docker.
A 1.8G docker image takes 18G on my disk. If I rebuild the images from scratch using the same sources, Docker barely increases disk storage, plus 1G give or take--so at least it's deduplicating sources in that sense. Once I remove the images storage goes down to 400K.
I thought maybe there were a bunch of different sources that had to be differentially compared to get to the final version of the image I had downloaded earlier, so I downloaded the 18.04 Ubuntu image (79M) next to verify if that was the case, but even so, baobab is back to showing 401 MB under /var/lib/docker/. What the heck?!? Am I missing something, or is Docker being dreadfully inefficient? Is there an evil BTRFS compression kernel bug? Does Docker hate disk encryption? Please tell me Docker doesn't just laugh in your face and fill up your hard drive instead of anti-de-duplicating your data.
On a clean install of x11vnc/docker-desktop with nothing else
user@Ubuntu ~ $ docker system df
TYPE                TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images              1         0         1.817GB   1.817GB (100%)
Containers          0         0         0B        0B
Local Volumes       2         0         72.87kB   72.87kB (100%)
Build Cache                             0B        0B
Docker on Btrfs uses a lot of snapshots. Btrfs snapshots use copy-on-write, so when they're changed, only the changed parts use new disk space.
But regular disk tools don't know about snapshots and count the space as a full copy for each snapshot.
Use btrfs fi du -s /var/lib/docker to use the Btrfs tools to measure it.
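For example (btrfs fi is shorthand for btrfs filesystem; run it as root):
sudo btrfs filesystem du -s /var/lib/docker
# The columns are Total / Exclusive / Set shared: 'Exclusive' is what removing
# the data would actually free, while 'Set shared' is space shared between
# snapshots that du or baobab count once per copy.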

How to increase the space (Data Space Total) used by Docker? That value is less than half of the disk size

Many times, I get this error when building a Dockerfile:
devmapper: Thin Pool has 157168 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
My disk has 250 GB in total, and when I run docker info, I can see in the storage section:
Storage Driver: devicemapper
Pool Name: docker-253:0-19468577-pool
Pool Blocksize: 65.54kB
Base Device Size: 21.47GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 97.03GB
Data Space Total: 107.4GB
Data Space Available: 10.35GB
Metadata Space Used: 83.67MB
Metadata Space Total: 2.147GB
Metadata Space Available: 2.064GB
Thin Pool Minimum Free Space: 10.74GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-11-16)
After stopping the Docker service, I tried:
dockerd --storage-opt dm.thinpooldev dm.min_free_space=3%
dockerd --storage-opt dm.thinp_autoextend_percent
But those commands didn't succeed.
How can I increase the Data Space Total (the free space on the disk is more than two times 107.4GB)?
Or how can I decrease the Thin Pool Minimum Free Space (10.74GB)?
If you've cleaned/pruned and still have issues, this is how you actually modify the minimum free space setting.
Create or modify this file: /etc/docker/daemon.json
{
  "storage-opts": [
    "dm.min_free_space=1%"
  ]
}
DISCLAIMER: This answer does not address the poster's specific question.
However, it may help or alleviate the problem for people who get the same error message.
This answer describes how to free space in general, not how to configure the amount of space to be used.
The error is probably caused by Docker filling its assigned space with containers, images or volumes.
To see Docker's actual usage, run:
$ docker system df
TYPE                TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images              135       27        41.33GB   33.77GB (81%)
Containers          34        32        509.4MB   15.21kB (0%)
Local Volumes       387       3         3.706GB   3.706GB (99%)
Prune helps with cleaning up:
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
(See the alternatives on the prune documentation page for older Docker versions.)
Beware that volumes are not removed by default, to prevent loss of data. You have to request it yourself, by adding --volumes or by cleaning them up manually, as in the sketch below.
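A short sketch of both options (review what each command reports before confirming the deletion):
# Prune unused data including local volumes in one go
docker system prune --volumes
# Or handle volumes separately
docker volume ls
docker volume prune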

In KVM how to clone a VM to a VM with different disk space

Using CentOS 6.7 as the host for KVM virtualization, I have created a VM with a virtual disk of 30 GB. I want to clone this VM to a new VM with a different disk size.
The new VM should have 60 GB of disk space.
Is this possible at all? If yes, how can I do it?
Why do you want to clone the VM to a different disk size?
1 - If you want to clone the VM in order to extend its disk:
If the VM's disk is an LVM logical volume, you can extend that LV to increase the VM's disk:
lvextend -L +<size> /dev/vgname/lvmNameOfVm    (size in bytes, or append G for gigabytes)
resize2fs /dev/vgname/lvmNameOfVm
You can find the LV and VG names with lvdisplay and vgs.
2 - If you need to clone the VM in order to move it to another server:
I suggest you first resize your VM and then clone it to the new VM.
Note: if you are doing this for the first time, first create a test VM, try the procedure on it, and only apply it to your main VM after it has worked for you.
Don't you have a control panel like SolusVM? If you do, the panel can clone the VM without you needing to do anything over SSH.
I managed to clone a VM and then increase its disk size; I decided to document the steps I took here because I couldn't find them all in one place.
After cloning, to extend the disk size, create a new disk with a bigger size:
virsh vol-create-as default newdisk 60G
and copy the old disk to the new disk, expanding one of the guest's partitions:
virt-resize --expand /dev/sda2 olddisk newdisk
Change the VM's configuration to use the new disk.
Issue this command to edit the configuration file:
# virsh edit <VM_name>
Find and replace the old disk name with the new disk name.
More detail about these steps can be found here: http://libguestfs.org/virt-resize.1.html
Now start the new VM, log in to it, and resize the VM's LVM partition:
lvextend -l +<free_extent_count> /dev/vg_<VM_name>/lv_root
resize2fs /dev/vg_<VM_name>/lv_root
To find the number of free extents (Free PE), issue the following command:
vgdisplay
a great tutorial about these steps can be found here: http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
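As an alternative to counting free extents by hand, lvextend also accepts a percentage of the remaining free space; a minimal sketch using the same placeholder names as above:
# Grow the root LV into all remaining free space, then grow the filesystem
lvextend -l +100%FREE /dev/vg_<VM_name>/lv_root
resize2fs /dev/vg_<VM_name>/lv_root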

LXD with LVM backingstore to achieve disk quotas

I see from the LXD storage specs that LVM can be used as a backingstore. I've previously managed to get LVM working with LXC. This was very
pleasing, since it allows quota-style control of disk consumption.
How do I achieve this with LXD?
From what I understand, storage.lvm_vg_name must point to my volume
group. I've set this for a container by creating a profile, and
applying that profile to the container. The entire profile config
looks like this:
name: my-profile-name
config:
  raw.lxc: |
    storage.lvm_vg_name = lxc-volume-group
    lxc.start.auto = 1
    lxc.arch = amd64
    lxc.network.type = veth
    lxc.network.link = lxcbr0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx
    lxc.cgroup.cpu.shares = 1
    lxc.cgroup.memory.limit_in_bytes = 76895572
  security.privileged: "false"
devices: {}
The volume group should be available and working, according to
pvdisplay on the host box:
  --- Physical volume ---
  PV Name               /dev/sdc5
  VG Name               lxc-volume-group
  PV Size               21.87 GiB / not usable 3.97 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5599
  Free PE               901
  Allocated PE          4698
  PV UUID               what-ever
However after applying the profile and starting the container, it
appears to be using file backing store:
me@my-box:~# ls /var/lib/lxd/containers/container-name/rootfs/
bin boot dev etc home lib lib64 lost+found media mnt opt
proc root run sbin srv sys tmp usr var
What am I doing wrong?
Note that we also ship a python script with LXD to do the initial VG configuration for you.
As for disk quotas, we have a new specification for it which we'll be implementing shortly and that will let you set disk quotas for any storage attached to a container which supports it.
While we still support LVM, our main focus and preference as far as storage backends go is ZFS nowadays, as it allows such changes to happen live and also works better when moving containers and snapshots across the network.
The new storage quota feature will be supported on zfs, LVM and btrfs, but will only be applied live for zfs and btrfs; LVM will require a container restart.
I'll answer my own question, in case it's of use to others.
According to an authoritative answer on the lxc-users mailing list:
"The storage.lvm_vg_name is not a per-container config setting, it's
for the whole daemon.
You set it using 'lxc config set storage.lvm_vg_name myvolgroup', and
then lxd will use the volume group as storage for every new image and
container that you create afterwards."
As a very rough summary, I used vgcreate to create a volume group, then lvcreate to create a volume within that group. This was followed by lxc config set storage.lvm_vg_name and lxc config set storage.lvm_thinpool_name appropriately.
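A minimal sketch of that sequence, assuming the /dev/sdc5 device from the question and a hypothetical thin pool named LXDPool (these are the old daemon-wide storage.* keys quoted above):
# Create the volume group on the spare partition
vgcreate lxc-volume-group /dev/sdc5
# Create a thin pool inside it; the size is a placeholder, leave room for metadata
lvcreate --type thin-pool -L 18G -n LXDPool lxc-volume-group
# Point LXD at the volume group and thin pool
lxc config set storage.lvm_vg_name lxc-volume-group
lxc config set storage.lvm_thinpool_name LXDPool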
It appears to work. However LXD feels a little too immature for my tastes at the moment, and I'm going to use plain LXC for now. I look forward to trying LXD again in a few months.
