Buildroot: Build Device Table with unknown Major Number - driver

I have a driver whose major number is assigned dynamically by the Linux kernel.
To create a device node for my driver, I run the following steps manually:
insmod my_driver
cat /proc/devices        # to see which major number was assigned
mknod /dev/myDevName c Assigned_Major_Number 0
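A scripted version of these manual steps (a rough sketch; it assumes the driver registers itself under the name myDevName, so that it shows up with that name in /proc/devices) would be:
insmod my_driver.ko
major=$(awk '$2 == "myDevName" {print $1}' /proc/devices)
mknod /dev/myDevName c "$major" 0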
Eventually, I have to use Buildroot to build my file system, which should include my driver.
In Buildroot, you can use a device table file to create device nodes (instead of running mknod ... once the Linux system is up).
The missing part is how to specify the major number in the device table file, since I don't have it yet (it will be assigned later by the Linux kernel when the system is up).
Thanks for your help

Let the /dev entries be created dynamically and automatically for you. A static table is too cumbersome when you have dynamic numbers.
There are several dynamic /dev management methods. From most complex and featureful to simplest:
use udev and systemd (like many desktop/server distributions do)
use udev (if your init system is not systemd)
use mdev from Busybox (like udev, but simpler and very lightweight)
mount a devtmpfs on /dev (no daemon needed, the kernel will do it for you)
Buildroot can set up whichever you prefer. Just enter make menuconfig -> System configuration -> /dev management. See the /dev management section of the Buildroot manual for the details.
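For example, selecting the mdev option there ends up as something like the following in the Buildroot configuration (a sketch; the exact symbol name may differ between Buildroot versions, and the devtmpfs-only option has an analogous symbol):
# Buildroot .config fragment: let mdev (on top of devtmpfs) populate /dev at boot
BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_MDEV=y
With any of the dynamic methods, the node for your driver is created automatically once the module registers its device with the kernel's device model, so no entry in a device table is needed.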

Related

BeagleBone black USB_mass_storage connect to Windows not working

I have an annoying issue getting USB_mass_storage on the BBB to work when connected to Windows.
I have created an image:
dd bs=1M if=/dev/zero of=/usb.bin count=64
Formatted it:
mkdosfs /usb.bin -F 32 -I
I have mounted it, copied files to and from it, no problem.
Then I created a USB mass storage device:
modprobe g_mass_storage file=./usb.bin stall=0 ro=0
Connected it to a USB port on my Linux machine, no problem; I can see and manipulate files.
On Windows I can see the drive, the size is correct, but filesystem is not recognized.
With ro=0 I am able to create a partition from within Windows and format it. I can copy files to and from it, but when I mount it on the BBB I cannot see the files copied using Windows. I can, however, still see the files I copied to the mount point on the BBB.
Can someone tell me what I am doing wrong?
I disabled everything regarding g_multi, including RNDIS, Serial, CDC.
And it works perfectly under Linux.
You have created a raw disk image without a partition table on the Linux side. Linux doesn't care whether it's a file, whether it has a partition table, etc.
Windows, however, gets confused by the lack of a partition table, as you noticed.
Having a partition table is preferable. What you can do on the Linux side of things:
losetup --partscan - have the file treated as a disk with a partition table and get a device node for each partition
Mount the partition directly using an offset
In this particular case the latter is probably the quickest: there is only one partition and the offset is known.
fdisk -l ./usb.bin
Multiply the Start value by the Units size (for example, a start sector of 2048 with 512-byte units gives an offset of 2048 * 512 = 1048576). Use the result as the offset below:
mount -o loop,offset=12345 ./usb.bin /mnt
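The losetup route from the list above would look roughly like this (the loop device name is whatever losetup prints; /dev/loop0 is just an example):
losetup --find --show --partscan ./usb.bin   # prints e.g. /dev/loop0 and creates /dev/loop0p1
mount /dev/loop0p1 /mnt                      # mount the first partition
# ... work with the files ...
umount /mnt
losetup -d /dev/loop0                        # detach the loop device again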
Make sure never to access it from both sides at the same time, as this will lead to filesystem damage and data loss.
See also e.g. https://askubuntu.com/a/69447

Enabling cgroup cpu real-time runtime in ubuntu kernel

I am trying to use real-time scheduling in a docker container running on Ubuntu 18.04.
I have already installed a realtime kernel following the method given here. I have selected kernel version 5.2.9 and its associated rt patch.
The output of uname -a confirms that the realtime kernel is well installed and running:
Linux myLaptop 5.2.9-rt3 #1 SMP PREEMPT RT ...
To run my container I issue the following command:
docker run --cpu-rt-runtime=95000 \
--ulimit rtprio=99 \
--ulimit memlock=102400 \
--cap-add=sys_nice \
--privileged \
-it \
myimage:latest
However, the output I got is:
docker: Error response from daemon: Your kernel does not support cgroup cpu real-time runtime.
I have seen that this can be linked to the missing CONFIG_RT_GROUP_SCHED as detailed in the issue here. Indeed if I run the script provided at this page to check the kernel compatibility with Docker I get:
- CONFIG_RT_GROUP_SCHED: missing
This seems to confirm that Docker relies on this option for real-time scheduling, but that it is not enabled in my kernel, even though the kernel is patched to be real-time.
From there, I tried in vain to find a solution. I am not well enough versed in kernel configuration to know whether I need to compile the kernel with a specific option, and which one to choose, to add the missing CONFIG_RT_GROUP_SCHED.
Thanks a lot in advance for recommendations and help.
When talking about real-time Linux there are different approaches, ranging from single-kernel approaches (like PREEMPT_RT) to dual-kernel approaches (such as Xenomai). You can use real-time capable Docker containers in combination with all of them (clearly the kernel of your host machine has to match) to produce real-time capable systems, but the approaches differ. In your case you are mixing up two different approaches: you installed PREEMPT_RT while following a guide for control groups, which are incompatible with PREEMPT_RT.
By default the Linux kernel can be compiled with different levels of preempt-ability (see e.g. Reghenzani et al. - "The real-time Linux kernel: a Survey on PREEMPT_RT"):
PREEMPT_NONE has no way of forced preemption
PREEMPT_VOLUNTARY where preemption is possible in some locations in order to reduce latency
PREEMPT where preemption can occur in any part of the kernel (excluding spinlocks and other critical sections)
These can be combined with control groups (cgroups for short) by setting CONFIG_RT_GROUP_SCHED=y during kernel compilation, which allows reserving a fraction of CPU time for the processes of a (user-defined) group.
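As a rough illustration of what that reservation looks like at runtime (a sketch assuming the cgroup v1 cpu controller is mounted at /sys/fs/cgroup/cpu, which is also what Docker's --cpu-rt-runtime option writes to; the group name and values are made up):
mkdir /sys/fs/cgroup/cpu/my_rt_group
echo 1000000 > /sys/fs/cgroup/cpu/my_rt_group/cpu.rt_period_us   # accounting period of 1 s
echo 500000  > /sys/fs/cgroup/cpu/my_rt_group/cpu.rt_runtime_us  # allow up to 0.5 s of real-time CPU per period
echo $$      > /sys/fs/cgroup/cpu/my_rt_group/tasks              # move the current shell into the group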
PREEMPT_RT developed from PREEMPT and is a set of patches that aims at making the kernel fully preemptible, even in critical sections (PREEMPT_RT_FULL). For this purpose e.g. spinlocks are largely replaced by mutexes.
As of 2021 it is slowly being merged into the mainline and will be available to the general public without the need to patch the kernel. As stated here, PREEMPT_RT currently can't be compiled with CONFIG_RT_GROUP_SCHED and therefore can't be used with control groups (see here for a comparison). From what I have read this is due to high latency spikes, something that I have already observed with control groups by means of cyclictest.
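For reference, such latency spikes can be measured with cyclictest from the rt-tests suite, e.g. (the parameters are just a typical choice):
cyclictest --mlockall --priority=80 --interval=200 --distance=0 --duration=5m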
This means you can either compile your kernel (see the Ubuntu manual for details)
Without PREEMPT_RT but with CONFIG_RT_GROUP_SCHED (see this post for details), and follow the Docker guide on real-time with control groups as well as my post here. In my experience, though, this shows quite high latency spikes, which is undesirable for a real-time system where the worst-case latency matters much more than the average latency.
With PREEMPT_RT and without CONFIG_RT_GROUP_SCHED (which can also be installed from a Debian package such as this one). In this case it is sufficient to run the container with the options --privileged --net=host, or the Docker Compose equivalent privileged: true, network_mode: host. Then any process inside the container can set real-time priorities (rtprio), e.g. by calling ::pthread_setschedparam from inside the code or by using chrt from the command line.
In case you are not using root as the user inside the container, you will furthermore have to make sure your user belongs to a group with real-time privileges on your host computer (see $ ulimit -r). This can be done by configuring the PAM limits (the /etc/security/limits.conf file) accordingly (as described here), by copying the section for the realtime user group and creating a new group (e.g. some_group) or adding the user (e.g. some_user) directly:
@some_group soft rtprio 99
@some_group soft priority 99
@some_group hard rtprio 99
@some_group hard priority 99
In this context rtprio is the maximum real-time priority allowed for non-privileged processes. The hard limit is the real limit up to which the soft limit can be raised; hard limits are set by the super-user and enforced by the kernel, and a user cannot run code at a higher priority than the hard limit allows. The soft limit, on the other hand, is the default value, bounded by the hard limit. For more information see e.g. here.
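With the limits in place, a process inside the container can then request a real-time priority itself, for example with chrt as mentioned above (a sketch; the binary name is made up):
chrt -f 80 ./my_rt_app        # start the process with SCHED_FIFO, priority 80
chrt -p $(pidof my_rt_app)    # show the scheduling policy and priority it actually got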
I use the latter option for real-time capable robotic applications and could not observe any difference in latency between running with and without Docker. You can find a guide on how to set up PREEMPT_RT, as well as automated scripts for building it, on my GitHub.

How to limit Docker filesystem space available to container(s)

The general scenario is that we have a cluster of servers and we want to set up virtual clusters on top of that using Docker.
For that we have created Dockerfiles for different services (Hadoop, Spark etc.).
Regarding the Hadoop HDFS service, however, we have the situation that the disk space available to the Docker containers equals the disk space available to the server. We want to limit the available disk space on a per-container basis so that we can dynamically spawn an additional datanode with some given storage size to contribute to the HDFS filesystem.
We had the idea to use loopback files formatted with ext4 and mount these on directories which we use as volumes in docker containers. However, this implies a large performance loss.
I found another question on SO (Limit disk size and bandwidth of a Docker container), but the answers are almost 1.5 years old, which, given the pace of Docker development, is ancient.
Which approach or storage backend would allow us to
limit storage on a per-container basis,
get near bare-metal performance,
and avoid repartitioning the server drives?
You can specify runtime constraints on memory and CPU, but not disk space.
The ability to set constraints on disk space has been requested (issue 12462, issue 3804), but isn't yet implemented, as it depends on the underlying filesystem driver.
This feature is going to be added at some point, but not right away. It's a bit more difficult to add this functionality right now because a lot of chunks of code are moving from one place to another. After this work is done, it should be much easier to implement this functionality.
Please keep in mind that quota support can't be added as a hack to devicemapper, it has to be implemented for as many storage backends as possible, so it has to be implemented in a way which makes it easy to add quota support for other storage backends.
Update August 2016: as shown below, and in an issue 3804 comment, PR 24771 and PR 24807 have been merged since then. docker run now allows setting storage driver options per container:
$ docker run -it --storage-opt size=120G fedora /bin/bash
This (size) allows setting the container rootfs size to 120G at creation time.
This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers
Documentation: docker run/#Set storage driver options per container.
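Note that for the overlay2 driver the per-container size option only works when the Docker data root sits on an xfs filesystem mounted with project quotas; a sketch of that setup (device path and mount options are assumptions):
# /etc/fstab entry for the filesystem backing /var/lib/docker
/dev/sdb1  /var/lib/docker  xfs  defaults,pquota  0 0
# then, per container:
docker run -it --storage-opt size=10G fedora /bin/bash   # rootfs capped at 10G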

LXD with LVM backingstore to achieve disk quotas

I see from the LXD storage specs that LVM can be used as a backingstore. I've previously managed to get LVM working with LXC. This was very pleasing, since it allows quota-style control of disk consumption.
How do I achieve this with LXD?
From what I understand, storage.lvm_vg_name must point to my volume group. I've set this for a container by creating a profile, and applying that profile to the container. The entire profile config looks like this:
name: my-profile-name
config:
  raw.lxc: |
    storage.lvm_vg_name = lxc-volume-group
    lxc.start.auto = 1
    lxc.arch = amd64
    lxc.network.type = veth
    lxc.network.link = lxcbr0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx
    lxc.cgroup.cpu.shares = 1
    lxc.cgroup.memory.limit_in_bytes = 76895572
  security.privileged: "false"
devices: {}
The volume group should be available and working, according to pvdisplay on the host box:
--- Physical volume ---
PV Name /dev/sdc5
VG Name lxc-volume-group
PV Size 21.87 GiB / not usable 3.97 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5599
Free PE 901
Allocated PE 4698
PV UUID what-ever
However, after applying the profile and starting the container, it appears to be using a file backing store:
me@my-box:~# ls /var/lib/lxd/containers/container-name/rootfs/
bin boot dev etc home lib lib64 lost+found media mnt opt
proc root run sbin srv sys tmp usr var
What am I doing wrong?
Note that we also ship a python script with LXD to do the initial VG configuration for you.
As for disk quotas, we have a new specification for it which we'll be implementing shortly and that will let you set disk quotas for any storage attached to a container which supports it.
While we still support LVM, our main focus and preference as far as storage backends go is ZFS nowadays, as it allows such changes to happen live and also works better when moving containers and snapshots across the network.
The new storage quota feature will be supported on zfs, LVM and btrfs but will only be applied live for zfs and btrfs, LVM will require a container restart.
I'll answer my own question, in case it's of use to others.
According to an authoritative answer on the lxc-users mailing list:
"The storage.lvm_vg_name is not a per-container config setting, it's for the whole daemon. You set it using 'lxc config set storage.lvm_vg_name myvolgroup', and then lxd will use the volume group as storage for every new image and container that you create afterwards."
As a very rough summary, I used vgcreate to create a volume group, then lvcreate to create a volume within that group. This was followed by lxc config set storage.lvm_vg_name and lxc config set storage.lvm_thinpool_name appropriately.
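A rough sketch of those steps (device path, volume group and thin pool names are placeholders):
vgcreate lxd-vg /dev/sdc5                                 # create the volume group
lvcreate --type thin-pool -l 90%FREE -n lxd-pool lxd-vg   # create a thin pool inside it
lxc config set storage.lvm_vg_name lxd-vg
lxc config set storage.lvm_thinpool_name lxd-pool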
It appears to work. However LXD feels a little too immature for my tastes at the moment, and I'm going to use plain LXC for now. I look forward to trying LXD again in a few months.

Xenserver NFS export share only 4GB size?

I have managed to create an NFS server on my XenServer and mounted it on my CloudStack 4.4!
However, I realise the size of my primary storage and secondary storage is only 4 GB when I have assigned 250 GB to my XenServer VM (local storage).
May I know why, and how can I increase the space?
Picture link
http://115.66.5.90/manage/shares/Torrents/why%204gb%20size.png?__c=2533372089363723488
Edit on 6/8/2014:
Hello Miguel, I have followed your steps but am still stuck. (Xen was given 100 GB.)
pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 VG_XenStorage- lvm2 a- 91.99G 91.98G
Then I ran gdisk /dev/sda3, as this 91 GB is the free storage I have after installing Xen on my VM.
I followed all the steps you have written below.
This is the result when I run pvs again:
[root@xenserver-bpqbdmrk ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 lvm2 a- 4.00G 4.00G
However, when I ran vgdisplay -c:
[root@xenserver-bpqbdmrk ~]# vgdisplay -c
No volume groups found
fdisk -l
Disk /dev/sda: 107.3 GB, 107374182400 bytes
256 heads, 63 sectors/track, 13003 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13004 104857599+ ee EFI GPT
[root@xenserver-bpqbdmrk ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.0G 1.9G 2.0G 49% /
none 381M 16K 381M 1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
52M 52M 0 100% /var/xen/xc-install
172.16.109.11:/export/primary/97cffd9a-acfe-0c71-91d5-b93e58f27462
4.0G 1.9G 2.0G 49% /var/run/sr-mount/97cffd9a-acfe-0c71-91d5-b93e58f27462
May I know why I do not have a volume group even though I have a storage repo of 4 GB on my NFS?
And why does my /dev/sda2 have only 4 GB too?
More information about my test cloud:
I am running a VM of 100 GB.
I wanted primary storage and secondary storage combining to 91 GB.
Command (? for help): p
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7AE0B6EE-99F4-44F4-A9F0-5140B14DCC32
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 6042 sectors (3.0 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 8388641 4.0 GiB 0700
2 8390656 16777249 4.0 GiB 0700
3 16779264 209715166 92.0 GiB 8E00
Command (? for help):
When you log on to your XenServer management console you are actually logging on to a VM (the one running in Dom0). This VM is the one that controls the whole hypervisor.
Only some of the resources you provided to your XenServer are used by the management VM in Dom0; the rest is available for the other VMs you might spin up on the XenServer.
That goes for CPU, memory and disk space.
You need to check whether the XenServer local-storage volume group already contains the remaining space of your disk. To do that, type pvs in the terminal to list all LVM physical volumes. The entry you are looking for starts with "VG_XenStorage-".
You should see the disk partition backing it, its total size and its free space.
If the local-storage volume group doesn't already contain the extra space, you need to add it yourself, partitioning the space first if that hasn't been done. Assuming your disk device is /dev/sda, type gdisk /dev/sda, then at the prompt type p to print the partition table. If you have one partition more than what is mounted, you already have a partition available to use; if you have two 4 GB partitions and one larger one (taking the remaining space), the last one is the one you want. If not, you need to create one at the end of the disk. Still in gdisk (a sketch of the session follows this list):
type n to create a new partition, then choose a number for it (the next available integer),
press Enter twice to make it start at the next available disk block and end at the last,
type 8e00 to select the "Linux LVM" partition type,
type w to write the new partition table.
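A sketch of that gdisk session (prompts abbreviated; the partition number and sector ranges will differ on your disk):
gdisk /dev/sda
Command (? for help): n
Partition number (default 3): <Enter>
First sector: <Enter>
Last sector: <Enter>
Hex code or GUID (Enter = 8300): 8e00
Command (? for help): w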
At this point you've either created a new partition or you already had one available. I'm assuming /dev/sda3. Now you need to create a physical volume on it and add it to the volume group XenServer uses for local storage.
pvcreate /dev/sda3 to create a new physical volume
vgextend $(vgdisplay -c | cut -d : -f 1) /dev/sda3
The $(vgdisplay ...) bit finds the name of the volume group to which the new physical volume will be attached.
If you run pvs again you should see that the local-storage volume group now has more space available.
Edit:
As mentioned before, XenServer can manage local storage for VMs using a Storage Repository (SR). When this is the case, there is no need to create a primary storage directory for holding the VMs' storage.
As for secondary storage, there will still be a need for it. Secondary storage is where CloudStack looks for the templates (disk images) that it uses to boot the System VMs. System VMs are the VMs CloudStack uses for managing the cloud environment (e.g. virtual routers or console proxies). The hypervisors under CloudStack (in this case a XenServer) must be able to reach the secondary storage, and one of the most common ways of achieving this is to make the secondary storage available via NFS. Whether the NFS export is available from the hypervisor itself or some other reachable machine, that doesn't really matter.
Getting back to the setup in the question, the disk of the XenServer would have to be partitioned in such a way that one partition is available for primary storage (managed by XenServer via an SR) and another for secondary storage (with a file system, mounted locally and made available as an NFS export).
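A sketch of the secondary-storage side of that layout (device, mount point and export options are assumptions; the partition must not be the one handed to the SR):
mkfs.ext3 /dev/sda4                       # the partition set aside for secondary storage
mkdir -p /export/secondary
mount /dev/sda4 /export/secondary
echo "/export/secondary *(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -a                               # publish the export so CloudStack can mount it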
