After boot, does Linux reclaim Tianocore boot loader memory? - bios

I am using Tianocore to boot Linux. I understand that Linux can use the Tianocore runtime services (reboot, update_capsule, etc.), which means that
some part of the Tianocore code remains untouched by Linux; Linux will never touch that memory.
My question: is it only the part of the Tianocore code related to the runtime services, or does the whole of Tianocore remain untouched by the Linux kernel even after boot?
And how does the Linux kernel come to know about the memory areas that contain the Tianocore image?

There are many memory types that can be allocated by a UEFI implementation (using the AllocatePool or AllocatePages boot services); some of them will remain untouched by a UEFI-aware OS, and others will be freed. All regions of memory that shouldn't be freed are also added to the e820 memory map to prevent legacy OSes from corrupting them.
Normally, only a small portion of allocated memory is not freed after the ExitBootServices event: runtime services code and data, ACPI tables and MMIO regions.
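On a running x86 Linux box you can inspect the map the kernel was handed. The sketch below assumes a kernel built with CONFIG_FIRMWARE_MEMMAP and the sysfs layout under /sys/firmware/memmap (one directory per region, with start, end and type files); it simply walks the entries and prints them, so the regions that must stay untouched (runtime code/data, ACPI tables, reserved areas) show up by type. On EFI-booted kernels with CONFIG_EFI_RUNTIME_MAP, /sys/firmware/efi/runtime-map lists the runtime-services regions specifically.

    /* Sketch: dump the firmware-provided memory map that Linux exposes under
     * /sys/firmware/memmap (x86, CONFIG_FIRMWARE_MEMMAP). */
    #include <stdio.h>
    #include <dirent.h>

    static void print_file(const char *dir, const char *name)
    {
        char path[256], buf[128];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/firmware/memmap/%s/%s", dir, name);
        f = fopen(path, "r");
        if (!f)
            return;
        if (fgets(buf, sizeof(buf), f))
            printf("%s=%s", name, buf);   /* buf keeps its trailing newline */
        fclose(f);
    }

    int main(void)
    {
        DIR *d = opendir("/sys/firmware/memmap");
        struct dirent *e;

        if (!d) {
            perror("opendir");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                 /* skip "." and ".." */
            printf("-- entry %s --\n", e->d_name);
            print_file(e->d_name, "start");
            print_file(e->d_name, "end");
            print_file(e->d_name, "type");
        }
        closedir(d);
        return 0;
    }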

Related

virtual_memory meaning for process in container

In the old days, a process's virtual memory size was not to be trusted, since a shared library was loaded into memory only once but counted against every program that used it.
For a process in a container, is the virtual memory figure the real memory used?
Resources in Linux containers are managed via cgroups:
A cgroup is a collection of processes that are bound to a set of
limits or parameters defined via the cgroup filesystem.
In particular, regarding the memory cgroup:
memory (since Linux 2.6.25; CONFIG_MEMCG)
The memory controller supports reporting and limiting of
process memory, kernel memory, and swap used by cgroups.
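So for a containerized process the numbers to look at are the cgroup's own counters, not the per-process virtual size. A minimal sketch (assuming cgroup v2 mounted at the default /sys/fs/cgroup; on v1 the equivalent files are memory.usage_in_bytes and memory.limit_in_bytes) that finds the calling process's cgroup from /proc/self/cgroup and prints its memory accounting:

    /* Sketch: read the memory accounting of the current cgroup (cgroup v2). */
    #include <stdio.h>
    #include <string.h>

    static void show(const char *cgroup_path, const char *file)
    {
        char path[512], buf[64];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/fs/cgroup%s/%s", cgroup_path, file);
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", file, buf);   /* "max" means no limit */
        fclose(f);
    }

    int main(void)
    {
        char line[512], cgroup[512] = "/";
        FILE *f = fopen("/proc/self/cgroup", "r");

        /* On cgroup v2 the single line looks like "0::/some/path". */
        if (f) {
            if (fgets(line, sizeof(line), f)) {
                char *p = strstr(line, "::");
                if (p) {
                    snprintf(cgroup, sizeof(cgroup), "%s", p + 2);
                    cgroup[strcspn(cgroup, "\n")] = '\0';
                }
            }
            fclose(f);
        }

        show(cgroup, "memory.current");  /* memory actually charged to the cgroup */
        show(cgroup, "memory.max");      /* hard limit, or "max" if unlimited */
        return 0;
    }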

Does the operating system itself issue virtual memory addresses?

An operating system itself has resources it needs to access, like block I/O cache and process control blocks. Does it use virtual memory addresses or physical memory addresses?
I feel like it should be the former, since that avoids reserving a large area of physical memory for one purpose even when it is mostly empty; page tables and virtual memory would do a much better job of managing the resources the OS really needs.
So which is it?
10 randomly selected operating systems will do virtual memory management in 10 different ways. There's no answer that applies to all operating systems.
Some (e.g. MS-DOS) don't support or use virtual memory management for anything; some (e.g. Linux) just map all of physical memory into kernel space and don't bother with virtual memory tricks for the kernel itself (it's almost as if the kernel were in physical memory, even though technically it's both); and some may do any number of virtual memory tricks in kernel space.
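To illustrate the Linux case, here is a minimal kernel-module sketch (the module name dmap_demo is made up) that kmallocs a buffer and prints both its kernel virtual address and the physical address behind it; the two differ only by the direct-map offset:

    /* Sketch: show the kernel's direct mapping. kmalloc() returns memory in
     * the direct map, so virt_to_phys() is valid for it and the virtual and
     * physical addresses differ by a constant offset. Build against your
     * kernel headers. */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/io.h>

    static void *buf;

    static int __init dmap_demo_init(void)
    {
        buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        pr_info("dmap_demo: virt=0x%lx phys=0x%llx\n",
                (unsigned long)buf,
                (unsigned long long)virt_to_phys(buf));
        return 0;
    }

    static void __exit dmap_demo_exit(void)
    {
        kfree(buf);
    }

    module_init(dmap_demo_init);
    module_exit(dmap_demo_exit);
    MODULE_LICENSE("GPL");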

Writing to kernel memory in kernel 3 versus kernel 2.4?

I've written an LKM that writes to a kernel data structure (poolinfo_table). If I insmod this LKM on kernel 2.4, I guess it writes to this data structure, but when I do the same on kernel 3.10 my system restarts, as I expect. What's wrong with kernel 2.4? Is its kernel memory not protected, or am I not actually writing to it? I mean, I expect some kind of kernel crash when I try to write to its memory, so I doubt I've actually written to my kernel 2.4's memory. In fact, I tried the same code on my host system (Fedora 18) with kernel 3.10 and on my guest (Red Hat 9) with kernel 2.4. (I'm using the Xen hypervisor.)
In userland, if you write somewhere you are not supposed to (somewhere in your address space that is not writable, for example because it is not mapped or is write-protected), the MMU may deliver a bus error or segmentation violation to your process.
This is less likely, but not impossible, for threads in kernel space - you can easily cause memory corruption or mess up memory-mapped devices without triggering an instant crash. The most likely crash you'll generate in kernel space is by stomping on someone else's memory pointer and having them inadvertently step through it into unmapped space.
The major difference between userland and kernel really only relates to the scope of the damage you can do. Obviously in the kernel you can mess up a whole lot more.
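A small userland sketch of the first point: map a page read-only, write to it, and let the MMU deliver SIGSEGV. In kernel space the same stray write to a mapped, writable structure may instead silently corrupt data, which is why the 2.4 module above may "succeed" without an immediate crash.

    /* Sketch: provoke and catch the fault the MMU raises for a write to a
     * read-only mapping. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        static const char msg[] = "caught SIGSEGV: write to read-only page\n";
        (void)sig;
        write(STDERR_FILENO, msg, sizeof(msg) - 1);  /* async-signal-safe */
        _exit(0);
    }

    int main(void)
    {
        struct sigaction sa;
        char *page;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);

        page = mmap(NULL, 4096, PROT_READ,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("writing to read-only mapping at %p...\n", (void *)page);
        fflush(stdout);
        page[0] = 'x';            /* faults: the page is not writable */

        printf("this line is never reached\n");
        return 0;
    }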

Pinned memory in CUDA

I read somewhere that pinned memory in CUDA is a scarce resource. What is the upper bound on pinned memory? On Windows? On Linux?
Pinned memory is just physical RAM in your system that is set aside and not allowed to be paged out by the OS. Once pinned, that amount of memory becomes unavailable to other processes (effectively reducing the memory pool available to the rest of the OS).
The maximum pinnable memory therefore is determined by what other processes (other apps, the OS itself) are competing for system memory. What processes are concurrently running in either Windows or Linux (e.g. whether they themselves are pinning memory) will determine how much memory is available for you to pin at that particular time.
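One way to see this is to probe. The sketch below (CUDA runtime API; the 256 MiB chunk size is an arbitrary choice) keeps calling cudaMallocHost until it fails, which tells you roughly how much could be pinned at that particular moment on that particular machine, not a fixed upper bound:

    /* Sketch: probe how much pinned (page-locked) host memory can be
     * allocated right now, in 256 MiB chunks. Build with nvcc, or with a C
     * compiler linked against the CUDA runtime (-lcudart). */
    #include <stdio.h>
    #include <cuda_runtime.h>

    #define CHUNK (256u * 1024u * 1024u)   /* 256 MiB per allocation */
    #define MAX_CHUNKS 1024                /* stop after 256 GiB, just in case */

    int main(void)
    {
        void *chunks[MAX_CHUNKS];
        int n = 0;

        while (n < MAX_CHUNKS) {
            cudaError_t err = cudaMallocHost(&chunks[n], CHUNK);
            if (err != cudaSuccess) {
                printf("cudaMallocHost failed after %d chunks: %s\n",
                       n, cudaGetErrorString(err));
                break;
            }
            n++;
        }
        printf("pinned roughly %zu MiB before failure\n",
               (size_t)n * (CHUNK / (1024u * 1024u)));

        while (n > 0)
            cudaFreeHost(chunks[--n]);     /* release the pinned pages */
        return 0;
    }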

>4GB of memory for 32 bit application running on 64-bit Solaris (Very Large Memory)

Both MS Windows and Oracle Linux allow 32-bit applications to use more than 4 GB of memory. Windows' method is AWE (Address Windowing Extensions) and Linux's method is Very Large Memory (VLM).
How it works: a 32-bit application can't directly address more than 4 GB of virtual memory, but a 64-bit OS can, and 4 GB is too little for some applications. So VLM and AWE allow an application to reserve a huge amount of memory from the 64-bit OS (or even from a 32-bit OS, with AWE). The 32-bit application can't address this memory directly, but it can ask the OS to map some part of that memory into the first 4 GB (into its 32-bit virtual address space); that window can then be accessed and modified, and afterwards unmapped again (with another OS request).
The question is: is there something like VLM or AWE in Solaris (version 10 or 11; x86_64 or sparc64)?
There is no library I'm aware of, but implementing it would be quite straightforward under Solaris (and any Unix/Unix-like OS supporting tmpfs and mmap).
Just create a file of the size you want (e.g. 16 GiB) in /tmp (assuming /tmp is on tmpfs, the default configuration) and have the process(es) map various areas of this file to access memory at the wanted offsets, as sketched below.
Should you really want to access physical memory and not virtual memory, you can use Solaris ramdisk support (ramdiskadm) instead of tmpfs.
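A minimal sketch of that windowing scheme; the file name and sizes are illustrative, and a 32-bit binary should be built with -D_FILE_OFFSET_BITS=64 so mmap offsets beyond 4 GiB work:

    /* Sketch: back a buffer larger than the 32-bit address space with a
     * tmpfs file and map one window of it at a time. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define TOTAL_SIZE  (16ULL * 1024 * 1024 * 1024)  /* 16 GiB backing store */
    #define WINDOW_SIZE (256UL * 1024 * 1024)         /* 256 MiB mapped at once */

    int main(void)
    {
        int fd = open("/tmp/bigbuf", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, (off_t)TOTAL_SIZE) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* Map a window at an offset far beyond 4 GiB, touch it, unmap it. */
        off_t offset = (off_t)8 * 1024 * 1024 * 1024;   /* 8 GiB into the file */
        char *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, offset);
        if (win == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(win, 0xab, WINDOW_SIZE);   /* the data persists in the tmpfs file */
        munmap(win, WINDOW_SIZE);

        /* Remap a different window later to reach another part of the 16 GiB. */
        close(fd);
        return 0;
    }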
Solaris supports PAE (Physical Address Extension), but Googling around doesn't paint a pretty picture. There is very little information available, and most of it is dire warnings that a bunch of third-party drivers won't work with it.
