Writing to kernel memory in kernel 3 versus kernel 2.4?

I've written an LKM that writes to a kernel data structure (poolinfo_table). When I insmod this LKM on kernel 2.4 it appears to write to the structure without complaint, but when I do the same on kernel 3.10 my system restarts, as I expect. What's wrong with kernel 2.4? Is its kernel memory not protected, or am I not actually writing to it? I would expect a kernel crash whenever I try to write to kernel memory, so I doubt that I've actually written to my kernel 2.4's memory. In fact I tried the same code on my host system (Fedora 18, kernel 3.10) and on my guest (Red Hat 9, kernel 2.4), running under the Xen hypervisor.

In userland, if you write somewhere you are not supposed to (an address in your address space that is not writable, perhaps because it is not mapped or because it is write-protected), the MMU raises a fault and your process receives a bus error or segmentation violation.
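As a quick illustration of this (my own sketch, not part of the original answer), mapping a page read-only and then writing to it reliably produces the fault described above:

```c
/* userland demo: write to a write-protected page and catch the fault */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_segv(int sig)
{
    /* only async-signal-safe calls are allowed in a signal handler */
    const char msg[] = "SIGSEGV: wrote to a read-only page\n";
    (void)sig;
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* mapped, but not writable */
    char *p = mmap(NULL, 4096, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    p[0] = 'x';                 /* the MMU faults here */
    puts("no fault (unexpected)");
    return 0;
}
```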
This is less likely, but not impossible, for threads in kernel space: you can easily cause memory corruption or mess up memory-mapped devices without triggering an instant crash. The most likely way to generate a crash in kernel space is by stomping on someone else's memory pointer and having them inadvertently follow it into unmapped space.
The major difference between userland and kernel really only relates to the scope of the damage you can do. Obviously in the kernel you can mess up a whole lot more.
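In kernel space there is no such safety net. A deliberately broken module like the sketch below (my own placeholder illustration of the stomping scenario; do not load it on a machine you care about) will oops or panic the kernel rather than deliver a catchable signal:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init stomp_init(void)
{
    int *bad = (int *)0x10;     /* placeholder: an unmapped kernel address */
    *bad = 42;                  /* kernel page fault -> oops, possibly panic */
    return 0;
}

static void __exit stomp_exit(void)
{
}

module_init(stomp_init);
module_exit(stomp_exit);
MODULE_LICENSE("GPL");
```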

Related

Kernel virtual memory space and process virtual memory space

I was reading the textbook Computer Systems: A Programmer's Perspective (third edition), section 9.7.2, Linux Virtual Memory System, which talks about virtual memory.
I was a bit confused by the structure of virtual memory for a Linux process as shown below:
My questions are: is kernel virtual memory reserved for the kernel to run, and the rest of virtual memory reserved for user processes? What are kernel code and data for? And what is the "physical memory" region inside kernel virtual memory?
Is kernel virtual memory reserved for the kernel to run, and the rest of virtual memory reserved for user processes?
Yes, there is a part of virtual memory that is always reserved for the kernel and another part that is left available to userspace processes. Every single process has its own virtual memory, but the kernel is always mapped in the higher part (higher addresses) of virtual memory. Whether or not this mapping is visible to the process depends on Kernel Page Table Isolation.
See also: Do the virtual address spaces of all the processes have the same content in their “Kernel” parts?
What are kernel code and data for?
Part of the high virtual memory is a direct mapping of the actual kernel image. That is, the kernel executable and all its data. You can see it in more detail on this page of the kernel documentation, marked as "kernel text mapping, mapped to physical address 0".
See also: What's the use of having a kernel part in the virtual memory space of Linux processes?
And what is the "physical memory" region inside kernel virtual memory?
That part of the image is totally misleading. I don't know precisely what information the authors of the book were trying to convey, but physical memory is definitely not a part of kernel virtual memory. They were probably trying to address the fact that there is a direct mapping of all physical memory in the kernel virtual memory, which can be seen again on the same page of the kernel documentation, marked as "direct mapping of all physical memory".
Physical memory refers to the real memory of the system (i.e. the RAM). Each region of virtual memory is mapped to some region of physical memory. This virtual-to-physical mapping is totally transparent to processes and is managed by the kernel. For example, two processes that have the same file open in read-only mode usually share the same physical memory region while seeing two different virtual addresses.
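To make that mapping concrete, here is a sketch of mine (not from the original answer) that asks the kernel which physical frame backs a given virtual address, via the /proc/self/pagemap interface; on modern kernels the frame number is only visible when running as root:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int x = 42;                          /* variable whose page we inspect */
    uintptr_t vaddr = (uintptr_t)&x;
    long pagesize = sysconf(_SC_PAGESIZE);

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* one 64-bit entry per virtual page */
    uint64_t entry;
    off_t offset = (off_t)(vaddr / pagesize) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
        perror("pread");
        return 1;
    }
    close(fd);

    if (entry & (1ULL << 63))            /* bit 63: page present in RAM */
        printf("virt 0x%jx -> physical frame 0x%jx\n",
               (uintmax_t)vaddr,
               (uintmax_t)(entry & ((1ULL << 55) - 1)));  /* bits 0-54: PFN */
    else
        puts("page not present");
    return 0;
}
```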
This is a more accurate depiction of the relationship between virtual and physical memory:
Source: https://computationstructures.org/lectures/vm/vm.html
Cited from the CSAPP book, 3rd edition, section 9.7.2, where the picture is shown:
Interestingly, Linux also maps a set of contiguous virtual pages (equal in size to the total amount of DRAM in the system) to the corresponding set of contiguous physical pages. This provides the kernel with a convenient way to access any specific location in physical memory—for example, when it needs to access page tables or to perform memory-mapped I/O operations on devices that are mapped to particular physical memory locations.
I think the "Physical memory" region in the picture just reflects what's described above: a virtual memory area that maps the entire physical memory.

Does the operating system itself issue virtual memory addresses?

An operating system itself has resources it needs to access, like block I/O cache and process control blocks. Does it use virtual memory addresses or physical memory addresses?
I feel like it should be the former, since that avoids having to reserve a large area of physical memory for one purpose even when it is mostly empty. The page-table/virtual-memory mechanism would do a much better job of managing the resources the OS really needs.
So which is it?
10 randomly selected operating systems will do virtual memory management in 10 different ways. There's no answer that applies to all operating systems.
Some (e.g. MS-DOS) don't support or use virtual memory management for anything. Some (e.g. Linux) just map all of physical memory into kernel space and don't bother using virtual memory management tricks for the kernel itself (it's almost as if the kernel lives in physical memory, even though it's technically both). And some may do any number of virtual memory tricks in kernel space.
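For the Linux case, that direct map is why converting between a kernel virtual address and its physical counterpart is a simple offset calculation. A minimal module sketch of my own (assuming a standard kernel-module build environment) shows the two views of the same allocation:

```c
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/io.h>

static int __init directmap_init(void)
{
    void *p = kmalloc(64, GFP_KERNEL);   /* kernel virtual address */
    phys_addr_t pa;

    if (!p)
        return -ENOMEM;

    pa = virt_to_phys(p);                /* physical address behind it */
    pr_info("virt %px maps to phys %pa\n", p, &pa);
    kfree(p);
    return 0;
}

static void __exit directmap_exit(void)
{
}

module_init(directmap_init);
module_exit(directmap_exit);
MODULE_LICENSE("GPL");
```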

Since modern computer uses virtual memory, why do we still encounter "out of memory" issue?

I am learning the concept of virtual memory, but this question has been confusing me for a while. Since most modern computers use virtual memory, and the OS pages data in and out between RAM and disk while a program executes, why do we still encounter "out of memory" errors? Please correct me if I have misunderstood the concept; I'd really appreciate an explanation.
PS: For example, I was analyzing a large amount of data (>100 GB) output from a simulation on a computing cluster, reading the data into a C array. Very often the system crashed and reported a memory error.
First: modern computers do indeed use virtual memory; however, there is no magic here. Memory is not created out of nothing. Virtual memory schemes typically allow a portion of the mass storage subsystem (i.e. the hard disk) to be used to hold portions of the process that are (hopefully) less frequently used.
This technique allows processes to use more memory than is available as RAM. However, nothing is infinite. Eventually all RAM and hard drive resources will be used up, and the process will get an out-of-memory error.
Second: it is not unheard of for operating systems to place a cap on the memory that a process may use. Hit that cap and, again, the process gets an out-of-memory error.
Even with virtual memory the memory available is not unlimited.
Limit 1) Architectural limits. The processor and operating system place some maximum on virtual memory size.
Limit 2) System parameters. Many operating systems configure a maximum virtual memory size.
Limit 3) Process quotas. Many operating systems have process quotas that limit the maximum virtual memory size (see the sketch after this list).
Limit 4) System resources. Notably page file space.
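As a hedged illustration of limit 3 on Linux (my own sketch, not from the answer): a process can lower its own address-space quota with setrlimit, after which allocations fail with out-of-memory long before physical RAM is exhausted.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* cap this process's address space at 256 MiB */
    struct rlimit rl = { 256UL << 20, 256UL << 20 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    size_t total = 0;
    for (;;) {
        void *p = malloc(16UL << 20);    /* 16 MiB per step */
        if (p == NULL) {
            printf("malloc failed after ~%zu MiB: quota hit\n",
                   total >> 20);
            return 0;
        }
        total += 16UL << 20;
    }
}
```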

After boot, does Linux reclaim Tianocore boot loader memory?

I am using Tianocore for booting Linux. I understand that Linux can use Tianocore's runtime services (reboot, update_capsule, etc.), which means that
some part of the Tianocore code remains untouched by Linux; Linux will never touch that memory.
My question: is it only the part of Tianocore related to runtime services, or does the whole of Tianocore remain untouched by the Linux kernel even after boot?
And how does the Linux kernel come to know about the memory areas that contain the Tianocore image?
There are many memory types that can be allocated by a UEFI implementation (using the AllocatePool or AllocatePages boot services). Some of them will remain untouched by a UEFI-aware OS, while others will be freed. All regions of memory that shouldn't be freed are also added to the e820 memory map to prevent legacy OSes from corrupting them.
Normally, only a small portion of the allocated memory is not freed after the ExitBootServices event: runtime services code and data, ACPI tables, and MMIO regions.
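On Linux you can inspect those preserved regions yourself. Here is a sketch of mine (assuming a kernel built with CONFIG_FIRMWARE_MEMMAP, which exposes /sys/firmware/memmap; this is not part of the original answer) that prints each firmware memory range and its type:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void print_field(const char *entry, const char *field)
{
    char path[512], buf[128];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/firmware/memmap/%s/%s", entry, field);
    f = fopen(path, "r");
    if (f != NULL) {
        if (fgets(buf, sizeof(buf), f)) {
            buf[strcspn(buf, "\n")] = '\0';
            printf("%s=%s ", field, buf);
        }
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/firmware/memmap");
    struct dirent *e;

    if (d == NULL) {
        perror("opendir /sys/firmware/memmap");
        return 1;
    }
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                    /* skip . and .. */
        print_field(e->d_name, "start");
        print_field(e->d_name, "end");
        print_field(e->d_name, "type");
        putchar('\n');
    }
    closedir(d);
    return 0;
}
```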

Read anywhere in memory with Delphi XE2 and x64 assembly

How can I access any memory address from Delphi XE2 on Windows 7 64-bit?
I tried to use the ReadProcessMemory function, but it does not work.
However, I want to avoid using a kernel driver to do this.
ReadProcessMemory is a function that is known to work correctly. It allows one process to read memory from another process, but the addresses it takes are still virtual memory addresses, relative to the virtual address space of the target process.
I suspect that what you are actually trying to do is read physical memory. In which case there is no alternative to kernel mode. Only in kernel mode can physical memory be addressed.
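For reference, here is a minimal C sketch of ReadProcessMemory used correctly on virtual addresses (the pid and address below are hypothetical placeholders; the same calls are importable from Delphi via the Windows unit):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;                    /* placeholder: target process id */
    LPCVOID addr = (LPCVOID)0x400000;    /* placeholder: virtual address in target */
    char buf[16];
    SIZE_T got = 0;

    HANDLE h = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (h == NULL) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    if (ReadProcessMemory(h, addr, buf, sizeof(buf), &got))
        printf("read %lu bytes\n", (unsigned long)got);
    else
        printf("ReadProcessMemory failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}
```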
