Does the WM operating system protect processes' memory from one another?
Can one badly written application crash another application just by mistakenly writing over the first one's memory?
Windows Mobile, at least in all current incarnations, is built on Windows CE 5.0 and therefore uses CE 5.0's memory model (which is the same as it was in CE 3.0). The OS doesn't actually do a lot to protect process memory, but it does enough to generally keep processes from interfering with one another. It's not hard and fast, though.
CE processes run in "slots", of which there are 32. The currently running process gets swapped to slot zero, and its addresses are re-based to zero (so all memory in the running process effectively has two addresses: the slot 0 address and its non-zero slot address). These addresses are protected (though there's a simple API call to cross the boundary). This means that pointer corruption and the like will not step on other apps, but if you want to, you still can.
Also, CE has the concept of shared memory. All processes have access to this area and it is 100% unprotected. Your app may be using shared memory without knowing it: the memory manager can give you a shared address without you specifically asking, depending on your allocation and its size. If you have shared memory then yes, any process can access that data, including corrupting it, and you will get no error or warning in either process.
Does the WM operating system protect processes' memory from one another?
Yes.
Can one badly written application crash another application just by mistakenly writing over the first one's memory?
No (but it might do other things like use up all the 'disk' space).
Even if you're a device driver, there's an API you must invoke explicitly to get permission to write to memory owned by a different process.
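To make that concrete, here is a minimal sketch of what such an explicit call can look like on CE 5.0. I believe the relevant call is MapPtrToProcess from pkfuncs.h, but treat the exact header and signature as something to verify against your SDK; the helper below is purely illustrative.

    /* Hedged sketch for Windows CE 5.0: re-basing a pointer owned by another
       process so a trusted/driver-level caller can touch it. Verify
       MapPtrToProcess against your SDK headers (pkfuncs.h). */
    #include <windows.h>
    #include <string.h>

    BOOL CopyFromOtherProcess(HANDLE hOwner, void *remotePtr, void *dst, DWORD cb)
    {
        /* Map the slot-relative pointer into an address this process may use. */
        void *mapped = MapPtrToProcess(remotePtr, hOwner);
        if (mapped == NULL)
            return FALSE;

        memcpy(dst, mapped, cb);   /* without the mapping, this access would fault */
        return TRUE;
    }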
While ChrisW's answer is technically correct, my experience of Windows Mobile is that it is much easier to crash the entire device from an application than it is on the desktop. I could guess at a few reasons why this is the case:
The operating system is often much more heavily OEMed than desktop Windows; that is, the amount of manufacturer-specific low-level code can be very high, which leads to manufacturer-specific bugs at a level that can cause bad crashes. On many devices it is common to see a new firmware revision every month or so, where the revisions are fixes to such bugs.
Resources are scarcer, and an application that exhausts all available resources is liable to cause a crash.
The protection mechanisms and architecture vary quite a bit. The device I'm currently working with is SH4-based, while you mostly see ARM, x86 and the odd MIPS CPU.
Related
Not sure if anyone here can answer this.
I've learned that an Operating System checks if an instruction of a program changes something outside of its allocated memory, and if it does then the OS won't allow the program to do this.
But if the OS has to check this for every instruction, won't this take up at least 5/6 of the CPU? I tried to replicate this, and that is roughly the number of clock cycles I came up with to check this for every instruction.
If I've understood something wrong, please correct me, because I can't imagine that an OS takes up that much of the CPU.
There are several safe-guards in place to ensure a non-privileged process behaves. I will discuss two of them in the context of the x86_64 architecture, but these concepts (mostly) extend to other major platforms.
Privilege Levels
There is a bit in a particular CPU register that indicates the current privilege level. These privileges are often called rings, where ring 0 corresponds to the kernel (ie. highest privilege), and ring 3 corresponds to a userspace process (ie. lowest privilege). There are other rings, but they're not relevant to this introduction.
Certain instructions in x86_64 may only be executed by privileged processes. The current ring must be 0 to execute a privileged instruction. If you try to execute this instruction without the correct privileges, the processor raises a general protection fault. The kernel synchronously processes this interrupt, and will almost certainly kill the userspace process.
The ring level can only be changed while in ring 0, so the userspace process can't simply change from ring 3 to ring 0 by itself.
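As a small illustration (my sketch, not part of the original answer): on Linux/x86_64 you can watch this mechanism fire by executing a privileged instruction from ring 3; the CPU raises a general protection fault and the kernel kills the process with SIGSEGV.

    /* Sketch: ring-3 code executing a ring-0 instruction.
       Built and run on Linux/x86_64, the hlt below triggers a general
       protection fault and the process is killed with SIGSEGV. */
    #include <stdio.h>

    int main(void)
    {
        puts("about to execute a privileged instruction...");
        __asm__ volatile ("hlt");   /* allowed only in ring 0 */
        puts("never reached");      /* the process dies before this line */
        return 0;
    }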
Execute Permission in Page Tables
All instructions to be executed are stored in memory. Many architectures (including x86_64) use page tables to store mappings from virtual addresses to physical addresses. These page tables carry several bookkeeping bits as well, one of which is an execute permission bit (on x86_64 it is expressed as a no-execute bit). If execution is not permitted for the page containing the instruction being fetched, the processor raises a fault (on x86_64, a page fault). As before, the kernel synchronously processes this exception and will likely kill the offending process.
When are these execute bits set? They can be dynamically set via mmap(2), but in most cases the compiler emits special CODE sections in the binaries it generates, and when the OS loads the binary into memory it sets the execute bit in the page table entries for the pages that correspond to the CODE sections.
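To make the execute bit concrete, here is a small Linux sketch (mine, not the answer's) that requests a writable page with mmap(2), copies a few bytes of machine code into it, and only then turns on PROT_EXEC with mprotect(2) so jumping into the page is allowed:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* x86_64 machine code for: mov eax, 42; ret */
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        /* Ask the kernel for one anonymous page, initially read/write only. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;
        memcpy(page, code, sizeof code);

        /* Flip the execute permission on the page; without this the call
           below would fault. */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))page;
        printf("%d\n", fn());   /* prints 42 */
        return 0;
    }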
Who's checking these bits?
You're right to ask about the performance penalty of an OS checking these bits for every single instruction. If the OS were doing this, it would be prohibitively expensive. Instead, the processor supports privilege levels and page tables (with the execute bit). The OS can set these bits, and rely on the processor to generate interrupts when a process acts outside its privileges.
These hardware checks are very fast.
We're working on an application meant to run on an embedded system, in a moderately harsh environment (a controller for a heating system in a residential building).
That application should run for years without needing to reboot the system. It runs on an embedded PC running Linux. The program instantiates several classes whose lifetime is the same as the application's.
Should I worry about memory becoming corrupt over such a long lifetime? Does it make sense to periodically check the class invariants to detect any such memory corruption? Or does modern hardware make such corruption astronomically unlikely?
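For context, here is a minimal sketch of what I mean by "periodically check the class invariants". Everything in it (the struct, the XOR checksum, the field names) is invented for illustration; it is not our actual code.

    #include <stdint.h>
    #include <stdbool.h>

    /* A long-lived piece of state guarded by a simple checksum. */
    typedef struct {
        uint32_t setpoint_c;      /* target temperature, tenths of a degree */
        uint32_t hysteresis_c;
        uint32_t checksum;        /* recomputed on every legitimate write */
    } heater_state_t;

    static uint32_t compute_checksum(const heater_state_t *s)
    {
        return s->setpoint_c ^ s->hysteresis_c ^ 0xA5A5A5A5u;
    }

    /* Called from a periodic task: a mismatch means the memory no longer
       matches what the program last wrote, i.e. an invariant was broken. */
    static bool invariants_hold(const heater_state_t *s)
    {
        return s->checksum == compute_checksum(s)
            && s->hysteresis_c < s->setpoint_c;
    }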
I have seen my share of cheap SD cards on boards; they can die on you easily.
A few months ago I was dealing with one maker: under high data throughput the SD card was unable to react in time, some IRQ failure messages popped up, and the whole partition blew up.
If it's not intended for mass production, I would definitely suggest choosing some good, recommended storage.
But really, I cannot remember memory corruption issues (besides ROM); I would worry about memory leaks. Those are the nastiest problems for an embedded system intended to run a long time without a reboot.
You have to be really careful: leaks can happen either in userspace or in kernel space. Even software you have always had confidence in may have them, depending on the build version. You also have to choose the Linux distribution carefully; if there is no dedicated kernel development team, this work is usually outsourced to companies that build stable systems, where every included package is tested and confirmed not to leak.
In the end, a few cycles of stress testing are definitely needed; if there are problems with memory, you will notice.
So I know the operating system must be in control of giving an application a certain amount of RAM. But I'm curious: how does it know how much to give to the application, and how does it know how much that application is using? Who and what is keeping track of that usage? And how does it know which memory is safe to use? I assume that some memory is reserved for critical systems. I must admit that I don't have very much knowledge of operating systems.
The operating system divides memory into "pages". They're typically 4KB in size.
The operating system keeps track of those pages in a table. By counting them we can determine how much memory is used or free.
Userland programs request memory via a system call. The exact call depends on the system; on Linux it is mmap(). It asks the OS to give the program an empty page to use. Freeing the memory is basically the reverse.
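A minimal Linux sketch of that request/release cycle (assuming a 4 KB page size; in real code you would query it with sysconf(_SC_PAGESIZE)):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Ask the kernel for one empty, private page. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Freeing is the reverse: the page goes back to the OS. */
        munmap(page, 4096);
        return 0;
    }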
I understand that the computer loads the first sector of memory known as the BIOS, which runs diagnostics on the hardware and then proceeds to load the OS. I guess my question leans towards the hardware side. How does the computer know which memory to boot from (RAM, ROM, flash, etc.)? I understand the differences between the kinds of memory, and I understand computers boot from the hard drive, but I'm attempting to make an 8-bit computer with a Z80 microprocessor, which will need to boot from ROM or flash memory.

The only problem is that the processor reads only from whatever memory the address pins are connected to, and there are no separate address pins for RAM and ROM. It's also impractical to run the system on ROM or flash due to the much slower read/write time compared to RAM. The Z80, to the best of my knowledge, doesn't have separate commands for reading from ROM and RAM, and it wouldn't matter even if it did, because the RAM will be blank upon powering up.

How does a computer choose to read from ROM only upon booting and then switch to RAM once the OS has been loaded? Is it hardwired in using logic gates? And how does a computer choose to write to flash memory or a hard drive instead of RAM once the OS has been loaded? Would flash memory be treated as a device? Or is this also hardwired into the motherboard using logic gates?

Sorry for giving so much background, I just don't want you to waste your time explaining things I've already grasped. I've researched this to a great extent and thought about it for hours on end and can't seem to figure it out, and everywhere I've looked doesn't explain how the computer chooses which memory to read from, it just says that it does. Thanks
I'm not sure I'm answering what you are asking, but I'll give it a try.
Some computers (at least, IBM PC-compatible computers), after powering up, usually run this BIOS (Basic Input/Output System) program. For this to happen, to the best of my knowledge, the hardware must make the jump to this code, and this code must be accessible (that is, mapped) from the physical memory, since that's where the CPU will execute code from. So, a physical address space with some read-only areas where this code is hard-wired to would do the trick.
Once the BIOS code is running, it can select how to proceed next. It can copy a sector from a hard disk to memory, (or a bunch of data from a Flash drive) and then jump to it, or whatever. That's up to the BIOS writer.
I will try to explain the Pentium boot up process very briefly.
On the flash ROM mounted on the motherboard there is a small program called the BIOS (Basic Input/Output System). After the power button is pressed, the BIOS program is executed.
The BIOS contains low-level software that performs the following operations:
checks how much RAM is installed and whether the peripherals on the PCI and ISA buses are connected;
checks whether all I/O devices are connected;
scans a list of boot devices and selects the boot device based on the BIOS configuration set up earlier by the user.
Once the boot device is selected, the first sector from it is read into memory and executed. That sector contains a simple program which examines the partition table and selects the active partition (the one holding the OS). The secondary bootloader is read from that partition; this loader then reads the OS from the partition into memory and runs it. Once running, the OS asks the BIOS for the configuration info for each device and configures any new devices (those with no stored configuration). After all device configurations are set, they are delivered to the kernel. The kernel then initializes its tables and background boot-up processes and starts the login program or GUI.
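To make "examines the partition table and selects the active one" concrete, here is a hedged sketch of the classic 512-byte MBR layout the first-stage loader walks; the offsets are the standard MBR ones, but the struct and function names are mine.

    #include <stdint.h>

    /* Classic MBR: 446 bytes of boot code, four 16-byte partition entries,
       then a 2-byte signature, 512 bytes in total. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  status;          /* 0x80 = active/bootable, 0x00 = inactive */
        uint8_t  chs_first[3];
        uint8_t  type;            /* partition type id */
        uint8_t  chs_last[3];
        uint32_t lba_first;       /* first sector of the partition */
        uint32_t sector_count;
    } mbr_partition_t;

    typedef struct {
        uint8_t         boot_code[446];
        mbr_partition_t partitions[4];
        uint16_t        signature;        /* 0x55 0xAA on disk */
    } mbr_t;
    #pragma pack(pop)

    /* The first-stage loader scans the four entries for the active one. */
    static const mbr_partition_t *find_active(const mbr_t *mbr)
    {
        for (int i = 0; i < 4; i++)
            if (mbr->partitions[i].status == 0x80)
                return &mbr->partitions[i];
        return 0;
    }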
Since our application is growing, we need more space on our Windows CE devices.
If I install the CF app in RAM on a Windows CE device, the app vanishes after a cold restart.
I have used the simplest option: installing on the flash card. But as I mentioned, running applications from the SD card is slow, and there are some heavy issues with demand paging if you run the apps from persistent paths, aren't there? Is it worth installing it there? Will we get performance problems?
Should I use another solution, such as installing to RAM from the flash disk after a cold restart/new start (if that is possible)? Where can/should I store settings and log files? On the flash/SD card?
There's no "one size fits all" answer for this.
If you move the app from memory to storage you'll gain RAM. Maybe that boost in RAM will give the EE more heap space and thereby prevent GC thrashing. That would give you better perceived performance. But maybe it won't and it will just increase demand-paging for your app and hurt performance. Maybe you'll get a little of both and it's a wash.
How would you handle persistence to RAM? That depends on what your device supports for auto-running apps.
Where should you store settings and logs? Again, that depends on the device, the storage, the size, the frequency of access and loads of other things.
Basically, the answer to all of these is only going to be found by testing your actual app on your actual hardware. Try the different scenarios and collect metrics to see which performs better. That's the only "correct" answer.