Zynq 7020: some memory is unreadable

I have a Zynq 7020 chip with 250 MB of DDR memory attached, configured in ECC mode (so 125 MB effective). The chip is also attached to NAND flash memory and runs a series of bootloaders which eventually load VxWorks to run some stuff.
We are about to do a test which will require me to read all the memory, flash, and FPGA configuration memory on the device after execution.
I have another [small] program that I will install via JTAG after the run and have it write out the rest of the RAM, then all the flash and FPGA configuration memory. This program is compiled by the Xilinx SDK and is bare-metal (no OS/bootloader).
When I load this program, I reset the processor (JTAG command), run a ps7_init.tcl script that sets all the CPU registers to a good configuration as generated by Vivado, load the ELF file onto the device, then run the processor. This program tries to read memory starting at address 0x0, but it crashes quickly. I told it to start at an address of 1 MB (1<<20) instead (because I know there's some weird memory map stuff at the beginning, so I tried this just in case) and it reads a little bit more, then crashes again.
The crash appears to be the CPU not letting me read these areas of RAM.
Why not? How do I make it so I can read every byte of the 125 MB of RAM I have?

There is a template project that Xilinx provides that is close to exactly what you are trying to do. It is the "Memory Tests" template, and it runs through attached memory ranges and tests read/write operations. By default it tests the DDR memory range and ps7_ram_1. The linker script for the application puts the program in ps7_ram_0, and it doesn't test that range, since you can't overwrite the application's own instruction and data memory.
Code for the template can be found here:
<SDK installation directory>\data\embeddedsw\lib\sw_apps\memory_tests
I would recommend creating a new project from this template and changing the memorytest.c file to fit your needs.
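For reference, here is a cut-down sketch of the kind of check that template performs (an address-in-address pattern; this is my illustration, not Xilinx's code, and the base/size arguments are placeholders for whatever ranges your linker script defines):

#include "xil_types.h"
#include "xil_printf.h"

/* Write an address-dependent pattern across a range, then verify it. */
int test_range(u32 base, u32 size)
{
    volatile u32 *mem = (volatile u32 *)base;
    u32 words = size / 4;

    for (u32 i = 0; i < words; i++)
        mem[i] = base + 4 * i;                     /* address-in-address pattern */

    for (u32 i = 0; i < words; i++) {              /* verify phase */
        if (mem[i] != base + 4 * i) {
            xil_printf("FAIL at 0x%08x\r\n", base + 4 * i);
            return -1;
        }
    }
    xil_printf("PASS: 0x%08x..0x%08x\r\n", base, base + size - 1);
    return 0;
}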
To answer your question more directly: you are likely running into problems with the processor's MMU (memory management unit) and cache management. If your application is loaded into DDR, it is possible that the MMU configuration is blocking you from accessing the application's own instruction and data memory. If your application is loaded into OCM (like the template), there may be access problems caused by the cache. If you disable the data cache using
Xil_DCacheDisable();
then you should be able to read the entire DDR memory space (as long as it exists). Make sure you configure your application's linker script (*.ld) so that the application knows which memory devices are out there, along with their base addresses and sizes.
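As a rough illustration of the direct approach (my sketch, not Xilinx code), a dump loop for the standalone BSP could look like this; the base and size constants are assumptions to be replaced with values from your ps7_init configuration:

#include "xil_cache.h"
#include "xil_io.h"
#include "xil_printf.h"

#define DDR_BASE 0x00100000U    /* assumed: skip the first 1 MB */
#define DDR_SIZE 0x07C00000U    /* assumed: the rest of the 125 MB */

void dump_ddr(void)
{
    Xil_DCacheDisable();    /* read what is really in DDR, not stale cache lines */

    for (u32 addr = DDR_BASE; addr < DDR_BASE + DDR_SIZE; addr += 4) {
        u32 word = Xil_In32(addr);                  /* uncached 32-bit read */
        xil_printf("%08x: %08x\r\n", addr, word);   /* slow; stream more efficiently in practice */
    }
}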

Related

Some Details of The Boot Process of OSes on x86 32-bit machines

I'm trying to write an OS for my own use. I want to show a blank (black) screen with VGA output, but I have some problems (questions):
Under FAT32, I have an MBR bootloader to read the first sector of the virtual disk image generated by bximage from Bochs. Where (in which sector) should I put the second compiled code that shows the black screen? How do I do this with the dd utility? My second compiled code file is only 9 bytes.
Is VBR necessary?
How do I know where the data region (FAT32) starts and ends?
I rewrote the bootloader provided at this link.
My disk file specifications are:
20M,
CHS 40/16/63
In chronological order...
Originally there were no hard disks and (if you weren't using "BASIC in ROM") computers booted from a floppy disk. In this case the first sector of the volume (the floppy disk) contains the operating system's boot loader.
Not long after, hard disks were added, and they worked using a similar scheme (where the first sector of the volume/hard disk contains the operating system's boot loader).
However, people soon realised that using a whole "large" hard disk for a single volume is silly/inflexible; so a partitioning scheme was invented to split the hard disk into multiple volumes. In this case the first sector of the disk (the MBR) contains a partition table, in which one partition is marked as "active", plus some code to "chain load" the first sector of the active partition (the boot loader). This became "extremely standard", then people extended it to support multiple different operating systems, and most boot managers support multiple operating systems using this method.
Note 1: I define "boot manager" as something you use to choose which OS to boot, and "boot loader" as something designed to boot the specific OS that was chosen. Ideally these have nothing to do with each other: the boot manager should have nothing to do with any OS, and the end user should be able to replace the boot manager with anything they like without upsetting or affecting any OS or any boot loader. Sadly, (for Windows) Microsoft is hostile towards allowing multiple different operating systems to boot using simple, sane and well supported methods (including allowing multiple instances of the same version of Windows to be installed at the same time, which could be useful - e.g. one OS for your work stuff and a separate OS for your kids, both installed on the same computer) and tries to smother sanity with its own "boot.ini" idiocy that mostly just makes everything horrid for no benefit (other than giving Microsoft more control over what you do with your computer). Of course when the user is only installing one OS on the computer it's nice for the OS installer to (optionally, if and only if the user wants it - e.g. because they don't already have their own boot manager) provide and install a minimal MBR that does nothing more than chain load the operating system's boot loader.
As time passed, more devices got added. The first was network cards and the ability to boot from the network. This is nothing like "boot from disk". Instead, the network card's ROM (after some negotiation with a DHCP server) downloads an entire "boot file" (which is not limited to 1 sector and can be 500 KiB if you like) from a server, then provides an API (which became known as the "PXE API") that the boot loader can use to access networking (e.g. send/receive packets, download more files using the TFTP protocol, etc.).
The other type of device that got added was the CD-ROM. For these, a new specification (the "El Torito" bootable CD-ROM specification) was created, partly so that you could have a boot catalogue with multiple entries for multiple architectures (e.g. one for "80x86 PC", one for "PowerPC", etc.) and let the firmware choose the most appropriate boot loader for the computer being booted. For PCs there are 3 methods - emulate a floppy disk, emulate a hard disk, or "no emulation". The emulation options work the same as the original "boot from disk" method (and use 512-byte sectors, etc.), but are limited and slow and probably shouldn't be used for anything other than compatibility with legacy operating systems. "No emulation" is completely different from the original "boot from disk" method: the firmware is supposed to load an entire "boot file" (which is not limited to 1 sector and can be 500 KiB if you like), and sectors will be 2048 bytes (not 512 bytes).
Even later, UEFI got invented. For 80x86 PCs this comes in 2 flavours - 32-bit 80x86 and 64-bit 80x86. In theory you can have a 64-bit UEFI boot loader that switches to protected mode/32-bit and starts a 32-bit OS; and you can have a 32-bit UEFI boot loader that switches to long mode/64-bit and starts a 64-bit OS. However, 32-bit UEFI is very rare (a few old Apple Macs and almost nothing else) and those computers are likely to also support "BIOS compatible boot", so it isn't worth supporting 32-bit UEFI. For UEFI in general, the firmware loads and executes an entire file (from whatever the boot device was) and provides an API that the boot loader can use (e.g. to set up a video mode, get a memory map, load other files, etc.).
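As a hedged sketch of what "loads an entire file and provides an API" looks like in practice, here is a minimal UEFI application written against the gnu-efi library (the library choice and everything in the body are my illustration, not something the answer prescribes):

#include <efi.h>
#include <efilib.h>

EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    InitializeLib(ImageHandle, SystemTable);        /* gnu-efi setup */
    Print(L"boot loader running under UEFI\n");     /* console output via ConOut */

    /* First call with a zero-sized buffer: the firmware reports the
       buffer size needed for the memory map (EFI_BUFFER_TOO_SMALL). */
    UINTN MapSize = 0, MapKey, DescSize;
    UINT32 DescVersion;
    uefi_call_wrapper(BS->GetMemoryMap, 5, &MapSize, NULL,
                      &MapKey, &DescSize, &DescVersion);
    Print(L"memory map needs %d bytes\n", (int)MapSize);

    /* A real boot loader would now allocate that buffer, fetch the map,
       load the kernel file, call ExitBootServices() and jump to the kernel. */
    return EFI_SUCCESS;
}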
Note 2: UEFI tries to make it so that boot works the same regardless of which type of device you're booting from. In practice this doesn't work very well and you'll probably want a different boot loader for CD (that accesses file/s on the CD itself and isn't restricted to a weeny FAT file system image) and a different boot loader for network (even if it's only to allow you to pass IP addresses to the OS and avoid repeating the slow DHCP stuff after the OS boots).
With UEFI a new partitioning scheme was also introduced (GPT or "GUID Partition Table"). This has multiple advantages and (for new operating systems being installed as the only OS on a computer) should probably be considered the default (and the old "MBR partitions" should probably be considered obsolete for compatibility with old operating systems only).
Mostly, for 80x86 you'll probably need 8 (or more) different boot loaders:
one for BIOS and un-partitioned disk devices (floppy)
one for BIOS and disk devices that were partitioned with "MBR partitions"
one for BIOS and disk devices that were partitioned with "GPT partitions"
one for BIOS and network boot/PXE
one for BIOS and "no emulation" CD boot
one for 64-bit UEFI disk
one for 64-bit UEFI CD-ROM
one for 64-bit UEFI network
Of course all of these cases are "different enough" that it's silly to try to have a generic boot loader that covers multiple different cases (and in cases where there are similarities things like "512-bytes only" restrictions are so limiting that you'll be doomed if you try).
I'd also "strongly recommend" having some kind of abstraction between the boot loader and the rest of the OS (e.g. a "boot protocol" defined for the OS that describes how a boot loader sets things up, passes information to the OS and transfers control to the OS); such that none of the code in the entire OS needs to know or care what the firmware was (BIOS, UEFI, or something else, like maybe kexec()). This means that anyone can create more boot loaders (to support other cases and other devices); and (as long as everything complies with your abstraction's specification) the entire OS will work with the new boot loader/s without any changes.
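To make that concrete: such a "boot protocol" can be as simple as a structure that every boot loader fills in before jumping to the kernel. All names below are invented for illustration:

#include <stdint.h>

struct boot_mmap_entry {
    uint64_t base;        /* physical start of the range */
    uint64_t length;      /* size in bytes */
    uint32_t type;        /* usable, reserved, ACPI, ... */
};

struct boot_info {
    uint32_t version;             /* protocol revision */
    uint64_t cmdline_paddr;       /* physical address of the command line */
    uint32_t mmap_count;          /* number of memory map entries */
    struct boot_mmap_entry *mmap; /* memory map gathered from the firmware */
    uint64_t fb_paddr;            /* framebuffer base, 0 if none */
    uint32_t fb_width, fb_height, fb_pitch;
};

/* Every boot loader (BIOS, UEFI, PXE, ...) ends with the same call,
   so the kernel never needs to know which firmware was involved: */
void kernel_entry(struct boot_info *bi);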
Under FAT32, I have an MBR bootloader to read the first sector of the virtual disk image generated by bximage from Bochs. Where (in which sector) should I put the second compiled code that shows the black screen? How do I do this with the dd utility? My second compiled code file is only 9 bytes.
This is mostly wrong. For "BIOS hard disk" you should have an MBR (that has nothing to do with the OS at all) and partitions, and your operating system's boot loader should begin in the first sector of the partition (and should be designed to use DS:SI to find the partition table entry that describes its partition, and DL to determine which device the partition is on).
Is VBR necessary?
For some cases (booting from UEFI, network, CD-ROM) a VBR doesn't make sense. For other cases (booting from BIOS hard disk or BIOS USB flash) it's "theoretically optional" but strongly recommended; some BIOSes may not recognise the device as bootable without one (especially in the USB flash case), and other operating systems will assume that the disk isn't formatted (and will tell their users that the disk needs to be initialised/partitioned, convincing the user that your OS is garbage and leading to the user accidentally or intentionally wiping your OS off the disk).
How do I know where the data region (FAT32) starts and ends?
For FAT, there are fields in the BPB ("BIOS Parameter Block", which is misnamed, as it's mostly not used by the BIOS at all) in the first sector of the volume/partition that tell you things like how many reserved sectors there are, how many sectors are in each cluster, etc. Really, if you're going to use one of the world's worst file systems for inappropriate things (e.g. for an operating system's main partition, where things like effective permissions/security and fault tolerance are sorely needed) then you'll need to learn everything about FAT32 so that you can write code to allow the OS to support it after boot.
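For the data-region question specifically, the standard calculation from those BPB fields looks like this (field offsets are from the FAT specification; the sketch assumes sector 0 of the partition has already been read into a buffer):

#include <stdint.h>

static uint16_t rd16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
static uint32_t rd32(const uint8_t *p) {
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* First sector of the FAT32 data region (where cluster 2 begins),
   relative to the start of the volume. */
uint32_t fat32_data_start(const uint8_t *bs)
{
    uint16_t reserved = rd16(bs + 14);     /* BPB_RsvdSecCnt */
    uint8_t  num_fats = bs[16];            /* BPB_NumFATs */
    uint32_t fat_size = rd32(bs + 36);     /* BPB_FATSz32 */

    /* FAT32 has no fixed root directory area, so the data region starts
       right after the reserved sectors and the FATs; it ends at the
       volume's total sector count (BPB_TotSec32, offset 32). */
    return reserved + num_fats * fat_size;
}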

Since modern computers use virtual memory, why do we still encounter "out of memory" issues?

I am learning the concept of virtual memory, but this question has been confusing me for a while. Since most modern computers use virtual memory, when a program is executing, the OS is supposed to page data in and out between RAM and disk. So why do we still encounter "out of memory" issues? Could you please correct me if I have misunderstood the concept? I really appreciate your explanation.
PS: For example, I was analyzing a large amount of data (>100 GB) output from a simulation on a computing cluster, reading the data into a C array. Very often the system crashed and complained about a memory error.
First: modern computers do indeed use virtual memory, but there is no magic here. Memory is not created out of nothing. Virtual memory schemes typically allow a portion of the mass storage subsystem (aka the hard disk) to be used to hold portions of the process that are (hopefully) less frequently used.
This technique allows processes to use more memory than is available as RAM. However, nothing is infinite. Eventually all RAM and hard drive resources will be used up and the process will get an out-of-memory error.
Second: it is not unheard of for operating systems to place a cap on the memory that a process may use. Hit that cap and, again, the process gets an out-of-memory error.
Even with virtual memory the memory available is not unlimited.
Limit 1) Architectural limits. The processor and operating system impose a maximum virtual memory size.
Limit 2) System parameters. Many operating systems configure the maximum virtual memory size.
Limit 3) Process quotas. Many operating systems have process quotas that limit the maximum virtual memory size.
Limit 4) System resources. Notably page file space.
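A small POSIX demonstration of limits 3 and 4 (my own example, not from the answer): cap the process's address space with setrlimit(), then allocate until malloc() fails:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 256UL << 20, 256UL << 20 };  /* arbitrary 256 MB cap */
    setrlimit(RLIMIT_AS, &rl);      /* limit total virtual address space */

    size_t total = 0;
    for (;;) {
        void *p = malloc(1 << 20);  /* ask for 1 MB at a time */
        if (p == NULL)
            break;                  /* virtual memory exhausted */
        memset(p, 1, 1 << 20);      /* touch it so it is really backed */
        total += 1 << 20;
    }
    printf("out of memory after %zu MB\n", total >> 20);
    return 0;
}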

Who brings a program from Secondary Memory (Hard Disk) to Primary Memory (RAM) for execution?

In the book "Operating System Concepts" by Silberschatz, Galvin & Gagne, they mention that
Main Memory(Primary memory) and the registers built into the processor itself are the only storage that CPU can access directly
This statement has caused a lot of confusion. If the CPU cannot access secondary memory, then how does it fetch a program from secondary memory?
The fact that the CPU can't access secondary memory directly doesn't mean it can't access it at all. When the system is booted, the BIOS program built into ROM copies the boot loader from the bootable secondary device into RAM and tells the CPU to continue execution from that particular address.
Once the CPU starts executing the boot loader, the boot loader calls the necessary functions to read from disk (secondary storage) and copies your OS kernel image into memory. The transfer of data is done through I/O ports.
With the kernel image in memory, the boot loader tells the CPU to jump to the kernel's entry point. At this point your kernel starts up.
The kernel sets up the OS environment and loads the necessary drivers (including the disk/CD-ROM driver). From this point on, it is up to the OS disk driver whether it performs I/O port operations or DMA access to load data from secondary storage.
Generally DMA is preferred because it does not involve the CPU in polling data from the device, but it is a little more difficult to code.
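To illustrate "transfer of data through I/O ports", here is a rough x86/GCC sketch of reading one sector over the legacy primary ATA channel in PIO mode; the port numbers are the standard primary-channel defaults, and a real loader would add timeouts and error checks:

#include <stdint.h>

static inline void outb(uint16_t port, uint8_t v) {
    __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}
static inline uint16_t inw(uint16_t port) {
    uint16_t v;
    __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
{
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));  /* drive 0, LBA mode */
    outb(0x1F2, 1);                            /* sector count = 1 */
    outb(0x1F3, lba & 0xFF);                   /* LBA bits 0-7 */
    outb(0x1F4, (lba >> 8) & 0xFF);            /* LBA bits 8-15 */
    outb(0x1F5, (lba >> 16) & 0xFF);           /* LBA bits 16-23 */
    outb(0x1F7, 0x20);                         /* READ SECTORS command */

    while ((inb(0x1F7) & 0x88) != 0x08)        /* wait: BSY clear, DRQ set */
        ;
    for (int i = 0; i < 256; i++)              /* 256 words = 512 bytes */
        buf[i] = inw(0x1F0);
}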
I hope I cleared your doubt :)

How does a computer boot up?

I understand that the computer loads the first sector of memory known as the BIOS, which runs diagnostics on hardware and then proceeds to load the OS. I guess my question leans towards the hardware side. How does the computer know which memory to boot from (RAM, ROM, flash, etc.)?
I understand the differences between types of memory, and I understand computers boot from the hard drive, but I'm attempting to make an 8-bit computer with a Z80 microprocessor, which will need to boot from ROM or flash memory. The only problem is that the processor reads only from whatever memory the address pins are connected to, and there are no separate address pins for RAM and ROM. It's also impractical to run the system on ROM or flash due to the much slower read/write times compared to RAM. The Z80, to the best of my knowledge, doesn't have separate commands for reading from ROM and RAM, and it wouldn't matter even if it did, because the RAM will be blank upon powering up.
How does a computer choose to read from ROM only upon booting and then switch to RAM once the OS has been loaded? Is it hardwired in using logic gates? And how does a computer choose to write to flash memory or a hard drive instead of RAM once the OS has been loaded? Would flash memory be treated as a device? Or is this also hardwired into the motherboard using logic gates?
Sorry for giving so much background, I just don't want you to waste your time explaining things I've already grasped. I've researched this to a great extent and thought about it for hours on end and can't seem to figure it out, and everywhere I've looked doesn't explain how the computer chooses which memory to read from, it just says that it does. Thanks
I'm not sure I'm answering what you are asking, but I'll give it a try.
Some computers (at least IBM PC-compatible computers) usually run this BIOS (Basic Input/Output System) program after powering up. For this to happen, to the best of my knowledge, the hardware must make the jump to this code, and this code must be accessible (that is, mapped) in the physical address space, since that's where the CPU executes code from. So a physical address space with some read-only areas where this code is hard-wired would do the trick.
Once the BIOS code is running, it can select how to proceed. It can copy a sector from a hard disk to memory (or a bunch of data from a flash drive) and then jump to it, or whatever. That's up to the BIOS writer.
I will try to explain the Pentium boot up process very briefly.
On the flash ROM mounted on the motherboard there is a small program called the BIOS (Basic Input/Output System). After the power button is pressed, the BIOS program is executed.
The BIOS contains low-level software that performs the following operations:
checks how much RAM is installed and whether the peripherals on the PCI and ISA buses are connected.
checks whether all I/O devices are connected.
scans the list of boot devices and selects the boot device based on the BIOS configuration set up earlier by the user.
Once the boot device is selected, the first sector from the boot device is read into memory and executed. It contains a simple program which examines the partition table and selects the active partition (the one holding the OS). The secondary boot loader is read from that partition; this loader then reads the OS from the partition into memory and runs it. Once running, the OS asks the BIOS for the configuration info for each device and configures the new devices (those that have no stored configuration). After all device configurations are set, they are delivered to the kernel. The kernel then initializes its tables and background boot-up processes and starts the login program or GUI.
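The "examines the partition table and selects the active one" step can be sketched in C against the standard MBR layout (446 bytes of boot code, four 16-byte entries, then the 0x55AA signature); this is an illustration, not code from the answer:

#include <stdint.h>

struct mbr_part {
    uint8_t  boot_flag;      /* 0x80 = active/bootable */
    uint8_t  chs_first[3];
    uint8_t  type;           /* partition type ID */
    uint8_t  chs_last[3];
    uint32_t lba_start;      /* first sector of the partition */
    uint32_t sector_count;
} __attribute__((packed));

/* Returns the starting LBA of the active partition, or 0 if none. */
uint32_t find_active_partition(const uint8_t *mbr /* 512 bytes */)
{
    if (mbr[510] != 0x55 || mbr[511] != 0xAA)    /* boot signature */
        return 0;

    const struct mbr_part *p = (const struct mbr_part *)(mbr + 446);
    for (int i = 0; i < 4; i++)
        if (p[i].boot_flag == 0x80)
            return p[i].lba_start;
    return 0;
}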

Windows Mobile memory corruption

Does the WM operating system protect processes' memory from one another?
Can one badly written application crash some other application just by mistakenly writing over the first one's memory?
Windows Mobile, at least in all current incarnations, is built on Windows CE 5.0 and therefore uses CE 5.0's memory model (which is the same as it was in CE 3.0). The OS doesn't actually do a lot to protect process memory, but it does enough to generally keep processes from interfering with one another. It's not hard and fast, though.
CE processes run in "slots", of which there are 32. The currently running process gets swapped to slot zero, and its addresses are re-based to zero (so all memory in the running process effectively has two addresses: the slot 0 address and its non-zero slot address). These addresses are protected (though there's a simple API call to cross the boundary). This means that pointer corruption, etc. will not step on other apps, but if you want to, you still can.
Also, CE has the concept of shared memory. All processes have access to this area and it is 100% unprotected, and the memory manager can give you a shared address without you specifically asking, depending on your allocation and its size. If you have shared memory then yes, any process can access that data, including corrupting it, and you will get no error or warning in either process.
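As a purely arithmetic illustration of that slot model (the constants - 32 slots of 32 MB each - are from the CE 3.0/5.0 memory model; the slot number and offset are made up):

#include <stdint.h>
#include <stdio.h>

#define CE_SLOT_SIZE 0x02000000U    /* 32 MB per slot, 32 slots */

int main(void)
{
    unsigned slot = 5;            /* hypothetical home slot of a process */
    uint32_t offset = 0x12345;    /* some allocation within the slot */

    /* The same memory has two addresses: one in the process's home slot,
       and one in slot 0 while that process is the running process. */
    printf("home-slot address: 0x%08x\n", slot * CE_SLOT_SIZE + offset);
    printf("slot-0 address:    0x%08x\n", offset);
    return 0;
}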
Does the WM operating system protect processes' memory from one another?
Yes.
Can one badly written application crash some other application just by mistakenly writing over the first one's memory?
No (but it might do other things like use up all the 'disk' space).
Even if you're a device driver, to get permission to write to memory that's owned by a different process there's an API which you must invoke explicitly.
While ChrisW's answer is technically correct, my experience of Windows Mobile is that it is much easier to crash the entire device from an application than it is on the desktop. I can guess at a few reasons why this is the case:
The operating system is often much more heavily OEMed than desktop Windows; that is, the amount of manufacturer-specific low-level code can be very high, which leads to manufacturer-specific bugs at a level that can cause bad crashes. On many devices it is common to see a new firmware revision every month or so, where the revisions are fixes for such bugs.
Resources are scarcer, and an application that exhausts all available resources is liable to cause a crash.
The protection mechanisms and architecture vary quite a bit. The device I'm currently working with is SH4-based, while you mostly see ARM, x86 and the odd MIPS CPU.
