Is Redpitaya 16k sample size a hard limit? - redpitaya

Looking at the documentation, I notice that the scope and AFG support record lengths of up to 16k points. Is there any way this can be increased (I don't remember this number being quoted during the development period)? Up to 125k points would be very useful for our applications.

16k samples is not a hard limit. The scope & AWG applications use 16k x 16-bit buffers per channel, 128 kB in total, which accounts for only half of the total Zynq 7010 BRAM memory. It is also possible to use a partition of the external 512 MB DDR3 RAM for signal buffering, shared with the Linux OS, or simply to allocate the BRAM to a single channel+function combination if your application does not use all 4 of them.
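For a quick sanity check on those numbers, here is the arithmetic as a small sketch, assuming 16-bit samples and the four channel+function buffers mentioned above:

```c
#include <stdio.h>

int main(void) {
    const unsigned samples_per_buffer = 16 * 1024;   /* 16k samples */
    const unsigned bytes_per_sample   = 2;           /* 16-bit samples */
    const unsigned buffers            = 4;           /* 2 scope + 2 AWG channel buffers */

    unsigned total = samples_per_buffer * bytes_per_sample * buffers;
    printf("Total buffer memory: %u KiB\n", total / 1024);  /* prints 128 KiB */
    return 0;
}
```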

Related

Is the access speed of RAM/disk memory dependent on its capacity?

The image shows that as memory capacity increases, the access time also increases.
Does it make sense that access time depends on memory capacity?
No. The images show that technologies with lower cost in $ / GB are slower. Within a certain level (tier of the memory hierarchy), performance is not dependent on size. You can build systems with wider busses and so on to get more bandwidth out of a certain tier, but it's not inherently slower to have more.
Having more disks or larger disks doesn't make disk access slower, they're close to constant latency determined by the nature of the technology (rotating platter).
In fact, larger-capacity disks tend to have better bandwidth once they do seek to the right place, because more bits per second are flying under the read / write heads. And with multiple disks you can run RAID to utilize multiple disks in parallel.
Similarly for RAM, having multiple channels of RAM on a big many-core Xeon increases aggregate bandwidth. (But unfortunately hurts latency due to a more complicated interconnect vs. simpler quad-core "client" CPUs: Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?) But that's a sort of secondary effect, and just using RAM with more bits per DIMM doesn't change latency or bandwidth, assuming you use the same number of DIMMs in the same system.

how is CPU physical address space mapped to physical DRAM?

In a system's memory map (also called the CPU memory map), address ranges are allocated for RAM, MMIO for PCI devices, etc.
Let's take an example where the address range for RAM runs from address 0 up to 512 MB, which includes the DOS compatibility memory space from 0 to 1 MB.
Now, when we say that this 512 MB region is mapped to memory, does this mean that address 0 in the CPU address space is mapped to address 0 in the physical RAM, and so on up to 512 MB? If not, how is the mapping done?
Also, will the memory address ranges allocated in the CPU address space be exactly equal to the size of the RAM installed in the system? If not, how does the mapping take place in that case?
Also, how is the DOS compatibility region mapped? Is this region of memory left unused when running an OS other than DOS?
Also, does the memory mapping mean that only addresses from 0 to 512 MB generated by the CPU will be directed to RAM? Will any other address generated by the CPU never be directed to RAM by the MMU? In that case, would all applications have to use addresses in the 0 to 512 MB range in order to access memory?
I'm considering an x86 system here.
Before going into the question, it's worth taking a look into DRAM architecture.
Now, when we say that this 512 MB region is mapped to memory, does this mean that address 0 in the CPU address space is mapped to address 0 in the physical RAM, and so on up to 512 MB? If not, how is the mapping done?
There isn't exactly a concept of 'address 0' in DRAM. Instead, there is a hierarchy of channels, DIMMs, ranks, chips, banks, rows and columns, and the DRAM controller generates 'commands' that activate parts of the DRAM and select data from the cells.
So the answer to the first question is no. As others have mentioned, the exact mapping is complicated and undocumented. If you are interested, AMD does provide documentation (Section 2.10 and 3.5), and there are attempts at reverse engineering Intel's mapping (Section 4).
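To make the idea concrete, here is a deliberately simplified, hypothetical decomposition of a physical address into channel, bank, row and column bits. Real controllers use different, often XOR-hashed bit selections, so the layout below is illustrative only:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical bit layout: | row (16) | bank (3) | column (10) | channel (1) | offset (3) |
 * Real Intel/AMD memory controllers use other, often hashed, bit selections. */
typedef struct {
    unsigned channel, bank, row, column;
} dram_coord;

static dram_coord decode(uint64_t phys) {
    dram_coord c;
    phys >>= 3;                         /* drop 8-byte burst-aligned offset */
    c.channel = phys & 0x1;   phys >>= 1;
    c.column  = phys & 0x3FF; phys >>= 10;
    c.bank    = phys & 0x7;   phys >>= 3;
    c.row     = (unsigned)(phys & 0xFFFF);
    return c;
}

int main(void) {
    dram_coord c = decode(0x12345678);
    printf("channel=%u bank=%u row=%u column=%u\n",
           c.channel, c.bank, c.row, c.column);
    return 0;
}
```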
Also, will the memory address ranges allocated in the CPU address space be exactly equal to the size of the RAM installed in the system? If not, how does the mapping take place in that case?
The answer is also no, for several reasons. You mentioned one of them yourself: the physical address space represents more than just RAM; there are also PCIe devices, ROM (where the BIOS is located), etc., and thus there are memory holes. To inspect what each physical address range corresponds to on a Linux system, take a look at /proc/iomem, which lists the mappings.
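For example, a minimal way to dump those mappings programmatically (on recent kernels, non-root users see the address ranges reported as zeros):

```c
#include <stdio.h>

/* Print the physical address map as reported by the kernel.
 * Lines look like: "00100000-1fffffff : System RAM" */
int main(void) {
    FILE *f = fopen("/proc/iomem", "r");
    if (!f) { perror("/proc/iomem"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```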
Also, how is the DOS compatibility region mapped? Is this region of memory left unused when running an OS other than DOS?
Yes, I believe these are unused memory holes.
Also, does the memory mapping mean that only addresses from 0 to 512 MB generated by the CPU will be directed to RAM? Will any other address generated by the CPU never be directed to RAM by the MMU? In that case, would all applications have to use addresses in the 0 to 512 MB range in order to access memory?
The MMU serves a completely different purpose. Take a look at virtual-to-physical address translation.
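As a rough illustration of what that translation looks like, here is a toy single-level page table with 4 KiB pages. Real x86 hardware walks multi-level tables, so this is only a sketch of the principle:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                  /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Toy single-level page table: index = virtual page number,
 * value = physical frame number. Real x86 uses 2-4 levels. */
static uint32_t page_table[1024];

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void) {
    page_table[3] = 42;                /* map virtual page 3 -> physical frame 42 */
    uint32_t v = 3 * PAGE_SIZE + 0x123;
    printf("virtual 0x%x -> physical 0x%x\n",
           (unsigned)v, (unsigned)translate(v));
    return 0;
}
```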

When a 32-bit machine can access a maximum of 4 GB of RAM, how does it access a 150 GB HDD? [closed]

I understand that the memory a 32-bit machine can access is limited to 4 GB. Since I/O devices such as PCIe, USB, serial, parallel, PS/2, audio, CD drives, floppy drives, memory card readers, etc. have to be dealt with as well, the result is that less than 4 GB of RAM is usable by the CPU itself; all the things I have just mentioned, and others, take up quite a bit of that address space too.
What I am confused about is how it can support hundreds of GB of hard disk space. How is it able to access up to 1 TB of storage through SATA/ATA interfaces? The same goes for USB mass storage devices like external USB hard disks; I am surprised that such large storage can be accessed by a CPU limited to 32 bits. Is there no limit to how big an HDD a 32-bit processor can support?
Not sure where to begin :-)
This is a very, very simplistic explanation (not exactly accurate since the 286), but it might help you grasp the basic concepts:
Memory addressing is done via an address bus: a 32 bit address bus can express 2^32 different addresses. The smallest amount of memory manipulable in one operation is called the "word" size, which is limited by the width of the data bus.
The maximum amount of addressable memory is word size times the number of addresses.
In "block IO" operations, the equivalent of the word size is the block size, usually much bigger. This is a trade-off: larger data can be accessed with the same address length, but flipping a single bit requires overwriting the whole block.
The bigger difference is that the address does not need to be present all at once on an "address bus" as it is for memory: commands (and responses) are transmitted in sequential "packets", like on a network. There is thus no hardware-imposed limit on address size, though I am sure the protocol specifies a reasonable upper bound.
As you can see, addressable disk size is completely unrelated to CPU bus widths and register sizes.
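As a rough illustration of the numbers involved, here is the arithmetic for block addressing, assuming the common 512-byte sector size and the LBA widths used by classic and modern ATA:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint64_t sector = 512;                      /* bytes per block */

    uint64_t lba28 = (1ull << 28) * sector;           /* classic ATA (LBA28)  */
    uint64_t lba32 = (1ull << 32) * sector;           /* 32-bit block numbers */
    uint64_t lba48 = (1ull << 48) * sector;           /* modern ATA (LBA48)   */

    printf("28-bit LBA: %llu GiB\n", (unsigned long long)(lba28 >> 30)); /* 128 GiB */
    printf("32-bit LBA: %llu TiB\n", (unsigned long long)(lba32 >> 40)); /*   2 TiB */
    printf("48-bit LBA: %llu PiB\n", (unsigned long long)(lba48 >> 50)); /* 128 PiB */
    return 0;
}
```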
Most application developers nowadays prefer to deal with 64-bit file pointers, for example lseek64 on Linux or SetFilePointer on Windows, so from a file's point of view you can address 2^64 bytes within a single file.
At the hardware level it is more interesting, because each disk is split up (into clusters at the logical level, and into sectors at the disk level). Each cluster is a run of many bytes that can be addressed and read in a single request. Operating systems hide these operations from you, and a terabyte is much easier to address in terms of clusters.
A 32-bit processor can handle numbers much bigger than 32 bits (for example, using "add with carry" instructions). It can write large address values into the address register of an I/O controller (for example, using several 32-bit stores). Because of this indirect addressing, the disk I/O address is independent of the processor's bus address.
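For example, a 32-bit Linux program can seek past the 4 GB mark in a file by using 64-bit offsets. This is a sketch using the standard _FILE_OFFSET_BITS mechanism (which makes off_t 64 bits wide, equivalent to calling lseek64 directly); the file name is made up for illustration:

```c
#define _FILE_OFFSET_BITS 64          /* make off_t 64-bit even on 32-bit Linux */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    int fd = open("bigfile.bin", O_RDONLY);   /* hypothetical large file */
    if (fd < 0) { perror("open"); return 1; }

    /* Seek to the 5 GB mark -- impossible with a 32-bit file offset. */
    off_t pos = lseek(fd, 5ll * 1024 * 1024 * 1024, SEEK_SET);
    if (pos == (off_t)-1) { perror("lseek"); close(fd); return 1; }

    printf("now at offset %lld\n", (long long)pos);
    close(fd);
    return 0;
}
```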

Nehalem memory architecture address mapping

Given a 2 processor Nehalem Xeon server with 12GB of RAM (6x2GB), how are memory addresses mapped onto the physical memory modules?
I would imagine that on a single processor Nehalem with 3 identical memory modules, the address space would be striped over the modules to give better memory bandwidth. But with what kind of stripe size? And how does the second processor (+memory) change that picture?
Intel is not very clear on that, you have to dig into their hardcore technical documentation to find out all the details. Here's my understanding. Each processor has an integrated memory controller. Some Nehalems have triple-channel controllers, some have dual-channel controllers. Each memory module is assigned to one of the processors. Triple channel means that accesses are interleaved across three banks of modules, dual channel = two banks.
The specific interleaving pattern is configurable to some extent, but, given their design, it's almost inevitable that you'll end up with 64 to 256 byte stripes.
If one of the processors wants to access memory that's attached to the IMC of the other processor, the access goes through both processors and incurs additional latency.
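A simplified model of that interleaving, assuming a fixed 64-byte stripe round-robined across three channels (the real stripe size and hashing are configurable and not publicly documented in detail):

```c
#include <stdint.h>
#include <stdio.h>

#define STRIPE   64u                  /* assumed interleave granularity, in bytes */
#define CHANNELS 3u                   /* triple-channel IMC */

/* Which memory channel would a physical address land on under
 * simple round-robin interleaving? Purely illustrative. */
static unsigned channel_of(uint64_t phys) {
    return (unsigned)((phys / STRIPE) % CHANNELS);
}

int main(void) {
    for (uint64_t a = 0; a < 4 * STRIPE * CHANNELS; a += STRIPE)
        printf("address 0x%05llx -> channel %u\n",
               (unsigned long long)a, channel_of(a));
    return 0;
}
```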

Mapping of memory addresses to physical modules in Windows XP

I plan to run 32-bit Windows XP on a workstation with dual processors, based on Intel's Nehalem microarchitecture, and triple channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed, but will only expose 4 GB (or slightly less).
My question is: Assuming that 6 GB of RAM is installed in six 1 GB modules, which physical 4 GB will Windows actually map into its address space?
In particular:
Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.)
Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM, while the other only has 1 GB?
Thanks!
This question was answered over at SuperUser. Because there are no other responses here, I'm responding to my own question so that the relevant information can easily be found.
Since the question was asked, I have also come across this blog post by Mark Russinovich, explaining how the Windows XP kernel handles memory.
In conclusion, it appears that what happens is that the kernel, even though it is PAE aware, truncates all physical memory addresses to 32-bit, meaning only the lowest physical 4 GB of RAM will be used. This in turn is mapped by hardware to memory modules, and corresponds to the entirety of the first module triplet (3 GB in total), and a third of the second triplet (spread across all three of its modules -- 1 GB in total).
Thus, all memory channels will be exploited, but the amount of memory will not be balanced between NUMA nodes.
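A back-of-the-envelope sketch of that split, assuming (as described above) that the first module triplet occupies physical addresses 0-3 GB and the second triplet the next 3 GB, with channel interleaving inside each triplet:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint64_t GiB     = 1ull << 30;
    const uint64_t visible = 4 * GiB;     /* XP kernel truncates physical addresses to 32 bits */
    const uint64_t triplet = 3 * GiB;     /* 3 x 1 GiB modules per NUMA node */

    uint64_t from_first  = visible < triplet ? visible : triplet;   /* 3 GiB from node 0 */
    uint64_t from_second = visible - from_first;                    /* 1 GiB from node 1 */

    printf("used from node 0: %llu GiB\n", (unsigned long long)(from_first  / GiB));
    printf("used from node 1: %llu GiB\n", (unsigned long long)(from_second / GiB));
    return 0;
}
```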
