Given a 2 processor Nehalem Xeon server with 12GB of RAM (6x2GB), how are memory addresses mapped onto the physical memory modules?
I would imagine that on a single processor Nehalem with 3 identical memory modules, the address space would be striped over the modules to give better memory bandwidth. But with what kind of stripe size? And how does the second processor (+memory) change that picture?
Intel is not very clear on this; you have to dig into their more technical documentation to find all the details. Here's my understanding. Each processor has an integrated memory controller. Some Nehalems have triple-channel controllers, some have dual-channel controllers. Each memory module is assigned to one of the processors. Triple channel means that accesses are interleaved across three banks of modules; dual channel means two.
The specific interleaving pattern is configurable to some extent, but, given their design, it's almost inevitable that you'll end up with 64 to 256 byte stripes.
If one of the processors wants to access memory that's attached to the IMC of some other processor, the access goes through both processors and incurs additional latency.
Related
I have read An Introduction to the Intel® QuickPath Interconnect. The document does not mention that QPI is used by processors to access memory. So I think that processors don't access memory through QPI.
Is my understanding correct?
Intel QuickPath Interconnect (QPI) is not wired to the DRAM DIMMs and as such is not used to access the memory that is connected to the CPU's integrated memory controller (iMC).
The paper you linked includes a figure showing the connections of a processor, with the QPI signals pictured separately from the memory interface.
The text just before that figure confirms that QPI is not used to access memory:
"The processor also typically has one or more integrated memory controllers. Based on the level of scalability supported in the processor, it may include an integrated crossbar router and more than one Intel® QuickPath Interconnect port."
Furthermore, if you look at a typical datasheet you'll see that the CPU pins for accessing the DIMMs are not the ones used by QPI.
The QPI is however used to access the uncore, the part of the processor that contains the memory controller.
(Diagram courtesy of the QPI article on Wikipedia.)
QPI is a fast general-purpose bus: in addition to giving access to the CPU's own uncore, it gives access to other CPUs' uncores.
Thanks to this link, every resource available in the uncore can potentially be reached over QPI, including the iMC of a remote CPU.
QPI defines a protocol with multiple message classes; two of them are used to read memory through another CPU's iMC.
The flow uses a layered stack similar to the usual network stack.
Thus the path to remote memory includes a QPI segment, but the path to local memory doesn't.
Update
For the Xeon E7 v3 18-core CPU (designed for multi-socket systems), the Home Agent doesn't access the DIMMs directly; instead it uses an Intel SMI2 link to reach the Intel C102/C104 Scalable Memory Buffer, which in turn accesses the DIMMs.
The SMI2 link is faster than DDR3, and the memory buffer implements either reliability or interleaving across the DIMMs, depending on its configuration.
Initially the CPU used the FSB (front-side bus) to access the North Bridge, which contained the memory controller and was linked to the South Bridge (ICH, I/O Controller Hub in Intel terminology) through DMI.
Later the FSB was replaced by QPI.
Then the memory controller was moved into the CPU, which used its own memory bus to access the DIMMs and QPI to communicate with the other CPUs and the IOH.
Later, the North Bridge (IOH, I/O Hub in Intel terminology) was integrated into the CPU as well; the CPU then accessed the PCH (which replaced the South Bridge) through DMI and used PCIe to access fast devices (like an external graphics controller).
Recently the PCH has been integrated into the CPU too, so the package now directly exposes only PCIe, the DIMM pins, SATA Express and the other common buses.
As a rule of thumb the buses used by the processors are:
To other CPUs - QPI
To IOH - QPI (if IOH present)
To the uncore - QPI
To DIMMs - pins as mandated by the DRAM technology in use (DDR3, DDR4, ...). For Xeon v2+, Intel uses a fast SMI(2) link to connect to an off-chip memory buffer (Intel C102/C104) that handles the DIMMs and channels in one of two configurations.
To PCH - DMI
To devices - PCIe, SATAexpress, I2C, and so on.
Yes, QPI is used to access all remote memory on multi-socket systems, and much of its design and performance is intended to support such access in a reasonable fashion (i.e., with latency and bandwidth not too much worse than local access).
Basically, most x86 multi-socket systems are lightly[1] NUMA: every DRAM bank is attached to the memory controller of a particular socket; this memory is then local memory for that socket, while the remaining memory (attached to some other socket) is remote memory. All access to remote memory goes over the QPI links, and on many systems[2] that is fully half of all memory accesses, or more.
So QPI is designed to be low latency and high bandwidth to make such access still perform well. Furthermore, aside from pure memory access, QPI is the link through which the cache coherence between sockets occurs, e.g., notifying the other socket of invalidations, lines which have transitioned into the shared state, etc.
[1] That is, the NUMA factor is fairly low, typically less than 2 for latency and bandwidth.
[2] E.g., with NUMA interleave mode on and 4 sockets, 75% of your accesses are remote.
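To put a rough number on that NUMA factor yourself, here is a minimal sketch (my own illustration, not part of the answer above) that uses libnuma to time the same strided walk over a local and a remote allocation. The node numbers 0 and 1, the buffer size and the stride are assumptions; the ratio of the two timings approximates the remote-to-local latency factor.

```c
/* Rough local-vs-remote latency comparison using libnuma (link with -lnuma).
 * Assumes at least two NUMA nodes; node numbers 0 and 1 are placeholders. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_SIZE (256UL * 1024 * 1024)  /* large enough to defeat the caches */
#define STRIDE   64                     /* one cache line per access */

static double touch(volatile char *buf)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BUF_SIZE; i += STRIDE)
        buf[i]++;                       /* read-modify-write every cache line */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
    if (numa_available() < 0) { fprintf(stderr, "no NUMA support\n"); return 1; }

    numa_run_on_node(0);                            /* pin the thread to socket 0 */

    char *local  = numa_alloc_onnode(BUF_SIZE, 0);  /* memory behind socket 0's iMC */
    char *remote = numa_alloc_onnode(BUF_SIZE, 1);  /* memory reached over QPI */
    if (!local || !remote) { fprintf(stderr, "allocation failed\n"); return 1; }

    /* fault the pages in first so the timed pass measures DRAM access,
       not page-fault handling */
    touch(local); touch(remote);

    printf("local : %.3f s\n", touch(local));
    printf("remote: %.3f s\n", touch(remote));      /* ratio ~= the NUMA factor */

    numa_free(local, BUF_SIZE);
    numa_free(remote, BUF_SIZE);
    return 0;
}
```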
I'm a bit confused about the difference between shared memory and distributed memory. Can you clarify?
Is shared memory for one processor and distributed for many (for network)?
Why do we need distributed memory, if we have shared memory?
Short answer
Shared memory and distributed memory are low-level programming abstractions that are used with certain types of parallel programming. Shared memory allows multiple processing elements to share the same locations in memory (that is, to see each other's reads and writes) without any other special directives, while distributed memory requires explicit commands to transfer data from one processing element to another.
Detailed answer
There are two issues to consider regarding the terms shared memory and distributed memory. One is what these terms mean as programming abstractions, and the other is what they mean in terms of how the hardware is actually implemented.
In the past there were true shared-memory, cache-coherent multiprocessor systems. The processors communicated with each other and with a shared main memory over a shared bus. This meant that any access from any processor to main memory would have equal latency. Today these types of systems are not manufactured; instead there are various point-to-point links between processing elements and memory elements (this is the reason for non-uniform memory access, or NUMA). However, the idea of communicating directly through memory remains a useful programming abstraction, so in many systems this is handled by the hardware and the programmer does not need to insert any special directives. Some common programming techniques that use this abstraction are OpenMP and Pthreads.
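As a tiny illustration of the shared-memory abstraction (my own sketch, not from the original answer): with pthreads, one thread's ordinary store to a shared variable is visible to another thread that simply loads the same address; only synchronization, not data transfer, is explicit.

```c
/* Shared-memory communication with pthreads: the worker sees the main
 * thread's write to `value` simply by reading the same address.
 * Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static int value;                 /* shared: both threads use the same location */
static int produced;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!produced)             /* wait until the producer has written */
        pthread_cond_wait(&ready, &lock);
    printf("consumer read %d directly from shared memory\n", value);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);

    pthread_mutex_lock(&lock);
    value = 42;                   /* no send/receive: just a store */
    produced = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}
```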
Distributed memory has traditionally been associated with processors performing computation on their local memory and then, once the computation is done, using explicit messages to transfer data to and from remote processors. This adds complexity for the programmer, but simplifies the hardware implementation because the system no longer has to maintain the illusion that all memory is actually shared. This type of programming has traditionally been used with supercomputers that have hundreds or thousands of processing elements. A commonly used technique is MPI.
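For contrast, a minimal distributed-memory sketch using MPI (again my own illustration): the value only reaches rank 1 because rank 0 explicitly sends it and rank 1 explicitly receives it; the two ranks may live on machines that share nothing.

```c
/* Distributed-memory communication with MPI: data moves only via explicit
 * messages.  Compile with mpicc and run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* explicit transfer */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d via an explicit message\n", value);
    }

    MPI_Finalize();
    return 0;
}
```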
However, supercomputers are not the only systems with distributed memory. Another example is GPGPU programming, which is available for many desktop and laptop systems sold today. Both CUDA and OpenCL require the programmer to explicitly manage sharing between the CPU and the GPU (or other accelerator, in the case of OpenCL). This is largely because, when GPU programming started, the GPU and CPU memories were separated by the PCI bus, which has a very long latency compared to performing computation on locally attached memory. So the programming models were developed assuming that the memory was separate (or distributed), and communication between the two processing elements (CPU and GPU) required explicit communication. Now that many systems have GPU and CPU elements on the same die, there are proposals to allow GPGPU programming to have an interface that is more like shared memory.
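In the GPGPU case the explicit management looks roughly like the host-side CUDA runtime sketch below (my own illustration; the kernel launch is omitted and the buffer size is arbitrary). Data only appears in GPU memory because the program copies it there.

```c
/* Explicit CPU<->GPU data movement with the CUDA runtime API (build with nvcc
 * or link against -lcudart).  Only the host-side copies are shown. */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 1 << 20;
    float *host = malloc(n * sizeof(float));
    float *dev  = NULL;

    for (size_t i = 0; i < n; i++) host[i] = 1.0f;

    cudaMalloc((void **)&dev, n * sizeof(float));                     /* separate GPU memory */
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); /* explicit copy to GPU */

    /* ... launch a kernel that works on `dev` here ... */

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); /* explicit copy back */
    cudaFree(dev);
    free(host);
    return 0;
}
```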
In modern x86 terms, for example, all the CPUs in one physical computer share memory. e.g. 4-socket system with four 18-core CPUs. Each CPU has its own memory controllers, but they talk to each other so all the CPUs are part of one coherency domain. The system is NUMA shared memory, not distributed.
A room full of these machines form a distributed-memory cluster which communicates by sending messages over a network.
Practical considerations are one major reason for distributed memory: it's impractical to have thousands or millions of CPU cores sharing the same memory with any kind of coherency semantics that would make it worth calling it shared memory.
I'm aware that in most modern architectures the CPU sends read and write requests to a memory management unit (MMU) rather than directly to the RAM controller.
If other peripherals are also addressed, that is to say, read from and written to using an address bus, then are these addresses also accessed through a virtual address? In other words, to speak to a USB drive etc. does the CPU send the target virtual address to an MMU which translates it to a physical one? Or does it simply write to a physical address with no intermediary device?
I can't speak globally, as there may be exceptions, but the general idea is that the CPU's memory interface goes completely through the MMU (and completely through a cache, or layers of caches).
In order for peripherals to really work (otherwise a status register would be cached on the first read, and subsequent reads would get the cached version rather than the real one), you have to mark the peripheral's address space as not cached. For example, on an ARM (and no doubt others) where you have separate I- and D-cache enables, you can turn on the I-cache without the MMU, but to turn on the D-cache without running into this peripheral problem you need the MMU on, with the peripheral space present in the tables and marked as not cached.
It is up to the software designers to decide whether the virtual addresses for the peripherals should match the physical ones or whether the peripherals should be moved elsewhere; both have pros and cons.
It is certainly possible to design a chip/system where part of the address space automatically bypasses the MMU or cache, but that can make the buses ugly, and/or the chip may have separate buses for peripherals and RAM, or other solutions, so the above is not necessarily a universal answer; but for, say, an ARM, and I would assume an x86, that is how it works. On the ARMs I am familiar with, the MMU and L1 cache are in the core; the L2 is outside, and the L3 beyond that if you have one. The L2 (if you have one, from ARM) sits literally between the core and the world, but the AXI/AMBA bus has cacheability settings, so each transaction may or may not be marked as cacheable; if a transaction is not cacheable it passes right through the L2 logic. When the MMU is enabled, it is what determines that cacheability on a per-transaction basis.
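As a concrete, heavily simplified illustration on Linux: a user-space program can map a peripheral's register range into its virtual address space with mmap on /dev/mem, and opening the file with O_SYNC asks for an uncached mapping on most platforms. The base address below is a made-up placeholder, and real systems normally use a proper kernel driver rather than /dev/mem.

```c
/* Sketch: mapping a peripheral register region uncached via /dev/mem.
 * PERIPH_BASE is a made-up placeholder; real code uses the address from the
 * SoC's datasheet (and usually a kernel driver instead of /dev/mem). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PERIPH_BASE 0x40000000UL      /* hypothetical physical base address */
#define PERIPH_SIZE 0x1000UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);   /* O_SYNC => uncached mapping */
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, PERIPH_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PERIPH_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Every read goes to the device, not to a stale cached copy, because the
     * mapping is marked non-cacheable in the page tables. */
    uint32_t status = regs[0];
    printf("status register: 0x%08x\n", status);

    munmap((void *)regs, PERIPH_SIZE);
    close(fd);
    return 0;
}
```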
Actually the virtual-to-physical translation is done in the CPU for almost all modern (and at this point, even most old) architectures. Even the DRAM and PCIe controllers (previously in the Northbridge) have made it onto the CPU. So a modern CPU doesn't even talk to an external RAM controller; it talks to the DRAM directly.
If other peripherals are also addressed, that is to say, read from and written to using an address bus, then are these addresses also accessed through a virtual address?
At least in the case of x86, yes. You can virtually map your memory-mapped I/O ranges anywhere. That's a good thing, too; otherwise the virtual address space would necessarily mirror the weird physical layout, with "holes" that you couldn't map real RAM into because you'd then have two things at the same address.
As far as I know, the paging system does eliminate external fragmentation in the physical address space, but what about fragmentation in the virtual address space?
In modern OSes the virtual address space is per process (the kernel has its own dedicated virtual range), which means that the demands on it are much lower than on the whole machine. The virtual address space is usually large enough (2-3 GB per process on x86 and multiple TB (8 on Windows) on x64 machines) that fragmentation is not as big an issue as for the system-wide physical address space. Still, the issue can arise, especially for long-running and memory-hungry applications on x86 or other 32-bit architectures. For this the OS provides mechanisms, for example in the form of the heap code. An application usually reserves one or more memory ranges as heap(s) when it starts and allocates the required chunks of memory from there later (e.g. via malloc). There are a variety of implementations that handle fragmentation of the heap in different ways. Windows provides a special low-fragmentation heap implementation that can be used if desired. Everything else is usually up to the application or its libraries.
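For reference, opting a private heap into the Windows low-fragmentation heap looks roughly like the sketch below; note that on Windows Vista and later the LFH is already enabled by default, so the call is mostly of historical interest.

```c
/* Sketch: creating a private heap and enabling the low-fragmentation heap (LFH).
 * On Windows Vista and later the LFH is already the default policy. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE heap = HeapCreate(0, 0, 0);        /* growable private heap */
    if (!heap) return 1;

    ULONG mode = 2;                           /* 2 = low-fragmentation heap */
    if (!HeapSetInformation(heap, HeapCompatibilityInformation,
                            &mode, sizeof(mode)))
        printf("could not enable the LFH: %lu\n", GetLastError());

    void *chunk = HeapAlloc(heap, 0, 256);    /* allocations now come from the LFH */
    HeapFree(heap, 0, chunk);
    HeapDestroy(heap);
    return 0;
}
```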
Let me add a qualification to your statement. Paging systems nearly eliminate fragmentation in the physical address space when the kernel is pageable.
On some systems, the user-mode page tables are themselves pageable. On others, they reside in physical memory that is not pageable; then you can get fragmentation.
Fragmentation in the virtual address space tends to occur in heap allocation. The challenge of heap managers is to manage the space while minimizing fragmentation.
On a Linux machine, I need to count the number of read and write accesses to memory (DRAM) performed by a process. The machine has a NUMA configuration and I am binding the process to access memory from a single remote NUMA node using numactl. The process is running on CPUs in node 0 and accessing memory in node 1.
Currently, I am using perf to count LLC load-miss and LLC store-miss events to serve as an estimate for read and write accesses to memory, since I assumed LLC misses have to be served by memory accesses. Is this approach correct, i.e. are these events relevant? And are there any alternatives for obtaining the read and write access counts?
Processor : Intel Xeon E5-4620
Kernel : Linux 3.9.0+
Depending on your hardware, you should be able to access performance counters located on the memory side that count memory accesses exactly. On Intel processors these are called uncore events. I know that you can also count the same thing on AMD processors.
Counting LLC misses is not totally accurate, because some mechanisms, such as the hardware prefetcher, can generate a significant number of additional memory accesses.
Regarding your hardware, unfortunately you will have to use raw events (in perf terminology). These events can't be generalized by perf because they are processor-specific, so you will have to look in your processor's manual for the raw encoding of the event to give to perf. For your Intel processor you should look at chapter 18.9.8, Intel® Xeon® Processor E5 Family Uncore Performance Monitoring Facility, and chapter 19, Performance-Monitoring Events, of the Intel Software Developer's Manual available here. To use these documents you'll need the exact model of your processor, which you can get from /proc/cpuinfo.
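If you end up programming a raw event yourself instead of passing it to the perf tool, the perf_event_open(2) interface accepts the raw encoding directly. The sketch below is only illustrative: RAW_EVENT_CODE is a placeholder to be replaced with the encoding from the manual, and true uncore events additionally require using the PMU type exported under /sys/bus/event_source/devices/ with pid = -1 and a specific CPU.

```c
/* Sketch: counting a raw hardware event with perf_event_open(2).
 * RAW_EVENT_CODE is a placeholder; take the real event/umask encoding from the
 * SDM chapters mentioned above.  Uncore events additionally need attr.type set
 * to the PMU type found under /sys/bus/event_source/devices/, pid = -1 and a
 * specific cpu. */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#define RAW_EVENT_CODE 0x0000   /* placeholder: event | (umask << 8) from the SDM */

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;            /* "raw" event, as in perf -e rXXXX */
    attr.config = RAW_EVENT_CODE;
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    /* count for this process on any CPU */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... the memory-intensive work you want to measure goes here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count = 0;
    read(fd, &count, sizeof(count));
    printf("raw event count: %llu\n", (unsigned long long)count);

    close(fd);
    return 0;
}
```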