CLR/Fastcall: How are large value types passed internally to called functions?

Just out of curiosity: value types are generally copied, and the JIT compiler seems to use Microsoft's Fastcall calling convention when calling a method. This puts the first few arguments in registers, for fast access. But how are large value types (i.e. bigger than a register or a stack slot) passed to the called function?
This book excerpt states that:
The CLR's jitted code uses the fastcall Windows calling convention. This permits the caller to supply the first two arguments (including this in the case of instance methods) in the machine's ECX and EDX registers.

It is __clrcall, indeed similar to __fastcall. Two registers are used by the x86 jitter (ecx, edx). Four registers by the x64 jitter (rcx, rdx, r8, r9), the same as the native x64 calling convention. Large value types like Decimal and large structs are passed by reserving space on the caller's stack, copying the value into it, and passing a pointer to this copy. The callee copies it again to its own stack frame.
This is expensive, which is why Microsoft recommends that a struct should not be larger than 16 bytes. Intentionally passing a struct by ref to avoid the copy is a workaround, commonly done in C and C++ as well, at the cost of an extra pointer dereference.
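In C++ terms, that workaround looks like the following sketch (the struct and function names here are made up for illustration):

#include <cstdint>

// A 32-byte value type: larger than the recommended 16 bytes,
// so passing it by value copies all 32 bytes on every call.
struct Big {
    std::int64_t a, b, c, d;
};

// By value: the caller reserves stack space and copies the struct in.
std::int64_t sumByValue(Big v) { return v.a + v.b + v.c + v.d; }

// By const reference: only a pointer is passed, at the cost of
// an extra dereference inside the callee.
std::int64_t sumByRef(const Big& v) { return v.a + v.b + v.c + v.d; }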

Related

Pointer to result of AVX load (_mm256_load_si256)

If I have some class with a field like __m256i* loaded_v, and a method like:
void load() {
    loaded_v = &_mm256_load_si256(reinterpret_cast<const __m256i*>(vector));
}
For how long will loaded_v be a valid pointer? Since there are a limited number of registers, I would imagine that eventually loaded_v will refer to a different value, or some other weird behavior will happen. However, I would like to reduce the number of loads I do.
I'm writing a packed bit array class, and I would like to use AVX intrinsics to increase performance. However, it is inefficient to load my array of bits every time I do some operation (and, or, xor, etc). Therefore, I would like to be able to explicitly call load() before performing some batch of operations. However, I don't understand how exactly AVX registers are handled. Could anyone help me out, or point me to some documentation for this issue?
The optimizing compiler would use registers automatically.
It may put a __m256 variable into memory, or in a register, or may use a register in one part of your code and spill it in another. This can be done not only with a standalone automatic storage (stack) variable, but also with a member of a class, especially if the class instance is an automatic storage variable itself.
When a register is used, a __m256 variable corresponds to one of the ymm registers (one of 16 in x86-64, one of 8 in 32-bit compilation, or one of 32 in x86-64 with AVX-512); there's no need to refer to it indirectly.
The _mm256_load_si256 intrinsic doesn't necessarily compile to vmovdqa. For example, this code:
#include <immintrin.h>
__m256i f(__m256i a, const void* p)
{
    __m256i b = _mm256_load_si256(reinterpret_cast<const __m256i*>(p));
    return _mm256_xor_si256(a, b);
}
Compiles as follows (https://godbolt.org/z/ve67YPn4T):
vpxor ymm0, ymm0, YMMWORD PTR [rdx]
ret 0
C and C++ are high-level languages; the intrinsics should be seen as a way to convey the semantics to the compiler, not as instruction mnemonics.
You should load a value into a variable,
__m256i loaded_v;
loaded_v = _mm256_load_si256(reinterpret_cast<const __m256i*>(vector));
or a temporary:
__m256_whatever_operation(_mm256_load_si256(reinterpret_cast<const __m256i*>(vector)), other_operand);
And you should follow the usual C or C++ rules.
If you repeatedly load an indirect value from a pointer, it may be helpful to cache it in a variable, so that the compiler can see the value does not change between loads and use this as an optimization opportunity. Of course, the compiler may miss this opportunity anyway, or find it even without the cached variable (possibly with the help of the strict aliasing rule).
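Putting that advice together, here is a sketch of what a batched operation might look like, with the load cached in a local variable (the array names and the specific operations are assumptions, not taken from the question):

#include <immintrin.h>
#include <cstdint>

alignas(32) std::uint64_t bits[4];  // 256 bits of packed-bit storage
alignas(32) std::uint64_t mask[4];

void batch() {
    // One load into a local; the compiler decides whether v lives
    // in a ymm register or gets spilled to the stack.
    __m256i v = _mm256_load_si256(reinterpret_cast<const __m256i*>(bits));
    __m256i m = _mm256_load_si256(reinterpret_cast<const __m256i*>(mask));

    v = _mm256_and_si256(v, m);  // and
    v = _mm256_xor_si256(v, m);  // xor

    // One store at the end of the batch.
    _mm256_store_si256(reinterpret_cast<__m256i*>(bits), v);
}

(The and/xor intrinsics require AVX2, so compile with it enabled, e.g. -mavx2.)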

Go memory layout compared to C++/C

In Go, it seems there are no constructors, but it is suggested that you allocate an object of a struct type using a function, usually named "New" + TypeName, for example:
func NewRect(x, y, width, height float64) *Rect {
    return &Rect{x, y, width, height}
}
However, I am not sure about the memory layout of Go. In C/C++, this kind of code returns a pointer to a temporary object: the variable is allocated on the stack, so it may contain garbage after the function returns. In Go, do I have to worry about this kind of thing? No standard seems to specify what kind of data is allocated on the stack versus what kind is allocated on the heap.
In Java, for example, it is spelled out that basic types such as int and float live on the stack, while objects live on the heap. Is there a similar rule for Go?
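For contrast, here is the C++ pitfall the question describes, as a sketch (the Rect layout mirrors the Go example):

struct Rect { float x, y, width, height; };

// Undefined behavior in C++: rect has automatic storage, so the
// returned pointer dangles as soon as the function returns.
Rect* NewRect(float x, float y, float width, float height) {
    Rect rect{x, y, width, height};
    return &rect;  // fine in Go (the value escapes to the heap), broken in C++
}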
The Composite Literal section mentions:
Taking the address of a composite literal (§Address operators) generates a unique pointer to an instance of the literal's value.
That means the pointer returned by the New function will be a valid one. (Because the value escapes the function, the compiler will place it on the heap, as the escape analysis discussion below details.)
Calls:
In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution.
The return parameters of the function are passed by value back to the calling function when the function returns.
You can see more in this answer and this thread.
As mentioned in "Stack vs heap allocation of structs in Go, and how they relate to garbage collection":
It's worth noting that the words "stack" and "heap" do not appear anywhere in the language spec.
The blog post "Escape Analysis in Go" details what happens, mentioning the FAQ:
When possible, the Go compilers will allocate variables that are local to a function in that function's stack frame.
However, if the compiler cannot prove that the variable is not referenced after the function returns, then the compiler must allocate the variable on the garbage-collected heap to avoid dangling pointer errors.
Also, if a local variable is very large, it might make more sense to store it on the heap rather than the stack.
The blog post adds:
The code that does the “escape analysis” lives in src/cmd/gc/esc.c.
Conceptually, it tries to determine if a local variable escapes the current scope; the only two cases where this happens are when a variable’s address is returned, and when its address is assigned to a variable in an outer scope.
If a variable escapes, it has to be allocated on the heap; otherwise, it’s safe to put it on the stack.
Interestingly, this applies to new(T) allocations as well.
If they don’t escape, they’ll end up being allocated on the stack. Here’s an example to clarify matters:
var intPointerGlobal *int = nil
func Foo() *int {
    anInt0 := 0
    anInt1 := new(int)

    anInt2 := 42
    intPointerGlobal = &anInt2

    anInt3 := 5
    return &anInt3
}
Above, anInt0 and anInt1 do not escape, so they are allocated on the stack;
anInt2 and anInt3 escape, and are allocated on the heap.
See also "Five things that make Go fast":
Unlike C, which forces you to choose if a value will be stored on the heap, via malloc, or on the stack, by declaring it inside the scope of the function, Go implements an optimisation called escape analysis.
Go’s optimisations are always enabled by default.
You can see the compiler’s escape analysis and inlining decisions with the -gcflags=-m switch.
Because escape analysis is performed at compile time, not run time, stack allocation will always be faster than heap allocation, no matter how efficient your garbage collector is.
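For reference, this is the explicit choice C forces on you, per the quote above (a minimal sketch in C++ syntax):

#include <cstdlib>

void explicitChoice() {
    int onStack = 42;  // automatic storage: gone when the scope exits
    int* onHeap = static_cast<int*>(std::malloc(sizeof(int)));  // heap: yours to free
    *onHeap = 42;
    std::free(onHeap);
    (void)onStack;  // silence unused-variable warnings
}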

What happens when memory "wraps" on an IA-32 supporting machine?

I'm creating a 64-bit model of IA-32 and am representing memory as a 0-based array of 2**64 bytes (the language I'm modeling this in uses ** as the exponentiation operator). This means that valid indices into the array are from 0 to 2**64-1. Now, to model the possible modes of accessing that memory, one can treat one element as an 8-bit number, two elements as a (little-endian) 16-bit number, etc.
My question is, what should my model do if they ask for a 16-bit (or 32-bit, etc.) number from location 2**64-1? Right now, what the model does is say that the returned value is Memory(2**64-1) + (2**8 * Memory(0)). I'm not updating any flags (which feels wrong). Is wrapping like this the correct behavior? Should I be setting any flags when the wrapping happens?
I have a copy of Intel-64-ia-32-ISA.pdf which I'm using as a reference, but it's 1,479 pages, and I'm having a hard time finding the answer to this particular question.
The answer is in Volume 3A, section 5.3: "Limit checking."
For ia-32:
When the effective limit is FFFFFFFFH (4 GBytes), these accesses [which extend beyond the end of the segment] may or may not cause the indicated exceptions. Behavior is implementation-specific and may vary from one execution to another.
For 64-bit mode (Intel 64):
In 64-bit mode, the processor does not perform runtime limit checking on code or data segments. However, the processor does check descriptor-table limits.
I tested it (did anyone expect that?) for 64-bit numbers with this code:
mov dword [0], 0xDEADBEEF
mov dword [-4], 0x01020304
mov rdi, [-4]
call writelonghex
In a custom OS, with pages mapped as appropriate, running in VirtualBox. writelonghex just writes rdi to the screen as a 16-digit hexadecimal number. The result was DEADBEEF01020304: the low four bytes of rdi came from the top of the address space and the high four bytes from address 0.
So yes, it does just wrap. Nothing funny happens.
No flags should be affected (though the manual doesn't say that no flags are set for address wrapping specifically, it does say that mov reg, [mem] never affects them, and that includes this case), and no interrupt/trap/whatever happens (unless, of course, one or both pages touched are not present).
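If you are modeling this in C++-like terms, the wrap falls out of unsigned arithmetic for free, since unsigned overflow is defined to be modular. A sketch (Memory is a stand-in for the model's byte array; the demo bytes are arbitrary):

#include <cstdint>
#include <cstdio>

// Stand-in for the model's 2**64-byte array; only the address math matters here.
static std::uint8_t Memory(std::uint64_t addr) {
    return addr == UINT64_MAX ? 0x04 : 0xEF;  // arbitrary demo bytes
}

// Little-endian 16-bit read: addr + 1 wraps from 2**64-1 to 0.
static std::uint16_t read16(std::uint64_t addr) {
    return static_cast<std::uint16_t>(
        Memory(addr) | (static_cast<unsigned>(Memory(addr + 1)) << 8));
}

int main() {
    std::printf("%04X\n", read16(UINT64_MAX));  // high byte comes from address 0
}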

local and dynamic allocating

I have a tree and I want to release the allocated memory, but I face a problem: a pointer may refer to a variable that isn't dynamically allocated. How can I know whether a pointer refers to a dynamically allocated variable or not?
This is compiler-specific. You may compare the given pointer with a pointer to a local variable. Interpreting the result depends on the way the compiler implements the heap and the stack; generally, for a given compiler, stack addresses are always less (or greater) than heap addresses.
In any case, THIS IS BAD DESIGN.
This may not work if the pointer belongs to another heap (for example, memory allocated in another DLL).
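A cleaner design than comparing addresses is to record at creation time whether each node owns heap memory; a sketch (the node layout is an assumption):

struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
    bool heapAllocated = false;  // set by whoever creates the node
};

Node* makeHeapNode(int v) {
    Node* n = new Node{v};
    n->heapAllocated = true;
    return n;
}

void freeTree(Node* n) {
    if (!n) return;
    freeTree(n->left);
    freeTree(n->right);
    if (n->heapAllocated) delete n;  // never delete stack or static nodes
}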

How does a program look in memory?

How is a program (e.g. C or C++) arranged in computer memory? I kind of know a little about segments, variables etc, but basically I have no solid understanding of the entire structure.
Since the in-memory structure may differ, let's assume a C++ console application on Windows.
Some pointers to what I'm after specifically:
Outline of a function, and how is it called?
Each function has a stack frame; what does it contain, and how is it arranged in memory?
Function arguments and return values
Global and local variables?
const static variables?
Thread-local storage?
Links to tutorial-like material and such are welcome, but please no reference-style material assuming knowledge of assembler etc.
Might this be what you are looking for:
http://en.wikipedia.org/wiki/Portable_Executable
The PE file format is the binary file structure of Windows binaries (.exe, .dll etc). Basically, they are mapped into memory like that. More details are described here, with an explanation of how you yourself can take a look at the binary representation of loaded DLLs in memory:
http://msdn.microsoft.com/en-us/magazine/cc301805.aspx
Edit:
Now I understand that you want to learn how source code relates to the binary code in the PE file. That's a huge field.
First, you have to understand the basics about computer architecture which will involve learning the general basics of assembly code. Any "Introduction to Computer Architecture" college course will do. Literature includes e.g. "John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach" or "Andrew Tanenbaum, Structured Computer Organization".
After reading this, you should understand what a stack is and its difference to the heap. What the stack-pointer and the base pointer are and what the return address is, how many registers there are etc.
Once you've understood this, it is relatively easy to put the pieces together:
A C++ object contains code and data, i.e., member variables. A class
class SimpleClass {
public:
    int m_nInteger;
    double m_fDouble;
    double SomeFunction() { return m_nInteger + m_fDouble; }
};
will be 4 + 8 consecutive bytes in memory (ignoring the 4 bytes of padding a real compiler inserts before the double for alignment). What happens when you do:
SimpleClass c1;
c1.m_nInteger = 1;
c1.m_fDouble = 5.0;
c1.SomeFunction();
First, object c1 is created on the stack, i.e., the stack pointer esp is decreased by 12 bytes to make room. Then constant "1" is written to memory address esp-12 and constant "5.0" is written to esp-8.
Then we call a function, which means two things.
The computer has to load the part of the binary PE file into memory that contains function SomeFunction(). SomeFunction will only be in memory once, no matter how many instances of SimpleClass you create.
The computer has to execute function SomeFunction(). That means several things:
Calling the function also implies passing all parameters; often this is done on the stack. SomeFunction has one (!) parameter: the this pointer, i.e., the pointer to the memory address on the stack where we have just written the values "1" and "5.0".
Save the current program state, i.e., the current instruction address which is the code address that will be executed if SomeFunction returns. Calling a function means pushing the return address on the stack and setting the instruction pointer (register eip) to the address of the function SomeFunction.
Inside function SomeFunction, the old stack is saved by storing the old base pointer (ebp) on the stack (push ebp) and making the stack pointer the new base pointer (mov ebp, esp).
The actual binary code of SomeFunction is executed which will call the machine instruction that converts m_nInteger to a double and adds it to m_fDouble. m_nInteger and m_fDouble are found on the stack, at ebp - x bytes.
The result of the addition is stored in a register and the function returns. That means the stack is discarded which means the stack pointer is set back to the base pointer. The base pointer is set back (next value on the stack) and then the instruction pointer is set to the return address (again next value on the stack). Now we're back in the original state but in some register lurks the result of the SomeFunction().
I suggest you build yourself such a simple example and step through the disassembly. In a debug build the code will be easy to understand, and Visual Studio displays variable names in the disassembly view. See what the registers esp, ebp and eip do, where in memory your object is allocated, where the code is, etc.
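Here is a complete, compilable version of the example above to step through (only main and the printf are additions):

#include <cstdio>

class SimpleClass {
public:
    int m_nInteger;
    double m_fDouble;
    double SomeFunction() { return m_nInteger + m_fDouble; }
};

int main() {
    SimpleClass c1;  // created on the stack: esp makes room for it
    c1.m_nInteger = 1;
    c1.m_fDouble = 5.0;
    double result = c1.SomeFunction();  // 'this' points at c1's stack slot
    std::printf("%f\n", result);  // prints 6.000000
    return 0;
}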
What a huge question!
First you want to learn about virtual memory. Without that, nothing else will make sense. In short, C/C++ pointers are not physical memory addresses. Pointers are virtual addresses. There's a special CPU feature (the MMU, memory management unit) that transparently maps them to physical memory. Only the operating system is allowed to configure the MMU.
This provides safety (there is no C/C++ pointer value you can possibly make that points into another process's virtual address space, unless that process is intentionally sharing memory with you) and lets the OS do some really magical things that we now take for granted (like transparently swap some of a process's memory to disk, then transparently load it back when the process tries to use it).
A process's address space (a.k.a. virtual address space, a.k.a. addressable memory) contains:
a huge region of memory that's reserved for the Windows kernel, which the process isn't allowed to touch;
regions of virtual memory that are "unmapped", i.e. nothing is loaded there, there's no physical memory assigned to those addresses, and the process will crash if it tries to access them;
parts of the various modules (EXE and DLL files) that have been loaded (each of these contains machine code, string constants, and other data); and
whatever other memory the process has allocated from the system.
Now typically a process lets the C Runtime Library or the Win32 libraries do most of the super-low-level memory management, which includes setting up:
a stack (for each thread), where local variables and function arguments and return values are stored; and
a heap, where memory is allocated if the process calls malloc or does new X.
For more about how the stack is structured, read about calling conventions. For more about how the heap is structured, read about malloc implementations. In general the stack really is a stack, a last-in-first-out data structure, containing arguments, local variables, and the occasional temporary result, and not much more. Since it is easy for a program to write straight past the end of the stack (the common C/C++ bug after which this site is named), the system libraries typically make sure that there is an unmapped page adjacent to the stack. This makes the process crash instantly when such a bug happens, so it's much easier to debug (and the process is killed before it can do any more damage).
The heap is not really a heap in the data structure sense. It's a data structure maintained by the CRT or Win32 library that takes pages of memory from the operating system and parcels them out whenever the process requests small pieces of memory via malloc and friends. (Note that the OS does not micromanage this; a process can to a large extent manage its address space however it wants, if it doesn't like the way the CRT does it.)
A process can also request pages directly from the operating system, using an API like VirtualAlloc or MapViewOfFile.
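For example, a sketch of asking the OS for a page directly via VirtualAlloc (Windows-only; error handling kept minimal):

#include <windows.h>
#include <cstdio>

int main() {
    // Reserve and commit one page of zeroed memory, bypassing malloc/new.
    void* p = VirtualAlloc(nullptr, 4096, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == nullptr) return 1;
    static_cast<char*>(p)[0] = 42;  // the committed page is usable like any other memory
    std::printf("page mapped at %p\n", p);
    VirtualFree(p, 0, MEM_RELEASE);  // give the whole reservation back to the OS
    return 0;
}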
There's more, but I'd better stop!
For understanding stack frame structure you can refer to
http://en.wikipedia.org/wiki/Call_stack
It gives you information about the structure of the call stack and how locals, arguments, and the return address are stored on it.
Another good illustration
http://www.cs.uleth.ca/~holzmann/C/system/memorylayout.pdf
It might not be the most accurate information, but MS Press provides some sample chapters of the book Inside Microsoft® Windows® 2000, Third Edition, containing information about processes and their creation, along with images of some important data structures.
I also stumbled upon this PDF that summarizes some of the above information in a nice chart.
But all the provided information is more from the OS point of view, and not very detailed about the application aspects.
Actually, you won't get far in this matter without at least a little bit of knowledge of assembler. I'd recommend a reversing (tutorial) site, e.g. OpenRCE.org.
