System.Int32 contains... another System.Int32

I used reflection to inspect the contents of System.Int32 and found that it contains another System.Int32.
System.Int32 m_value;
I don't see how that's possible.
This int really is the "backing integer" of the one you have: if you box an int and use reflection to change the value of its m_value field, you effectively change the value of the integer:
object testInt = 4;
Console.WriteLine(testInt); // yields 4
typeof(System.Int32)
    .GetField("m_value", BindingFlags.NonPublic | BindingFlags.Instance)
    .SetValue(testInt, 5);
Console.WriteLine(testInt); // yields 5
There's gotta be a rational explanation behind this singularity. How can a value type contain itself? What magic does the CLR use to make it work?

As noted, a 32-bit integer can exist in two varieties: four bytes anywhere in memory or in a CPU register (not just the stack), the fast version; or embedded in a System.Object, the boxed version. The declaration of System.Int32 is compatible with the latter. When boxed, it has the typical object header, followed by 4 bytes that store the value. And those 4 bytes map exactly to the m_value member. Maybe you see why there's no conflict here: m_value is always the fast, non-boxed version, because there is no such thing as a boxed boxed integer.
Both the language compiler and the JIT compiler are keenly aware of the properties of an Int32. The compiler is responsible for deciding when the integer needs to be boxed and unboxed, and it generates the corresponding IL instructions to do so. It also knows which IL instructions are available to operate on the integer without boxing it first. This is readily evident from the methods implemented by System.Int32: it doesn't have an override for operator==(), for example; that comparison is done by the CEQ opcode. But it does have an override for Equals(), required to override the Object.Equals() method when the integer is boxed. Your compiler needs to have that same kind of awareness.
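To see that division of labor concretely, here is a minimal C# sketch (my illustration, not from the answer): the == comparison compiles straight to IL comparison opcodes, while Equals on an object reference dispatches to Int32's override of Object.Equals:
using System;

int a = 4;
int b = 4;

// Compiles to plain IL comparison opcodes (ceq/beq); no call, no boxing.
Console.WriteLine(a == b);          // True

// Boxing a gives an object; Equals dispatches to Int32's override of
// Object.Equals (the argument b is boxed for the call, too).
object boxed = a;
Console.WriteLine(boxed.Equals(b)); // True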

Check out this thread for a laborious discussion of this mystery.

The magic is actually in the boxing/unboxing.
System.Int32 (and its alias int) is a value type, which means that it's normally stored inline: a local variable becomes 32 bits of stack space (or a CPU register), and a field lives directly inside its containing object.
However, when you write object testInt = 4;, the compiler automatically boxes your value 4, since object is a reference type. What you have is a reference that points to a System.Int32, which now lives on the heap: an object header followed by the 32 bits of value. But the auto-boxed reference to a System.Int32 is called (...wait for it...) System.Int32.
What your code sample is doing is creating a reference System.Int32 and changing the value System.Int32 that it points to. This explains the bizarre behavior.
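A small follow-up sketch (mine, built on the question's own reflection trick) makes the copy semantics visible: mutating the boxed System.Int32 leaves the original unboxed variable untouched:
using System;
using System.Reflection;

int original = 4;
object boxed = original; // boxing copies the value onto the heap

typeof(int)
    .GetField("m_value", BindingFlags.NonPublic | BindingFlags.Instance)
    .SetValue(boxed, 5); // mutates only the heap copy

Console.WriteLine(original); // 4 — the unboxed original is untouched
Console.WriteLine(boxed);    // 5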

Rationale
The other answers are ignorant and/or misleading.
Prologue
It can help to understand this by first reading my answer on How do ValueTypes derive from Object (ReferenceType) and still be ValueTypes?
So what's going on?
System.Int32 is a struct that contains a 32-bit signed integer. It does not contain itself.
To reference a value type in IL, the syntax is valuetype [assembly]Namespace.TypeName.
II.7.2 Built-in types
The CLI built-in types have corresponding value types defined in the Base Class Library. They shall be referenced in signatures only using their special encodings (i.e., not using the general purpose valuetype TypeReference syntax). Partition I specifies the built-in types.
This means that, if you have a method that takes a 32-bit integer, you mustn't use the general-purpose valuetype [mscorlib]System.Int32 syntax, but rather the special encoding for the built-in 32-bit signed integer, int32.
In C#, this means that whether you type System.Int32 or int, either will be compiled to int32, not valuetype [mscorlib]System.Int32.
You may have heard that int is an alias for System.Int32 in C#, but in reality, both are aliases of the built-in CLS value type int32.
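A trivial C# check (my addition, not part of the original answer) shows that the two names resolve to one and the same type:
using System;

// Both spellings name the same type; the IL emitted for either is int32.
Console.WriteLine(typeof(int) == typeof(System.Int32)); // True
Console.WriteLine(typeof(int).FullName);                // System.Int32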
So while a struct like
public struct MyStruct
{
internal MyStruct m_value;
}
indeed would compile to (and thus be invalid):
.class public sequential ansi sealed beforefieldinit MyStruct extends [mscorlib]System.ValueType
{
.field assembly valuetype MyStruct m_value;
}
System.Int32, on the other hand, which is declared as
namespace System
{
public struct Int32
{
internal int m_value;
}
}
instead compiles to (ignoring interfaces):
.class public sequential ansi sealed beforefieldinit System.Int32 extends [mscorlib]System.ValueType
{
.field assembly int32 m_value;
}
The C# compiler does not need a special case to compile System.Int32, because the CLI specification stipulates that all references to System.Int32 are replaced with the special encoding for the built-in CLS value type int32.
Ergo: System.Int32 is a struct that doesn't contain another System.Int32, but an int32. In IL, you can have two method overloads, one taking a System.Int32 and another taking an int32, and have them co-exist:
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89)
.ver 2:0:0:0
}
.assembly test {}
.module test.dll
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003
.corflags 0x00000001
.class MyNamespace.Program
{
.method static void Main() cil managed
{
.entrypoint
ldc.i4.5
call int32 MyNamespace.Program::Lol(valuetype [mscorlib]System.Int32) // Call the one taking the System.Int32 type.
call int32 MyNamespace.Program::Lol(int32) // Call the overload taking the built in int32 type.
call void [mscorlib]System.Console::Write(int32)
call valuetype [mscorlib]System.ConsoleKeyInfo [mscorlib]System.Console::ReadKey()
pop
ret
}
.method static int32 Lol(valuetype [mscorlib]System.Int32 x) cil managed
{
ldarg.0
ldc.i4.1
add
ret
}
.method static int32 Lol(int32 x) cil managed
{
ldarg.0
ldc.i4.1
add
ret
}
}
Decompilers like ILSpy, dnSpy, .NET Reflector, etc. can be misleading. They (at the time of writing) will decompile both int32 and System.Int32 as either the C# keyword int or the type System.Int32, because that's how we define integers in C#.
But int32 is the built-in value type for 32-bit signed integers (i.e. the VES has direct support for these types, with instructions like add, sub, ldc.i4.x, etc.); System.Int32 is the corresponding value type defined in the class library.
The corresponding System.Int32 type is used for boxing, and for methods like ToString(), CompareTo(), etc.
If you write a program in pure IL, you can absolutely make your own value type that contains an int32 in exactly the same way, where you're still using int32 but call methods on the custom "corresponding" value type.
.class MyNamespace.Program
{
.method hidebysig static void Main(string[] args) cil managed
{
.entrypoint
.maxstack 8
ldc.i4.0
call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
ldc.i4.m1 // -1
call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
ldc.i4.3
call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
ret
}
.method private hidebysig static void PrintWhetherGreaterThanZero(int32 'value') cil managed noinlining
{
.maxstack 8
ldarga 0
call instance bool MyCoolInt32::IsGreaterThanZero()
brfalse.s printOtherMessage
ldstr "Value is greater than zero"
call void [mscorlib]System.Console::WriteLine(string)
ret
printOtherMessage:
ldstr "Value is not greater than zero"
call void [mscorlib]System.Console::WriteLine(string)
ret
}
}
.class public MyCoolInt32 extends [mscorlib]System.ValueType
{
.field assembly int32 myCoolIntsValue;
.method public hidebysig bool IsGreaterThanZero()
{
.maxstack 8
ldarg.0
ldind.i4
ldc.i4.0
bgt.s isNonZero
ldc.i4.0
ret
isNonZero:
ldc.i4.1
ret
}
}
This is no different from the System.Int32 type, except that the C# compiler doesn't consider MyCoolInt32 the corresponding type for int32; to the CLR, it doesn't matter. This will fail PEVerify.exe, but it'll run just fine.
Decompilers will show casts and apparent pointer dereferences when decompiling the above, because they don't consider MyCoolInt32 and int32 related either.
But functionally, there's no difference, and there's no magic going on behind the scenes in the CLR.

Related

Get type of Key and Value from a Map variable?

Given a Map variable, how can I determine the type of Key and Value from it?
For example:
void doSomething(Map m) {
  print('m: ${m.runtimeType}');
  print('keys: ${m.keys.runtimeType}');
  print('values: ${m.values.runtimeType}');
  print('entries: ${m.entries.runtimeType}');
}

void main() async {
  Map<String, int> m = {};
  doSomething(m);
}
This will print
m: _InternalLinkedHashMap<String, int>
keys: _CompactIterable<String>
values: _CompactIterable<int>
entries: MappedIterable<String, MapEntry<String, int>>
But how can I get the actual type of Key and Value (i.e. String and int), so that I can use them in type checking code (i.e. if( KeyType == String ))?
You cannot extract the type parameters of a class if it doesn't provide them to you, and Map does not.
An example of a class which does provide them is something like:
class Example<T> {
  Type get type => T;
  R withType<R>(R Function<X>() callback) => callback<T>();
}
If you have an instance of Example, you can get to the type parameter, either as a Type (which is generally useless), or as a type argument which allows you to do anything with the type.
Alas, providing access to type variables that way is very rare in most classes.
You can possibly use reflection if you have access to dart:mirrors, but most code does not (it doesn't work with ahead-of-time compilation, which includes all web code, or in Flutter programs).
You can try to guess the type by trying types that you know (like map is Map<dynamic, num>, then map is Map<dynamic, int> and map is Map<dynamic, Never>). If the first two are true and the last one is false, then the value type is definitely int. That only works if you know all the possible types.
It does work particularly well for platform types like int and String because you know for certain that their only subtype is Never.
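As a rough sketch of that probing technique (my illustration, not from the original answer, assuming the value type is one of a known set):
// Probes whether a map's value type is exactly int, relying on the fact
// that int's only subtype is Never.
bool hasIntValues(Map<dynamic, dynamic> map) =>
    map is Map<dynamic, int> && map is! Map<dynamic, Never>;

void main() {
  print(hasIntValues(<String, int>{}));   // true
  print(hasIntValues(<String, num>{}));   // false: num is wider than int
  print(hasIntValues(<String, Never>{})); // false: would match any value type
}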
If you can depend on the static type instead of the runtime type, you could use a generic function:
Type mapKeyType<K, V>(Map<K, V> map) => K;
Otherwise you would need to have a non-empty Map and inspect the runtime types of the actual elements.
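For that last approach, a minimal sketch (my illustration):
// Falls back to inspecting the runtime types of actual entries;
// only works for a non-empty map.
void printEntryTypes(Map<dynamic, dynamic> map) {
  if (map.isEmpty) {
    print('cannot tell: no entries to inspect');
    return;
  }
  print('first key type: ${map.keys.first.runtimeType}');
  print('first value type: ${map.values.first.runtimeType}');
}

void main() {
  printEntryTypes(<String, int>{'a': 1}); // String, then int
}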

is there a difference between implicit cast vs 'as' keyword in dart?

Is there any difference between using an implicit cast to cast in dart vs the 'as' keyword? Will they result in the same (or similar) runtime error if the type is not as expected?
For example:
dynamic foo = "blah";
String boo = foo; // is this
String boo2 = foo as String; // the same as this?
No. And yes.
TL;DR: Don't worry about the difference, just do what reads the best.
If your program is correct, and the casts will succeed, then there is unlikely to be any difference.
When inferring types, String boo = foo; will infer the type of foo with a context type of String. If the resulting static type of foo then turns out to be dynamic, this implies an implicit downcast from dynamic to String.
For String boo = foo as String;, the static type of foo is inferred with no context type. No matter what the resulting static type is, it will be cast to String at run-time.
You can see a difference between these two if you have a more complicated expression than just the variable foo:
T first<T extends dynamic>(List<T> list) => list.first;
String boo = first([1]); // <- compile-time error
String boo2 = first([1]) as String;
With this example, you get a compile-time error in the boo line because the compiler knows that the list should be a List<String>. There is no error in the boo2 line because the list only needs to be a List<dynamic>, and whatever first returns is then dynamically cast to String.
A more contrived example would be:
T firstOrDefault<T extends dynamic>(List<T> list) {
  if (list.isEmpty) {
    // Invent some default values for known types.
    if (null is T) return null as T;
    if (0 is T) return 0 as T;
    if (0.0 is T) return 0.0 as T;
    if ("" is T) return "" as T;
    if (false is T) return false as T;
    throw UnsupportedError("No default value for the needed type");
  }
  return list.first;
}
String boo = firstOrDefault([]); // <- returns "", with null safety.
String boo2 = firstOrDefault([]) as String; // <- returns null, throws.
(Doing that kind of type-parameter specialization is not a recommended programming style. It's too fragile precisely because it can be affected in unpredictable ways by subtle changes to static types.).
Ignoring inference and static checking, there is not much difference at run-time. If foo is just a simple expression with static type dynamic, then the language requires downcast to String in both situations.
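To make that run-time equivalence concrete, here is a small sketch (mine, not from the answer): with a plain dynamic-typed variable, both forms perform the same downcast and fail the same way:
void main() {
  dynamic foo = 42;
  try {
    String boo = foo; // implicit downcast from dynamic
    print(boo);
  } on TypeError catch (e) {
    print('implicit cast failed: $e');
  }
  try {
    String boo2 = foo as String; // explicit cast
    print(boo2);
  } on TypeError catch (e) {
    print('explicit cast failed: $e');
  }
}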
However, the Dart2JS web compiler can enable unsound optimizations which basically omit implicit downcasts entirely (assuming, as an "optimization", that they would have succeeded) and then go on with potentially type-unsound values flowing around.
For that reason, some people (mainly those coding for the web) may prefer to use implicit downcasts over explicit downcasts.
Dart with null safety only has implicit downcasts from dynamic.
You can always force an implicit downcast from any type by doing:
String boo3 = foo as dynamic;
The as dynamic is a free up-cast, it has no effect at run-time (it can't fail and the compiler knows that), so all it does is change the static type of the expression ... to something which introduces an implicit downcast, which the dart2js compiler will then (unsoundly) ignore as well.
(Use with caution, as with everything involving dynamic. Also, the analyzer might warn about an "unnecessary up-cast".)

In Dart, given the nullable type `T?`, how do I get the non-nullable one `T`

Given some nullable type T?, how do I get the corresponding non-nullable one T ?
For example:
T? x<T extends int?>(T? value) => value;
Type g<T>(T Function(T) t) => T;
Type type = g(x);
print(type); // Prints "int?"
Now I want to get the non-nullable type. How do I create the function convert so that:
Type nonNullableType = convert(type);
print(nonNullableType); // Prints "int"
If you have an instance of T?, and you're trying to do something where the expected type is T, you can use the null assertion operator (as in value!) wherever Dart is showing an error. It is not exactly a conversion from T? to T, it's just a shortcut to do a null check.
In general, you do not. There is no simple way to strip the ? off a type, or destructure types in other ways. (You also can't find the T of a type you know is a List<T> at run-time.)
If you have the type as a Type object, you can do nothing with it. Using a Type object is almost never what you need.
If you have the type as a type parameter, then the type system doesn't actually know whether it's nullable. Example:
void foo<T>() { ... here T can be nullable or non-nullable ... }
Even if you test null is T to check that the type is actually nullable, the type system doesn't get any smarter; that's not one of the tests it can derive type information from.
The only types you can improve on are variable types (or rather, the type of a single value currently stored in a variable). So, if you have T x = ...; and you do if (x != null) { ... x is not null here }, you can promote the variable to T&Object, but that's only an intermediate type to allow you to call members on the variable, it's not a real type that you can capture as a type variable or a variable type. It won't help you.
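A short sketch of that promotion (my illustration): after a null check, the variable can be used where a non-nullable Object is required, but the type variable T itself is unchanged:
// Inside the branch, item is promoted to T & Object, so it can be added
// to a List<Object>; T itself stays potentially nullable.
int countNonNull<T>(List<T> items) {
  final nonNull = <Object>[];
  for (final item in items) {
    if (item != null) {
      nonNull.add(item); // allowed only because of the promotion
    }
  }
  return nonNull.length;
}

void main() {
  print(countNonNull<int?>([1, null, 3])); // 2
}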
All in all, it can't be done. When you have the nullable type, it's too late, you need to capture it before adding the ?.
What problem are you actually trying to solve?
If you have an instance of T?, I think you could do:
Type nonNullableTypeOf<T>(T? object) => T;

void main() {
  int x = 42;
  int? y;
  print(nonNullableTypeOf(x)); // Prints: int
  print(nonNullableTypeOf(y)); // Prints: int
}
If you have only T? itself (the Type object), then I'm not confident that there's much you can do since what you can do with Type objects is very limited. (And given those limitations, it's not clear that nonNullableTypeOf ultimately would be very useful either.)
A related question: How do I check whether a generic type is nullable in Dart NNBD?

Why does the F# compiler give an error for one case but not the other?

I'm working on a platform invoke call from F#, and I am getting a compiler error I really can't make that much sense out of. First, let me show the C signature of what I am doing:
int Foo(
ULONG_PTR *phHandle,
DWORD flags
);
In F#, I think the correct way to invoke this natively is as so:
[<DllImport("somedll.dll")>]
static extern int APlatformInvokeCall
    (
        [<Out>] nativeint& phHandle,
        uint32 flags
    )
If I try to call this in a class, I get a compilation error when calling it like so:
type Class1() =
    [<DllImport("somedll.dll")>]
    static extern int APlatformInvokeCall
        (
            nativeint& phHandle,
            uint32 flags
        )

    member this.Foo() =
        let mutable thing = nativeint 0
        APlatformInvokeCall(&thing, 0u) |> ignore
        thing
The error is:
A type instantiation involves a byref type. This is not permitted by the rules of Common IL.
Weirdly, when I do this all in a module, the compilation errors go away:
module Module1 =
    [<DllImport("somedll.dll")>]
    extern int APlatformInvokeCall
        (
            nativeint& phHandle,
            uint32 flags
        )

    let Foo() =
        let mutable thing = nativeint 0
        APlatformInvokeCall(&thing, 0u) |> ignore
        thing
Why does this compile as a module, but not as a class?
I don't think it's valid to define an extern method within a class in F#.
If you pull up the F# 3.0 language specification and search for DllImport, near the bottom is a table listing some special attributes and how they can be used. The text for [<DllImport>] says:
When applied to a function definition in a module, causes the F# compiler to ignore the implementation of the definition, and instead compile it as a CLI P/Invoke stub declaration.
That seems to indicate that it's only valid to declare extern methods (that use [<DllImport>]) on functions defined in a module; it doesn't say anything about class members though.
I think you're running into a compiler bug. Please submit this code to fsbugs@microsoft.com so they can fix the error message emitted by the compiler -- it should really be giving you an error about defining an extern method in a class, since that's not allowed by the language spec.
Whether this is a bug notwithstanding, maybe this is what's going on: if APlatformInvokeCall were considered a static member function, that member would have a single argument of tuple type. Tuples are compiled into objects of generic type (see here, at the bottom, or 5.1.3 in the spec). In this case that tuple is
System.Tuple<nativeint&, uint32>
But ECMA 335 II.9.4 says you can't instantiate generic types at byref types. This explains the error reported.
This explanation fits the fact mentioned above that Class1 works (well, compiles) if you modify the extern declaration and call to instead take a single argument. It also fits the fact that the module version works, since in that version APlatformInvokeCall is never considered a member function.
The simple solution is to check the spec. Here is the class definition grammar:
type type-name pat_opt as-defn_opt =
    class
        class-inherits-decl_opt
        class-function-or-value-defns_opt
        type-defn-elements
    end
then we have
class-function-or-value-defn:
    attributes_opt static_opt let rec_opt function-or-value-defns
    attributes_opt static_opt do expr
which doesn't allow extern.
and
type-defn-element:
    member-defn
    interface-impl
    interface-signature
which isn't what you want either.
As a result, we can see that using extern as you are trying to use it can't be done inside a class.
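Putting both answers together, a common arrangement (a sketch, not from the original thread) is to keep the extern in a module, where DllImport is valid, and wrap it from the class:
open System.Runtime.InteropServices

// The P/Invoke stub lives in a module, which the spec allows.
module private NativeMethods =
    [<DllImport("somedll.dll")>]
    extern int APlatformInvokeCall(nativeint& phHandle, uint32 flags)

// The class forwards to the module function; no byref ends up in a tuple type.
type Class1() =
    member this.Foo() =
        let mutable thing = nativeint 0
        NativeMethods.APlatformInvokeCall(&thing, 0u) |> ignore
        thing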

System::IDisposable woes

public ref class ScriptEditor : public Form
{
public:
    typedef map<UInt32, ScriptEditor^> AlMap;
    static AlMap AllocationMap;

    Form^ EditorForm;
    RichTextBox^ EditorBox;
    StatusBar^ EditorStatusBar;
    StatusBarPanel^ StatusBarLineNo;

    void Destroy() { EditorForm->Close(); }

    ScriptEditor(unsigned int PosX, unsigned int PosY);
};
The above code throws an Error C2039: '{dtor}' : is not a member of 'System::IDisposable'. I'm quite lost after having looked into articles that explain how the CLR manages memory. Any advice on getting rid of it would be appreciated. My first dabble in C++/CLI isn't going too well.
You are not getting a very good error message. But the problem is that the STL map<> template class is only suitable for unmanaged types. It requires an element type to have a destructor; managed types don't have one. In the C++/CLI language, destructors are simulated with the IDisposable interface, and that's the source of the confusing error message you see.
If you really want to use STL, you can with the STL/CLR implementation, available in VS2008. It is however pretty widely ignored, as it basically combines the disadvantages of STL (expensive value semantics) with those of managed code (no default value semantics on reference types). This web page compares it to the native .NET collection classes, with stark results, to put it mildly.
The appropriate collection class to use here is System::Collections::Generic::Dictionary<>
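A minimal sketch of that substitution (my illustration; note that Dictionary is a reference type, so the field becomes a handle):
using namespace System;
using namespace System::Collections::Generic;
using namespace System::Windows::Forms;

public ref class ScriptEditor : public Form
{
public:
    // Dictionary<> is a managed type, so it accepts handle (^) element
    // types, unlike the native std::map.
    typedef Dictionary<UInt32, ScriptEditor^> AlMap;
    static AlMap^ AllocationMap = gcnew AlMap();
};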
