Getting actual value of local variables in llvm - clang

Suppose I have this example:
int a = 0, b = 0;
where a and b are local variables, and I then modify their values, for example:
a++;
b++;
I need to get their values at this point in the code while it is running under MCJIT.
By "value" I don't mean the LLVM Value class, but the actual integer (or whatever the type may be).

You need to return the value from a JITed LLVM function in order to retrieve it from the code invoking MCJIT.
Check out this Kaleidoscope example.
The relevant code is in HandleTopLevelExpression():
if (FunctionAST *F = ParseTopLevelExpr()) {
  if (Function *LF = F->Codegen()) {
    // JIT the function, returning a function pointer.
    void *FPtr = TheHelper->getPointerToFunction(LF);

    // Cast it to the right type (takes no arguments, returns a double) so we
    // can call it as a native function.
    double (*FP)() = (double (*)())(intptr_t)FPtr;
    fprintf(stderr, "Evaluated to %f\n", FP());
  }
}
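The same pattern works for other types. A minimal, hypothetical sketch for an int result, assuming you generate a wrapper function (WrapperFn below, an llvm::Function* of your own making) that performs the increments and returns the final value of a as an i32:

// Hypothetical: WrapperFn is equivalent to
//   int wrapper() { int a = 0; a++; return a; }
void *FPtr = TheHelper->getPointerToFunction(WrapperFn);
int (*IntFP)() = (int (*)())(intptr_t)FPtr;
fprintf(stderr, "a = %d\n", IntFP());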

Put a breakpoint after the statement whose effect you want to inspect, then print the variable in the console with: (lldb) po <variable name>.
A watchpoint is probably a better fit for your requirement, though; add one for the variable with: watchpoint set variable <variable key path>.
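For example (hypothetical file and line number, and this assumes the code was compiled with debug info the debugger can see):

(lldb) breakpoint set --file main.cpp --line 4
(lldb) run
(lldb) po a
(lldb) watchpoint set variable a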

Understanding difference between int? and int (or num? and num) [duplicate]

After defining a map (with letters as keys and Scrabble tile scores as values)
Map<String, int> letterScore // I'm omitting the rest of the declaration
when I experiment with this function (in DartPad)
int score(String aWord) {
  int result = 0;
  for (int i = 0; i < aWord.length; ++i) {
    result += letterScore[aWord[i]];
  }
  return result;
}
I consistently get error messages, regardless of whether I experiment by declaring the variables as num or int:
Error: A value of type 'int?' can't be assigned to a variable of type 'num' because 'int?' is nullable and 'num' isn't. [I got this after declaring all the numerical variables as int]
Error: A value of type 'num' can't be returned from a function with return type 'int'.
Error: A value of type 'num?' can't be assigned to a variable of type 'num' because 'num?' is nullable and 'num' isn't.
I understand the difference between an integer and a floating-point (or double) number; it's int vs. int? and num vs. num? that I don't understand, as well as which form to use when declaring variables. How should I declare and use int or num variables to avoid these errors?
Take this for example:
int x;     // x has the value null
int x = 0; // x is initialized to zero
Without null safety, both of the above lines compile fine. But if you enable Dart's null-safety feature, which you should, they behave differently:
int x;     // compilation error: "The non-nullable variable must be assigned before it can be used"
int x = 0; // no error
This is the compiler's way of warning you, at compile time, wherever your variable could be null. Awesome.
But what happens if a variable must start out null because you don't know its value at compile time?
int? x; // compiles fine because it's a nullable variable
The ? is how you tell the compiler that this variable is allowed to be null. However, once you declare a variable as nullable, the compiler will require you to check for null every time you use it:
int? x;
print(x?.toString() ?? "0");
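Applied to the question's score function: a Dart map lookup returns a nullable value (letterScore[aWord[i]] has static type int?, because the key might be absent), which is exactly what triggers the first error above. A minimal sketch of one fix, choosing (as an assumption for illustration) to score unknown letters as 0:

// letterScore as in the question (abbreviated here)
const Map<String, int> letterScore = {'a': 1, 'b': 3};

int score(String aWord) {
  int result = 0;
  for (int i = 0; i < aWord.length; ++i) {
    // letterScore[...] is int?; ?? substitutes 0 when the key is absent
    result += letterScore[aWord[i]] ?? 0;
  }
  return result;
}

Using letterScore[aWord[i]]! instead would keep the code shorter, but it throws at run time if a letter is missing from the map.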
Further reading:
Official Docs: https://dart.dev/null-safety/understanding-null-safety
Null-aware operators: https://dart.dev/codelabs/dart-cheatsheet

Why can't Object.runtimeType be used in an as expression?

According to the Dart docs for Object.runtimeType, the field's type is Type, which is confusing because I get an error from the compiler complaining that this field is not a type.
See this sample code:
final double first = 1.0;
final int second = 2;
final third = second as double; // works fine, unlike declaration below.
assert(first.runtimeType == double); // true
final fourth = second as first.runtimeType;
The last line throws this compile-time error:
The name 'first.runtimeType' isn't a type, so it can't be used in an 'as' expression.
The sample code shows that first.runtimeType == double, so wouldn't it follow that _ as first.runtimeType is equivalent to _ as double?
I think it is actually simple: runtimeType is only available at run time and cannot be resolved by the compiler's static analysis. The type operand of an as expression must be known at compile time, whereas first.runtimeType is an ordinary expression that is only evaluated at run time.
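A minimal sketch of what the compiler will accept instead (variable names mirror the question; this is an illustration, not the only option): use a type literal with as, or let an is check promote the variable:

final Object second = 2;
final fourth = second as int; // OK: 'int' is a type literal, known at compile time

if (second is double) {
  // 'second' is promoted to double inside this branch by the 'is' check
  print(second.toStringAsFixed(1));
}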

Why can't I convert a Number into a Double?

weight is a field (a Number in Firestore), set to 100.
int weight = json['weight'];
double weight = json['weight'];
The int weight version works fine and returns 100 as expected, but the double weight version crashes (Object.noSuchMethod exception) rather than returning 100.0, which is what I expected.
However, the following works:
num weight = json['weight'];
weight.toDouble();
When 100 is parsed from Firestore (which does not actually have a separate integer type, but converts the stored number), it will by default be parsed as an int.
Dart does not automatically "smartly" cast those types. In fact, you cannot cast an int to a double, which is the problem you are facing. If that were possible, your code would just work fine.
Parsing
Instead, you can parse it yourself:
double weight = json['weight'].toDouble();
Casting
What also works is parsing the JSON to a num and then assigning it to a double, which casts the num to a double.
double weight = json['weight'] as num;
This seems a bit odd at first and in fact the Dart Analysis tool (which is e.g. built in into the Dart plugin for VS Code and IntelliJ) will mark it as an "unnecessary cast", which it is not.
double a = 100; // this will not compile
double b = 100 as num; // this will compile, but is still marked as an "unnecessary cast"
double b = 100 as num compiles because num is the superclass of double, and Dart casts super to sub types even without explicit casts.
An explicit cast would be the following:
double a = 100 as double; // does not compile because int is not the super class of double
double b = (100 as num) as double; // compiles, you can also omit the double cast
Here is a nice read about "Types and casting in Dart".
Explanation
What happened to you is the following:
double weight;
weight = 100; // cannot compile because 100 is considered an int
// is the same as
weight = 100 as double; // which cannot work as I explained above
// Dart adds those casts automatically
You can do it in one line:
double weight = (json['weight'] as num).toDouble();
You can parse the data as shown below. Here document is a Map<String, dynamic>:
double opening = double.tryParse(document['opening'].toString());
In Dart, int and double are separate types, both subtypes of num.
There is no automatic conversion between number types. If you write:
num n = 100;
double d = n;
you will get a run-time error. Dart's static type system allows unsafe down-casts, so the unsafe assignment of n to d (unsafe because not all num values are double values) is treated implicitly as:
num n = 100;
double d = n as double;
The as double checks that the value is actually a double (or null), and throws if it isn't. If that check succeeds, then it can safely assign the value to d since it is known to match the variable's type.
That's what's happening here. The actual value of json['weight'] (likely with static type Object or dynamic) is the int object with value 100. Assigning that to int works. Assigning it to num works. Assigning it to double throws.
The Dart JSON parser parses numbers as integers if they have no decimal or exponent parts (0.0 is a double, 0e0 is a double, 0 is an integer). That's very convenient in most cases, but occasionally annoying in cases like yours where you want a double, but the code creating the JSON didn't write it as a double.
In cases like that, you just have to write .toDouble() on the values when you extract them. That's a no-op on actual doubles.
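A quick illustration of that parsing rule using dart:convert (a minimal sketch; the JSON string is made up for the example):

import 'dart:convert';

void main() {
  final m = jsonDecode('{"a": 1, "b": 1.0, "c": 1e0}');
  print(m['a'] is int);    // true: no decimal or exponent part
  print(m['b'] is double); // true: has a decimal part
  print(m['c'] is double); // true: has an exponent part
  final double w = (m['a'] as num).toDouble(); // no-op on actual doubles
  print(w); // 1.0
}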
As a side note, Dart compiled to JavaScript represents all numbers as the JavaScript Number type, which means that all numbers are doubles. In JS compiled code, all integers can be assigned to double without conversion. That will not work when the code is run on a non-JS implementation, like Flutter, Dart VM/server or ahead-of-time compilation for iOS, so don't depend on it, or your code will not be portable.
Alternatively, simply convert the int to a double like this:
int a = 10;
double b = a + 0.0; // adding a double literal makes the result a double

Constant 'spacesLeft' inferred to have type '()', which may be unexpected Swift

I am building a Tic Tac Toe game with an AI using Xcode 8 and Swift. Here are the relevant variables I am using that are contributing to the error:
var allSpaces: Set<Int> = [1,2,3,4,5,6,7,8,9]
var playerOneMoves = Set<Int>()
var playerTwoMoves = Set<Int>()
var nextMove: Int? = nil
Inside a function defining how the AI plays, there are these variables:
var count = 0
let spacesLeft = allSpaces.subtract(playerOneMoves.union(playerTwoMoves))
The latter results in the compiler warning:
Constant 'spacesLeft' inferred to have type '()', which may be unexpected
There is an if statement just below that says:
if allSpaces.subtract(playerOneMoves.union(playerTwoMoves)).count > 0 {
  nextMove = spacesLeft[spacesLeft.startIndex.advancedBy(Int(arc4random_uniform(UInt32(spacesLeft.count))))]
}
The condition gives the following error:
Value of tuple type '()' has no member 'count'
The statement gives the following error:
Type '()' has no subscript members
I am struggling to find a solution.
subtract modifies the Set in place and doesn't return a value; you want subtracting instead.
For the first warning: subtract returns Void, so use subtracting:
let spacesLeft = allSpaces.subtracting(playerOneMoves.union(playerTwoMoves))
For the second error: advancedBy is deprecated, so change the lookup like this:
if spacesLeft.count > 0 {
  nextMove = spacesLeft[spacesLeft.index(spacesLeft.startIndex, offsetBy: Int(arc4random_uniform(UInt32(spacesLeft.count))))]
}
Set.subtract is a mutating function, so it modifies the Set in place, and its return value is Void, which is just a type alias for the empty tuple, (); hence the warning.
You should call Set.subtracting, which is the non-mutating version of subtract and returns a new Set.
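A minimal standalone sketch contrasting the two (example values are made up):

var a: Set = [1, 2, 3, 4]
let b: Set = [2, 4]
a.subtract(b)               // mutates 'a' in place; returns Void
let c = a.subtracting([1])  // leaves 'a' untouched; returns a new Set
print(a) // [1, 3] (element order not guaranteed)
print(c) // [3]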
The subtract(_:) function is a mutating function, so it mutates the Set you're using to call it.
From the Apple docs:
subtract(_:)
Removes the elements of the given set from this set.
The reason you're getting the errors is that this function returns Void, which in Swift is a typealias for an empty tuple (from Swift's source code). Since Void has no subscripts and no count property, you get those errors.
You should instead take a look at the subtracting(_:) function, which returns a new Set.
From the Apple docs:
subtracting(_:)
Returns a new set containing the elements of this set that do not occur in the given set.

Why must a struct value be mutable to set an indexed property?

Consider the following program:
[<Struct>]
type Grid2D<'T> =
    val RowLength : int
    val Data : 'T[]
    new(rowLength, data) = { RowLength = rowLength; Data = data }
    member this.Item
        with get(rowIndex, columnIndex) =
            this.Data.[rowIndex * this.RowLength + columnIndex]
        and set(rowIndex, columnIndex) value =
            this.Data.[rowIndex * this.RowLength + columnIndex] <- value

let g = Grid2D(3, Array.zeroCreate(3 * 3))
g.[1, 1] <- 4
The last line fails to compile with:
error FS0256: A value must be mutable in order to mutate the contents
or take the address of a value type, e.g. 'let mutable x = ...'
However, if the [<Struct>] attribute is removed, and Grid2D is thus a reference type, then the program compiles.
Interestingly, inlining the property setter by hand also compiles fine:
g.Data.[1 * g.RowLength + 1] <- 4
So why is calling the setter a compile error?
Note: I am aware that this compiler error exists to make it impossible to mutate a non-mutable value of a struct by setting one of its fields. But I'm clearly not mutating the struct here.
I'm going to take a guess that it's the second part of that error message that applies: "or take the address of a value type". It's not the mutability but the address of the value type that must be taken so that the setter refers to the same value g when mutating Data.
It's probably impossible for the compiler to consistently prove that a given setter doesn't actually mutate the struct, so it doesn't try, and simply always emits the error when an assignment targets a property of a non-mutable struct binding.
In other words, the question becomes: why does F# assume property setters mutate their instance? Well, probably because that's usually what property setters do.
Inlining the property setter by hand works in this case because the target of the assignment is then an element of the array returned by a getter, not a property of the struct itself.
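Following the compiler's own hint, a minimal sketch of the workaround: bind the struct value mutably so its address can be taken.

// Same Grid2D as above; the only change is 'let mutable'
let mutable g = Grid2D(3, Array.zeroCreate(3 * 3))
g.[1, 1] <- 4  // now compiles: the setter can be called on an addressable value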
