disable-randomization in gdb doesn't work: memory address still changes between runs

The gdb setting disable-randomization is on, but the memory address of a variable still changes from run to run.
First run:
(gdb) p &g[v].bIsDummyVertex
$13 = (bool *) 0x143d6a7c
Second run:
(gdb) p &g[v].bIsDummyVertex
$13 = (bool *) 0x143d63e4
I was expecting the same value for the memory location because disable-randomization is on.
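For what it's worth, you can confirm both the gdb setting and the system-wide ASLR state before the run (a sketch, assuming a Linux target; note that disable-randomization only affects the process gdb itself launches):

(gdb) show disable-randomization
$ cat /proc/sys/kernel/randomize_va_space    # 0 means ASLR is disabled system-wide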

How to show an environment variable (that I did not set) in lldb?

I would like to inspect the defined environment variables in various moments during the run of a program.
settings show target.env-vars only shows me env-vars I've set in lldb, not all the environment variables available to the running process.
How do I read the value of an environment variable that is available to the running process (either because the program was started with that variable or the variable was defined during the run) in lldb?
lldb doesn't keep track of the environment actually live in the program, but on most Unix systems you can access it through the environ global variable, so something like this will work:
(lldb) expr int idx = 0; while (1) { char *str = ((char **)environ)[idx]; if (str == (char *) NULL) break; else printf("%d: %s\n", idx++, str); }
If you do this a lot, then you can put:
command alias print_real_env expr int idx = 0; while (1) { char *str = ((char **)environ)[idx]; if (str == (char *) NULL) break; else printf("%d: %s\n", idx++, str); }
in your ~/.lldbinit file, and then just run print_real_env.
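For reference, here is the same loop as a standalone C program rather than an lldb expression (a minimal sketch; environ is the POSIX global that the expression above relies on):

#include <stdio.h>

extern char **environ;   /* POSIX: the process's live environment block */

int main(void) {
    /* Walk the NULL-terminated array of "NAME=value" strings. */
    for (int idx = 0; environ[idx] != NULL; idx++)
        printf("%d: %s\n", idx, environ[idx]);
    return 0;
}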

Does the Clang compiler optimize static functions? If so, how do I stop it from optimizing them?

I have a function defined as below:

static void func1(arg1, arg2) {
    /* ... */
}

This is called from another static function:

static int func2(args) {
    /* ... */
    func1(args);
    /* ... */
}
In this scenario, when I debug with gdb and put a breakpoint on func1, gdb reports the breakpoint inside func2:
(gdb) b func1
(gdb) info b
Num     Type           Disp Enb Address    What
1       breakpoint     keep y   0x0dde3d3a in func2 at file1_main.c:42
        breakpoint already hit 1 time
(gdb)
My compiler is Clang and the language is C.
How do I stop Clang from (temporarily) optimizing just the functions of interest?
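One per-function approach (a sketch; noinline and optnone are both function attributes Clang supports, and the int argument types here are placeholders) is to mark only the functions of interest, so the rest of the file still gets optimized:

/* noinline keeps func1 as a real, breakpoint-able symbol instead of
   letting it be inlined into func2; optnone additionally disables
   optimizations inside its body. */
static void __attribute__((noinline, optnone)) func1(int arg1, int arg2) {
    /* ... */
}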

Are box types in Rust automatically freed when they have no references?

In the following code, does the "box 5i" get properly freed when exiting the "main" scope? The wording in their pointer guide seems to indicate that variables with box types act as if there's an automatic "free()" call when the variable goes out of scope. However, calling "free()" on "a" in this code would only end up freeing the "box 8i" that is on the heap. What happens to the "box 5i" that "a" was originally pointing to?
fn foo(a: &mut Box<int>) {
    *a = box 8i;
}

fn main() {
    let mut a = box 5i;
    println!("{}", a); // -> "5"
    foo(&mut a);
    println!("{}", a); // -> "8"
}
By default, overwriting a memory location will run the destructor of the old value. For Box<...> this involves running the destructor of the contents (which is nothing for an int) and freeing the allocation, so if a has type &mut Box<T>, *a = box value is equivalent to (in C):
T_destroy(**a);
free(*a);
*a = malloc(sizeof(T));
**a = value;
In some sense, the answer to your question is yes, because the type system guarantees that *a = box ... can only work if a is the only reference to the old Box. Unlike most garbage-collected/managed languages, though, this is all determined statically rather than dynamically (it is a direct consequence of ownership and linear/affine types).
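To make the C analogy concrete, here is a runnable version with the generic T specialized to int (a sketch with hypothetical names; the int "destructor" is a no-op, matching the point above):

#include <stdio.h>
#include <stdlib.h>

/* Mirrors what *a = box 8i does when a: &mut Box<int>:
   destroy the old box, then install a fresh allocation. */
void replace_box(int **a, int value) {
    /* int has no destructor, so there is nothing to run for the contents. */
    free(*a);                  /* free the old allocation (the old "box 5i") */
    *a = malloc(sizeof(int));  /* allocate the new box */
    **a = value;               /* move the new value in */
}

int main(void) {
    int *a = malloc(sizeof(int));
    *a = 5;
    printf("%d\n", *a);   /* -> 5 */
    replace_box(&a, 8);
    printf("%d\n", *a);   /* -> 8 */
    free(a);              /* the surviving box is freed when "main" exits */
    return 0;
}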

`lldb` gives random output on same function call [duplicate]

I have a universal iOS app targeting iOS SDK 6.1, and the compiler is set to Apple LLVM compiler 4.2. When I place a breakpoint in my code and run the following, I get weird results for sin(int).
For reference, sin(70) = 0.7739 (70 is in radians).
(lldb) p (double)sin(70)
(double) $0 = -0.912706376367676 // initial value
(lldb) p (double)sin(1.0)
(double) $1 = 0.841470984807897 // reset the value sin(int) will return
(lldb) p (double)sin(70)
(double) $2 = 0.841470984807905 // returned same as sin(1.0)
(lldb) p (double)sin(70.0)
(double) $3 = 0.773890681557889 // reset the value sin(int) will return
(lldb) p (double)sin(70)
(double) $4 = 0.773890681558519
(lldb) p (double)sin((float)60)
(double) $5 = -0.304810621102217 // casting works the same as appending a ".0"
(lldb) p (double)sin(70)
(double) $6 = -0.30481062110269
(lldb) p (double)sin(1)
(double) $7 = -0.304810621102223 // every sin(int) behaves the same way
Observations:
The first value for sin(int) in a debug session is always -0.912706376367676.
sin(int) will always return the same value that was returned from the last executed sin(float).
If I replace p with po, or use expr (e.g. expr (double)sin(70)), I get exactly the same results.
Why is the debugger behaving like this?
Does this mean that I should type cast every single parameter each time I call a function?
Some more interesting behavior with NSLog:
(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // new initial value
(lldb) expr (void)NSLog(@"%f", (float)sin(70.0))
0.773891
(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // does not return the previous sin(float) value
(lldb) p (double)sin(70)
(double) $0 = 1.48539705402154e-312 // sin(int) affected by sin(float) differently
(lldb) p (double)sin(70.0)
(double) $1 = 0.773890681557889
(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // not affected by sin(float)
You're walking into the wonderful world of default argument promotions in C. Remember, lldb doesn't know what the argument types or return type of sin() is. The correct prototype is double sin (double). When you write
(lldb) p (float) sin(70)
there are two problems with this. First, you're providing an integer argument, and the C default promotion rules are going to pass this as an int, a 4-byte value on the architectures in question. A double, besides being 8 bytes, is an entirely different encoding, so sin is getting garbage input. Second, sin() returns a double, an 8-byte value on these architectures, but you're telling lldb to grab 4 bytes of it and do something meaningful. If you'd called p (float)sin((double)70) (so only the return type was incorrect), lldb would print a nonsensical value like 9.40965e+21 instead of 0.773891.
When you wrote
(lldb) p (double) sin(70.0)
you fixed both mistakes. The default C promotion for a floating-point type is to pass it as a double. If you were calling sinf(), you'd have problems because that function expects only a float.
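You can reproduce the same effect outside the debugger: with a prototype-less (K&R-style) declaration, the compiler applies exactly the default promotions described above. A minimal sketch (link with -lm; depending on your compiler's built-in knowledge of sin, you may need -fno-builtin to see the garbage):

#include <stdio.h>

double sin();   /* K&R-style declaration: no parameter types, so no implicit conversion */

int main(void) {
    printf("%f\n", sin(70));    /* an int is passed where sin reads a double: garbage */
    printf("%f\n", sin(70.0));  /* a double is passed: prints 0.773891 */
    return 0;
}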
If you want to provide lldb with a proper prototype for sin() and not worry about these issues, it is easy. Add this to your ~/.lldbinit file,
settings set target.expr-prefix ~/lldb/prefix.h
(I have a ~/lldb directory where I store useful python files and things like this) and ~/lldb/prefix.h will read
extern "C" {
int strcmp (const char *, const char *);
void printf (const char *, ...);
double sin(double);
}
(You can see that I also have prototypes for strcmp() and printf() in my prefix file so I don't need to cast those.) You don't want to put too many things in here: this file is prepended to every expression you evaluate in lldb, and it will slow your expression evaluation down if you put every prototype from /usr/include in there.
With that prototype added to my target.expr-prefix setting:
(lldb) p sin(70)
(double) $0 = 0.773890681557889

F# using accumulator, still getting stack overflow exception

In the following function, I've attempted to set up tail recursion via the use of an accumulator. However, I'm getting stack overflow exceptions, which leads me to believe that the way I'm setting up my function isn't enabling tail recursion correctly.
// F#: attempting to make a tail recursive call via an accumulator
let rec calc acc startNum =
    match startNum with
    | d when d = 1 -> List.rev (d::acc)
    | e when e%2 = 0 -> calc (e::acc) (e/2)
    | _ -> calc (startNum::acc) (startNum * 3 + 1)
It is my understanding that using the accumulator would allow the compiler to see that there is no need to keep all the stack frames around for every recursive call, since it can stuff the result of each pass into acc and return from each frame. There is obviously something I don't understand about how to use the accumulator correctly so that the compiler emits tail calls.
Stephen Swensen was correct in noting, in a comment on the question, that if you debug, Visual Studio has to disable tail calls (else it wouldn't have the stack frames to follow the call stack). I knew that VS did this but just plain forgot.
After getting bitten by this one, I wonder if it is possible for the runtime or compiler to throw a better exception. Since the compiler knows both that you are debugging and that you wrote a recursive function, it seems to me that it might be possible for it to give you a hint such as
'Stack Overflow Exception: a recursive function does not
tail call by default when in debug mode'
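(If you need tail calls even while building Debug, the flag can be turned back on explicitly; a sketch, assuming the standalone fsc compiler, where Program.fs is a placeholder for whatever file holds calc. In Visual Studio this corresponds to the "Generate tail calls" checkbox in the project's Build settings.)

fsc --tailcalls+ Program.fs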
It does appear that this is properly getting converted into a tail call when compiling against .NET Framework 4. Notice that in Reflector the function is translated into a while (true) loop, as you'd expect F#'s tail-call handling to do:
[CompilationArgumentCounts(new int[] { 1, 1 })]
public static FSharpList<int> calc(FSharpList<int> acc, int startNum)
{
    while (true)
    {
        int num = startNum;
        switch (num)
        {
            case 1:
            {
                int d = num;
                return ListModule.Reverse<int>(FSharpList<int>.Cons(d, acc));
            }
        }
        int e = num;
        if ((e % 2) == 0)
        {
            startNum = e / 2;
            acc = FSharpList<int>.Cons(e, acc);
        }
        else
        {
            startNum = (startNum * 3) + 1;
            acc = FSharpList<int>.Cons(startNum, acc);
        }
    }
}
Your issue isn't stemming from a lack of tail calls (if you are using F# 2.0, I don't know what the results will be). How exactly are you using this function? (With which input parameters?) Once I get a better idea of what the function does, I can update my answer to hopefully solve it.
