Dumping contents of lost memory reported by Valgrind

When I run valgrind --leak-check=yes on a program, a few bytes of lost memory are reported. Is it possible to view the contents of this memory (i.e. dump the data that is stored in it)?

You can do that with the latest version of Valgrind (3.8.1):
Start your executable, activating the gdbserver at startup:
valgrind --vgdb-error=0 ... <your program>
Then, in another window, connect a gdb to Valgrind (following the instructions
printed by Valgrind).
Then put a breakpoint at a relevant place (e.g. at the end of main)
and use the gdb
continue
command until the breakpoint is reached.
Then do a leak search from gdb:
monitor leak_check full reachable any
Then list the address(es) of the reachable blocks of the relevant loss record number:
monitor block_list <loss_record_nr>
You can then use gdb features to examine the memory of the given address(es).
Note also the potentially interesting command "who_points_at",
which is useful if you want to find out what still holds a pointer to this memory.
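As a rough sketch, the whole sequence might look like this (the program name, loss record number, and block address below are placeholders, not output from a real run; a breakpoint on exit stands in for "a relevant place near the end of main"):

valgrind --leak-check=full --vgdb-error=0 ./myprog

# in a second terminal, connect gdb as Valgrind's startup message indicates:
gdb ./myprog
(gdb) target remote | vgdb
(gdb) break exit
(gdb) continue
(gdb) monitor leak_check full reachable any
(gdb) monitor block_list 7
(gdb) x/16xb 0x4a2b030
(gdb) monitor who_points_at 0x4a2b030

Here x/16xb dumps the first 16 bytes of a block address printed by block_list, and who_points_at searches for anything still pointing at it.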

What is malloc: recording malloc (but not VM allocation) stacks using lite mode

Hello, what is the following message in the Xcode debug console?
SomeApp(2389,0x1092763c0) malloc: recording malloc (but not VM allocation) stacks using lite mode
Xcode 8.3
The log message seems to come from
libmalloc-53.1.1/src/malloc.c
(the source code is available on Apple's open source site),
at line #567 - or at least search for the text "recording malloc (but not VM".
malloc_printf(ASL_LEVEL_INFO, "recording malloc (but not VM allocation) stacks to disk using standard recorder\n");
For the logging environment in general, you should have a look at the Apple documentation.
If you are worried about the log message, I'd refer to the inline comments in the source:
// Set up stack logging as early as possible to catch all ensuing VM allocations,
// including those from _malloc_printf and malloc zone setup. Make sure to set
// __syscall_logger after this, because prepare_to_log_stacks() itself makes VM
// allocations that we aren't prepared to log yet.
So I guess you should ignore it unless you want to debug memory allocations.
In order to set/unset the malloc debug environment, select Edit Scheme... from the project toolbar and enter the Diagnostics panel.
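Outside of Xcode, the same behaviour can also be toggled through the MallocStackLogging environment variable described in the malloc(3) man page; as a hypothetical example (the app path and the "lite" value are illustrative, check the man page on your macOS version):

# enable lightweight malloc stack recording for one run from Terminal
MallocStackLogging=lite ./SomeApp.app/Contents/MacOS/SomeApp

# to run without it, simply launch with the variable unset
unset MallocStackLogging
./SomeApp.app/Contents/MacOS/SomeApp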

What is causing a repeated glibc error with plink/batch job software?

I am running plink software through a PBS batch job. This error occurs when I run the job:
*** glibc detected *** /software/plink: double free or corruption (out): 0x000000018dfafca0 ***
======= Backtrace: =========
[0x7d7691]
[0x7d8bea]
[0x45f5ed]
[0x47bb11]
[0x40669a]
[0x7bdb2c]
[0x400209]
However it only occurs with one of my files (the files are between 30-60 GB), and each rerun shows the exact same backtrace. I tried running it outside the batch scheduler and received the same error again, with the same backtrace. I am just using the software (plink) and didn't write it, so most of the answers online, which are about writing and freeing memory in your own program, don't apply.
Any ideas on
what is causing this error, and
how I can fix it?
what is causing this error, and
A double-free or heap corruption inside the plink binary itself.
how I can fix it?
You can't. You can do one of two things, depending on how much you know and understand.
First, build the newest version of plink from source, and see if the problem persists.
If it does not, you are done (or at least you might hope that someone else found and fixed this problem).
If it does, you'll have to debug the problem sufficiently for either you, or plink developers to fix it. Some tools that should help: Valgrind and Address Sanitizer (note: in addition to Clang, Address Sanitizer is also included in GCC-4.8).
Once you have a good report (where the memory was allocated, and where it got corrupted), you should either fix it and submit your fix to plink developers, or give them a bug report with the allocation and corruption location and stack traces.
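A quick sketch of both approaches (the plink options, source file list, and compiler flags here are illustrative placeholders, not plink's actual build commands):

# run the existing binary under Valgrind's Memcheck
valgrind --tool=memcheck /software/plink --bfile mydata --assoc

# or rebuild from source with AddressSanitizer (GCC 4.8+ or Clang) and rerun
g++ -g -O1 -fsanitize=address -fno-omit-frame-pointer <plink sources> -o plink
./plink --bfile mydata --assoc

Either tool should then report where the corrupted block was allocated and where the invalid free or write happened, which is exactly what a useful bug report needs.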

Using massif on process which is "killed 9"

I'm trying to do memory profiling for a program which consumes too much memory and gets killed by the OS (FreeBSD) with signal 9. That happens on some specific data, so profiling it on another (e.g. smaller) data set would not give much help. When the program is killed with signal 9, massif doesn't generate any output at all. What can be done in this situation to get the memory profiled?
If you have a recent Valgrind version (>= 3.7.0),
Valgrind has an embedded gdbserver so it can be used together with gdb.
Before your application starts to run under Valgrind, you can put breakpoints.
When a breakpoint is encountered, GDB monitor commands are available
to invoke Valgrind tool specific functionalities.
For example, with Massif, you can trigger the production of a report.
With Memcheck, you can do a leak search, examine validity bits, ...
It is also possible to trigger these monitor commands directly from the shell command
line (using the Valgrind vgdb utility), as sketched below.
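For Massif, a rough sketch of that shell-driven workflow (the program name and output file are placeholders; the exact monitor command names and defaults are in the Massif manual):

valgrind --tool=massif --vgdb=yes ./myprog

# from another shell, before the process gets killed
# (add --pid=<pid> if several Valgrind processes are running):
vgdb "detailed_snapshot massif.manual.out"

# then read the snapshot as usual
ms_print massif.manual.out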

Are there any tools to check the memory and variables of a running program?

I can use Wireshark to check the network and its packets, and I have the GDB debugger to debug programs I have the source for. Are there any tools for checking a program whose source I don't have? I would like to check that program's memory usage, and ideally also inspect the values assigned to its variables or similar information. Thanks.

strategies to fix runtime errors

I was wondering what strategies you guys are using to fix runtime errors? Really appreciate if you could share some tips!
Here are some of my thoughts (possibly with the help of gdb):
When a runtime error happens because some memory is wrongly accessed, does the dumped core store the address showing where that memory is?
If I can find the address/memory whose access causes the runtime error, is it possible to find out which variable is using that address (which may be at the beginning or in the middle of that variable's memory)? And to find the nearby variables that occupy the memory just below and just above that memory block?
If all these are possible, will it help to fix the bugs?
Thanks and regards!
I use gdb's --args option to start my programs from the command-line.
Example:
gdb --args foocode --with-super-awesome-option
run
This will load the program foocode and pass the --with-super-awesome-option parameter to it. When the program fails, you'll have a ready-to-use gdb session to work within.
From there you can use the backtrace command:
bt
This will show you the chain of events (function calls) that led to your crash.
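As a hedged example with a made-up crash (the program name, option, frame number, variable, and address are illustrative, not real output):

gdb --args foocode --with-super-awesome-option
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
(gdb) frame 2
(gdb) info locals
(gdb) print some_pointer
(gdb) info symbol 0x601040

bt shows the call chain, frame selects one of its entries, info locals and print inspect the variables in that frame, and info symbol maps a raw address back to the symbol (if any) it falls into, which helps with the "which variable owns this address" question.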
