No symbols for valgrind massif dlclose() - memory

massif doesn't show any function names for functions that live in a library which is closed by dlclose().
If I remove the dlclose() call, recompile and run the program, I can see the symbols. Is there a way to get the function names without changing the source code?

The new version of valgrind (3.14) has an option that instructs valgrind to keep the symbols of dlclose'd libraries:
--keep-debuginfo=no|yes   Keep symbols etc for unloaded code [no]
                          This allows saved stack traces (e.g. memory leaks)
                          to include file/line info for code that has been
                          dlclose'd (or similar)
However, massif does not make use of this information.
You might obtain a usable heap profile report by running:
valgrind --keep-debuginfo=yes --xtree-leak=yes
and then visualise the heap memory using e.g. kcachegrind.
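
For reference, here is a minimal C sketch of the scenario (not from the original question): a program that loads a hypothetical libplugin.so, calls an allocating function plugin_alloc() from it, and then dlclose()s the handle. Both the library name and the function name are made up for illustration; build with something like cc main.c -ldl, then run it under valgrind with and without --keep-debuginfo=yes to compare the reported stack traces.

/* main.c -- minimal sketch; libplugin.so and plugin_alloc() are
 * hypothetical names used only for illustration.                    */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* look up and call a function that allocates (and leaks) memory */
    void (*plugin_alloc)(void) = (void (*)(void))dlsym(handle, "plugin_alloc");
    if (plugin_alloc)
        plugin_alloc();

    /* after this call the library's symbols are unloaded, so tools that
       resolve stack traces later can no longer name its functions       */
    dlclose(handle);
    return 0;
}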

Related

Does the JIT create an output containing native code?

In the context of the JIT compiler that acts on the Assembly (containing metadata and intermediate language):
The assembly is generated on disk by the language-specific compilation, and then the CLR performs its own, independent compilation to convert the MSIL into native code. Is there a visible output created on disk after this second compilation? A file (or files) containing binary code, or something similar?
Here is a quite explicit article where I found the answer. Basically, there is no output file; the native code is stored in dynamically allocated memory at runtime.
When managed code calls a particular method, the compiling function wakes up, looks up the intermediate code (processor-agnostic object code that's similar to the machine code), then compiles the intermediate code into instructions for the available processor. The managed code then saves those instructions in a dynamically allocated location in memory. The compiling function points back to the original method so that the two are linked: When the method in the assembly executes, it executes the processor instructions stored in memory.

Which malloc will be called?

I want to adopt jemalloc in my project. In order to call the malloc() function from jemalloc, I included jemalloc/jemalloc.h in the .cpp files. However, I inevitably also need to call some functions provided by cstdlib, so both jemalloc/jemalloc.h and cstdlib are included. I am wondering, in this case, which malloc() will be called? And how can I guarantee that the malloc() from jemalloc is the one used? Thanks in advance!
You need to link your application against the jemalloc library (add -L/path/to/jemalloc/lib -ljemalloc to the link command), which will cause the dynamic loader to resolve all calls to malloc(), free() etc. to the jemalloc versions. An easy way to tell if jemalloc is actually being used is to define MALLOC_CONF=stats_print:true in the environment, which will cause jemalloc to dump statistics to stderr just before program exit.
You have to tell the dynamic loader to use jemalloc, which you can do by setting an environment variable before running your program:
LD_PRELOAD=/path/to/lib/libjemalloc.so.1 your_program
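
If you want to double-check from inside the program which allocator you ended up with, a rough sketch like the following can help. It assumes jemalloc and its headers are installed and that jemalloc was built without a symbol prefix (otherwise the call is je_mallctl). Since mallctl() exists only in jemalloc, a successful call is a strong hint that jemalloc is the active allocator.

/* check_jemalloc.c -- rough sketch; assumes jemalloc is installed and
 * built without a symbol prefix.
 * Build with: cc check_jemalloc.c -ljemalloc                           */
#include <stdio.h>
#include <stdlib.h>
#include <jemalloc/jemalloc.h>

int main(void)
{
    void *p = malloc(1024);   /* resolves to jemalloc when linked or preloaded */

    size_t allocated, len = sizeof(allocated);
    if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
        printf("jemalloc is active, stats.allocated = %zu bytes\n", allocated);
    else
        printf("mallctl failed: jemalloc statistics are not available\n");

    free(p);
    return 0;
}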

Extra "$qqrv" appearing in symbols

Delphi XE3. I'm using the JCL Error dialog and FastMM with FullDebug turned on in my application and getting "garbage" appended to the symbols in the stack traces (both JCL and FastMM):
[74EA3D67] RaiseException
[0041815D] FastMM4.TFreedObject.VirtualMethodError$qqrv
[0054FEC5] Vcl.Controls.TWinControl.CMInvalidate$qqrr24Winapi.Messages.TMessage
when what I'd like is:
[74EA3D67] RaiseException
[0041815D] FastMM4.TFreedObject.VirtualMethodError
[0054FEC5] Vcl.Controls.TWinControl.CMInvalidate
[00548735] Vcl.Controls.TControl.WndProc
But this only happens when the app is compiled for Release; when I compile for Debug the stack trace is "clean". Since I'm seeing the same sort of "garbage" in both the FastMM and JCL reports, I don't think either library is at fault.
And I'm putting "garbage" in quotes because the $qqrv part seems to be constant while the rest of the string varies from run to run.
I have checked (and rechecked) the map file and symbols settings and the JCL symbols and I can't see anything different in the settings.
EDIT:
Not surprisingly the underlying cause is the same, as FastMM is (I think) using JCLDebug to generate the stack traces ... so fix one, fix all.
This is a bug in the .map file parser of the JCL. See http://sourceforge.net/p/fastmm/discussion/443400/thread/82b024dc/ for the detailed thread and suggested fix.
Probably your Release configuration doesn't include the Stack Frames compiler option (by default, it doesn't). Without this information compiled into the executable, what the stack trace shows are the names of the runtime package exports. The solution is to compile in Debug mode, or to turn on stack frames in the compiler options of your Release configuration.
After looking into it all, I conclude that this is not really a problem, just my misunderstanding and maybe a bit of stale code:
The $qqrv and the other text are valid and potentially useful information, so rather than finding a way to remove them, it would be better to learn how to use them. The links above give a good basis for this work.

How to make an object file that cannot be dead_stripped?

What is the easiest way to produce a Mach-O object file that does not have the SUBSECTIONS_VIA_SYMBOLS flag set, such that the linker (with -dead_strip) will not later try to cut the text section into pieces and guess which pieces are used?
I can use either a command-line option to llvm/gcc (4.2.1) that will prevent it from emitting .subsections_via_symbols in the first place, or a command-line tool that will remove the flag from an existing object file.
(Writing such a tool myself based on the Mach-O spec is an option, but if possible I'd rather not reinvent the wheel that hard).
Platform: iOS, cross-compiling from OSX with XCode 4.5.
Background: We're supplying a static library that other companies build into apps. When our library encounters a problem it produces a crash report with a stack trace and certain other key information that (if we're lucky) we get to analyze later. Typically the apps as deployed have been stripped of debug information so interpreting stack traces is a problem. If we were making the app ourselves we would just save the DWARF debug data from before stripping and use that to decode the addresses in the incoming crash reports. But we can't depend on the app makers supplying us with such data from their linking steps.
What we're doing instead is to let the crash report include the run-time address of a selected function; from that we can deduce the offset between addresses in our linker map and addresses in the crash report. We're linking our entire library incrementally into a single .o before we stuff it into an .a; since it does only one big thing, there wouldn't be much to gain from removing unused functionality from it when the app is eventually linked. Unfortunately there are a few small pieces of code in the library that are sometimes not used (alternative API entry points for the main functionality, small helper functions for interpreting our error codes, and the like), and if the app developer links with -dead_strip, the address reconstruction of crash reports is disturbed because the relative offsets in the final app differ from those in the linker map from our incremental link.
We can't realistically ask all app developers to disable dead-code stripping in their build process, so it seems a better way forward if we could mark our .o as "not dead-strippable" and have the eventual app linking respect that.
I solved it.
The output of an incremental link operation only has MH_SUBSECTIONS_VIA_SYMBOLS set if all the input objects have it set. And an object file produced from assembler input only has it set if the source contains an explicit .subsections_via_symbols directive. So one can remove the flag by linking with an empty assembler input:
echo > empty.s
$(CC) $(CFLAGS) input.o empty.s -nostdlib -Wl,-r -o output.o
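
For completeness, the "write a small tool" route mentioned in the question is also not much code. The following is a rough sketch rather than a production tool: it assumes a thin (non-fat), native-endian Mach-O object and simply clears the MH_SUBSECTIONS_VIA_SYMBOLS bit in the header; you can inspect the result with otool -h.

/* clear_svs.c -- rough sketch of the "small tool" option: clear the
 * MH_SUBSECTIONS_VIA_SYMBOLS flag in a thin, native-endian Mach-O
 * object (32- or 64-bit).  Fat files are not handled.
 * Build on the Mac host with: clang clear_svs.c -o clear_svs           */
#include <mach-o/loader.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s object.o\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    /* mach_header and mach_header_64 keep `flags` at the same offset,
       so reading the 32-bit-sized header covers both cases             */
    struct mach_header_64 mh;
    if (fread(&mh, sizeof(struct mach_header), 1, f) != 1 ||
        (mh.magic != MH_MAGIC && mh.magic != MH_MAGIC_64)) {
        fprintf(stderr, "not a thin native-endian Mach-O file\n");
        fclose(f);
        return 1;
    }

    mh.flags &= ~MH_SUBSECTIONS_VIA_SYMBOLS;   /* drop the flag */

    /* rewrite the header in place */
    if (fseek(f, 0, SEEK_SET) != 0 ||
        fwrite(&mh, sizeof(struct mach_header), 1, f) != 1) {
        perror("rewriting header");
        fclose(f);
        return 1;
    }

    fclose(f);
    return 0;
}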

Native code execution by JVM/CLR

How does the JVM/CLR execute JIT-compiled native code? Is it by some code injection or by copying the code to executable memory? What are the system calls that allow dynamic code execution?
I can explain how we do it in CACAO VM (a research JIT-only JVM). First, the machine code for a method is generated into some heap-allocated memory block. After compilation, the final code length is known, and a chunk of executable memory is allocated using mmap and the PROT_EXEC flag (relevant CACAO code here). Then, the machine code is copied into the mmapped area. After that, many architectures require some machine-specific cache flushing mechanism. As an example, have a look at the cache-flushing function for PowerPC 64. Notably, on i386 and x86_64, there is nothing to do. After this step, the processor is ready to execute the newly-generated code. Alternatively, already allocated memory pages can be marked executable with mprotect. Note that mmap/mprotect are Unix facilities.
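To make that concrete, here is a minimal, self-contained C sketch of the same pattern (it is not CACAO code): allocate a page with mmap, copy a tiny hand-written routine into it, flip the page to read+execute with mprotect, and call it through a function pointer. It assumes a Unix-like system and an x86-64 CPU, where no explicit cache flush is needed.

/* jit_demo.c -- minimal sketch of the mmap/mprotect pattern described
 * above; x86-64 and Unix only.  Build with: cc jit_demo.c -o jit_demo  */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* "mov eax, 42; ret" -- a function returning 42, in x86-64 machine code */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* 1. allocate a writable, private, anonymous page */
    void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* 2. copy the generated machine code into it */
    memcpy(mem, code, sizeof code);

    /* 3. make the page read+execute (writable+executable is often refused) */
    if (mprotect(mem, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    /* 4. jump into the freshly generated code */
    int (*fn)(void) = (int (*)(void))mem;
    printf("generated code returned %d\n", fn());

    munmap(mem, 4096);
    return 0;
}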
I don't know specifically how Java does it, but in general you'd insert "trap" opcodes into the interpreter's instruction stream. There are two opcodes in the JVM spec that seem tailor-made for this purpose.
If you want to know for sure, there's no better answer than the source: http://download.java.net/jdk6/source/
The Common Language Runtime has a method table for each type, with entries pointing either to native code or to a native stub that JIT-compiles the managed code and then fixes up the method table entry with a pointer to the just-created native code.
MSDN has a more in-depth explanation in the MethodDesc section.
This blog entry by Dave Notario explains how the CLR JIT compiler works.
