I have an application that I build with GCC on a Linux host for an ARM target processor. I then run the generated ARM executable on an ARM development board I have.
I want to do some code coverage analysis:
Will gcov show code coverage if I have ARM assembly source files in my build environment?
If my build environment has some x86 assembly source files, will gcov show code coverage data for them?
Thank you.
-AD.
AFAIK, gcov works by having the compiler instrument your C or C++ source code as it is compiled.
If you have pure assembly language files, I don't think gcov ever sees them.
If it does, I'd be surprised if it understood how to safely insert code into arbitrary-target assembly, though ARM is common enough that there's a faint chance.
The problem with instrumenting assembly code is that the coverage-probe code itself may require registers, and there is no safe way to know, for an arbitrary piece of assembly, (a) which registers are available, and (b) whether inserting an instruction will break some other instruction because of the extra space (e.g., a hard-wired relative jump across the inserted instruction).
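If all you need is coverage for the C/C++ parts of the project, the usual workaround is to instrument just those files and leave the assembly alone. A rough sketch, assuming a hypothetical arm-linux-gnueabi- cross toolchain and example file names (main.c, startup_arm.o):
arm-linux-gnueabi-gcc -fprofile-arcs -ftest-coverage -c main.c -o main.o
arm-linux-gnueabi-gcc main.o startup_arm.o -lgcov -o app   # the assembly object is linked in uninstrumented
# run ./app on the board, then copy the produced *.gcda files back next to the *.gcno files on the host
arm-linux-gnueabi-gcov main.c   # reports coverage for main.c only; the assembly files simply don't appear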
How do I get Bazel to write out the assembly for a C/C++ program? I'd like to inspect the assembly without having to make many changes to my current build system.
For my specific use case I need to generate the assembly for all linked files when I run bazel(isk) test. I would like to take a look at the assembly to check on some inlining behavior.
If I can't narrow the assembly generation to only occur when testing, that's fine; it's not a big deal to have to switch that on and off.
What would be nice, however, is a way to control this for several compilers with one configuration (file). Hence the question is not about producing assembly with gcc or clang, but about Bazel.
--save_temps will do this for the configured compiler.
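For example, with a hypothetical //foo:bar target:
bazel build --save_temps //foo:bar
# the .s (and .i) temporaries are written under bazel-bin/, next to the corresponding object files
If you only want this during testing, you should be able to scope the flag to the test command in your .bazelrc:
test --save_temps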
I have been reading a lot about this great language and found this page https://github.com/dart-lang/sdk/wiki/Snapshots, but it didn't answer some of my questions:
What is the difference between a snapshot and AOT? Which is faster, and why?
Why does an AOT build generated on a Windows machine not run on Linux, while a snapshot generated with dart --snapshot_kind=kernel does? What is "kernel"? Is it slower?
What is the difference between a kernel snapshot and a JIT snapshot? Which is faster?
With snapshots, either none of your code (kernel snapshots) or only portions of it (JIT snapshots) is pre-compiled into binary format. With AOT, all of your code gets pre-compiled to binary (platform-specific) format. Pre-compiled binary code (AOT) is faster, because there is no need to compile code at runtime.
As mentioned above, AOT-compiled code is transformed into platform-specific binary code. Therefore you cannot run a program AOT-compiled for Windows on a Linux machine. With kernel snapshots, none of your code is pre-compiled, so it is portable between platforms but must be compiled at runtime. So yes, this is slower.
Also as mentioned above, with kernel snapshots none of your code gets pre-compiled. With JIT snapshots, the program is executed in a training run, and each part of the code that is executed gets pre-compiled to platform-specific binary format. So JIT snapshots are faster than kernel snapshots.
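For what it's worth, recent Dart SDKs expose these variants through dart compile (older SDKs used dart --snapshot/--snapshot_kind); roughly, with an example entry point bin/main.dart:
dart compile kernel bin/main.dart        # kernel snapshot: portable, JIT-compiled at run time
dart compile jit-snapshot bin/main.dart  # JIT snapshot: records code compiled during a training run
dart compile aot-snapshot bin/main.dart  # AOT snapshot: platform-specific machine code, run with dartaotruntime
dart compile exe bin/main.dart           # AOT code bundled with a small runtime into a standalone executable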
I collect all the .beam files of a project under a path like ~/erl_beam and run:
dialyzer ~/erl_beam/*.beam --get_warnings -o static_analysis.log
It works well.
If I do it on Erlang source code:
dialyzer --get_warnings -I <Path1> --src <Path2> -o static_analysis.log
It works, too.
So why do we have two ways to run static analysis on Erlang code?
Does either approach have strengths or weaknesses compared to the other?
The difference is very small.
Dialyzer analysis is performed on Core Erlang. This representation can be extracted either directly from a +debug_info compiled .beam file, or by compiling a .erl file. Compilation takes time, but it is of course not the most time-consuming part of the analysis.
If you have already compiled your .erl with +debug_info it is also more convenient to analyze the resulting .beam file, as you won't have to pass any compilation-related command-line options to Dialyzer.
Dialyzer starts its analysis from either debug-compiled BEAM bytecode or from Erlang source code. However, several options work only for BEAM files (e.g., --build_plt).
Using BEAM files may be necessary if, for example, you don't have access to source files. If you have access to both BEAM and source files, you'll probably want to use the BEAM files as this will speed up the analysis slightly: Dialyzer will take much less time to parse its input. On the other hand, parsing takes significantly less time than the rest of the analysis, so don't expect to see much of a difference (I'd be surprised if it was more than 10%).
Apart from that, AFAIK, there's no difference in the type of analysis that Dialyzer performs between the two cases.
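For comparison, the two workflows look roughly like this (paths and directory names are only examples):
erlc +debug_info -o ~/erl_beam src/*.erl
dialyzer ~/erl_beam/*.beam -o static_analysis.log              # BEAM-based: needs +debug_info in the .beam files
dialyzer --src -I include src/*.erl -o static_analysis.log     # source-based: pass include paths (and defines) yourself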
Reading this link: http://fdiv.net/2012/08/15/compiling-code-clang-api
I see I can load a C/C++ file and compile it to an executable just fine.
But I couldn't find any info about the case where the C/C++ source code is something my application generates, or obtains some other way.
In short, can I skip the stage where I load the C file from disk, and just plug in my own logic that supplies the source to compile to an executable?
Of course you can. And there is already a project for that purpose.
Dig through its source code as you wish :)
Cling
What is Cling
Cling is an interactive C++ interpreter, built on top of the LLVM and Clang libraries. Its advantages over the standard interpreters are that it has a command-line prompt and uses a just-in-time (JIT) compiler for compilation. Many of the developers (e.g. Mono in their project called CSharpRepl) of such kinds of software applications name them interactive compilers.
One of Cling's main goals is to provide a contemporary, high-performance alternative to the current C++ interpreter in the ROOT project, CINT. Backward compatibility with CINT is a major priority during the development.
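As a quick taste, assuming cling is installed and on your PATH, you can feed it C++ interactively and it JIT-compiles each line as you go:
cling                       # start the interactive interpreter
# at the [cling]$ prompt you can then type things like:
#   #include <cstdio>
#   printf("hello from JIT-compiled code\n");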
I'm looking at some code compiled for iOS in Xcode (so compiled for ARM with gcc), and as far as I can see the compiler has never used ARM's feature of allowing arbitrary instructions to have a condition attached to them, but instead always branches on a condition, as would be the case on Intel and other architectures.
Is this simply a restriction of GCC (I can understand that it might be: the "condition = branch" assumption may be embedded at too high a level in the compiler architecture to allow otherwise), or is there a particular optimisation flag that needs to be turned on to allow compilation of conditional instructions?
(Obviously I appreciate I'm making big assumptions about where use of conditional instructions "ought" to be used and would actually be an optimisation, but I have experience of programming earlier ARM chips and using and analysing the output of Acorn's original ARM C compiler, so I have a rough idea.)
Update: Having investigated this more thanks to the information below, it turns out that:
Xcode compiles in Thumb-2 mode, in which conditional execution of arbitrary instructions is not available;
Under some circumstances, it does however use the ITE (if-then-else) instruction to effectively produce instructions with conditional execution.
Seeing some actual assembly would make things clear, but I suspect that the default settings for iOS compilation prefer generation of Thumb code instead of ARM for better code density. While there are pseudo-conditional instructions in Thumb32 aka Thumb-2 (supported in ARMv7 architecture via the IT instruction), the original Thumb16 only has conditional branches. Also, even in ARM mode there are some instructions that cannot be conditional (e.g. many NEON instructions use the extended opcode space with condition field set to NV).
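One quick way to see the difference yourself is to compile the same file to assembly for both instruction sets and compare the output (toolchain prefix and file name are just examples; mainline GCC uses -marm/-mthumb, while Apple's old GCC used -mno-thumb):
arm-none-eabi-gcc -O2 -mthumb -S cond.c -o cond_thumb.s   # Thumb-2: look for it/ite blocks in front of conditional ops
arm-none-eabi-gcc -O2 -marm   -S cond.c -o cond_arm.s     # ARM mode: look for predicated instructions such as movne, addeq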
Yes, gcc does not really produce optimal code WRT conditional instructions. It works well in the simplest cases, but real code suffers from some pointless slowdowns that can be avoided in hand-coded ARM asm. Just to give you a rough idea, I was able to get a 2x speedup for a very low-level graphics blit method by doing the read/write and copy logic in ARM asm instead of the C code emitted by gcc. But keep in mind that this optimization is only worth it for the most heavily used parts of your code. It takes a lot of work to write well-optimized ARM asm, so don't even attempt it unless there is a real benefit.
The first thing to keep in mind is that Xcode uses Thumb mode by default, so in order to generate ARM asm you will need to add the -mno-thumb option to the module-specific options for the specific .c file that will contain the ARM asm. Once the ARM asm is being emitted, you will want to conditionally compile the asm statements as indicated in the answer to the following question:
ARM asm conditional compilation question
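If you go that route, you can double-check that the file really was built as ARM rather than Thumb by disassembling its object file, e.g. (object file name is hypothetical):
otool -tv blit_arm.o | head   # verify that 32-bit ARM code, not Thumb, was emitted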