Obtain compiler flags in skylark - bazel

I'd like to convert a CMake-based C++ library to bazel.
As part of the current CMake project, I'm using a libclang-based code generator that parses C++ headers and generates C++ code from the parsed AST. In order to do that, I need the actual compiler flags used to build the cc_library the header is part of. The flags are passed to the code generation tool so it can use clang's preprocessor.
Is there any way I could access the compiler flags used to build a dependency from a Skylark rule or a genrule? I'm particularly interested in the include paths and defines.

We're working on it. Well, not right now, but will soon. You might want to subscribe to the corresponding issue, and maybe describe your requirements there so we take them into account when designing the API.
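For reference, later Bazel versions expose this information through the CcInfo provider. A sketch of an aspect that walks a cc_library dependency and prints its include paths and defines (assuming a Bazel version that provides CcInfo; the file and aspect names are made up for illustration):

```python
# collect_flags.bzl - a sketch, assuming a Bazel version with CcInfo.
# An aspect that collects the include paths and defines of a
# cc_library dependency, which could then be fed to a codegen tool.

def _flags_aspect_impl(target, ctx):
    if CcInfo in target:
        cc = target[CcInfo].compilation_context
        # depsets of include directories and preprocessor defines
        print("includes:", cc.includes.to_list())
        print("quote_includes:", cc.quote_includes.to_list())
        print("defines:", cc.defines.to_list())
    return []

flags_aspect = aspect(
    implementation = _flags_aspect_impl,
    attr_aspects = ["deps"],
)
```

It could be invoked with something like `bazel build //pkg:lib --aspects=collect_flags.bzl%flags_aspect`, and a real implementation would write the flags to a file for the generator instead of printing them.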

Related

generate llvm bytecode for translation unit in clang plugin

Good afternoon. In the clang documentation, I found a way to generate and insert code. So far I have not used this method in a plugin, but if I understand correctly it is possible (correct me if I'm wrong - thanks).
I would like to know whether it is possible to modify the code at the bytecode level from a plugin, i.e. insert new statements (clang::Stmt and its derivatives) into existing ones and hand them to the compiler.
Inserting code at the source level (clang::Rewriter) changes the project's source files, so it adds extra work.
Thank you

How can I find the flag dependency or conflict in LLVM?

As far as I know, GCC has a website that shows the relationships between the different flags used during optimization. GCC example website. For example, -fpartial-inlining is only useful when -findirect-inlining is turned on.
I assume the same applies to clang; in other words, the different passes may have this kind of dependency/conflict relationship in LLVM (Clang).
But after checking the documentation provided by the developers, I found that it only describes the functionality of these passes. LLVM PASS DOC
So my question can be divided into two parts:
Do such dependencies/conflicts exist between LLVM passes, or is there no such relationship?
If there are, how can I find them?
You can find out which passes are used at which optimization levels by compiling any C or C++ code with clang and trying to figure out the dependencies from there. For example:
clang -O2 --target=riscv32 -mllvm -debug-pass=Structure example.c
(You can also use -debug-pass=Arguments instead of -debug-pass=Structure; it's a matter of readability.)
This prints the passes clang runs at optimization level 2 for the riscv32 target. If you don't specify a target, it defaults to your host machine's target, and keep in mind that the set of passes can differ between targets at the same optimization level.
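If you want to compare the pass lists mechanically, you can capture the -debug-pass=Arguments output at two optimization levels and diff them. A small Python sketch (the sample pass lists below are illustrative placeholders, not real clang output):

```python
# Compare two pass lists captured from `clang -mllvm -debug-pass=Arguments`
# at different optimization levels. The sample strings are illustrative
# placeholders, not real clang output.

o1_passes = "-targetlibinfo -tti -verify -simplifycfg -sroa -early-cse"
o2_passes = "-targetlibinfo -tti -verify -simplifycfg -sroa -early-cse -gvn -inline"

def pass_set(args_line):
    """Split a -debug-pass=Arguments line into a set of pass names."""
    return set(tok.lstrip("-") for tok in args_line.split())

only_in_o2 = sorted(pass_set(o2_passes) - pass_set(o1_passes))
print("passes added at -O2:", only_in_o2)  # -> ['gvn', 'inline']
```

Running this over real captured output for two levels (or two targets) shows exactly which passes were added or removed between the two configurations.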

How to get bitcode llvm after linking?

I am trying to get LLVM IR for a file which is linked with some static libraries.
I tried to link using llvm-link. It just copies the .bc files into one file (not like native linking).
clang -L$(T_LIB_PATH) -lpthread -emit-llvm gives an error: emit-llvm cannot be used with linking. When passing the -c option, it gives a warning that the linking options were not used.
My main goal is to get a .bc file with all resolved symbols and references. How can I achieve that with clang version 3.4?
You may have a look at wllvm. It is a wrapper around the compiler which enables you to build a project and extract the LLVM bitcode of the whole program.
You need to use wllvm and wllvm++ for C and C++, respectively (after setting some environment variables).
Some symbols come from source code via LLVM IR. IR is short for intermediate representation. Those symbols are easy to handle, just stop in the middle of the build process.
Some others come from a library and probably were generated by some other compiler, one that never makes any IR, and in any case the compiler was run by some other people at some other location. You can't go back in time and make those people build IR for you, even if their compiler has the right options. All you can do is obtain the source code for the libraries and build your entire application from source.
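As a sketch, the typical wllvm workflow looks like this (assuming wllvm and clang are installed; extract-bc is wllvm's companion tool that pulls the embedded bitcode out of the final binary, and the makefile target name is made up for illustration):

```shell
# Tell wllvm which underlying compiler to wrap
export LLVM_COMPILER=clang

# Build the project with the wrappers instead of clang/clang++
CC=wllvm CXX=wllvm++ make

# Extract the whole-program bitcode from the linked binary
extract-bc myprogram        # produces myprogram.bc
```

This only resolves symbols whose objects were themselves built through the wrapper; as noted above, code from prebuilt libraries never had IR to begin with, so those libraries must also be rebuilt from source under wllvm.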

Sending specific compilation flags for static compilation with Bazel Build

In my project we have code that, when compiled for static linkage, requires an extra define at compile time. Let's assume it's -DSTATIC_COMPILATION.
My question: Is it possible to control compilation flags when requesting static linkage, or any compilation flags based on the linkage mode?
Things I know I can do:
Add a --copt '-DSTATIC_COMPILATION' to the bazel build command line
Configure a bazelrc file that provides such a configuration via bazel build --config=static_comp - which is nice, but I'm not sure it will propagate to other packages when this package is consumed as an external package - I could be wrong here...
What are the options I'm missing?
The short answer is that there is no way in Bazel today to get it to set a flag based on whether the code will be statically or dynamically linked.
Bazel's cc_library does compile code twice on architectures that require PIC for dynamic linking, but do not require PIC for static linking - once with and once without PIC. This is mostly done for performance of the statically linked executables, as non-PIC code is generally faster.
Note that cc_test rules in Bazel are dynamically linked by default while cc_binary rules are statically linked by default, so the PIC/no-PIC distinction requires double compilation of almost all C/C++ source code. For additional complexity, note that PIE executables require PIC-compiled code, so if you want ASLR, which requires PIE executables, then the code is always compiled as PIC.
However, the support for PIC/no-PIC is hard-coded in cc_library, and I don't see any obvious way to 'abuse' it to do what you want. You could conceivably hack up a crosstool to declare that the arch requires PIC for dynamic linking but not static linking, and then compile with PIC in both cases anyway and also set the additional flag. This would result in .pic.o and .o output files, although both would contain PIC code. This isn't workable if you can't control the crosstool, and I wouldn't recommend doing this.
That said, there may be other ways to achieve what you want. Mind elaborating why you need to have a special case for statically linked code?
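Along the lines of the --config idea, one workaround is to select() on a user-settable define. A BUILD sketch (the setting name and target names are made up for illustration; this keys off a flag you pass explicitly, not off the actual linkage mode):

```python
# BUILD - a sketch; enable with:
#   bazel build --define=linkage=static //pkg:mylib
config_setting(
    name = "static_linkage",
    define_values = {"linkage": "static"},
)

cc_library(
    name = "mylib",
    srcs = ["mylib.cc"],
    defines = select({
        ":static_linkage": ["STATIC_COMPILATION"],
        "//conditions:default": [],
    }),
)
```

Note that `defines` propagates to dependents of the library, which may or may not be what you want; it still relies on the caller passing the flag, so it doesn't remove the propagation concern for external packages.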

is there something similar to c++ precompiled headers in Delphi?

One feature I miss in Delphi (I hope it is at all possible) is having units automatically include their dependent units. This is possible with C++ headers.
For example, in c++:
dependentHeader.h:
#include "baseHeader.h"
Any headers included in baseHeader.h are available in dependentHeader.h. Another example is the precompiled header, whatever I include in the precompiled header is available to all header files in the project. This is very useful for including frequently used headers throughout a project.
Now back to Delphi:
I have a Unit called DebugService
In order to use it other units are required: DependentUnit1, DependentUnit2.
So in every Unit I use DebugService I have to manually add all the other dependent units: DependentUnit1, DependentUnit2.
What I want is just to be able to specify DebugService as a dependency and have all its dependencies come along?
So, in other words I want:
uses
DebugService;
and NOT:
uses
DebugService, DependentUnit1, DependentUnit2;
Is this at all possible?
Thank you!
Ironic that you would ask this, when a better question would be, "Why doesn't C++ have modules yet, in the year 2013".
Delphi's compilation units are not normally split into duplicate .h and .cpp files. You may have noticed that Delphi units have an Interface and Implementation section. This in turn makes for a true module system: compiled .DCU files differ significantly from C++/C compiler ".obj" files because just the interface section can be read, very quickly, by the compiler when a "uses UnitX" is encountered.
Recently, CLANG/LLVM compiler developers at Apple started adding the rudiments of true module support to the latest CLANG/LLVM C and Objective-C compilers. This means that precompiled header support in XCode is no longer the preferred manner of doing things, because true modules are better than precompiled headers. You could say that a precompiled header system is like having one module, and only one module, as a shabby kludge that you are happy to have when you cannot have the real thing, which is called Modules. You may say, you are a Windows developer, what do you care about CLANG/LLVM? Just that it is evidence that the world is slowly giving up on precompilation and moving, eventually, to modules. The C++ standardization committee, working at its current rate, will certainly deliver you a working C++ standard (but not an implementation) by 2113, at the latest.
In short we might say your question might be asking, if the Horseless Carriage is going to gain features allowing it to accelerate the caching and rapid deployment of Oats to the Equine Power Units.
We don't need that here. We have a real compiler with real module support. End of story. You may notice that Modules (in clang/llvm) are faster than precompiled headers. They are also less of a source of problems, than precompiled headers which are a nearly endless source of crazy problems.
Pre-compiled headers don't have any semantic meaning that differs from standard headers. They are simply an optimisation to improve compilation times. Delphi compilation is typically much faster than C++ compilation, so the optimisation is not needed.
You cannot use unit A and transitively use all of unit A's dependencies. If you want to use definitions from a unit, it must be listed in the uses clause.
There is no equivalent to pre-compiled headers in Delphi. The additional uses references are required because DebugService uses declarations from DependentUnit1 and DependentUnit2 in the declarations of its own interface section, and its declarations are then used by other units, making those units dependent as well. If you can design your units to reduce interface dependencies, using dependent units only in the implementation section instead, then you won't have to include DependentUnit1 and DependentUnit2 in other units' uses clauses anymore. But I understand that is not always possible.
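For illustration, moving a dependency out of the interface section might look like this (unit names taken from the question; the DebugLog and WriteLog routines are hypothetical):

```delphi
unit DebugService;

interface

// Only what other units need to see goes here; DependentUnit1/2 are
// not named, so units that use DebugService don't need them.
procedure DebugLog(const Msg: string);

implementation

uses
  DependentUnit1, DependentUnit2; // visible to this unit only

procedure DebugLog(const Msg: string);
begin
  DependentUnit1.WriteLog(Msg); // hypothetical routine, for illustration
end;

end.
```

With this layout, a caller's uses clause only needs DebugService, which is exactly the behaviour the question asks for, as long as no interface declaration mentions types from the dependent units.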
If you need to share code amongst multiple units, it is best to move that code to its own unit/package.
#include "baseHeader.h"
is equivalent to
{$I baseHeader.pas}
you can put anything you like into that file. Even the whole Interface section.
Another alternative to your problem is the use of conditional defines.
in main project file
{$DEFINE debugMyApp}
in each unit you use
uses
abc
{$IFDEF debugMyApp}
, additionalUNit1
, additionalUNit2
, etc
{$ENDIF}
;
