How to run LLVM passes with the new PassBuilder on IR by calling the LLVM API in a program? - clang

By using opt, we can run a custom pass, or an -O1/-O2/-O3 pass pipeline, on a foo.ll file, but it all happens on the command line, and the IR takes the form of a file. This is not convenient when you want to optimize IR in memory form (each IR file is a module in its in-memory form) in your own project. The opt way looks like this:
opt -O2 foo.ll -S -o foo2.ll
Then we can get an optimized IR file foo2.ll.
Now I want to do the optimization in my program. I have compiled some custom passes into libraries outside the source tree. How do I organize passes into pipelines using the API of the new pass manager, and apply the pipeline to an IR module?
My goals are:
① Load the local pass libraries into the program with some LLVM data structure (I am not sure which data structure it is)
② Register the loaded passes as a sequential pipeline
③ Finally, apply the pass pipeline to an IR module and get back a new IR module optimized by this pipeline
If I do this outside the source tree, linking against the static libraries to call the API, which API should I start with? Are there some examples to refer to? For example, how to use the new pass manager outside the source tree, and how to load a natively compiled pass library into memory?

Passes are run by a pass manager; opt is basically a small program that loads a module from a file, configures and runs a pass manager, and writes the module out at the end.
The pipeline you mention is largely what the pass manager is and does.
I'm not quite clear on what exactly you're trying to do, but the general direction is clear enough. All the tasks in that direction involve using the pass manager. You configure the pass manager to use your custom passes, and probably some/many that are included in LLVM, then you tell the pass manager to do its work on a module, and it calls your passes as necessary. The module is the in-memory data structure you want; it can be read from or written to a .bc or .ll file.
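As a sketch of that direction, here is roughly what opt does internally with the new pass manager, reduced to a minimum. This is a sketch, not a definitive recipe: it assumes a recent LLVM (around 14, where the optimization level is spelled llvm::OptimizationLevel::O2; older releases spell it PassBuilder::OptimizationLevel::O2), and the commented-out custom pass is a hypothetical placeholder for your own:

#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

int main() {
  // Load a module into memory, as opt does with its input file.
  llvm::LLVMContext Ctx;
  llvm::SMDiagnostic Err;
  std::unique_ptr<llvm::Module> M = llvm::parseIRFile("foo.ll", Err, Ctx);
  if (!M) {
    Err.print("example", llvm::errs());
    return 1;
  }

  // The four analysis managers the new pass manager needs,
  // wired together by a PassBuilder.
  llvm::LoopAnalysisManager LAM;
  llvm::FunctionAnalysisManager FAM;
  llvm::CGSCCAnalysisManager CGAM;
  llvm::ModuleAnalysisManager MAM;
  llvm::PassBuilder PB;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  // Build the standard -O2 pipeline; your own passes can be appended.
  llvm::ModulePassManager MPM =
      PB.buildPerModuleDefaultPipeline(llvm::OptimizationLevel::O2);
  // MPM.addPass(MyCustomModulePass()); // hypothetical custom pass

  MPM.run(*M, MAM);                // optimizes the module in place
  M->print(llvm::outs(), nullptr); // write the optimized IR back out
  return 0;
}

For goal ①, loading a pass library compiled as a shared object at run time, llvm::PassPlugin::Load in llvm/Passes/PassPlugin.h is the entry point opt itself uses for -load-pass-plugin; the "Writing an LLVM Pass" and "Using the New Pass Manager" pages in the LLVM docs show out-of-tree setups.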

Related

Compiling multiple .metal files into one .metallib

I’m currently writing some custom Core Image filters using Metal. For the sake of structure I want to put the different kernels into different .metal files with some common includes like you would do with “normal” source files.
However, when the metallib tool bundles the different .air files created by the Metal compiler into one .metallib file, only the kernel functions defined in the first input .air file given to metallib are visible. Functions from the other .air files don’t seem to be included. What’s the reason for this?
I thought (as is the default compilation behavior for Metal files) all Metal sources get compiled into one library that is then used by every custom CIFilter class to instantiate its internal CIKernel with the function it needs.
I now ended up compiling a .metallib file for each custom filter with custom build rules and copying all of them into my framework using a custom build phase. This doesn't seem to be the intended way…
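For reference, the non-Xcode equivalent of that default behavior is to compile each .metal file to a .air object and then pass all of the .air files to metallib in one invocation, which should produce a single library containing every kernel; the file names here are hypothetical:

xcrun -sdk macosx metal -c Blur.metal -o Blur.air
xcrun -sdk macosx metal -c Sharpen.metal -o Sharpen.air
xcrun -sdk macosx metallib Blur.air Sharpen.air -o MyFilters.metallib

If only the first file's functions show up, it is worth checking the exact argument list Xcode hands to metallib in the build log. Note also that Core Image kernels need the CIKernel-specific flags (-fcikernel when compiling, -cikernel when linking), so a plain invocation like the above may not match a Core Image setup exactly.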

Obtain compiler flags in skylark

I'd like to convert a CMake-based C++ library to bazel.
As part of the current CMake project, I'm using a libclang-based code generator that parses C++ headers and generates C++ code from the parsed AST. In order to do that, I need the actual compiler flags used to build the cc_library the header is part of. The flags are passed to the code generation tool so it can use clang's preprocessor.
Is there any way I could access the compiler flags used to build a dependency from a Skylark rule or a genrule? I'm particularly interested in the include paths and defines.
We're working on it. Well, not right now, but we will soon. You might want to subscribe to the corresponding issue, and maybe describe your requirements there so we take them into account when designing the API.

How do headers work in Objective-C?

Beyond allowing one file to use another file's attributes, what actually happens behind the scenes? Does it just provide the location needed to access that file when its contents are later needed, or does it load the implementation's data into memory?
In short:
The header file defines the API for a module. It's a contract listing which methods a third party can call. The module can be considered a black box to third parties.
The implementation implements the module. It is the inside of the black box. As a developer of a module you have to write this, but as a user of a third party module you shouldn't need to know anything about the implementation. The header should contain all the information you need.
Some parts of a header file could be auto-generated: the method declarations. This would require you to annotate the implementation, as there are likely to be private methods in the implementation which don't form part of the API and don't belong in the header.
Header files sometimes have other information in them; type definitions, constant definitions etc. These belong in the header file, and not in the implementation.
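To make the contract/black-box split concrete, here is a minimal illustration with hypothetical names; the mechanism is the same across C, C++ and Objective-C:

/* Widget.h - the contract: everything a third party may use */
int widget_count(void);

/* Widget.c - the black box: how it actually works */
#include "Widget.h"
int widget_count(void) { return 42; }

/* main.c - a user of the module only ever needs the header */
#include "Widget.h"
int main(void) { return widget_count(); }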
The main reason for a header is to be able to #include it in some other file, so you can use the functions in one file from that other file. The header includes (only) enough to be able to use the functions, not the functions themselves, so (we hope) compiling it is considerably faster.
Maintaining the two separately mostly results from nobody ever having written an editor that automates the process very well. There's not really a lot of reason they couldn't do so, and a few have even tried to -- but the editors that have done so have never done very well in the market, and the more mainstream editors haven't adopted it.
Well, I will try:
Header files are only needed in the preprocessing phase. Once the preprocessor is done with them, the compiler never even sees them. Obviously, the target system doesn't need them for execution either (the same way .c files aren't needed).
Libraries, by contrast, come into play during the linking phase. If a program is dynamically linked and the target environment doesn't have the necessary libraries, in the right places, with the right versions, it won't run.
In C nothing like that is needed, since once you compile it you get native code. The header files are copy-pasted wherever you #include them. This is very different from the bytecode you get from Java: there's no need for an interpreter (like the JVM), you just feed your binary to the CPU and it does its thing.
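You can watch that copy-paste happen by running only the preprocessor; with the hypothetical files from the example above:

clang -E main.c

prints main.c with the contents of Widget.h textually pasted in, which is exactly what the compiler proper gets to see.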

No Erlang compile time errors for missing functions

Why are there no compile-time errors or warnings when I call a function in another module that doesn't exist or has the wrong arity?
The compiler has all of the exports information in a module to make this possible. Is it just not implemented yet or is there a technical reason why it is not possible that I am not seeing?
I don't know why it's missing (probably because modules are completely separate and compilation of one doesn't depend on the other really - but that's just speculation). But I believe you can find problems like this with dialyzer static analysis. Have a look at http://www.erlang.org/doc/man/dialyzer.html
It's part of the system itself, so try including it in your workflow.
It is as others have said. Modules are compiled separately, and there is absolutely no guarantee that the environment which exists at compile time is the same as the one that will exist at run time. This implies that doing checks at compile time about the existence of a module, or of a function in it, is basically meaningless. At run time that module may or may not be loaded, the function you call may or may not be defined in the module, or it may do something completely different from what you expected.
All this is due to the very dynamic nature of Erlang systems. There is no real way as such to define what is in system at run-time. Hot code-loading is a part of this and works properly because of the dynamic nature of the system. It means you can redefine the system at run-time, you can load in new versions of existing modules with a different interface and you can load in completely new modules and remove existing modules.
For this to work all checks about the existence of a module or function must be done at run-time.
Tools like dialyzer can help with this, but they do assume that you don't do anything "funny" at run time and that the system you check is the same as the system you run. Which is of course all good, but very static, and against Erlang's nature, which is to be dynamic in everything.
Unfortunately, in this case, you can't both have your cake and eat it.
You may use the xref application to check the usage of deprecated, undefined and unused functions (and more!).
Compile the module with debug_info:
Eshell V6.2 (abort with ^G)
1> c(test, debug_info).
{ok,test}
Check the module with xref:m/1:
2> xref:m(test).
[{deprecated,[]},
 {undefined,[{{test,start,0},{erlang,foo,0}}]},
 {unused,[]}]
You may want to check out more about xref here:
Erlang -- Xref - The Cross Reference Tool (Tools User's Guide)
Erlang -- xref (Tools Reference Manual)
It is due to hot code loading. Each module can be loaded at any particular time, so when module A contains a call to B:F, you can't tell at compile time that the call is wrong just because the current source code of module B has no function F. Imagine this: you compile module A with a call to B:F. You load a version of module B without B:F into memory. Then you load module A, which contains the call to B:F, but you don't call it yet. Then you compile a new version of module B with B:F, load it, and now you can call B:F and everything is perfectly right. Or imagine your module A builds module B on the fly and loads it. At no particular time can you say it is wrong for module A to contain a call to a nonexistent function B:F.
In my opinion most, if not all, compilers do not verify that a function exists at compilation time. What is required in general is a prototype declaration of the function: the type of the return value, and the list and types of all arguments. This is done in C/C++ by including some_file.h in each module that uses it (not the .c or .cpp).
In Erlang this verification is done dynamically, while the program is running, so it is not necessary to include these definitions. It would even be useless, because Erlang allows you to upgrade an application while it runs, so a function's type may change, or the function may disappear, on purpose or by mistake, during the application's lifetime; that is why the Erlang designers chose to make this verification at run time and not at build time.
The error you speak about generally occurs during the link phase of code generation, when the "compiler" tries to gather individual pieces of object code together to build an executable file or a library; during this phase the linker resolves all the external addresses (for shared variables, static calls...). This phase does not exist in Erlang: a module is totally self-contained, sharing nothing with the rest of the application, neither variables nor function addresses.
Of course, it is mandatory to use some tools and run some tests before updating a running production program, but I consider that these verifications have exactly the same level of importance as the correctness of the algorithm itself.
When you compile e.g. module alpha which has a call to beta:some_function(...), the compiler cannot assume some specific version of beta to be in use at runtime. Maybe you will compile a newer version of beta after you compiled alpha and this will have the correct some_function exported. Maybe you will upload alpha to be used on a different host, which has all the other modules.
The compiler therefore just compiles the remote call and any errors (non-existent module or function) are resolved at run time, when some version of beta will be loaded.

Bundling additional Lua libraries for embedded and statically linked Lua runtime

I have embedded Lua on Win32 in my project by means of statically linking it in (no, I can't switch to DLL). I would like to bundle more Lua extensions that use native code - not just pure .lua files. Specifically, I want to bundle Steve Donovan's winapi which comes as some lua files and some .c files.
How to do it?
You need to do two things. First, you have to compile the Lua DLL projects into non-DLL projects. Since they're intended to be DLL modules, they probably won't have provisions for this in their build systems. That means you have to do it yourself. Get rid of the DLL main functions and other specialized DLL functions (but take note of what they do and make sure you replicate it if it's important). And make sure that you change any #defines that try to include Lua with dynamic linking.
All Lua module DLLs export one or more functions of the form luaopen_*, where * is the name of the module to load. This function will likely be decorated with __declspec(dllexport) notation. Typically, the notation is done via a preprocessor macro, but it may not be. Either way, remove it, turning it into a normal function declaration.
Now, once you have created your lua_State object, just call that luaopen_* function with your lua_State.
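Putting that together, a minimal sketch of the host side, assuming Lua 5.2 or later where luaL_requiref exists (on 5.1 you would instead put the opener into package.preload or push and call it by hand):

extern "C" {
#include "lua.h"
#include "lauxlib.h"
#include "lualib.h"

// Entry point of the statically compiled winapi module; this is the
// function that would normally be exported from the DLL.
int luaopen_winapi(lua_State *L);
}

int main() {
  lua_State *L = luaL_newstate();
  luaL_openlibs(L); // open the standard libraries

  // Register the module so that require "winapi" finds it without
  // any DLL lookup.
  luaL_requiref(L, "winapi", luaopen_winapi, 0);
  lua_pop(L, 1); // luaL_requiref leaves the module table on the stack

  luaL_dostring(L, "local w = require 'winapi'"); // now resolves in-process

  lua_close(L);
  return 0;
}

The pure-.lua parts of winapi still have to be shipped or embedded separately (for example baked in as strings and run with luaL_dostring); only the C part is covered by the luaopen_ trick.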
