clang set metadata on AllocaInst - clang

First, I'm a real noob with clang/LLVM, but I'm trying to modify clang for a specific purpose.
I'd like to add metadata whenever an alloca instruction is emitted in the IR for a variable that carries a certain annotation.
I noticed this function in CGDecl.cpp:
CodeGenFunction::AutoVarEmission
CodeGenFunction::EmitAutoVarAlloca(const VarDecl &D)
which contains this promising check near the end:
if (D.hasAttr<AnnotateAttr>())
  EmitVarAnnotations(&D, emission.Address);
This looks like the condition I need, so I modified it to:
if (D.hasAttr<AnnotateAttr>()) {
  AnnotateAttr *attr = D.getAttr<AnnotateAttr>();
  if (attr->getAnnotation() == "_my_custom_annotation_") {
    // set metadata...
  }
  EmitVarAnnotations(&D, emission.Address);
}
My issue is that I don't know how to add metadata at this point, because I can't find a way to access the instruction itself.
In CGExpr.cpp, however, I can see where the AllocaInst is built, but at that point I don't have access to the VarDecl, so I can't tell whether the annotation is present.
I tried anyway to add metadata (unconditionally) in this function:
llvm::AllocaInst *CodeGenFunction::CreateIRTemp(QualType Ty,
                                                const Twine &Name) {
  llvm::AllocaInst *Alloc = CreateTempAlloca(ConvertType(Ty), Name);
  // FIXME: Should we prefer the preferred type alignment here?
  CharUnits Align = getContext().getTypeAlignInChars(Ty);
  // how to make this conditional on the annotation?
  llvm::MDNode *node = getRangeForLoadFromType(Ty);
  Alloc->setMetadata("_my_custom_metadata", node);
  Alloc->setAlignment(Align.getQuantity());
  return Alloc;
}
by adding the setMetadata call.
However, I don't see the metadata attached in the generated IR.
I compile with clang -g -S -target i686-pc-win32 -emit-llvm main.cpp -o output.ll
Maybe I'm totally wrong, but the truth is I haven't mastered code generation in clang. :)
PS: here is the code I compile:
int main() {
  __attribute__((annotate("_my_custom_annotation_"))) float a[12];
}
Any help is appreciated!
Thanks

if (D.hasAttr<AnnotateAttr>()) {
  AnnotateAttr *attr = D.getAttr<AnnotateAttr>();
  if (attr->getAnnotation() == "_my_custom_annotation_") {
    // set metadata...
  }
  EmitVarAnnotations(&D, emission.Address);
}
Looks like you are in the right place. EmitAutoVarAlloca has special handling for different kinds of variable declarations, but they all end up with the "address" (i.e., the instruction) in emission.Address.
So what you want to do is:
if (D.hasAttr<AnnotateAttr>()) {
  AnnotateAttr *attr = D.getAttr<AnnotateAttr>();
  if (attr->getAnnotation() == "_my_custom_annotation_") {
    emission.Address->setMetadata(...); // <--- your MDNode goes here
  }
  EmitVarAnnotations(&D, emission.Address);
}
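If you need something concrete to put in that call, a minimal sketch could look like this (assuming getLLVMContext() is reachable from CodeGenFunction here and that emission.Address is the emitted alloca instruction; the metadata kind string is just an example):
// Sketch only: build a string metadata node and attach it to the alloca.
// If emission.Address is a plain llvm::Value* in your clang version, cast it first.
llvm::LLVMContext &Ctx = getLLVMContext();
llvm::MDNode *Node = llvm::MDNode::get(
    Ctx, llvm::MDString::get(Ctx, "_my_custom_annotation_"));
emission.Address->setMetadata("_my_custom_metadata", Node);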
However, I would recommend adding a dedicated attribute for attaching metadata to instructions. If you read further through the code you will see that AnnotateAttr has a special meaning, so your emitted IR may not be what you expect. You can add a custom attribute in the Attr.td file; I suggest starting from a copy of the Annotate entry. Then follow AnnotateAttr through the code and add handling for your attribute at the corresponding places so that clang recognizes and processes it.

Related

Issue reading file. IO: ("Unsupported scheme \'java+compilationUnit\'")

I have run into an issue. I have been trying to read the contents of a file in an example project that contains a single file. Below you will find the code and the error I get. I have tried running this code with Rascal versions 0.22.0, 0.23.0, and 0.24.2. All versions give the same issue, but I do not understand what is wrong, and I am pretty sure this code worked for me over a year ago.
void demoFunc() {
  list[str] output = [];
  m3x = createM3FromEclipseProject(|project://testProject|);
  projectFiles = files(m3x);
  for (file <- projectFiles) {
    output = readFileLines(file);
  }
}
rascal>demoFunc();
|std:///IO.rsc|(15157,756,<620,0>,<640,24>): IO("Unsupported scheme \'java+compilationUnit\'")
at *** somewhere ***(|std:///IO.rsc|(15157,756,<620,0>,<640,24>))
at readFileLines(|project://TQM/src/Helper.rsc|(998,4,<39,26>,<39,30>))
at $root$(|prompt:///|(0,11,<1,0>,<1,11>)ok
Looks like the latest rascal-eclipse release has a new bug. To work around it, you can resolve the physical source file from the logical name yourself:
loc sourceFile(loc logical, M3 model) {
  if (loc f <- model.declarations[logical]) {
    return f;
  }
  throw FileNotFound(logical);
}
That simulates what analysis::m3::Registry would have done for you. The returned loc is a slice of the file where the declared entity is found. If you want the entire file, use myLoc.top.
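Applied to the loop in the question, it would look roughly like this (a sketch reusing m3x and projectFiles from demoFunc):
for (file <- projectFiles) {
  physical = sourceFile(file, m3x);
  output = readFileLines(physical.top); // .top drops the offset so the whole file is read
}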

Build a Path of LLVM basic block

I have to create an LLVM analysis pass for an exam project, which consists of printing the independent paths of a function using the baseline method.
Currently I am struggling with how to build the baseline path by traversing the various basic blocks. I know that basic blocks are already organized in a CFG, but checking the documentation I can't find any useful method to build a linked list of basic blocks representing a path from the entry point to the end point of a function. I am not an expert with the LLVM environment, so I want to ask if someone with more knowledge knows how to build this kind of path.
Thank you everyone.
Update: I followed the advice in the answer to this post and wrote this code for building a path:
#include "llvm/Support/raw_ostream.h"
#include "llvm/IR/CFG.h"
#include <set>
#include <list>
using namespace llvm;
using namespace std;
void Build_Baseline_path(BasicBlock *Start, set<BasicBlock *> Explored, list<BasicBlock *> Decision_points, list<BasicBlock *>Path) {
for (BasicBlock *Successor : successors(Start)) {
Instruction *Teriminator = Successor->getTerminator();
const char *Instruction_string = Teriminator->getOpcodeName();
if (Instruction_string == "br" || Instruction_string == "switch") {
errs() << "Decision point found" << "\n";
Decision_points.push_back(Successor);
}
if (Instruction_string == "ret") {
if (Explored.find(Successor) == Explored.end()) {
errs() << "Added node to the baseline path" << "\n";
Path.push_back(Successor);
return;
}
return;
}
if (Explored.find(Successor) == Explored.end()) {
Path.push_back(Successor);
Build_Baseline_path(Successor,Explored,Decision_points,Path);
}
}
}
This code is in a separate .cpp file that I include in my function pass, but when I run the pass with this function everything locks up and it seems like my PC is about to crash. I tried commenting out the call to this function in the pass to see if the problem was somewhere else, but then everything works fine, so the problem is in this code. What is wrong with it? I am sorry, but I am a novice with C++ and I can't figure out how to solve this.
First off, there isn't a single end point. At least four kinds of instructions may be end points: return, unreachable, and in some cases call/invoke (when the called function throws and the exception isn't caught in this function).
Accordingly, there are many possible paths. The number of possible paths isn't even guaranteed to be finite, depending on how you treat loops.
If you regard loops in a simplistic way and ignore exceptions, then it's simple to construct a list of paths. There is an iterator range called successors() which you can use as in this answer: call it in a recursive function to process successors, and when you reach a return (or another end point), act on the path you've built.
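To make that concrete, here is a minimal sketch (not a drop-in solution for the baseline method) that enumerates entry-to-return paths recursively via successors(), cutting loops by skipping blocks already on the current path; the function names are illustrative:
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Instructions.h"
#include <vector>

using namespace llvm;

static void collectPaths(BasicBlock *BB, std::vector<BasicBlock *> &Path,
                         std::vector<std::vector<BasicBlock *>> &Paths) {
  Path.push_back(BB);
  if (isa<ReturnInst>(BB->getTerminator())) {
    Paths.push_back(Path);                 // reached an end point: record this path
  } else {
    for (BasicBlock *Succ : successors(BB))
      if (!is_contained(Path, Succ))       // don't re-enter a block on the current path
        collectPaths(Succ, Path, Paths);
  }
  Path.pop_back();                         // backtrack
}

// Usage inside a pass body (sketch):
//   std::vector<std::vector<BasicBlock *>> Paths;
//   std::vector<BasicBlock *> Path;
//   collectPaths(&F.getEntryBlock(), Path, Paths);
Note that the containers are passed by reference and the path is popped on the way back up; passing them by value is one reason the code in the update never makes progress.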

Nix overlays and override pattern

I have trouble understanding Nix overlays and the override pattern. What I want to do is add something to the "patches" of gdb without copy/pasting the whole derivation.
From Nix Pills I kind of see that override just mimics OOP; in reality it is just another attribute of the set. But how does it work then? Is override a function from the original attribute set to a transformed one that again has a predefined override function?
And since Nix is a functional language, you also don't have variables, only bindings, which you can shadow in a different scope. But that still doesn't explain how overlays achieve their "magic".
Through ~/.config/nixpkgs I have configured a test overlay approximately like this:
self: super:
{
  test1 = super.gdb // { name = "test1"; buildInputs = [ super.curl ]; };
  test2 = super.gdb // { name = "test2"; buildInputs = [ super.coreutils ]; };
  test3 = super.gdb.override { pythonSupport = false; };
};
And I get:
nix-repl> "${test1}"
"/nix/store/ib55xzrp60fmbf5dcswxy6v8hjjl0s34-gdb-8.3"
nix-repl> "${test2}"
"/nix/store/ib55xzrp60fmbf5dcswxy6v8hjjl0s34-gdb-8.3"
nix-repl> "${test3}"
"/nix/store/vqlrphs3a2jfw69v8kwk60vhdsadv3k5-gdb-8.3"
But then
$ nix-env -iA nixpkgs.test1
replacing old 'test1'
installing 'test1'
Can you explain those results to me, please? Am I correct that override can only alter the "defined interface", that is, the parameters of the package function, and since "patches" isn't a parameter of gdb I won't be able to change it this way? What is the best alternative then?
I will write an answer in case anyone else stumbles on this.
Edit 21.8.2019:
what I actually wanted is described in https://nixos.org/nixpkgs/manual/#sec-overrides
overrideDerivation and overrideAttrs
overrideDerivation is basically "derivation (drv.drvAttrs // (f drv))" and overrideAttrs is defined as part of mkDerivation in https://github.com/NixOS/nixpkgs/blob/master/pkgs/stdenv/generic/make-derivation.nix
And my code then looks like:
gdb = super.gdb.overrideAttrs (oldAttrs: rec {
  patches = oldAttrs.patches ++ [
    (super.fetchpatch {
      name = "...";
      url = "...";
      sha256 = "...";
    })
  ];
});
The question title is misleading and comes from my fundamental misunderstanding of derivations. Overlays work exactly as advertised, and they are probably also not that magic: just some recursion where the end result is the result of the previous step // the output of the last overlay function.
What is the purpose of nix-instantiate? What is a store-derivation?
Correct me please wherever I am wrong.
Basically, when you evaluate Nix code, the "derivation function" turns a descriptive attribute set (name, system, builder) into an "actual derivation". That "actual derivation" is again an attribute set, but the trick is that it is backed by a .drv file in the store, so in some sense derivation has side effects. The .drv encodes how the build is supposed to take place and which dependencies are required. The hash of this file also determines the directory name for the artefacts (even though nothing has been built yet), so implicitly the name in the Nix store depends on all build inputs.
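You can see that .drv file yourself, for example (hash elided, exact output will differ):
$ nix-instantiate '<nixpkgs>' -A gdb
/nix/store/<hash>-gdb-8.3.drv
The .drv is a plain text file in the store that lists the inputs, the environment, and the builder.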
When I was creating a new derivation, Frankenstein-style, by tying together existing derivations, all I did was create multiple references to the same .drv file, as if I copied a pointer and ended up with two pointers to the same value on the heap. I was able to change some metadata, but in the end the build procedure was still the same. In fact, since Nix is pure, I bet there is no way to even write to the filesystem (to change the .drv file), except again with something that wraps the derivation function.
Override, on the other hand, allows you to create a "new instance". Due to the "inputs pattern", every package in Nix is a function from an attribute set of dependencies to the code that in the end invokes the "derivation function". With override you are able to call that function again, which makes the "derivation function" receive different parameters.
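To make the "inputs pattern" concrete, here is a sketch with a hypothetical package foo (not the real gdb expression):
# pkgs/foo/default.nix (hypothetical): a function from its inputs to a derivation
{ stdenv, curl, pythonSupport ? true }:

stdenv.mkDerivation {
  name = "foo";
  buildInputs = [ curl ];
  # pythonSupport would toggle configure flags etc. here
}

# all-packages.nix wires it up with callPackage, which attaches .override:
#   foo = callPackage ../pkgs/foo { };
# `foo.override { pythonSupport = false; }` re-calls the function above with new
# arguments, while `foo.overrideAttrs` modifies the set passed to mkDerivation.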

iOS blocks, how to use in different implementation files

I am trying to make some reusable blocks for my application.
CommonBlocks.h
void (^testBlock)(int) = ^(int number) {
    // do nothing for now;
};
VariousImplementationFile.m
#import "CommonBlocks.h"
- (void)setup {
    testBlock(5);
}
Unfortunately, when I try to build this code for an iOS device I get the error: linker command failed with exit code 1 (use -v to see invocation). It seems I'm missing something.
Any advice?
Thanks
Try adding the static keyword before the declaration:
static void (^testBlock)(int) = ^(int number) {
    // do nothing for now;
};
Your code causes an error because you have a non-static variable testBlock defined in a .h header file.
When you #import "CommonBlocks.h" in VariousImplementationFile.m, testBlock is defined once. Then you import CommonBlocks.h somewhere else, and testBlock is defined once more, so you get a duplicate-symbol error.
Declare the block type in CommonBlocks.h this way:
typedef void (^RCCompleteBlockWithResult)(BOOL result, NSError *error);
Then you may use it in any method, for example:
- (void)getConversationFromServer:(NSInteger)placeId completionBlock:(RCCompleteBlockWithResult)completionBlock
This is not specific to blocks. Basically, you want to know how to have a global variable that is accessible from multiple files.
Basically, the issue is that in C, each "symbol" can only be "defined" once (it can be "declared" multiple times, but must be "defined" only once). Thus, you cannot put the "definition" of a symbol in a header file, because the header will be included in multiple source files, so effectively the same symbol will be "defined" multiple times.
For a function, the prototype is declaration, and the implementation with the code is the definition. You cannot implement a function in a header file for this reason. For a regular variable, writing the name and type of the variable is defining it. To only "declare" it, you need to use extern.
It is also worth mentioning static. static makes a variable local to a particular source file. That way, its name won't interfere with variables with the same name elsewhere. You can use this to make global variables that are "private" to a particular file. However, that is not what you are asking for -- you are asking for the exact opposite -- a variable that is "public", i.e. shared among files.
The standard way to do it is this:
CommonBlocks.h
extern void (^testBlock)(int); // any file can include the declaration
CommonBlocks.m
// but it's only defined in one source file
void (^testBlock)(int) = ^(int number) {
    // do nothing for now;
};
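With that in place, any .m file that imports CommonBlocks.h (like VariousImplementationFile.m above) can call testBlock(5), and the linker resolves it to the single definition in CommonBlocks.m.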

Conditional imports / code for Dart packages

Is there any way to conditionally import libraries / code based on environment flags or target platforms in Dart? I'm trying to switch between dart:io's ZLibDecoder / ZLibEncoder classes and zlib.js based on the target platform.
There is an article that describes how to create a unified interface, but I can't see how that technique avoids duplicate code and the redundant tests needed to cover that duplicate code. game_loop employs this technique, but uses separate classes (GameLoopHtml and GameLoopIsolate) that don't seem to share anything.
My code looks a bit like this:
class Parser {
  Layer parse(String data) {
    List<int> rawBytes = /* ... */;
    /* stuff you don't care about */
    return new Layer(_inflateBytes(rawBytes));
  }

  String _inflateBytes(List<int> bytes) {
    // Uses ZLibEncoder on dartvm, zlib.js in browser
  }
}
I'd like to avoid duplicating code by having two separate classes -- ParserHtml and ParserServer -- that implement everything identically except for _inflateBytes.
EDIT: concrete example here: https://github.com/radicaled/citadel/blob/master/lib/tilemap/parser.dart. It's a TMX (Tile Map XML) parser.
You could use mirrors (reflection) to solve this problem. The pub package path uses reflection to access dart:io on the standalone VM or dart:html in the browser.
The source is located here. The good thing is that they use @MirrorsUsed, so only the required classes are included for the mirrors API. In my opinion the code is documented very well; it should be easy to adapt the solution to your code.
Start at the getters _io and _html (starting at line 72); they show that you can look up a library even if it isn't available on your flavor of the VM. The lookup simply returns null if the library isn't available.
/// If we're running in the server-side Dart VM, this will return a
/// [LibraryMirror] that gives access to the `dart:io` library.
///
/// If `dart:io` is not available, this returns null.
LibraryMirror get _io => currentMirrorSystem().libraries[Uri.parse('dart:io')];
// TODO(nweiz): when issue 6490 or 6943 are fixed, make this work under dart2js.
/// If we're running in Dartium, this will return a [LibraryMirror] that gives
/// access to the `dart:html` library.
///
/// If `dart:html` is not available, this returns null.
LibraryMirror get _html =>
    currentMirrorSystem().libraries[Uri.parse('dart:html')];
Later you can use mirrors to invoke methods or getters. See the getter current (starting at line 86) for an example implementation.
/// Gets the path to the current working directory.
///
/// In the browser, this means the current URL. When using dart2js, this
/// currently returns `.` due to technical constraints. In the future, it will
/// return the current URL.
String get current {
  if (_io != null) {
    return _io.classes[#Directory].getField(#current).reflectee.path;
  } else if (_html != null) {
    return _html.getField(#window).reflectee.location.href;
  } else {
    return '.';
  }
}
As you see in the comments, this only works in the Dart VM at the moment. After issue 6490 is solved, it should work in dart2js too. This may mean the solution isn't applicable for you right now, but it could be later.
The issue 6943 could also be helpful, but describes another solution that is not implemented yet.
Conditional imports are possible based on the presence of dart:html or dart:io, see for example the import statements of resource_loader.dart in package:resource.
I'm not yet sure how to do an import conditional on being on the Flutter platform.
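For reference, the conditional import syntax looks roughly like this (a sketch; inflate_vm.dart and inflate_html.dart are hypothetical files that would both expose the same inflate API):
import 'inflate_vm.dart'                        // default implementation (dart:io / ZLibDecoder)
    if (dart.library.html) 'inflate_html.dart'  // used when dart:html is available
    as inflate;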
