How to get a field's type by using the CDT parser

I'm trying to extract information from C++ source code. One piece of that information is a field's type.
Given source code like the following, I want to extract the type of info at the point where info.call() is called.
Info info;
//skip
info.call(); //<- from here
By making a visitor that visits IASTName nodes, I tried to extract the type info like this:
public class CDTVisitor extends ASTVisitor {
    public CDTVisitor(boolean visitNodes) {
        super(true);
    }

    public int visit(IASTName node) {
        if (node.resolveBinding().getName().toString().equals("info"))
            System.out.println(((IField) node.getBinding()).getType());
        // this does not work properly.
        // the result is "org.eclipse.cdt.internal.core.dom.parser.ProblemType@86be70a"
        return 3;
    }
}

Assuming the code is in fact valid, a variable's type resolving to a ProblemType is an indication of a configuration problem in whatever tool or plugin is running this code, or in the project/workspace containing the code on which it is run.
In this case, the type of the variable info is Info, which is presumably a class or structure type, or a typedef. To resolve it correctly, CDT needs to be able to see the declaration of this type.
If this type is not declared in the same file that's being analyzed, but rather in a header file included by that file, CDT needs to use the project's index to find the declaration. That means:
The AST must be index-based. For example, if using ITranslationUnit.getAST to create the AST, the overload that takes an IIndex parameter must be used, and a non-null argument must be provided for it.
Since an IIndex is associated with a CDT project, the code being analyzed needs to be part of a CDT project, and the project needs to be indexed.
In order for the indexer to resolve #include directives correctly, the project's include paths need to be configured correctly, so that the indexer can actually find the right header files to parse.
Any one of these not being the case can lead to a type resolving to a ProblemType.
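As a rough sketch, assuming the file lives in a configured and indexed CDT project, obtaining an index-based AST could look roughly like this (the method name and the srcFile parameter are placeholders):
import org.eclipse.cdt.core.CCorePlugin;
import org.eclipse.cdt.core.dom.ast.IASTTranslationUnit;
import org.eclipse.cdt.core.index.IIndex;
import org.eclipse.cdt.core.index.IIndexManager;
import org.eclipse.cdt.core.model.CoreModel;
import org.eclipse.cdt.core.model.ICProject;
import org.eclipse.cdt.core.model.ITranslationUnit;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.NullProgressMonitor;

// srcFile is a source file inside an indexed CDT project (placeholder input).
IASTTranslationUnit createIndexBasedAST(IFile srcFile) throws Exception {
    ICProject cproject = CoreModel.getDefault().create(srcFile.getProject());
    // Wait for the indexer, so declarations from included headers are available.
    CCorePlugin.getIndexManager().joinIndexer(IIndexManager.FOREVER, new NullProgressMonitor());
    IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
    ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(srcFile);
    index.acquireReadLock(); // a read lock on the index is needed while creating the AST
    try {
        // Passing a non-null IIndex is what makes the AST index-based.
        return tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
    } finally {
        index.releaseReadLock();
    }
}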

Self response.
The reason I couldn't get a proper binding object was the type of the AST.
When parsing C++ source code, I should have used ICPPASTTranslationUnit.
It isn't shown in the code above, but I had been using IASTTranslationUnit as the type of the AST.
After using ICPPASTTranslationUnit instead of IASTTranslationUnit, the problem was solved.
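As a rough sketch, if the AST is built with the standalone CDT parser, choosing the C++ language flavour (GPPLanguage) is what yields a C++ translation unit, i.e. one that implements ICPPASTTranslationUnit. Something along these lines, with path and parseCppFile as placeholder names and no index or include paths configured:
import org.eclipse.cdt.core.dom.ast.IASTTranslationUnit;
import org.eclipse.cdt.core.dom.ast.gnu.cpp.GPPLanguage;
import org.eclipse.cdt.core.parser.DefaultLogService;
import org.eclipse.cdt.core.parser.FileContent;
import org.eclipse.cdt.core.parser.IncludeFileContentProvider;
import org.eclipse.cdt.core.parser.ScannerInfo;

// path is a placeholder for the C++ source file to parse.
IASTTranslationUnit parseCppFile(String path) throws Exception {
    FileContent content = FileContent.createForExternalFileLocation(path);
    // GPPLanguage parses the file as C++, so the returned translation unit
    // is a C++ AST (it also implements ICPPASTTranslationUnit).
    return GPPLanguage.getDefault().getASTTranslationUnit(
            content,
            new ScannerInfo(),                                  // no macros or include paths
            IncludeFileContentProvider.getEmptyFilesProvider(), // do not parse included files
            null,                                               // no index; types from headers won't resolve
            0,
            new DefaultLogService());
}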

Yes, I figured it out! Here is the complete code, which indexes all files in the "src" folder of a C++ project and prints the resolved type binding for every expression, including the return value of low-level APIs such as memcpy. Note that the project variable in the following code is created by programmatically importing an existing, manually configured C++ project. I often manually create an empty C++ project and programmatically import it as a general project (once imported, Eclipse automatically detects the project type and completes the relevant C++ project configuration). This is much more convenient than creating and configuring a C++ project from scratch programmatically. When importing the project, it's better not to copy the project or its containment structures into the workspace, because this can lead to the same project being copied into subfolders over and over (infinite folder depth). The code works with Eclipse 2021-12. I downloaded the Eclipse for C/C++ package and installed the Plug-in Development Environment and JDT plugins. Then I created an Eclipse plugin project and extended the "org.eclipse.core.runtime.applications" extension point.
In other words, it is an Eclipse application plugin project that can use nearly all features of Eclipse but does not start Eclipse's graphical interface (UI). You should add all CDT-related non-UI plugins as dependencies, because newer versions of Eclipse no longer add missing plugins automatically.
ICProject cproject = CoreModel.getDefault().getCModel().getCProject(project.getName());
// this creates an index for the entire project.
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
IFolder folder = project.getFolder("src");
IResource[] rcs = folder.members();
// iterate over all source files in the src folder and visit every expression to print its resolved type binding.
for (IResource rc : rcs) {
    if (rc instanceof IFile) {
        IFile f = (IFile) rc;
        ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(f);
        index.acquireReadLock(); // we need a read lock on the index
        ICPPASTTranslationUnit ast = null;
        try {
            ast = (ICPPASTTranslationUnit) tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
        } finally {
            index.releaseReadLock();
        }
        if (ast != null) {
            // pass true so that all node kinds, including expressions, are visited.
            ast.accept(new ASTVisitor(true) {
                @Override
                public int visit(IASTExpression expression) {
                    // get the resolved type binding of the expression.
                    IType etp = expression.getExpressionType();
                    System.out.println("IASTExpression type:" + etp + "#expr_str:" + expression.toString());
                    return super.visit(expression);
                }
            });
        }
    }
}

Related

How to reflect Dart library name?

Is there a way to reflect specific library properties (like the library name) in Dart?
How do you get the library object reference?
First of all, not all Dart libraries have names. In fact, most don't. They used to, but the name isn't used for anything any more, so most library authors don't bother adding a name.
To do reflection on anything, including libraries, you need to use the dart:mirrors library, which does not exist on most platforms, including the web and Flutter.
If you are not running the stand-alone VM, you probably don't have dart:mirrors.
With dart:mirrors, you can get the program's libraries in various ways.
library my.library.name;

import "dart:mirrors";

final List<LibraryMirror> allLibraries =
    [...currentMirrorSystem().libraries.values];

void main() {
  // Recognize the library's mirror in *some* way.
  var someLibrary = allLibraries.firstWhere((LibraryMirror library) =>
      library.simpleName.toString().contains("name"));

  // Find the library mirror by its name.
  // Not great if you don't know the name and want to find it.
  var currentLibrary = currentMirrorSystem().findLibrary(#my.library.name);
  print(currentLibrary.simpleName);

  // Find a declaration in the current library, and start from there.
  var mainFunction = reflect(main) as ClosureMirror;
  var alsoCurrentLibrary = mainFunction.function.owner as LibraryMirror;
  print(alsoCurrentLibrary.simpleName);
}
What are you trying to do, which requires doing reflection?

How to upload a .pie file successfully to Ontotext GraphDB

I have a .pie file which is used for inference in Ontotext GraphDB. I have written the ruleset correctly. While uploading the file everything seems OK, but while creating the repository it shows "Invalid Ruleset file. Please upload valid one". I think the issue is related to a hidden character present inside the file. How do I get rid of such characters? My file content is:
Prefices
{
rdf : http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl : http://www.w3.org/2002/07/owl#
abc : http://www.xyzabc.com/schema/abcentity#
}
Axioms
{
<abc:isLocatedIn> <rdf:type> <owl:ObjectProperty>
}
Rules
{
Id: isLocatedInHierarchy
a <abc:isLocatedIn> b [Constraint a != b]
b <abc:isLocatedIn> c [Constraint b != c]
a <abc:isLocatedIn> c [Constraint a != c]
}
"hidden character present inside the file"
Do you mean a Unicode BOM mark? Get an editor that can save the file without such a mark (I strongly recommend AkelPad: http://akelpad.sourceforge.net/), or just save it in ASCII.
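If switching editors is not an option, the BOM can also be stripped programmatically. A minimal Java sketch, assuming the hidden character really is a UTF-8 BOM and using ruleset.pie as a placeholder file name:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StripBom {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("ruleset.pie"); // placeholder file name
        byte[] bytes = Files.readAllBytes(file);
        // A UTF-8 BOM is the three bytes EF BB BF at the start of the file.
        if (bytes.length >= 3 && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB && (bytes[2] & 0xFF) == 0xBF) {
            Files.write(file, java.util.Arrays.copyOfRange(bytes, 3, bytes.length));
        }
    }
}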
By the way, writing PIE files with per-property rules is not a good idea. Instead, use a generic rule for transitive properties and then declare abc:isLocatedIn transitive in your ontology. The cheapest built-in ruleset that includes such a rule is rdfsPlus-optimized. If you select it, you then add to your ontology:
abc:isLocatedIn a owl:TransitiveProperty.
However, it's a better idea to keep a "step" property abc:isLocatedIn and a transitive property on top of it, e.g. abc:isLocatedTransitive:
abc:isLocatedTransitive a owl:TransitiveProperty.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
Finally, there's a more efficient way to compute the transitive closure, see http://rawgit2.com/VladimirAlexiev/my/master/pubs/extending-owl2/index.html#sec-3-1:
abc:isLocatedTransitive ptop:transitiveOver abc:isLocatedIn.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
I was also able to upload your .pie file successfully. Maybe the issue is related to the computer's locale or something else in the environment. If you are using Windows, Notepad++ seems like a logical choice; I believe it has an option to show all hidden characters, though I've never used it. If you are using Linux there are plenty of choices, including editors like vim or nano, which will work just fine.

How to implement a Groovy global AST transformation in a Grails plugin?

I'd like to modify some of my Grails domain classes at compilation time. I initially thought this was a job for Groovy's global ASTTransformation since I don't want to annotate my domain classes (which local transformers require). What's the best way to do this?
I also tried mimicking DefaultGrailsDomainClassInjector.java by creating my own class in the same package, implementing the same interfaces, but I probably just didn't know how to package it up in the right place because I never saw my methods get invoked.
On the other hand I was able to manually create a JAR which contained a compiled AST transformation class, along with the META-INF/services artifacts that plain Groovy global transformations require. I threw that JAR into my project's "lib" dir and visit() was successfully invoked. Obviously this was a sloppy job because I am hoping to have the source code of my AST transformation in a Grails plugin and not require a separate JAR artifact if I don't have to, plus I couldn't get this approach to work by having the JAR in my Grails plugin's "lib" but had to put it into the Grails app's "lib" instead.
This post helped a bit too: Grails 2.1.1 - How to develop a plugin with an AstTransformer?
The thing about global transforms is that the transform code has to be available when compilation starts. Having the transformer in a jar is what I did first too, but, as you said, it's a sloppy job.
What you want to do is have your AST-transforming class compiled before the other classes reach the compilation phase. Here is what you do.
Preparing the transformer
Create a directory called precompiled in the src folder, and add the transformation class and the classes it uses (such as annotations) to this directory with the correct package structure.
Then create a file called org.codehaus.groovy.transform.ASTTransformation in precompiled/META-INF/services, so that you have the following structure:
precompiled
--amanu
----LoggingASTTransformation.groovy
--META-INF
----services
------org.codehaus.groovy.transform.ASTTransformation
Then write the fully qualified name of the transformer in the org.codehaus.groovy.transform.ASTTransformation file; for the example above, the fully qualified name would be amanu.LoggingASTTransformation.
Implementation
package amanu

import org.codehaus.groovy.transform.GroovyASTTransformation
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.ast.ClassNode

@GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
class LoggingASTTransformation implements ASTTransformation {

    public void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
        println("*********************** VISIT ************")
        sourceUnit.getAST()?.getClasses()?.each { classNode ->
            // classNode is a class contained in the file being compiled
            classNode.addProperty("field", ClassNode.ACC_PUBLIC, new ClassNode(Class.forName("java.lang.String")), null, null, null)
        }
    }
}
Compilation
After implementing this, you can go two ways. The first approach is to put it in a jar, as you did; the other is to use a Groovy script to compile it before everything else. To do this in Grails, we use the _Events.groovy script.
You can do this from a plugin or from the main project, it doesn't matter. If it doesn't already exist, create a file called _Events.groovy in the scripts directory and add the following content.
The code is copied from reinhard-seiler.blogspot.com, with modifications:
eventCompileStart = { target ->
    ...
    compileAST(pluginBasedir, classesDirPath)
    ...
}

def compileAST(def srcBaseDir, def destDir) {
    ant.sequential {
        echo "Precompiling AST Transformations ..."
        echo "src ${srcBaseDir} ${destDir}"
        path id: "grails.compile.classpath", compileClasspath
        def classpathId = "grails.compile.classpath"
        mkdir dir: destDir
        groovyc(destdir: destDir,
                srcDir: "$srcBaseDir/src/precompiled",
                classpathref: classpathId,
                stacktrace: "yes",
                encoding: "UTF-8")
        copy(toDir: "$destDir/META-INF") {
            fileset(dir: "$srcBaseDir/src/precompiled/META-INF")
        }
        echo "done precompiling AST Transformations"
    }
}
The previous script will compile the transformer before the other classes are compiled. This makes the transformer available for transforming your domain classes.
Don't forget
If you use any classes other than those already on your classpath, you will have to precompile those too. The above script compiles everything in the precompiled directory, so you can also put classes there that don't need AST transformation themselves but are needed by it.
If you want to use domain classes in the transformation, you might want to do the precompilation in the eventCompileEnd block instead, but that will make things slower.
Update
@Douglas Mendes mentioned there is a simpler way to trigger the precompilation, which is more concise:
eventCompileStart = { target ->
    projectCompiler.srcDirectories.add(0, "./src/precompiled")
}

How to make web_ui compile css files automatically

I'm using web_ui, and whenever I change a CSS file in web/css/ it will not be compiled unless I also change the web/index.html file. I guess that's because only the file 'web/index.html' is listed as an entry point in build.dart.
But adding the stylesheet to the entry points list didn't work.
Is there a way to autocompile CSS files every time they are changed without having to edit the .html files?
Keep in mind that you can edit any .dart or .html file and the compiler will run; it doesn't have to be the entry point file.
Autocompilation of CSS files on change can be achieved by passing the compiler the --full flag:
build(['--machine', '--full'], ['web/index.html']);
The --machine flag tells the compiler to print messages to the Dart Editor console. For a full list of flags see Build.dart and the Dart Editor Build System.
This method means that every time a file is changed your entire project is rebuilt instead of using the usual incremental approach. If you have a large project this may take a while. Here is a more comprehensive build file that takes advantage of incremental compilation and only rebuilds the whole project if a CSS file was changed:
// build.dart: build() comes from the web_ui package's build helper.
import 'dart:io';
import 'package:web_ui/component_build.dart';

void main() {
  List<String> args = new Options().arguments;
  bool fullRebuild = false;
  for (String arg in args) {
    if (arg.startsWith('--changed=') && arg.endsWith('.css')) {
      fullRebuild = true;
    }
  }
  if (fullRebuild) {
    build(['--machine', '--full'], ['web/index.html']);
  } else {
    build(args, ['web/index.html']);
  }
}

ILASM does not set FileVersion

I have an .il file which I can compile without any problems. I can strong-name it and so on without any issues. But I am not able to set the file version via the attribute as I would expect. How can I set the FileVersion for an assembly when using ilasm?
If I do a round trip, I always get a .res file which contains only binary data that is not readable. What is inside this .res file, and can I edit it?
This code does not work:
.assembly myAssembly
{
  .custom instance void [mscorlib]System.Reflection.AssemblyFileVersionAttribute::.ctor(string) = { string('1.2.3.4') }
}
The issue can be solved by using the .res file. It is not sufficient to do a round trip with ildasm and ilasm, because the IL file does not reference the .res file; I had to add it to the ilasm call manually. The data in the .res file seems to contain the information that ends up in the PE file's version resource, which is fine for me.
The final command line needed was
ilasm test.il /dll /res:test.res
I still do not know exactly what is inside the .res file, but I can exchange it with the metadata information of any other assembly that I create manually and then decompile, in order to replace the metadata of the original assembly as needed.
It seems not many people are doing this kind of thing.
