How to upload a PIE file successfully in GraphDB by Ontotext

I have a PIE file which is used for inference in GraphDB by Ontotext. I have written the ruleset correctly, and uploading the file seems to go fine. But while creating the repository, it shows "Invalid Ruleset file. Please upload valid one". I think the issue is related to a hidden character present inside the file. How do I get rid of such characters? My file content is:
Prefices
{
rdf : http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl : http://www.w3.org/2002/07/owl#
abc : http://www.xyzabc.com/schema/abcentity#
}
Axioms
{
<abc:isLocatedIn> <rdf:type> <owl:ObjectProperty>
}
Rules
{
Id: isLocatedInHierarchy
a <abc:isLocatedIn> b [Constraint a != b]
b <abc:isLocatedIn> c [Constraint b != c]
a <abc:isLocatedIn> c [Constraint a != c]
}

"hidden character present inside the file"
Do you mean a Unicode BOM mark? Get an editor that can save without such a mark (I strongly recommend AkelPad: http://akelpad.sourceforge.net/), or just save in ASCII.
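If you'd rather strip the BOM programmatically, here is a minimal Java sketch (the file name is a placeholder; it assumes the file is UTF-8 with a potential EF BB BF prefix):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StripBom {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("ruleset.pie"); // placeholder file name
        byte[] bytes = Files.readAllBytes(path);
        // a UTF-8 BOM is the three-byte sequence EF BB BF at the very start of the file
        if (bytes.length >= 3
                && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB
                && (bytes[2] & 0xFF) == 0xBF) {
            byte[] stripped = new byte[bytes.length - 3];
            System.arraycopy(bytes, 3, stripped, 0, stripped.length);
            Files.write(path, stripped); // rewrite the file without the BOM
        }
    }
}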
BTW, writing PIE files with per-property rules is not a good idea. Instead, use a generic rule for transitive properties and then declare abc:isLocatedIn transitive in your ontology. The cheapest builtin ruleset in which such a rule is included is rdfsPlus-optimized. If you select it, then you add this to your ontology:
abc:isLocatedIn a owl:TransitiveProperty.
However, it's a better idea to keep a "step" property abc:isLocatedIn and then a transitive property on top of it, e.g. abc:isLocatedTransitive:
abc:isLocatedTransitive a owl:TransitiveProperty.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
Finally, there's a more efficient way to compute the transitive closure, see http://rawgit2.com/VladimirAlexiev/my/master/pubs/extending-owl2/index.html#sec-3-1:
abc:isLocatedTransitive ptop:transitiveOver abc:isLocatedIn.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
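For reference, the ptop:transitiveOver rule from that paper looks like this in PIE syntax (a sketch; it assumes a ptop prefix declaration alongside the ones already in your file):

Id: transitiveOver
  p <ptop:transitiveOver> q
  x p y
  y q z
  ------------------------------------
  x p z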

I was also able to upload your .pie file successfully. Maybe the issue is related to the computer locale or something else in the environment. If you are using Windows, Notepad++ seems like a logical choice; I believe there is an option to view all the hidden characters, but I've never used it. If you are using Linux there are plenty of choices, even included ones like vim or nano, which will work just fine.

Related

How to get a field's type by using CDT parser

I'm trying to extract C++ source code information, one piece being a field's type.
When the source code looks like the snippet below, I want to extract the type of info when info.call() is called.
Info info;
//skip
info.call(); //<- from here
By making a visitor which visits IASTName nodes, I tried to extract the type info as shown below.
public class CDTVisitor extends ASTVisitor {
    public CDTVisitor(boolean visitNodes) {
        super(true);
    }

    @Override
    public int visit(IASTName node) {
        if (node.resolveBinding().getName().toString().equals("info"))
            System.out.println(((IField) node.getBinding()).getType());
        // this does not work properly:
        // the result is "org.eclipse.cdt.internal.core.dom.parser.ProblemType@86be70a"
        return PROCESS_CONTINUE;
    }
}
Assuming the code is in fact valid, a variable's type resolving to a ProblemType is an indication of a configuration problem in whatever tool or plugin is running this code, or in the project/workspace containing the code on which it is run.
In this case, the type of the variable info is Info, which is presumably a class or structure type, or a typedef. To resolve it correctly, CDT needs to be able to see the declaration of this type.
If this type is not declared in the same file that's being analyzed, but rather in a header file included by that file, CDT needs to use the project's index to find the declaration. That means:
The AST must be index-based. For example, if using ITranslationUnit.getAST to create the AST, the overload that takes an IIndex parameter must be used, and a non-null argument must be provided for it.
Since an IIndex is associated with a CDT project, the code being analyzed needs to be part of a CDT project, and the project needs to be indexed.
In order for the indexer to resolve #include directives correctly, the project's include paths need to be configured correctly, so that the indexer can actually find the right header files to parse.
Any one of these not being the case can lead to a type resolving to a ProblemType.
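For example, a minimal sketch of creating an index-based AST (assuming tu is the ITranslationUnit of the file being analyzed and cproject its ICProject; the same calls appear in the longer answer below):

IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
index.acquireReadLock(); // the index must be read-locked while the AST is created and used
try {
    IASTTranslationUnit ast = tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
    // resolve bindings here, while the lock is held
} finally {
    index.releaseReadLock();
}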
Self response.
The reason I couldn't get a binding object was the type of the AST.
When parsing C++ source code, I should have used ICPPASTTranslationUnit.
In my code I had used IASTTranslationUnit as the return type of the AST.
After using ICPPASTTranslationUnit instead of IASTTranslationUnit, I solved this problem.
Yes, I figured it out! Here is the entire code, which can index all files in the "src" folder of a C++ project and output the resolved type binding for all code expressions, including the return value of low-level APIs such as memcpy. Note that the project variable in the following code is created by programmatically importing an existing, manually configured C++ project. I often manually create an empty C++ project and programmatically import it as a general project (once imported, Eclipse will automatically detect the project type and complete the relevant C++ project configuration). This is much more convenient than creating and configuring a C++ project from scratch programmatically. When importing the project, you had better not copy the project or containment structures into the workspace, because this may lead to infinitely copying the same project into subfolders (infinite folder depth). The code works in Eclipse 2021-12. I downloaded Eclipse for C++, installed the plugin-development and JDT plugins, then created an Eclipse plugin project and extended the "org.eclipse.core.runtime.applications" extension point.
In other words, it is an Eclipse application plugin project which can use nearly all features of Eclipse but does not start the graphical user interface (UI) of Eclipse. You should add all CDT-related non-UI plugins as dependencies, because new versions of Eclipse no longer automatically add missing plugins.
ICProject cproject = CoreModel.getDefault().getCModel().getCProject(project.getName());
// this code creates an index for the entire project.
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
IFolder folder = project.getFolder("src");
IResource[] rcs = folder.members();
// iterate over all source files in the src folder and visit all expressions to print the resolved type binding.
for (IResource rc : rcs) {
    if (rc instanceof IFile) {
        IFile f = (IFile) rc;
        ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(f);
        index.acquireReadLock(); // we need a read-lock on the index
        ICPPASTTranslationUnit ast = null;
        try {
            ast = (ICPPASTTranslationUnit) tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
        } finally {
            index.releaseReadLock();
        }
        if (ast != null) {
            ast.accept(new ASTVisitor(true) { // true: visit all node kinds, so visit(IASTExpression) fires
                @Override
                public int visit(IASTExpression expression) {
                    // get the resolved type binding of the expression.
                    IType etp = expression.getExpressionType();
                    System.out.println("IASTExpression type:" + etp + "#expr_str:" + expression.toString());
                    return super.visit(expression);
                }
            });
        }
    }
}

Parsing LLVM IR code (with debug symbols) to map it back to the original source

I'm thinking about building a tool to help me visualise the generated LLVM-IR code for each instruction/function on my original source file.
Something like this but for LLVM-IR.
The steps to build such a tool so far seem to be:
Start with an LLVM-IR AST builder.
Parse the generated IR code.
On caret position, get the AST element.
Read the element's scope, line, column and file, and signal it on the original source file.
Is this the correct way to approach it? Am I trivialising it too much?
I think your approach is quite correct. The UI part will probably take quite long to implement, so I'll focus on the LLVM part.
Let's say you start from an input file containing your LLVM-IR.
Step 1: process the module.
Read the file content into a string, then build a module from it and process it to get the debug info:
// context is an llvm::LLVMContext* created elsewhere
std::unique_ptr<llvm::MemoryBuffer> buf = llvm::MemoryBuffer::getMemBuffer(llvm::StringRef(fileContent));
llvm::SMDiagnostic diag;
std::unique_ptr<llvm::Module> module = llvm::parseIR(buf->getMemBufferRef(), diag, *context); // returns null on error, with details in diag
llvm::DebugInfoFinder dif;
dif.processModule(*module);
Step 2: iterate over instructions.
Once done with that, you can simply iterate over functions, basic blocks and instructions:
// iterate over every function, basic block and instruction in the module
for (llvm::Function &f : *module)
{
    for (llvm::BasicBlock &b : f)
    {
        for (llvm::Instruction &inst : b)
        {
            const llvm::DebugLoc &dl = inst.getDebugLoc();
            if (dl) // instructions without debug info have an empty DebugLoc
            {
                unsigned line = dl.getLine();
                // accordingly populate some dictionary between your instructions and source code
            }
        }
    }
}
Step 3: update your UI.
This is another story...

JacORB: changing prefix and suffix

I would like to change the package prefix and suffix in my Ant build while generating Java from IDL. This has to be a generic solution! The idea goes like this:
I have IDL files (ONE.idl, TWO.idl) with namespace ONE_cb in the first and TWO_cb in the second (the _cb suffix is required for C++ compatibility). TWO_cb has attributes from ONE_cb; ONE_cb has only basic types. I want to change that to packages like com.example.ONE and com.example.TWO.
I'm using JacORB 3.6 and I don't know how to do it.
My code looks like this:
<target name="idlj-generate">
    <idl2java
        srcdir="${psm.dir}/${project}/"
        destdir="${build.generated.dir}"
        includepath="${psm.dir}"
        all="true">
        <define key="__JACORB_GENERATE__"/>
        <i2jpackage names=":com.example"/>
        <i2jpackage names="_cb:"/>
    </idl2java>
</target>
It doesn't work. As I stated before, it has to be a generic solution. Adding
<i2jpackage names="TWO_cb:TWO"/> <!-- option 2 -->
<i2jpackage names="ONE_cb:ONE"/> <!-- option 2b -->
is not acceptable.
Thank you for your time.
If I understand you correctly you have something like
module ONE_cb
{
...
}
but you want it to be
com.example.ONE { ... }
This is feasible with i2jpackage e.g.
idl -forceOverwrite -d /tmp/generated -i2jpackage ONE_cb:com.example.ONE myfile.idl
The problem you have is that you are compiling both files at once. Remove the "all" and try compiling them in two phases.
If you are using Maven I would also recommend trying org.codehaus.mojo:idlj-maven-plugin as you can do multiple executions very easily with that.
To use multiple i2jpackage mappings, I got it working with
idl -forceOverwrite -d /tmp/generated -all -i2jpackagefile /tmp/file antBugJac608-2.idl
(where antBugJac608-2 #includes antBugJac608).
After various research I concluded that a generic solution is impossible.
The only way to change the prefix and suffix at the same time is to explicitly set all included names.

Is there a DXL API to get the reference count of opened modules?

The "Manage Open Modules" dialog of DOORS 8.3 lists all open modules, their mode, if visible, etc. and the number of references. I want to use that reference count to decide if my script can securely close the module and to avoid closing if it is currently in use. I'm not sure what the "References" column displays exactly. I didn't find a description of it in the help or corresponding informations on the internet. Does anybody know if there is some undocumented DXL API which gives me access to that information?
Edit: I found the function refcount_ which returns an integer. But I have no idea what the return value means.
It looks like References refers to the number of open modules currently referencing that module. For example: when you open a module that has links, DOORS also opens in the background all of the Link Modules that the links use. So if I open a document that has links through LINKMOD_A, LINKMOD_A will show 1 reference. If I then open another document that has links through that same LINKMOD_A the number of references will increase to 2. I do not see the number of references ever higher than 1 on a Formal Module. Try this on some of your modules and see when you get more than one reference on a link module, then run your refcount_ function against that link module and see if you get the same number. I am not sure if that is the function you are looking for but it is certainly possible. Good Luck!
I assume your script is opening the modules, so all you need to do is check whether each module is already open first.
string sModuleFullName = "/Some/Module/Path"
Module oModule = module(sModuleFullName)
// only close the module later if it was not already open
bool bClose = null(oModule)
if(bClose) {
    oModule = read(sModuleFullName, true, true)
}
// do stuff
if(bClose) {
    close(oModule)
}
Edit:
An alternative method for closing modules opened by triggers, attribute or layout DXL:
// save the full names of the currently open modules in a skip list
Skip oOpenModulesSkip = createString()
Module oModule
for oModule in database do {
    put(oOpenModulesSkip, fullName(oModule), fullName(oModule))
}
// do stuff
// close any modules that are not in the skip list
for oModule in database do {
    if(!find(oOpenModulesSkip, fullName(oModule))) {
        close(oModule, false)
    }
}
delete(oOpenModulesSkip)

How to write a simple .txt content processor in XNA?

I don't really understand how Content importer/processor works in XNA.
I need to read a text file (Content/levels/level1.txt) of the form:
x x
x x
x x
where the x's are just integers, into an int[,] array.
Any tips on writing a SIMPLE .txt importer? Searching Google/MSDN I only found .x/.fbx file importer examples, and they seem too complicated.
Do you actually need to process the text file? If not, then you can probably skip most of the content pipeline.
Something like:
string filename = "Content/TextFiles/sometext.txt";
string path = Path.Combine(StorageContainer.TitleLocation, filename);
string lineOfText;
using (StreamReader sr = new StreamReader(path)) // dispose of the reader when done
{
    while ((lineOfText = sr.ReadLine()) != null)
    {
        // do something with lineOfText
    }
}
Also, be sure to set the "Build Action" to "None" and the "Copy to Output Directory" to "Copy if newer" on the text files you've added. This tells the content pipeline not to compile the text file but rather copy it to the output directory for use as is.
I got this (more or less) from the RacingGame sample provided by Microsoft. It foregoes much of the content pipeline and simply loads and processes text files (XML) for much of its level data.
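From there, parsing the question's grid of integers into an int[,] is straightforward. A minimal sketch, not from the original answers (it assumes whitespace-separated integers and lines of equal length):

string[] lines = System.IO.File.ReadAllLines(path); // or collect the lines from the StreamReader above
int rows = lines.Length;
int cols = lines[0].Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
int[,] level = new int[rows, cols];
for (int r = 0; r < rows; r++)
{
    string[] tokens = lines[r].Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    for (int c = 0; c < cols; c++)
    {
        level[r, c] = int.Parse(tokens[c]);
    }
}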
XNA 4.0 uses
System.IO.Stream stream = TitleContainer.OpenStream("tilename.txt");
See http://msdn.microsoft.com/en-us/library/bb199094.aspx and also http://blogs.msdn.com/b/shawnhar/archive/2010/12/09/reading-files-in-xna-game-studio-4-0.aspx
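For line-based reading you can wrap that stream in a StreamReader; a short sketch (the file name is a placeholder):

using (System.IO.Stream stream = TitleContainer.OpenStream("Content/levels/level1.txt"))
using (System.IO.StreamReader sr = new System.IO.StreamReader(stream))
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        // parse each line as needed
    }
}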
There doesn't seem to be a lot of info out there, but this blog post does indicate how you can load .txt files through code using XNA.
Hopefully this can help you get the file into memory, from there it should be straightforward to parse it in any way you like.
XNA 3.0 - Reading Text Files on the Xbox
http://www.ziggyware.com/readarticle.php?article_id=69 is probably a good place to start. It covers creating a basic content processor.
