OWL API: state an ObjectProperty relation between individuals from imported ontologies

I have an ontology instance that imports other ontology instances, and I'm trying to state a relationship, using an ObjectProperty, between an individual from an import (professors-instance or acm-ccs-lite-core) and an individual from the main ontology instance (curricula-instance).
If I do it by hand using Protege, it creates:
<!-- http://www.semanticweb.org/lsarni/ontologies/professors-instance#Andrés_Calviño -->
<rdf:Description rdf:about="http://www.semanticweb.org/lsarni/ontologies/professors-instance#Andrés_Calviño">
<curricula:inChargeOf rdf:resource="http://www.semanticweb.org/lsarni/ontologies/curricula-instance#Software_Architecture"/>
</rdf:Description>
<!-- http://www.semanticweb.org/lulas/ontologies/2018/acm-ccs-lite-core#10011119 -->
<rdf:Description rdf:about="http://www.semanticweb.org/lulas/ontologies/2018/acm-ccs-lite-core#10011119">
<curricula:taughtIn rdf:resource="http://www.semanticweb.org/lsarni/ontologies/curricula-instance#Databases_1"/>
</rdf:Description>
But when I do it with the OWL API, it creates a NamedIndividual in the main ontology instead and adds the relationship like this:
<!-- http://www.semanticweb.org/lsarni/ontologies/professors-instance#Andrés_Calviño -->
<owl:NamedIndividual rdf:about="http://www.semanticweb.org/lsarni/ontologies/professors-instance#Andrés_Calviño">
<curricula:inChargeOf rdf:resource="http://www.semanticweb.org/lsarni/ontologies/curricula-instance#Software_Architecture"/>
</owl:NamedIndividual>
This is the code I'm using:
// An OWLOntologyManager is needed to load the ontology and apply changes.
OWLOntologyManager man = OWLManager.createOWLOntologyManager();
File file = new File("C:\\Users\\lulas\\Documents\\Curricula Ontology\\curricula-instance.owl");
OWLOntology o = man.loadOntologyFromOntologyDocument(file);
OWLDataFactory df = o.getOWLOntologyManager().getOWLDataFactory();
IRI curriculaIOR = IRI.create("http://www.semanticweb.org/lsarni/ontologies/curricula");
IRI instanceIOR = IRI.create("http://www.semanticweb.org/lsarni/ontologies/curricula-instance");
IRI profInstanceIOR = IRI.create("http://www.semanticweb.org/lsarni/ontologies/professors-instance");
OWLObjectProperty charge = df.getOWLObjectProperty(IRI.create(curriculaIOR + "#inChargeOf"));
OWLIndividual individual = df.getOWLNamedIndividual(IRI.create(profInstanceIOR + "#Andrés_Calviño"));
OWLIndividual course = df.getOWLNamedIndividual(IRI.create(instanceIOR + "#Software_Architecture"));
OWLObjectPropertyAssertionAxiom objAssertion = df.getOWLObjectPropertyAssertionAxiom(charge, individual, course);
AddAxiom addAxiom = new AddAxiom(o, objAssertion);
man.applyChange(addAxiom);
What is the correct way of creating an rdf:Description?
Edit
I'm using Protege version 5.2.0 on Windows.
As you both said, the code was correct. I was using the incorrect IRI for one of the imported ontologies, which is why it behaved as if these NamedIndividuals were different.

An rdf:Description with an rdf:about IRI is equivalent to a named individual, so there is no real difference between the two versions. They will be parsed as the same thing by the OWL API.
Not sure why Protege is outputting it in that format. As Henriette asked in the comment, which version of Protege is doing this?
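To see that the two serializations are read identically, you can load the file and query the individual by its IRI; this is only a minimal sketch, assuming OWL API 4/5 and the same file path as in the question:
OWLOntologyManager man = OWLManager.createOWLOntologyManager();
OWLOntology o = man.loadOntologyFromOntologyDocument(
        new File("C:\\Users\\lulas\\Documents\\Curricula Ontology\\curricula-instance.owl"));
OWLNamedIndividual andres = man.getOWLDataFactory().getOWLNamedIndividual(IRI.create(
        "http://www.semanticweb.org/lsarni/ontologies/professors-instance#Andrés_Calviño"));
// Prints the same inChargeOf assertion whether the subject was serialized as an
// rdf:Description or an owl:NamedIndividual.
o.getObjectPropertyAssertionAxioms(andres).forEach(System.out::println);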

Related

Not able to make Vaadin Tree Grid work with File System Data Provider

I have a requirement wherein I need to display a tree containing the user's home folder hierarchy, including files and folders. I have been trying to use the Vaadin TreeGrid and FileSystemDataProvider for this purpose.
I am using Vaadin 21.0.0 with this dependency:
<dependency>
<groupId>org.vaadin.filesystemdataprovider</groupId>
<artifactId>filesystemdataprovider</artifactId>
<version>3.0.0</version>
</dependency>
The code is as follows:
String path = System.getProperty("user.home");
File rootFile = new File(path);
FilesystemData root = new FilesystemData(rootFile, false);
FilesystemDataProvider fileSystem = new FilesystemDataProvider(root);
TreeGrid<File> tree = new TreeGrid<>();
tree.setDataProvider(fileSystem);
add(tree);
However, the TreeGrid displays blank (the tree structure is not shown) when running the program. What might I be doing wrong?
You need to configure your TreeGrid to have columns, for example:
public GridView() {
String path = System.getProperty("user.home");
File rootFile = new File(path);
FilesystemData root = new FilesystemData(rootFile, false);
FilesystemDataProvider fileSystem = new FilesystemDataProvider(root);
TreeGrid<File> tree = new TreeGrid<>();
tree.setDataProvider(fileSystem);
tree.addHierarchyColumn(file -> file.getName()).setHeader("Name");
tree.addColumn(file -> file.length()).setHeader("Size");
tree.setWidth("750px");
tree.setHeight("500px");
setSizeFull();
setAlignItems(Alignment.CENTER);
setJustifyContentMode(JustifyContentMode.CENTER);
add(tree);
}
If you didn't give a bean class to the Grid/TreeGrid's constructor, you have to add columns explicitly through one of the addColumn or addHierarchyColumn methods. When columns are not added this way, the Grid is rendered as blank on the page.
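Put differently, the smallest change to the snippet from the question is to add a hierarchy column before adding the grid to the layout; a minimal sketch, reusing the fileSystem provider built above:
TreeGrid<File> tree = new TreeGrid<>();
tree.setDataProvider(fileSystem);
// Without at least one (hierarchy) column the grid body has nothing to render.
tree.addHierarchyColumn(File::getName).setHeader("Name");
add(tree);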

Date/Time placeholder localization

I am working on the localization of the Date/Time placeholders in the Form Runner of Orbeon 2018 (it seems this has not changed in 2019 or 2020 either).
What I am looking for is defined in the orbeon-form-runner.jar\xbl\orbeon\date\date.xbl file (and time/time.xbl, but for now I think it is enough to discuss the first one), more specifically here:
<xf:var
name="placeholder"
value="
let $format := xxf:property('oxf.xforms.format.input.date'),
$cleaned := translate($format, '[01]', ''),
$duplicate := replace(replace(replace($cleaned,
'M', 'MM'),
'D', 'DD'),
'Y', 'YYYY'),
$format-en := instance('orbeon-resources')/resource[@xml:lang = 'en']/format,
$format-lang := xxf:r('format'),
$translated := translate($duplicate, $format-en, $format-lang)
return
$translated
"/>
<xh:input type="text" placeholder="{$placeholder}" id="input"/>
The placeholder variable is assembled and set on the HTML input; that much is clear.
In my language, YYYY, MM and DD are not the right placeholders for the date parts, so my requirement is to change them depending on the current request locale.
At first I tried to extend the labels in apps/fr/i18n/resource.xml, and I replaced the static 'MM', 'DD', etc. constants with xxf:r('components.labels.MM', '|fr-fr-resources|') and similar expressions, without any success (the placeholder was displayed, but it was the same default placeholder that was visible before my modification).
My second approach was to put these labels in the same file and refer to them the same way, with xxf:r('MM'); no success (the same result as in the first case).
My third approach, which is where I am now, was to try to hardcode these static values and fix the labels only for my locale (using an xsl:choose), and here I am stuck: I can't find how on earth to grab the request locale here (in the context of XBL files).
Neither the <xf:var name="lang" value="xxf:instance('fr-language-instance')"/> nor the <xf:var name="fr-lang" value="xxf:instance('fr-fr-language-instance')"/> variable pointed to the right current request locale (both showed "en").
Do you have any idea how to solve this properly?
You define the input format through the oxf.xforms.format.input.date property, and there can be only one input format, which can't depend on the current language.
In the placeholder, the component shows the format you defined through oxf.xforms.format.input.date, but changes the letters M (month), D (day), and Y (year) to match the current language. That is done by adding a resource to orbeon-resources, which currently has:
<resource xml:lang="en"><format>MDY</format></resource>
<resource xml:lang="fr"><format>MJA</format></resource>
<resource xml:lang="de"><format>MTJ</format></resource>
<resource xml:lang="pl"><format>YMD</format></resource>

Which settings should be used for TokensregexNER

When I try regexner, it works as expected with the following settings and data:
props.setProperty("annotators", "tokenize, cleanxml, ssplit, pos, lemma, regexner");
Bachelor of Laws DEGREE
Bachelor of (Arts|Laws|Science|Engineering|Divinity) DEGREE
What I would like to do is the same thing using TokensRegex. For example:
Bachelor of Laws DEGREE
Bachelor of ([{tag:NNS}] [{tag:NNP}]) DEGREE
I read that to do this, I should use TokensregexNERAnnotator.
I tried to use it as follows, but it did not work.
Pipeline.addAnnotator(new TokensRegexNERAnnotator("expressions.txt", true));
Or I tried setting the annotator in another way:
props.setProperty("annotators", "tokenize, cleanxml, ssplit, pos, lemma, tokenregexner");
props.setProperty("customAnnotatorClass.tokenregexner", "edu.stanford.nlp.pipeline.TokensRegexNERAnnotator");
I tried different TokensRegex formats, but either the annotator could not find the expression or I got a SyntaxException.
What is the proper way to use TokensRegex (to query on tokens with tags) in an NER data file?
BTW, I just saw a comment in the TokensRegexNERAnnotator.java file. Not sure if it is related, but perhaps POS tag patterns do not work with RegexNERAnnotator:
if (entry.tokensRegex != null) {
// TODO: posTagPatterns...
pattern = TokenSequencePattern.compile(env, entry.tokensRegex);
}
First you need to make a TokensRegex rule file (sample_degree.rules). Here is an example:
ner = { type: "CLASS", value: "edu.stanford.nlp.ling.CoreAnnotations$NamedEntityTagAnnotation" }
{ pattern: (/Bachelor/ /of/ [{tag:NNP}]), action: Annotate($0, ner, "DEGREE") }
To explain the rule a bit: the pattern field specifies what to match. The action field says to annotate every token in the overall match ($0 represents the overall match), the second argument is the field to annotate (the ner key we bound with ner = ... at the top of the rule file), and the third argument says to set that field to the String "DEGREE".
Then make this .props file (degree_example.props) for the command:
customAnnotatorClass.tokensregex = edu.stanford.nlp.pipeline.TokensRegexAnnotator
tokensregex.rules = sample_degree.rules
annotators = tokenize,ssplit,pos,lemma,ner,tokensregex
Then run this command:
java -Xmx8g edu.stanford.nlp.pipeline.StanfordCoreNLP -props degree_example.props -file sample-degree-sentence.txt -outputFormat text
You should see that the three tokens you wanted tagged as "DEGREE" will be tagged.
I think I will push a change to the code to make tokensregex link to the TokensRegexAnnotator so you won't have to specify it as a custom annotator.
But for now you need to add that line in the .props file.
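Since you were building the pipeline in Java rather than from the command line, the equivalent programmatic setup would be roughly the following; this is only a sketch, assuming the same sample_degree.rules file and the property names shown above:
Properties props = new Properties();
props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,tokensregex");
// Register TokensRegexAnnotator under the "tokensregex" annotator name.
props.setProperty("customAnnotatorClass.tokensregex",
        "edu.stanford.nlp.pipeline.TokensRegexAnnotator");
props.setProperty("tokensregex.rules", "sample_degree.rules");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

Annotation doc = new Annotation("She holds a Bachelor of Science.");
pipeline.annotate(doc);
// Print each token with its (possibly TokensRegex-overwritten) NER tag.
for (CoreLabel token : doc.get(CoreAnnotations.TokensAnnotation.class)) {
    System.out.println(token.word() + "\t" + token.ner());
}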
This example should help in implementing this. Here are some more resources if you want to learn more:
http://nlp.stanford.edu/software/tokensregex.shtml#TokensRegexRules
http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ling/tokensregex/SequenceMatchRules.html
http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ling/tokensregex/types/Expressions.html

How should I use getDatatypeProperty(String PropertyName) in Jena?

I have an OWL ontology which has a DatatypeProperty "hasAge".
Can anyone tell me why this code returns a null value?
String URI = "http://owldl.com/ontologies/dl-safe.owl";
DatatypeProperty data = model.getDatatypeProperty(URI + "hasAge");
data is null!
Although this line exists in the OWL file:
<!-- http://owldl.com/ontologies/dl-safe.owl#hasAge -->
<owl:DatatypeProperty rdf:about="&dl-safe;hasAge"/>
This ontology works just fine with ObjectProperties. However, it does not seem to work with DatatypeProperties.
You're missing a # there.
URI+"hasAge" is going to be:
http://owldl.com/ontologies/dl-safe.owlhasAge
But as the comment indicates, the property's URI is:
http://owldl.com/ontologies/dl-safe.owl#hasAge
So change it to:
String URI = "http://owldl.com/ontologies/dl-safe.owl#";

How to insert RDF into Virtuoso via Jena?

After I generate my RDF triples, I want to insert them into the Virtuoso triple store via Jena.
....
model.write(System.out,"RDF/XML");
....
url = "jdbc:virtuoso://localhost:1111";
VirtGraph set = new VirtGraph (url, "dba", "dba");
Query sparql = QueryFactory.create("?????");
VirtuosoQueryExecution vqe = VirtuosoQueryExecutionFactory.create (sparql, set);
vqe.exec();
How can I do this?
The documentation for the Virtuoso Jena Provider includes a sample program VirtuosoSPARQLExample8 demonstrating how to insert triples into a graph.
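One way to do the insert from Jena is to open a VirtGraph for the target graph and copy your triples into it. A rough sketch, assuming the Virtuoso Jena provider jars (virt_jena and virtjdbc) are on the classpath, model is the Model you serialized above, and the graph name is just an example:
// Open (or create) the named graph in Virtuoso over JDBC.
VirtGraph graph = new VirtGraph("http://example.org/my-graph",
        "jdbc:virtuoso://localhost:1111", "dba", "dba");
// Wrap it in a plain Jena Model and copy every triple into Virtuoso.
Model virtuosoModel = ModelFactory.createModelForGraph(graph);
virtuosoModel.add(model);
graph.close();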
