Notation3 rules using language tags - localization

I have facts expressed in Turtle/Notation3 syntax that use language tags for localization of strings, e.g.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix ex: <http://www.example.org/#>.
ex:A rdfs:label "example"@en;
rdfs:label "beispiel"@de.
Is it possible, and if so, how could one define rules specific to a given language tag?
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix ex: <http://www.example.org/#>.
{
?s rdfs:label ?v@en. # a mechanism is needed here to select for 'en' lang tag
}
=>
{
?s a ex:EnglishLabeledThing.
}.
Thanks for your help ;)
I tried various variations of the above syntax, accessing properties of langString or LocalizableString, but did not come up with a solution. I also could not find any explanation in the N3 specs. I'm using EYE v2.3.0.

For future reference, I came up with a solution by using func (http://www.w3.org/2007/rif-builtin-function#):
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix ex: <http://www.example.org/#>.
@prefix func: <http://www.w3.org/2007/rif-builtin-function#>.
{
?s rdfs:label ?v.
(?v) func:lang-from-PlainLiteral ?l.
(?l "en") func:compare 0.
}
=>
{
?s a ex:EnglishLabeledThing.
}.
This results in ex:A a ex:EnglishLabeledThing. as expected. However, I also came across pred:matches-language-range, which might be a better fit here.
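For reference, a variant of the rule built on pred:matches-language-range might look like the following. This is an untested sketch: it assumes EYE accepts the RIF predicate with a list subject and a boolean object, the same calling convention used for func: above. One advantage would be that, under RFC 4647 basic filtering, a range such as "en" also matches subtagged forms like "en-US".

```n3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix ex: <http://www.example.org/#>.
@prefix func: <http://www.w3.org/2007/rif-builtin-function#>.
@prefix pred: <http://www.w3.org/2007/rif-builtin-predicate#>.

{
?s rdfs:label ?v.
(?v) func:lang-from-PlainLiteral ?l.
(?l "en") pred:matches-language-range true.
}
=>
{
?s a ex:EnglishLabeledThing.
}.
```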

Related

How to add time series data stored in a csv column to rdf graph

I have a set of multiple files that store time series data.
The time series data is stored as a CSV file with multiple large columns.
E.g.:
Time    Force
1       0.1
2       0.2
3       0.2
4       0.3
...     ...
I would like to make this data accessible using RDF without converting the actual data into RDF. The columns in the file should be related to ontology individuals.
I know I can create a JSON-LD file that stores the meta information of the file. I read that this file is usually stored alongside the CSV file as a "-metadata.json" file.
I made a minimal example for my table:
{
"@context":[
"http://www.w3.org/ns/csvw"
],
"url":"PATH-TO-URL",
"dialect":{
"delimiter":"\t",
"skipRows":4,
"headerRowCount":2,
"encoding":"ISO-8859-1"
},
"tableSchema":{
"columns":[
{
"titles":"Time",
"@id":"Time",
"@type":"Column"
},
{
"titles":"Force",
"@id":"Force",
"@type":"Column"
}
]
}
}
Now I also have an ontology, with individuals representing the time and force of a specific experiment (F1 and T1). This ontology is stored in a triplestore.
E.g.:
@prefix : <http://www.semanticweb.org/example#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@base <http://www.semanticweb.org/example> .
<http://www.semanticweb.org/example> rdf:type owl:Ontology .
#################################################################
# Object Properties
#################################################################
### http://www.semanticweb.org/example#hasDataReference
:hasDataReference rdf:type owl:ObjectProperty .
#################################################################
# Classes
#################################################################
### http://www.semanticweb.org/example#Force
:Force rdf:type owl:Class .
### http://www.semanticweb.org/example#Time
:Time rdf:type owl:Class .
#################################################################
# Individuals
#################################################################
### http://www.semanticweb.org/example#F1
:F1 rdf:type owl:NamedIndividual ,
:Force .
### http://www.semanticweb.org/example#T1
:T1 rdf:type owl:NamedIndividual ,
:Time .
I would like to relate these individuals to the columns described in the "-metadata.json" file, so that the user can locate the specific column by following the graph from the individuals.
What would be the correct way to link the individuals with the column location? Do I need to convert the JSON-LD to Turtle, add it to the triplestore, and then add a relation to the column, e.g. using my "hasDataReference" relation?
I think the file itself could be made accessible using an ontology like dcat.
In general, I am looking for the recommended way to make csv data (e.g. the columns and rows) available via an ontology-based graph.
I do not want to load the entire column into the triplestore, only the location of the column in the file, so that the user can extract the column if needed.
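Since each column in the metadata file already carries an @id, one option along the lines the question suggests is to point the individuals directly at the column nodes. A sketch in Turtle, assuming the metadata file is published at the hypothetical URL http://example.org/data/exp1.csv-metadata.json and that the @id values are written as fragments (e.g. "@id": "#Time") so they resolve against that URL:

```turtle
@prefix : <http://www.semanticweb.org/example#> .

# Hypothetical file URL; the fragments #Time and #Force are assumed to be the
# resolved @id values of the column nodes in the -metadata.json file.
:T1 :hasDataReference <http://example.org/data/exp1.csv-metadata.json#Time> .
:F1 :hasDataReference <http://example.org/data/exp1.csv-metadata.json#Force> .
```

The exact IRIs depend on how the relative @id values resolve against the metadata document's base URL.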

Jena Fuseki missed inference?

I'm using Jena Fuseki 3.13.1 (with OWLFBRuleReasoner), and I have asserted (uploaded) the following triples:
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix f: <http://vleo.net/family#> .
f:Bob f:hasWife f:Alice .
f:Bob f:hasWife f:Alice2 .
f:Alice2 f:hasHusband f:Bob2 .
f:hasWife a owl:FunctionalProperty .
f:hasWife a owl:InverseFunctionalProperty .
f:hasHusband owl:inverseOf f:hasWife .
Now, if I query ASK { f:Alice owl:sameAs f:Alice2 }, I get true.
However, if I ASK { f:Bob owl:sameAs f:Bob2 }, I get false! Loading the same triples into another reasoner (owl-rl), I get the triple f:Bob owl:sameAs f:Bob2 inferred.
What is happening here?
I have worked with the Jena reasoner following this doc:
https://jena.apache.org/documentation/inference/
I have many years of experience with Jena but had never used OWLFBRuleReasoner; it does not appear in the linked doc, which is curious to me.
Not all reasoners support the same constructs, which is why I check the doc; perhaps OWLFBRuleReasoner does not support the same OWL constructs as the other reasoner you used (owl-rl).
Another thing: as I understand it, your KB is inconsistent, because you are declaring
f:hasWife a owl:FunctionalProperty
but assigning two values to it, which must make your KB inconsistent.
Luis Ramos
A follow-up: as suggested by UninformedUser, I've asked this one on the Jena mailing list and got an answer from Dave.
Jena's implementation trades some reasoning completeness for performance; the solution in this case is to explicitly add the forward version of owl:inverseOf to the owl-fb rules file:
[inverseOf2b: (?P owl:inverseOf ?Q), (?X ?P ?Y) -> (?Y ?Q ?X) ]
The details are in this thread.

How to create user defined datatypes in Apache Jena?

I'm creating an ontology using Apache Jena. However, I can't find a way of creating custom datatypes as in the following example:
'has value' some xsd:float[>= 0.0f , <= 15.0f].
Do you have any ideas?
It seems what you need is a DatatypeRestriction with two facet restrictions: xsd:minInclusive and xsd:maxInclusive.
These are OWL 2 constructs.
org.apache.jena.ontology.OntModel does not support OWL 2, only OWL 1.1 partially (see the documentation), and therefore there are no built-in methods for creating such data ranges (there is only the DataOneOf data range expression; see OntModel#createDataRange(RDFList)).
So you have to create a desired datatype manually, triple by triple, using the general org.apache.jena.rdf.model.Model interface.
In RDF, it would look like this:
_:x rdf:type rdfs:Datatype.
_:x owl:onDatatype DN.
_:x owl:withRestrictions (_:x1 ... _:xn).
See also owl2-quick-guide.
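Concretely, for the restriction from the question (xsd:float restricted to [0.0, 15.0]), the triple-by-triple construction corresponds to Turtle along these lines (a sketch; _:dt is an arbitrary blank node):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# Anonymous data range: xsd:float restricted to the interval [0.0, 15.0]
_:dt a rdfs:Datatype ;
    owl:onDatatype xsd:float ;
    owl:withRestrictions ( [ xsd:minInclusive "0.0"^^xsd:float ]
                           [ xsd:maxInclusive "15.0"^^xsd:float ] ) .
```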
Or, to build such an ontology, you can use some external utilities or APIs.
For example, in ONT-API (v. 2.x.x) the following snippet
String ns = "https://stackoverflow.com/questions/54131709#";
OntModel m = OntModelFactory.createModel()
        .setNsPrefixes(OntModelFactory.STANDARD).setNsPrefix("q", ns);
OntDataRange.Named floatDT = m.getDatatype(XSD.xfloat);
OntFacetRestriction min = m.createFacetRestriction(OntFacetRestriction.MinInclusive.class,
        floatDT.createLiteral("0.0"));
OntFacetRestriction max = m.createFacetRestriction(OntFacetRestriction.MaxInclusive.class,
        floatDT.createLiteral("15.0"));
OntDataRange.Named myDT = m.createDatatype(ns + "MyDatatype");
myDT.addEquivalentClass(m.createDataRestriction(floatDT, min, max));
m.createResource().addProperty(m.createDataProperty(ns + "someProperty"),
        myDT.createLiteral("2.2"));
m.write(System.out, "ttl");
will produce the following ontology:
@prefix q: <https://stackoverflow.com/questions/54131709#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
[ q:someProperty "2.2"^^q:MyDatatype ] .
q:MyDatatype a rdfs:Datatype ;
owl:equivalentClass [ a rdfs:Datatype ;
owl:onDatatype xsd:float ;
owl:withRestrictions ( [ xsd:minInclusive "0.0"^^xsd:float ]
[ xsd:maxInclusive "15.0"^^xsd:float ]
)
] .
q:someProperty a owl:DatatypeProperty .

Create a new object using the inference rules

I have a semantic network. Is it possible to use the Jena framework to create a new object in the semantic web based on some rule? For example, if an object has a certain property, then a new object should be created and a connection made between them. Is this possible?
Yes, this is possible in Jena's rule system. Typically, we create such nodes using the makeSkolem Reasoner Builtin Primitive:
[example:
(?a urn:ex:owns ?b)
makeSkolem(?ownership,?a,?b)
->
(?a urn:ex:hasOwnership ?ownership)
(?ownership urn:ex:of ?b)
]
This will create a new blank node in the graph that will be used to reify the <urn:ex:owns> triple. E.g., when given a graph containing the triple <urn:ex:a> <urn:ex:owns> <urn:ex:b> as input, the preceding rule will generate the following graph structure:
<urn:ex:a> <urn:ex:hasOwnership> [
<urn:ex:of> <urn:ex:b>
].
You can also construct URIs in your rule if you have some scheme for generating them.
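For instance, a variant of the rule above could mint a URI with the uriConcat builtin instead of creating a blank node. This is a sketch: uriConcat simply concatenates the lexical forms of its arguments into a URI bound to the last argument, so the 'urn:ex:ownership/' naming scheme here is purely illustrative.

```
[exampleUri:
(?a urn:ex:owns ?b)
uriConcat('urn:ex:ownership/', ?a, ?ownership)
->
(?a urn:ex:hasOwnership ?ownership)
(?ownership urn:ex:of ?b)
]
```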
Java Example
Assuming that so.rules exists on your classpath and contains the rule from above, the following Java code demonstrates custom rules for this task.
// Obtains a list of rules to pass to a rule-based reasoner
// These rules are read from a file.
// This is the most common case.
final List<Rule> rules;
try (final InputStream src = Resources.getResource("so.rules").openStream()) {
    rules = Rule.parseRules(Rule.rulesParserFromReader(
            new BufferedReader(new InputStreamReader(src))));
}
// Create a rule-based reasoner.
// There are multiple types of reasoners available.
// You may prefer some over others (e.g., when performing OWL inference in tandem with custom rules)
final GenericRuleReasoner reasoner =
(GenericRuleReasoner) GenericRuleReasonerFactory.theInstance().create(null);
reasoner.setRules(rules);
// Create a RDF Model to store data in.
// Create an inference model to interact with.
// The inference model will store any added data in the base model.
// The inference model will store inferred triples internally.
final Model baseModel = ModelFactory.createDefaultModel();
final InfModel model = ModelFactory.createInfModel(reasoner, baseModel);
model.prepare();
// Stimulate the rule by introducing the desired triples to the graph
// :a :owns :b
final Property owns = model.createProperty("urn:ex:", "owns");
final Property hasOwnership = model.createProperty("urn:ex:","hasOwnership");
final Property of = model.createProperty("urn:ex:","of");
final Resource a = model.createResource("urn:ex:a");
final Resource b = model.createResource("urn:ex:b");
model.add(a,owns,b);
// Verify that the rule has fired. That is, that we have created some node
// and that the node relates our other two resources
// -> :a :hasOwnership [ :of :b ]
assertTrue(a.hasProperty(hasOwnership));
final Resource createdObject = a.getPropertyResourceValue(hasOwnership);
assertTrue(createdObject.hasProperty(of,b));
If your needs are reasonably simple, you can use SPARQL CONSTRUCT queries, e.g.
CONSTRUCT { ?p :hasGrandfather ?g . }
WHERE {
?p :hasParent ?parent .
?parent :hasParent ?g .
?g :gender :male .
}
will generate triples stating grandfather relations.
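To illustrate with hypothetical data (the individuals and the default prefix below are made up for the example):

```turtle
@prefix : <http://example.com/ns#> .

:Tom  :hasParent :Mary .
:Mary :hasParent :John .
:John :gender    :male .
```

Running the CONSTRUCT query over this data would yield :Tom :hasGrandfather :John .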
If your needs are more sophisticated, you can achieve this with SHACL for which an implementation on top of Jena exists. I will give a brief example. Assume you have the following RDF data:
@prefix ex: <http://example.com/ns#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:InvalidRectangle
a ex:Rectangle .
ex:NonSquareRectangle
a ex:Rectangle ;
ex:height 2 ;
ex:width 3 .
ex:SquareRectangle
a ex:Rectangle ;
ex:height 4 ;
ex:width 4 .
for which you define the following shape file:
@prefix ex: <http://example.com/ns#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dash: <http://datashapes.org/dash#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:Rectangle
a rdfs:Class, sh:NodeShape ;
rdfs:label "Rectangle" ;
sh:property [
sh:path ex:height ;
sh:datatype xsd:integer ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:name "height" ;
] ;
sh:property [
sh:path ex:width ;
sh:datatype xsd:integer ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:name "width" ;
] ;
sh:rule [
a sh:TripleRule ;
sh:subject sh:this ;
sh:predicate rdf:type ;
sh:object ex:Square ;
sh:condition ex:Rectangle ;
sh:condition [
sh:property [
sh:path ex:width ;
sh:equals ex:height ;
] ;
] ;
] .
It will generate the following RDF data:
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<http://example.com/ns#SquareRectangle>
a <http://example.com/ns#Square> .
which you can add to your RDF store.
This example with code can be found here, as well as a more advanced example.

Jena tdbloader assembler

How to load TDB storage with inference via tdbloader.bat (windows, Jena 2.7.3)?
I used this assembler file:
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
[] ja:loadClass "com.hp.hpl.jena.tdb.TDB" .
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB rdfs:subClassOf ja:Model .
<#dataset> rdf:type ja:RDFDataset ;
ja:defaultGraph <#infModel> .
<#infModel> a ja:InfModel ;
ja:baseModel <#tdbGraph>;
ja:reasoner
[ ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ].
<#tdbGraph> rdf:type tdb:GraphTDB ;
tdb:location "DB";
.
My command:
c:\apache-jena-2.7.3\bat>tdbloader --tdb=test.ttl C:\apache-jena-2.7.3\Lubm10\*
I got an exception:
java.lang.ClassCastException: com.hp.hpl.jena.reasoner.rulesys.FBRuleInfGraph cannot be cast to com.hp.hpl.jena.tdb.store.GraphTDB
What is wrong?
(removing the semicolon after "DB" does not help)
It's not clear what you are trying to achieve. tdbloader is a tool for loading triples into a TDB store, prior to processing those triples via your app or SPARQL end-point. Separately, from your app code, you can construct a Jena model which uses the inference engine over a base model from a TDB graph. But I can't see why you are using an inference model at load time. If you look at the exception you are getting:
FBRuleInfGraph cannot be cast to com.hp.hpl.jena.tdb.store.GraphTDB
it confirms that you can't use an inference graph at that stage of the process, and I'm not sure why you would. Unless, of course, you are trying to statically compute the inference closure over the base model and store that in TDB, saving the need for inference computation at runtime. However, if you are trying to do that, I don't believe that can currently be done via the Jena assembler. You'll have to write custom code to do that at the moment.
Bottom line: separate the concerns. Use a plain graph description for tdbloader, use the inference graph at run time.
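A minimal plain-graph assembler for use with tdbloader could then look like this (a sketch for Jena 2.x, derived from the file in the question with the inference model removed):

```turtle
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:  <http://jena.hpl.hp.com/2005/11/Assembler#> .

[] ja:loadClass "com.hp.hpl.jena.tdb.TDB" .

# Plain TDB dataset, no inference wrapper
<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location "DB" .
```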
