How to write a Snap SPARQL query when using Pellet as reasoner - ontology

I am learning about ontologies for study purposes, and I ran into a problem when querying.
I use Pellet as the reasoner because I used some SWRL built-ins.
I have a class named resep_makanan and I want to get all of its instances, so I wrote:
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX :<http://www.semanticweb.org/astrid/ontologies/2019/5/mpasiv2#>
SELECT ?resep_makanan
WHERE {
?resep_makanan rdf:type :resep_makanan.
}
ORDER BY ?resep_makanan
Running this query produces an error (the original post linked to a screenshot of it).
So, how should I write it correctly?
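For reference, the query itself is syntactically valid SPARQL; a minimal variant (assuming the same ontology IRI) that renames the variable so it is not mistaken for the class name:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX :    <http://www.semanticweb.org/astrid/ontologies/2019/5/mpasiv2#>

SELECT ?instance
WHERE { ?instance rdf:type :resep_makanan . }
ORDER BY ?instance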

Related

What are the Jena OWL Reasoners Limitations?

I have been making some tests using the Jena OWL Reasoner, but I don't understand some of the results I obtained. For example, if I have the following KB:
Class A
Class B
Class C rdfs:subClassOf A
A owl:disjointWith B
...and if I ask "C owl:disjointWith B?" of the inference model, the answer should be "yes", but the Jena OWL Reasoner answers no. I check this using...
if (infmodel.contains(A, OWL.disjointWith, C)) {
...
}
....
So, are there limitations on making inferences with this reasoner?
Thanks
Your query contains A owl:disjointWith C, which cannot be inferred from your ontology. Are you sure it's the correct query?
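The entailment described in the question can also be checked directly with a SPARQL ASK over the inference model; a minimal sketch, assuming the classes live in a hypothetical ex: namespace:

```sparql
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX ex:  <http://example.com/kb#>

# True only if the reasoner has derived that C and B are disjoint
ASK { ex:C owl:disjointWith ex:B }
```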

SWRL and Virtuoso

I'm looking for a clear approach to using SWRL in a Virtuoso server. For example, I designed an ontology using Protege 4.3 and wrote the SWRL rules using the Rules tab in Protege.
Product(?P),hasName(?P,?N),inGroupB(?P,?B)->hasBug(?P)
I uploaded my RDF data (~3GB) into the Virtuoso server, along with the ontology schema. I tried to retrieve the data that is supposed to be inferred based on the rules in the ontology, but the query returns empty results. Here is an example of a SPARQL query that should return the relation inferred from the rule above:
DEFINE input:inference <http://example.com/2/owl>
PREFIX e:<http://example.com/e/>
SELECT *
FROM <http://example.com/2/data>
WHERE
{
?P a e:Product ;
e:hasBug ?B
}
I believe my problem is in integrating the pieces (RDF data, OWL schema, and SWRL rules). I used Jena and the Virtuoso Jena driver to load the data and ontology and run the SPARQL queries. Any advice on how to get the reasoning part working properly?
Virtuoso 7.x does not support SWRL.
Virtuoso 8.x implements SPIN, into which SWRL can be translated, among other complex reasoning.
See Creating Custom Inference Rules using the SPIN Vocabulary and Virtuoso 8.0 for one walk-through.
Your rough SWRL above translates roughly to --
CONSTRUCT { ?P <hasBug> ?B }
WHERE
{
?P a <Product> ;
<hasName> ?N ;
<inGroupB> ?B .
}
-- or --
CONSTRUCT { ?P a <BuggyProduct> }
WHERE
{
?P a <Product> ;
<hasName> ?N ;
<inGroupB> ?B .
}
Once you have a SPARQL CONSTRUCT, making a Custom Inference Rule boils down to a few steps:
Describe (with a collection of RDF statements in a Turtle doc) your Rule using SPIN terms
EXEC ('SPARQL ' || SPARQL_SPIN_GRAPH_TO_DEFSPIN('{turtle-doc-with-rule-description-iri'))
Test your Rule
More complete user documentation is in progress; you can get assistance via the Virtuoso Users mailing list or the OpenLink Support Case System.

How to add Datalog rules to Pellet reasoner in Jena?

I have multiple personal inference rules in Datalog form.
I can extend Jena's GenericRuleReasoner to take them into account during the inference step. Here is an example of code to do that:
String rules = "[r1: (?e1 st:runningTask st:gic_eth0) -> (?e1 rdf:type st:dataFromEthernet2IP)]";
Reasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
reasoner.setDerivationLogging(true);
InfModel inf = ModelFactory.createInfModel(reasoner, rawData);
Now I want to use the Pellet reasoner, since it is easy to plug into Jena. Is Pellet extensible in the way GenericRuleReasoner is, and if so, how do I add my Datalog rules to it?

Best way to query Individuals from an Inferred Jena Ontology

I created an ontology based on security alerts.
After reading in some data (individuals) it got pretty big, so I decided to use a Jena rule reasoner to derive some facts. I mostly give individuals types and attributes, and I use some regexes. Here's a small (constructed) example which gives an individual the type "Multiple" when its information matches the regex:
[testRuleContent: (?X ns:hasClassification ?Y), (?Y ns:hasText ?Z), regex(?Z, '.Multiple.')
-> (?X rdf:type ns:Multiple)]
To use the reasoner I create an InfModel based on my previously loaded ontology:
//read rules from file (rd is a BufferedReader over the rules file)
List<Rule> ruleList = Rule.parseRules(Rule.rulesParserFromReader(rd));
com.hp.hpl.jena.reasoner.Reasoner reasoner = new GenericRuleReasoner(ruleList);
//jenaOntology is the ontology with the data
InfModel inferredOntotlogy = ModelFactory.createInfModel(reasoner, jenaOntology);
inferredOntotlogy.prepare();
This works without a problem, and I can write the InfModel to a file with the added types.
What is the preferable method to query the inferred ontology for certain individuals (in this example, those with the type "Multiple")?
At the moment I use listStatements() on the inferred model:
Resource multiple = inferredOntotlogy.getResource("file:/C:/ns#Multiple");
StmtIterator iter = inferredOntotlogy.listStatements(null, RDF.type, multiple);
while (iter.hasNext()) {
//advance the iterator once per loop, then reuse the statement
Resource subject = iter.next().getSubject();
//Individual ind = subject.as(Individual.class);
String indUri = subject.getURI();
}
The cast throws an exception (the resource is only a node with the URI). But I get the valid URI of the individual and could work with the basic ontology model, without the new properties (I only need them to find the individual I'm searching for, so that is a possible solution).
A similar attempt would be to use getDeductionsModel() on the Inferred Model to get a Model -> OntModel and query it (potentially with SPARQL).
But I'd prefer an easy way to query the inferred model. Is there such a solution? Or can you give me a tip on how best to handle this situation?
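One common option is to run SPARQL directly against the InfModel (for example via Jena's QueryExecutionFactory); a minimal sketch, assuming the ns: prefix maps to file:/C:/ns#:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ns:  <file:/C:/ns#>

SELECT ?ind
WHERE { ?ind rdf:type ns:Multiple . }
```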
I will just work with the resources for now. They provide all the functionality I need. I should have taken a closer look at the API.
I answered my own question so I can mark it as solved tomorrow.

How to create bilingual ontology in Protege?

I want to create a web application based on the semantic web. The knowledge base is an ontology. My problem is that my application has to support two languages (English and Romanian). At the moment the only solution I have is to create two different ontologies (with the same values, only translated), but I think it should be possible to create one ontology that supports both languages.
So I want to find out how to do this in Protege.
Can you help me with any ideas?
Thanks (sorry for my bad English)
It depends what you mean by a bilingual ontology. If you want to have an ontology in which the concepts can be presented to both an English reader and a Romanian reader, but there's only one base set of concepts, then it's fairly easy. The identity of the concept is expressed as a URI, for example:
<http://example.com/ontology/animals/Dog> a owl:Class .
There's only one concept denoting dogs, but you add labels and comments that will allow the class to be presented to users in either language:
<http://example.com/ontology/animals/Dog>
a owl:Class ;
rdfs:label "dog"@en ;
rdfs:label "câine"@ro ;
rdfs:comment "Denotes the class of all dogs"@en ;
rdfs:comment "Denotă clasa tuturor câinilor"@ro .
Apologies for the Romanian - it was from Google Translate. I hope it makes sense! I don't use Protege, but you should be able to add labels and comments in multiple languages - it's a basic facility in RDF.
If, on the other hand, you want to have one ontology that contains concepts drawn from both languages, that can be harder. I don't know Romanian (see above!), so I don't know if this is the case here, but some languages conceptualise the world in quite different ways, so defining a single ontology that merges concepts from both world-views can be difficult. It's also the case that, while RDF and OWL use UTF-8 as their base encoding, you may find some tools that aren't comfortable processing URIs that contain characters outside of US-ASCII. That shouldn't happen in theory, but you might want to be careful just in case.
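With language-tagged labels in place, a client can select the label for one language at query time; a minimal SPARQL sketch, assuming the Dog class IRI from the example above:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?label
WHERE {
  <http://example.com/ontology/animals/Dog> rdfs:label ?label .
  FILTER ( langMatches(lang(?label), "ro") )   # use "en" for the English label
}
```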
