I have created the relationship A 'is functional parent of' B and defined 'has functional parent' as the inverse of 'is functional parent of'. 'A' and 'B' are both subclasses of 'chemical entity'.
I want Protege to infer B 'has functional parent' A. The query 'has functional parent' some A fails.
Error #1: Not understanding open world
I realized that some implies that not all B have the relationship 'has functional parent' with 'A'. However, the query 'chemical entity' and 'has functional parent' still fails.
My ontology has no instances. I was hoping the query would find subclasses.
Turtle File
@prefix : <http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@base <http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10> .
<http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10> rdf:type owl:Ontology .
#################################################################
# Object Properties
#################################################################
### http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#hasFunctionalParent
:hasFunctionalParent rdf:type owl:ObjectProperty ;
owl:inverseOf :isFunctionalParentOf .
### http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#isFunctionalParentOf
:isFunctionalParentOf rdf:type owl:ObjectProperty .
#################################################################
# Classes
#################################################################
### http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#A
:A rdf:type owl:Class ;
rdfs:subClassOf :Z ,
[ rdf:type owl:Restriction ;
owl:onProperty :isFunctionalParentOf ;
owl:someValuesFrom :B
] .
### http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#B
:B rdf:type owl:Class ;
rdfs:subClassOf :Z .
### http://www.semanticweb.org/michaelchary/ontologies/2020/8/untitled-ontology-10#Z
:Z rdf:type owl:Class .
### Generated by the OWL API (version 4.5.9.2019-02-01T07:24:44Z) https://github.com/owlcs/owlapi
From the axioms you stated in your ontology, there is absolutely nothing from which the reasoner can derive that B hasFunctionalParent A.
To understand why this is the case, it is helpful to think in terms of individuals, even though your ontology does not include any explicit individuals. When the reasoner runs, it tries to generate a model based on the axioms in the ontology. A model consists of generated individuals that adhere to the axioms of your ontology.
For illustration purposes, let us assume the universe of individuals consists of the following numbers:
Domain = {0, 1, 2, 3, 4, 5, 6, 7},
Z = {1, 2, 3, 5, 6, 7},
A = {5, 7} and
B = {2, 3, 6}
Then you have an object property hasFunctionalParent with its inverse. For short I will refer to hasFunctionalParent as R and its inverse as invR. What do R and invR mean? Declaring them as inverses basically states that whenever two individuals in our domain are related via R, they are also related via invR in the opposite direction. That is, if we have R(1, 2), then invR(2, 1) also holds.
Stating that A subClassOf invR some B implies that each individual of A is related via invR to at least one individual of B. Thus, if we have invR(5, 2) and invR(7, 3), we will also have R(2, 5) and R(3, 7). However, this says nothing about the class B in general. It is completely possible that R(6, 0) holds, i.e. that some individual of B has a functional parent outside of A. Therefore the reasoner cannot infer that B hasFunctionalParent A.
To get B and Z as answers for the query "find the superclasses of hasFunctionalParent some B" (that means "superclasses" must be ticked in Protege when doing the query), you have to state that isFunctionalParentOf has domain A and range B. This states that whenever two individuals x and y are related via isFunctionalParentOf, we can assume x is an instance of A and y is an instance of B.
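In Turtle, a sketch of the corresponding domain and range axioms added to the property declaration above would look like this:

:isFunctionalParentOf rdf:type owl:ObjectProperty ;
                      rdfs:domain :A ;
                      rdfs:range :B .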
Lastly, note that you need to use the DL Query tab in Protege to get to this inference; in particular, it is not shown as part of the inferences after reasoning. Why is that? Because Protege only shows inferences about named classes, and hasFunctionalParent some B is an anonymous class, so this inference is not shown. A trick to make it show up in Protege is to add an arbitrary class, say X, that you set as equivalent to hasFunctionalParent some B. If you now run the reasoner, Protege will infer that X subClassOf B.
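In Turtle, such a helper class could be sketched as follows (X is an arbitrary name used only to make the inference visible):

:X rdf:type owl:Class ;
   owl:equivalentClass [ rdf:type owl:Restriction ;
                         owl:onProperty :hasFunctionalParent ;
                         owl:someValuesFrom :B
                       ] .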
I was trying alternative ways to write the proof below, which comes from this question and Isabelle 2020's Rings.thy. (In particular, I added the note div_mult_mod_eq[of a b] line to test the use of the note command.)
lemma mod_div_decomp:
fixes a b
obtains q r where "q = a div b" and "r = a mod b"
and "a = q * b + r"
proof -
from div_mult_mod_eq have "a = a div b * b + a mod b" by simp
note div_mult_mod_eq[of a b]
moreover have "a div b = a div b" ..
moreover have "a mod b = a mod b" ..
note that ultimately show thesis by blast
qed
However, if I write it in a separate .thy file, there is an error about type unification at the note line:
Type unification failed: Variable 'a::{plus,times} not of sort semiring_modulo
Failed to meet type constraint:
Term: a :: 'a
Type: ??'a
The problem goes away if I enclose the whole proof in a copy of the type class definition (class ... begin ... end) as follows:
theory "test"
imports Main
HOL.Rings
begin
...
class semiring_modulo = comm_semiring_1_cancel + divide + modulo +
assumes div_mult_mod_eq: "a div b * b + a mod b = a"
begin
(* ... inserted proof here *)
end
...
end
My questions are:
Is this the correct way to prove a theorem about a type class, i.e. to write a separate class definition in a different file?
Is it always necessary to duplicate type class definitions as I did?
If not, what is the proper way to prove a theorem about a type class outside of its original place of definition?
There are two ways to prove things in type classes (basically sort = typeclass for Isabelle/HOL):
1) Proving in the context of the typeclass:
context semiring_modulo
begin
...
end
2) (Slightly less clean) Add the sort constraints to the type:
lemma mod_div_decomp:
fixes a b :: "'a :: {semiring_modulo}"
obtains q r where "q = a div b" and "r = a mod b"
and "a = q * b + r"
semiring_modulo subsumes plus and times, but you can also write {semiring_modulo,plus,times} to really have all of them.
The documentation of classes contains more examples.
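For completeness, a minimal sketch of option 1, reusing the proof from the question (the lemma is renamed to mod_div_decomp' so that it does not clash with the mod_div_decomp already proved in HOL.Rings):

theory Test
  imports Main
begin

context semiring_modulo
begin

lemma mod_div_decomp':
  fixes a b
  obtains q r where "q = a div b" and "r = a mod b"
    and "a = q * b + r"
proof -
  from div_mult_mod_eq have "a = a div b * b + a mod b" by simp
  note div_mult_mod_eq[of a b]
  moreover have "a div b = a div b" ..
  moreover have "a mod b = a mod b" ..
  note that ultimately show thesis by blast
qed

end

end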
The issue you ran into is related to how Isabelle implements polymorphism. A sort represents a subset of all types, and we characterize it by a set of intersected classes. By attaching a sort to a variable, we restrict the space of terms with which that variable can be instantiated. One way of looking at this is as an assumption that the variable belongs to a certain sort. In your case, type inference on (+), (*), div, and mod apparently gives you {plus,times}, which is insufficient for div_mult_mod_eq. To restrict the variable further you can add an explicit type annotation, as Mathias explained.
Note that the simp in the line above should run into the same problem.
I have a list of person names like, for example, this excerpt (Person is the column name):
Person
"Wilson, Charles; Harris Arthur"
"White, D.
Arthur Harris"
Note that the multiple persons are mentioned in different ways and are separated differently.
I would like to use the RDF Mapping Language https://rml.io/ to create the following RDF without cleaning (or changing) the input data:
:Wilson a foaf:Person;
foaf:firstName "Charles";
foaf:lastName "Wilson" .
:Harris a foaf:Person;
foaf:firstName "Arthur";
foaf:lastName "Harris" .
:White a foaf:Person;
foaf:firstName "D.";
foaf:lastName "White" .
Note that Arthur Harris is mentioned twice in the input data, but only a single RDF resource is created.
I use the Function Ontology (FnO) https://fno.io/ and created a custom Java method. Based on the argument mode, a list of person properties is returned (e.g. only the URIs or only the first names).
public static List<String> getPersons(String value, String mode) {
if(mode == null || value.trim().isEmpty())
return Arrays.asList();
List<String> results = new ArrayList<>();
for(Person p : getAllPersons(value)) {
if(mode.trim().isEmpty() || mode.equals("URI")) {
results.add("http://example.org/person/" + p.getLastName());
} else if(mode.equals("firstName")) {
results.add(p.getFirstName());
} else if(mode.equals("lastName")) {
results.add(p.getLastName());
} else if(mode.equals("fullName")) {
results.add(p.getFullName());
}
}
return results;
}
Assume that the getAllPersons method correctly extracts the persons from a given string, like the ones above.
In order to extract multiple persons from one cell I call the getPersons function in a subjectMap like this:
:tripleMap a rr:TriplesMap .
:tripleMap rml:logicalSource :ExampleSource .
:tripleMap rr:subjectMap [
fnml:functionValue [
rr:predicateObjectMap [
rr:predicate fno:executes ;
rr:objectMap [ rr:constant cf:getPersons ]
] ;
rr:predicateObjectMap [
rr:predicate grel:valueParameter ;
rr:objectMap [ rml:reference "Person" ] # the column name
] ;
rr:predicateObjectMap [
rr:predicate grel:valueParameter2 ;
rr:objectMap [ rr:constant "URI" ] # the mode
]
];
rr:termType rr:IRI ;
rr:class foaf:Person
] .
I use the RMLMapper https://github.com/RMLio/rmlmapper-java; however, it only allows returning one subject for each record, see https://github.com/RMLio/rmlmapper-java/blob/master/src/main/java/be/ugent/rml/Executor.java#L292 .
That is why I wrote a List<ProvenancedTerm> getSubjects(Term triplesMap, Mapping mapping, Record record, int i) method and replaced it accordingly.
This leads to the following result:
:Wilson a foaf:Person .
:Harris a foaf:Person .
:White a foaf:Person .
I know that this extension is incompatible with the RML specification https://rml.io/specs/rml/ where the following is stated:
It [a triples map] must have exactly one subject map that specifies how to generate a subject for each row/record/element/object of the logical source (database/CSV/XML/JSON data source accordingly).
If I proceed to also add the first name and last name, the following predicateObjectMap could be added (shown here for the first name):
:tripleMap rr:predicateObjectMap [
rr:predicate foaf:firstName;
rr:objectMap [
fnml:functionValue [
rr:predicateObjectMap [
rr:predicate fno:executes ;
rr:objectMap [ rr:constant cf:getPersons ]
] ;
rr:predicateObjectMap [
rr:predicate grel:valueParameter ;
rr:objectMap [ rml:reference "Person" ] # the column name
] ;
rr:predicateObjectMap [
rr:predicate grel:valueParameter2 ;
rr:objectMap [ rr:constant "firstName" ] # the mode
]
]
]
] .
Because a predicateObjectMap is evaluated for each subject, and multiple subjects are now returned, every person resource gets the first name of every person. To make this clearer, the result looks like this:
:Wilson a foaf:Person;
foaf:firstName "Charles" ;
foaf:firstName "Arthur" ;
foaf:firstName "D." .
:Harris a foaf:Person;
foaf:firstName "Charles" ;
foaf:firstName "Arthur" ;
foaf:firstName "D." .
:White a foaf:Person;
foaf:firstName "Charles" ;
foaf:firstName "Arthur" ;
foaf:firstName "D." .
My question is: Is there a solution or work-around in RML for multiple complex entities (e.g. persons having first and last names) in one data element (cell) of the input without cleaning (or changing) the input data?
Maybe this issue is related to my question: https://www.w3.org/community/kg-construct/track/issues/3
It would also be fine if such a use case is not meant to be solved by a mapping framework like RML. If this is the case, what could be alternatives? For example, a handcrafted extraction pipeline that generates RDF?
As far as I am aware, what you are trying to do is not possible using FnO functions and join conditions.
However, what you could try is specifying a clever rml:query or rml:iterator which splits the complex values before they reach the RMLMapper. Whether this is possible depends on the specific source database, though.
For instance, if the source is a SQL Server database, you could use the function STRING_SPLIT. Or if it is a PostgreSQL database, you could use STRING_TO_ARRAY together with unnest. (Since different separators are used in the data, it is possible you would have to apply STRING_SPLIT or STRING_TO_ARRAY once for each different separator.)
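To illustrate the idea for PostgreSQL, here is a rough, untested sketch of such a logical source (the table name, the database description, and the single ';' separator are assumptions):

:ExampleSource a rml:LogicalSource ;
    rml:source <#DB_Description> ;   # database connection details omitted
    rr:sqlVersion rr:SQL2008 ;
    rml:query """
        SELECT unnest(string_to_array("Person", ';')) AS "SinglePerson"
        FROM "SourceTable"
    """ .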
If you provide more information about the underlying database, I can update this answer with an example.
(Note: I contribute to RML and its technologies.)
As I understand it, you have a normalization problem (multi-valued cells). Essentially, what you are asking for is a dataset in first normal form (1NF), see: https://en.wikipedia.org/wiki/First_normal_form
To address these common heterogeneity problems in CSV files, you can use CSV on the Web (CSVW) annotations, a W3C recommendation. More specifically, the property you are asking for in this case is csvw:separator (https://www.w3.org/TR/tabular-data-primer/#sequence-values).
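For example, a minimal CSVW metadata sketch for the Person column might look like this (the file name persons.csv is an assumption, and only the ';' separator is handled):

{
  "@context": "http://www.w3.org/ns/csvw",
  "url": "persons.csv",
  "tableSchema": {
    "columns": [
      {
        "name": "Person",
        "titles": "Person",
        "separator": ";"
      }
    ]
  }
}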
However, there are not many parsers for CSVW, and the semantics of its properties for generating RDF are not very clear. We have been working on a solution that combines CSVW and RML+FnO to generate virtual KGs from tabular data (it also takes a SPARQL query as input and does not transform the input dataset to RDF). The output of our proposal is a well-formed database together with a standard [R2]RML mapping, so any [R2]RML-compliant engine could be used to answer queries or to materialize the knowledge graph. Although we currently do not support the materialization step, it is on our to-do list.
You can take a look at the contribution (under review right now): http://www.semantic-web-journal.net/content/enhancing-virtual-ontology-based-access-over-tabular-data-morph-csv
Website: https://morph.oeg.fi.upm.es/tool/morph-csv
I have 3 datasets, each from a particular year. I have already merged all 3, but I want to blank out the cases where year=2016. So far this is the syntax I came up with:
Do (if subyr=2016).
Recode X1 to X32 (Lowest to Highest=SYMIS)(Else=SYMIS).
End if.
You should be able to simply use
DO IF (subyr=2016) .
RECODE X1 TO X32 (ELSE=SYSMIS) .
END IF .
EXE .
If you ever wanted to code the valid values differently from the SYSMIS values, you could use
DO IF (subyr=2016) .
RECODE X1 TO X32 (LO THRU HI=0)(ELSE=SYSMIS) .
END IF .
EXE .
which would give you that flexibility. This example sets valid values to 0 and keeps SYSMIS values as SYSMIS.
For one of my projects, I am trying to use Common Lisp, specifically SBCL (and learning it in the process; this is one of the motivations).
I need to read a file with questions and answers, basically like a Standardized test with mainly multiple choice question answers.
I have some sample questions marked with section markers like "|" for the start and "//s" for the end of a section. The question paper will have a hierarchical structure like this: Section -> multiple sub-sections -> each sub-section with multiple questions -> each question will have multiple answers, one of them being correct.
This hierarchical structure finally needs to be converted into a JSON file and pushed to an Android app for downstream consumption.
STEP-1: After reading from the source test paper, this is what my list looks like:
(("Test" . "t")
("0.1" . "v")
("today" . "d")
("General Knowledge" . "p")
("Science" . "s")
("what is the speed of light in miles per second?" . "q")
("Choose the best answer from the following" . "i")
("MCQ question" . "n")
("186000" . "c")
("286262" . "w")
("200000" . "w"))
[PS.1] See the legend at the end of the post for the explanation of the cdr values like h, p, t, v, etc.
[PS.2] A sample of the source file is attached at the end of this post.
The car of each consed pair represents the content and the cdr represents the tag, which corresponds to a section, sub-section, question, etc.
STEP-2: Finally I need to convert this into the following format - an alist -
((:QANDA . "Test") (:VERSION . "0.1") (:DATE . "today")
(:SECTION
((:TITLE . "General Knowledge")
(:SUBSECTION
((:SSTITLE . "Science")
(:QUESTION
((:QUESTION . "what is the speed of light in miles per second?")
(:DIRECTIONS . "Choose the best answer from the following")
(:TYPE . "MCQ question")
(:CHOICES ((:CHOICE . "186000") (:CORRECT . "Y"))
((:CHOICE . "286000") (:CORRECT . "N"))
((:CHOICE . "200000") (:CORRECT . "N"))))))))))
to be consumed by cl-json.
STEP-3: cl-json will produce an appropriate json from this.
The json will look like this:
{
"qanda": "Test",
"version": "0.1",
"date": "today",
"section": [
{
"title": "General Knowledge",
"subsection": [
{
"sstitle": "Science",
"question": [
{
"question": "what is the speed of light in miles per second?",
"Directions": "Choose the best answer from the following",
"type": "MCQ question",
"choices": [
{
"choice": "186000",
"Correct": "Y"
},
{
"choice": "286000",
"Correct": "N"
},
{
"choice": "200000",
"Correct": "N"
}
]
}
]
}
]
}
]
}
I've been successful in reading the source file and generating the consed-pair list. Where I am struggling is in creating the nested list shown above to feed to cl-json.
I realized after a bit of struggle that this is more or less like an n-ary tree problem.
Here are my questions:
a) What is the right way to construct such an n-ary tree representation of the Test paper source file?
b) Or is there a better or easier data structure to represent this?
Here is what I tried, where qtree is '() initially and kvlist is the consed-pair list shown above. This is incomplete code, as I tried push, consing, and nconc (with unreliable results).
Step 1 and Step 3 are fine; Step 2 is where I need help. The problem is how to add child nodes successively while iterating through the kvlist, and how to find the right parent to add a child to when there is more than one parent (e.g., adding a question to the second sub-section):
(defun build-qtree (qtree kvlist)
  (cond
    ((eq '() kvlist) qtree)
    ((equal "h" (cdar kvlist))
     (push (car kvlist) qtree)
     (build-qtree qtree (cdr kvlist)))
    ((equal "p" (cdar kvlist))
     (nconc (last qtree) '((:SECTION))))
    (t
     qtree)))
[PS.1] Legend: this will be used in the cond branches, or maybe in a defstruct or a dictionary-like list, etc.:
t - title, v - version, d - date, p - section, s - sub section, q - question, i - instructions, n - type of question, c - correct answer, w - wrong answer
[PS.2] Source file:
|Test//t
|0.1//v
|today//d
|General Knowledge//p
|Science//s
|what is the speed of light in miles per second?//q
|Choose the best answer from the following//i
|MCQ question//n
|186000//c
|286000//w
|200000//w
You have a simple problem with a complex example. It might be just a simple parsing problem: What you need is a grammar and a parser for it.
Example grammar. Terminal items are in upper case. * means one or more.
s = q
q = Q i*
i = I w*
w = W
Simple parser:
(defun example (sentence)
(labels ((next-item ()
(pop sentence))
(empty? ()
(null sentence))
(peek-item ()
(unless (empty?)
(first sentence)))
(expect-item (sym)
(let ((item (next-item)))
(if (eq item sym)
sym
(error "Parser error: next ~a, expected ~a" item sym))))
(star (sym fn)
(cons (funcall fn)
(loop while (eq (peek-item) sym)
collect (funcall fn)))))
(labels ((s ()
(q))
(q ()
(list (expect-item 'q) (star 'i #'i)))
(i ()
(list (expect-item 'i) (star 'w #'w)))
(w ()
(expect-item 'w)))
(s))))
Example:
CL-USER 10 > (example '(q i w w w i w w))
(Q
((I (W
W
W))
(I (W
W))))
I have a result binding in a Jena query solution (say sol1: user=u1, location=loc1, locationType=type1) and I want to use a dataset to extend my result binding using a set of Jena rules. In fact, having sol1 and having
loc1 location:ispartof loc2
loc2 rdf:type loc2Type
in my dataset, I want to add this new solution to my result set:
sol2: user=u1, location=loc2, locationType=loc2Type
For that, I need to add my solution set to my dataset, write a rule like
@prefix pre: <http://jena.hpl.hp.com/prefix#>.
[rule1: (?sol pre:user ?a) (?sol pre:location ?b) (?sol pre:locationType ?c) (?b location:ispartof ?d) (?d rdf:type ?type) -> (sol2 pre:user ?a) (sol2 pre:location ?d) (sol2 pre:locationType ?type) ]
and do inference based on the above rule. Afterwards, to extract all solutions from the dataset, I need to query it with
PREFIX pre: <http://jena.hpl.hp.com/prefix#>
SELECT * WHERE { ?sol pre:user ?a . ?sol pre:location ?b . ?sol pre:locationType ?c . }
Now my problems are:
1) Is there any way to prevent adding my solutions to my big dataset, by writing the reasoning rule over 2 datasets?
2) How can I individually name each new solution in the rule consequence?
Thanks.