Multiple necessary conditions vs intersection of conditions in Protege - ontology

Is there any difference between the following two ways of expressing multiple necessary conditions in an ontology (via Protege)?
1. Each necessary condition expressed one by one inside the SubClassOf section (for class A):
instrument some B
object some C
2. All of them stated at once (via the class expression editor):
instrument some B and object some C
Are 1 and 2 semantically the same?

Yes, they are equivalent: the two separate SubClassOf axioms taken together entail, and are entailed by, the single SubClassOf axiom whose superclass is the intersection. The choice of which way to go is yours: whichever approach you find more readable is the best choice.
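For reference, here is a minimal OWLAPI sketch (the IRIs and entity names are placeholders) that builds both forms programmatically; a reasoner will treat the two forms as the same constraint on A.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    public class NecessaryConditions {
        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager m = OWLManager.createOWLOntologyManager();
            OWLDataFactory df = m.getOWLDataFactory();
            OWLOntology o = m.createOntology(IRI.create("http://example.org/onto"));

            String ns = "http://example.org/onto#";
            OWLClass a = df.getOWLClass(IRI.create(ns + "A"));
            OWLClass b = df.getOWLClass(IRI.create(ns + "B"));
            OWLClass c = df.getOWLClass(IRI.create(ns + "C"));
            OWLObjectProperty instrument = df.getOWLObjectProperty(IRI.create(ns + "instrument"));
            OWLObjectProperty object = df.getOWLObjectProperty(IRI.create(ns + "object"));

            // Form 1: two separate SubClassOf axioms
            m.addAxiom(o, df.getOWLSubClassOfAxiom(a, df.getOWLObjectSomeValuesFrom(instrument, b)));
            m.addAxiom(o, df.getOWLSubClassOfAxiom(a, df.getOWLObjectSomeValuesFrom(object, c)));

            // Form 2: one SubClassOf axiom whose superclass is an intersection
            m.addAxiom(o, df.getOWLSubClassOfAxiom(a, df.getOWLObjectIntersectionOf(
                    df.getOWLObjectSomeValuesFrom(instrument, b),
                    df.getOWLObjectSomeValuesFrom(object, c))));
        }
    }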

Related

Ontology comparison in owlapi

I am using OWLAPI for a project, and I need to compare two ontologies for differences between them. This comparison should ignore blank nodes so that, for instance, I can determine whether the same OWL restrictions are in both ontologies. Not only do I need to know whether there are differences, but I also need to find out what those differences are. Does such functionality exist in the OWLAPI, or is there a relatively simple way to do this?
The equality between anonymous class expressions is not based on blank node ids: anonymous class expressions only have blank nodes in the textual output, and in memory the ids are ignored. So checking whether an axiom exists in an ontology will by default match expressions correctly for your diff.
This is not true for individuals: anonymous individuals will not be found to be the same across ontologies, and this is by specification. An anonymous individual in one ontology cannot be found in another, because anonymous individual ids are scoped to the containing ontology.
Note: the unit tests for OWLAPI have to carry out a very similar task, to verify that an ontology can be parsed, written and parsed again without change (i.e., roundtripped between input syntax and output syntax), so there is code you can look at for inspiration: see the equal() method in TestBase.java for more details. It includes code to deal with different ids for anonymous individuals.
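As a starting point, a simple diff can be computed as two set differences over the ontologies' axioms. This is only a sketch (the file names are placeholders), not the OWLAPI test code mentioned above, and it makes no attempt to reconcile anonymous individuals.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import java.io.File;
    import java.util.HashSet;
    import java.util.Set;

    public class SimpleAxiomDiff {
        public static void main(String[] args) throws OWLOntologyCreationException {
            // Separate managers so identical ontology IRIs do not clash
            OWLOntology o1 = OWLManager.createOWLOntologyManager()
                    .loadOntologyFromOntologyDocument(new File("first.owl"));
            OWLOntology o2 = OWLManager.createOWLOntologyManager()
                    .loadOntologyFromOntologyDocument(new File("second.owl"));

            // Axiom equality is structural, so blank node ids in the
            // serialization do not affect the comparison.
            Set<OWLAxiom> onlyInFirst = new HashSet<>(o1.getAxioms());
            onlyInFirst.removeAll(o2.getAxioms());
            Set<OWLAxiom> onlyInSecond = new HashSet<>(o2.getAxioms());
            onlyInSecond.removeAll(o1.getAxioms());

            onlyInFirst.forEach(ax -> System.out.println("only in first:  " + ax));
            onlyInSecond.forEach(ax -> System.out.println("only in second: " + ax));
        }
    }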

How can I distinguish an axiom from an inferred statement in a Jena RDFS-INF model?

When I create an RDFS_MEM_RDFS_INF model in Jena and read some RDFS file, a number of statements that were not explicitly stated in the file are added. E.g., if we have a triple
a p b
and p is an rdfs:subPropertyOf q, then
a q b
is also in the model. A concrete example is the following: if
a skos:related b
is in the file, then
a skos:semanticRelation b
is also in the model.
Is there any possibility to check whether a statement in the model is an axiom or an inferred one? There are such methods for OWL models, but I use the RDFS model. A trivial solution would be to build two models, one without and one with inference, but I would prefer a less memory-consuming solution.
djthequest answered in a comment:
Jena's InfModel has a method getRawModel(). That model won't contain the inferred statements; it will contain only the axioms in the file, so check against that. If you are using an OntModel, it has a corresponding method, getBaseModel().
Christian Wartena confirmed that this was the solution: "Thanks. This works fine! I didn't find that method when I was reading the documentation last week."
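Put together, here is a minimal Jena sketch (the input file name is a placeholder) that flags each statement in the inference model as asserted or inferred by checking it against getRawModel(). It uses ModelFactory.createRDFSModel; an OntModel created with OntModelSpec.RDFS_MEM_RDFS_INF works the same way via getBaseModel().

    import org.apache.jena.rdf.model.*;

    public class AssertedVsInferred {
        public static void main(String[] args) {
            Model base = ModelFactory.createDefaultModel();
            base.read("file:data.rdf");               // the raw RDFS data

            // Layer RDFS inference over the base data
            InfModel inf = ModelFactory.createRDFSModel(base);

            StmtIterator it = inf.listStatements();
            while (it.hasNext()) {
                Statement s = it.nextStatement();
                // getRawModel() holds the asserted data, not the derived triples
                boolean asserted = inf.getRawModel().contains(s);
                System.out.println((asserted ? "asserted: " : "inferred: ") + s);
            }
        }
    }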

How to rename a variable using Z3?

Given an expression x' = x + 1, I wish to rename x' to y. How can I do this using Z3?
There are a number of API functions that let you modify terms and substitute new subterms for old ones. They are described in the Modifiers section of the C API, which contains Z3_update_term, Z3_substitute, and Z3_substitute_vars (there is also Z3_translate to port terms between two contexts).
Here is the link:
http://research.microsoft.com/en-us/um/redmond/projects/z3/group__capi.html#gaa7497c70a827db2d61ba98889fe657b5
You can also traverse terms directly and write utilities to modify terms.
The display_ast example shows the main cases for recursively traversing terms:
http://research.microsoft.com/en-us/um/redmond/projects/z3/group__capi__ex.html#ga807b5fe0e26acdec09e52a77318208d0
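The same substitution functionality is also exposed through the language bindings. For example, here is a minimal sketch using the Z3 Java bindings (com.microsoft.z3), where Expr.substitute replaces x' with y in the term:

    import com.microsoft.z3.*;

    public class RenameVariable {
        public static void main(String[] args) {
            Context ctx = new Context();
            IntExpr x      = ctx.mkIntConst("x");
            IntExpr xPrime = ctx.mkIntConst("x'");
            IntExpr y      = ctx.mkIntConst("y");

            // x' = x + 1
            BoolExpr e = ctx.mkEq(xPrime, ctx.mkAdd(x, ctx.mkInt(1)));

            // Replace every occurrence of x' with y
            Expr renamed = e.substitute(xPrime, y);
            System.out.println(renamed);   // prints (= y (+ x 1))
        }
    }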

Can Z3 check the satisfiability of recursive functions on bounded data structures?

I know that Z3 cannot check the satisfiability of formulas that contain recursive functions. But, I wonder if Z3 can handle such formulas over bounded data structures. For example, I've defined a list of length at most two in my Z3 program and a function, called last, to return the last element of the list. However, Z3 does not terminate when asked to check the satisfiability of a formula that contains last.
Is there a way to use recursive functions over bounded lists in Z3?
(Note that this is related to your other question as well.) We looked at such cases as part of the Leon verifier project. What we are doing there is avoiding the use of quantifiers and instead "unrolling" the recursive function definitions: if we see the term length(lst) in the formula, we expand it using the definition of length by introducing a new equality: length(lst) = if(isNil(lst)) 0 else 1 + length(tail(lst)). You can view this as a manual quantifier instantiation procedure.
If you're interested in lists of length at most two, doing the manual instantiation for all terms and then doing it once more for the new list terms should be enough, as long as you add the constraint:
isCons(lst) => (isCons(tail(lst)) => isNil(tail(tail(lst))))
for each list. In practice you of course don't want to generate these equalities and implications manually; in our case, we wrote a program that is essentially a loop around Z3, adding more such axioms when needed.
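As a rough illustration of this bounded unrolling (not the Leon tool itself), here is a sketch with the Z3 Java bindings: last is declared as an uninterpreted function and its definition is expanded by hand for lists of at most two elements; all names are illustrative.

    import com.microsoft.z3.*;

    public class BoundedLast {
        public static void main(String[] args) {
            Context ctx = new Context();
            ListSort list = ctx.mkListSort("IntList", ctx.getIntSort());
            FuncDecl isNil = list.getIsNilDecl(), isCons = list.getIsConsDecl();
            FuncDecl head = list.getHeadDecl(), tail = list.getTailDecl();
            FuncDecl last = ctx.mkFuncDecl("last", list, ctx.getIntSort());

            Expr lst = ctx.mkConst("lst", list);
            Expr t1 = ctx.mkApp(tail, lst);                       // tail(lst)

            Solver s = ctx.mkSolver();
            // Bound the structure: lst is non-empty and has at most two elements
            s.add((BoolExpr) ctx.mkApp(isCons, lst));
            s.add(ctx.mkImplies((BoolExpr) ctx.mkApp(isCons, t1),
                                (BoolExpr) ctx.mkApp(isNil, ctx.mkApp(tail, t1))));
            // Unrolled definition of last for that bound, no quantifiers needed
            s.add((BoolExpr) ctx.mkITE((BoolExpr) ctx.mkApp(isNil, t1),
                    ctx.mkEq(ctx.mkApp(last, lst), ctx.mkApp(head, lst)),
                    ctx.mkEq(ctx.mkApp(last, lst), ctx.mkApp(head, t1))));
            // Sample query: can last(lst) be 42?
            s.add(ctx.mkEq(ctx.mkApp(last, lst), ctx.mkInt(42)));
            System.out.println(s.check());                        // SATISFIABLE
        }
    }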
A very interesting property (very related to your question) is that for some functions (such as length), successive unrollings give you a complete decision procedure, i.e., even if you don't constrain the size of the data structures, you will eventually be able to conclude SAT or UNSAT (for the quantifier-free case).
You can find more details in our paper Satisfiability Modulo Recursive Programs, or I'm happy to give more details here.
You may be interested in the work of Erik Reeber on SULFA, the "Subclass of Unrollable List Formulas in ACL2." He showed in his PhD thesis how a large class of list-oriented formulas can be proven by unrolling function definitions and applying SAT-based methods. He proved decidability for the SULFA class using these methods.
See, e.g., http://www.cs.utexas.edu/~reeber/IJCAR-2006.pdf .

Liskov Substitution Principle and the directionality of the original statement

I came across the original statement of the Liskov Substitution Principle on Ward's wiki tonight:
"What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T." - Barbara Liskov, Data Abstraction and Hierarchy, SIGPLAN Notices, 23,5 (May, 1988).
I've always been crap at parsing predicate logic (I failed Calc IV the first time though), so while I kind of understand how the above translates to:
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
what I don't understand is why the property Liskov describes implies that S is a subtype of T and not the other way around.
Maybe I just don't know enough yet about OOP, but why does Liskov's statement only allow for the possibility S -> T, and not T -> S?
The hypothetical set of programs P (in terms of T) is not defined in terms of S, and therefore it doesn't say much about S. On the other hand, we do say that S works as well as T in that set of programs P, and so we can draw conclusions about S and its relationship to T.
One way to think about this is that P demands certain properties of T. S happens to satisfy those properties. Perhaps you could even say that 'every o1 in S is also in T'. This conclusion is used to define the word subtype.
As pointed out by sepp2k, there are multiple views explaining this in the other post.
Here are my two cents about it. I like to look at it this way:
If for each object o1 of type TallPerson there is an object o2 of type Person such that for all programs P defined in terms of Person, the behavior of P is unchanged when o1 is substituted for o2 then TallPerson is a subtype of Person. (replacing S with TallPerson and T with Person)
We typically think of a derived class as having more functionality than its base class, since it extends it. However, with that added functionality we are specializing it and reducing the scope in which it can be used, which is what makes it a subtype of its base class (the broader type).
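A small illustrative Java sketch of the directionality (class names follow the TallPerson/Person substitution above):

    // A program "P" written in terms of Person
    class Person {
        String greet() { return "Hello"; }
    }

    class TallPerson extends Person {
        int heightAdvantageCm() { return 20; }
    }

    class Program {
        // P relies only on what Person promises
        static String run(Person p) { return p.greet(); }

        public static void main(String[] args) {
            System.out.println(run(new Person()));      // fine
            System.out.println(run(new TallPerson()));  // still fine: TallPerson substitutes for Person
            // The reverse direction does not hold: a program written in terms of
            // TallPerson may call heightAdvantageCm(), which an arbitrary Person
            // cannot supply.
        }
    }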
A derived class inherits its base class's public interface and is expected either to use the inherited implementation or to provide an implementation that behaves similarly (e.g., a Count() method should return the number of elements regardless of how those elements are stored).
A base class wouldn't necessarily have the interface of any (let alone all) of its derived classes, so it wouldn't make sense to expect an arbitrary reference to a base class to be substitutable for a specified derived class. Even if it appears that only the subset of the interface that is supported by the base class is required, that might not be the case (e.g., it could be that a shadowing method in the specific derived class is the one referred to).

Resources