Liskov Substitution Principle: parsing the directionality of the original statement

I came across the original statement of the Liskov Substitution Principle on Ward's wiki tonight:
"What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T." - Barbara Liskov, Data Abstraction and Hierarchy, SIGPLAN Notices, 23,5 (May, 1988).
I've always been crap at parsing predicate logic (I failed Calc IV the first time through), so while I kind of understand how the above translates to:
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
what I don't understand is why the property Liskov describes implies that S is a subtype of T and not the other way around.
Maybe I just don't know enough yet about OOP, but why does Liskov's statement only allow for the possibility S -> T, and not T -> S?

The hypothetical set of programs P (defined in terms of T) is not defined in terms of S, and therefore it doesn't say much about S. On the other hand, we do say that S works as well as T in that set of programs P, so we can draw conclusions about S and its relationship to T.
One way to think about this is that P demands certain properties of T. S happens to satisfy those properties. Perhaps you could even say that 'every o1 in S is also in T'. This conclusion is used to define the word subtype.

As sepp2k pointed out, there are multiple views explaining this in the other post.
Here are my two cents about it. I like to look at it this way:
If for each object o1 of type TallPerson there is an object o2 of type Person such that for all programs P defined in terms of Person, the behavior of P is unchanged when o1 is substituted for o2, then TallPerson is a subtype of Person (replacing S with TallPerson and T with Person).
We typically think of objects that derive from a base class as having more functionality, since the base is extended. However, with more functionality we are specializing the type and reducing the scope in which it can be used, so it becomes a subtype of its base class (the broader type).
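A rough Go sketch of that idea, modelling the broader type as an interface (Person, TallPerson and the function names are invented for illustration):
package main
import "fmt"
// Person plays the role of the broader type T that programs are written against.
type Person interface {
	Name() string
}
// TallPerson plays the role of the specialized type S: it satisfies Person and adds Height.
type TallPerson struct {
	name   string
	height int
}
func (t TallPerson) Name() string { return t.name }
func (t TallPerson) Height() int  { return t.height }
// greet is a "program P defined in terms of Person": it relies only on the
// Person contract, so substituting a TallPerson cannot change its behavior.
func greet(p Person) { fmt.Println("Hello,", p.Name()) }
// measure is defined in terms of TallPerson; a plain Person cannot be
// substituted here because it promises nothing about Height.
func measure(t TallPerson) { fmt.Println(t.Name(), "is", t.Height(), "cm tall") }
func main() {
	greet(TallPerson{name: "Ada", height: 185}) // S substitutes for T; the reverse has no such guarantee
	measure(TallPerson{name: "Ada", height: 185})
}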

A derived class inherits its base class's public interface and is expected to either use the inherited implementation or provide an implementation that behaves similarly (e.g., a Count() method should return the number of elements regardless of how those elements are stored).
A base class wouldn't necessarily have the interface of any (let alone all) of its derived classes, so it wouldn't make sense to expect an arbitrary reference to a base class to be substitutable for a specific derived class. Even if it appears that only the subset of the interface supported by the base class is required, that might not be the case (e.g., a shadowing method in the specific derived class might be referred to).

Related

Why is the HermiT or Pellet Reasoner for Protege v5.5 not detecting an inconsistency in the Ontology?

I have used an object property O to relate Class A with Class B. I also have instance a and b of classes A and B respectively. I have used the same object property O to relate the instances a and b.
Again, I have used the same object property O to link a with c, where c is an instance of Class C which is not linked with class A or B using any object property.
Reasoners are still showing that the Ontology is Consistent.
My question is: should this not be marked as inconsistent by the reasoners? Whether the answer is 'Yes' or 'No', please explain the reasoning behind it.
Thanks in advance.
You are misunderstanding the semantics of domain and range axioms. For your object property O, they merely state that whenever two individuals x and y are linked via O, the reasoner will infer that x is of type A and y is of type B.
When you link the individuals a and c, where c is of type C, you will notice that c is now also inferred to be of type B.
If you want to see an inconsistency, what you can do is make classes B and C disjoint. Then linking a and c via O will result in an inconsistency.
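In Turtle, the setup you describe looks roughly like this (prefixes and names are placeholders):
@prefix :     <http://example.org/onto#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:O a owl:ObjectProperty ;
   rdfs:domain :A ;
   rdfs:range  :B .
:a a :A .  :b a :B .  :c a :C .
:a :O :b .
:a :O :c .                  # the reasoner simply infers ":c a :B" and stays consistent
# :B owl:disjointWith :C .  # adding this axiom makes ":a :O :c" cause an inconsistency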
BTW, if you are interested, on my blog I write about OWL 2 ontologies, the use of reasoners, and some of the counterintuitive ways in which reasoners can seem to "fail".

How to declare a "free" instance of a type class?

I'm trying to model some program analysis in Isabelle/HOL. The analysis computes values in a bounded lattice, but (for now, and for generality) I don't want to commit to any concrete lattice definition; all I care about is whether some result is bottom or not. I'm looking for a way to declare an abstract type which is an instance of Isabelle/HOL's bounded_lattice type class without committing to a concrete instance.
That is, analogously to how I could write
typedecl some_data
type_synonym my_analysis_result = "var => some_data"
where some_data is completely free, I would like to be able to write something like
typedecl "some_lattice::bounded_lattice"
type_synonym my_analysis_result = "var => some_lattice"
where some_lattice would be the "free" bounded lattice of which I require nothing except that it fulfill the lattice laws. This particular syntax is not accepted by Isabelle/HOL, and neither is something like
type_synonym my_analysis_result = "var => 'a::bounded_lattice"
I can work around this problem by defining a concrete datatype and making it an instance of bounded_lattice, but I don't see why there shouldn't be a more general way. Is there some easy (or complex) syntax to achieve what I'm doing? Do I have to (somehow, I'm not sure whether it would work) stick my entire development inside a context bounded_lattice block? Or is there some reason why it's logically OK to have fully free types via typedecl but not free types restricted by type class?
Making an unspecified type an instance of a type class may introduce inconsistencies if the type class has contradictory assumptions. To make this axiomatic nature explicit, you have to axiomatize the instantiation. Here's an example:
typedecl some_lattice
axiomatization where some_lattice_bounded:
"OFCLASS(some_lattice, bounded_lattice_class)"
instance some_lattice :: bounded_lattice by(rule some_lattice_bounded)
NB: A few years ago, you could have used the command arities, but this has been discontinued to emphasize the axiomatic nature of unspecified type class instantiations.
Alternatively, you could just use a type variable for the lattice. This is more flexible because you can later on instantiate the type variable if you need a concrete bounded lattice. However, you have to carry the type variable around all the time. For example,
type_synonym 'a my_analysis_result = "var => 'a"
Type synonyms with sort constraints are not supported by Isabelle (as they do not make much sense anyway). If you add the sort constraint, you will get a warning that it is going to be ignored. Whenever you need the bounded_lattice instance, type inference will add the sort constraint (or you can mention it explicitly for the type variable).
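For instance, a small sketch of how the constraint then shows up at a use site (var and the all_bottom definition are invented for illustration):
typedecl var
type_synonym 'a my_analysis_result = "var => 'a"
definition all_bottom :: "('a::bounded_lattice) my_analysis_result => bool" where
  "all_bottom r = (ALL v. r v = bot)"
Here 'a stays a type variable, but any use of all_bottom forces it into the bounded_lattice sort; later you can instantiate 'a with a concrete lattice without touching the definition.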

How are interfaces represented in Go?

I'm reading through two articles right now and am a little confused.
This article - http://blog.golang.org/laws-of-reflection says
var r io.Reader
tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
if err != nil {
return nil, err
}
r = tty
r contains, schematically, the (value, type) pair, (tty, *os.File). Notice that the type *os.File implements methods other than Read; even though the interface value provides access only to the Read method, the value inside carries all the type information about that value.
This other article says:
In terms of our example, the itable for Stringer holding type Binary lists the methods used to satisfy Stringer, which is just String: Binary's other methods (Get) make no appearance in the itable.
It feels like these two are in opposition. According to the second article, the variable r in the first extract should be (tty, io.Reader), as that is the static type of r. Instead, the article says that *os.File is the type of tty. If the second example were right, then the diagram in the first example should have all of the methods implemented by the Binary type.
Where am I going wrong?
The two articles are explaining a similar concept at two very different levels of granularity. As Volker said, "Laws of Reflection" is a basic overview of what is actually happening when you examine objects via reflection. Your second article is examining the dynamic dispatch properties of an interface (which can also be resolved via reflection) and how the runtime resolves them.
According to the second article, the variable r in the first extract should be (tty, io.Reader)
Given that understanding, think of an interface at runtime as a "wrapper object". It exists to provide information about another object (the itable from your second article) so the runtime knows where to jump to in the wrapped object's layout (the implementation may differ between versions, but the principle is basically the same in most languages).
This is why calling Read on r works: first the runtime checks the itable and jumps to the function laid out for the *os.File type. If that were itself an interface, you would be looking at another dereference and dispatch (which IIRC isn't applicable at all in Go).
RE: Reflection - you're getting an easily digestible representation of this, in the form of a (value, type) pair (via the reflect.ValueOf and reflect.TypeOf functions).
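A small runnable sketch of that distinction, reusing the question's /dev/tty snippet (the results in the comments are what I would expect to see):
package main
import (
	"fmt"
	"io"
	"os"
	"reflect"
)
func main() {
	var r io.Reader
	tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
	if err != nil {
		fmt.Println(err)
		return
	}
	r = tty
	// The static type of r is io.Reader, so only Read is callable through r,
	// but reflection reports the dynamic (value, type) pair stored inside it.
	fmt.Println(reflect.TypeOf(r))         // *os.File
	fmt.Println(reflect.ValueOf(r).Kind()) // ptr
	// r.Seek(0, 0) would not compile, even though *os.File has a Seek method.
}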
Both are correct: r "holds" (tty, *os.File), and that is what the second article says. Note that "Laws of Reflection" is a bit more high-level and does not mention implementation details (which might change in each release) like those discussed in the second article. The diagram of the second article reads as follows: s contains, schematically, (b, *Binary). s is of type Stringer, its data is a Binary with value 200, and s's itable contains one method, String; the other methods of Binary (or *Binary) are not represented in the itable and thus are not accessible through s.
Note that I think the actual implementation of interfaces in Go 1.4 is different from what the second article (Russ's?) states.

F# limitations of discriminated unions

I am trying to port a small compiler from C# to F# to take advantage of features like pattern matching and discriminated unions. Currently, I am modeling the AST using a pattern based on System.Linq.Expressions: an abstract base "Expression" class, derived classes for each expression type, and a NodeType enum allowing for switching on expressions without lots of casting. I had hoped to greatly reduce this using an F# discriminated union, but I've run into several seeming limitations:
Forced public default constructor (I'd like to do type-checking and argument validation on expression construction, as System.Linq.Expressions does with its static factory methods)
Lack of named properties (seems like this is fixed in F# 3.1)
Inability to refer to a case type directly. For example, it seems like I can't declare a function that takes in only one type from the union (e.g. let f (x : TYPE) = x compiles for Expression (the union type) but not for Add or Expression.Add). This seems to sacrifice some type-safety over my C# approach.
Are there good workarounds for these or design patterns which make them less frustrating?
I think you are stuck a little too much on the idea that a DU is a class hierarchy. It is more helpful to think of it as data, really. As such:
Forced public default constructor (I'd like to do type-checking and argument validation on expression construction, as System.Linq.Expressions does with its static factory methods)
A DU is just data, pretty much like, say, a string or a number, not functionality. Why not make a function that returns an Expression option to express that your data might be invalid?
Lack of named properties (seems like this is fixed in F# 3.1)
If you feel like you need named properties, you probably have an inappropriate type, like say string * string * string * int * float, as the data for your Expression. Better to make a record instead, something like AddInfo, and have your DU case use that, like say | Add of AddInfo. This way you get named fields in pattern matches, IntelliSense, etc.
Inability to refer to a case type directly. For example, it seems like I can't declare a function that takes in only one type from the union (e.g. let f (x : TYPE) = x compiles for Expression (the union type) but not for Add or Expression.Add). This seems to sacrifice some type-safety over my C# approach.
You cannot require something to be the Add case, but you certainly can write a function that takes an AddInfo. Plus, you can always do it in a monadic way and have functions that take any Expression and only return an option. In that case, you can pattern match to check that your input is of the appropriate case and return None if it is not. At the call site, you can then "use" the value in the good case, using functions like Option.bind.
Basically, try not to think of a DU as a set of classes, but really just as cases of data. Kind of like an enum.
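To make that concrete, here is a minimal sketch; the AddInfo record, the validation rule and the function names are all invented:
type Expression =
    | Constant of int
    | Add of AddInfo
and AddInfo = { Left : Expression; Right : Expression }
// A validating "smart constructor": callers get an option back instead of
// being able to build an invalid node directly.
let tryAdd left right : Expression option =
    match left, right with
    | Constant _, Constant _ -> Some (Add { Left = left; Right = right }) // placeholder rule
    | _ -> None                                                           // put real type-checking here
// A function that accepts only the payload of the Add case, not the whole union.
let describeAdd (info : AddInfo) =
    sprintf "Add(%A, %A)" info.Left info.Right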
You can make the implementation private. This allows you the full power of DUs in your implementation but presents a limited view to consumers of your API. See this answer to a related question about records (although it also applies to DUs).
EDIT
I can't find the syntax on MSDN, but here it is:
type T =
    private
    | A
    | B
private here means "private to the module."

LR-attributed parser Technique

I would like to know what LR-attributed parsers can do and how it is implemented.
yacc-generated parsers allow inherited attributes when the source of the attribute is a sibling located to the left, using the $0, $-1, etc. specification syntax. With S -> A B, B would be able to inherit a synthesized attribute from A but would not be able to inherit anything from S. I think this is done by looking one element down from B in the stack, which would be A.
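For concreteness, a bison/yacc fragment of the kind I mean (rule and token names are made up):
s : a b       { $$ = $2; }
  ;
a : TOKEN_A   { $$ = 1; }         /* synthesized attribute of a */
  ;
b : TOKEN_B   { $$ = $0 + 10; }   /* $0 is the value of the a sitting just below b's RHS on the stack */
  ;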
Now the zyacc documentation says that it allows LR-attributed grammars, which is, I guess, more or less what yacc allows. Only that with zyacc those attributes are specified with the nonterminal (like parameters) and not just accessed within the semantic action. Are there any other differences, e.g. are LR-attributes more powerful than yacc's inherited attributes, or are LR-attributes implemented differently (not just by looking down the stack)?
The point of LR-attributed grammars is to make information seen in the left context available to the right extension.
Imagine your grammar had
R -> X S Y;
S -> A B;
You've already agreed that S can see attributes synthesized from X. In fact, those attributes can be available on the completion of parsing of X. Done properly, those attributes should be available to A and B, as they are parsed, as inherited attributes from S.
YACC doesn't implement any of this to my knowledge, unless you want to count the existence of the parse tree for X as being a "synthesized" attribute of parsing X.
How you implement attributed grammars depends on what you want to do. My company's main product, DMS, uses attribute grammars heavily, with no direction constraints. We simply build the full tree and then propagate the attributes as needed.
What we do is pre-compute, for each node type, the set of attributes [and their types] it may inherit and the set it may synthesize, and synthesize a struct for each. At attribute-evaluation time, these structs are associated with tree nodes via a very fast access hash table. For each node type, we inspect the dataflows (which child consumes which inherited attribute, which children use synthesized attributes from which other children). From that we compute an order of execution that causes all the attributes to be computed in a safe (generated-before-consumed) order, and generate a procedure to accomplish this for that node type, which calls the children's procedures. Attribute evaluation then consists of calling the generated procedure for the grammar root. (In fact, we actually generate a partial order for evaluating the children, and generate a partial-order parallel call using the parallel-programming capabilities of DMS's implementation language, ensuring fast evaluation using multiple cores on very big ASTs.)
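A toy, hand-written C sketch of what such a generated per-node-type procedure might look like for S -> A B (this is not DMS code; all names are invented):
#include <stdio.h>
typedef struct { int a_syn; } NodeA;                      /* A only synthesizes         */
typedef struct { int b_inh; int b_syn; } NodeB;           /* B inherits and synthesizes */
typedef struct { NodeA *a; NodeB *b; int s_syn; } NodeS;
static void eval_A(NodeA *a) { a->a_syn = 1; }
static void eval_B(NodeB *b) { b->b_syn = b->b_inh + 10; }
static void eval_S(NodeS *s) {
    eval_A(s->a);               /* 1. A's synthesized attribute is computed first        */
    s->b->b_inh = s->a->a_syn;  /* 2. hand it to B as an inherited attribute             */
    eval_B(s->b);               /* 3. only now is B's synthesized attribute safe to read */
    s->s_syn = s->b->b_syn;     /* 4. finally compute S's own synthesized attribute      */
}
int main(void) {
    NodeA a; NodeB b; NodeS s = { &a, &b, 0 };
    eval_S(&s);
    printf("%d\n", s.s_syn);    /* prints 11 */
    return 0;
}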
There isn't any reason you couldn't limit this process to LR attributes. (Someday we'll push LR-compatible attributes into the parsing phase to allow their use in semantic checks).
It shouldn't surprise you that the device that generates the attribute-evaluation process is itself an attribute evaluator that operates on grammars. Bootstrapping that was a bit of fun.

Resources