Coq: unfolding class instances - typeclass

How do I unfold class instances in Coq? It seems to be possible only when the instance doesn't include a proof, or something. Consider this:
Class C1 (t:Type) := {v1:t}.
Class C2 (t:Type) := {v2:t;c2:v2=v2}.
Instance C1_nat: C1 nat:= {v1:=4}.
Instance C2_nat: C2 nat:= {v2:=4}.
trivial.
Qed.
Theorem thm1 : v1=4.
unfold v1.
unfold C1_nat.
trivial.
Qed.
Theorem thm2 : v2=4.
unfold v2.
unfold C2_nat.
trivial.
Qed.
thm1 is proved, but I can't prove thm2; it fails at the unfold C2_nat step with Error: Cannot coerce C2_nat to an evaluable reference.
What's going on? How do I get to C2_nat's definition of v2?

You ended the definition of C2_nat with Qed. Definitions ending with Qed are opaque and cannot be unfolded. Write the following instead:
Instance C2_nat: C2 nat:= {v2:=4}.
trivial.
Defined.
and your proof finishes without problems.

Related

"??" ("if null") operator type resolution issue

Consider the code below:
mixin M {}
abstract class I {}
class A = I with M;
class B = I with M;

void main() {
  final A? a = A();
  final B? b = B();
  final I? i1 = a ?? b; // compilation error saying "A value of type 'Object?' can't be assigned to a variable of type 'I?'"
}
Could you please help me understand why I am getting a compilation error when using the "??" operator?
Dart 2.19.0
The a ?? b expression needs to figure out a single type which represents the two possible resulting values, one of type A or one of type B?.
Rather than checking that both A and B? are assignable to I?, it tries to figure out the type of the result of a ?? b as a single type combining both A and B?, which means finding a supertype of both A and B.
To do that, it computes an upper bound using the language-specified "least upper bound" algorithm. (I quote the name, because it finds an upper bound, which is sometimes a least upper bound, but not always.)
The two types are:
A with immediate supertypes I and M, both of which have immediate supertype Object, which has supertype Object? (and all other top types).
B? with supertypes I?, M? and their supertype Object?.
The problem here is that while the algorithm can see that I and I? occur among those supertypes, and can therefore decide that I? is an upper bound, it also finds M and M? and decides that M? is an upper bound.
Those two candidates are otherwise completely equivalent: they have the same length of super-class chain up to Object and are not related to each other, so neither is a least upper bound. The algorithm therefore ignores both and looks for something that is shared and has no competitor at the same "depth from Object", and it finds Object?,
which is not assignable to I?.
This is a known shortcoming (among several) of the least upper bound algorithm in Dart, but it's a hard problem to solve optimally, because it's very, very easy for a class to introduce some internal private superclass, and sometimes that type then becomes the least upper bound of two public subclasses, leaving users scratching their heads.
There are requests to do better, for example by not ignoring the context type, but changing the type inference has to be done very carefully. It can break existing code which has been fine-tuned to the current behavior.
There are no great workarounds here, but there are functional ones.
You will have to rewrite the code to ensure that the static types of the two operands do not both include the unnecessary M type.
Rather than down-casting the result to I? at the end, which requires a runtime type check, I'd up-cast the original values:
final I? i3 = a ?? (b as I?); // ignore: unnecessary_cast
That should be a completely free up-cast, but it may (will!) cause an analyzer warning that you'll have to ignore.
The compiler doesn't seem to recognise that the objects a and b are instances of classes that inherit from I in that case.
So this syntax does the same thing, but in the first case it needs help with a cast:
final I? i1 = (a ?? b) as I?;
This does not give an error.

How to implement a custom constraint using C++?

I am trying to reimplement littledog.ipynb using C++. I find it hard to translate the function velocity_dynamics_constraint and have 3 questions:
What is the function of ad_velocity_dynamics_context? Can we ignore it?
How to reimplement velocity_dynamics_constraint using C++? Do I have to create a new class like class VelocityDynamicsConstraint : public drake::solvers::Constraint? Is there any easier way to implement it?
Why do we need to consider the isinstance(vars[0], AutoDiffXd) condition?
# Some code from https://github.com/RussTedrake/underactuated/blob/master/examples/littledog.ipynb
ad_velocity_dynamics_context = [
    ad_plant.CreateDefaultContext() for i in range(N)
]

def velocity_dynamics_constraint(vars, context_index):
    h, q, v, qn = np.split(vars, [1, 1+nq, 1+nq+nv])
    if isinstance(vars[0], AutoDiffXd):
        if not autoDiffArrayEqual(
                q,
                ad_plant.GetPositions(
                    ad_velocity_dynamics_context[context_index])):
            ad_plant.SetPositions(
                ad_velocity_dynamics_context[context_index], q)
        v_from_qdot = ad_plant.MapQDotToVelocity(
            ad_velocity_dynamics_context[context_index], (qn - q) / h)
    else:
        if not np.array_equal(q, plant.GetPositions(
                context[context_index])):
            plant.SetPositions(context[context_index], q)
        v_from_qdot = plant.MapQDotToVelocity(context[context_index],
                                              (qn - q) / h)
    return v - v_from_qdot

for n in range(N-1):
    prog.AddConstraint(partial(velocity_dynamics_constraint,
                               context_index=n),
                       lb=[0] * nv,
                       ub=[0] * nv,
                       vars=np.concatenate(
                           ([h[n]], q[:, n], v[:, n], q[:, n + 1])))
What is the function of ad_velocity_dynamics_context? Can we ignore it?
The context caches the intermediate computation results for a given q, v, u. It is very common for several constraints to be imposed on the same set of q, v, u. For example, at the final time we typically have a kinematic constraint on the final state (say, the robot's foot has to land on the ground and its center of mass has to be at a certain location), and at the same time we have the velocity dynamics constraint on the final state. Hence these different constraints can share some intermediate computation results, such as the rigid transforms between adjacent links. We therefore cache the results in ad_velocity_dynamics_context, and this ad_velocity_dynamics_context can be used again later when we impose other constraints.
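To make the sharing concrete, here is a hedged Python sketch, not taken from the notebook: the constraint itself, the name com_height_constraint, and the 0.2 m lower bound are invented for illustration, only the AutoDiffXd path is shown, and the remaining names (ad_plant, prog, q, N, autoDiffArrayEqual, partial, np) come from the snippet above.
# Illustration only: a second constraint sharing ad_velocity_dynamics_context.
# Assumes the decision variables arrive as AutoDiffXd (no double branch shown).
def com_height_constraint(q_vars, context_index):
    context = ad_velocity_dynamics_context[context_index]
    # Refresh the cache only if q (value or gradient) differs from what the
    # shared context already holds; otherwise reuse the cached kinematics.
    if not autoDiffArrayEqual(q_vars, ad_plant.GetPositions(context)):
        ad_plant.SetPositions(context, q_vars)
    p_com = ad_plant.CalcCenterOfMassPositionInWorld(context)
    return np.array([p_com[2]])  # world z-coordinate of the center of mass

for n in range(N):
    prog.AddConstraint(partial(com_height_constraint, context_index=n),
                       lb=[0.2], ub=[np.inf], vars=q[:, n])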
How to reimplement velocity_dynamics_constraint using C++? Do I have to create a new class like class VelocityDynamicsConstraint : public drake::solvers::Constraint? Is there any easier way to implement it?
That is right, you will need to create a new class VelocityDynamicsConstraint. The main challenge in implementing this class is writing the three overloaded DoEval functions for the three scalar types (double, AutoDiffXd, symbolic::Expression). You can refer to PositionConstraint as a reference. For the moment you can ignore the case of calling DoEval(const Eigen::Ref<const AutoDiffVecXd>&, AutoDiffVecXd*) with a MultibodyPlant<double>, and only implement that DoEval overload with MultibodyPlant<AutoDiffXd>.
Why do we need to consider the isinstance(vars[0], AutoDiffXd) condition?
Because when the scalar type is AutoDiffXd, we want to compare not only the value of q against the one stored in the context, but also its gradient. If either is different, then we need to call SetPositions to recompute the cache. When the scalar type is double, we only need to compare the values.
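For reference, the autoDiffArrayEqual helper used in the snippet above performs exactly this check. A minimal sketch, assuming a recent pydrake that exposes ExtractValue and ExtractGradient in pydrake.autodiffutils (the notebook's own definition may differ slightly):
import numpy as np
from pydrake.autodiffutils import ExtractValue, ExtractGradient

def autoDiffArrayEqual(a, b):
    # Two AutoDiffXd arrays are equal only if both their values and their
    # gradients match; comparing the values alone is not enough.
    return (np.array_equal(ExtractValue(a), ExtractValue(b))
            and np.array_equal(ExtractGradient(a), ExtractGradient(b)))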

How to prove a theorem about a type class outside the original type class definition?

I was trying alternative ways to write the proof below, taken from this question and Isabelle 2020's Rings.thy. In particular, I added the note div_mult_mod_eq[of a b] line to test the use of the note command:
lemma mod_div_decomp:
  fixes a b
  obtains q r where "q = a div b" and "r = a mod b"
    and "a = q * b + r"
proof -
  from div_mult_mod_eq have "a = a div b * b + a mod b" by simp
  note div_mult_mod_eq[of a b]
  moreover have "a div b = a div b" ..
  moreover have "a mod b = a mod b" ..
  note that ultimately show thesis by blast
qed
However, if I write it in a separate .thy file, there is an error about type unification at the note line:
Type unification failed: Variable 'a::{plus,times} not of sort semiring_modulo
Failed to meet type constraint:
Term: a :: 'a
Type: ??'a
The problem goes away if I enclose the whole proof in a class ... begin ... end block as follows:
theory "test"
imports Main
HOL.Rings
begin
...
class semiring_modulo = comm_semiring_1_cancel + divide + modulo +
assumes div_mult_mod_eq: "a div b * b + a mod b = a"
begin
(* ... inserted proof here *)
end
...
end
My questions are:
Is this the correct way to prove a theorem about a type class? i.e. to write a separate class definition in a different file?
Is it always necessary to duplicate type class definitions as I did?
If not, what is the proper way to prove a theorem about a type class outside of its original place of definition?
There are two ways to prove things in type classes (basically sort = typeclass for Isabelle/HOL):
Proving in the context of the typeclass:
context semiring_modulo
begin
...
end
(slightly less clean) Add the sort constraints to the type:
lemma mod_div_decomp:
  fixes a b :: "'a :: {semiring_modulo}"
  obtains q r where "q = a div b" and "r = a mod b"
    and "a = q * b + r"
semiring_modulo subsumes plus and times, but you can also type {semiring_modulo,plus,times} to really have all of them.
The documentation of classes contains more examples.
The issue you ran into is related to how Isabelle implements polymorphism. Sorts represent subsets of all types, and we characterize them by a set of intersected classes. By attaching a sort to a variable, we restrict the space of terms with which that variable can be instantiated. One way of looking at this is as an assumption that the variable belongs to a certain sort. In your case, type inference over (+), (*), div, and mod apparently gives you {plus,times}, which is insufficient for div_mult_mod_eq. To restrict the variable further, you can add an explicit type annotation as Mathias explained.
Note that the simp in the line above should run into the same problem.

Creating variables, pairs, and sets in Z3Py

This is a three-part question on the use of the Python API to Z3 (Z3Py).
I thought I knew the difference between a constant and a variable but apparently not. I was thinking I could declare a sort and instantiate a variable of that sort as follows:
Node, (a1,a2,a3) = EnumSort('Node', ['a1','a2','a3'])
n1 = Node('n1') # c.f. x = Int('x')
But Python throws an exception saying that you can't "call Node". The only thing that seems to work is to declare n1 as a constant:
Node, (a1,a2,a3) = EnumSort('Node', ['a1','a2','a3'])
n1 = Const('n1',Node)
but I'm baffled by this since I would think that a1, a2, a3 are the constants. Perhaps n1 is a symbolic constant, but how would I declare an actual variable?
How do I create a constant set? I tried starting with an empty set and adding to it, but that doesn't work:
Node, (a1,a2,a3) = EnumSort('Node', ['a1','a2','a3'])
n1 = Const('n1',Node)
nodes = EmptySet(Node)
SetAdd(nodes, a1) #<-- want to create a set {a1}
solve([IsMember(n1,nodes)])
But this doesn't work; Z3 returns no solution. On the other hand, replacing the 3rd line with
nodes = Const('nodes',SetSort(Node))
is now too permissive, allowing Z3 to interpret nodes as any set of nodes that's needed to satisfy the formula. How do I create just the set {a1}?
Is there an easy way to create pairs, other than going through a datatype declaration, which seems a bit cumbersome? E.g.
Edge = Datatype('Edge')
Edge.declare('pr', ('fst', Node), ('snd',Node))
Edge = Edge.create()
edge1 = Edge.pr(a1,a2)
Declaring Enums
Const is the right way to declare as you found out. It's a bit misleading indeed, but it is actually how all symbolic variables are created. For instance, you can say:
a = Const('a', IntSort())
and that would be equivalent to saying
a = Int('a')
It's just that the latter looks nicer, but in fact it's merely a function z3 folks defined that sort of does what the former does. If you like that syntax, you can do the following:
NodeSort, (a1,a2,a3) = EnumSort('Node', ['a1','a2','a3'])
def Node(nm):
return Const(nm, NodeSort)
Now you can say:
n1 = Node('n1')
which is what you intended I suppose.
Inserting into sets
You're on the right track; but keep in mind that the function SetAdd does not modify the set argument. It just creates a new one. So, simply give it a name and use it like this:
emptyNodes = EmptySet(Node)
myNodes = SetAdd(emptyNodes, a1)
solve([IsMember(n1,myNodes)])
Or, you can simply substitute:
mySet = SetAdd(SetAdd(EmptySet(Node), a1), a2)
which would create the set {a1, a2}.
As a rule of thumb, the API tries to always be functional, i.e., there are no destructive updates to existing values; instead, you create new values out of old ones.
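To see this concretely, here is a small hedged check reusing the names from the question; the simplify calls just force evaluation of the membership tests:
from z3 import EnumSort, EmptySet, SetAdd, IsMember, simplify

Node, (a1, a2, a3) = EnumSort('Node', ['a1', 'a2', 'a3'])
nodes = EmptySet(Node)
withA1 = SetAdd(nodes, a1)             # a new set value; `nodes` is unchanged
print(simplify(IsMember(a1, nodes)))   # False -- still the empty set
print(simplify(IsMember(a1, withA1)))  # True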
Working with pairs
That's the only way. But nothing is stopping you from defining your own functions to simplify this task, just like we did with the Node function in the first part. After all, z3py is essentially a Python library, and the z3 folks did a lot of work to make it nicer, but you also have the entire power of Python to simplify your life. In fact, many other interfaces to z3 from other languages (Scala, Haskell, OCaml, etc.) do precisely that to provide a much nicer API using the features of their respective host languages.
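For example, here is a small wrapper in the spirit of the Node helper above; mk_edge is an invented name, and Edge, pr, fst, and snd mirror the datatype from the question:
from z3 import Datatype, EnumSort, simplify

NodeSort, (a1, a2, a3) = EnumSort('Node', ['a1', 'a2', 'a3'])

# Pair-of-Node datatype, as in the question; note the reassignment on create().
Edge = Datatype('Edge')
Edge.declare('pr', ('fst', NodeSort), ('snd', NodeSort))
Edge = Edge.create()

def mk_edge(x, y):
    # Build a symbolic pair (edge) from two Node values.
    return Edge.pr(x, y)

edge1 = mk_edge(a1, a2)
print(simplify(Edge.fst(edge1)))  # prints a1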

Type extensions and member visibility in F#

F# has a feature called "type extensions" that gives a developer the ability to extend existing types.
There are two kinds of extensions: intrinsic extensions and optional extensions. The first is similar to partial types in C#, and the second is similar to extension methods (but more powerful).
To use an intrinsic extension, we put the two declarations into the same file. In this case the compiler merges the two definitions into one final type (i.e. they are two "parts" of one type).
The issue is that the two parts have different access rules for members and values:
// SampleType.fs
// "Main" declaration
type SampleType(a: int) =
    let f1 = 42
    let func() = 42

    [<DefaultValue>]
    val mutable f2: int

    member private x.f3 = 42
    static member private f4 = 42

    member private this.someMethod() =
        // "Main" declaration has access to all values (a, f1 and func())
        // as well as to all members (f2, f3, f4)
        printf "a: %d, f1: %d, f2: %d, f3: %d, f4: %d, func(): %d"
            a f1 this.f2 this.f3 SampleType.f4 (func())

// "Partial" declaration
type SampleType with
    member private this.anotherMethod() =
        // But the "partial" declaration has no access to the values (a, f1 and func()),
        // so the following lines won't compile:
        //printf "a: %d" a
        //printf "f1: %d" f1
        //printf "func(): %d" (func())
        // It does, however, have access to the private members (f2, f3 and f4)
        printf "f2: %d, f3: %d, f4: %d"
            this.f2 this.f3 SampleType.f4
I read the F# specification but didn't find any explanation of why the F# compiler differentiates between value and member declarations.
Section 8.6.1.3 of the F# spec says that "The functions and values defined by instance definitions are lexically scoped (and thus implicitly private) to the object being defined." Yet the partial declaration has access to all private members (static and instance). My guess is that by "lexical scope" the specification authors mean only the "main" declaration, but this behavior seems weird to me.
The question is: is this behavior intentional, and what is the rationale behind it?
This is a great question! As you pointed out, the specification says that "local values are lexically scoped to the object being defined", but looking at the F# specification, it does not actually define what lexical scoping means in this case.
As your sample shows, the current behavior is that the lexical scope of object definition is just the primary type definition (excluding intrinsic extensions). I'm not too surprised by that, but I see that the other interpretation would make sense too...
I think a good reason for this is that the two kinds of extensions should behave the same (as much as possible) and you should be able to refactor your code from using one to using the other as you need. The two kinds only differ in how they are compiled under the cover. This property would be broken if one kind allowed access to the lexical scope while the other did not (because extension members technically cannot do that).
That said, I think this could be (at least) clarified in the specification. The best way to report this is to send email to fsbugs at microsoft dot com.
I sent this question to fsbugs at microsoft dot com and got the following answer from Don Syme:
Hi Sergey,
Yes, the behaviour is intentional. When you use “let” in the class scope the identifier has lexical scope over the type definition. The value may not even be placed in a field – for example if a value is not captured by any methods then it becomes local to the constructor. This analysis is done locally to the class.
I understand that you expect the feature to work like partial classes in C#. However it just doesn’t work that way.
I think the term "lexical scope" should be defined more clearly in the spec, because otherwise the current behavior will be surprising for other developers as well.
Many thanks to Don for his response!
