XOR Boolean algebra, simplifying - digital

I am doing boolean algebra simplification and got the result of
(A.B) xor (C xor D)
Does
(A.B) xor (C xor D) = (A.B) xor C xor D ?

The xor operation corresponds to addition mod 2, where false <=> 0 and true <=> 1, so yes, it is an associative operation. See: https://en.wikipedia.org/wiki/Exclusive_or#Relation_to_modern_algebra
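As a quick sanity check (an illustration added here, not part of the original answer), a short Python snippet can enumerate all 16 Boolean assignments and confirm that the two bracketings agree:

from itertools import product

# Brute-force check that (A.B) xor (C xor D) equals ((A.B) xor C) xor D
# for every Boolean assignment, i.e. the parentheses do not matter.
for A, B, C, D in product([False, True], repeat=4):
    ab = A and B
    lhs = ab ^ (C ^ D)
    rhs = (ab ^ C) ^ D
    assert lhs == rhs
print("equal for all 16 assignments")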


Horn clauses with multiplication in Z3

I've just started digging into Z3's fixed point solver and I've cooked up an example that hangs when using multiplication but completes when defining multiplication as a series of additions. As I'm new to working with Horn clauses, there could be something I don't get here. Is there a reason "native" multiplication is so slow whereas multiplication defined as a series of additions produces a satisfying result in a reasonable timeframe? Thanks!
from z3 import *

def test_mseq_hangs():
    # mc(m, n) encodes n = m!, with the multiplication step delegated to
    # Z3's native integer multiplication via the mul relation.
    mul = Function('mul', IntSort(), IntSort(), IntSort(), BoolSort())
    mc = Function('mc', IntSort(), IntSort(), BoolSort())
    n, m, p = Ints('n m p')
    fp = Fixedpoint()
    fp.declare_var(n, m, p)
    fp.register_relation(mc, mul)
    fp.fact(mul(m, n, m * n))
    fp.rule(mc(m, 1), m <= 1)
    fp.rule(mc(m, n), [m > 1, mc(m - 1, p), mul(m, p, n)])
    assert fp.query(And(mc(m, n), n < 1)) == unsat
    assert fp.query(And(mc(m, n), n < 2)) == sat
    assert fp.query(And(mc(m, n), n > 100)) == sat
    assert fp.query(mc(5, 120)) == sat
    assert fp.query(mc(5, 24)) == unsat

def test_mseq():
    # Same factorial relation, but multiplication is defined recursively
    # via addition and negation, keeping the problem in linear arithmetic.
    mul = Function('mul', IntSort(), IntSort(), IntSort(), BoolSort())
    add = Function('add', IntSort(), IntSort(), IntSort(), BoolSort())
    neg = Function('neg', IntSort(), IntSort(), BoolSort())
    mc = Function('mc', IntSort(), IntSort(), BoolSort())
    n, m, p, o = Ints('n m p o')
    fp = Fixedpoint()
    fp.declare_var(n, m, p, o)
    fp.register_relation(mc, add, mul, neg)
    fp.fact(add(m, n, m + n))
    fp.fact(neg(m, -m))
    fp.rule(mul(m, n, 0), n == 0)
    fp.rule(mul(m, n, m), n == 1)
    fp.rule(mul(m, n, o), [n < 0, mul(m, n, p), neg(p, o)])
    fp.rule(mul(m, n, o), [n > 1, mul(m, n - 1, p), add(m, p, o)])
    fp.rule(mc(m, 1), m <= 1)
    fp.rule(mc(m, n), [m > 1, mc(m - 1, p), mul(m, p, n)])
    assert fp.query(And(mc(m, n), n < 1)) == unsat
    assert fp.query(And(mc(m, n), n < 2)) == sat
    assert fp.query(And(mc(m, n), n > 100)) == sat
    assert fp.query(mc(5, 120)) == sat
    assert fp.query(mc(5, 24)) == unsat
This isn't very surprising: multiplying variables leads to non-linear arithmetic, while repeated addition stays in the linear fragment. Non-linear integer arithmetic is undecidable, whereas the linear fragment (Presburger arithmetic) has efficient decision procedures.
I'm not entirely sure how the fixed-point engine comes into play here, but the above holds for general queries, and I'd guess the same reasoning applies.
Having said that, Z3 does have a non-linear arithmetic solver, called nlsat, so you might want to give it a try, though I wouldn't hold my breath. In SMT-LIB it is triggered by using (check-sat-using qfnra-nlsat) instead of plain (check-sat).
NB. I'm not sure whether the nlsat engine can be used from the Fixedpoint engine via Python, so you might have to do some digging to find the proper incantation, if it is possible at all.
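For what it's worth, here is a minimal z3py sketch of invoking nlsat outside the Fixedpoint engine, using the Tactic API; the constraint is just a made-up non-linear example, and the availability of the qfnra-nlsat tactic depends on your Z3 version:

from z3 import Reals, Tactic

# Build a solver from the qfnra-nlsat tactic (the Python counterpart of
# (check-sat-using qfnra-nlsat)) and give it a small non-linear goal.
x, y = Reals('x y')
s = Tactic('qfnra-nlsat').solver()
s.add(x * x + y * y == 1, x * y > 0)
print(s.check())   # expected: sat
print(s.model())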

Substitute variable?

I'm trying to substitute L with Lα:
f(x) := c * (x + L);
c: L;
f(x), L: Lα;
I expected the output:
Lα * (x + Lα)
instead I got
L * (x + Lα)
Maybe I should use define for f(x) instead?
kill(all);
define(
f(x),
c * (x + L)
);
c: L;
f(x), L: Lα;
Nope — same result.
Am I substituting L for Lα in the wrong way?
Edit:
Turns out this is expected behaviour, as Maxima evaluates the expression only once. One can impose "infinite evaluation" via the flag infeval:
f(x), L: La, infeval;
=> La*(x + La)
Another solution is to use subst instead:
subst(
Lα, L, f(x)
);
You need to add an extra eval step to make this work:
f(x) := c * (x + L);
c: L;
f(x), L: Lα, eval;
Output:
Lα (x + Lα)
Use subst instead of ev.
(%i1) f(x) := c * (x + L)$
(%i2) c: L$
(%i3) subst(L=La,f(x));
(%o3) La (x + La)
But keep in mind that the function is still c*(x+L). The symbol c has been bound to L, and if you then bind the symbol L to La, c will continue to be bound to L and not to La. Maxima variables work as in Lisp, which may be different from what you are used to in other languages.

maxima: use function as function argument

Like the title says, I want to use a function as a function argument.
Intuitively, I tried something like:
a(t,c) := t+c;
b(R_11, R_12, R_13, d_1x, d_1y, d_1z) := R_11*d_1x + R_12*d_1y + R_13*d_1z;
f( a(t,c), b(R_11, R_12, R_13, d_1x, d_1y, d_1z), %lambda ) := a(t,c) +
%lambda * b(R_11, R_12, R_13, d_1x, d_1y, d_1z);
But Maxima stated "define: in definition of f, found bad argument"
My goal is to simplify my equations to get a better overview. When I differentiate like
diff( f(...), R_11 )
the result for this example should be the partial derivative of b with respect to R_11.
f' = b_R11(...)
Is there a way to do such things, or is this an odd approach and is there maybe a better way?
You can declare that b depends on some arguments and then diff will construct formal derivatives of b.
(%i1) depends (b, [R1, R2]);
(%o1) [b(R1, R2)]
(%i2) depends (a, t);
(%o2) [a(t)]
(%i3) f(t, R1, R2) := a(t) + b(R1, R2);
(%o3) f(t, R1, R2) := a(t) + b(R1, R2)
(%i4) diff (f(t, R1, R2), R1);
                        d
(%o4)                  --- (b(R1, R2))
                       dR1
(%i5) diff (f(t, R1, R2), t);
                       d
(%o5)                  -- (a(t))
                       dt
But that only works as long as b is undefined. When b is defined, diff will go ahead and call b and differentiate whatever it returns.
(%i8) b(R1, R2) := 2*R1 + 3*R2;
(%o8) b(R1, R2) := 2 R1 + 3 R2
(%i9) diff (f(t, R1, R2), R1);
(%o9) 2

Definition of normal form in coq

In the book Types and Programming Languages by B. Pierce, the author introduces a small language in order to present different concepts used throughout the book.
The language is the following:
t ::= true
    | false
    | if t then t else t

v ::= true
    | false
There are three reduction rules:
if true then t2 else t3 → t2
if false then t2 else t3 → t3

                  t1 → t1'
-----------------------------------------------
if t1 then t2 else t3 → if t1' then t2 else t3
I would like to prove that every normal form is a value.
I use the following definition for the normal form:
Definition normal_form (t:term) :=
~(exists t', step t t').
I'm stuck because at one point I have something that looks like:
~ ~(exists t : term, ...)
and I don't see how I can infer
exists t : term, ...
since we are in intuitionistic logic.
Here is the whole proof:
Inductive term : Set :=
| true : term
| false : term
| ifthenelse : term -> term -> term -> term.
(* The one-step reduction relation step is assumed to be defined here,
   with constructors eiftrue and eiffalse for the first two reduction
   rules and a congruence constructor for the third (its definition is
   not shown in the question). *)
Definition normal_form (t:term) :=
  ~(exists t', step t t').
Inductive is_value : term -> Prop :=
| vtrue : is_value true
| vfalse : is_value false.
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
intro.
induction t.
intros.
apply vtrue.
intros.
apply vfalse.
intros.
unfold normal_form in H.
destruct t1.
unfold not in H.
assert (exists t' : term, step(ifthenelse true t2 t3) t').
exists t2.
apply eiftrue.
apply H in H0.
contradiction.
assert (exists t' : term, step(ifthenelse false t2 t3) t').
exists t3.
apply eiffalse.
apply H in H0.
contradiction.
assert(~(is_value (ifthenelse t1_1 t1_2 t1_3))).
intro.
inversion H0.
assert(~(normal_form(ifthenelse t1_1 t1_2 t1_3))).
intro.
apply IHt1 in H1.
contradiction.
unfold normal_form in H1.
unfold not in H1.
Should I use another definition for the normal form? Is it possible to finish the proof without any classical axiom?
One interesting lemma to prove is the inversion lemma stating that if ifthenelse b l r is in normal form then so are b, l and r.
Lemma normal_form_ifthenelse (b l r : term) :
normal_form (ifthenelse b l r) ->
normal_form b /\ normal_form l /\ normal_form r.
It can be proven rather easily if you are willing to use a lot of help from the automation machinery.
Proof.
intros H (* assumption "normal_form (ifthenelse b l r)" *)
; repeat split (* split the big conjunction into 3 goals *)
; intros [t redt] (* introduce the "exists t', step t t'" proofs
all the goals are now "False" *)
; apply H (* because we know that "step t t'", we are going to
be able to prove that "step (ifthenelse ...) ..."
which H says is impossible *)
; eexists (* we let Coq guess which term we are going to step to *)
; constructor (* we pick the appropriate constructor between the structural ones *)
; eapply redt. (* finally we lookup the proof we were given earlier *)
Qed.
If that's a bit too much automation for you, you can try to prove the following (simpler) lemma manually, because it's the bit we are going to need in the final proof:
Lemma normal_form_ifthenelse (b l r : term) :
normal_form (ifthenelse b l r) -> normal_form b.
Your lemma can then be proven rather quickly: in the first two cases of the induction, using constructor will pick the right is_value constructor.
In the last one, we are given an induction hypothesis IHt1 saying that if t1 is a normal_form then it is_value. We can use our intermediate lemma to prove normal_form t1 from the fact that normal_form (ifthenelse t1 t2 t3) holds, and conclude that is_value t1.
But the fact that t1 is a value contradicts normal_form (ifthenelse t1 t2 t3): we can indeed step to either t2 or t3 depending on whether t1 is true or false. False_ind is a way for us to say "and now we have derived a contradiction".
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
intro t; induction t; intro ht.
- constructor.
- constructor.
- destruct (normal_form_ifthenelse _ _ _ ht) as [ht1 _].
apply False_ind, ht; destruct (IHt1 ht1); eexists; constructor.
Qed.
is_value is decidable,
Lemma is_value_dec : forall t, {is_value t} + {~is_value t}.
Proof.
induction t;
try (left; constructor);
destruct IHt1;
right; intro C; inversion C.
Qed.
so you can prove normal_form_implies_value by considering those two cases (with destruct), like so:
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
induction t;
try constructor;
intros;
destruct (is_value_dec t1), t1;
apply False_ind;
apply H;
try (eexists; constructor; fail);
try (inversion i; fail).
contradict n;
apply IHt1;
intros [tt C];
eauto using scomp.
Qed.
scomp is a constructor for step, using this definition:
Inductive step : term -> term -> Prop :=
| strue: forall t1 t2, step (ifthenelse true t1 t2) t1
| sfalse: forall t1 t2, step (ifthenelse false t1 t2) t2
| scomp: forall t1 t1' t2 t3, step t1 t1' ->
    step (ifthenelse t1 t2 t3) (ifthenelse t1' t2 t3).

Maxima - differentiating a piecewise function

Suppose you have a function defined by intervals, such as
f(x):=block(if x<0 then x^2 else x^3);
When we differentiate it with
diff(f(x),x);
we get
d/dx (if x<0 then x^2 else x^3)
whereas I'd like to get
(if x<0 then 2*x else 3*x^2)
Is there a way to obtain such a result?
This may help in a simple case:
(%i1) f(x):= charfun(x<0)*x^2 + charfun(x>=0)*x^3$
(%i2) gradef(charfun(y), 0)$
(%i3) diff(f(x),x);
                                2
(%o3)   2 x charfun(x < 0) + 3 x  charfun(x >= 0)
See charfun and gradef in the Maxima documentation.
You can also try the Pw.mac package by Richard Hennessy.
Here's a different approach using a simplification rule for "if" expressions. The unsolved part is detecting discontinuities and generating delta functions for those locations: the discontinuities function used below is not implemented here. If you want to ignore those terms, you can define FOO to return 0. I can give the discontinuities part a try if there is interest.
(%i1) display2d : false $
(%i2) matchdeclare ([aa, bb, cc], all, xx, symbolp) $
(%i3) 'diff (if aa then bb else cc, xx) $
(%i4) tellsimpafter (''%, apply ("if", [aa, diff (bb, xx), true, diff (cc, xx)]) + FOO (aa, bb, cc, xx)) $
(%i5) FOO (a, b, c, x) := 'lsum ((ev (c, x = d) - ev (b, x = d)) * delta (d, x), d, discontinuities (a, x)) $
(%i6) diff (if x > 0 then x^2 else x^3, x);
(%o6) (if x > 0 then 2*x else 3*x^2)+'lsum((d^3-d^2)*delta(d,x),d,
discontinuities(x > 0,x))
Building on slitinov's answer, I wrote this quite naive implementation for functions with more than two "pieces":
gradef(charfun(dummy),0)$
/* piecewise function definition */
itv: [[x<0],[x>=0,x<1], [x>=1]]; /* intervals */
fi: [ 1, x^2+1, 2*x ]; /* local functions */
/* creation of global function f and its derivative df */
f:0;
for i: 1 thru 3 do f:f+charfun(apply("and",itv[i]))*fi[i];
df:diff(f,x);
/* display of local functions and derivatives */
for i: 1 thru 3 do (
    apply(assume,itv[i]),
    newline(),
    print(itv[i]),
    print("f = ",ev(f)),
    print("df = ",ev(df)),
    apply(forget,itv[i])
);
plot2d([f,df],[x,-2,3],[y,-1,5],[style,[lines,4,3],[lines,2,2]]);
