Z3 solver: mapping boolean to integer and computing sums

I have a boolean expression
f(x1, x2, x3, ..., xn)
Let m(x) be a mapping from {true,false} to {1,0} and let
g(x1, x2, x3, ..., xn) = m(x1) + m(x2) + ... + m(xn)
so that g computes the number of its arguments that evaluate to true.
I have some constraints on k = g(x1, x2, ..., xn), such as leastsignificantbit(k) = 1.
How can I specify this in Z3?
As a concrete example: Let f1 = a OR b, f2 = c OR d OR e, c = leastsignificantbit(m(a) + m(b)) where m(x) is defined as above.
I am using the Z3 .NET API; however, the answer need not use the .NET API, as long as it uses Z3 concepts available in any supported language binding.
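One way to encode this, sketched in SMT-LIB2 (the .NET API exposes the same building blocks, e.g. MkITE, MkAdd, and MkMod on the Context object): m(x) becomes an if-then-else term (ite x 1 0), g is the integer sum of those terms, and leastsignificantbit(k) = 1 is just the statement that k is odd, i.e. (= (mod k 2) 1). For the concrete example with f1, f2 and c from the question, this could read as follows; treat it as a sketch, not a definitive encoding:

```smt2
(declare-const a Bool)
(declare-const b Bool)
(declare-const c Bool)
(declare-const d Bool)
(declare-const e Bool)

; m maps a boolean to {1, 0}
(define-fun m ((x Bool)) Int (ite x 1 0))

; k = m(a) + m(b); its least significant bit is 1 iff k is odd
(define-fun k () Int (+ (m a) (m b)))

(assert (or a b))               ; f1 = a OR b
(assert (or c d e))             ; f2 = c OR d OR e
(assert (= c (= (mod k 2) 1)))  ; c = leastsignificantbit(m(a) + m(b))

(check-sat)
(get-model)
```

The same pattern generalizes to any n: build n ite-terms and sum them with a single addition, then constrain the sum however you like.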


Vector substitution

I have a huge dynamic system in wxMaxima, and I need to do some vector substitution, but it ends up with some crazy results. This is what I need to do:
forces:[
F1=[x1,y1,z1],
F2=[x2,y2,z2]
];
equations:[F3=-F2];
subst(forces,subst(equations,F1+F3));
the result I'm seeking is just a simple [x1-x2, y1-y2, z1-z2], but I got instead: [[x1-x2,x1-y2,x1-z2],[y1-x2,y1-y2,y1-z2],[z1-x2,z1-y2,z1-z2]]
any suggestions?
OK, that is pretty puzzling, although I see now what's going on.
subst is serial (one by one) substitution, so subst([F1 = ..., F2 = ...], ...) is equivalent to subst(F2 = ..., subst(F1 = ..., ...)). That is, substitute for F1 first and then substitute F2 into the result of that.
However the result of subst(F1 = [x1, y1, z1], F1 - F2) is [x1 - F2, y1 - F2, z1 - F2]. You can see now what's going to happen if you substitute F2 into that -- you'll get the messy nested list result.
I think if you try psubst (parallel substitution) you'll get the expected result.
(%i2) forces:[
F1=[x1,y1,z1],
F2=[x2,y2,z2]
];
(%o2) [F1 = [x1, y1, z1], F2 = [x2, y2, z2]]
(%i3) equations:[F3=-F2];
(%o3) [F3 = - F2]
(%i4) subst(equations, F1 + F3);
(%o4) F1 - F2
(%i5) psubst (forces, %o4);
(%o5) [x1 - x2, y1 - y2, z1 - z2]
(%i6) psubst(forces, subst(equations, F1 + F3));
(%o6) [x1 - x2, y1 - y2, z1 - z2]

In F#, is it possible to implement operators for tuples?

I am working with an API that represents points as float * float tuples.
These are inconvenient to do arithmetic on:
let a = (3.0, 4.0)
let b = (2.0, 1.0)
let c = (fst a + fst b, snd a + snd b)
I would like to write:
let c = a + b
I can do this if I define my own type:
type Vector2 =
    {
        X : float
        Y : float
    }
    with
        static member (+) (a : Vector2, b : Vector2) =
            { X = a.X + b.X; Y = a.Y + b.Y }
But then I need to convert for the API I am using:
let c = a + b
let cAsTuple = (c.X, c.Y)
Alternatively, I could create a free function:
let add (ax, ay) (bx, by) =
    (ax + bx, ay + by)
let c = a |> add b
But this is not quite as nice as true infix operators.
Does F# allow me to define custom operators for tuples?
If you are willing to use a different operator like (+.) you can do this:
let inline (+.) (a,b) (c,d) = (a + c, b + d)
it works with ints, floats, strings:
( 4 , 3 ) +. ( 3 , 2 ) // (7, 5)
( 4., 3.) +. ( 3., 2.) // (7.0, 5.0)
("4", "3") +. ("3", "2") // ("43", "32")
TL;DR: @AMieres' answer is the real one; this should rather be a comment, but comments are length-limited and code formatting in them is not nice ¯\_(ツ)_/¯
There is work in progress to make operator extensions a reality: Issue, RFC, PR. Once this is done, the following might finally work:
open System
open System.Runtime.CompilerServices
[<Extension>]
type TupleExtensions() =
    [<Extension>]
    static member inline (+) ((x1, y1), (x2, y2)) = (x1 + x2, y1 + y2)

// or
type Tuple<'T1, 'T2> with
    // warning FS1215: Extension members cannot provide operator overloads.
    // Consider defining the operator as part of the type definition instead.
    static member inline (+) ((x1, y1), (x2, y2)) = (x1 + x2, y1 + y2)

// and then
let t1 = (1., 2.)
let t2 = (42., 3.141)
TupleExtensions.(+) (t1, t2) // (43.0, 5.141)

// error FS0001: Expecting a type supporting the operator '+' but given a tuple type
t1 + t2

While loop for Z3 or Smt2

How can I convert a simple while loop (C code) to the SMT2 language or Z3?
For example:
int x, a;
while (x > 10 && x < 100) {
    a = x + a;
    x++;
}
The input language to an SMT solver is first-order logic (with theories) and as such has no notion of computational operations such as loops.
You can
- either use a loop invariant to encode an arbitrary loop iteration (and the pre- and post-state of the loop) and prove your relevant properties with respect to that arbitrary iteration, which is what deductive program verifiers such as Boogie, Dafny or Viper do,
- or, if the number of iterations is statically known, unroll the loop and basically use static single assignment (SSA) form to encode the different unrollings.
For your loop, the latter would look as follows (not using proper SMT syntax here because I'm lazy):
declare x0, a0 // initial values
declare a1, x1 // values after first unrolling
x0 > 10 && x0 < 100 ==> a1 == a0 + x0 && x1 == x0 + 1
declare a2, x2 // values after second unrolling
x1 > 10 && x1 < 100 ==> a2 == a1 + x1 && x2 == x1 + 1
...
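In proper SMT-LIB2 syntax, a sketch of those first two unrollings might look like this (note that to model the loop fully you would also need to say what happens when the guard is false, e.g. assert that the variables keep their previous values in that case):

```smt2
(declare-const x0 Int) (declare-const a0 Int) ; initial values
(declare-const x1 Int) (declare-const a1 Int) ; after first unrolling
(declare-const x2 Int) (declare-const a2 Int) ; after second unrolling

; first unrolling: a = x + a; x++
(assert (=> (and (> x0 10) (< x0 100))
            (and (= a1 (+ x0 a0)) (= x1 (+ x0 1)))))
; second unrolling
(assert (=> (and (> x1 10) (< x1 100))
            (and (= a2 (+ x1 a1)) (= x2 (+ x1 1)))))

(check-sat)
```

Each further unrolling adds one more pair of fresh constants and one more implication of the same shape.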

Maxima: Simplify expressions containing minimum

I am trying to use Maxima to simplify mathematical expressions involving min(a, b). Say I have defined
z = min(x1, x2) and d = abs(x1 - x2), then the identities x1 * x2 = z*(z+d) and x1 + x2 = 2*z + d follow.
For example, take the following expression:
(2*z^3+(-3*x2-3*x1)*z^2+6*x1*x2*z)/6
If I manually apply the above identities, I can simplify it to
z^3 / 3 + (z^2 * d) / 2
In Maxima, naively trying
subst(min(x1, x2), v, ((6*v*x1-3*v^2)*x2-3*v^2*x1+2*v^3)/6), ratsimp
produces a long expression.
How can I make Maxima find the occurrences of x1 * x2 and x1 + x2 buried deep inside the expression? I have tried various forms of tellsimp, let and letsimp, for example:
let(x1*x2, z * (z+d))
or
let(K * x1*x2, K * z * (z + d), K, integer)
or
matchdeclare(R, true)
let(R * x1*x2 * z, R * z * (z+d))
How can I make Maxima produce the nice short expression that I can manually arrive at? I need to work with much larger expressions.
Maybe this is helpful:
load("lrats");
lratsubst([x1*x2 = z*(z+d), x1+x2 = 2*z+d],
          (2*z^3+(-3*x2-3*x1)*z^2+6*x1*x2*z)/6);
(2*z^3+3*d*z^2)/6
Also if you want to test your identities, you could do
z(x1,x2):=min(x1,x2)$
d(x1,x2):=abs(x1-x2)$
Now put in numerical values: say, z(2,3)*(z(2,3)+d(2,3)) = 6. Apparently these don't help in simplifying your expression, though.

How to control binding of local variables in Scilab?

I have a hard time describing my problem; see the example at the end.
I have a function G that uses another function F. In order to use the function F I need to define a new (local/nested) function g. The function g depends on variables local to G with common names such as A, b, x, etc. The trouble is that these variable names are also used in F. Since Scilab's function calls seem to perform some form of macro expansion that binds local variables at the point of calling/expansion instead of at the point where the function is defined, the values of the local variables in F are used instead of the ones in G. Can I define the function g in another way, so that it binds the variables at the site of definition?
function y = F(f, x)
    k = -10
    y = f(x)
endfunction

function [a, b] = G(u)
    k = u
    deff('y = g(x)', 'y = x + k')
    k = 10
    a = g(1)
    b = F(g, 1)
endfunction
[a b] = G(0)
disp(a)
disp(b)
This program displays the values 11. and -9. I would like it to display 1. and 1. The problem is that the binding of the local variable k in g is decided, not at the point of the definition of g, but at the point of calling/expanding g. How do I define a local function g that binds its local variables at the point of definition?
The above is only a small working example. In reality my function g looks something like this:
function y = g(x), y = t*h0(x) + A'*diag(d.^2)*A, endfunction
where t is a scalar, h0 is a function, A is a matrix, and d is a vector, all local to G. The input parameter x is a vector. The matrices and vectors can have dimensions up to 1000x1000 and 1000, and can be dense.
You could use sprintf to insert the value as a string, so that it is 'defined' instantly, or use the string() function.
Working example
function y = F(f, x)
    k = -10
    y = f(x)
endfunction

function [a, b] = G(u)
    k = u
    // Using sprintf
    deff('y = g(x)', sprintf('y = x + %d', k))
    // Using the string function
    deff('y = g(x)', 'y = x + ' + string(k))
    k = 10
    a = g(1)
    b = F(g, 1)
endfunction

[a b] = G(0)
disp(a)
disp(b)
