z3py: How do I set the used logic in z3py?

Using the following code:
import z3
solver = z3.Solver(ctx=z3.Context())
#solver = z3.Solver()
Direction = z3.Datatype('Direction')
Direction.declare('up')
Direction.declare('down')
Direction = Direction.create()
Cell = z3.Datatype('Cell')
Cell.declare('cons', ('front', Direction), ('back', z3.IntSort()))
Cell = Cell.create()
mycell = z3.Const("mycell", Cell)
solver.add(Cell.cons(Direction.up, 10) == Cell.cons(Direction.up, 10))
I get the following error:
Traceback (most recent call last):
  File "thedt2opttest.py", line 17, in <module>
    solver.add(Cell.cons(Direction.up, 10) == Cell.cons(Direction.up, 10))
  File "/home/john/tools/z3-master/build/python/z3/z3.py", line 6052, in add
    self.assert_exprs(*args)
  File "/home/john/tools/z3-master/build/python/z3/z3.py", line 6040, in assert_exprs
    arg = s.cast(arg)
  File "/home/john/tools/z3-master/build/python/z3/z3.py", line 1304, in cast
    _z3_assert(self.eq(val.sort()), "Value cannot be converted into a Z3 Boolean value")
  File "/home/john/tools/z3-master/build/python/z3/z3.py", line 90, in _z3_assert
    raise Z3Exception(msg)
z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value
When I only use z3.Solver() without passing a new z3.Context as a parameter, the code works.
Can someone please answer the following questions:
What is the difference here?
How do I set the logic in z3py?
Which logic should I use with datatypes?

Solution: SolverFor()
To set a logic with Z3Py, instead of creating a solver with the Solver() constructor, you can use the SolverFor(logic) function, where logic is the name of the logic you would like to use.
For example, if you type:
s = SolverFor("LIA")
then the variable s would contain a solver based on Linear Integer Arithmetic, or if you type
s = SolverFor("LRA")
then the variable s would contain a solver based on Linear Real Arithmetic.
Beware that, so far (I haven't used z3 in a while, so updated versions may have fixed this), if you specify a non-existing or unsupported logic, for example by typing SolverFor("abc"), no error is generated and the logic is guessed automatically as usual.
Because of the issue above, the only ways to test whether the logic you want is actually being used are to compare the performance against an automatically chosen logic, or to try solving something that is not supported by the logic you specified (for example, using real variables when you specified LIA, which only accepts integer variables) and see whether an error is generated. If it is, then the solver is actually trying to use that logic.
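For instance, a minimal self-contained sketch (the QF_LIA logic and the toy constraints here are my own choice, not taken from the question) looks like this:
import z3
# Ask for a solver specialized for quantifier-free linear integer arithmetic.
s = z3.SolverFor("QF_LIA")
x = z3.Int('x')
s.add(x > 5, x < 10)
print(s.check())   # sat
print(s.model())   # e.g. [x = 6]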

Related

Constraints for Direct Collocation method in pydrake

I am looking for a way to describe the constraints of the Direct Collocation method in pydrake.
I got the robot model from my own URDF by using FindResource, as in this example (l11-16).
Then I tried to write some functions that calculate the positions of the joints, like swing_foot_height(q) in this example.
However, there is a problem.
It seems to be a type error.
I defined q as follows:
robot = MultibodyPlant(time_step=0.0)
scene_graph = SceneGraph()
robot.RegisterAsSourceForSceneGraph(scene_graph)
file_name = FindResource("models/robot.urdf")
Parser(robot).AddModelFromFile(file_name)
robot.Finalize()
context = robot.CreateDefaultContext()
dircol = DirectCollocation(
    robot,
    context,
    ...(Omission)...
    input_port_index=robot.get_actuation_input_port().get_index())
x = dircol.state()
nq = robot.num_positions()
q = x[0:nq]
Then I used this q in a function like swing_foot_height(q).
The error is like
SetPositions(): incompatible function arguments. The following argument types are supported:
...
q: numpy.ndarray[numpy.float64[m, 1]]
...
Invoked with:
...
array([Variable('x(0)', Continuous), ... Variable('x(9)', Continuous)],dtype=object)
Is there some way to avoid this error?
Right. In the compass gait notebook that you cited, there was an important line:
# overwrite MultibodyPlant with its autodiff copy
compass_gait = compass_gait.ToAutoDiffXd()
so that the MultibodyPlant being used in the constraint is actually an AutoDiffXd version of the plant.
The littledog notebook has more examples of this, with a more robust implementation that works for both float and autodiff constraint evaluations.
As far as I understand this, trajectory optimization with DirectCollocation converts the data type of the decision variables (in your case, x and q) to AutoDiffXd type. That is the type you're seeing here in the "Invoked with" error message. This is the type used for automatic differentiation which is used for finding the gradients for the optimization solver.
You'll need to convert back to float to use the SetPositions() function.
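Putting those two answers together, a rough sketch (untested, and assuming the robot, context, and swing_foot_height names from the question) is to keep an AutoDiffXd copy of the plant next to the float one and dispatch on the type of the incoming decision variables inside the constraint function:
from pydrake.all import AutoDiffXd
# Autodiff twin of the float plant defined in the question.
robot_ad = robot.ToAutoDiffXd()
context_ad = robot_ad.CreateDefaultContext()
def swing_foot_height(q):
    # When the optimizer evaluates the constraint, q is an array of floats
    # or of AutoDiffXd; pick the plant/context whose scalar type matches.
    if isinstance(q[0], AutoDiffXd):
        plant, plant_context = robot_ad, context_ad
    else:
        plant, plant_context = robot, context
    plant.SetPositions(plant_context, q)
    # ... compute and return the foot height from plant and plant_context ...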

How do I create additional constraints from a model obtained by a solver in z3 Python API?

Once I have a constraint problem, I would like to see if it is satisfiable. Based on the returned model (when it is sat) I would like to add assertions and then run the solver again. However, it seems like I am misunderstanding some of the types/values contained in the returned model. Consider the following example:
solv = z3.Solver()
n = z3.Int("n")
solv.add(n >= 42)
solv.check() # This is satisfiable
model = solv.model()
for var in model:
    # do something
    solv.add(var == model[var])
solv.check() # This is unsat
I would expect that after the loop I essentially have the two constraints n >= 42 and n == 42, assuming of course that z3 produces the model n = 42 in the first call. Despite this, check() returns unsat on the second call. What am I missing?
Sidenote: when replacing solv.add(var == model[var]) with solv.add(var >= model[var]) I get a z3.z3types.Z3Exception: Python value cannot be used as a Z3 integer. Why is that?
When you loop over a model, you do not get a variable that you can directly query. What you get is an internal representation, which can correspond to a constant, or it can correspond to something more complicated like a function or an array. Typically, you should query the model with the variables you have, i.e., with n. (As in model[n].)
You can fix your immediate problem like this:
for var in model:
    solv.add(var() == model[var()])
but this'll only work assuming you have simple variables in the model, i.e., no uninterpreted-functions, arrays, or other objects. See this question for a detailed discussion: https://stackoverflow.com/a/11869410/936310
Similarly, your second expression throws an exception because while == is defined over arbitrary objects (though doing the wrong thing here), >= isn't. So, in a sense it's the "right" thing to do to throw an exception here. (That is, == should've thrown an exception as well.) Alas, the Python bindings are loosely typed, meaning it'll try to make sense of what you wrote, not necessarily always doing what you intended along the way.
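To make this concrete, here is a small self-contained variant of the example from the question (my own illustration), querying the model with the n you already have rather than with the raw model declarations:
import z3
solv = z3.Solver()
n = z3.Int("n")
solv.add(n >= 42)
print(solv.check())        # sat
model = solv.model()
# Query the model with the variable you already have:
solv.add(n == model[n])    # pins n to the value the model chose
print(solv.check())        # still sat
# Or, when iterating over the model, apply each declaration to get the constant:
# for d in model:
#     solv.add(d() == model[d()])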

Provide custom gradient to drake::MathematicalProgram

Drake has an interface where you can give it a generic function as a constraint and it can set up the nonlinearly-constrained mathematical program automatically (as long as it supports AutoDiff). I have a situation where my constraint does not support AutoDiff (the constraint function conducts a line search to approximate the maximum value of some function), but I have a closed-form expression for the gradient of the constraint. In my case, the math works out so that it's difficult to find a point on this function, but once you have that point it's easy to linearize around it.
I know many optimization libraries will allow you to provide your own analytical gradient when available; can you do this with Drake's MathematicalProgram as well? I could not find mention of it in the MathematicalProgram class documentation.
Any help is appreciated!
It's definitely possible, but I admit we haven't provided helper functions that make it pretty yet. Please let me know if/how this helps; I will plan to tidy it up and add it as an example or code snippet that we can reference in drake.
Consider the following code:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve
prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')
def cost(x):
    return (x[0]-1.)*(x[0]-1.)
def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        print(x[0].value())
        print(x[0].derivatives())
    return x
cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
When we register the cost or constraint with MathematicalProgram in this way, we are allowing that it can get called with either x being a float, or x being an AutoDiffXd -- which is simply a wrapping of Eigen's AutoDiffScalar (with dynamically allocated derivatives of type double). The snippet above shows you roughly how it works -- every scalar value has a vector of (partial) derivatives associated with it. On entry to the function, you are passed x with the derivatives of x set to dx/dx (which will be 1 or zero).
Your job is to return a value, call it y, with the value set to the value of your cost/constraint, and the derivatives set to dy/dx. Normally, all of this happens magically for you. But it sounds like you get to do it yourself.
Here's a very simple code snippet that, I hope, gets you started:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve
prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')
def cost(x):
    return (x[0]-1.)*(x[0]-1.)
def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        y = AutoDiffXd(2*x[0].value(), 2*x[0].derivatives())
        return [y]
    return 2*x
cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
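As a small sanity check (my own addition, not part of the original answer), you can read the optimum back from the snippet above; with the cost (x-1)^2 and the constraint 0 <= 2*x <= 2, the solution should land near x = 1:
print(result.is_success())     # True if the solver converged
print(result.GetSolution(x))   # expected to be close to [1.]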
Let me know?

Arrays and Datatypes in Z3py

I'm using Z3py for scheduling problems and I'm trying to represent a two-processor system on which 4 processes with different execution times must be run.
My data are:
Process 1 : Arrival at 0 and execution time of 4
Process 2 : Arrival at 1 and execution time of 3
Process 3 : Arrival at 3 and execution time of 5
Process 4 : Arrival at 1 and execution time of 2
I'm trying to represent each process by decomposing it into subprocesses of equal length, so my datatypes look like this:
Pn = Datatype('Pn')
Pn.declare('1')
Pn.declare('2')
Pn.declare('3')
Pn.declare('4')
Pt = Datatype('Pt')
Pt.declare('1')
Pt.declare('2')
Pt.declare('3')
Pt.declare('4')
Pt.declare('5')
Process = Datatype('Process')
Process.declare('cons' , ('name',Pn) , ('time', Pt))
Process.declare('idle')
where Pn is the process name and Pt is the part of the process (process 1 has 4 parts, ...).
But now I don't know how to represent my processors so I can add the 3 rules I need: uniqueness (each subprocess must be executed once and only once, by only one processor), arrival (the first part of a process can't be processed before it arrives), and order (each part of a process must be processed after the previous one).
So I was thinking of using arrays to represent my 2 processors with this kind of declaration:
P = Array('P', IntSort() , Process)
But when I tried to execute it I got an error message saying:
Traceback (most recent call last):
  File "C:\Users\Alexis\Desktop\test.py", line 16, in <module>
    P = Array('P', IntSort() , Process)
  File "src/api/python\z3.py", line 3887, in Array
  File "src/api/python\z3.py", line 3873, in ArraySort
  File "src/api/python\z3.py", line 56, in _z3_assert
Z3Exception: 'Z3 sort expected'
And now I don't know how to handle it... must I create a new datatype and figure out a way to add my rules? Or is there a way to add datatypes to an array that would let me create rules like this:
unicity = ForAll([x,y] , (Implies(x!=y,P[x]!=P[y])))
Thanks in advance
There is a tutorial on using Datatypes from the Python API. A link to the tutorial is:
http://rise4fun.com/Z3Py/tutorialcontent/advanced#h22
It shows how to create a list data-type and use the "create()" method to instantiate a Sort object from the object used when declaring the data-type. For your example, it suffices to add calls to "create()" in the places where you want to use the declared type as a sort.
See: http://rise4fun.com/Z3Py/rQ7t
Regarding the rest of the case study you are looking at: it is certainly possible to express the constraints you describe using quantifiers and arrays. You could also consider somewhat more efficient encodings:
Instead of using an array, use a function declaration. So P would be declared as a unary function:
P = Function('P', IntSort(), Process.create())
Using quantifiers for small finite domain problems may be more of an overhead than a benefit. Writing down the constraints directly as a finite conjunction saves the overhead of instantiating quantifiers, possibly repeatedly. That said, some quantified axioms can also be optimized. Z3 automatically compiles axioms of the form ForAll([x,y], Implies(x != y, P(x) != P(y))) into
an axiom of the form ForAll([x], Pinv(P(x)) == x), where "Pinv" is a fresh function. The new axiom still enforces that P is injective but requires only a linear number of instantiations, linear in the number of occurrences of P(t) for some term 't'.
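As a rough sketch of that function-based encoding (my own adaptation of the question's declarations, with renamed constructors, the time field dropped for brevity, and the quantifier bounded to a few slots so the toy problem stays satisfiable):
from z3 import Datatype, Function, IntSort, ForAll, Implies, And, Ints, Solver
Pn = Datatype('Pn')
for c in ('p1', 'p2', 'p3', 'p4'):       # process names
    Pn.declare(c)
Pn = Pn.create()                         # create() turns the declaration into a sort
Process = Datatype('Process')
Process.declare('cons', ('name', Pn))
Process.declare('idle')
Process = Process.create()
P = Function('P', IntSort(), Process)    # slot index -> scheduled process
x, y = Ints('x y')
s = Solver()
# Uniqueness over slots 0..3: different slots run different (sub)processes.
s.add(ForAll([x, y],
             Implies(And(0 <= x, x < 4, 0 <= y, y < 4, x != y),
                     P(x) != P(y))))
print(s.check())                         # sat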
Have fun!

Variable number of function arguments Lua 5.1

In my Lua script I'm trying to create a function with a variable number of arguments. As far as I know it should work like below, but somehow I get an error with Lua 5.1 on the TI-NSpire (global arg is nil). What am I doing wrong? Thanks!
function equation:init(...)
    self.equation = arg[1]
    self.answers = {}
    self.pipe = {arg[1]}
    self.selected = 1
    -- Loop arguments to add answers.
    for i = 2, #arg do
        table.insert(self.answers, arg[i])
    end
end
instance = equation({"x^2+8=12", -4, 4})
Luis's answer is right, if terser than a beginner to the language might hope for. I'll try to elaborate on it a bit, hopefully without creating additional confusion.
Your question is in the context of Lua embedded in a specific model of TI calculator. So there will be details that differ from standalone Lua, but mostly those details will relate to what libraries and functions are made available in your environment. It is unusual (although since Lua is open source, possible) for embedded versions of Lua to differ significantly from the standalone Lua distributed by its authors. (The Lua Binaries is a repository of binaries for many platforms. Lua for Windows is a batteries-included complete distribution for Windows.)
Your sample code has a confounding factor: the detail that it needs to interface with a class system provided by the calculator framework. That detail mostly appears as an absence of any visible connection between your equation object and the equation:init() function being called. Since there are techniques that can glue that up, it is just a distraction.
Your question as I understand it boils down to a confusion about how variadic functions (functions with a variable number of arguments) are declared and implemented in Lua. From your comment on Luis's answer, you have been reading the online edition of Programming in Lua (aka PiL). You cited section 5.2. PiL is a good source for background on the language. Unfortunately, variadic functions are one of the features that has been in flux. The online edition of the book is correct as of Lua version 5.0, but the TI calculator is probably running Lua 5.1.4.
In Lua 5, a variadic function is declared with a parameter list that ends with the symbol ... which stands for the rest of the arguments. In Lua 5.0, the call was implemented with a "magic" local variable named arg which contained a table containing the arguments matching the .... This required that every variadic function create a table when called, which is a source of unnecessary overhead and pressure on the garbage collector. So in Lua 5.1, the implementation was changed: the ... can be used directly in the called function as an alias to the matching arguments, but no table is actually created. Instead, if the count of arguments is needed, you write select("#",...), and if the value of the nth argument is desired you write select(n,...).
A confounding factor in your example comes back to the class system. You want to declare the function equation:init(...). Since this declaration uses the colon syntax, it is equivalent to writing equation.init(self,...). So, when called eventually via the class framework's use of the __call metamethod, the real first argument is named self and the zero or more actual arguments will match the ....
As noted by Amr's comment below, the expression select(n,...) actually returns all the values from the nth argument on, which is particularly useful in this case for constructing self.answers, but also leads to a possible bug in the initialization of self.pipe.
Here is my revised approximation of what you are trying to achieve in your definition of equation:init(), but do note that I don't have one of the TI calculators at hand and this is untested:
function equation:init(...)
    self.equation = select(1, ...)
    self.pipe = { (select(1,...)) }
    self.selected = 1
    self.answers = { select(2,...) }
end
In the revised version shown above I have written {(select(1,...))} to create a table containing exactly one element which is the first argument, and {select(2,...)} to create a table containing all the remaining arguments. While there is a limit to the number of values that can be inserted into a table in that way, that limit is related to the number of return values of a function or the number of parameters that can be passed to a function and so cannot be exceeded by the reference to .... Note that this might not be the case in general, and writing { unpack(t) } can result in not copying all of the array part of t.
A slightly less efficient way to write the function would be to write a loop over the passed arguments, which is the version in my original answer. That would look like the following:
function equation:init(...)
    self.equation = select(1, ...)
    self.pipe = {(select(1,...))}
    self.selected = 1
    -- Loop arguments to add answers.
    local t = {}
    for i = 2, select("#",...) do
        t[#t+1] = select(i,...)
    end
    self.answers = t
end
Try
function equation:init(...)
    local arg={...}
    --- original code here
end
