define-fun vs define-funs-rec in smtlib - z3

It seems that define-funs-rec is a strict superset of what define-fun can do, according to the SMT-LIB standard. If so, is there a reason for not always using define-funs-rec, except perhaps for syntactic simplicity?

Strictly speaking, no. But define-fun-rec is rather new (as opposed to good old define-fun), so if you want greater portability and have no need for a recursive definition, you should stick to define-fun.
It is conceivable that define-fun-rec also brings in heavier machinery in a solver (such as a well-foundedness checker) that is not needed unless you really have a recursive definition. This might cost some performance, though I doubt it would be much of a concern.
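For illustration, here is a small sketch of both forms in SMT-LIB syntax (made-up function names; define-funs-rec additionally allows several mutually recursive definitions at once):
(define-fun double ((x Int)) Int (* 2 x))        ; plain macro-style definition
(define-fun-rec fact ((n Int)) Int               ; recursion requires define-fun-rec
  (ite (<= n 0) 1 (* n (fact (- n 1)))))
(assert (= (double 3) 6))
(assert (= (fact 4) 24))
(check-sat)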

Related

Performance issues with z3py using modulo and optimization

I'm experimenting with Z3 (using the Python API), building up a scheduling model for a class assignment in which I have to use modulo quite often (because the schedule is periodic). Modulo alone already seems to slow Z3 down a lot, but if I add optimization on top (minimizing a cost function that is a sum), then it takes quite some time even for fairly small problems.
Without optimization it works okay (a few seconds for a smaller problem). That said, I now have two questions:
1) Is there any trick to how the modulo function should be used? I already assign the modulo value to a separate function. Or is there any other way to express periodic/ring behavior?
2) I am not interested in finding THE best solution; a good one will be good enough. Is there a way to set bounds for the cost function, say if I know its upper and lower bound? Are there other tricks where I could use domain knowledge to find solutions fast?
Furthermore, I thought that if I use the timeout option solver.set("timeout", 10000), the solver would return the best solution found so far when it times out. That doesn't seem to be the case; it just times out.
Impossible to comment on the mod function without seeing some code. But as a rule of thumb, division is difficult. Are you using Int values or bit-vectors? If you are using unbounded integers, you might want to try bit-vectors, which might benefit from better internal heuristics. Or try Real values, and then do a proper reduction.
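For instance, in z3py a periodic constraint over bit-vectors could look like this (a hypothetical sketch, not the original model):
from z3 import *
t = BitVec('t', 16)                  # bounded slot counter instead of an unbounded Int
s = Solver()
s.add(URem(t, 24) == 3)              # unsigned remainder: t falls on hour 3 of the cycle
s.add(ULT(t, 1000))
print(s.check(), s.model())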
Regarding how to get the "best so far" optimal value, see this answer: Finding suboptimal solution (best solution so far) with Z3 command line tool and timeout
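In z3py, the Optimize engine reports the bounds it has established on an objective even after a timeout, and you can also add your own domain-knowledge bounds on the cost as ordinary constraints. A minimal sketch with made-up variables:
from z3 import *
opt = Optimize()
x, cost = Ints('x cost')
opt.add(x >= 0, x <= 100, cost == 3 * x + 7)
opt.add(cost >= 7, cost <= 307)            # known lower/upper bounds on the objective
opt.set("timeout", 10000)                  # milliseconds
h = opt.minimize(cost)
res = opt.check()
print(res, opt.lower(h), opt.upper(h))     # best bounds established so far
if res == sat:
    print(opt.model())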
Instead of using the built-in modulo and division, you could introduce uninterpreted functions mymod and mydiv, for which you only provide the necessary axioms of division and modulo that your problem requires. If I remember correctly, Microsoft's Ironclad and/or Ironfleet team did that when they had modulo/division-related performance problems (using the pipeline Dafny -> Boogie -> Z3).
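A z3py sketch of that idea, asserting only the properties of mod the model actually needs (the axioms below are illustrative, not a complete definition):
from z3 import *
mymod = Function('mymod', IntSort(), IntSort(), IntSort())
t = Int('t')
s = Solver()
s.add(ForAll([t], And(mymod(t, 24) >= 0, mymod(t, 24) < 24)))        # result range
s.add(ForAll([t], mymod(t + 24, 24) == mymod(t, 24)))                # periodicity
s.add(ForAll([t], Implies(And(t >= 0, t < 24), mymod(t, 24) == t)))  # identity on one period
s.add(mymod(50, 24) == 2)                                            # a ground use in the model
print(s.check())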

Setting logic for solver in Z3 (API)

I notice that the Z3 C++ (and C) API allows you to supply the logic to be used.
I have two questions about this that I couldn't answer by looking online:
Are these supposed to be the standard SMT-LIB logics, e.g. QF_LRA?
When are these worth supplying, i.e. when will Z3 actually use this information?
My context is mainly quantifier-free without bit-vectors, but everything else is possible; I am using the SMT solver incrementally, and I can always work out at the start what logic I will be in.
Z3 will also try to figure out what the logic is (when run with default options), but it doesn't have custom tactics for all combinations of theories (see default_tactic.cpp and smt_strategic_solver.cpp). When you are not sure what Z3 will decide to do, it's best to set the logic right up front, so that you will get errors if you try to use things that are not in that logic. It will also use that information to set up the smt kernel, e.g., enabling various preprocessors and solver features, and choosing heuristics (see, e.g., smt_setup.cpp).
Try it out and see!
Usually it does make a big difference. Setting the logic means the solver will use a specialized tactic to solve the formula, instead of going through the generic loop. Z3 will also try to guess the logic, but it's usually better to just provide it upfront.
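For example, with the C++ API (a minimal sketch):
#include <iostream>
#include <z3++.h>
int main() {
    z3::context c;
    z3::solver s(c, "QF_LRA");       // solver set up specifically for the QF_LRA logic
    z3::expr x = c.real_const("x");
    s.add(x > 0 && x < 1);
    std::cout << s.check() << "\n";  // sat
    return 0;
}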

Primitive operations in proofs

For learning dependent types, I'm rewriting my old Haskell game in Idris. Currently the game "engine" uses built-in integral types such as Word8. I want to prove some lemmas involving numerical properties of those numbers (such as double negation being the identity). However, it's not possible to say anything about the behaviour of the primitive arithmetic operations. What would be better: to just use believe_me or other handwaving (at least for the most basic properties), or to rewrite my code using Nat, Fin and other "high-level" numerical types?
I'd suggest using postulate for any of the primitive properties you need, being careful only to use things which are actually true for the numerical types in question, of course (which basically just means being careful about overflow). So you can say things like:
postulate add_commutes : (x, y : Int) -> x + y = y + x
believe_me is best avoided unless you need some computation behaviour of the proof. But, to be honest, we're still trying to work this stuff out when reasoning about primitives...
I believe for the moment it's generally better to use Nat when you can. The Idris devs are planning, eventually, to implement a general mechanism for replacing proof-friendly types with fast primitive ones during compilation, but for now that only happens for Nat. You could push things through with believe_me if you really wanted, but you'd end up with functions that are not so easy to work with in proofs. Note that if you decide to play with believe_me, you should also consider really_believe_me, which apparently makes it more believable to the type checker.
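For instance, a simple lemma like this is provable by structural induction on Nat, whereas for a primitive type such as Int it could only be postulated (a sketch in Idris 1 syntax):
plusZeroRight : (n : Nat) -> n + 0 = n
plusZeroRight Z     = Refl
plusZeroRight (S k) = cong (plusZeroRight k)  -- lift the induction hypothesis through S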

use of define-fun in z3's input language

Going by this post, define-fun seems to be a macro that gets expanded by Z3's preprocessor and is never seen by the actual solver, but it makes the input file more compact and easier to read. I'm asking because I see some (apparently random) differences in the satisfying assignments and in performance on the same problem, between a version that uses define-fun and one that doesn't, which makes me wonder if it is more than just a macro. Does one have to be careful about aggressive use of define-fun in certain cases? I observed this on some QF_NIRA and QF_LIRA problems that I have.
define-fun is treated as a macro in Z3: the definition is expanded before solving. The behavior of Z3 will be different if you instead use an uninterpreted function together with equalities. Furthermore, if you use declare-fun and then create quantified equalities, the problem is no longer in the QF_LIRA fragment.
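For example, these two encodings of the same function are handled quite differently (a small sketch):
(define-fun sq ((x Int)) Int (* x x))              ; macro: expanded away, stays quantifier-free
(assert (> (sq 3) 8))
(declare-fun sq2 (Int) Int)                        ; uninterpreted function ...
(assert (forall ((x Int)) (= (sq2 x) (* x x))))    ; ... plus a quantified defining axiom: no longer QF
(assert (> (sq2 3) 8))
(check-sat)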

F# tail recursion and why not write a while loop?

I'm learning F#. (I'm new to functional programming in general, though I've used functional aspects of C# for years; let's face it, that's pretty different.) One of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway, especially considering that you need to do some extra work to make your function tail recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot, so you use recursion; but then why is it fine for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons, that's fine, but why would I want to write that myself?
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that, an implementation detail: on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent them.
The same applies to some of the list-handling internals in F# as well: internally, mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
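For comparison, here is the same computation written both ways, as a sketch; the tail-recursive helper keeps its state in arguments rather than in mutable variables:
let sumWhile n =
    let mutable acc = 0
    let mutable i = 1
    while i <= n do
        acc <- acc + i
        i <- i + 1
    acc
let sumRec n =
    let rec go acc i =
        if i > n then acc
        else go (acc + i) (i + 1)   // tail call: the compiler turns this into a loop
    go 0 1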
As the other posters have said, some data structures, more precisely recursive data structures, lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.
