I'm experimenting with Z3 (using the Python API) to build a scheduling model for a class assignment, and I have to use modulo quite often (because the schedule is periodic). Modulo alone already slows Z3 down a lot, but if I try to do some optimization on top (minimizing a cost function that is a sum), then it takes quite some time even for fairly small problems.
Without optimization it works okay-ish (a few seconds for a smaller problem). That said, I now have two questions:
1) Is there any trick to using the modulo function? I already bind each modulo result to its own variable/function. Or is there another way to express periodic/ring behavior?
2) I am not interested in finding THE best solution; a good one will be good enough. Is there a way to set bounds on the cost function, e.g., if I know its upper and lower bound? Any other tricks where I could use domain knowledge to find solutions faster?
Furthermore, I thought that if I use the timeout option solver.set("timeout", 10000), the solver would time out and return the best solution found so far. That doesn't seem to be the case: it just times out.
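(The original model isn't shown; purely for concreteness, here is a minimal hypothetical sketch of the kind of model being described, with made-up names, period, and bounds. Note that domain-knowledge bounds on the objective can be stated as ordinary constraints.)

```python
from z3 import *

# Hypothetical sketch; all names, the period, and the bounds are made up.
period = 24
starts = [Int(f'start_{i}') for i in range(3)]
cost = Int('cost')

opt = Optimize()
for s in starts:
    opt.add(s >= 0, s < 1000)
    # Periodic behavior via mod: each start must fall in "business hours".
    opt.add(s % period >= 8, s % period <= 17)

opt.add(cost == Sum([s % period for s in starts]))
# Domain knowledge as plain constraints: each term lies in [8, 17],
# so the sum lies in [24, 51].
opt.add(cost >= 24, cost <= 51)

opt.set("timeout", 10000)  # milliseconds
opt.minimize(cost)
print(opt.check())
```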
Impossible to comment on the mod function without seeing some code. But as a rule of thumb, division is difficult. Are you using Int values, or bit-vectors? If you are using unbounded integers, you might want to try bit-vectors which might benefit from better internal heuristics. Or try Real values, and then do a proper reduction.
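To illustrate the bit-vector suggestion with a hypothetical example: if the schedule values are known to fit in a fixed number of bits, the same modulo constraint can be stated over bit-vectors, where it becomes URem.

```python
from z3 import *

period = 24

# Unbounded-integer version: Int modulo pulls in the harder integer
# arithmetic machinery.
x = Int('x')
s_int = Solver()
s_int.add(x >= 0, x % period == 3)

# Bit-vector version: if values fit in, say, 16 bits, URem is a pure
# bit-level operation that may benefit from different heuristics.
y = BitVec('y', 16)
s_bv = Solver()
s_bv.add(URem(y, BitVecVal(period, 16)) == 3)

print(s_int.check(), s_bv.check())
```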
Regarding how to get the "best so far" optimal value, see this answer: Finding suboptimal solution (best solution so far) with Z3 command line tool and timeout
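From the Python API, the gist of that answer looks roughly like the following sketch: Optimize exposes the bounds it has established on an objective so far (exact behavior on timeout can vary between Z3 versions).

```python
from z3 import *

x = Int('x')
opt = Optimize()
opt.add(x >= 0, x <= 100, x % 7 == 3)
h = opt.minimize(x)          # handle for the objective

opt.set("timeout", 10000)    # milliseconds
res = opt.check()            # may come back unknown on a timeout

# lower/upper give the bounds established on the objective so far; on a
# timeout they bracket the best-known value (version-dependent behavior).
print(res, opt.lower(h), opt.upper(h))
```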
Instead of using the built-in modulo and division, you could introduce uninterpreted functions mymod and mydiv, for which you only provide the necessary axioms of division and modulo that your problem requires. If I remember correctly, Microsoft's Ironclad and/or Ironfleet team did that when they had modulo/division-related performance problems (using the pipeline Dafny -> Boogie -> Z3).
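A sketch of that idea (my own illustration, not the Ironclad/Ironfleet code): introduce an uninterpreted mymod and assert only the properties the problem needs, such as range and periodicity, instead of the full definition. Whether this pays off is entirely problem-specific.

```python
from z3 import *

k = 24  # hypothetical fixed modulus

# Uninterpreted stand-in for `% k`: Z3 knows nothing about it except
# what we axiomatize below.
mymod = Function('mymod', IntSort(), IntSort())
t = Int('t')

s = Solver()
# Only the axioms the problem actually needs:
s.add(ForAll([t], And(mymod(t) >= 0, mymod(t) < k)))   # range
s.add(ForAll([t], mymod(t + k) == mymod(t)))           # periodicity

x = Int('x')
s.add(x >= 0, mymod(x) == 3)
print(s.check())
```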
In a constraint solver like Gecode, we can control the exploration of the search space with branching functions, e.g. branch(home, x, INT_VAL_MIN). This starts exploring the search space from the minimum possible value of variable x in its domain and tries to find a solution. (There are many such alternatives.)
Does Z3 have this kind of flexibility built in? Is there any alternative?
SMT solvers usually do not allow for these sorts of "hints" to be given, they act more as black-boxes.
Having said that, each solver uses a ton of internal heuristics, and z3 itself has a number of settings that you can play with to give it hints. If you run:
z3 -pd
it will display all the options you can provide, and there are literally over 600 of them! Unfortunately, these options are not really well documented, and how they impact the solver is rather cryptic. The only reliable way to find out would be to study the source code and see what they do, which isn't for the faint of heart. But in any case, it will not be as obvious as the branch feature you cite for gecode.
There are, however, other tricks one can use to speed up solving with SMT solvers; unfortunately, these are usually very problem-specific. If you post specific instances, you might get better suggestions.
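For a concrete starting point, global and per-solver parameters can be set from the Python API as well; the option names below are real Z3 parameters, but which ones (if any) actually help is problem-specific.

```python
from z3 import *

# Global parameters; `z3 -pd` on the command line lists them all.
set_param('smt.random_seed', 42)   # a different seed explores differently
set_param(verbose=1)               # trace what the solver is doing

s = Solver()
s.set("timeout", 5000)             # per-solver option, in milliseconds
x = Int('x')
s.add(x * x == 49, x > 0)
if s.check() == sat:
    print(s.model())
```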
I'm having trouble figuring out how to debug Z3. Is there a way to see what the SMT engine is "trying", to make it easier to understand why it's failing to see a solution that seems obvious, and where it's devoting its time instead?
As an example in my particular circumstance, I'm working with a recursive function and setting z3 to find inputs where the function has a certain result. SMT was timing out, yadda yadda yadda, turns out the thing I was recursing on had a base case of 0, but if it ever went negative, it'd recurse forever. Z3 didn't know not to pick a negative number, so it'd get stuck. I figured that out by staring at the code, but if I had some output somewhere that said "trying i == -10, trying i == -11, etc" it'd be very obvious what was going wrong.
I'm continuing to have less obvious issues, and I suspect Z3 is still getting stuck in loops. How can I see the loop it's getting stuck in?
It is unfortunately very difficult to find out why exactly Z3 is running forever, but typical culprits are matching loops due to bad patterns (a quantifier instantiation yields new ground terms that trigger further instantiations, and so on) and non-linear arithmetic.
The Z3 Axiom Profiler, described in this paper, can help with identifying problems due to too many quantifier instantiations.
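To make the matching-loop point concrete, here is a hypothetical example in the Python API: the explicit pattern (trigger) f(x) is attached to an axiom whose body creates the new ground term f(x - 1), so each instantiation can fire the trigger again.

```python
from z3 import *

f = Function('f', IntSort(), IntSort())
x = Int('x')

# Each instantiation of this axiom creates the new term f(x - 1), which
# matches the trigger f(x) again. Without the x > 0 guard and the base
# case below, this is a textbook matching loop.
ax = ForAll([x], Implies(x > 0, f(x) == f(x - 1) + 1), patterns=[f(x)])

s = Solver()
s.add(f(0) == 0, ax, f(5) == 5)
print(s.check())
```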
I am trying to find an optimal solution using the Z3 API for Python. I have used set_option("verbose", 1) to print the statements Z3 generates while checking for sat. Among them are pb.conflict statements, each printed with two numbers.
I want to know what exactly pb.conflict is. What do these statements signify, and what are the two numbers that get printed along with each one?
pb stands for Pseudo-boolean. A pseudo-boolean function is a function from booleans to some other domain, usually Real. A conflict happens when the choice of a variable leads to an unsatisfiable clause set, at which point the solver has to backtrack. Keeping the backtracking to a minimum is essential for efficiency, and many of the SAT engines carefully track that number. While the details are entirely solver specific (i.e., those two numbers you're asking about), in general the higher the numbers, the more conflict cases the solver met, and hence might decide to reset the state completely or take some other action. Often, there are parameters that users can set to specify when such actions are taken and exactly what those are. But again, this is entirely solver and implementation specific.
A Google search on pseudo-boolean optimization will turn up a bunch of scholarly articles that you might want to peruse.
If you really want to find Z3's treatment of pseudo-booleans, then your best bet is probably to look at the implementation itself: https://github.com/Z3Prover/z3/blob/master/src/smt/theory_pb.cpp
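For a concrete picture of what the pb engine works on, here is a small pseudo-boolean example in the Python API; running with verbosity turned on prints solver statistics, among which lines like pb.conflict may appear (exact output varies by version).

```python
from z3 import *

x, y, z = Bools('x y z')
set_option(verbose=1)  # solver statistics, including the pb lines

opt = Optimize()
opt.add(PbLe([(x, 3), (y, 2), (z, 1)], 4))  # 3x + 2y + z <= 4
opt.add(AtMost(x, y, z, 2))                  # at most two may be true
opt.add(Or(x, y))
opt.maximize(If(x, 3, 0) + If(y, 2, 0) + If(z, 1, 0))
print(opt.check(), opt.model())
```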
I notice that the Z3 C++ (and C) API allows you to supply the logic to be used.
I have two questions about this that I couldn't answer by looking online:
Are these supposed to be the standard SMT-LIB logics, e.g. QF_LRA?
When are these worth supplying, i.e. when will Z3 actually use this information?
My context is mainly quantifier-free without bit-vectors, but everything else is possible. I am using the SMT solver incrementally, and I can always work out which logic I will be in at the start.
Z3 will also try to figure out what the logic is (when run with default options), but it doesn't have custom tactics for all combinations of theories (see default_tactic.cpp and smt_strategic_solver.cpp). When you are not sure what Z3 will decide to do, it's best to set the logic (and hence the tactic) right up front, so that you will get errors if you try to use things that are not in that logic. It will also use that information to set up the SMT kernel, e.g., enabling various preprocessors and solver features, and choosing heuristics (see e.g. smt_setup.cpp).
Try it out and see!
Usually it does make a big difference. Setting the logic means the solver will use a specialized tactic to solve the formula, instead of going through the generic loop. Z3 will also try to guess the logic, but it's usually better to just provide it upfront.
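A quick Python illustration of supplying the logic up front (the logic names are the standard SMT-LIB ones):

```python
from z3 import *

x, y = Reals('x y')

# SolverFor takes a standard SMT-LIB logic name and sets up the solver
# specialized for it, rather than the generic strategic solver.
s = SolverFor('QF_LRA')
s.add(x + y > 1, x - y < 0)
print(s.check())  # sat

# Using something outside the logic (e.g. an Int here) may be rejected,
# which is exactly the early-error benefit described above.
```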
I use == and != a lot in my code, and I was wondering which is quicker in Objective-C, so that I can make my app as fast as possible.
Situation
I have a variable which is one of two things, and I want the quickest method to see which one it is.
Thanks in advance
You should not worry about this level of detail for performance reasons, unless you've identified a performance issue.
However, wondering to satisfy an inquiring mind is a different matter! :-) The answer is they are identical.
A comparison is usually compiled to an instruction that sets condition flags (either a dedicated comparison instruction or something like an arithmetic instruction that sets condition codes), followed by a conditional jump that tests those flags. A test for "equal" costs the same as a test for "not equal"; only the setting of the condition flags being tested differs.
This also means that statements such as if([some method call]) ... and if(![some method call]) ... have the same cost - the "not" operator produces no extra code.
You can test it yourself: check the current milliseconds before and after the operation. I'd guess there's no difference. If you really need to know, run the operation many times in a loop and compare the timings; then you will get the answer.
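That timing-loop idea, sketched in Python for brevity (the question is about Objective-C, but the measurement approach carries over):

```python
import timeit

# Time millions of iterations of each comparison and compare the totals.
setup = "a, b = 1, 2"
eq = timeit.timeit("a == b", setup=setup, number=10_000_000)
ne = timeit.timeit("a != b", setup=setup, number=10_000_000)
print(f"==: {eq:.3f}s   !=: {ne:.3f}s")  # expect essentially identical
```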
This is silly. You would have to execute millions of iterations of code using the two versions of the if statement in order to even detect a difference in speed. This is a triviality, and not worth worrying about.
As the other poster said, == and != should take exactly the same amount of time for non-floating-point values. For floating point, there might be some differences, since for an equality comparison the processor has to first normalize the two floating-point values and then compare them, and normalizing is relatively time-consuming. I don't know if testing for non-equality is slower than testing for equality. It's unlikely, but not impossible.