I am using Z3 to solve the reachability problem for linear hybrid automata, and I run the experiments under a memory limit. I am confused about the memory usage: there is a case that Z3 can solve within the memory limit when the bound is set to 2500, yet the memory usage of Z3 exceeds the limit when the bound is set to 2000. What is the reason for this?
A smaller number of constraints in the SMT2 file does not necessarily mean that Z3 will use less memory to solve the problem. For example, a small but unsatisfiable problem may require a lot more memory than a big satisfiable problem.
It may well be that setting a lower bound on the unwinding of the automaton turns a satisfiable problem (at bound 2500) into an unsatisfiable problem (at bound 2000), which in turn makes the problem much harder for Z3, even though there are fewer constraints. Consequently, Z3 will use more time and/or memory.
Getting around this may require a different encoding of the problem or using different options on the solver, e.g., to tune the heuristics such that they get "lucky" more often and find solutions earlier.
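For instance, Z3's Python API lets you cap memory globally and vary the seed that drives some of the heuristics. The sketch below only illustrates that; the constraints are placeholders, and the exact parameter names can differ between Z3 versions.

```python
from z3 import *

# Cap Z3's memory (in megabytes) and vary the random seed used by the SMT core;
# both are global parameters whose names may differ across Z3 versions.
set_param("memory_max_size", 4096)
set_param("smt.random_seed", 7)

s = Solver()
x, y = Reals('x y')
s.add(x + y <= 10, x - y >= 1)   # placeholder constraints, not the automaton encoding
print(s.check())
```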
Z3 supports the SMT-lib set-logic statement to restrict to specific fragments. In a program using (set-logic QF_LRA) for example, quantifier-free linear real arithmetic is enabled. If multiple theories are enabled, it makes sense to me that SAT would be required. However, it's not clear to me if it's possible to enable a single theory and guarantee that SAT is never run, thereby reducing Z3 purely to a solver for that single theory alone. This would be useful for example to claim that a tool honors the particular performance bound of the solver for a given theory.
Is there a way to do this in SMT-lib, or directly in Z3? Or is guaranteeing that the SAT solver is disabled impossible?
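For concreteness, this is the kind of restriction I mean, here through the Python API (a sketch; SolverFor corresponds to set-logic, and the constraints are just placeholders):

```python
from z3 import *

# SolverFor picks a solver preconfigured for the given SMT-LIB logic,
# analogous to (set-logic QF_LRA) in an SMT2 file.
s = SolverFor("QF_LRA")
x, y = Reals('x y')
s.add(x + y > 1, x - y < 0)   # placeholder quantifier-free linear real constraints
print(s.check())
```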
The Nelson-Oppen theory combination algorithm that many SMT solvers essentially implement crucially relies on the SAT solver: In a sense, the SAT solver is the engine that solves your SMT query, consulting the theory solvers to make sure the conflicts it finds are propagated/resolved according to the theory rules. So, it isn't really possible to talk about an SMT solver without an underlying SAT engine, and neither SMTLib nor any other tool I'm aware of allows you to "turn off" SAT. It's an integral piece of the whole system that cannot be turned on/off at will. Here's a nice set of slides for Nelson-Oppen: https://web.stanford.edu/class/cs357/lecture11.pdf
I suppose it would be possible to construct an SMT solver that did not use this architecture; but then every theory solver would need to essentially have a SAT solver embedded into itself. So, even in that scenario, extracting the "SAT" bits out is really not possible.
If you're after precise measurements of what part of the solver spends what amount of time, your best bet is to instrument the solver to collect statistics on where it spends its time. Even then, precisely attributing which parts belong to the SAT solver, which parts belong to the theory solvers, and which parts go to their combination will be tricky.
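Short of instrumenting the source, the statistics Z3 already exposes give a coarse picture of where effort goes. A small sketch in the Python API (the constraints are placeholders, and the reported counters vary by engine and version):

```python
from z3 import *

s = Solver()
x, y = Ints('x y')
s.add(x > 0, y > 0, x + y == 10)   # placeholder constraints
print(s.check())

# High-level counters (conflicts, decisions, arithmetic pivots, memory, ...);
# they mix SAT-level and theory-level work, so they only give a rough split.
print(s.statistics())
```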
I would like to improve the scalability of SMT solving. I have already implemented incremental solving, but I would like to improve further. Are there any other general methods to improve it without knowledge of the problem itself?
There's no single "trick" that can make z3 scale better for an arbitrary problem. It really depends on what the actual problem is and what sort of constraints you have. Of course, this goes for any general computing problem, but it really applies in the context of an SMT solver.
Having said that, here are some general ideas based on my experience, roughly in order of ease of use:
Read the Programming Z3 book. This is a very nice write-up and will teach you a ton of things about how z3 is architected and what the best idioms are. You might hit something in there that directly applies to your problem: https://theory.stanford.edu/~nikolaj/programmingz3.html
Keep booleans as booleans, not integers. Never use integers to represent booleans (that is, 1 for true, 0 for false, multiplication for and, etc.); this is a terrible idea that kills the powerful SAT engine underneath. Explicitly convert if necessary. Most problems where people tend to deploy such tricks involve counting how many booleans are true: such problems should be solved using the pseudo-boolean facilities built into the solver. (Look up pbEq, pbLt, etc.)
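A small illustration of the pseudo-boolean route, using the PbEq wrapper from the Python API (the constraint itself is just an example):

```python
from z3 import *

flags = Bools('b0 b1 b2 b3')
s = Solver()

# "exactly two of the four flags are true", stated directly as a
# pseudo-boolean constraint instead of summing 0/1 integers
s.add(PbEq([(b, 1) for b in flags], 2))

if s.check() == sat:
    print(s.model())
```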
Don't optimize unless absolutely necessary. The optimization engine is not incremental, nor is it well optimized (pun intended). It works rather slowly compared to all the other engines, and for good reason: optimization modulo theories is a very tricky thing to do. Avoid it unless you really have an optimization problem to tackle. You might also try to optimize "outside" the solver: make a satisfiability call, get the result, and make subsequent calls asking for "smaller" cost values. You may not hit the optimum using this trick, but the values might be good enough after a couple of iterations. Of course, how good the results will be depends entirely on your problem.
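A minimal sketch of that "optimize outside the solver" loop; the constraints and the cost term are placeholders:

```python
from z3 import *

x, y = Ints('x y')
s = Solver()
s.add(x >= 0, y >= 0, x + y >= 5)   # placeholder constraints
cost = x + y                         # placeholder cost term

best = None
while s.check() == sat:
    best = s.model().eval(cost).as_long()
    s.add(cost < best)               # demand a strictly smaller cost next round
    # in practice you would also stop after a time or iteration budget
print("best cost found:", best)
```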
Case split. Try reducing your constraints by case-splitting on key variables. Example: if you're dealing with floating-point constraints, do a case split on normal, denormal, infinity, and NaN values separately. Depending on your particular domain, you might have such semantic categories where the underlying algorithms take different paths, and mixing-and-matching them will always give the solver a hard time. Case splitting based on context can speed things up.
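A sketch of what such a split can look like for floating-point values, using the classification predicates in the Python API (the query itself is a placeholder):

```python
from z3 import *

x = FP('x', Float32())
constraint = fpGT(fpMul(RNE(), x, x), FPVal(2.0, Float32()))   # placeholder query: x*x > 2

# Solve each semantic class in its own query instead of one mixed query.
cases = {
    "normal":    fpIsNormal(x),
    "subnormal": fpIsSubnormal(x),
    "infinite":  fpIsInf(x),
    "NaN":       fpIsNaN(x),
}
for name, case in cases.items():
    s = Solver()
    s.add(case, constraint)
    print(name, s.check())
```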
Use a faster machine and more memory. This goes without saying, but having plenty of memory can really speed up certain problems, especially if you have a lot of variables. Get the biggest machine you can!
Make use of your cores. You probably have a machine with many cores, and your operating system most likely provides fine-grained multi-tasking. Make use of this: start many instances of z3 working on the same problem but with different tactics, random seeds, etc., and take the result of the first one that completes. Random seeds can play a significant role if you have a huge constraint set, so running more instances with different seed values can get you "lucky" on average.
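A sketch of that portfolio idea using the z3 command-line binary and Python's standard library; the input file name and the choice of seeds are made up, and smt.random_seed/sat.random_seed are the usual seed parameters but worth double-checking for your version:

```python
import subprocess, time

# Launch several z3 processes on the same input with different random seeds
# and take the answer of whichever finishes first.
seeds = (1, 7, 42, 1234)
procs = [
    subprocess.Popen(
        ["z3", f"smt.random_seed={s}", f"sat.random_seed={s}", "problem.smt2"],
        stdout=subprocess.PIPE, text=True)
    for s in seeds
]

winner = None
while winner is None:
    for p in procs:
        if p.poll() is not None:       # this instance has finished
            winner = p
            break
    time.sleep(0.1)

print(winner.stdout.read().strip())    # sat / unsat / unknown
for p in procs:
    if p is not winner:
        p.kill()                       # stop the remaining instances
```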
Try to use parallel solving. Most SAT/SMT solver algorithms are sequential in nature. There have been a number of papers on how to parallelize some of the algorithms, but most engines do not have parallel counterparts. z3 has an interface for parallel solving, though it's less advertised and rather finicky. Give it a try and see if it helps. Details are here: https://theory.stanford.edu/~nikolaj/programmingz3.html#sec-parallel-z3
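Enabling it from the Python API looks roughly like this (a sketch; the constraints are placeholders just to have something to solve, and which engines honor the flag varies by version):

```python
from z3 import *

# Turn on the experimental parallel mode described in the Programming Z3 chapter.
set_param("parallel.enable", True)
set_param("parallel.threads.max", 8)   # optional cap on worker threads

s = Solver()
xs = IntVector('x', 20)
s.add([x >= 0 for x in xs])
s.add(Sum(xs) == 100)                  # placeholder constraints
print(s.check())
```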
Profile. Profile the z3 source code itself as it runs on your problem, and see where the hot-spots are. See if you can recommend code improvements to the developers to address these. (Better yet, submit a pull request!) Needless to say, this will require a deep study of z3 itself and is probably not suitable for end-users.
Bottom line: there's no free lunch. No single method will make z3 run better for your problems. But the ideas above might help improve run times. If you describe the particular problem you're working on in some detail, you'll most likely get better advice as it applies to your constraints.
I have a huge set of linear real arithmetic constraints to solve, and I am incrementally feeding them to the solver. Z3 always seems to get stuck after a while. Is Z3 internally going to change its strategy for solving the constraints, such as moving away from the Simplex algorithm and trying others? Or do I have to explicitly instruct Z3 to do so? I am using Z3py.
Without further details it's impossible to answer this question precisely.
Generally, with no logic being set and the default tactic being run or (check-sat) being called without further options, Z3 will switch to a different solver the first time it sees a push command; prior to that it can use a non-incremental solver.
The incremental solver comes with all the positives and negatives of incremental solvers, i.e., it may be faster initially, but it may not be able to exploit previously learned lemmas after some time, and it may simply remember too many irrelevant facts. Also, the heuristics may 'remember' information that doesn't apply at a later time, e.g., a 'good' variable ordering may change into a bad one after everything is popped and a different problem over the same variables is pushed. In the past, some users found it works better for them to use the incremental solver for some number of queries, but to start from scratch when it becomes too slow.
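A sketch of that pattern with the Python API: keep using push/pop while the solver stays fast, but rebuild it from the shared assertions once it degrades. The base assertions, queries, and restart threshold below are placeholders.

```python
from z3 import *

x, y = Ints('x y')
base = [x >= 0, y >= 0]                      # assertions shared by every query (placeholder)
queries = [x + y == n for n in range(200)]   # placeholder incremental queries

def fresh_solver():
    s = Solver()
    s.add(base)
    return s

s = fresh_solver()
for i, q in enumerate(queries):
    s.push()
    s.add(q)
    print(i, s.check())
    s.pop()

    # Heuristic restart: drop the solver's learned state every 50 queries
    # (or whenever individual checks become noticeably slower).
    if i % 50 == 49:
        s = fresh_solver()
```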
In the many resources available online it is possible to find different usages of the terms 'memory', 'bandwidth', and 'latency' bound kernels. It seems to me that authors sometimes use their own definitions of these terms, and I think it would be very beneficial for someone to make a clear distinction.
To my understanding:
Bandwidth bound kernels approach the physical limits of the device in terms of access to global memory, e.g., an application that uses 170 GB/s out of the 177 GB/s available on an M2090 device.
A latency bound kernel is one whose predominant stall reason is due to memory fetches. So we are not saturating the global memory bus, but still have to wait to get the data into the kernel.
A compute bound kernel is one in which computation dominates the kernel time, under the assumption that there is no problem feeding the kernel with memory, and there is good overlap of arithmetic and latency.
If I got these correct, what would a 'memory bound' kernel be? Is there ambiguity, and if yes, should we limit the conversation to the three above terms?
Thanks!
what would a 'memory bound' kernel be?
Memory bound refers to the general case where a code is limited by memory access, i.e., it includes codes that are latency bound and codes that are bandwidth bound. You've defined pretty much all the other terms correctly.
Is there ambiguity, and if yes, should we limit the conversation to the three above terms?
I don't think there's much ambiguity (you've clearly demarcated 3 of the 4 terms, anyway), and you're not going to impose order on the world in a SO question/answer.
I am trying to use the tactic solver in Z3 to solve a problem of some X constraints, as opposed to the general-purpose solver.
I am using the following tactics -
simplify purify-arith elim-term-ite reduce-args propagate-values solve-eqs symmetry-reduce smt sat sat-preprocess
I apply the tactics one after another to the problem by using Z3_tactic_and_then API.
Also I am using this technique in order to configure the time-out for the solver.
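Roughly, the Python-API equivalent of what I am doing is the following (a sketch with a shortened tactic list and placeholder constraints; AndThen corresponds to Z3_tactic_and_then):

```python
from z3 import *

# Chain a few of the tactics from the list above; AndThen is the Python
# counterpart of Z3_tactic_and_then.
t = AndThen(Tactic('simplify'),
            Tactic('propagate-values'),
            Tactic('solve-eqs'),
            Tactic('smt'))

s = t.solver()                 # turn the tactic into a solver
s.set("timeout", 180 * 1000)   # time-out in milliseconds (not always honored, as the answer explains)

x, y = Ints('x y')
s.add(x > 0, y > 0, x + y == 10)   # placeholder constraints
print(s.check())
```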
Now, for the same problem, if I use the general solver, it times out at the specified time-out. However, if I use the mentioned tactics, the solver does not time out within the given time; it runs much longer.
For example, I specified a timeout of 180*1000 milliseconds, but it only timed out after 730900 milliseconds.
I tried to remove a few tactics mentioned above, but the behaviour was still the same.
Z3 version 4.1
Unfortunately, not every tactic respects the timeout. The tactic smt is a big "offender". This tactic wraps a very old solver implemented in Z3. Unfortunately, this solver cannot be interrupted during some expensive computations, because doing so would leave the system in a corrupted state; that is, Z3 would crash in future operations. When this solver was implemented, a very simplistic design was used: if we want to interrupt the process, kill it. Of course, this is not acceptable when using Z3 embedded in bigger applications. New code is usually much more responsive to timeouts, and we try to avoid this kind of poor design.