Consider the following smt2 file generated with the help of KLEE.
I am trying to evaluate it using z3, but z3 hangs forever. Specifically, when the formula is UNSAT, z3 runs forever and never produces a result.
Is the formula too big?
Is there any issue with using the AUFBV logic?
May I get some suggestions for improving z3's performance?
Each assert statement contains some common subexpressions. Is it possible to improve z3's performance by solving the subexpressions separately?
This is going to be impossible to answer, as the SMT-Lib file you've linked is undecipherable for a non-KLEE user. I recommend asking the KLEE folks directly, using the original program that gave rise to this. Failing that, try to reduce the SMT2 file to a minimum and see if you can at least hand-annotate it to work out what it's trying to do.
Regarding your question for common subexpressions: You have to experiment to find out. But the way most such solvers are constructed, they /will/ discover common subexpressions themselves and reuse lemmas about them automatically as they convert your input to an internal representation. So, it'd surprise me if it helped in any significant way to do this by hand; unless the input is really massive. (The example you linked isn't really that big so I doubt that's an issue.)
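For what it's worth, here's a tiny illustration (in Python via z3py; the terms are made up, not taken from your query) of the sharing that already happens under the hood: structurally identical subterms are hash-consed into the same internal AST node, so factoring them out by hand mostly changes what you type, not what the solver sees.

    from z3 import Ints

    # Two separately constructed copies of the same subexpression end up as
    # the very same AST node inside z3's context (hash-consing).
    x, y = Ints('x y')
    a = (x + y) * (x + y)
    b = (x + y) * (x + y)
    print(a.eq(b))   # True: structurally identical, already shared internally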
If I only run (check-sat), it reports "unsat".
But if I then try (get-model), nothing is reported and an error comes out right away.
Is there any way to get the answer without hitting the error?
As Patrick commented, it's really hard to decipher what you're asking. Please provide some code snippets to show what you get and what you hope to achieve.
Having said that, I'll take a wild guess that you are in a situation where the solver is unsat, i.e., (check-sat) returns an unsat response. Yet, your script has (get-model) in the next line, which of course errors out since there's no model. And you'd like a way to avoid the error message.
This is a known limitation of the SMTLib command language: you cannot programmatically inspect the output of commands, unfortunately. The usual way to solve this sort of problem is to have a "live" connection to the solver (typically in the form of a text pipe), read a line after issuing (check-sat), and continue programmatically depending on the response. This is how most systems built on top of SMT solvers work.
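As a rough sketch of the pipe-based approach (in Python; the declared constant and the exact reading strategy are illustrative, not prescriptive):

    import subprocess

    # Start z3 in interactive mode and talk to it over stdin/stdout.
    proc = subprocess.Popen(["z3", "-in"],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            text=True)

    proc.stdin.write("(declare-const x Int)\n")
    proc.stdin.write("(assert (> x 0))\n")
    proc.stdin.write("(check-sat)\n")
    proc.stdin.flush()

    answer = proc.stdout.readline().strip()   # "sat", "unsat", or "unknown"
    if answer == "sat":
        proc.stdin.write("(get-model)\n")     # only ask for a model when one exists
    proc.stdin.write("(exit)\n")
    proc.stdin.flush()

    print(answer)
    print(proc.stdout.read())                 # remaining output, e.g. the model
    proc.wait()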
Alternatively, you can use the higher-level APIs in other languages (C/C++/Java/Python/Haskell, ...) and program z3 that way, using the facilities of the host language to direct the interaction. This requires you to learn another interface on top of SMTLib, but for serious uses of this technology, you probably don't want to limit yourself to a pure SMTLib interface anyhow.
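For instance, with the Python API the same pattern looks roughly like this (the constraints are placeholders; the point is just to branch on the result of check() before asking for a model):

    from z3 import Solver, Int, sat

    x = Int('x')
    s = Solver()
    s.add(x > 0, x < 0)        # deliberately unsatisfiable

    r = s.check()
    if r == sat:
        print(s.model())       # safe: a model exists
    else:
        print("no model; solver said:", r)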
Also see this answer for a related discussion: Executing get-model or unsat-core depending on solver's decision
Hope this addresses your issue, though it's hard to tell from your question whether this is what you were getting at.
I have an SMT application (built on the Haskell SBV library) which solves some complex equation for a single variable s in Real logic using Z3. Finding a solution takes about 30 seconds in my case.
Trying to speed things up, I added an additional constraint s < 40000, since I have some estimate of the solution. I was thinking that such a constraint would shrink the search space and make the solver return the result faster. However, it only made things slower (it didn't even finish within 10 minutes, actually).
The question is: can it be assumed that additional constraints always slow down or speed up the solution process, or are there no general rules and it always depends on the circumstances?
I'm worried that even my 30-second query may contain some extra constraint that isn't strictly needed and just slows down the process.
I don't think you can make any general assumptions about this. It may or may not impact solving time, assuming sat/unsat status doesn't change.
Equalities usually help (as they propagate freely), but for anything else, it's anybody's guess. Also, different solvers can exhibit differing behavior for the same addition.
One way to think about this is that the underlying DPLL(T) algorithm is essentially a very smart glorified search algorithm. It keeps producing "learned lemmas" with the hope that it will find a contradiction with a previously known fact. The new "constraint" you add might cause it to generate a ton of correct but irrelevant lemmas that makes it go down the deep-end with no useful result. (Quantified formulae are usually like this: You can instantiate them in a million different ways; but unless you find the "correct" instantiation, all they do is end up polluting your search space.)
At least that's been my experience!
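If you want to experiment, the mechanics are cheap to set up. Here's an illustrative sketch (in Python with z3py rather than SBV, and with a placeholder equation standing in for the real one); whether the extra bound helps or hurts is entirely problem-dependent:

    import time
    from z3 import Real, Solver

    def solve(with_bound):
        s = Real('s')
        solver = Solver()
        solver.add(s * s * s - 2 * s == 7)   # stand-in for the real equation
        if with_bound:
            solver.add(s < 40000)            # the "helpful" estimate
        t0 = time.time()
        result = solver.check()
        return result, time.time() - t0

    for with_bound in (False, True):
        result, elapsed = solve(with_bound)
        print("bound added:", with_bound, "->", result, "in %.3fs" % elapsed)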
I know the parseSMTLIB2File Java API ignores certain commands in an SMT2 file. However, is there a way around it? I am generating smt2 files and using parseSMTLIB2File and solver.check() to parse and solve the constraints.
Now, I would like to use the unsat cores from the solver for some computation. I know I could probably do it over standard input and output (here), but that would be very inefficient for running my algorithms. It is also not ideal to change the entire code base to route every constraint generation through the Z3 Java APIs.
The native C++ interface handles options and (tracked) assertions well. So, is there any way around this? How can I do it programmatically and efficiently?
Do the other C/C++/Python parseSMTLIB2File APIs behave the same as Java's, or do they read in additional things?
is there a way around it?
No. parseSMTLIB2File is not a complete interface to the solver, and it's not intended to be. The only options are to switch to the full API interface, or to the full text interface by emitting .smt2 files and passing those to Z3. The latter can be done via pipes instead of actual files, and many users are happy with the performance of that.
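If you do go the full-API route, the amount of code needed for unsat cores is modest. Here is a rough sketch in Python (the Java bindings offer corresponding parse, assert-and-track, and unsat-core calls; "constraints.smt2" and the label names are placeholders):

    from z3 import Solver, parse_smt2_file, unsat

    # Parse the file to get its assertions, then re-assert them with
    # tracking labels so they can show up in the unsat core.
    assertions = parse_smt2_file("constraints.smt2")   # placeholder file name

    s = Solver()
    s.set(unsat_core=True)
    for i, a in enumerate(assertions):
        s.assert_and_track(a, "a%d" % i)

    if s.check() == unsat:
        print(s.unsat_core())   # labels of a conflicting subset of assertions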
When writing proofs I noticed that Agda's auto proof search frequently wouldn't find solutions that seem obvious to me. Unfortunately, coming up with a small example that illustrates the problem seems to be hard, so I'll try to describe the most common patterns instead.
I forgot to add -m to the hole to make Agda look at the module scope. Can I make that flag the default? What downsides would that have?
Often the current hole can be filled by a parameter of the function I am about to implement. Even when adding -m, Agda will not consider function parameters or symbols introduced in let or where clauses though. Is there something wrong with simply trying all of them?
When viewing a goal, symbols introduced in let or where clauses are not even displayed. Why?
What other habits can make using auto more effective?
Agda's auto proof search is hardwired into the compiler. That makes it fast, but limits the amount of customization you can do. One alternative approach would be to implement a similar proof search procedure using Agda's reflection mechanism. With the recent beefed-up version of reflection using the TC monad, you no longer need to implement your own unification procedure.
Carlos Tome has been working on reimplementing these ideas (check out his code: https://github.com/carlostome/AutoInAgda). He's been working on several versions that try to use information from the context, print debugging info, etc. Hope this helps!
One way to solve optimisation problems is to use an SMT solver to ask whether a (bad) solution exists, then to progressively add tighter cost constraints until the proposition is no longer satisfiable. This approach is discussed in, for example, http://www.lsi.upc.edu/~oliveras/espai/papers/sat06.pdf and http://isi.uni-bremen.de/agra/doc/konf/08_isvlsi_optprob.pdf.
Is this approach efficient, though? i.e. will the solver re-use information from previous solutions when attempting to solve with additional constraints?
The solver can reuse lemmas learned while solving previous queries. Just keep in mind that in Z3, whenever you execute a pop, all lemmas created since the corresponding push are forgotten. So, to accomplish this you must avoid push and pop commands and use "assumptions" if you need to retract assertions. In the following question, I describe how to use "assumptions" in Z3:
Soft/Hard constraints in Z3
Regarding efficiency, this approach is not the most efficient one for every problem domain. On the other hand, it can be implemented on top of most SMT solvers. Moreover, Pseudo-Boolean solvers (solvers for 0-1 integer problems) successfully use a similar approach for solving optimization problems.
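For concreteness, here is a minimal sketch of the bound-tightening loop using assumption literals instead of push/pop (in Python with z3py; the cost variable and its constraints are placeholders for the real objective):

    from z3 import Int, Bool, Implies, Solver, sat

    cost = Int('cost')
    s = Solver()
    s.add(cost >= 0, cost <= 100)        # stand-in for the real problem

    best, bound = None, 101
    while True:
        guard = Bool('bound_lt_%d' % bound)
        # Attach the tightened bound to a guard literal; passing the guard as
        # an assumption means it can be "retracted" later simply by not
        # assuming it, so lemmas learned so far are kept across iterations.
        s.add(Implies(guard, cost < bound))
        if s.check(guard) == sat:
            best = s.model()[cost].as_long()
            bound = best                 # ask for something strictly better
        else:
            break

    print("optimum:", best)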