In general, first-order logic is undecidable. However, some fragments of first-order logic, such as monadic first-order logic, the Bernays–Schönfinkel–Ramsey (BSR) fragment, and the separated fragment, are decidable.
There exist SAT/SMT solvers such as Z3.
Is there any tool/language that checks the satisfiability of FOL formulas?
SMT solvers, like Z3, can attempt to check satisfiability of FOL (even second-order logic!), though performance might not be great (depending on what the problem looks like).
There are also dedicated FOL provers (aka TPTP solvers), like Vampire, E, iProver, etc. See more here: https://en.wikipedia.org/wiki/Automated_theorem_proving
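As a concrete starting point, here is a minimal Z3Py sketch (the quantified formula is just an illustrative example, not from the question) that asks Z3 whether an FOL formula over an uninterpreted sort and predicate is satisfiable:

from z3 import *

# An illustrative FOL formula: a binary predicate R that is left-total and irreflexive.
S = DeclareSort('S')
R = Function('R', S, S, BoolSort())
x, y = Consts('x y', S)

s = Solver()
s.add(ForAll([x], Exists([y], R(x, y))))  # every element is related to something
s.add(ForAll([x], Not(R(x, x))))          # nothing is related to itself
print(s.check())  # typically sat (a two-element model exists)

As noted above, on harder quantified problems Z3 may answer unknown rather than sat/unsat.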
While reading Extending Sledgehammer with SMT solvers I read the following:
In the original Sledgehammer architecture, the available lemmas were rewritten to clause normal form using a naive application of distributive laws before the relevance filter was invoked. To avoid clausifying thousands of lemmas on each invocation, the clauses were kept in a cache. This design was technically incompatible with the (cache-unaware) smt method, and it was already unsatisfactory for ATPs, which include custom polynomial-time clausifiers.
My understanding of SMT so far is as follows: SMT solvers don't work over clauses. Instead, they try to build a model for the quantifier-free part of a problem. The search is refined by instantiating quantifiers according to some set of active terms. Thus, indeed, no clausal form is needed for SMT solvers.
We rewrote the relevance filter so that it operates on arbitrary HOL formulas, trying to simulate the old behavior. To mimic the penalty associated with Skolem functions in the clause-based code, we keep track of polarities and detect quantifiers that give rise to Skolem functions.
What's the penalty associated with Skolem functions? I could understand that they are not good for SMT solvers, but here it seems that they are bad for ATPs too...
First, SMT solvers do work over clauses, and there is definitely some (non-naive) normalization internally (e.g., miniscoping). But you do not need to do the normalization before calling the SMT solver (especially since doing it yourself will be more naive and generate a larger number of clauses).
Anyway, Section 6.6.7 explains why skolemization was done on the Isabelle side. To summarize: it is not possible to introduce polymorphic constants in a proof in Isabelle; hence it must be done before starting the proof.
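To illustrate what skolemization does (a standard textbook example, not something specific to Sledgehammer): ∀x. ∃y. P(x, y) is replaced by ∀x. P(x, sk(x)) for a fresh function sk, and under negation the roles of ∀ and ∃ swap; tracking polarities therefore lets the filter predict which quantifiers will give rise to Skolem functions.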
It seems likely that, when writing the paper, not changing the filtering led to worse performance and, hence, the penalty was added. However, I tried and failed to find the relevant code simulating clausification in Sledgehammer, so I don't believe this happens anymore.
Is there a way to express assumptions in Z3 (I am using the Z3Py library) such that the engine does not check their validity but takes them as underlying theories, just like in theorem proving?
For example, let's say that I have two unary functions whose argument is of type Real. I would like to tell the Z3 engine that, for all input values, f1(t) is equal to f2(t).
Encoded in Z3Py that would look something like the following:
t = Real("t")
assumption1 = ForAll(t, f1(t) == f2(t))
The problem with the presented code is that my assertion set is quite big and I use quantifiers (I am trying to prove satisfiability of a real-time system). If I add the above assertion to the set of other assertions, the checking procedure does not terminate.
Is there a way to express assumptions in Z3 (I am using the Z3Py library) such that the engine does not check their validity but takes them as underlying theories, just like in theorem proving?
In fact, all assertions you add to Z3 are treated as what you call assumptions. Z3 checks satisfiability of the assertions; it does not check validity. To check validity of a formula F, you assert (not F) and check satisfiability of (not F). If (not F) is unsat, then F is valid. If you have background axioms, you are essentially checking validity of Background => F, so you check satisfiability of Background & (not F).
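As a small Z3Py sketch of that recipe (the names Background and F are placeholders for this illustration, not from the question):

from z3 import *

x = Int('x')
f = Function('f', IntSort(), IntSort())

Background = ForAll([x], f(x) > x)  # placeholder background axiom
F = f(0) > 0                        # placeholder goal

s = Solver()
s.add(Background)
s.add(Not(F))
print(s.check())  # unsat here, so Background => F is valid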
Whether Z3 terminates on your query depends on which combination of theories and quantifiers you use. The more features your queries combine the tougher it is.
For formulas over pure linear arithmetic or polynomial real arithmetic (these are called LRA, LIA and NRA in the SMT-LIB classification; see smtlib.org), Z3 uses specialized decision procedures that have recently been added.
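For instance, a pure quantified LRA query such as this toy example (not from the question) is handled by those procedures:

from z3 import *

x, y = Reals('x y')
s = Solver()
s.add(ForAll([x], Exists([y], y > x)))  # pure linear real arithmetic with quantifiers
print(s.check())  # sat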
Yes, that's possible just as you describe it, but you will end up with quantifiers, which does of course mean that you're solving a harder problem and Z3 will behave differently (it's possible you end up using completely different solvers that don't even share much source code).
For the particular example given, it's possible to eliminate the quantifier cheaply because it has the form of a function definition (ForAll x . f(x) = ...), i.e., we can just replace all occurrences of f with the right hand side and then the quantifier is trivially satisfied. In Z3, this is done by the macro finder, which may be applied as a tactic (with name "macro-finder"), or if you are using the "smt" tactic (implicitly via others or directly), then you can set smt.macro_finder=true.
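Here is a rough Z3Py sketch of both options, reusing the f1/f2 setup from the question (the tactic and option names are as above; the goal at the end is just an illustration):

from z3 import *

f1 = Function('f1', RealSort(), RealSort())
f2 = Function('f2', RealSort(), RealSort())
t = Real('t')

# Option 1: run the macro finder as a tactic in front of the core solver.
s = Then('macro-finder', 'smt').solver()

# Option 2 (alternative): keep a plain Solver() and enable the option globally.
# set_option('smt.macro_finder', True)

s.add(ForAll([t], f1(t) == f2(t)))  # macro-shaped axiom: f1 is defined to be f2
s.add(f1(0) != f2(0))               # toy goal; expected to be unsat
print(s.check())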
My program, a bounded synthesizer of reactive finite-state systems, produces SMT queries to annotate a product automaton of the (uninterpreted) system and a specification. Essentially, it is model checking with uninterpreted functions. If the annotation exists, then the model found by Z3 satisfies the spec. The queries contain:
a datatype (to encode the states of the system and of the specification automaton)
>= (greater or equal) and > (strictly greater) (to specify a ranking function on the states of the product automaton system*spec, which is used to search for lassos with bad states; in other words, an ordering on the states of that automaton)
uninterpreted functions with Boolean domain and range
all clauses are Horn clauses
An example is https://dl.dropboxusercontent.com/u/444947/posts/full_arbiter2.smt2
('forall' is used to encode "don't care" inputs to functions)
Currently the queries use the strict > operator from integer arithmetic (that is, the ranking function has Int range).
Question: is it worth developing a custom theory solver in Z3 for such queries? It could exploit a DFS-based search for lassos, which might be faster than the integer theory solver (or the diff-neg tactic).
Or does Z3 already handle this efficiently? (Here "efficiently" means comparable to a graph-based search for lassos.)
Arithmetic is not the bottleneck of your benchmark.
We can check that by profiling the run with
valgrind --tool=callgrind z3 full_arbiter2.smt2
and then inspecting the profile with kcachegrind. Valgrind and kcachegrind are available in most Linux distros.
So, I don't think you will get a significant performance improvement if you implement a solver for order theory.
One bottleneck is the datatype theory. You may get a performance boost if you encode the types Q and T using Bit-vectors. Another bottleneck is quantifier reasoning. Have you tried to expand them before invoking Z3?
In Z3, the qe (quantifier elimination) tactic will essentially expand Boolean quantifiers.
I got a small speedup by replacing
(check-sat)
with
(check-sat-using (then qe smt))
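If you drive Z3 from Python rather than from the SMT2 front end, a rough equivalent (assuming the benchmark file name from the question) would be:

from z3 import *

s = Then('qe', 'smt').solver()                 # same combination as check-sat-using (then qe smt)
s.add(parse_smt2_file('full_arbiter2.smt2'))   # load the benchmark's assertions
print(s.check())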
As far as I understand, Z3, when encountering quantified linear real/rational arithmetic, applies a form of quantifier elimination described in Bjørner, IJCAR 2010 and more recent work by Bjørner and Monniaux (that's what qe_sat_tactic.cpp says, at least).
I was wondering:
Whether it still works if the formula is multilinear, in the sense that the "constants" are symbolic. E.g. ∀x, ax≤b ⇒ ax ≤ 0 can be dealt with by separating the cases a<0, a=0 and a>0. This is possible using Weispfenning's virtual substitution approach, but I don't know what ended up being implemented in Z3 (that is, whether it implements the general approach or the one restricted to constant coefficients).
Whether it is possible, in Z3, to output the result of elimination instead of just solving for one model. There might be a Z3 tactic to do so but I don't know how this is supposed to be requested.
Whether it is possible, in Z3, to perform elimination as described above, then use the new nonlinear solver to obtain a model. Again, a succession of tactics might do the trick, but I don't know how this is supposed to be requested.
Thanks.
After long travels (including a trip where I met David at a conference), here is a short summary answering the questions as they are posed.
There is no specific support for multi-linear forms.
The 'qe' tactic produces the result of elimination, but may decide satisfiability as a side effect (see the sketch below).
This is a very interesting problem to investigate, but it is not supported out of the box.
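To illustrate the second point, here is a small Z3Py sketch (a toy linear example, not the multi-linear case asked about) that applies the qe tactic to a goal and prints the eliminated result:

from z3 import *

x, y = Reals('x y')
g = Goal()
g.add(Exists([x], And(x >= 0, x <= y)))
print(Tactic('qe')(g))  # prints a quantifier-free equivalent, roughly y >= 0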
I am experimenting with optimizing the use of Z3 for proving facts about a first-order theory. Currently, I specify a first-order theory in Python, ground the quantifiers there and send all the clauses along with the negation of the proof goal to Z3. I have the following idea that I hope could optimize the outcome: I only want to send the formulas in the theory to Z3 that are relevant to the proof goal. I will not discuss this concept in detail, but I think the intuition is simple: my theory is a conjunction of formulas, and I only want to send conjuncts that can possibly affect the truth value of the proof goal.
My question is the following: can this lead to an improvement in efficiency, or does Z3 already use a similar method? I would guess not, because I don't think that Z3 always assumes that the last assertion is the proof goal, so it has no way of optimizing this.
Yes, removing irrelevant facts can make a big difference. Suppose that we have an unsatisfiable formula of the form F_1 and F_2 and (not G). Moreover, let us assume that F_1 and (not G) is unsatisfiable, and F_2 is satisfiable. F_2 is what you call irrelevant. If there is a cheap way to remove F_2 before sending the formula to Z3, it will probably make a big difference.
Z3 has heuristics for "ignoring" irrelevant facts, but they are just heuristics. For our example, the worst-case scenario is an F_2 that is really hard for Z3 to satisfy. Z3 is essentially trying to build an interpretation/solution that satisfies the input formula (the formula F_1 and F_2 and (not G) in our working example). A formula is unsatisfiable when Z3 can show it is impossible to build such an interpretation. In practice, the formula F_2 is irrelevant for Z3 only if it can quickly show F_2 to be satisfiable and the interpretation/solution for F_2 does not conflict with F_1 and (not G). If that is not the case, Z3 can waste a lot of resources on F_2.
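As a small illustration of this point (toy formulas, not your actual theory), you can also ask Z3 for an unsat core to see which of the tracked assertions it actually needed:

from z3 import *

a, x = Ints('a x')
f = Function('f', IntSort(), IntSort())
g = Function('g', IntSort(), IntSort())

F1 = f(a) > 0                # relevant fact
F2 = ForAll([x], g(x) >= x)  # irrelevant (and potentially expensive) fact
G = f(a) >= 0                # proof goal

s = Solver()
s.set(unsat_core=True)
s.assert_and_track(F1, 'F1')
s.assert_and_track(F2, 'F2')
s.assert_and_track(Not(G), 'notG')
print(s.check())       # unsat
print(s.unsat_core())  # typically [F1, notG]: F2 was never needed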