Craig Interpolation API - z3

There are versions of the Z3 SMT solver that support Craig interpolation. The relevant API methods were, for example, Z3_interpolate, Z3_write_interpolation_problem, and Z3_mk_interpolation_context.
Microsoft Research provides a website with the documentation of the Z3 C API, but the methods listed above cannot be found there. Have these methods been removed? Can they be found in a specific branch of Z3?

The documentation is for the master branch of Z3; newer features are available in the unstable branch, which will become the new master soon.

The code can be found (as of today; the code changes very frequently) in the branch "opt". The API functions were moved from z3_api.h to z3_interp.h. It is unclear whether the code will stay in this place or not.

Related

issuing multiple (check-sat) calls until it returns unsat

The release notes for z3 version 4.6 mention a new feature "issuing multiple (check-sat) calls until it returns unsat".
Is this equivalent to ALLSAT?
Where can I find any further documentation or an example for this feature?
No, this was for addressing this issue: https://github.com/Z3Prover/z3/issues/1008
An ALLSAT command is not supported by z3, though it would be easy to code one using the "assert the negation of the previous model and re-check" loop. Most high-level interfaces provide this as a layer on top of what is possible with SMT-Lib2. If you do want built-in support for this, it might be best to first convince the SMT-Lib folks (http://smtlib.cs.uiowa.edu/), so that a standard way of doing it can be developed and implemented by multiple solvers.
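A minimal sketch of such a blocking loop using the Python bindings; the formula and the variables x, y are placeholders of mine, not part of the original answer:

```python
from z3 import *

# Toy formula; enumerate all satisfying assignments of Or(x, y).
x, y = Bools('x y')
s = Solver()
s.add(Or(x, y))

while s.check() == sat:
    m = s.model()
    print(m)
    # Assert the negation of the current model so the next check
    # is forced to find a different assignment.
    block = [d() != m[d] for d in m.decls()]
    if not block:   # the model assigns nothing; stop
        break
    s.add(Or(block))
```

Note that Z3 may omit don't-care variables from a model, so a real ALLSAT loop would usually block on a fixed list of variables of interest rather than on m.decls().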

What is the difference between tf.estimator.Estimator and tf.contrib.learn.Estimator in TensorFlow

Some months ago, I used the tf.contrib.learn.DNNRegressor API from TensorFlow, which I found very convenient to use. I haven't kept up with the development of TensorFlow over the last few months. Now I have a project where I want to use a regressor again, but with more control over the actual model than DNNRegressor provides. As far as I can see, this is supported by the Estimator API via the model_fn parameter.
But there are two Estimators in the TensorFlow API:
tf.contrib.learn.Estimator
tf.estimator.Estimator
Both provide a similar API, but are nevertheless slightly different in their usage. Why are there two different implementations and are there reasons to prefer one?
Unfortunately, I can't find anything in the TensorFlow documentation about the differences, nor a guide on when to use which of the two. Actually, working through the TensorFlow tutorials produced a lot of warnings, as some of the interfaces have apparently changed (the input_fn parameter instead of the x, y parameters, et cetera).
I wondered the same and cannot give a definitive answer, but I have a few educated guesses that might help you:
It seems that tf.estimator.Estimator, together with a model function that returns a tf.estimator.EstimatorSpec, is the most current one; it is used in the newer examples and is the one to use in new code.
My guess now is that tf.contrib.learn.Estimator is an early prototype that got replaced by tf.estimator.Estimator. According to the docs, everything in tf.contrib is unstable API that may change at any time, and it looks like the tf.estimator module is the stable API that "evolved" from the tf.contrib.learn module. I assume that the authors simply forgot to mark tf.contrib.learn.Estimator as deprecated, and that it hasn't been removed yet so that existing code won't break.
Now there is this explicit statement in the docs:
Note: TensorFlow also includes a deprecated Estimator class at tf.contrib.learn.Estimator, which you should not use.
https://www.tensorflow.org/programmers_guide/estimators
For some reason it's not marked Deprecated in code.
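For concreteness, here is a rough sketch of how tf.estimator.Estimator is wired to a custom model_fn that returns a tf.estimator.EstimatorSpec, in the TF 1.x style; the feature key "x", the single dense layer, and the toy data are placeholder assumptions of mine:

```python
import numpy as np
import tensorflow as tf  # 1.x-style Estimator API

def model_fn(features, labels, mode):
    # A single linear layer stands in for a real regression model.
    predictions = tf.layers.dense(features["x"], 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(labels, predictions)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(0.01)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    return tf.estimator.EstimatorSpec(mode, loss=loss)  # EVAL mode

estimator = tf.estimator.Estimator(model_fn=model_fn)
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array([[1.0], [2.0], [3.0]], dtype=np.float32)},
    y=np.array([[2.0], [4.0], [6.0]], dtype=np.float32),
    shuffle=True)
estimator.train(input_fn=input_fn, steps=100)
```

As noted in the answer further down, the model_fn requirements differ between the two Estimator classes, which is part of why they are not interchangeable.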
To add to Christoph's answer.
The distinction between these packages was specifically mentioned by Martin Wicke at the TensorFlow Dev Summit 2017:
The distinction between core and contrib is really: in core, things don't change. Things are backward compatible until release 2.0, and nobody's thinking about that right now. If you have something in core, it's stable, you should use it. If you have something in contrib, the API may change and, depending on your needs, you may or may not want to use it.
So you can think of the tf.contrib package as "experimental" or "early preview". For classes that are already in both tf.estimator and tf.contrib, you should definitely use the tf.estimator version, because the tf.contrib class gets deprecated automatically (even if that is not stated explicitly in the documentation) and can be dropped in the next release.
As of TensorFlow 1.4, the list of "graduated" classes includes: Estimator, DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. Code using the tf.contrib versions of these should be ported to tf.estimator.
I was about to ask the same question.
I guess tf.estimator.Estimator is the high-level interface and the recommended one to use, while tf.contrib.learn.Estimator is supposedly not a high-level interface (though it is, in fact).
As Christoph mentioned, tf.contrib is unstable, so tf.contrib.learn.Estimator is vulnerable to changes. It changed from the 0.x versions to 1.1 and changed again in December 2016. The problem is that their usage seems different: you can use tf.contrib.learn.SKCompat to wrap a tf.contrib.learn.Estimator, but you can't do the same with tf.estimator.Estimator, and the model_fn requirements/parameters are different, as the error messages show.
The conclusion is that these two Estimators are different things!
Anyway, I think the TensorFlow docs handle this topic very badly, since tf.estimator is on their tutorial page, which suggests they are quite serious about it...

Custom simplifiers

Back in the old days (i.e., last year), we used to be able to use theory plugins as a hack to implement custom simplifiers. The Z3 documentation even contained an example of "procedural attachments".
My question is very simple; is there any way to achieve the same goal with Z3 4.x?
In particular, I'm interested in a way to provide Z3 with externally computed evaluations for ground terms.
The theory plugins are currently marked as deprecated in Z3 4.x. So, although they can still be used to implement custom simplifiers, the user would be forced to use deprecated APIs.
In Z3 4.x, custom simplifiers should be implemented as Tactics. The new build system makes it fairly easy to extend the set of available tactics.
I will try to write a tutorial on how to write tactics inside the Z3 code base.
Of course, in this approach, we have to write C++ code. The main advantage is that the tactic will be available in all front-ends (C, C++, .Net, Java, Python, OCaml, SMT2). Moreover, external developers can contribute their tactics to the Z3 codebase and they will be available for all Z3 users.
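As a small illustration of how tactics surface in the front-ends, here is a sketch in the Python bindings that applies built-in tactics to a goal; a custom tactic compiled into the code base would, as far as I understand, be invoked the same way by its registered name:

```python
from z3 import *

x, y = Ints('x y')
g = Goal()
g.add(x + x + y > 2, Or(x == y, x > 1))

# Compose built-in tactics into a pipeline and apply it to the goal.
t = Then(Tactic('simplify'), Tactic('propagate-values'))
print(t(g))   # the simplified subgoal(s)
```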
We also plan to support an API for creating a simplifier tactic based on callbacks provided by the user. This API would allow users to write "custom simplifiers" in their favorite programming language. This new API is conceptually simple, but there is a lot of "hacking" needed to make it available in every front-end (C++, .Net, Java, Python, OCaml). It would be great if some external developer is interested in implementing and maintaining this feature. I'm sure it would benefit many users.

Reading a Z3 file

My program creates a log of all z3 interactions with Z3_open_log(). Then, in another program, I read it back with Z3_parse_z3_file(). This gives me the conjunction of all asserts made on the input. Let's say I have two asserts, a1 and a2. Then, by parsing the z3 file, I get (and a1 a2).
I would like to test (and (not a1) a2). How can I do that, given that I only get the conjunction of the two asserts and not a pair of asserts? I could not find any function in the API that allows me to navigate an AST, see whether it is a conjunction, and iterate over its children.
If this is not the way I should go, what would you recommend?
Thanks in advance,
AG.
As Pad already described in the comment above, you can use the API to traverse Z3 ASTs.
That being said, I have a couple of comments.
The logging is meant for debugging purposes; it is mainly used to report problematic traces. We have a new logging mechanism in Z3 4.0: it records all API calls and allows us to obtain a faithful reproduction of the interaction between the host application and Z3.
The Z3 low-level and Simplify formats are deprecated in Z3 4.0. Z3 still has some limited support for them.
Z3 4.0 has new C, C++, .NET and Python APIs. The C API is backward compatible, but I marked several procedures as deprecated. It is much easier to traverse and manipulate ASTs using the new APIs. The Python API is already available online. Here is one example:
http://rise4fun.com/Z3Py/Cp
Here is another example that builds (and a1 a2), extracts each child, and builds (and (not a1) a2):
http://rise4fun.com/Z3Py/8h
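For reference, the same idea looks roughly like this in the Python API; a1 and a2 stand in for the assertions recovered from the parsed file:

```python
from z3 import *

a1, a2 = Bools('a1 a2')
f = And(a1, a2)

# Inspect the top-level connective and rebuild the formula.
if is_and(f):
    kids = f.children()                 # [a1, a2]
    g = And(Not(kids[0]), *kids[1:])    # (and (not a1) a2)
    print(g)
```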
The following tutorial covers the new Z3 API:
http://rise4fun.com/Z3Py/tutorial/guide
Z3 4.0 will be released soon.

Detailed documentation of Z3 INI options

Is there any detailed documentation of the INI options of Z3? I had to take a trial-and-error approach to figure out the best options for my QF_BV problems. I am still not sure whether there are more options that would make my Z3 run faster. It would be great if someone could point me to an existing detailed explanation of the INI options.
Thanks.
We are currently restructuring Z3, moving away from the approach of a single solver with a "thousand" parameters.
We are moving Z3 toward a more modular and flexible approach for combining solvers and specifying strategies.
You can find more information about this new approach in the following draft.
Regarding INI options: several of them are deprecated and only exist because we have not yet finished the transition to the new approach.
Several of these options were added for particular projects, and are obsolete now. They only exist for backward compatibility.
Regarding QF_BV, Z3 3.2 contains two QF_BV solvers: the old one (from 2.x) and a new one. The new (official) one is only available via Z3's official input format, SMT 2.0.
The SMT 1.0, Simplify, and Z3 low-level input formats are obsolete. Most of the performance improvements in Z3 3.x are only available when the SMT 2.0 input format is used.
In a couple of months, the strategy specification language will be officially supported in Z3.
We will have a tutorial and documentation describing how to use it.
In the meantime, I strongly recommend that you use the default configuration and the SMT 2.0 input format for logics such as QF_BV.
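To make that recommendation concrete, here is a tiny QF_BV problem written in SMT 2.0 syntax. It is run through the Python bindings purely for illustration (that API postdates this answer); the solver is left in its default configuration with no extra options:

```python
from z3 import *

# A small QF_BV problem written directly in SMT 2.0 syntax.
smt2 = """
(set-logic QF_BV)
(declare-fun x () (_ BitVec 8))
(declare-fun y () (_ BitVec 8))
(assert (= (bvadd x y) #x2a))
(assert (bvult x y))
"""

s = Solver()                    # default configuration
s.add(parse_smt2_string(smt2))  # parse the SMT 2.0 text into assertions
print(s.check())                # sat, e.g. x = 0, y = 42
```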

Resources