Detailed documentation of Z3 INI options

Is there any detailed documentation of the INI options of Z3? I had to take a trial-and-error approach to figure out the best options for my QF_BV problems, and I am still not sure whether there are more options that would make my Z3 runs faster. It would be great if someone could point me to an existing detailed explanation of the INI options.
Thanks.

We are currently restructuring Z3 and moving away from the approach of a solver with a "thousand" parameters.
We are moving Z3 towards a more modular and flexible approach for combining solvers and specifying strategies.
You can find more information about this new approach in the following draft.
Regarding INI options, several of them are deprecated and only exist because we haven't finished the transition to the new approach yet.
Several of these options were added for particular projects and are now obsolete; they only exist for backward compatibility.
Regarding QF_BV, Z3 3.2 contains two QF_BV solvers: the old one (from 2.x) and a new one. The new (official) solver is only available through Z3's official input format, SMT 2.0.
The SMT 1.0, Simplify, and Z3 low-level input formats are obsolete. Most of the performance improvements in Z3 3.x are only available when one uses the SMT 2.0 input format.
In a couple of months, the strategy specification language will be officially supported in Z3.
We will have a tutorial and documentation describing how to use it.
In the meantime, I strongly recommend that you use the default configuration and the SMT 2.0 input format for logics such as QF_BV.
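For example, here is a minimal QF_BV problem in the SMT 2.0 format, run with the default configuration. This is only a sketch: it assumes a z3 executable on the PATH, and the constraints are placeholders for illustration.

    import subprocess

    # A tiny QF_BV problem in the SMT 2.0 format, solved with the default
    # configuration (no extra options set).
    smt2 = """
    (set-option :produce-models true)
    (set-logic QF_BV)
    (declare-const x (_ BitVec 32))
    (declare-const y (_ BitVec 32))
    (assert (= (bvadd x y) #x0000002a))
    (assert (= (bvand x y) #x00000000))
    (check-sat)
    (get-model)
    """

    # 'z3 -in' reads SMT 2.0 input from standard input.
    result = subprocess.run(["z3", "-in"], input=smt2,
                            capture_output=True, text=True)
    print(result.stdout)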

Related

is there a way to include another file in smtlib?

Similar to #include in C, for importing functions and axioms that are defined in another file. I wasn't able to find such functionality described in the SMT-LIB documentation or in the online examples. Any hints?
SMT-LIB has no means of #include-ing or importing other files. This might look like a shortcoming, but it is quite rare for people to hand-write SMT-LIB files: they are almost always machine-generated from a higher-level language, and it is assumed that whoever generates the SMT-LIB can simply emit one big file that includes everything you need.
Having said that, I think this would indeed be a useful feature to have. The SMT-LIB standard is always evolving, and such features are usually discussed on its mailing list:
https://groups.google.com/forum/#!forum/smt-lib
Feel free to join the discussion and make a request!
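Until then, a common workaround is to do the inclusion yourself before handing the file to the solver, for example with a few lines of Python. The file names below are made up for illustration.

    # Poor man's #include for SMT-LIB: concatenate a shared prelude with each
    # query and hand the combined file to the solver.
    from pathlib import Path

    prelude = Path("common-axioms.smt2").read_text()
    query = Path("query.smt2").read_text()

    Path("combined.smt2").write_text(prelude + "\n" + query)
    # Then run, for example:  z3 combined.smt2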

What is the difference between tf.estimator.Estimator and tf.contrib.learn.Estimator in TensorFlow

Some months ago, I used the tf.contrib.learn.DNNRegressor API from TensorFlow, which I found very convenient to use. I haven't kept up with the development of TensorFlow over the last few months. Now I have a project where I want to use a regressor again, but with more control over the actual model than DNNRegressor provides. As far as I can see, this is supported by the Estimator API via the model_fn parameter.
But there are two Estimators in the TensorFlow API:
tf.contrib.learn.Estimator
tf.estimator.Estimator
Both provide a similar API, yet their usage differs slightly. Why are there two different implementations, and are there reasons to prefer one?
Unfortunately, I can't find any differences in the TensorFlow documentation, nor a guide on when to use which of the two. Actually, working through the TensorFlow tutorials produced a lot of warnings, as some of the interfaces have apparently changed (the input_fn parameter instead of the x, y parameters, et cetera).
I wondered the same and cannot give a definitive answer, but I have a few educated guesses that might help you:
It seems that tf.estimator.Estimator, together with a model function that returns a tf.estimator.EstimatorSpec, is the most current combination, the one used in the newer examples and the one to use in new code.
My guess is that tf.contrib.learn.Estimator is an early prototype that got replaced by tf.estimator.Estimator. According to the docs, everything in tf.contrib is unstable API that may change at any time, and it looks like the tf.estimator module is the stable API that "evolved" from the tf.contrib.learn module. I assume the authors simply forgot to mark tf.contrib.learn.Estimator as deprecated, and that it hasn't been removed yet so that existing code won't break.
Now there is this explicit statement in the docs:
Note: TensorFlow also includes a deprecated Estimator class at tf.contrib.learn.Estimator, which you should not use.
https://www.tensorflow.org/programmers_guide/estimators
For some reason, though, it's not marked as deprecated in the code.
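For reference, here is a minimal custom model_fn in the tf.estimator style, a sketch based on the pattern in the TF 1.x getting-started docs. The toy linear model and the training data are made up for illustration.

    import numpy as np
    import tensorflow as tf

    def model_fn(features, labels, mode):
        # Toy linear model: predictions = w * x + b
        w = tf.get_variable("w", [], dtype=tf.float64)
        b = tf.get_variable("b", [], dtype=tf.float64)
        predictions = w * features["x"] + b

        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions=predictions)

        loss = tf.reduce_sum(tf.square(predictions - labels))
        if mode == tf.estimator.ModeKeys.TRAIN:
            train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
                loss, global_step=tf.train.get_global_step())
            return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
        return tf.estimator.EstimatorSpec(mode, loss=loss)  # EVAL

    estimator = tf.estimator.Estimator(model_fn=model_fn)
    input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": np.array([1.0, 2.0, 3.0, 4.0])},
        np.array([0.0, -1.0, -2.0, -3.0]),
        batch_size=4, num_epochs=None, shuffle=True)
    estimator.train(input_fn=input_fn, steps=1000)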
To add to Christoph's answer:
The distinction between these packages was specifically mentioned by Martin Wicke at the TensorFlow Dev Summit 2017:
The distinction between core and contrib is really: in core, things don't change. Things are backward compatible until release 2.0, and nobody's thinking about that right now. If you have something in core, it's stable, you should use it. If you have something in contrib, the API may change, and depending on your needs you may or may not want to use it.
So you can think of the tf.contrib package as "experimental" or an "early preview". For classes that already exist in both tf.estimator and tf.contrib, you should definitely use the tf.estimator version, because the tf.contrib class is effectively deprecated (even if that is not stated explicitly in the documentation) and can be dropped in the next release.
As of TensorFlow 1.4, the list of "graduated" classes includes: Estimator, DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. Code using these should be ported to tf.estimator.
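For instance, the DNNRegressor mentioned in the question is now available as tf.estimator.DNNRegressor. A minimal sketch follows; the data is random and only there for illustration.

    import numpy as np
    import tensorflow as tf

    # tf.estimator.DNNRegressor is the "graduated" version of
    # tf.contrib.learn.DNNRegressor.
    feature_columns = [tf.feature_column.numeric_column("x")]
    regressor = tf.estimator.DNNRegressor(feature_columns=feature_columns,
                                          hidden_units=[16, 16])

    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": np.random.rand(100).astype(np.float32)},
        np.random.rand(100).astype(np.float32),
        num_epochs=None, shuffle=True)
    regressor.train(input_fn=train_input_fn, steps=200)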
I was about to ask the same question.
My guess is that tf.estimator.Estimator is the high-level interface and the recommended one to use, while tf.contrib.learn.Estimator is supposedly not high-level (though it is, in fact).
As Christoph mentioned, tf.contrib is unstable, so tf.contrib.learn.Estimator is vulnerable to change. It changed from the 0.x versions to version 1.1, and had changed again in December 2016. The problem is that the two are used differently: you can use tf.contrib.learn.SKCompat to wrap a tf.contrib.learn.Estimator, but you can't do the same with tf.estimator.Estimator, and the model_fn requirements/parameters differ, as the error messages show.
The conclusion is that these two Estimators are different things!
Anyway, I think the TensorFlow documentation handles this topic very badly; tf.estimator is on their tutorial page, which suggests they are very serious about it...

What happened to the custom theory solver methods in the Z3 API?

I have been reading Nikolai's article on Engineering Theories with Z3 to learn how to interface a custom decision procedure with Z3. Several methods such as AssertTheoryAxiom, NewAssignment, and FinalCheck are mentioned there. However, I have been unable to locate them in the most recent (new?) Z3 API at http://research.microsoft.com/en-us/um/redmond/projects/z3/namespace_microsoft_1_1_z3.html. Could someone let me know where they or their replacements are?
On a related note, I see several new concepts in the interface, such as probes and tactics. Are these described or explained anywhere?
The interfaces for custom decision procedures are currently deprecated. They can still be used with the old solver API. See the following posts for additional information:
Using theory plugins with solvers
Custom simplifiers
Here is the full list of deprecated APIs.
Regarding tactics and probes, see this article and the Z3 tutorials (Python and SMT 2.0) about them.
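As a quick taste, both are exposed in the Python front-end. A minimal sketch follows; the tactic and probe names used here ('simplify', 'bit-blast', 'sat', 'size') are standard built-ins.

    from z3 import BitVec, Goal, Probe, Tactic, Then, sat

    x = BitVec("x", 16)
    g = Goal()
    g.add(x * x == 16)

    # A probe inspects a goal without changing it.
    print("assertions in goal:", Probe("size")(g))

    # Tactics transform goals; they can be composed and turned into solvers.
    t = Then(Tactic("simplify"), Tactic("bit-blast"), Tactic("sat"))
    s = t.solver()
    s.add(x * x == 16)
    if s.check() == sat:
        print(s.model())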

Custom simplifiers

Back in the old days (i.e., last year), we used to be able to use theory plugins as a hack to implement custom simplifiers. The Z3 documentation even contained an example of "procedural attachments".
My question is very simple; is there any way to achieve the same goal with Z3 4.x?
In particular, I'm interested in a way to provide Z3 with externally computed evaluations for ground terms.
The theory plugins are currently marked as deprecated in Z3 4.x. So, although they can still be used to implement custom simplifiers, the user would be forced to use deprecated APIs.
In Z3 4.x, custom simplifiers should be implemented as Tactics. The new build system makes it fairly easy to extend the set of available tactics.
I will try to write a tutorial on how to write tactics inside the Z3 code base.
Of course, in this approach, we have to write C++ code. The main advantage is that the tactic will be available in all front-ends (C, C++, .Net, Java, Python, OCaml, SMT2). Moreover, external developers can contribute their tactics to the Z3 codebase and they will be available for all Z3 users.
We also plan to support an API for creating a simplifier tactic based on callbacks provided by the user. This API would allow users to write "custom simplifiers" in their favorite programming language. The new API is conceptually simple, but there is a lot of "hacking" needed to make it available in every front-end (C++, .Net, Java, Python, OCaml). It would be great if some external developer were interested in implementing and maintaining this feature; I'm sure it would benefit many users.
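In the meantime, for the specific case of externally computed evaluations of ground terms, one user-level workaround that needs no plugin is to rewrite those terms yourself before solving, e.g. with substitute in the Python front-end. A sketch, where the "external" computation is just a stand-in Python function:

    from z3 import Function, Int, IntSort, IntVal, Solver, substitute

    # An uninterpreted function whose ground applications we want to
    # "evaluate" with an external procedure.
    f = Function("f", IntSort(), IntSort())

    def external_eval(n):
        # Stand-in for the externally computed evaluation (hypothetical).
        return n * n + 1

    x = Int("x")
    formula = x > f(IntVal(3))

    # Replace the ground term f(3) by its externally computed value, 10.
    evaluated = substitute(formula, (f(IntVal(3)), IntVal(external_eval(3))))

    s = Solver()
    s.add(evaluated)
    print(s.check())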

Writing a parser - In the need of guides and research papers

My knowledge about implementing a parser is a bit rusty.
I have no idea about the current state of research in the area, and could need some links regarding recent advances and their impact on performance.
General resources about writing a parser are also welcome (tutorials, guides, etc.), since I have already forgotten much of what I learned at college :)
I have the Dragon book, but that's about it.
And does anyone have input on parser generators like ANTLR and their performance (i.e., comparisons with other generators)?
Edit: My main target is RDF/OWL/SKOS in N3 notation.
Mentioning the Dragon Book and ANTLR means you've answered your own question.
If you're looking for other parser generators, you could also check out boost::spirit (http://spirit.sourceforge.net/).
Depending on what you're trying to achieve, you might also want to consider a DSL, which you can either parse yourself or write in a scripting language like Boo, Ruby, Python, etc.
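For a small DSL you can often get away with a hand-written parser. Here is a toy sketch for a drastically simplified N3-like triple syntax, which ignores almost everything real N3 supports (prefixes, literals, blank nodes, and so on); it is only meant to show the shape of such a parser.

    import re

    # Toy parser for one simplified N3-like triple per line:
    #   <subject> <predicate> <object> .
    TOKEN = re.compile(r"<[^>]*>|\S+")

    def parse_triples(text):
        triples = []
        for lineno, line in enumerate(text.splitlines(), 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            tokens = TOKEN.findall(line)
            if len(tokens) != 4 or tokens[3] != ".":
                raise SyntaxError("line %d: expected 'subj pred obj .'" % lineno)
            triples.append(tuple(tokens[:3]))
        return triples

    print(parse_triples("<a> <knows> <b> .\n<b> <knows> <c> ."))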
Hmm … your request is a bit unspecific. While there are many recent developments in this general area, they are all quite specialized (naturally, since the field has matured). The original parsing approaches haven't really changed, though. You might want to read up on changes in parser-creation tools (ANTLR and GOLD Parser, to name but a few).
You might also want to take a look at SableCC, another parser generator "which generates fully featured object-oriented frameworks for building compilers".
There is some documentation about basic uses here and here. Since you asked about research papers, the master's thesis (1998) of SableCC's main developer is available and explains a little more about SableCC's advantages.
Although the current stable version is 3.2, the development branch v4 is a complete rewrite and should implement features new to parser generators.
If you want to build custom analyzers for complex languages, consider our DMS Software Reengineering Toolkit. See http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html
This provides very strong parsing technology, making it "easy" to define your language (especially in comparison with most parser generators). Conventional parser generators may help with parsing, but they provide zero help in the hard part of the process, which happens after you can parse the code. DMS provides a vast amount of machinery to support analyzing and transforming the code once you have parsed it.
