Is it possible to serialize/deserialize a Z3 context (from C#)?
If not, is this feature planned?
I think this feature is important for real world applications.
This is not directly supported in the current API. The next release will support multiple solvers, and we will provide commands for copying the assertions from one solver to another, and retrieving the assertions. With these commands, one can implement serialization by dumping the expressions in a file (in SMT 2.0 format). To deserialize, we just read the file back.
Note that this solution can already be implemented with the current API if you keep track of the assertions you have asserted into the logical context.
That being said, I've seen the following approach used in many projects that use Z3. They have their own representation for formulas. When they invoke Z3, they translate their representation into Z3's representation. In most cases the performance overhead is minimal. This approach gives them a lot of flexibility. Serialization is a good example. Some programming environments (e.g., Python) already provide built-in support for serialization.
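For example, keeping track of asserted formulas yourself makes serialization straightforward. Here is a minimal sketch in Python that tracks assertions as plain SMT 2.0 text (no Z3 bindings required; the class and method names are illustrative, not part of any Z3 API):

```python
# Minimal sketch: track declarations and assertions as SMT-LIB 2.0 text,
# so the solver state can be dumped to a file and read back later.
# (Illustrative only -- these names are not part of the Z3 API.)

class TrackedSolver:
    def __init__(self):
        self.declarations = []   # e.g. "(declare-const x Int)"
        self.assertions = []     # e.g. "(assert (> x 0))"

    def declare(self, decl):
        self.declarations.append(decl)

    def assert_formula(self, formula):
        self.assertions.append("(assert %s)" % formula)

    def serialize(self):
        # Dump everything asserted so far, in order, as an SMT 2.0 script.
        return "\n".join(self.declarations + self.assertions)

    @classmethod
    def deserialize(cls, text):
        # Read the script back; in a real application each line would be
        # re-asserted into a fresh logical context.
        s = cls()
        for line in text.splitlines():
            if line.startswith("(declare"):
                s.declarations.append(line)
            elif line.startswith("(assert"):
                s.assertions.append(line)
        return s

s = TrackedSolver()
s.declare("(declare-const x Int)")
s.assert_formula("(> x 0)")
dumped = s.serialize()
restored = TrackedSolver.deserialize(dumped)
assert restored.serialize() == dumped
```

With the real API, the same pattern applies: record each formula as you assert it, write the records out, and re-assert them on load.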
Related
I know the parseSMTLIB2File Java API ignores certain commands in an SMT2 file. However, is there a way around it? I am generating smt2 files and using parseSMTLIB2File and solver.check() to parse and solve the constraints.
Now, I would like to use the unsat cores from the solver for some computation. I know I could probably do it via standard input and output (here). However, that would be very inefficient for running the algorithms. Further, it is not ideal to change the entire code base to route every constraint generation through the Z3 Java APIs.
The native C++ interface handles options and (tracked) assertions well. Hence, is there any way around it? How can I do this programmatically and efficiently?
Do the other C++/C/Python parseSMTLIB2File APIs behave the same as Java's, or do they read in additional commands?
is there a way around it?
No. parseSMTLIB2File is not a complete interface to the solver, and it's not intended to be. The only options are to switch to the full API interface, or to the full text interface by emitting .smt2 files and passing those to Z3. The latter can be done via pipes instead of actual files, and many users are happy with the performance of that.
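For example, the full text interface with unsat cores can be driven over a pipe. A sketch in Python (the helper names are made up; the actual call requires a z3 binary on PATH and is not run here):

```python
import subprocess

def make_script(decls, named_assertions):
    """Build an SMT 2.0 script whose assertions are named, so that
    (get-unsat-core) reports which names appear in the core.
    (Helper names are made up for illustration.)"""
    lines = ["(set-option :produce-unsat-cores true)"]
    lines.extend(decls)
    for name, formula in named_assertions:
        lines.append("(assert (! %s :named %s))" % (formula, name))
    lines.append("(check-sat)")
    lines.append("(get-unsat-core)")
    return "\n".join(lines)

def run_z3(script):
    # Pipe the script to z3 over stdin instead of writing a real file.
    # Requires a z3 binary on PATH; not invoked in this sketch.
    out = subprocess.run(["z3", "-in"], input=script,
                         capture_output=True, text=True)
    return out.stdout

script = make_script(["(declare-const x Int)"],
                     [("a1", "(> x 0)"), ("a2", "(< x 0)")])
# run_z3(script) should report unsat, followed by the core names.
```

Because nothing ever touches the disk, this keeps the existing smt2-generation code base intact while still giving programmatic access to the cores.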
I am using z3py. I have a predicate over two integers that needs to be evaluated using a custom algorithm. I have been trying to get it implemented, without much success. Apparently, what I need is a procedural attachment, which is now deprecated. Could anybody tell me how I might implement this in z3py? I understand that it involves the use of Tactics, but I'm afraid I haven't managed to figure out how to use them. I wouldn't mind using the deprecated way either, as long as it works.
There is no procedural attachment tactic. All tactics are implemented inside of Z3; you can compose tactics from outside.
Previous versions of Z3 exposed a way to register a "user theory". This was deprecated because (1) the source of Z3 is now available, so users can compile their custom theories directly into it, and (2) the user-theory abstraction lacked proper support for model generation. You can of course try previous versions of Z3 that have the user theory extension, but it is not supported.
Data and functions in Rascal can be scattered across different source files and, when imported, are merged accordingly. In other words, Rascal supports open data and open functions. So does Rascal solve the expression problem? Was it designed to do so?
I think writing that Rascal "solves" the expression problem is a bit strong, but you could say that you can easily write openly extensible implementations of expression grammars in it. It was designed exactly for this; see http://www.rascal-mpl.org/from-functions-to-term-rewriting-and-back/
On the one hand, one can write programs which do not suffer from the expression problem in Rascal, precisely for the reason you said: both data and functions are openly extensible, and they work together via dynamic dispatch on pattern matching.
On the other hand, it is pretty easy to write non-extensible implementations in Rascal as well, in particular when using the current visit or switch statements, which are not openly extensible. Also, if you write a set of mutually recursive functions, it may be pretty hard to extend them in an unforeseen manner. We are also working on language features to cover extending those kinds of designs; that is for the future.
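The "open data plus open functions via dispatch on patterns" idea has rough analogues in other languages. For illustration only (this is Python, so it only approximates what Rascal offers), here is a sketch using functools.singledispatch in which a later "module" adds both a new data variant and a new operation without touching existing code:

```python
from functools import singledispatch

# Original "module": an expression type and an eval operation.
class Lit:
    def __init__(self, n): self.n = n

class Add:
    def __init__(self, l, r): self.l, self.r = l, r

@singledispatch
def evaluate(e):
    raise TypeError("unknown expression: %r" % e)

@evaluate.register
def _(e: Lit): return e.n

@evaluate.register
def _(e: Add): return evaluate(e.l) + evaluate(e.r)

# A later "module" extends the data with a new variant...
class Mul:
    def __init__(self, l, r): self.l, self.r = l, r

@evaluate.register
def _(e: Mul): return evaluate(e.l) * evaluate(e.r)

# ...and extends the operations with a new function over all variants.
@singledispatch
def show(e):
    raise TypeError("unknown expression: %r" % e)

@show.register
def _(e: Lit): return str(e.n)

@show.register
def _(e: Add): return "(%s + %s)" % (show(e.l), show(e.r))

@show.register
def _(e: Mul): return "(%s * %s)" % (show(e.l), show(e.r))

expr = Mul(Add(Lit(1), Lit(2)), Lit(3))
# evaluate(expr) == 9, show(expr) == "((1 + 2) * 3)"
```

In Rascal the same effect falls out of the module system: pattern-matched function alternatives from different files merge into one openly extensible function.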
Using non-deterministic functions is unavoidable in applications that talk to the real world, so making a clear separation between deterministic and non-deterministic code is important.
Haskell has the IO monad, which marks the impure context; from the types we know that everything outside of it is pure. That is nice, if you ask me: when it comes to unit testing, one can tell which parts of the code are ultimately testable and which are not.
I could not find anything that allows separating the two in F#. Does it mean there is just no way to do that?
The distinction between deterministic and non-deterministic functions is not captured by the F# type system, but a typical F# system that needs to deal with non-determinism would use some structure (or "design pattern") that clearly separates the two.
If your core model is some computation that does not interact with the world (you only need to collect inputs and run the computation), then you can write most of your code as functional transformations on immutable data structures and then invoke these from some "main" I/O loop.
If you're writing a highly interactive or reactive application, then you can use F# agents (here is an introductory article) and structure your application so that the non-determinism is safely contained in individual agents (see more about agent-based architectures).
F# is based on OCaml, and much like OCaml it isn't pure FP. I don't believe there is a way to accomplish your goal in either language.
One way to manage it would be to make up a nominal type that represents the notion of the real world and ensure each non-deterministic function takes its singleton as a parameter. This way, all dependent functions would have to pass it on along the line. This makes a strong distinction between the two, at the cost of some discipline and a bit of extra typing. The good thing about this approach is that it can be verified by the compiler, given the necessary conditions are met.
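A minimal sketch of that "world token" pattern (shown in Python for brevity rather than F#; in F# you would use a nominal type with a private constructor so the compiler enforces the discipline, and all names here are made up):

```python
class World:
    """Token representing the real world; only one instance exists."""
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

THE_WORLD = World()

# Non-deterministic: must take the world token, so callers (and their
# callers) are forced to declare the dependency in their signatures.
def read_line(world: World) -> str:
    assert isinstance(world, World)
    return input()

# Deterministic: no World parameter, safe to unit test in isolation.
def shout(s: str) -> str:
    return s.upper() + "!"

# A function that calls a non-deterministic one must itself take World:
def greet(world: World) -> str:
    name = read_line(world)
    return shout("hello " + name)
```

Note that Python's type hints are not checked at runtime, so here the separation is only a convention; in F# the extra parameter in every signature is what makes the impure call chain visible to the compiler.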
Recent versions of Z3 have decoupled the notions of Z3_context and Z3_solver. The API mostly reflects these changes; for instance push is deprecated on contexts and respecified as taking a solver as an extra argument.
The interface for theories has not been updated, however. Theories are still created from contexts, and as far as I can tell, never explicitly attached to solvers.
One could think that a theory created from a context will always be attached to all solvers created from the context, but it seems from our experience that this is not the case. Instead, user-defined theories seem to be ignored entirely.
What is the exact status of combining Z3_solver objects with Z3_theory objects?
The theory plugins were introduced a long time ago (version 2.8); Z3 has changed a lot since then.
They are considered deprecated in Z3 4.x. They can still be used with the old API, but they can't be used with new features, in particular with Z3 solver objects (Z3_solver).
In the current Z3, we have many different solvers. The oldest one (implemented in the folder src/smt) is called smt::context. The theory plugins are actually extensions for this old solver.
We say smt::context is a general purpose solver as it supports many theories and quantifiers.
Actually, in Z3 4.3.1, it is the only general purpose solver available.
However, I believe it is based on an obsolete architecture that is not adequate for the new features we are planning for Z3. My plan is to replace it in the future with a solver based on the architecture described here.
Moreover, we don't really work on smt::context anymore. We are essentially just maintaining it and fixing bugs.
After we released the Z3 source code, I imagined the theory plugin support would no longer be necessary, since users would be able to add their extensions inside the Z3 code base. However, this view was too simplistic, since it prevents users from writing extensions in different programming languages.
So, the current plan is to eventually provide theory plugins for the new solver that will become available in Z3. The goal is to have an API such as:
Z3_solver Z3_mk_mcsat_solver(Z3_context ctx, Z3_mcsat_ext ext);
This API would create a new solver object using the given extension ext.
In the meantime, we could also extend the API with a function such as:
Z3_solver Z3_mk_smt_solver(Z3_context ctx, Z3_theory t);
That would create a new solver object based on smt::context using the given theory plugin.
This solution is conceptually simple, but there is a lot of "plumbing" needed to make it happen.
We would have to adjust the Z3_theory object, fix some limitations that prevent theory plugins from being used with features that create copies of smt::context (e.g., MBQI), etc. If someone is very interested in this interface, I would invest cycles in it (remark: we had only a handful of users for the theory plugins). I'm not super excited about it because the plan is to eventually replace smt::context.