Procedural Attachment in Z3

I am using z3py. I have a predicate over two integers that needs to be evaluated with a custom algorithm, and I have been trying to get it implemented without much success. Apparently, what I need is a procedural attachment, which is now deprecated. Could anybody tell me how I might implement this in z3py? I understand that it involves the use of Tactics, but I am afraid I haven't managed to figure out how to use them. I wouldn't mind using the deprecated way either, as long as it works.

There is no procedural attachment tactic. All tactics are implemented inside of Z3;
you can compose tactics from outside.
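For the composition part, here is a minimal z3py illustration, using the standard simplify and smt tactics; nothing in it is specific to the question's predicate:

from z3 import Int, Tactic, Then

x, y = Int('x'), Int('y')

# Compose existing tactics from the outside: simplify the goal first,
# then hand it to the general-purpose SMT solver.
t = Then(Tactic('simplify'), Tactic('smt'))

s = t.solver()      # turn the combined tactic into a solver object
s.add(x > 0, y > x)
print(s.check())    # sat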
Previous versions of Z3 exposed a way to register a "user theory".
This was deprecated because (1) the source of Z3 is now available, so users can compile their custom theories directly into Z3, and (2) the user-theory abstraction lacked proper support
for model generation. You can of course try previous versions of Z3 that have the user-theory extension, but it is not supported.
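One common workaround that needs no user theory is to drive a refinement loop from Python: leave the predicate uninterpreted, check the solver's candidate models against the custom algorithm, and block any model that disagrees. A sketch of that loop in z3py, where my_algorithm is a hypothetical stand-in for the custom predicate:

from z3 import (Solver, Function, IntSort, BoolSort, Int,
                Not, And, sat)

def my_algorithm(a, b):
    # Hypothetical custom predicate over two integers.
    return (a * a + b) % 3 == 0

P = Function('P', IntSort(), IntSort(), BoolSort())
x, y = Int('x'), Int('y')

s = Solver()
s.add(P(x, y), x > 0, y > 0)

while s.check() == sat:
    m = s.model()
    a = m.eval(x, model_completion=True).as_long()
    b = m.eval(y, model_completion=True).as_long()
    if my_algorithm(a, b):
        print('solution:', a, b)
        break
    # The candidate model disagrees with the custom semantics:
    # block this (x, y) pair and ask for another model.
    s.add(Not(And(x == a, y == b)))

This only searches for a satisfying assignment consistent with the predicate, and the loop may run for a long time; it is not a substitute for a real theory plugin, but it often suffices in practice.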

Related

unsat cores from APIs

I know the parseSMTLIB2File Java API ignores certain commands in an SMT2 file. However, is there a way around it? I am generating smt2 files and using parseSMTLIB2File and solver.check() to parse and solve the constraints.
Now, I would like to use the unsat cores from the solver for some computation. I know I could probably do it using stdin and stdout (here). However, this would be very inefficient for running the algorithms. Further, it is also not ideal to change the entire code base so that every constraint generation goes through the Z3 Java APIs.
The native C++ interface handles options and (tracked) assertions well. Hence, is there any way around it? How can I do this programmatically and efficiently?
Do the other C++/C/Python parseSMTLIB2File APIs behave the same as Java's, or do they read in additional things?
is there a way around it?
No. parseSMTLIB2File is not a complete interface to the solver, and it's not intended to be. The only options are to switch to the full API interface, or to the full text interface by emitting .smt2 files and passing those to Z3. The latter can be done via pipes instead of actual files, and many users are happy with the performance of that.
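If moving to the full API is acceptable, one workable pattern is to parse the file and then re-assert each formula under a fresh tracking literal, so the core can be read back. A sketch in z3py (the Java API offers the same parse and assert-and-track combination; 'input.smt2' is a placeholder):

from z3 import Solver, parse_smt2_file, Bool, unsat

# parse_smt2_file ignores commands such as (set-option ...) and returns
# only the assertions from the file.
formulas = parse_smt2_file('input.smt2')

s = Solver()
s.set(unsat_core=True)
for i, f in enumerate(formulas):
    # One fresh tracking literal per assertion.
    s.assert_and_track(f, Bool('track_%d' % i))

if s.check() == unsat:
    # The core is reported in terms of the tracking literals.
    print('unsat core:', s.unsat_core())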

How to use Agda's auto proof search effectively?

When writing proofs I noticed that Agda's auto proof search frequently wouldn't find solutions that seem obvious to me. Unfortunately, coming up with a small example that illustrates the problem seems to be hard, so I will try to describe the most common patterns instead.
I forgot to add -m to the hole to make Agda look at the module scope. Can I make that flag the default? What downsides would that have?
Often the current hole can be filled by a parameter of the function I am about to implement. Even with -m, though, Agda will not consider function parameters or symbols introduced in let or where clauses. Is there something wrong with simply trying all of them?
When viewing a goal, symbols introduced in let or where clauses are not even displayed. Why?
What other habits can make using auto more effective?
Agda's auto proof search is hardwired into the compiler. That makes it fast,
but limits the amount of customization you can do. One alternative approach
would be to implement a similar proof search procedure using Agda's
reflection mechanism. With the recently beefed-up version of reflection using
the TC monad,
you no longer need to implement your own unification procedure.
Carlos Tomé has been working on reimplementing these ideas (check out his code
at https://github.com/carlostome/AutoInAgda). He's been working on several
versions that try to use information from the context, print debugging info,
etc. Hope this helps!

Using theory plugins with solvers

Recent versions of Z3 have decoupled the notions of Z3_context and Z3_solver. The API mostly reflects these changes; for instance push is deprecated on contexts and respecified as taking a solver as an extra argument.
The interface for theories has not been updated, however. Theories are still created from contexts, and as far as I can tell, never explicitly attached to solvers.
One might think that a theory created from a context would always be attached to all solvers created from that context, but in our experience this is not the case. Instead, user-defined theories seem to be ignored entirely.
What is the exact status of combining Z3_solver objects with Z3_theory objects?
The theory plugins were introduced a long time ago (version 2.8); since then, Z3 has changed a lot.
They are considered deprecated in Z3 4.x. They can still be used with the old API, but they can't be used with new features, in particular with Z3 solver objects (Z3_solver).
In the current Z3, we have many different solvers. The oldest one (implemented in the folder src/smt) is called smt::context. The theory plugins are actually extensions for this old solver.
We say smt::context is a general purpose solver as it supports many theories and quantifiers.
Actually, in Z3 4.3.1, it is the only general purpose solver available.
However, I believe it is based on an obsolete architecture that is not adequate for the new features we are planning for Z3. My plan is to replace it in the future with a solver based on the architecture described here.
Moreover, we don't really work on smt::context anymore. We are essentially just maintaining it and fixing bugs.
After we released the Z3 source code, I imagined the theory plugin support was no longer necessary, since users would be able to add their extensions inside the Z3 code base. However, this view was too simplistic, since it prevents users from writing extensions in different programming languages.
So, the current plan is to have theory plugins for the new solver that will eventually be available in Z3. The goal is to have an API such as:
Z3_solver Z3_mk_mcsat_solver(Z3_context ctx, Z3_mcsat_ext ext);
This API would create a new solver object using the given extension ext.
In the meantime, we could also extend the API with a function such as:
Z3_solver Z3_mk_smt_solver(Z3_context ctx, Z3_theory t);
That would create a new solver object based on smt::context using the given theory plugin.
This solution is conceptually simple, but there is a lot of "plumbing" needed to make it happen.
We would have to adjust the Z3_theory object, fix some limitations that prevent theory plugins from being used with features that create copies of smt::context (e.g., MBQI), etc. If someone is very interested in this interface, I would invest cycles in it (remark: we had only a handful of users for the theory plugins). I'm not super excited about it, because the plan is to eventually replace smt::context.

Z3 Context serialization/deserialization?

Is it possible to serialize/deserialize a Z3 context (from C#)?
If not, is this feature planned?
I think this feature is important for real world applications.
This is not directly supported in the current API. The next release will support multiple solvers, and we will provide commands for copying the assertions from one solver to another and for retrieving the assertions. With these commands, one can implement serialization by dumping the expressions to a file (in SMT 2.0 format). To deserialize, we just read the file back.
Note that this solution can already be implemented with the current API if you keep track of the assertions you asserted into the logical context.
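In current z3py the round trip is only a few lines; I assume the C# API exposes the corresponding SMT2 printing and parsing entry points. A sketch:

from z3 import Solver, Int, parse_smt2_file

x, y = Int('x'), Int('y')
s = Solver()
s.add(x + y > 5, x > 0)

# Serialize: dump the solver's assertions in SMT-LIB 2 format.
with open('state.smt2', 'w') as f:
    f.write(s.to_smt2())

# Deserialize: parse the file back into a fresh solver.
s2 = Solver()
s2.add(parse_smt2_file('state.smt2'))
print(s2.check())   # same result as s.check()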
That being said, I've seen the following approach used in many projects that use Z3. They have their own representation for formulas. When they invoke Z3, they translate their representation into Z3's representation. In most cases the performance overhead is minimal, and this approach gives them a lot of flexibility. Serialization is a good example: some programming environments (e.g., Python) already provide built-in support for it.

Suggestions on how to make a configurable parser

I want to build a parser for a C-like language. The interesting aspect is that I want to build it in such a way that someone who has access to the source can easily modify it to extend the language (with a new expression type, for instance), with the extensions being runtime configurable (they can be turned on and off).
My current intent is to build a recursive descent parser as an object. Each production will be a method of the object, and the method of extension will be to derive classes from this base, replacing methods (and production definitions) as needed. I'm still trying to figure out how to mix and match extensions. One idea is to play games with the v-tbl: objects would be constructed with a v-tbl that is a copy of the base's but with methods replaced from derived classes.
Aside from the bit-twiddling nature of that solution, the only issues I have with it are:
a reasonable way to do the v-tbl mixup
what to do when two extensions alter the same production (as most replacements will end up calling the original, having one replacement call the other would work, but the mechanics of setting this up are the issue)
how to allow the extension of extensions (this might end up looking like a standard MI system, but I've never understood how those work)
Another solution (a slightly more mundane version of the same approach) would be to use static member variables to store function pointers and call them for the same effect.
Edit: I have already built a system that lets me build productions from BNF definitions. I can alter it to support whatever I decide on.
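To make the subclass-and-override idea concrete, here is a minimal sketch in Python; the tiny grammar and class names are illustrative. Ordinary dynamic dispatch plays the role of the v-tbl games:

class BaseParser:
    """Each production is a method; subclasses override to extend."""

    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expression(self):
        # Base language: an expression is just a number.
        return ('num', self.next())

class ParenExtension(BaseParser):
    """Extension: adds parenthesized expressions, delegating to super()."""

    def expression(self):
        if self.peek() == '(':
            self.next()                  # consume '('
            inner = self.expression()    # dynamic dispatch sees extensions too
            assert self.next() == ')'    # consume ')'; sketch-level error handling
            return ('paren', inner)
        return super().expression()      # fall back to the original production

print(ParenExtension(['(', '(', '42', ')', ')']).expression())
# ('paren', ('paren', ('num', '42')))

Extensions can be combined and toggled at runtime by building the class on the fly, e.g. type('Parser', (ParenExtension, BaseParser), {}); the method resolution order decides what happens when two extensions override the same production, with each super() call forwarding to the next one in line.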
These are some of the challenges the Perl 6 design effort has faced. You may find it worthwhile looking into some of the solutions they came up with. Or you may find that to be gross overkill.
I made a configurable parser and uploaded it some time ago at
http://code.google.com/p/compparser/
The project there is not up-to-date but works fine.
If I recall my university courses correctly, recursive descent parsers have some limitations that might bite you, especially since you're allowing extensions: somebody else's language extension could cause issues.
A proper compiler toolkit, such as the open-source ANTLR, might make things easier, and might also provide some different approaches for you.
Another option is to express the parsing rules in XML or something similar, instead of in code; this is less efficient, but far more dynamically configurable. Each language or variant can just use its own (XML) file, and even include/reference other files as 'base' files...
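A minimal sketch of that data-driven idea, using a Python dict where the suggestion says XML (the grammar and rule names are made up); the point is that extensions edit a rule table rather than code, so they can be merged in or left out at runtime:

# Grammar as data: each rule maps to a list of alternatives; each
# alternative is a sequence of rule names or literal tokens.
base_rules = {
    'expr': [['NUM'], ['(', 'expr', ')']],
}

# An extension is just another table merged in at runtime; turning it
# off means leaving it out of the merge.
negation_extension = {
    'expr': [['-', 'expr'], ['NUM'], ['(', 'expr', ')']],
}

def make_grammar(*tables):
    grammar = {}
    for table in tables:
        grammar.update(table)       # later tables override earlier rules
    return grammar

def parse(grammar, rule, tokens, pos=0):
    """Backtracking recognizer: end position on a match, else None."""
    for alternative in grammar.get(rule, []):
        p = pos
        for part in alternative:
            if part in grammar:                       # nonterminal
                p = parse(grammar, part, tokens, p)
                if p is None:
                    break
            elif p < len(tokens) and (
                    tokens[p] == part or
                    (part == 'NUM' and tokens[p].isdigit())):
                p += 1                                # terminal matched
            else:
                p = None
                break
        if p is not None:
            return p
    return None

grammar = make_grammar(base_rules, negation_extension)
print(parse(grammar, 'expr', ['-', '(', '3', ')']))   # 4 (all tokens consumed)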
Frankly, I am not even sure I understood everything you wrote... :-)
But when I see parser and flexibility, I think about LPeg - Parsing Expression Grammars For Lua. It might not fit your needs but it is well worth a look... ;-)
