Say I want to model the Haskell map function, which takes a "mapper" function that is applied to all elements of a list. How can I declare map in SMT-LIB?
No; SMT-LIB is essentially first-order, and higher-order functions are simply not supported.
Z3, however, allows mapping functions over arrays with the (_ map f) extension. See https://rise4fun.com/Z3/tutorial/guide and search for "Mapping Functions on Arrays." This doesn't give you arbitrary higher-order functions, but it can be used to simulate those that operate on SMT-LIB arrays.
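For example, here is a small sketch using Z3's Python API, where Map corresponds to SMT-LIB's (_ map f); the function f and the arrays a, b are just illustrative names:

from z3 import *

# (_ map f) via the Python API: Map(f, a) denotes the array b with
# b[i] == f(a[i]) for every index i.
f = Function('f', IntSort(), IntSort())
a = Array('a', IntSort(), IntSort())
b = Map(f, a)

i = Int('i')
s = Solver()
s.add(b[i] != f(a[i]))   # contradicts the semantics of map
print(s.check())         # unsat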
If you do intend to reason about higher-order functions, then SMTLib is arguably the wrong logic for you. Use of a more traditional theorem prover like HOL/Isabelle, or modern incarnations in Agda/Coq would be more suitable. You can also take a look at Lean, which has a good compromise of features and automation.
As previously asked, Z3 cannot provide a model for recursive functions. However, is it possible to tell Z3 (preferably via the Java API) that a model without a definition for recursive functions is sufficient (or, indeed, to choose the functions I am interested in; essentially, I do not need a model for non-constant functions)? Obviously, that might lead to queries that return sat although some functions do not have a model. Basically, the user then has to ensure that these functions do have some model.
However, I would assume this is not really realizable given the way Z3 and SMT solvers work, i.e., I would assume Z3 needs a (partial) model of recursive functions to evaluate expressions.
The MBQI (model-based quantifier instantiation) implementation for quantifiers doesn't really work nicely with recursive functions.
There is one thing that might do the trick for you: replace recursive function definitions with bounded-recursive functions. You add an extra argument that counts the number of unfoldings of the function you are willing to explore. Whenever you apply the recursive function in the rest of the input, set the depth parameter. For the depth counter you can use either Peano numbers (an algebraic data type) or integers.
The symbolic automata toolkit uses this approach, for example.
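Here is a rough sketch of the trick in Z3's Python API, using an integer fuel counter; the factorial function and the default value are made-up illustrations, and how quickly the solver discharges the quantified axiom depends on the instantiation engine:

from z3 import *

# Bounded-recursive factorial: the extra 'fuel' argument limits how many
# unfoldings the axiom can justify. When fuel runs out, fall back to an
# arbitrary default (here 1), leaving the deeper cases unconstrained.
fact = Function('fact', IntSort(), IntSort(), IntSort())
n, fuel = Ints('n fuel')

s = Solver()
s.add(ForAll([n, fuel],
             fact(n, fuel) ==
             If(fuel <= 0, 1,
                If(n <= 1, 1, n * fact(n - 1, fuel - 1)))))

# Callers fix the unfolding depth explicitly.
s.add(fact(5, 8) == 120)
print(s.check())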
I found one paper that explores the idea given in Nikolaj Bjorner's answer further: "Computing with an SMT solver" by Amin, Leino and Rompf (2014).
Furthermore, I found "Satisfiability Modulo Recursive Programs" by Suter, Köksal and Kuncak (2011) which unfolds calls to recursive functions successively until the satisfiability can be decided.
I have precise and validated descriptions of the behaviors of many X86 instructions in terms amenable to encoding in QF_ABV and solving directly with the standard solver (using no special solving strategies). I wrote an SMT-LIB script whose interface matches my ultimate goal perfectly:
X86State, a record sort describing x86 machine state (registers and flags as bitvectors, and memory as an array).
X86Instr, a record sort describing x86 instructions (enumerated mnemonics, operands as an ML-like discriminated union describing registers, memory expressions, etc.)
A function x86-translate taking an X86State and an X86Instr, and returning a new X86State. It decodes the X86Instr and produces a new X86State in terms of the symbolic effects of the given X86Instr on the input X86State.
It's great for prototyping: the user can write x86 easily and directly. After simplifying a formula built using the library, all functions and extraneous data types are eliminated, leaving a QF_ABV expression. I hoped that users could simply (set-logic QF_ABV) and #include my script (alas, neither the SMT-LIB standard nor Z3 supports #include).
Unfortunately, by defining functions and types, the script requires theories such as uninterpreted functions, thus requiring a logic other than QF_ABV (or even QF_AUFBV due to the types). My experience with SMT solvers dictates that the lowest acceptable logic should be specified for best solving time. Also, it is unclear whether I can reuse my SMT-LIB script in a programmatic context (e.g. OCaml, Python, C) as I desire. Finally, the script is a bit verbose given the lack of higher-order functions, and my lack of access to par leads to code duplication.
Thus, despite having accomplished my technical goals, I think that SMT-LIB might be the wrong approach. Is there a more natural avenue for interacting with Z3 to implement my x86 instruction description / QF_ABV translation scheme? Is the SMT-LIB script re-usable at all in these avenues? For example, you can build "custom OCaml top-levels", i.e. interpreters with scripts "burned into them". Something like that could be nice. Or do I have to re-implement the functionality in another language, in a program that interacts with Z3 via a theory extension (C DLL)? What's the best option here?
Well, I don't think that people write .smt2 files by hand. These are usually generated automatically by some program.
I find the Z3 Python interface quite nice, so I guess you could give it a try. But you can always write a simple .smt2 dumper from any language.
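For instance, here is a rough Z3Py sketch of the state-transformer shape described in the question; the register set and the single instruction are hypothetical placeholders, not the poster's actual x86-translate:

from z3 import *

# Machine state: registers as 32-bit vectors, memory as a byte array.
def fresh_state(tag):
    return {
        'eax': BitVec('eax_' + tag, 32),
        'ebx': BitVec('ebx_' + tag, 32),
        'mem': Array('mem_' + tag, BitVecSort(32), BitVecSort(8)),
    }

# Symbolic effect of "add eax, ebx": a pure state -> state function.
def step_add_eax_ebx(st):
    out = dict(st)
    out['eax'] = st['eax'] + st['ebx']
    return out

s0 = fresh_state('0')
s1 = step_add_eax_ebx(s0)

s = Solver()
s.add(s1['eax'] == 5, s0['ebx'] == 3)   # plain QF_ABV constraints
print(s.check())                        # sat; the model gives eax_0 == 2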
BTW, do you plan to release the specification you wrote for X86? I would be really interested!
In Z3, is it possible to declare a function that takes another function as an argument? For instance, this
(declare-fun foo ( ((Int) Bool) ) Int)
doesn't quite seem to work. Thanks.
As Leonardo mentioned, SMT-LIB does not allow higher-order functions. This is not merely a syntactic restriction: reasoning with higher-order functions is (generally) beyond what SMT solvers can deal with. (Although uninterpreted functions can be used in some special cases.)
If you do need to reason with higher-order functions, then interactive theorem provers are the main weapon of choice: Isabelle, HOL, Coq being some of the examples.
However, sometimes you want higher-order functions not in order to reason about them, but merely to simplify programming tasks. The SMT-LIB input language is not suitable for the high-level programming that end-users typically need in practical situations. If that is your use case, then I'd recommend not using SMT-LIB directly, but rather working with a programming language that gives you access to Z3 (or other SMT solvers). There are several choices, depending on which host language is most suitable for your use case:
If you are a Python user, Z3Py, which just shipped with Z3 4.0, is the way to go.
If you are a Scala user, then look into Scala^Z3.
If Haskell is your preferred language, then take a look at SBV.
Each binding has its own feature set, Z3Py probably being the most versatile since it's directly supported by the Z3 folks. (It also provides access to Z3 internals that remain inaccessible for the other choices, at least for the time being.)
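As a small illustration of that style, here the host language's comprehensions play the role of map, so the solver only ever sees first-order formulas; the variables and constraints below are made up:

from z3 import *

# Build constraints by mapping at the Python level.
xs = [Int('x%d' % i) for i in range(5)]

s = Solver()
s.add([x >= 0 for x in xs])   # "map" the predicate x >= 0 over the list
s.add(Sum(xs) == 10)
if s.check() == sat:
    print(s.model())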
No, this is not possible. However, you can define a function that takes an array as an argument.
(declare-fun foo ((Array Int Bool)) Int)
You can use this trick to simulate higher-order functions like the one in your question.
Here is an example: http://rise4fun.com/Z3/qsED
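In case the link is unavailable, here is a minimal sketch of the same trick in Z3Py; the asserted constraints are just for illustration:

from z3 import *

# An Int -> Bool "function" passed around as an (Array Int Bool).
foo = Function('foo', ArraySort(IntSort(), BoolSort()), IntSort())
p = Array('p', IntSort(), BoolSort())

s = Solver()
s.add(p[0] == False, p[1] == True)
s.add(Select(p, foo(p)))           # force foo to pick an index where p holds
print(s.check())                   # sat
print(s.model().evaluate(foo(p)))  # some index i with p[i] true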
The Z3 guide contains more information about Z3 and SMT.
I'm learning F#. (I'm new to functional programming in general; I've used functional aspects of C# for years, but let's face it, that's pretty different.) One of the things that I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway, especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional, and you want to act all functional and whatnot, so you use recursion. But then why is it acceptable for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
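To make the transformation concrete, here is a sketch in Python of a tail-recursive sum next to the loop it is equivalent to. (Python itself does not eliminate tail calls; this just spells out what the F# compiler does for you.)

# Tail-recursive form: the recursive call is the last thing that happens,
# so no stack frame needs to survive it.
def sum_rec(xs, acc=0):
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])

# The loop a tail-call-optimizing compiler effectively produces:
# parameters become mutable locals, the call becomes a jump.
def sum_loop(xs):
    acc = 0
    while xs:
        acc = acc + xs[0]
        xs = xs[1:]
    return acc

print(sum_rec([1, 2, 3, 4]), sum_loop([1, 2, 3, 4]))  # 10 10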
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons, that's fine, but why would I want to write that myself?
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that, an implementation detail: on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent it.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
As the other posters have said, some data structures, more precisely recursive data structures, lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is a common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.
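As a tiny illustration, here is a cons-style list processed by structural recursion (sketched in Python); the code mirrors the inductive definition of the data type, an empty list or a head plus a strictly smaller tail:

# A singly linked list as nested pairs: None is empty, (head, tail) is a cons.
def length(xs):
    if xs is None:               # base case: the empty list
        return 0
    head, tail = xs
    return 1 + length(tail)      # inductive step on the smaller tail

lst = (1, (2, (3, None)))
print(length(lst))  # 3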
I have just started delving into the world of functional programming.
A lot of OOP (Object Oriented Programming) concepts such as inheritance and polymorphism apply to most modern OO languages like C#, Java and VB.NET.
But what about concepts such as Map, Reduce, Tuples and Sets: do they apply to all FP (Functional Programming) languages?
I have just started with F#. But do the aforementioned concepts apply to other FP languages like Haskell, Nemerle, Lisp, etc.?
You bet. The desirable thing about functional programming is that the mathematical concepts you describe are more naturally expressed in an FP language.
It's a bit of tough going, but John Backus's Turing Award paper, in which he described functional (or "applicative") programming, is a good read. The Wikipedia article is good too.
Yes; higher-order functions, algebraic data types, folds/catamorphisms, etc. are common to almost all functional languages (though they sometimes go by slightly different names in each language).
Functional tools apply to all programming, not just languages that handle them explicitly. For example, Python has map and reduce functions (the latter now in functools in Python 3) that do exactly what you expect, minus any out-of-order or parallel evaluation; you'll need something like the multiprocessing module to get really clever.
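For example, in Python 3 (where reduce lives in functools):

from functools import reduce

xs = [1, 2, 3, 4, 5]
squares = list(map(lambda x: x * x, xs))          # [1, 4, 9, 16, 25]
evens   = list(filter(lambda x: x % 2 == 0, xs))  # [2, 4]
total   = reduce(lambda acc, x: acc + x, xs, 0)   # 15
print(squares, evens, total)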
Even if the language doesn't provide the exact primitives, most modern languages still make it possible to get the desired effect with a bit more work. This is similar to the way a class-like concept can be coded in pure C.
I would interpret what you're asking as, "Are higher-order functions (map, reduce, filter, ...) and immutable data structures (tuples, cons lists, records, maps, sets, ...) common across FP languages?" and I would say, absolutely yes.
Like you say, OOP has well known pillars (encapsulation, inheritance, polymorphism). The "pillars" of functional programming I'd say are 1) Using functions as first-class values and 2) Expressing yourself without side effects.
You'll likely find common tools to apply these ideas across various FP languages (F# is an excellent choice, BTW!) and you'll see them finding their way into more mainstream languages, maybe in a less recognizable form (e.g. LINQ's Select = map, Aggregate = reduce/fold, Where = filter; C# has lightweight lambda syntax, System.Tuple, etc.).
As an aside, the thing that seems to be generally missing from non-explicitly-FP languages is good immutable data structures and syntax support for them (not merely a library), which makes it hard to stick to pillar #2 in those languages. F# lists, records, tuples, etc. are all good examples of combined language and library support for this.
If you really want to jump into the deep end and understand why these concepts are not just conventional but, ahem, foundational, check out the paper "Functional programming with bananas, lenses, envelopes and barbed wire".
They apply to all languages that contain data types that can be "mapped" and "reduced", i.e. maps, arrays/vectors, or lists.
In a "pure lambda calculus" language, where every data structure is defined via function application, you can of course apply functions in parallel (i.e., in a call fn(expr1, expr2), you can evaluate expr1 and expr2 in parallel), but that isn't really what map/reduce is about.