Transpiling language constructs to Z3

I am looking for a discussion of how to convert various programming language constructs to Z3, without limiting the efficiency of the solver.
In particular, is there an algorithm and a set of best-practice rules for converting a function/program expressed in continuation passing style to Z3?
The use context is to use Z3 for implementing a type system for a programming language, so the solution has to be efficient and incremental (e.g. retain previous Z3 ASTs in the Z3 instance).
I am aware of Boogie, but am looking for something more lightweight if possible.
Constraining the semantics of the input language somewhat is acceptable, but not so far that it becomes trivially expressible in SMT-LIB.
I am basically looking for a discussion of different strategies that one might consider for various typical language constructs.
Edit: It might be acceptable to have restrictions that make the input language non-Turing-complete and only reasonably efficient for smaller program fragments (like functions in a library). E.g. the input language could be a language for writing "statically checked" libraries that is later transpiled down to libraries for an existing dynamic programming language, with dynamic runtime asserts where appropriate.
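For a flavor of what such an encoding might look like, here is a minimal z3py sketch; the encoding scheme is my own illustration, not an established best practice. A function body is flattened into constraints over symbolic argument/result variables (CPS-style intermediate results would get named the same way), and push/pop provides the incrementality: the function encoding is asserted once and retained across queries.

    # Hedged sketch: encoding `abs(x) = if x < 0 then -x else x` as Z3
    # constraints and checking refinement-type-style properties
    # incrementally. Names and scheme are invented for illustration.
    from z3 import Int, If, Solver, unsat

    s = Solver()
    x, r = Int("x"), Int("r")
    # Encode the function body as a relation between argument and result.
    s.add(r == If(x < 0, -x, x))

    # Check `abs(x) >= 0` by asserting its negation in a fresh frame.
    s.push()
    s.add(r < 0)
    print("abs nonneg:", "proved" if s.check() == unsat else "refuted/unknown")
    s.pop()  # the function encoding stays in the solver for later queries

    # A later, independent query against the same retained encoding.
    s.push()
    s.add(r < x)  # negation of `abs(x) >= x`
    print("abs >= x:", "proved" if s.check() == unsat else "refuted/unknown")
    s.pop()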

Related

Does Lean enhance proof surveyability?

By proof surveyability I understand the fact that a human user could "trace" all the details of a proof. Some things are not easily traceable. For instance, an SMT proof is based on specific heuristics that are then translated into the prover. In such situations, it may be useful to have easy mechanisms (ones that do not require expertise) to see why a proof failed or to examine the internal structures of the proof procedure.
I was wondering whether Lean enhances this kind of proof surveyability in contrast to Coq or Isabelle. I get the impression that this may be the case from skimming through A Metaprogramming Framework for Formal Verification.
If I understand proof-surveyability or -traceability correctly, then by definition, a fully detailed proof is "100% traceable", whereas just stating the result (e.g. a lemma) is "0% traceable".
In that case, I don't see why Lean would improve over Coq or Isabelle, or any other tool whose core purpose is to check the correctness of a fully detailed proof. Such tools often provide means to increase automation, which is convenient but arguably reduces traceability, depending on how the additional proof steps are represented. E.g. a Coq-like tactic can increase automation, yet traceability can be "recovered", because the steps the tactic infers can be represented in the same way as explicitly provided steps: as proof rule applications or deduction steps.
The latter part is difficult for SMT-inferred proof steps: SMT solvers can achieve a much higher degree of automation than proof checkers such as Coq, but at the expense of traceability, because their "reasoning" is much more technical and less human-like/deductive.
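To make the "recovered traceability" point concrete, here is a minimal Lean 4 sketch (my own illustration; the theorem names are invented): the tactic proof is more automated, but Lean elaborates it to an ordinary proof term, so the inferred steps remain inspectable.

    -- Fully explicit term proof: every inference is written out.
    theorem explicitProof (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
      And.intro hp hq

    -- Tactic proof: the steps are inferred rather than written...
    theorem automatedProof (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
      constructor <;> assumption

    -- ...but they are recoverable as a plain proof term:
    #print automatedProof  -- prints the elaborated term, e.g. fun p q hp hq => ⟨hp, hq⟩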
As a side remark: this difference between proof checkers and SMT solvers reminds me of the difference between classical and AI-based image recognition. The former is less automated/efficient, but easier to trace/explain.

Can a Turing complete language ever have a CFG?

Does Turing completeness preclude a language from having a CFG? I couldn't find any paper saying that.
I found this:
"TeX can only be parsed by a complete Turing machine (modulo the finite space available), which precludes it from having a BNF."
We are often imprecise with these terms, but a correct answer to your question requires that we be very precise with how we are using terms.
Two computation systems are equivalent if they can simulate each other. A computation system is Turing-equivalent if it is equivalent to Turing machines.
A computation is complete with respect to a computation system if it requires all capabilities of that system to be computed in that system; that is, any change to the computing system which causes it not to be capable of performing at least the same computations as before will cause it not to be able to perform this computation. A computation is Turing-complete if it is complete with respect to Turing machines.
BNF grammars describe context-free languages, and the least capable computing systems able to parse such languages are the pushdown automata. Pushdown automata cannot simulate Turing machines, in that there are computations a Turing machine can perform which a pushdown automaton cannot; therefore, pushdown automata are not Turing-equivalent.
The article says that TeX is a Turing-complete language; that is, deciding the language of valid TeX strings requires all the capabilities of Turing machines. Any system not capable of simulating a Turing machine cannot possibly decide membership in the language of valid TeX strings.
The article is NOT saying that TeX is Turing-equivalent (maybe it is, maybe it isn't; I have no idea). As pointed out in the comment, Turing-completeness of a computation system's representation is completely unrelated to that computation system's Turing-equivalence. Even Turing machines themselves can be represented using strings of a regular language (in fact, extend the interpretation of any language so that otherwise invalid programs compile to the program which halts without doing anything, and suddenly ALL strings are valid, and the language of all strings is certainly regular).

F# typing rules as inference rules

Since F# uses type inference, and type inference uses typing rules, where can the F# typing rules, expressed as inference rules, be found? I suspect they are not published, easily located, or even available as inference rules, which is why I am asking.
Googling F# type rules returns nothing relevant.
Searching the F# spec gives an interpretation in section 14 - Inference Procedures, but it does not give the actual rules.
I know I could extract them by reading the F# compiler source code (Microsoft.FSharp.Compiler.ConstraintSolver), but that would take time to extract and verify. Also, while I am versed in Prolog, which gives me some help in understanding the constraint solver, there is still a learning curve for me to read it.
You are correct in that a formal specification of the F# type inference and type checking would be written using the format of inference rules.
The problem is that any realistic programming language is just too complicated for this kind of formal specification. If you wanted to capture the full complexity of the F# type inference, then you'd be just using mathematical notation to write essentially the same thing as what is in the F# compiler source code.
So, programming language theoreticians usually write typing and inference rules for some interesting subsets of the whole system - to illustrate issues related to some new aspect.
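For a flavor of the format, here is the standard function-application rule from the simply typed lambda calculus - the kind of rule such formalizations are built from, not an F#-specific rule:

    \frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}{\Gamma \vdash e_1\ e_2 : \tau_2}

Read: if, under typing context \Gamma, e_1 has function type \tau_1 \to \tau_2 and e_2 has type \tau_1, then the application e_1 e_2 has type \tau_2.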
The core part of the F# type system is based on the Standard ML language. You can find a reasonable subset of ML formally specified in The Definition of Standard ML (PDF). This explains some interesting things like the "value restriction" rule.
I did some work on formalizing how the F# Data library works, and this includes a simple model of bits of the type checking for provided types, which you can find in the F# Data paper.
The F# Computation Zoo paper (PDF) defines typing rules for how F# computation expressions work. It actually captures typical use cases rather than what the compiler does.
In summary, I don't think it's feasible to expect a formal specification of F# type inference in terms of typing rules, but I don't think any other language really has that either. Formal models of languages are used more for exploring the subtleties of small subsets than for talking about the whole language.

Practical examples of NFA and epsilon NFA

What are some real-world examples of NFAs and epsilon-NFAs, i.e. practical examples other than their use in designing compilers?
Any time anyone uses a regular expression, they're using a finite automaton. And if you don't know a lot about regexes, let me tell you they're incredibly common -- in many ecosystems, it's the first tool most people try to apply when faced with getting structured data out of strings. Understanding automata is one way to understand (and reason about) regexes, and a quite viable one at that if you're mathematically inclined.
Actually, today's regex engines have grown beyond these mathematical concepts and added features that permit doing more than an FA allows. Nevertheless, many regexes don't use these features, or use them in such a limited way that it's feasible to implement them with FAs.
Now, I only spoke of finite automata in general before. An NFA is a specific kind of FA, just like a DFA is, and the two can be converted into one another (technically, any DFA already is an NFA). So while you can just substitute "finite automaton" with "NFA" in the above, be aware that it doesn't have to be an NFA under the hood.
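To show how directly the concept translates into code, here is a small Python sketch of simulating an epsilon-NFA by tracking the set of active states (the subset construction done on the fly); the machine and its state names are invented for illustration and recognize (a|b)*b.

    EPS = None  # label used for epsilon transitions

    # transitions: state -> list of (symbol, next_state)
    transitions = {
        0: [(EPS, 1)],
        1: [("a", 1), ("b", 1), ("b", 2)],  # nondeterministic on "b"
        2: [],
    }
    ACCEPT = {2}

    def eps_closure(states):
        """All states reachable from `states` via epsilon moves alone."""
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for sym, t in transitions[s]:
                if sym is EPS and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def accepts(word):
        current = eps_closure({0})
        for ch in word:
            current = eps_closure(
                {t for s in current for sym, t in transitions[s] if sym == ch}
            )
        return bool(current & ACCEPT)

    assert accepts("aab") and not accepts("aa")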
As explained by delnan, automata are often used in the form of regular expressions. However, they are used for a bit more than just that. Automata are often used to model hardware and software systems and to verify certain properties of them. You can find more information by looking at model checking. One really simplified motivating example can be found in the introduction of Introduction to Automata, Languages, and Computation.
And let's not forget Markov chains, which are basically based on finite automata as well. In combination with the hardware and software modelling that bellpeace mentioned, they are a very powerful tool.
If you are wondering why epsilon-NFAs are considered a variation of NFAs, then I don't think there is a good reason. They are interpreted in the same way, except that a step may no longer take unit time - but that is not really true of an NFA either.
A somewhat obscure but effective example would be the Aho-Corasick algorithm, which uses a finite automaton to search for multiple strings within a text.

Intelligent code-completion? Is there AI to write code by learning?

I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering if such an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a web site after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules on what is a valid, working program and what is not.
They have been totally useless in trying to build mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
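As a toy illustration of that loop (the task, names and parameters here are all invented), a genetic-programming system generates random candidate programs, scores them with the explicit fitness function, and lets the fittest survive and mutate:

    import random

    # Candidate programs: arithmetic expression trees over one variable x.
    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", random.randint(-5, 5)])
        return (random.choice(list(OPS)),
                random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # The explicit, immutable success measure: match f(x) = x^2 + 1.
        return -sum((evaluate(tree, x) - (x * x + 1)) ** 2
                    for x in range(-5, 6))

    def mutate(tree):
        if random.random() < 0.2:
            return random_tree()      # replace a whole subtree
        if not isinstance(tree, tuple):
            return tree
        return (tree[0], mutate(tree[1]), mutate(tree[2]))

    population = [random_tree() for _ in range(200)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(150)]
    print(population[0], fitness(population[0]))

Without such a scoreable objective - as with "make a good website" - the loop has nothing to optimize, which is exactly the limitation described above.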
Another domain that deals with "program learning" is Inductive Logic Programming, although it is more used for automated theorem proving or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur - expect imprecisions and/or errors in what follows. So, in the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation, refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: his work on artificial curiosity is also interesting.) Also relevant for this discussion are Autonomic Systems.
w.r.t. program synthesis, I think it is possible to identify three main branches: stochastic (probabilistic - like the above-mentioned GP), inductive and deductive.
GP is essentially stochastic because it explores the space of candidate programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc., then it tests the programs with the fitness function and lets the fittest survive and reproduce.
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of candidate programs satisfying the specification, which are then tested (the generate-and-test approach - see the toy sketch below), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference - i.e. deciding what to include in the incomplete specification is akin to random sampling.
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated publicly so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, though, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links point directly to PDF files: sorry, I was unable to find abstract pages.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
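Here is the promised toy sketch of generate-and-test inductive synthesis (the grammar of candidate programs and the examples are invented for illustration): enumerate small programs and keep one consistent with the example input/output pairs.

    from itertools import product

    # Incomplete specification: input/output examples of f(x) = 2x + 1.
    examples = [(1, 3), (2, 5), (3, 7)]

    # Candidate program space: linear functions a*x + b, small constants.
    def candidates():
        for a, b in product(range(-3, 4), repeat=2):
            yield (a, b), (lambda x, a=a, b=b: a * x + b)

    # Generate-and-test: first candidate agreeing with all examples wins.
    for (a, b), f in candidates():
        if all(f(x) == y for x, y in examples):
            print(f"synthesized: f(x) = {a}*x + {b}")  # f(x) = 2*x + 1
            break

Note how weak the guarantee is: the program only provably agrees with the examples, not with the user's intent - the "statistical induction" caveat above.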
Deductive program synthesis starts with a (presumed) complete (formal) specification (logic conditions) instead. One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and deductive composition of subroutine libraries.
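The constraint-solving variant can be sketched with Z3 (a template-based toy of my own devising, not a technique from any specific cited system): fix a program template and ask the solver for parameters that provably meet the complete specification for all inputs.

    from z3 import Ints, ForAll, Solver, sat

    a, b, x = Ints("a b x")
    s = Solver()
    # Complete specification: for every integer x, the template a*x + b
    # must equal 2*x + 1.
    s.add(ForAll([x], a * x + b == 2 * x + 1))
    if s.check() == sat:
        m = s.model()
        print(f"synthesized: f(x) = {m[a]}*x + {m[b]}")  # f(x) = 2*x + 1

Unlike the inductive sketch above, the result is correct for all inputs by construction, but only relative to the formal specification we wrote down.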
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow - the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the level of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we would concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity than is introduced by current manual, non-self-improving/adapting techniques). This would also promote a level of agility yet to be demonstrated by current techniques.
