Certain program analyses can be encoded as chain programs (corresponding to context-free languages), which are a restricted class of Datalog programs. Each rule in a chain program has the following format:
p(X,Y) :- q0(X,Z1), q1(Z1,Z2), q2(Z2,Z3), ..., qn(Zn,Y)
My question is whether Z3 can take advantage of this structure and evaluate chain programs more efficiently than arbitrary Datalog programs.
Z3's finite state datalog engine uses bottom-up evaluation.
It includes an option to perform magic set transformations that can be enabled.
This transformation does wonders in some cases involving chain programs.
You can enable the option by setting ":magic-sets-for-queries" to "true".
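For illustration, here is a minimal sketch (not from the original question; the relation names and the finite domain are mine) of a chain-style rule set run on Z3's Datalog engine through the Python API. The magic-set option name has changed across Z3 releases, so the commented-out line is only indicative; check the options of your build.

from z3 import *

fp = Fixedpoint()
fp.set(engine='datalog')
# fp.set('xform.magic', True)   # magic-set transformation; exact parameter name depends on the Z3 version

s = BitVecSort(3)                          # finite domain required by the Datalog engine
edge = Function('edge', s, s, BoolSort())
path = Function('path', s, s, BoolSort())
a, b, c = Consts('a b c', s)

fp.register_relation(edge, path)
fp.declare_var(a, b, c)
fp.rule(path(a, b), edge(a, b))                 # path(X,Y) :- edge(X,Y)
fp.rule(path(a, c), [edge(a, b), path(b, c)])   # path(X,Y) :- edge(X,Z), path(Z,Y)  (a chain rule)

v1, v2, v3 = (BitVecVal(i, s) for i in (1, 2, 3))
fp.fact(edge(v1, v2))
fp.fact(edge(v2, v3))
print(fp.query(path(v1, v3)))   # sat: 3 is reachable from 1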
Hope this helps
I'm currently studying compilers and am on the topic of "Chomsky Hierarchy and the 4 languages." But it beats me what the practical purpose of all this is.
It'd be great if I could see real-life examples of the 4 grammars: Unrestricted, CSG, CFG, Regular Grammar coming into play.
I found online that the Chomsky hierarchy along with the 4 grammars is used to evaluate proposals within cognitive science, but this goes way over my head. It'd be great if someone could break it down for me, thanks a lot!
There is no practical value. That's the whole point.
Let me try to break that down a bit. It's useful to remember that Chomsky is a linguist --someone who studies human languages-- and that he was writing in the late 1950s when computational theory was not as well-developed as it is today. (To put it mildly.) His goal was to find a mathematical model which could provide some insights into the mechanisms by which human beings generate and understand sentences, and he took as his starting point a particular simple model of sentence generation.
In this model, a grammar is a function F, which transforms an arbitrary sequence of symbols from some alphabet into another sequence of symbols from the same alphabet. F is defined by a finite set of pairs (called productions) α → β. We then say that F(ω) = ζ if the definition of F contains some pair α → β such that α is a substring of ω and ζ is the result of substituting a single instance of α in ω with β.
That's not very interesting in and of itself; we make that into a full language by starting with some designated starting sequence, normally represented as the single symbol S, and repeatedly applying F as many times as necessary. (In all interesting grammars, the set so constructed is infinite, so it cannot actually be constructed. But we can imagine proceeding from the starting point until we find the sentence we wanted to generate.)
The problem with this model is that it can be used to describe an arbitrary Turing Machine. Or, if you like, an arbitrary computer program, although the equivalence is easier to see with a Turing Machine. In other words, it is at least theoretically possible to construct a finite grammar which will recognise strings consisting of the description of a Turing Machine (i.e., a program written in some programming language) followed by an input and an output if and only if the Turing Machine applied to the input would produce the output. In other words, there exists (in the mathematical sense) a grammar of this form which is computationally equivalent to a general purpose computer.
Unfortunately, that's not actually very useful if our goal is to understand sentences, because there is actually no algorithm for running computers backwards. The best we can do as a general solution is to enumerate all possible inputs and run the program on each of them until we find the output we hoped for. And that doesn't actually work because there is no limit to the amount of time the program might take to produce an output and no way to even know if the program will eventually come to an end. (This is called the "halting problem".) So we might get stuck on some possible input, and we'll never know if some other input might have produced the desired output.
As a result, we cannot tell whether the provided input was "grammatical", that is, whether it conformed to the grammar provided. And that's not just the case with the particular grammar we built to emulate Turing Machines. It means that we have no confidence that we can recognise sentences from any arbitrary grammar, and even if we stumble upon an answer, we have no way to limit how much time it might take to get there.
Clearly, this is not how human beings understand each other. So if it is to serve a practical purpose, we must restrict the possible grammars in some way to make them computationally feasible.
On the other end of the spectrum, a lot was known about finite-state machines. A finite-state machine is a Turing Machine without a tape; that is to say, it is simply a finite collection of states. In each state, the machine reads a single input symbol and uses it to decide what the next state will be. It turns out that finite-state machines can be modelled using a grammar (as above) restricted to very simple productions, each of which is either of the form A → a B or A → a, where a is a symbol from the "terminal alphabet" (that is, a word) and A and B are single grammatical symbols. These grammars are called "regular grammars" and they are computationally equivalent to what mathematicians call "regular expressions" (which are a small subset of what is recognised by "regex" libraries, but that's a whole other discussion).
Regular grammars are easy to parse. All that is needed is to trace through the state machine, so it can be done without backtracking in time proportional to the length of the input. But regular grammars are far too weak to be able to represent human language, or even most computer languages. As a simple example, algebraic expressions with parentheses cannot be recognised with a regular grammar (or with a finite-state machine) because there is no way to count the parenthesis depth; the finite-state machine has no memory at all (other than knowing which state it is in, and there are only a finite number of states).
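To make the "trace through the state machine" point concrete, here is a minimal Python sketch (the grammar and the example language are mine, purely for illustration): a regular grammar with productions of the form A → aB and A → a is just a transition table, and recognition is a single left-to-right pass over the input.

def accepts(word, transitions, start, accepting):
    # Simulate the finite-state machine induced by a regular grammar:
    # one state change per input symbol, no backtracking.
    state = start
    for symbol in word:
        if (state, symbol) not in transitions:   # no production applies: reject
            return False
        state = transitions[(state, symbol)]
    return state in accepting

# Grammar E -> 0E | 1O | 0,  O -> 0O | 1E | 1   (strings with an even number of 1s)
transitions = {('E', '0'): 'E', ('E', '1'): 'O',
               ('O', '0'): 'O', ('O', '1'): 'E'}
print(accepts('1001', transitions, start='E', accepting={'E'}))  # True  (two 1s)
print(accepts('1000', transitions, start='E', accepting={'E'}))  # False (one 1)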
So unrestricted grammars are too powerful to parse and regular grammars are too weak to be useful. (Useful for complex parsing problems, that is. There are certain applications for regular expressions, but parsing complete computer programs is not one of them.)
The next step, then, was to try to find a restriction on grammars which was still powerful enough to represent human language without being so powerful that parsing became impossible.
That, finally, is the origin of Chomsky's hierarchy. Between the two extremes described above (type 0 and type 3 grammars), Chomsky proposed two possible intermediate restrictions -- type 1 and type 2 grammars -- and proved a number of important properties about each of them.
While this work turned out to be fundamental in the development of formal language theory, it cannot really be said to have answered the question Chomsky started with. Type 2 grammars -- context-free grammars -- are indeed computationally tractable; they can be parsed with simple algorithms in polynomial time, and can represent a large number of useful languages. But they are still too weak to represent human language. In particular, context-free grammars cannot represent a language as simple as "all strings consisting of the same substring repeated twice" (the copy language). Type 1 grammars -- context-sensitive grammars -- can probably represent any useful language, and are not quite as unruly as unrestricted grammars, but they are still too powerful to parse. (Since the derivation steps in a context-sensitive grammar never get shorter, it is possible to enumerate all possible derivations from a starting point in order by length, which means that you can decide whether a sentence is generated by the grammar without running into the halting problem. But that's as good as it gets; that procedure takes exponential time and is not remotely feasible for non-trivial inputs.)
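As a concrete illustration of "parsed with simple algorithms in polynomial time", here is a small sketch of the classic CYK recognizer for a grammar in Chomsky normal form; the nested-parentheses grammar below is my own example, not anything from the question. It runs in cubic time in the length of the input.

def cyk(word, terminal_rules, binary_rules, start='S'):
    # terminal_rules: set of (A, a)    meaning A -> a
    # binary_rules:   set of (A, B, C) meaning A -> B C
    n = len(word)
    # table[i][j] holds the nonterminals deriving word[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][0] = {A for (A, t) in terminal_rules if t == a}
    for span in range(2, n + 1):             # substring length
        for i in range(n - span + 1):        # start position
            for split in range(1, span):     # split point
                left, right = table[i][split - 1], table[i + split][span - split - 1]
                for (A, B, C) in binary_rules:
                    if B in left and C in right:
                        table[i][span - 1].add(A)
    return start in table[0][n - 1]

# Nested parentheses in CNF: S -> L P | L R,  P -> S R,  L -> '(',  R -> ')'
terminal_rules = {('L', '('), ('R', ')')}
binary_rules = {('S', 'L', 'P'), ('P', 'S', 'R'), ('S', 'L', 'R')}
print(cyk('(())', terminal_rules, binary_rules))  # True
print(cyk('(()', terminal_rules, binary_rules))   # False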
In the six decades since Chomsky published his seminal papers, a lot of work has been done to try to find useful intermediate restrictions between type 1 and type 2 grammars. And there has been a lot of useful study into algorithms for parsing context-free languages, which is of enormous utility in building compilers. All of this builds on the crucial work done by Chomsky and the other computational theorists whose work he built on -- Markov, Turing, Church and Kleene, just to name a few worthy of study. But Chomsky's original project remains unsolved.
So if your goal is to build a simple parser for a programming language, the Chomsky hierarchy is probably just an interesting footnote. But if you are interested in the academic study of formal language theory, there are still lots of interesting unsolved problems to work on.
While reading "Extending Sledgehammer with SMT Solvers" I read the following:
In the original Sledgehammer architecture, the available lemmas were rewritten to clause normal form using a naive application of distributive laws before the relevance filter was invoked. To avoid clausifying thousands of lemmas on each invocation, the clauses were kept in a cache. This design was technically incompatible with the (cache-unaware) smt method, and it was already unsatisfactory for ATPs, which include custom polynomial-time clausifiers.
My understanding of SMT so far is as follows: SMT solvers don't work over clauses. Instead, they try to build a model for the quantifier-free part of a problem. The search is refined by instantiating quantifiers according to some set of active terms. Thus, indeed no clausal form is needed for SMT solvers.
We rewrote the relevance filter so that it operates on arbitrary HOL formulas, trying to simulate the old behavior. To mimic the penalty associated with Skolem functions in the clause-based code, we keep track of polarities and detect quantifiers that give rise to Skolem functions.
What's the penalty associated with Skolem functions? I could understand they are not good for SMTs, but here it seems that they are bad for ATPs too...
First, SMT solvers do work over clauses, and there is definitely some (non-naive) normalization internally (e.g., miniscoping). But you do not need to do the normalization before calling the SMT solver (especially since an external clausification will tend to be more naive and generate a larger number of clauses).
Anyway, Section 6.6.7 explains why skolemization was done on the Isabelle side. To summarize: it is not possible to introduce polymorphic constants in a proof in Isabelle; hence skolemization must be done before starting the proof.
It seems likely that, when the paper was written, not changing the filtering led to worse performance and, hence, the penalty was added. However, I could not find the relevant code simulating clausification in current Sledgehammer, so I don't believe that this happens anymore.
In general, first-order logic is undecidable. However, some fragments of first-order logic, such as monadic logic, the Bernays-Schönfinkel-Ramsey (BSR) fragment, and the separated fragment, are decidable.
There exist SAT/SMT solvers such as Z3.
Is there any tool/language which checks the satisfiability of FOL formulas?
SMT solvers, like Z3, can attempt to check the satisfiability of FOL formulas (even second-order logic!), though performance might not be great (depending on what the problem looks like).
There are also dedicated FOL provers (aka TPTP solvers), like Vampire, E, iProver, etc. See more here: https://en.wikipedia.org/wiki/Automated_theorem_proving
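As a small hedged example of the first point (the formulas below are mine, purely for illustration), Z3's Python API will happily take quantified first-order formulas; since FOL is undecidable in general, 'unknown' is always a possible answer.

from z3 import *

f = Function('f', IntSort(), IntSort())
x = Int('x')

s = Solver()
s.add(ForAll(x, f(x) > x))    # f(x) is strictly greater than x for every x
s.add(f(f(0)) == 0)           # ...contradicting f(f(0)) > f(0) > 0
print(s.check())              # unsat

s2 = Solver()
s2.add(ForAll(x, f(f(x)) == x), f(0) == 1)   # f is an involution swapping 0 and 1
print(s2.check())             # expected sat (possibly 'unknown' on some configurations)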
I have recently had the need to create an ANTLR language grammar for the purpose of a transpiler (converting one scripting language to another). It occurs to me that Google Translate does a pretty good job translating natural language. We have all manner of recurrent neural network models, LSTMs, and GPT-2 generating grammatically correct text.
Question: Is there a model sufficient to train on grammar/code example combinations for the purpose of then outputting a new grammar file given an arbitrary example source-code?
I doubt any such model exists.
The main issue is that languages are generated from their grammars, and it is next to impossible to convert back, due to the infinite number of parse trees (combinations) available for a given set of source programs.
So in your case, say you train on Python code (1,000 sample programs): the target grammar for all of the training examples is the same, so the model will always generate that same grammar irrespective of the example source code.
If you use training samples from a number of languages, the model still can't generate the grammar, as the space of possible grammars is infinite.
Your example of Google Translate works for real-life translation because small errors are acceptable, but those models don't rely on generating the underlying grammar for each language. There are some tools that can translate programming languages, but they don't generate the grammar; they work from a grammar that is already given.
Update
How to learn grammar from code.
After comparing this to some NLP concepts, I have a list of issues that may arise and ways to counter them.
Dealing with variable names, coding structures and tokens.
For understanding the grammar, we'll have to break down the code to its bare minimum form. This means understanding what each and every term in the code means. Have a look at this example:
The already simple expression is reduced to the parse tree. We can see that the tree breaks down the expression and tags each number as a factor. This is really important to get rid of the human element of the code (such as variable names etc.) and dive into the actual grammar. In NLP this concept is known as part-of-speech tagging. You'll have to develop your own method to do the tagging, but it's easy given that you know the grammar for the language.
Understanding the relations
For this, you can tokenize the reduced code and train a model based on the output you are looking for. In case you want to write code, make use of an n-gram model or an LSTM, like this example. The model will learn the grammar, but extracting it is not a simple task. You'll have to run separate code to try and extract all the possible relations learned by the model.
Example
Code snippet
// Sample code
int a = 1 + 2;
cout << a;
Tags
# Sample tags and tokens
int a = 1 + 2 ;
[int] [variable] [operator] [factor] [expr] [factor] [end]
Leaving the operator, expr and keyword tokens as they are shouldn't matter if there is enough data present, but they will become a part of the grammar.
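Here is a toy Python sketch of the tagging step for the snippet above; the tag names simply mirror the example, and a real implementation would reuse the language's own lexer rather than this hand-written mapping.

import re

KEYWORDS = {'int', 'cout'}

def tag(token):
    # Map a token to an illustrative tag, matching the example above.
    if token in KEYWORDS:
        return f'[{token}]'
    if re.fullmatch(r'\d+', token):
        return '[factor]'
    if token == '=':
        return '[operator]'
    if token == '+':
        return '[expr]'
    if token == ';':
        return '[end]'
    if re.fullmatch(r'[A-Za-z_]\w*', token):
        return '[variable]'
    return '[unknown]'

tokens = 'int a = 1 + 2 ;'.split()
print(' '.join(tag(t) for t in tokens))
# [int] [variable] [operator] [factor] [expr] [factor] [end]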
This is a sample to help you understand my idea. You can improve on it by having a deeper look at the theory of computation, the workings of automata, and the different grammar classes.
What you're describing is 'just' learning the structure of context-free grammars.
I'm not sure if this approach will actually work for your case, but it's a long-standing problem in NLP: grammar induction for context-free grammars. An example introduction to tackling this problem using statistical learning methods can be found in Charniak's Statistical Language Learning.
Note that what I described is about CFGs in general, but you might want to check induction for LL grammars, because parser generators mostly use these types of grammars.
I know nothing about ANTLR, but there are pretty good examples of translating natural language e.g. into valid SQL requests: http://nlpprogress.com/english/semantic_parsing.html#sql-parsing.
My program, a bounded synthesizer of reactive finite-state systems, produces SMT queries to annotate a product automaton of the (uninterpreted) system and a specification. Essentially it is model checking with uninterpreted functions. If the annotation exists, then the model found by Z3 satisfies the spec. The queries contain:
a datatype (to encode the states of the system and of the specification automaton)
>= and > (to specify a ranking function over the states of the automaton system*spec, which is used to search for lassos with bad states; in other words, an ordering of the states of that automaton)
uninterpreted functions with Boolean domain and range
all clauses are Horn clauses
An example is https://dl.dropboxusercontent.com/u/444947/posts/full_arbiter2.smt2
('forall' is used to encode "don't care" inputs to functions)
Currently the queries use the strictly-greater operator > from integer arithmetic (that is, the ranking function has Int range).
Question: is it worth developing a custom theory solver in Z3 for such queries? It could exploit a DFS-based search for lassos, which might be faster than the integer theory solver (or the diff-neg tactic).
Or does Z3 already handle this efficiently? (Efficiently means "comparable to a graph-based search for lassos".)
Arithmetic is not the bottleneck of your benchmark.
We can check that by using
valgrind --tool=callgrind z3 full_arbiter2.smt2
kcachegrind
Valgrind and kcachegrind are available in most Linux distros.
So, I don't think you will get a significant performance improvement if you implement a solver for order theory.
One bottleneck is the datatype theory. You may get a performance boost if you encode the types Q and T using bit-vectors. Another bottleneck is quantifier reasoning. Have you tried expanding the quantifiers before invoking Z3?
In Z3, the qe (quantifier elimination) tactic will essentially expand Boolean quantifiers.
I got a small speedup by replacing
(check-sat)
with
(check-sat-using (then qe smt))
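For readers using the Python API, here is a hedged sketch of the same two suggestions; the formula and the state names are mine, purely for illustration, and are not taken from the benchmark.

from z3 import *

# Mirror (check-sat-using (then qe smt)): run quantifier elimination before the SMT core.
f = Function('f', BoolSort(), BoolSort())
b = Bool('b')
s = Then('qe', 'smt').solver()
s.add(ForAll(b, f(b)), Not(f(True)))   # unsatisfiable once the Boolean quantifier is expanded
print(s.check())                       # unsat

# Datatype versus bit-vector encoding of a four-state sort (names are illustrative):
Q, (q0, q1, q2, q3) = EnumSort('Q', ['q0', 'q1', 'q2', 'q3'])
st_dt = Const('st_dt', Q)              # datatype-theory encoding
st_bv = Const('st_bv', BitVecSort(2))  # bit-vector encoding: states are values 0..3;
                                       # add a range constraint if the state count is not a power of two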