Right now I am studying how the reals are formalized in Coq, but I remember watching Bauer's lectures a while back and hearing him say that HoTT has a different and better way of defining the reals than either Cauchy sequences or Dedekind cuts. I am not sure whether the same applies to Cubical Agda, which is why I am asking. I know that the HoTT book has a section on this, but it is too much for me at the moment.
I'd be interested in seeing code examples in Cubical Agda so I can contrast them with the Coq ones.
Currently, I have a somewhat superficial understanding of how SMT solvers work (the basics of algorithms like E-matching, MBQI, and CVC4/5's inductive reasoning). However, debugging by trial and error is very frustrating.
Is there any guidance on how to debug SMT scripts that make heavy use of quantifiers?
A badly written script often appears to go into an infinite loop, but I cannot tell whether that is my mistake or the solver is simply taking a long time to respond.
SMT solvers tend to hide their internals from users, so it's quite hard to figure out why they are stuck. Is there any way to print the "solving context"?
Or maybe I'm using SMT solvers the wrong way? Should I design my own verification algorithm and only employ SMT solvers for local decisions?
Any help is appreciated!
This is a very subjective question, and largely opinion based. But a couple of general remarks:
Don't program directly in SMTLib. It is not meant for human consumption. Instead, use a higher-level API and script the solver from a language that you're more familiar with. There are bindings available from any number of languages, including C/C++/Java/Python/OCaml/Haskell/Scala, etc. Just doing this will get rid of most of the mundane mistakes you make.
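For illustration, here is a minimal sketch using the z3 Python bindings (one of the APIs mentioned above); the function f and the monotonicity axiom are invented purely to show quantified reasoning through the API rather than raw SMTLib:

```python
# Minimal sketch with the z3 Python bindings; f and the monotonicity axiom
# below are made up purely to illustrate quantified reasoning via the API.
from z3 import Ints, Function, IntSort, ForAll, Implies, Solver

x, y = Ints("x y")
f = Function("f", IntSort(), IntSort())

s = Solver()
# Quantified axiom: f is monotone.
s.add(ForAll([x, y], Implies(x <= y, f(x) <= f(y))))
# Negated goal: look for a counterexample to f(0) <= f(5).
s.add(f(0) > f(5))

print(s.check())   # unsat means the goal follows from the axiom
```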
Turn on the solver's verbose output. You might be able to notice patterns in the log output. Unfortunately this is very solver-specific and can be hard to decipher, but it can also indicate if, for instance, you're stuck in an e-matching loop in the presence of quantifiers.
If there's a custom algorithm for your verification problem (Hoare triples, separation logic, abstract interpretation, ...), then you should first apply that technique and delegate only local goals/sub-lemmas to an SMT solver. Do not expect the SMT solver to do large proofs, or anything that requires actual induction, out of the box.
Try reducing complexity by putting in over-constraints and seeing which ones help. Based on your findings, you might be able to do a case split, for instance if the over-constraints enumerate a reasonably small search space.
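A rough sketch of that over-constrain-and-split idea, again with the z3 Python bindings; the constraint here is just a placeholder for whatever hard query you actually have:

```python
# Hypothetical stand-in for a hard query; restrict the variable to small ranges
# and check each case separately to see which region is the troublesome one.
from z3 import Int, Solver, And

n = Int("n")
s = Solver()
s.add(n * n * n - 6 * n * n + 11 * n - 6 == 0)   # placeholder for the real constraints

for lo, hi in [(-10, 0), (0, 10), (10, 20)]:      # over-constrain, then case-split
    s.push()
    s.add(And(lo <= n, n < hi))
    print(f"n in [{lo},{hi}):", s.check())
    s.pop()
```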
Again, these are very general remarks, and whether they'll apply to your specific problem is anyone's guess. But I'd start with coding against a higher-level API if you aren't already doing so.
By proof surveyability I mean that a human user can "trace" all the details of a proof. Some things are not easily traceable. For instance, an SMT proof is based on specific heuristics whose results are then translated into the prover. In such situations, it may be useful to have easy mechanisms (ones that don't require expertise) to inspect why a proof failed or to examine the internal structures of the proof procedure.
I was wondering if Lean enhances this kind of proof surveyability in contrast to Coq or Isabelle. I get the impression that this may be the case from skimming A Metaprogramming Framework for Formal Verification.
If I understand proof-surveyability or -traceability correctly, then by definition, a fully detailed proof is "100% traceable", whereas just stating the result (e.g. a lemma) is "0% traceable".
In that case, I don't see why Lean would improve over Coq or Isabelle, or any other tool whose core purpose is to check correctness of a fully detailed proof. Such tools often provide means to increase automation, which is convenient, but arguably reduces traceability, depending on how the additional proof steps are represented. E.g. a Coq-like tactic can increase automation, but traceability can be "recovered" because the steps the tactic infers can be represented in the same way the explicitly provided steps are represented: as proof rule applications or deduction steps.
The latter part is difficult for SMT-inferred proof steps: SMT solvers can achieve a much higher degree of automation than proof checkers such as Coq, but at the expense of traceability, because their "reasoning" is much more technical and less human-like/deductive.
As a side remark: this difference between proof checkers and SMT solvers reminds me of the difference between classical and AI-based image recognition. The former is less automated/efficient, but easier to trace/explain.
What are real-world examples of NFAs and epsilon-NFAs, i.e. practical examples other than their use in designing compilers?
Any time anyone uses a regular expression, they're using a finite automaton. And if you don't know a lot about regexes, let me tell you they're incredibly common -- in many ecosystems, they're the first tool most people reach for when faced with getting structured data out of strings. Understanding automata is one way to understand (and reason about) regexes, and a quite viable one at that if you're mathematically inclined.
Actually, today's regex engines have grown beyond these mathematical concepts and added features that permit doing more than an FA allows. Nevertheless, many regexes don't use these features, or use them in such a limited way that it's feasible to implement them with FAs.
Now, I only spoke of finite automata in general before. An NFA is a specific FA, just like a DFA is, and the two can be converted into one another (technically, any DFA already is an NFA). So while you can just substitute "finite automaton" with "NFA" in the above, be aware that it doesn't have to be an NFA under the hood.
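To make the connection concrete, here is a small hand-written epsilon-NFA and the standard simulation (epsilon-closure plus tracking a set of current states); the automaton itself is a made-up example recognizing the same language as the regex a(b|c)*:

```python
# Hand-written epsilon-NFA for the regex a(b|c)*, simulated with epsilon-closures.
EPS = None
# transitions[state][symbol] -> set of next states
transitions = {
    0: {"a": {1}},
    1: {EPS: {2, 4}},
    2: {"b": {3}, "c": {3}},
    3: {EPS: {1}},
    4: {},
}
accepting = {4}

def eps_closure(states):
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in transitions.get(s, {}).get(EPS, set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def accepts(word):
    current = eps_closure({0})
    for ch in word:
        current = eps_closure({t for s in current
                                 for t in transitions.get(s, {}).get(ch, set())})
    return bool(current & accepting)

print(accepts("abcb"))  # True
print(accepts("ba"))    # False
```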
As delnan explained, automata are often used in the form of regular expressions. However, they are used for a bit more than just that. Automata are often used to model hardware and software systems and to verify certain properties of them. You can find more information by looking at model checking. One really, really simplified motivating example can be found in the introduction of Introduction to Automata Theory, Languages, and Computation.
And let's not forget Markov chains, which are basically based on finite automata as well. In combination with the hardware and software modelling that bellpeace mentioned, they make a very powerful tool.
If you are wondering why epsilon-NFAs are considered a variation of NFAs, then I don't think there is a good reason. They are interpreted in the same way, except that a step may no longer take unit time, which is not really true of an NFA either.
A somewhat obscure but effective example would be the Aho-Corasick algorithm, which uses a finite automaton to search for multiple strings within a text.
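A compact, purely illustrative sketch of the idea: build a trie over the patterns, add failure links with a breadth-first pass, then scan the text once while following the automaton:

```python
# Illustrative Aho-Corasick sketch: trie + BFS failure links + single scan.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]          # state 0 is the trie root
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())               # BFS to compute failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]                # inherit matches ending here
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        hits += [(i - len(pat) + 1, pat) for pat in out[state]]
    return hits

print(search("ushers", ["he", "she", "his", "hers"]))
```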
A lot of websites state that packrat parsers can parse input in linear time.
So at first glance they may be faster than the LALR parsers constructed by tools like yacc or bison.
I wanted to know whether the performance of packrat parsers is better or worse than that of LALR parsers when tested on common input (like programming-language source files) rather than on contrived theoretical inputs.
Can anyone explain the main differences between the two approaches?
Thanks!
I'm not an expert at packrat parsing, but you can learn more at Parsing expression grammar on Wikipedia.
I haven't dug into it so I'll assume the linear-time characterization of packrat parsing is correct.
L(AL)R parsers are linear time parsers too. So in theory, neither packrat nor L(AL)R parsers are "faster".
What matters, in practice, of course, is implementation. L(AL)R state transitions can be executed in very few machine instructions ("look token code up in vector, get next state and action") so they can be extremely fast in practice. By "compiling" L(AL)R parsing to machine code, you can end up with lightning fast parsers, as shown by this 1986 Tom Pennello paper on Very Fast LR parsing. (Machines are now 20 years faster than when he wrote the paper!).
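To see why the per-token work is so small, here is a toy, hand-built table-driven LR loop for the trivial grammar S -> 'a' S | 'b'; the tables are written by hand for illustration, not produced by yacc/bison, and a real generated parser would use dense arrays rather than a dictionary:

```python
# Toy illustration of the table-driven LR loop ("look up token, get action"):
# hand-built SLR tables for the trivial grammar  S -> 'a' S | 'b'.
ACTION = {
    (0, "a"): ("shift", 2), (0, "b"): ("shift", 3),
    (1, "$"): ("accept",),
    (2, "a"): ("shift", 2), (2, "b"): ("shift", 3),
    (3, "$"): ("reduce", "S", 1),   # S -> b
    (4, "$"): ("reduce", "S", 2),   # S -> a S
}
GOTO = {(0, "S"): 1, (2, "S"): 4}

def parse(tokens):
    stack, i = [0], 0
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return False                       # syntax error
        if act[0] == "shift":
            stack.append(act[1]); i += 1
        elif act[0] == "reduce":
            _, lhs, rhs_len = act
            del stack[-rhs_len:]               # pop the handle
            stack.append(GOTO[(stack[-1], lhs)])
        else:
            return True                        # accept

print(parse(list("aab") + ["$"]))   # True
print(parse(list("ba") + ["$"]))    # False
```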
If packrat parsers are storing/caching results as they go, they may be linear time, but I'd guess the constant overhead is pretty high, and then L(AL)R parsers in practice would be much faster. The yacc and bison implementations, from what I hear, are pretty good.
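For contrast, here is a minimal packrat-style sketch: a memoising recursive-descent parser for a tiny made-up expression grammar. The memo table keyed on input position is what buys linear time, and it is also where the constant overhead comes from:

```python
# Minimal packrat-style sketch for:  Expr <- Term ('+' Term)* ;  Term <- [0-9]+
# Every (rule, position) result is memoised, one cache lookup/store per call.
from functools import lru_cache

def parse(text):
    @lru_cache(maxsize=None)          # memo table for Term, keyed by position
    def term(pos):
        end = pos
        while end < len(text) and text[end].isdigit():
            end += 1
        return end if end > pos else None

    @lru_cache(maxsize=None)          # memo table for Expr, keyed by position
    def expr(pos):
        pos = term(pos)
        while pos is not None and pos < len(text) and text[pos] == "+":
            nxt = term(pos + 1)
            if nxt is None:
                break
            pos = nxt
        return pos

    return expr(0) == len(text)

print(parse("12+34+5"))   # True
print(parse("12++5"))     # False
```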
If you care about the answer, read the basic technical papers closely; if you really care, then implement one of each and check out the overhead constants. My money is strongly on L(AL)R.
An observation: most language front-ends don't spend most of their time "parsing"; rather, they spend a lot of time in lexical analysis. Optimize that (your bio says you are), and the parser speed won't matter much.
(I used to build LALR parser generators and corresponding parsers. I don't do that anymore; instead I use GLR parsers, which are linear time in practice but handle arbitrary context-free grammars. I give up some performance, but I can [and do, see bio] build dozens of parsers for many languages without a lot of trouble.)
I am the author of LRSTAR, an open-source LR(k) parser generator. Because people are showing interest in it, I have put the product back online here: LRSTAR.
I have studied the speed of LALR parsers and DFA lexers for many years. Tom Pennello's paper is very interesting, but is more of an academic exercise than a real-world solution for compilers. However, if all you want is a pattern recognizer, then it may be the perfect solution for you.
The problem is that real-world compilers usually need to do more than pattern recognition, such as symbol-table look-up for incoming symbols, error recovery, providing an expecting list (statement-completion information), and building an abstract syntax tree while parsing.
In 1989, I compared the parsing speed of LRSTAR parsers to "yacc" parsers and found them to be twice as fast. LRSTAR parsers use the ideas published in the paper "Optimization of Parser Tables for Portable Compilers".
For lexer (lexical analysis) speed I discovered in 2009 that "re2c" was generating the fastest lexers, about twice the speed of those generated by "flex". I was rewriting the LRSTAR lexer generator section at that time and found a way to make lexers that are almost as fast as "re2c" and much smaller. However, I prefer the table-driven lexers that LRSTAR generates, because they are almost as fast and the code compiles much quicker.
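To illustrate the general shape of a table-driven lexer (this is not LRSTAR's generated code, just a made-up toy that recognizes numbers and lowercase identifiers), the inner loop is essentially two table lookups per input character:

```python
# Generic illustration of the table-driven idea (not LRSTAR's generated code):
# map each character to a class, then drive a DFA with table lookups.
# This toy version handles only digits, lowercase letters, and spaces.
CLASSES = {**{c: 0 for c in "0123456789"},
           **{c: 1 for c in "abcdefghijklmnopqrstuvwxyz"},
           " ": 2}
# states: 0 = start, 1 = in a number, 2 = in an identifier; missing entry = stop
TABLE = {(0, 0): 1, (0, 1): 2,
         (1, 0): 1,
         (2, 0): 2, (2, 1): 2}
KIND = {1: "NUM", 2: "IDENT"}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i] == " ":               # whitespace handled outside the DFA
            i += 1
            continue
        state, start = 0, i
        while i < len(text):
            nxt = TABLE.get((state, CLASSES[text[i]]), -1)
            if nxt == -1:                # no transition: end of the current token
                break
            state, i = nxt, i + 1
        tokens.append((KIND[state], text[start:i]))
    return tokens

print(tokenize("abc 123 x9"))   # [('IDENT', 'abc'), ('NUM', '123'), ('IDENT', 'x9')]
```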
BTW, compiler front-ends generated by LRSTAR can process source code at a speed of 2,400,000 lines per second or faster. The lexers generated by LRSTAR can process 30,000,000 tokens per second. The testing computer was a 3.5 GHz machine (from 2010).
[2015/02/15] here is the 1986 Tom Pennello paper on Very Fast LR parsing
http://www.genesishistory.org/content/ProfPapers/VF-LRParsing.pdf
I know this is an old post, but a month or so ago I stumbled on this paper: https://www.mercurylang.org/documentation/papers/packrat.pdf, and happened to see this post today.
The watered-down version of what the paper says: packrat memoisation is a mixed blessing. The best results can be achieved if you have some heuristics about how often this or that rule is going to match. Essentially, it only makes sense to memoise rules that have the following two properties: (1) few elements, (2) very common.
Performance is mostly a matter of language design. For each language, there will be an approach, technology, or parser generator that fits best.
I can't prove it without more thought, but I think that, performance-wise, nothing can beat a hand-written top-down (recursive descent) parser in which the semantics drive the parser and the parser drives the lexer. It would also be among the most versatile and easiest to maintain of the implementations.
I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering if such an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a website after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated-circuit design. Those problems are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is and is not a valid, working program.
They have been totally useless for building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
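To make the role of the fitness function concrete, here is a toy genetic-programming loop that evolves a small arithmetic expression in x to fit sample points of a hidden target (x*x + x); everything here is invented for illustration and bears little resemblance to an industrial GP system:

```python
# Toy GP loop with an explicit fitness function; purely illustrative.
import random

OPS = ["+", "-", "*"]
SAMPLES = [(x, x * x + x) for x in range(-5, 6)]   # the "success measure"

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(-3, 3))])
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def fitness(expr):
    # Negative total error over the samples; 0 means a perfect fit.
    try:
        return -sum(abs(eval(expr, {"x": x}) - y) for x, y in SAMPLES)
    except Exception:
        return float("-inf")

def mutate(expr):
    if random.random() < 0.5:
        return random_expr()                                   # fresh individual
    return f"({expr} {random.choice(OPS)} {random_expr(1)})"   # grow the expression

population = [random_expr() for _ in range(200)]
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == 0:
        break
    survivors = population[:50]                                # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

print(best, fitness(best))
```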
Another domain that deals with "program learning" is Inductive Logic Programming, although it is used more for automated theorem proving or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur, so expect imprecisions and/or errors in what follows. So, in the spirit of stackoverflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: interesting is his work on artificial curiosity.) Also relevant for this discussion are Autonomic Systems.
w.r.t. program synthesis, I think it is possible to identify three main branches: stochastic (probabilistic, like the above-mentioned GP), inductive, and deductive.
GP is essentially stochastic because it explores the space of candidate programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (it then tests the programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to the synthesis of logic programs or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (though there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of candidate programs satisfying the specification and then test them (the generate-and-test approach), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference, i.e. deciding what to include in the incomplete specification is akin to random sampling.
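As a crude illustration of the generate-and-test flavour (the primitives, examples, and enumeration strategy below are all made up for this sketch): enumerate short compositions of primitive functions and keep the ones consistent with the example input/output pairs, which play the role of the incomplete specification:

```python
# Crude generate-and-test sketch of inductive program synthesis.
from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}
EXAMPLES = [(1, 4), (2, 6), (5, 12)]    # incomplete spec: f(x) = 2 * (x + 1)

def synthesize(max_length=3):
    for length in range(1, max_length + 1):
        for names in product(PRIMITIVES, repeat=length):
            def candidate(x, names=names):
                for name in names:          # apply the composition left to right
                    x = PRIMITIVES[name](x)
                return x
            if all(candidate(i) == o for i, o in EXAMPLES):
                return names                # first program consistent with the spec
    return None

print(synthesize())   # ('inc', 'double')
```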
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated publicly so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis instead starts with a (presumably) complete (formal) specification (logical conditions). One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
In my opinion, inductive and deductive synthesis in practice attack the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow; the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the level of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual performance tuning, and so on (possibly with less accidental complexity than is introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.