Scheme, ad hoc scanner, and recursive descent parser - parsing

Construct an interpreter for a simple language for expressions involving fractions. Your language need only support addition and subtraction of fractions.
Example: 1/5 - (2/11 + 5/3)
Adapt the ad hoc scanner and recursive descent parser to suit your language. The interpreter should generate informative error messages in case of any syntax error. No semantic analysis is required.
It's required to construct an interpreter using Scheme, an ad hoc scanner, and LL parsing. I have tried searching for this and couldn't find any resources on how to create such a program; can anybody help me with the solution?
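Here is a minimal sketch of what such an ad hoc scanner and recursive descent parser could look like, written in Python for illustration (the assignment presumably wants Scheme); the grammar and all names are my own assumptions, and error handling is deliberately simple:

```python
from fractions import Fraction

# Assumed grammar (matching the assignment's example):
#   expr -> term (('+' | '-') term)*
#   term -> NUMBER '/' NUMBER | '(' expr ')'

def scan(text):
    """Ad hoc scanner: yields (kind, value) tokens, ending with EOF."""
    i = 0
    while i < len(text):
        ch = text[i]
        if ch.isspace():
            i += 1
        elif ch.isdigit():
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1
            yield ('NUMBER', int(text[i:j]))
            i = j
        elif ch in '+-/()':
            yield (ch, ch)
            i += 1
        else:
            raise SyntaxError(f"unexpected character {ch!r} at position {i}")
    yield ('EOF', None)

class Parser:
    """Recursive descent parser that evaluates as it parses."""

    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos][0]

    def expect(self, kind):
        found, value = self.tokens[self.pos]
        if found != kind:
            raise SyntaxError(f"expected {kind}, found {found}")
        self.pos += 1
        return value

    def parse(self):
        value = self.expr()
        self.expect('EOF')            # reject trailing garbage
        return value

    def expr(self):                   # expr -> term (('+' | '-') term)*
        value = self.term()
        while self.peek() in ('+', '-'):
            op = self.expect(self.peek())
            value = value + self.term() if op == '+' else value - self.term()
        return value

    def term(self):                   # term -> NUMBER '/' NUMBER | '(' expr ')'
        if self.peek() == '(':
            self.expect('(')
            value = self.expr()
            self.expect(')')
            return value
        num = self.expect('NUMBER')
        self.expect('/')
        return Fraction(num, self.expect('NUMBER'))

print(Parser(scan("1/5 - (2/11 + 5/3)")).parse())   # -272/165
```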

Related

Can a Turing complete language ever have a CFG?

Does Turing completeness preclude a language from having a CFG? I couldn't find any paper saying that.
I found this:
"TeX can only be parsed by a complete Turing machine (modulo the finite space available), which precludes it from having a BNF."
We are often imprecise with these terms, but a correct answer to your question requires that we be very precise about how we use them.
Two computation systems are equivalent if they can simulate each other. A computation system is Turing-equivalent if it is equivalent to Turing machines.
A computation is complete with respect to a computation system if computing it in that system requires all the capabilities of that system; that is, any change to the computing system that leaves it unable to perform at least the same computations as before also leaves it unable to perform this computation. A computation is Turing-complete if it is complete with respect to Turing machines.
BNF grammars describe context-free languages, and the least capable computing system able to parse such languages is the pushdown automaton. Pushdown automata cannot simulate Turing machines: there are computations a Turing machine can perform that a pushdown automaton cannot. Therefore, pushdown automata are not Turing-equivalent.
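As a concrete illustration (my example, not the answerer's): balanced parentheses form a context-free language, and the stack (here collapsed to a single counter) is precisely the memory a pushdown automaton has that a finite automaton lacks:

```python
def balanced(s: str) -> bool:
    """Recognize the context-free language of balanced parentheses.

    A finite automaton cannot track unbounded nesting depth; the counter
    below plays the role of a pushdown automaton's stack.
    """
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1                 # push
        elif ch == ')':
            depth -= 1                 # pop
            if depth < 0:              # closing with an empty stack
                return False
        else:
            return False               # alphabet is just '(' and ')'
    return depth == 0                  # accept only if the stack is empty

assert balanced("(()())") and not balanced("(()")
```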
The article says that TeX is a Turing-complete language; that is, deciding the language of valid TeX strings requires all the capabilities of Turing machines. Any system not capable of simulating a Turing machine cannot possibly decide membership in the language of valid TeX strings.
The article is NOT saying that TeX is Turing-equivalent (maybe it is, maybe it isn't; I have no idea). As pointed out in the comment, Turing-completeness of a computation system's representation is completely unrelated to that computation system's Turing-equivalence. Even Turing machines themselves can be represented using strings of a regular language. (In fact, extend the interpretation of any language so that otherwise invalid programs compile to the program that halts without doing anything; suddenly ALL strings are valid, and the language of all strings is certainly regular.)

The construction of semantic analyser

In the process of learning about compilers I wrote a simple tokenizer and parser (recursive descent). The parser constructs an abstract syntax tree. Now I am moving on to semantic analysis, but I have a few questions about the construction of a semantic analyser. Should I analyse the code semantically via recursive calls over the generated abstract syntax tree, or should I construct another tree (using a visitor pattern, for example) for the purpose of semantic analysis? I found a document online which says that I should analyse the code semantically during parsing, but that does not comply with the single responsibility principle and makes the whole parser more error-prone. Or should I make semantic analysis part of an intermediate representation generator? Maybe I am missing something; I would be grateful if someone could clarify this for me.
You are learning. Keep it simple; build a tree and run the semantic analyzer over the tree when parsing is completed.
If you decide (someday) to build a fast compiler, you might consider implementing some of that semantic analysis as you parse. This makes building both the parser and the semantic analyzer harder because they are now interacting (tangled is a better word; read about why most C++ parsers are implemented with the so-called "lexer hack" if you want to get ill). You'll also find that sometimes the information you need isn't available yet ("is the target of that goto defined so far?"), so you usually can't do a complete job as the parse runs, or you may have to delay some semantic processing for later in the parse, and that's tricky to arrange. I don't recommend adding this kind of complexity early in your compiler education.
Start simple and focus on learning what semantic analysis is about.
You can optimize later when it is clear what you have to optimize and why you should do it.
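For instance, a hedged Python sketch of that approach: the parser is done, and a separate pass walks the finished tree (node types and the use-before-definition check are invented for illustration):

```python
from dataclasses import dataclass

# Toy AST node types (invented for illustration).
@dataclass
class Var:
    name: str

@dataclass
class Assign:
    name: str
    value: object          # an expression node

@dataclass
class Block:
    statements: list

class SemanticAnalyzer:
    """Separate pass over a finished AST: flags use-before-definition."""

    def __init__(self):
        self.defined = set()
        self.errors = []

    def visit(self, node):
        # Visitor-style dispatch on the node's class name.
        getattr(self, f"visit_{type(node).__name__}")(node)

    def visit_Block(self, node):
        for stmt in node.statements:
            self.visit(stmt)

    def visit_Assign(self, node):
        self.visit(node.value)          # check the RHS first
        self.defined.add(node.name)

    def visit_Var(self, node):
        if node.name not in self.defined:
            self.errors.append(f"'{node.name}' used before definition")

# x = y; z = x   -- 'y' is never defined.
tree = Block([Assign("x", Var("y")), Assign("z", Var("x"))])
analyzer = SemanticAnalyzer()
analyzer.visit(tree)
print(analyzer.errors)   # ["'y' used before definition"]
```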

How to convert from Stanford Universal Dependencies to Phrase Grammar?

In my application I am using Stanford CoreNLP to parse English text into a graph data structure (Universal Dependencies).
After some modifications of the graph I need to generate natural-language output, for which I am using SimpleNLG: https://github.com/simplenlg/simplenlg
However, SimpleNLG works with phrase structure grammar.
Therefore, in order to use SimpleNLG successfully for natural language generation, I need to convert from Universal Dependencies into phrase structure grammar.
What is the easiest way of achieving this?
So far I have only come across this article on this topic:
http://delivery.acm.org/10.1145/1080000/1072147/p14-xia.pdf?ip=86.52.161.138&id=1072147&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&CFID=642131329&CFTOKEN=21335001&acm=1468166339_844b802736ce07dab89064efb7f8ede9
I am hoping that someone might have some more practical code examples to share on this issue.
Phrase-structure trees contain more information than dependency trees and therefore you cannot deterministically convert dependency trees to phrase-structure trees.
But if you are using CoreNLP to parse the sentences, take a look at the parse annotator. Unlike the dependency parser, this parser also outputs phrase-structure trees, so you can use this annotator to directly parse your sentences to phrase-structure trees.
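For example, if you run the CoreNLP server locally, requesting the parse annotator could look roughly like this Python sketch (port, properties, and JSON field names follow the CoreNLP server documentation as I recall it; verify against the current docs):

```python
import json
import requests

# Assumes a CoreNLP server is already running, e.g.:
#   java -mx4g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
props = {"annotators": "tokenize,ssplit,pos,parse", "outputFormat": "json"}

resp = requests.post(
    "http://localhost:9000/",
    params={"properties": json.dumps(props)},
    data="The quick brown fox jumps over the lazy dog.".encode("utf-8"),
)
resp.raise_for_status()

# With the parse annotator enabled, each sentence should carry a bracketed
# phrase-structure tree under the "parse" key.
for sentence in resp.json()["sentences"]:
    print(sentence["parse"])
```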

F# typing rules as inference rules

Since F# uses type inference, and type inference uses type rules, where can the F# type rules, expressed as inference rules, be found? I suspect they are not published, easily located, or even available as inference rules, which is why I am asking.
Googling F# type rules returns nothing relevant.
Searching the F# spec gives an interpretation in section 14 - Inference Procedures but does not give the actual rules.
I know I could extract them by reading the F# compiler source code (Microsoft.FSharp.Compiler.ConstraintSolver), but that could take time to extract and verify. Also, while I am versed in Prolog, which gives me some help in understanding the constraint solver, there is still a learning curve for me to read it.
You are correct in that a formal specification of the F# type inference and type checking would be written using the format of inference rules.
The problem is that any realistic programming language is just too complicated for this kind of formal specification. If you wanted to capture the full complexity of the F# type inference, then you'd be just using mathematical notation to write essentially the same thing as what is in the F# compiler source code.
So, programming language theoreticians usually write typing and inference rules for some interesting subsets of the whole system - to illustrate issues related to some new aspect.
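To give a flavor of the notation, here is the standard application rule of a simply typed calculus in LaTeX (a textbook rule, not quoted from any F# document):

```latex
% If, under assumptions \Gamma, e1 is a function from \tau_1 to \tau_2
% and e2 has type \tau_1, then the application e1 e2 has type \tau_2.
\[
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
      \qquad
      \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1 \; e_2 : \tau_2}
\]
```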
The core part of the F# type system is based on the Standard ML language. You can find a reasonable subset of ML formally specified in The Definition of Standard ML (PDF). This explains some interesting things like the "value restriction" rule.
I did some work on formalizing how the F# Data library works, and this includes a simple model of bits of the type checking for provided types, which you can find in the F# Data paper.
The F# Computation Zoo paper (PDF) defines typing rules for how F# computation expressions work. This actually captures typical use cases rather than what the compiler does.
In summary, I don't think it's feasible to expect a formal specification of F# type inference in terms of typing rules, but I don't think any other language really has that either. Formal models of languages are used more for exploring subtleties of little subsets than for talking about the whole language.

Packrat parsing vs. LALR parsing

A lot of websites state that packrat parsers can parse input in linear time.
So at first glance they may be faster than LALR parsers constructed by tools such as yacc or bison.
I wanted to know whether the performance of packrat parsers is better or worse than the performance of LALR parsers when tested with common input (like programming language source files) rather than with theoretical inputs.
Can anyone explain the main differences between the two approaches?
Thanks!
I'm not an expert at packrat parsing, but you can learn more at Parsing expression grammar on Wikipedia.
I haven't dug into it so I'll assume the linear-time characterization of packrat parsing is correct.
L(AL)R parsers are linear time parsers too. So in theory, neither packrat nor L(AL)R parsers are "faster".
What matters in practice, of course, is the implementation. L(AL)R state transitions can be executed in very few machine instructions ("look token code up in vector, get next state and action"), so they can be extremely fast in practice. By "compiling" L(AL)R parsing to machine code, you can end up with lightning-fast parsers, as shown by this 1986 Tom Pennello paper on Very Fast LR parsing. (Machines are now 20 years faster than when he wrote the paper!)
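To make "very few machine instructions" concrete, here is a schematic Python rendering of that table-driven inner loop (tables, rule encoding, and names are hypothetical):

```python
# Schematic LR parser driver (the tables are hypothetical placeholders).
# ACTION[state][token] is ('shift', next_state), ('reduce', rule_no), or ('accept',)
# GOTO[state][lhs] gives the state to enter after reducing to nonterminal lhs.
# rules[rule_no] is (lhs, rhs_length).

def lr_parse(tokens, ACTION, GOTO, rules):
    stack = [0]                              # stack of states
    stream = iter(tokens)
    lookahead = next(stream)
    while True:
        action = ACTION[stack[-1]][lookahead]
        if action[0] == 'shift':
            stack.append(action[1])          # push next state
            lookahead = next(stream)
        elif action[0] == 'reduce':
            lhs, rhs_len = rules[action[1]]
            del stack[len(stack) - rhs_len:]  # pop one state per RHS symbol
            stack.append(GOTO[stack[-1]][lhs])
        else:                                # 'accept'
            return True
```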
If packrat parsers are storing/caching results as they go, they may be linear time, but I'd guess the constant overhead would be pretty high, and then L(AL)R parsers in practice would be much faster. The YACC and Bison implementations, from what I hear, are pretty good.
If you care about the answer, read the basic technical papers closely; if you really care, then implement one of each and check out the overhead constants. My money is strongly on L(AL)R.
An observation: most language front-ends don't spend most of their time "parsing"; rather, they spend a lot of time in lexical analysis. Optimize that (your bio says you are), and the parser speed won't matter much.
(I used to build LALR parser generators and corresponding parsers. I don't do that anymore; instead I use GLR parsers, which are linear time in practice but handle arbitrary context-free grammars. I give up some performance, but I can [and do, see bio] build dozens of parsers for many languages without a lot of trouble.)
I am the author of LRSTAR, an open-source LR(k) parser generator. Because people are showing interest in it, I have put the product back online here: LRSTAR.
I have studied the speed of LALR parsers and DFA lexers for many years. Tom Pennello's paper is very interesting, but it is more of an academic exercise than a real-world solution for compilers. However, if all you want is a pattern recognizer, then it may be the perfect solution for you.
The problem is that real-world compilers usually need to do more than pattern recognition, such as symbol-table look-up for incoming symbols, error recovery, providing an expecting list (statement-completion information), and building an abstract syntax tree while parsing.
In 1989, I compared the parsing speed of LRSTAR parsers to "yacc" and found that they are 2 times the speed of "yacc" parsers. LRSTAR parsers use the ideas published in the paper: "Optimization of Parser Tables for Portable Compilers".
For lexer (lexical analysis) speed I discovered in 2009 that "re2c" was generating the fastest lexers, about twice the speed of those generated by "flex". I was rewriting the LRSTAR lexer generator section at that time and found a way to make lexers that are almost as fast as "re2c" and much smaller. However, I prefer the table-driven lexers that LRSTAR generates, because they are almost as fast and the code compiles much quicker.
BTW, compiler front-ends generated by LRSTAR can process source code at a speed of 2,400,000 lines per second or faster. The lexers generated by LRSTAR can process 30,000,000 tokens per second. The testing computer was a 3.5 GHz machine (from 2010).
[2015/02/15] Here is the 1986 Tom Pennello paper on Very Fast LR parsing:
http://www.genesishistory.org/content/ProfPapers/VF-LRParsing.pdf
I know this is an old post, but a month or so ago I stumbled on this paper: https://www.mercurylang.org/documentation/papers/packrat.pdf and happened to see this post today.
The watered-down version of what the paper says: packrat memoisation is a mixed blessing. The best results can be achieved if you have some heuristics about how often a given rule is going to match. Essentially, it only makes sense to memoise rules that have the following two properties: (1) few elements, (2) very common.
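For a feel of what gets memoised, here is a toy packrat-style recognizer in Python (grammar and names invented for illustration); caching results by input position is exactly the memoisation the paper weighs:

```python
from functools import lru_cache

TEXT = "1+1+1"   # toy input; a real parser would not use a global

# PEG:  expr <- term '+' expr / term      term <- [0-9]
# Memoising each (rule, position) pair is the packrat trick: without the
# cache, the ordered choice in expr can re-parse the same suffix repeatedly.

@lru_cache(maxsize=None)
def term(pos):
    return pos + 1 if pos < len(TEXT) and TEXT[pos].isdigit() else None

@lru_cache(maxsize=None)
def expr(pos):
    end = term(pos)
    if end is not None and end < len(TEXT) and TEXT[end] == '+':
        alt = expr(end + 1)
        if alt is not None:         # first alternative succeeded
            return alt
    return term(pos)                # fall back to the second alternative

print(expr(0) == len(TEXT))         # True: the whole input matches
```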
Performance is mostly a matter of language design. For each language, there will be an approach, technology, or parser generator that is the best fit.
I can't prove it without more thought, but I think that, performance-wise, nothing can beat a top-down recursive descent parser in which the semantics drive the parser and the parser drives the lexer. It would also be among the most versatile and easiest to maintain of the implementations.

Resources