Should I learn Lex/YACC or Flex/Bison for parsing?

I took a compiler course last year. Now I want to learn how to write my own lexical analyzer and parser. Searching the net, I came across these two tools, which I gather are under the GNU license:
LEX
YACC
But there are also alternatives to these, known as Flex and Bison; I think they are under the BSD license, but I'm not sure. I'm unable to figure out which tools I should learn.

For most novice users the tools are almost identical: the same input source will build a lexer/parser that performs the same task. There are differences, but none that would impair your ability to learn and use the tools; they would only be of interest to more experienced coders, or to those who like to focus on the esoteric internal operations of the tools.
Just use whichever one works best for you in your software environment. I teach my students using flex and bison, on the basis that they can experience the same tools irrespective of platform (Windows, Linux, OS X, etc.).
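To make that concrete, here is a minimal calculator sketch, close to the classic textbook example (untested as written), whose two input files should build unchanged under either lex/yacc or flex/bison.

calc.l:

%{
#include <stdlib.h>
#include "y.tab.h"      /* token codes generated by yacc/bison -d */
%}
%%
[0-9]+    { yylval = atoi(yytext); return NUMBER; }
[ \t]     ;             /* skip blanks */
\n|.      { return yytext[0]; }
%%
int yywrap(void) { return 1; }   /* avoids linking against -ll/-lfl */

calc.y:

%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
%}
%token NUMBER
%left '+' '-'
%%
input : /* empty */
      | input line
      ;
line  : expr '\n'       { printf("= %d\n", $1); }
      ;
expr  : NUMBER
      | expr '+' expr   { $$ = $1 + $3; }
      | expr '-' expr   { $$ = $1 - $3; }
      ;
%%
int main(void) { return yyparse(); }

Either toolchain builds it with the same commands (bison's -y flag selects yacc-compatible output names):

yacc -d calc.y && lex calc.l    # or: bison -yd calc.y && flex calc.l
cc y.tab.c lex.yy.c -o calc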

Related

HLint-like tool for F# code?

HLint is a command-line static analysis tool for Haskell code that even suggests an appropriately refactored version of the code. Does anyone know of similar command-line tools for linting F# code?
Short answer:
No, there is no such tool yet.
Long answer:
Let's discuss how to build one, then.
I did some background research which might be useful.
References
There are a few lint tools in functional languages which can be used as sources of inspiration. However, they tend to go in different directions.
HLint is an advanced tool and its refactoring capability is amazing. Refactoring suggestions are trickier in F# because (1) F# code might have side effects, so equational reasoning is unsound, and (2) when doing point-free transformations, the value restriction could eliminate some good suggestions. If we accept false positives, it might become a bit easier.
In Scala's world, you have Wart Remover and Scalastyle. The former focuses on common functional programming mistakes in Scala. The latter focuses on human errors and inconsistencies (e.g. naming, conventions, etc.). I guess Wart Remover is more relevant here, since F# is a functional-first programming language. However, a style checker is useful on a big code base with multiple developers.
The most relevant lint tool for F# is probably OCaml's style checker, Mascot. It has a big and extensible rule set. Many of these rules are applicable to F# with minor adaptation.
Resources (and the lack thereof)
What we have:
The F# compiler is on GitHub. The relevant component, the F# Compiler Service, has a NuGet package, so bootstrapping is easy. The F# compiler source code is also a good resource, since the compiler's warnings are very good and informative.
There is recent work using the F# compiler (e.g. language bindings, refactoring, code formatting), so we have experience to build on.
We have other lint tools to learn from.
What we don't have:
Good documentation on recommended styles and practices in F# is missing. The design guidelines have been useful, but they aren't complete enough.
Building a lint tool is time-consuming and difficult. Even with a simple, single-purpose tool like Fantomas, it takes a lot of time to process F#'s ASTs correctly.
To sum up, if we define the right scope, creating a simple yet useful lint tool for F# is within reach.
Updates
There is an actively developed linter for F# available at https://github.com/duckmatt/FSharpLint. It seems my analysis was not too far off :).

Cross-platform parser development - What are the options?

I'm currently working on a project that makes use of a custom language with a simple context-free grammar.
Due to the project's characteristics, the same language will have to be used on several platforms, especially mobile ones. Currently, I'm using my small hand-written Java parser (for the Android platform). Soon, I'll have to write basically the same parser for JavaScript, and later possibly also for C# (Windows Phone) and Objective-C (iOS). There is an additional chance that I'll also have to write it for PHP.
My question is: What options are there to simplify the parser development process? Do I really have to write basically the same parser for each platform or is there a less work-intensive way?
From a development-process point of view, the best alternative would enable me to write a grammar definition that would then automatically be compiled into a parser.
However, basically the only cross-platform parser generator I've found so far is the GOLD Parser, which supports two of my target platforms (Java and C#). It would really be awesome if you could point me to other alternatives.
In case you don't know of other cross-platform compiler-compilers: do you have hints on how to structure the code for future language extensibility?
I commend https://en.wikipedia.org/wiki/Comparison_of_parser_generators to your attention: if we restrict the domain to Java and C/C++, it suggests APG, GOLD, SableCC, and SLK (amongst others) as being cross-language enough for your stated goals. (I'm also requiring that the action code be separated from the grammar rather than inline, since the latter would defeat the purpose.) If you want JavaScript as well, it looks like your choices are APG (GPL-licensed) and WaxEye (MIT-licensed).
If your language is reasonably simple, then I would say to just go with whichever you think will be easiest to integrate into your build environment(s) and is the best match for how you think. Unless parsing time is a huge fraction of your application's total workload, parsing speed should not be an issue -- although table size and memory usage might matter in a mobile context. If your grammar is "simple enough" (i.e. not Perl, for instance), I would expect any of those tools to work.
Have a look at ANTLR; I am using it for transforming Java code and it is really great. Moreover, you can find different grammars here.
The REx parser generator supports the required targets, except for Objective-C and PHP (code generators for those might be possible). It has not yet been published as open source, though, and there is no decent documentation, just sample grammars. But there are projects that are using it successfully, e.g. xqlint. Here is a paper describing the experience from that project.

What front-end can I use with RPython to implement a language?

I've looked high and low for examples of implementing a language using the RPython toolchain, but the only one I've been able to find so far is this one in which the author writes a simple BF interpreter. Because the grammar is so simple, he doesn't need to use a parser/lexer generator. Is there a front-end out there that supports developing a language in RPython?
Thanks!
I'm not aware of any general lexer or parser generator targeting RPython specifically. Some with Python output may work, but I wouldn't bet on it. However, there's a set of parsing tools in rlib.parsing. It seems quite usable. OTOH, there's a warning in the documentation: It's reportedly still in development, experimental, and only used for the Prolog interpreter so far.
Alternatively, you can write the front end by hand. Lexers can be annoying and unnatural, granted (you may be able to rip out the DFA utility modules used by the Python implementation). But parsers are a piece of cake if you know the right algorithms. I'm a huge fan of "top-down operator precedence parsers", a.k.a. "Pratt parsers", which are reasonably simple (recursive descent) yet make all expression-parsing issues (nesting, precedence, associativity, etc.) a breeze. There's depressingly little information on them, but the few blog posts below were sufficient for me (a small sketch follows the list):
One by Crockford (I wouldn't recommend it, though; it throws a whole lot of unrelated stuff into the parser and thus obscures it),
another one at effbot.org (uses Python),
and a third by a sadly even-less-famous guy who's developing a language himself, Robert Nystrom.
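To show the shape of the algorithm, here is a tiny, hypothetical Pratt parser - in C rather than RPython, but the structure translates directly. It's a sketch limited to single-digit numbers to stay short:

/* Minimal Pratt (top-down operator precedence) expression evaluator.
   Toy example: single-digit numbers, + - * ^ and parentheses. */
#include <stdio.h>

static const char *src;                /* cursor into the input */

static int lbp(char op) {              /* left binding power of a token */
    switch (op) {
    case '+': case '-': return 10;
    case '*':           return 20;
    case '^':           return 30;
    default:            return 0;      /* ')' and '\0' bind nothing: stop */
    }
}

static int expression(int rbp);

static int nud(void) {                 /* "null denotation": token in prefix position */
    char c = *src++;
    if (c == '(') {
        int v = expression(0);
        src++;                         /* consume ')' */
        return v;
    }
    if (c == '-')                      /* unary minus binds tightly */
        return -expression(30);
    return c - '0';                    /* single-digit number */
}

static int led(char op, int left) {    /* "left denotation": token in infix position */
    /* '^' parses its right side with lbp-1, which makes it right-associative */
    int right = expression(op == '^' ? lbp(op) - 1 : lbp(op));
    switch (op) {
    case '+': return left + right;
    case '-': return left - right;
    case '*': return left * right;
    case '^': { int r = 1; while (right-- > 0) r *= left; return r; }
    }
    return 0;
}

static int expression(int rbp) {
    int left = nud();
    while (lbp(*src) > rbp) {          /* one loop handles all precedence levels */
        char op = *src++;
        left = led(op, left);
    }
    return left;
}

int main(void) {
    src = "1+2*3-(4-2)^2";
    printf("%d\n", expression(0));     /* prints 3 */
    return 0;
}

All the precedence and associativity logic lives in the one while-loop in expression(); adding an operator means adding a binding power and a case, which is what makes these parsers such a pleasant fit for hand-written front ends.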
Alex Gaynor has ported David Beazley's excellent PLY to RPython. Its documentation is quite good, and he even gave a talk about using it to implement an interpreter at PyCon US 2013.

Grammar/own-written parser?

I'm doing some small projects that involve having different syntaxes for something; however, sometimes these syntaxes are so easy that using a parser generator might be overkill.
Now, when should I use a hand-made parser, and when should I use a parser generator?
Thanks,
William van Doorn
There is no hard-and-fast answer, other than "use whatever is easiest for the particular situation".
My experience is that parsers tend to get more complicated over their lifetimes, so using a parser generator up front usually pays off. Even if the language doesn't get more complicated, using a generator forces you to create a formal specification of the syntax, which is itself valuable.
The downsides are that other programmers may not know how to use the generator, which makes it difficult for others to help out, and that it makes your project dependent on that generator.
It's worth coding the parser by hand if, and only if, you're super-keen to have it be extremely fast even on a machine of very modest speed. For example, in this article on the history of Turbo Pascal from before it got its name, you can see how and why the prototype impressed the small (then Danish) firm "Borland" to hire the prototype's author (Anders Hejlsberg), fully develop the compiler, and launch it as its main product, and I quote...:
with no great expectations I hit the compile key - AND THEN I WAS COMPLETELY FLOORED! My test program, that took minutes to compile and link using Digital Research’s Pascal MT+, was compiled and running before I could blink an eye! That was a great WOW moment!
Turbo Pascal's amazing compile speed -- coming first and foremost from a carefully hand-coded and highly tuned recursive descent parser coded in assembly language -- allowed it to use a very different strategy from most compilers: no separate compilation pass generating object files and libraries, and then a linker to put them together, rather, Turbo Pascal 1.0 was a single-pass compiler that directly turned source code into a single executable binary.
I remember just the same amazing experience on the tiny personal computers of that era (when a Z80, 64K of RAM, and two floppies were a lot ;-) -- Turbo Pascal, with its amazing parser and the IDE and everything else, fit comfortably in memory together with a substantial program in both source and compiled form -- no floppies were needed, which meant many orders of magnitude of difference in program turnaround time.
If Hejlsberg had stuck to what was already the traditional wisdom at the time -- always use parser generators -- Turbo Pascal would probably never have emerged as a commercial product, and definitely not achieved the dominance in the Pascal world it enjoyed for years.
Of course, on a typical PC of today, such extreme parsing speed would not be needed for most compilers. Possible exceptions include compilers that must run seamlessly as part of an "interpreter-like" environment (the simple compilers for languages such as Perl and Python are typically hand-coded, to substantial extents, for that reason -- that was an implementation choice that made them viable in the '90s, although today it's not clear it's still needed), or compilers that run on very limited hardware resources, such as smartphones or low-end netbooks.
In the vast majority of cases in which you'll be writing a compiler, none of these performance considerations probably apply, and you'll be happier with a parser generator.
Your question title suggests that using a grammar is optional. It really isn't - even if I were going to implement a tiny language, I'd sketch out a grammar on a single sheet of paper.
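For instance, for a tiny calculator-like language the whole sheet might be no more than a few yacc-style productions (a hypothetical sketch, just to show the scale):

program   : /* empty */ | program statement ;
statement : IDENT '=' expr '\n' | expr '\n' ;
expr      : expr '+' term | expr '-' term | term ;
term      : term '*' factor | term '/' factor | factor ;
factor    : NUMBER | IDENT | '(' expr ')' ;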
As for when to use parser generators, this is really personal preference. Many people believe in hand-writing recursive descent parsers rather than using the table-driven approach, for example. The important thing is to understand the capabilities of the generator.
And don't think that using parser generators is somehow the more professional, or even the easier, approach. Bjarne Stroustrup, when writing the first C++ compiler, intended to use recursive descent but was talked out of it by some keen colleagues at Bell Labs, much to his eventual chagrin. See section 3.3.2 of The Design and Evolution of C++ for more details.

Is Yacc still used in the industry?

The code base I am developing for uses a significant amount of yacc that I don't need to deal with. Sometimes I think it would be helpful in understanding some problems I find, but most of the time I can get away with my complete ignorance of yacc.
My question: are there enough new projects out there that still use yacc to warrant the time I'd need to learn it?
Edit: Given that the responses are mostly in favour of learning yacc, is there a similar tool that you would recommend over it?
Yes, these tools are worth learning if you ever need to create or modify code that parses a grammar.
For many years the de facto tool for generating code to parse a grammar was yacc, or its GNU cousin, bison.
Lately I've heard there are a couple of new kids on the block, but the principle is the same: you write a declarative grammar in a format that is more or less Backus-Naur Form (BNF), and yacc/bison/whatever generates code for you that would be extremely tedious to write by hand.
Also, the principles behind grammars can be very useful to learn even if you don't need to work on such code directly. I haven't worked with parsers much since taking a course on Compiler Design in college, but understanding runtime stacks, lookahead parsers, expression evaluation, and a lot of other related things has helped me immensely to write and debug my code effectively.
edit: Given your follow-up question about other tools: Yacc/Bison are of course best for C/C++ projects, since they generate C code. There are similar tools for other languages. Not all grammars are equivalent, and some parser generators can only grok grammars of a certain complexity, so you might need to find a tool that can handle your grammar. See http://en.wikipedia.org/wiki/Comparison_of_parser_generators
I don't know about new projects using it, but I'm involved in seven different maintenance jobs that use lex and yacc for processing configuration files.
No XML for me, no-sir-ee :-).
Solutions using lex/yacc are a step up from old-style configuration files of key=val lines, since they allow better hierarchical structures like:
server = "mercury" {
ip = "172.3.5.13"
gateway = "172.3.5.1"
}
server = "venus" {
ip = "172.3.5.21"
gateway = "172.3.5.1"
}
And, yes, I know you can do that with XML, but these are primarily legacy applications written in C and, to be honest, I'd probably use lex/yacc for new (non-Java) jobs as well.
That's because I prefer delivering software on time and budget rather than delivering the greatest new whizz-bang technology - my clients won't pay for my education, they want results first and foremost and I'm already expert at lex/yacc and have all the template code for doing it quickly.
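For a sense of how small that template code can be: the grammar half for the configuration format above might look roughly like this (a sketch with invented token names, untested):

%token SERVER IP GATEWAY STRING        /* lexer returns STRING for quoted text */
%%
config   : /* empty */
         | config server
         ;
server   : SERVER '=' STRING '{' settings '}'   { /* register the server */ }
         ;
settings : /* empty */
         | settings setting
         ;
setting  : IP '=' STRING               { /* record the address */ }
         | GATEWAY '=' STRING          { /* record the gateway */ }
         ;

The companion lex specification only needs rules for the three keywords, quoted strings, and the punctuation; defaults, validation, and the symbol table live in ordinary C code behind the actions.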
A general rule of thumb: code lasts a long time, so the technologies used in that code last a long time, too. It would take an enormous amount of time to replace the codebase you mention (it took 15 years to build it...), which in turn implies that it will still be around in 5, 10, or more years. (There's even a chance that someone who reads this answer will end up working on it!)
Another rule of thumb: if a general-purpose technology is commonplace enough that you have encountered it already, it's probably commonplace enough that you should familiarize yourself with it, because you'll see it again one day. Who knows: by familiarizing yourself with it, you may add a useful tool to your toolbox...
Yacc is one of these technologies: you're probably going to run into it again, it's not that difficult, and the principles you'll learn apply to the whole family of parser generators.
PEGs are the new hotness, but there are still a ton of projects that use yacc or tools more modern than it. I would frown on a new project that chose yacc, but for existing projects, porting to a more modern tool may not make sense. This makes rough familiarity with yacc a useful skill.
If you're totally unfamiliar with the topic of parser generators, I'd encourage you to learn one - any one. Many of the concepts are portable between them. Also, it's a useful tool to have in the belt: once you know one, you'll understand how they can often be superior to regex-heavy hand-written parsers. If you're already comfortable with the topic of parsers, I wouldn't worry about it. You'll learn yacc if and when you need to in order to get something done.
I work on projects that use Yacc. Not new code - but were they new, they'd still use Yacc or a close relative (Bison, Byacc, ...).
Yes, I regard it as worth learning if you work in C.
Also consider learning ANTLR, or other more modern parser generators. But knowledge of Yacc will stand you in good stead - it will help you learn any other similar tools too, since a lot of the basic theory is similar.
I don't know about yacc/bison specifically, but I have used ANTLR, CUP, JLex and JavaCC. I thought they would only be of academic importance, but as it turns out we needed a domain-specific language, and this gave us a much nicer solution than some "simpler" (regex-based) parsers out there. Maintenance might be an issue in many environments, though, since most coders these days won't have any experience with parsing tools.
I haven't had the chance to compare it with other parsing systems, but I can definitely recommend ANTLR based on my own experience, and also because of its large and active user base.
Another plus point for ANTLR is ANTLRWorks, the ANTLR GUI development environment, which is a great help while developing and debugging grammars. I've yet to see another parsing system supported by such an IDE.
We are writing new yacc code at my company for shipping products. Yes, this stuff is still used.
