Is ANTLR an appropriate tool to serialize/deserialize a binary data format?

I need to read and write octet streams to send over various networks to communicate with smart electric meters. There is an ANSI standard, ANSI C12.19, that describes the binary data format. While the data format is not overly complex, the standard is very large (500+ pages) because it describes many distinct types. The standard is fully described by an EBNF grammar. I am considering using ANTLR to read the EBNF grammar, or a modified version of it, and create C# classes that can read and write the octet stream.
Is this a good use of ANTLR?
If so, what do I need to do to be able to use ANTLR 3.1? From searching the newsgroup archives it seems I need to implement a new stream that can read bytes instead of characters. Is that all, or would I have to implement a Lexer derivative as well?
If ANTLR can help me read/parse the stream can it also help me write the stream?
Thanks.
dan finucane

You might take a look at Ragel. It is a state machine compiler/lexer that is useful for implementing on-the-wire protocols. I have read reports that it generates very fast code. If you don't need a parser and template engine, Ragel has less overhead than ANTLR. If you need a full-blown parser, AST, and nice template engine support, ANTLR might be a better choice.

This subject comes up from time to time on the ANTLR mailing list. The answer is usually no, because binary file formats are very regular and it's just not worth the overhead.
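To make the "not worth the overhead" point concrete, here is a minimal hand-rolled sketch in Python (the question is about C#, and the record layout below is invented for illustration, not taken from C12.19) showing how little code a fixed binary record takes to read and write:

    import struct

    # Hypothetical fixed-layout record: a 2-byte table id, a 4-byte timestamp,
    # and an 8-byte IEEE-754 reading, all big-endian. This layout is made up
    # for illustration; it is NOT the ANSI C12.19 format.
    RECORD = struct.Struct(">HId")

    def read_record(octets: bytes) -> dict:
        table_id, timestamp, reading = RECORD.unpack_from(octets)
        return {"table_id": table_id, "timestamp": timestamp, "reading": reading}

    def write_record(table_id: int, timestamp: int, reading: float) -> bytes:
        return RECORD.pack(table_id, timestamp, reading)

    # Round trip
    octets = write_record(7, 1650000000, 230.4)
    assert read_record(octets)["table_id"] == 7

The trade-off is the one described in the question: with hundreds of distinct types to keep in sync with the standard, generating this kind of code from the EBNF starts to look more attractive.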

It seems to me that having a grammar gives you a tremendous leg up.
ANTLR 3.1 has StringTemplate and code generation features that are separate from the parsing/lexing, so you can decompose the problem that way.
Seems like a winner to me, worth trying.

Related

Are there any known parser combinator libraries in F# that can parse binary (not text) files?

I am familiar with some of the basics of fparsec but it seems to be geared towards text files or streams.
Are there any other F# libraries that can efficiently parse binary files? Or can fparsec be easily modified to work efficiently with binary streams?
You may be interested in pickler combinators. These are a bit like parser combinators, but are more focused on simpler binary formats (picklers allow you to produce binary data and unpicklers parse it). There is a quite readable article about the idea (PDF) by Andrew Kennedy (the author of units of measure).
I don't have much experience with these myself, but I just realized it may be relevant for you. The idea is used in the F# compiler for generating some binary resources (like quotations stored in resources). Although, I'm not sure if the F# compiler implementation is any good (it is one of those things from the early days of the F# compiler).
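The paper's examples are in ML/F#, but purely to illustrate the combinator idea (the names below are invented; this is not FParsec's or the F# compiler's API), a pickler is essentially a pack function paired with an unpack function, and combinators build bigger pairs out of smaller ones:

    import struct

    # A "pickler" here is a pair: pack(value) -> bytes, and
    # unpack(buffer, offset) -> (value, new_offset).
    def u32():
        fmt = struct.Struct(">I")
        return (lambda v: fmt.pack(v),
                lambda buf, off: (fmt.unpack_from(buf, off)[0], off + fmt.size))

    def pair(p1, p2):
        pack1, unpack1 = p1
        pack2, unpack2 = p2
        def pack(value):
            a, b = value
            return pack1(a) + pack2(b)
        def unpack(buf, off):
            a, off = unpack1(buf, off)
            b, off = unpack2(buf, off)
            return (a, b), off
        return pack, unpack

    pack, unpack = pair(u32(), u32())
    data = pack((1, 2))
    assert unpack(data, 0) == ((1, 2), len(data))

The appeal for binary formats is that the same combinator expression describes both the writer and the reader, so they cannot drift apart.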
The problem with working with binary streams is not a parser problem per se, it's a lexing problem. The lexer is what turns the raw data into elements that the parser can handle.
Most parsing systems have few problems letting you supply your own lexer, and if that's the case you could, ideally, readily write a compliant lexer that works on your binary stream.
The problem, however, is that most parsing and lexing systems today are themselves created from a higher-level tool. And THAT tool most likely is not designed to work with binary streams. That is, it's not practical for you to specify the tokens and grammar of the binary stream that can be used to create the subsequent parser and lexer. Also, there is likely no support whatsoever for the higher-level concept of multi-byte binary numbers (shorts, longs, floats, etc.) that you are likely to encounter in a binary stream, nor for the generated parser to work well on their actual values if you need them, because these systems are mostly designed for text-based tokens, where the underlying runtime handles the details of converting that text into something the machine can use (such as sequences of ASCII numerals into actual binary integers).
All that said, you can probably still use the parsing section of the tool, since parsers work mostly on abstract tokens that are fed to them by the lexer. Once you create your grammar at a symbolic level, you would need to redo the lexer to create the proper tokens from the binary stream to feed into the parser.
This is actually good, because the parser tends to be far more complicated than the basic lexer, so the toolkit would handle much of the "hard part" for you. But you would still need to deal with creating your own lexer and interfacing it properly to the generated parser. Not an insurmountable task, and if the grammar is of any real complexity, likely worth your effort in the long run.
If it's all mostly simple, then you're likely better off doing it yourself by hand. Off the top of my head, it's hard to imagine a difficult binary grammar, since the major selling point of a binary format is that it's much closer to the machine, in contrast to the text that most parsers are designed to work with. But I don't know your use case.
But consider the case of a disassembler. That's a simple lexer that may be able to understand, at a high level, the different instruction types (such as instructions that take no arguments, those that take a single byte as an argument, or a word), and feed that to a parser, which can then be used to convert the instructions into their mnemonics and operands in the normal assembler syntax, as well as handle the label references and such.
It's a contrived case, since a disassembler typically doesn't separate the lexing and parsing phases (it's usually not complicated enough to bother), but it's one way to look at the problem.
Addenda:
If you have enough information to convert the binary stream into text to feed to the engine, then you have enough information to skip the text and instead create the actual tokens that the parser would want to see from the lexer.
That said, what you could do is take your text format, use that as the basis for your parsing tool and grammar, and have it create the lexer and parser machines for you; then, by hand, you can test your parser and its processing using "text tests".
But when you get around to reading the binary, rather than creating text to then be lexed and parsed, simply create the tokens that the lexer would have created (these should be simple objects) and feed the parser directly. This will save you the lex step and some processing time.
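As a rough sketch of that "skip the text, build the tokens yourself" idea (the token class and field layout below are invented; a generated parser would define its own token-stream interface):

    import struct
    from dataclasses import dataclass

    @dataclass
    class Token:
        type: str      # e.g. "TABLE_ID", "READING"
        value: object

    def lex_binary(octets: bytes):
        """Turn a hypothetical binary record directly into parser tokens,
        with no intermediate text representation."""
        table_id, reading = struct.unpack_from(">Hd", octets)
        yield Token("TABLE_ID", table_id)
        yield Token("READING", reading)
        yield Token("EOF", None)

    # A generated parser would consume these through its own token stream;
    # this stand-in just collects them.
    tokens = list(lex_binary(struct.pack(">Hd", 7, 230.4)))
    assert tokens[0].type == "TABLE_ID" and tokens[0].value == 7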

Writing a code formatting tool for a programming language

I'm looking into the feasibility of writing a code formatting tool for the Apex language, a Salesforce.com variation on Java, and perhaps VisualForce, its tag-based markup language.
I have no idea on where to start this, apart from feeling/knowing that writing a language parser from scratch is probably not the best approach.
I have a fairly thin grasp of what Antlr is and what it does, but conceptually, I'm imagining one could 'train' antlr to understand the syntax of Apex. I could then get a structured version of the code in a data structure (AST?) which I could then walk to produce correctly formatted code.
Is this the right concept? Is Antlr a tool to do that? Any links to a brief synopsis on this? I'm looking at investing a few days in this task, not months, and I'm not sure if it's even vaguely achievable.
Since Apex syntax is similar to Java, I'd look at Eclipse's JDT. Edit down the Java grammar to match Apex. Do the same w/ formatting rules/options. This is more than a few days of work.
Steven Herod wrote:
... I'm imagining one could 'train' antlr to understand the syntax of Apex. ...
What do you mean by "'train' antlr"? "Train" as in artificial intelligence (training a neural-net)? If so, then you are mistaken.
Steven Herod wrote:
... get a structured version of the code in a data structure (AST?) which I could then walk to produce correctly formatted code.
Is this the right concept? Is Antlr a tool to do that?
Yes, more or less. You write a grammar that precisely defines the language you want to parse. Then you use ANTLR, which will generate a lexer (tokenizer) and parser based on the grammar file. You can let the parser create an AST from your input source and then walk the AST and emit (custom) output/code (see the small walk-and-emit sketch after this answer).
Steven Herod wrote:
... I'm looking at investing a few days in this task, not months, and I'm not sure if it's even vaguely achievable.
Well, I don't know you of course, but I'd say writing a grammar for a language similar to Java and then emitting output by walking the AST within just a couple of days is impossible, even more so for someone new to ANTLR. I am fairly familiar with ANTLR, but I couldn't do it in just a few days. Note that I'm only talking about the "parsing part"; after you've done that, you'll need to integrate this into some text editor. This all looks to be a project of several months rather than weeks, let alone a few days.
So, in short, if all you want to do is write a custom code highlighter, ANTLR isn't your best choice.
You could have a look at Xtext which uses ANTLR under the hood. To quote their website:
With Xtext you can easily create your own programming languages and domain-specific languages (DSLs). The framework supports the development of language infrastructures including compilers and interpreters as well as full blown Eclipse-based IDE integration. ...
But I doubt you'll have an Eclipse plugin up and running within just a few days.
Anyway, best of luck!
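To make the "create an AST and walk it to emit formatted code" idea above concrete, here is a toy walk-and-emit sketch (plain hand-written Python classes, not ANTLR-generated ones; the node shapes are invented):

    from dataclasses import dataclass

    # Toy AST node shapes -- a real generated parser defines its own.
    @dataclass
    class If:
        cond: str
        body: list

    @dataclass
    class Stmt:
        text: str

    def emit(node, indent=0):
        """Walk the tree and return formatted source lines."""
        pad = "    " * indent
        if isinstance(node, If):
            lines = ["%sif (%s) {" % (pad, node.cond)]
            for child in node.body:
                lines.extend(emit(child, indent + 1))
            lines.append(pad + "}")
            return lines
        return [pad + node.text + ";"]

    tree = If("x > 0", [Stmt("y = 1"), If("y > 0", [Stmt("z = 2")])])
    print("\n".join(emit(tree)))

The formatting decisions (indent width, brace placement) all live in the walker, which is exactly where a formatting tool wants them.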
Our DMS Software Reengineering Toolkit is designed to do this, as a kind of poker-pot ante necessary for any kind of automated software reengineering project.
DMS allows one to define a grammar, similar in style to ANTLR's (and other parser generators'). Unlike ANTLR (and other parser generators), DMS uses a GLR parser, which means you don't have to bend the language grammar rules to meet the requirements of the parser generator. If you can write a context-free grammar, DMS will convert that into a parser for that language. This means in fact you can get a working, correct grammar up considerably faster than with typical LL or L(AL)R parser generators.
Unlike ANTLR (and other parser generators), there is no additional work to build the AST; it is constructed automatically. This means you spend zero time writing tree-building rules and none debugging them.
DMS additionally provides a pretty-printing specification language, specifying how text boxes stack vertically, horizontally, or indented, in which you can define the "format" that is used to convert the AST back into completely legal, nicely formatted source text. None of the well-known parser generators provide any help here; if you want to prettyprint the tree, you get to do a great deal of custom coding. For more details on this, see my SO answer to Compiling an AST back to source. What this means is you can build a prettyprinter for your grammar in an (intense) afternoon by simply annotating the grammar rules with box layout directives.
DMS's lexer is very careful to capture comments and "lexical formats" (was that number octal? What kind of quotes did that string have? Escaped characters?) so that they can be regenerated correctly. Parse-to-AST and then prettyprint-AST-to-text round trips arbitrarily ugly code into formatted code following the prettyprinting rules. (This round trip is the poker ante: if you want to go further, to actually manipulate the AST, you still want to be able to regenerate valid source text.)
We recently built parser/prettyprinters for EGL. This took about a week end to end. Granted, we are expert at our tools.
You can download any of a number of different formatters built using DMS from our web site, to see what such formatting can do.
EDIT July 2012: Last week (5 days) using DMS, from scratch we (I personally) built a fully compliant IEC61131-3 "Structured Text" (industrial control language, Pascal-like) parser and prettyprinter. (It handles all the examples from the standards documents).
Reverse engineering a language to get a parser is hard. Very hard! Even if it's very close to Java.
But why reinvent the wheel?
There is a wonderful Apex parser implementation as part of the Force.com IDE on GitHub. It's just a jar without source code but you can use it for whatever you want. And the developers behind it are really supportive and helpful.
We are currently building an Apex module of the famous Java static code analyzer PMD here. And we use Salesforce.com's internal parser. It works like a charm.
And hey, it's an open source project and we need contributors of any kind ;-)

Will rewriting a multipurpose log file parser to use formal grammars improve maintainability?

TLDR: I built a multipurpose parser by hand with different code for each format; would it work better in the long run using one chunk of parser code and an ANTLR, PyParsing, or similar grammar to specify each format?
Context:
My job involves lots of benchmark log files from ~50 different benchmarks. There are a few in XML, a few in HTML, a few in CSV, and lots of proprietary stuff with no documented spec. To save me and my coworkers the time of entering this data by hand, I wrote a parsing tool that handles all of the formats we deal with regularly with a uniform interface. The design, though, is not so clean.
I wrote this thing in Python and created a Parser class. Each file format is handled as an implementation that provides its own code for the Parser's read() method. I like the idea of having only one definition of Parser that uses grammars to understand each format, but I've never done it before.
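A minimal sketch of the current design (class and method names here are illustrative, not the actual code):

    from abc import ABC, abstractmethod

    class Parser(ABC):
        """Uniform interface over the different benchmark log formats."""
        @abstractmethod
        def read(self, path: str) -> dict:
            """Return the benchmark results in a format-independent shape."""

    class CsvResultParser(Parser):
        def read(self, path: str) -> dict:
            import csv
            with open(path, newline="") as f:
                return {"format": "csv", "rows": list(csv.DictReader(f))}

    class XmlResultParser(Parser):
        def read(self, path: str) -> dict:
            import xml.etree.ElementTree as ET
            return {"format": "xml", "root_tag": ET.parse(path).getroot().tag}

    # Callers pick an implementation per format but use the same interface.
    PARSERS = {".csv": CsvResultParser, ".xml": XmlResultParser}

The grammar-based alternative would keep this interface but replace each hand-written read() with a grammar file plus one shared piece of parsing machinery.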
Is it worth my time, and will it be easier for other newbies to work with in the future once I finish refactoring?
I can't answer your question with 100% certainty, but I can give you an opinion.
I find the choice to use a proper grammar vs hand-rolled regex "parsers" often comes down to how uniform the input is.
If the input is very uniform and you already know a language that deals with strings well, like Python or Perl, then I'd keep your existing code.
On the other hand I find parser generators, like Antlr, really shine when the input can have errors and inconsistencies in it. The reason is that the formal grammar allows you to focus on what should be matched in a certain context without having to worry about walking the input stream manually.
Furthermore, if the input stream has errors, then I find it's often easier to deal with them using Antlr vs regexes. The reason is that if a couple of options are available, Antlr has built-in functionality for choosing the correct path, including rollback via predicates.
Having said all that, there is a lot to be said for working code. I find that if I want to rewrite something, I try to make a good case for how the rewrite will benefit the user of the product.

In programming language specifications, why is the lexical analysis not directly translatable?

In all of the standard specifications for programming languages, why is it that you cannot directly translate the lexical analysis/layout into a grammar that is ready to be plugged in and working?
I can understand that it might be impossible to adapt it for the likes of Flex/Bison, Lex/Yacc, ANTLR and so on, and at the same time keep it readable for humans to understand.
But surely, if it is a standard specification, it should be a simple copy/paste of the grammar layout; instead you end up with loads of shift/reduce errors, which can backfire and hence produce an inaccurate grammar.
In other words, why did they not make it readable for use by a grammar/parser tool straight-away?
Maybe it is a debatable thing I don't know...
Thanks,
Best regards,
Tom.
In other words, why did they not make it readable for use by a grammar/parser tool straight-away?
Standards documents are intended to be readable by humans, not parser generators.
It is easy for humans to look at a grammar and know what the author intended; however, a computer needs a lot more hand-holding along the way.
Specifically, these specifications are generally not LL(1) or LR(1). As such, lookaheads are needed, conflicts need to be resolved. True, this could be done in the language specification, but then it is source code for a lexical analyzer, not a language specification.
I agree with your sentiment, but the guys writing standards can't win on this.
To make the lexer/grammar work for a parser generator directly-out-of-standard, the standard writers would have to choose a specific one. (What choice would the COBOL standard folks have made in 1958?)
The popular ones (LEX, YACC, etc.) are often not capable of handling reference grammars, written for succinctness and clarity, and so would be a poor (e.g. non-)choice. More exotic ones (Earley, GLR) might be more effective because they allow infinite lookahead and ambiguity, but are harder to find. So if a specific tool like this was chosen, you would not get what you wanted, which is a grammar that works with the parser generator you have.
Having said that, the DMS Software Reengineering Toolkit uses a GLR parser generator. We don't have to massage reference grammars a lot to get them to work, and DMS now handles a lot of languages, including ones that are famously hard such as C++. IMHO, this is as close to your ideal as you are likely to get.

Most effective way to parse C-like definition strings?

I've got a set of function definitions written in a C-like language with some additional keywords that can be put before some arguments (the same way as "unsigned" or "register", for example), and I need to analyze these lines as well as some function stubs and generate actual C code from them.
Is it correct that Flex/Yacc are the most appropriate way to do it?
Will it be slower than writing a shell or Python script using regexps (which may become a big pain, I suppose, if the number of additional keywords grows and their effects are rather different), given that I have zero experience with analysers/parsers (though I know how LALR does its job)?
Are there any good materials on Lex/Yacc that cover similar problems? All papers I could find use the same primitive example of a "toy" calculator.
Any help will be appreciated.
ANTLR is commonly used (as are Lex/Yacc).
ANTLR, ANother Tool for Language Recognition, is a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions in a variety of target languages.
There is also the Lemon Parser, which features a less restrictive grammar. The downside is you're married to Lemon; rewriting a parser's grammar for something else when you discover some limitation sucks. The upside is it's really easy to use and self-contained. You can drop it into your tree and not worry about checking for the presence of other tools.
SQLite3 uses it, as do several other popular projects. I'm not saying use it because SQLite does, but perhaps give it a try if time permits.
That entirely depends on your definition of "effective". If you have all the time in the world, the fastest parser would be a hand-written pull parser. They take a long time to debug and develop, but today no parser generator beats hand-written code in terms of runtime performance.
If you want something that can parse valid C within a week or so, use a parser generator. The code will be fast enough and most parser generators come with a grammar for C already which you can use as a starting point (avoiding 90% of the common mistakes).
Note that regexps are not suitable for parsing recursive structures. This approach would both be slower than using a generator and more error prone than a hand-written pull parser.
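To illustrate the point about recursive structures (the toy grammar below is invented, not the asker's C-like definitions), a hand-written recursive-descent parser handles nesting that a plain regex cannot:

    import re

    TOKEN = re.compile(r"\s*(\(|\)|[A-Za-z_]\w*)")

    def parse(src: str):
        """Parse arbitrarily nested groups like 'f(a(b)(c))' into nested lists.
        A plain regex cannot match arbitrarily balanced parentheses; the
        recursion here can."""
        tokens = TOKEN.findall(src)
        pos = 0
        def group():
            nonlocal pos
            items = []
            while pos < len(tokens) and tokens[pos] != ")":
                if tokens[pos] == "(":
                    pos += 1              # consume '('
                    items.append(group())
                    pos += 1              # consume ')'
                else:
                    items.append(tokens[pos])
                    pos += 1
            return items
        return group()

    assert parse("f(a(b)(c))") == ["f", ["a", ["b"], ["c"]]]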
Actually, it depends on how complex your language is and whether it's really close to C or not...
Still, you could use lex as a first step, even just for the regular expressions...
I would go for lex + Menhir and OCaml...
but any flex/yacc combination would be fine..
The main problem with regular bison (the GNU implementation of yacc) stems from the C typing: you have to describe your whole tree (and all the manipulation functions)... Using OCaml would be much easier...
For what you want to do, our DMS Software Reengineering Toolkit is likely a very effective solution.
DMS is designed specifically to support custom analyzers/code generators of the type you are discussing. It provides very strong facilities for defining arbitrary language parsers/analyzers (tested on 30+ real languages, including several complete dialects of C, C++, Java, C#, and COBOL).
DMS automates the construction of ASTs (so you don't have to do anything but get the grammar right to have a usable AST), enables the construction of custom analyses of exactly the pattern-directed inspection you indicated, can construct new C-specific ASTs representing the code you want to generate, and spit them out as compilable C source text. The pre-existing definitions of C for DMS can likely be bent to cover your C-like language.
