Parsing Haskell preserving comments / formatting

I want to do some source code transformation (automatic import list cleanup) and I'd like to preserve comments and formatting. I've heard bits and pieces about parsers that do this, I think in connection with the GHC parser.
It looks like I might be able to do this with haskell-src-exts (Language.Haskell.Exts.Annotated) and its SrcSpans by pulling things out of the file. I think the SrcSpanInfo only covers the parsed parts, but I could theoretically recover the comments by looking at what lies in between. However, it's not documented in much detail, there are no helper functions I can find, and it looks like a hassle: for example, there's no easy way to print out a parsed expression including its formatting and comments. So I think it's not meant to be used this way; it's just there so you can highlight code in the file or something. My impression is that the author meant the annotations to support this, but never got around to it.
It looks like neither Yi nor Leksah does this. I have a feeling HaRe might, but it's not well documented. Is there a Haskell parser out there that does this?

haskell-src-exts recently got support for preserving comments, and it already records source spans. I'm not sure whether pretty-printing with comments is supported, but you could probably get that working.
The GHC parser also does similar things.
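For a concrete picture of what that looks like, here is a minimal round-trip sketch, assuming a haskell-src-exts version that exports parseFileWithComments and exactPrint (in older releases they live under the Language.Haskell.Exts.Annotated hierarchy):
import Language.Haskell.Exts (ParseResult (..), defaultParseMode,
                              parseFileWithComments)
import Language.Haskell.Exts.ExactPrint (exactPrint)

-- Parse a file keeping its comments, then print it back out.
-- exactPrint uses the recorded SrcSpanInfo to reproduce the original
-- layout, so edits to the AST (e.g. pruning an import list) can be
-- written back without disturbing the rest of the file.
roundTrip :: FilePath -> IO ()
roundTrip path = do
  result <- parseFileWithComments defaultParseMode path
  case result of
    ParseOk (ast, comments) -> putStr (exactPrint ast comments)
    ParseFailed loc err     -> putStrLn (show loc ++ ": " ++ err)
An import-cleanup tool would rewrite the import declarations in ast between the parse and the exactPrint call.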

Related

Guessing a code block's language for correct syntax highlighting

I'm quite puzzled by how the syntax highlighting feature here on SO works, though I have seen similar features elsewhere. How does it work?
Is there one parser which can parse multiple languages at once?
Or, are several passes of different parsers needed and the best parsing result is used?
Or, is only a shallow analysis performed and the language is then guessed based on heuristics?
And if one of these is true, how does it work?
Check out Javascript code prettifier on Google Code.
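As a rough illustration of the heuristic option from the question, here is a toy sketch; the token lists and scoring are invented for illustration, and real highlighters use far richer signals (shebang lines, punctuation ratios, keyword positions):
import Data.List (isInfixOf, maximumBy)
import Data.Ord (comparing)

-- Score each candidate language by counting distinctive tokens that
-- occur in the snippet, then pick the best match.
guessLanguage :: String -> String
guessLanguage code = fst (maximumBy (comparing snd) scores)
  where
    hints  = [ ("haskell", ["::", "->", "where", "instance"])
             , ("java",    ["public ", "void ", "class ", ";"])
             , ("python",  ["def ", "import ", "self", "elif"])
             ]
    scores = [ (lang, length (filter (`isInfixOf` code) ks))
             | (lang, ks) <- hints ]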

LaTeX to "freeform" converter

I would like to develop a converter that accepts math written in "freeform", something like what Wolfram Alpha would accept. For example: 2*x+(7+y)/(z^2) would be transformed to 2\cdot x+\frac{7+y}{z^{2}} (please ignore any syntax mistakes I might have made here), and vice versa. I was wondering whether there is a LaTeX library for C++/Java that parses and/or holds LaTeX expressions in memory. If so, please share.
If not, how would you go about writing something like this? Is it okay to use normal Java/C++ code for this or should I use something like lex?
Yes, it is absolutely OK to write the parser for ‘freeform’ in Flex/Bison; in fact I’d recommend against using TeX, as that language was not written for parsing. I even remember having done exactly that (‘freeform’ to LaTeX) as an exercise with Flex and Bison and Kimwitu++, which allowed arbitrary transformations to be performed before output. Note that you dropped the parentheses of your original formula. This might or might not be what you want in the general case.
For the other way round, the comments already mentioned that it’s nearly impossible to do in the general case. For restricted cases, Bison is good enough.
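To make the freeform-to-LaTeX direction concrete, here is a simplified sketch of the same layered grammar as a combinator-based recursive-descent parser (written in Haskell with Parsec purely for illustration; no whitespace handling, and the precedence layering is minimal):
import Text.Parsec
import Text.Parsec.String (Parser)

-- Layered grammar: expr handles + and -, term handles * and /,
-- factor handles ^. Each layer emits LaTeX text directly:
-- a/b becomes \frac{a}{b} and a^b becomes a^{b}.
expr, term, factor, atom :: Parser String
expr = chainl1 term addop
  where addop = (char '+' >> return (\a b -> a ++ "+" ++ b))
            <|> (char '-' >> return (\a b -> a ++ "-" ++ b))

term = chainl1 factor mulop
  where mulop = (char '*' >> return (\a b -> a ++ "\\cdot " ++ b))
            <|> (char '/' >> return (\a b -> "\\frac{" ++ a ++ "}{" ++ b ++ "}"))

factor = do
  base <- atom
  option base $ do
    _ <- char '^'
    e <- atom
    return (base ++ "^{" ++ e ++ "}")

atom = many1 alphaNum
   <|> between (char '(') (char ')') expr
Running parse expr "" on 2*x+(7+y)/(z^2) yields 2\cdot x+\frac{7+y}{z^{2}}; note that the parentheses around 7+y silently disappear because the grouping is absorbed into \frac, exactly the dropped-parentheses caveat mentioned above.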

Is it possible to write own "packages" for LaTeX?

As a programmer, I wonder if I could create my own package for LaTeX. I need something like the famous "listings" package, but much more capable for my needs. I'm looking for a listings-like solution that would watch for comment lines like
// BEGIN LISTING 3122
// END LISTING 3122
No syntax highlighting, but intelligent support for tab indents. The package would then be given a file name or path, walk through the lines, and copy out just the snippets of interest.
I am 100% sure there is absolutely nothing like this on the market, so I want to program it for LaTeX, if that's possible. I have no idea how, or in what programming language / IDE. Where would I start looking?
This is certainly possible, but it is non-trivial in the TeX programming language. I don't have time to code it up at the moment but here's an algorithm; I suggest asking on comp.text.tex for more specific LaTeX programming advice.
Locally set all catcodes of special chars to "other" (\dospecials) and start reading in the input file line by line (\read)
for each line compare the first however many tokens of the line (some iterative use of \if or \ifx; there might be a package to make this easier such as stringstrings or xstring)
in the default state throw away the input line and read the next
unless it's // BEGIN LISTING, in which case save each line into a macro (something like \g@addto@macro)
until it's // END LISTING, obviously
keep going until the end of the file (\ifeof)
TeX by Topic is a good reference guide for this sort of work.
The rather simple texments package shows how code can be piped into pdflatex: by writing your shell-invocable filter, you should be able to do something similar with your idea.
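Following that texments-style route, the extraction step itself is easy to do outside TeX and pipe back in. A sketch of such a filter in Haskell (assuming the BEGIN/END markers sit on lines of their own, exactly as written; tab-indent handling is left out, and the file and program names are hypothetical):
-- Hypothetical usage: runghc ExtractListing.hs 3122 < Source.cpp
import System.Environment (getArgs)

-- Keep only the lines strictly between "// BEGIN LISTING <n>"
-- and "// END LISTING <n>".
extract :: String -> [String] -> [String]
extract n = takeWhile (/= end) . drop 1 . dropWhile (/= begin)
  where begin = "// BEGIN LISTING " ++ n
        end   = "// END LISTING " ++ n

main :: IO ()
main = do
  [n] <- getArgs
  interact (unlines . extract n . lines)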
I'm pretty certain you can't do this in LaTeX. Basically you can go nuts with anything that's either a command (\foo) or an environment (\begin{foo} ... \end{foo}) but not in the way you are describing here. Within environments or commands it is possible to turn off LaTeX's processing and handle everything yourself in some way. This is how verbatim and listings work. It's not very pretty though.
Basically, I think it might be possible, if you make ‘/’ an active character (there is \makeactive for example but I imagine there are more solutions) and then invent some good magic around it. (You will need to emulate/create an environment with your logic.) Similar things are done in some of the internationalisation packages in order to ease the input of letters with diacritics.
For a character like ‘/’ this might be even harder, as it could also appear in other places of your text, so of course you’d have to take special care of that.

Will ANTLR Help? Different Suggestion?

Before I dive into ANTLR (because it is apparently not for the faint of heart), I just want to make sure I have made the right decision regarding its usage.
I want to create a grammar that will parse in a text file with predefined tags so that I can populate values within my application. (The text file is generated by another application.) So, essentially, I want to be able to parse something like this:
Name: TheFileName
Values: 5 3 1 6 1 3
Other Values: 5 3 1 5 1
In my application, TheFileName is stored as a String, and both sets of values are stored to an array. (This is just a sample, the file is much more complicated.) Anyway, am I at least going down the right path with ANTLR? Any other suggestions?
Edit
The files are created by the user and they define the areas via tags. So, it might look something like this.
Name: <string>TheFileName</string>
Values: <array>5 3 1 6 1 3</array>
Important Value: <double>3.45</double>
Something along those lines.
The basic question is how is the file more complicated? Is it basically more of the same, with a tag, a colon and one or more values, or is the basic structure of the other lines more complex? If it's basically just more of the same, code to recognize and read the data is pretty trivial, and a parser generator isn't likely to gain much. If the other lines have substantially different structure, it'll depend primarily on how they differ.
Edit: Based on what you've added, I'd go one (tiny) step further, and format your file as XML. You can then use existing XML parsers (and such) to read the files, extract data, verify that they fit a specified format, etc.
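To back up the "pretty trivial" claim from before the edit: if every line really is tag, colon, values, a hand-rolled reader needs no parser generator at all. A sketch in Haskell (the helper names and the choice to keep values as strings until needed are assumptions):
import Data.Char (isSpace)
import Data.Maybe (mapMaybe)

-- Turn one "Tag: value" line into a (tag, value) pair.
parseLine :: String -> Maybe (String, String)
parseLine line = case break (== ':') line of
  (tag, ':' : rest) -> Just (trim tag, trim rest)
  _                 -> Nothing
  where trim = dropWhile isSpace . reverse . dropWhile isSpace . reverse

-- Read a whole file into pairs; numeric fields such as
-- "5 3 1 6 1 3" can then be converted with `map read . words`.
parseFile :: String -> [(String, String)]
parseFile = mapMaybe parseLine . lines
Run over the sample above, parseFile yields [("Name","TheFileName"), ("Values","5 3 1 6 1 3"), ("Other Values","5 3 1 5 1")].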
It depends on what control you have over the format of the file you are parsing. If you have no control then a parser-generator such as ANTLR may be valuable. (We do this ourselves for FORTRAN output files over which we have no control). It's quite a bit of work but we have now mastered the basic ANTLR lexer/parser strategy and it's starting to work well.
If, however, you have partial or complete control over the format, then create it with as much markup as necessary. I would always create such a file in XML, as there are so many tools for processing it (not only parsing, but also XPath, databases, etc.). In general we use ANTLR to parse semi-structured information into XML.
If you don't need the format to be custom-built, then you should look into using an existing format such as JSON or XML, for which there are parsers available.
Even if you do need a custom format, you may be better off designing one that is dirt simple so that you don't need a full-blown grammar to parse it. Designing your own scripting grammar from scratch and doing a good job of it is a lot of work.
Writing grammar parsers can also be really fun, so if you're curious then you should go for it. But I don't recommend carelessly mixing learning exercises with practical work code.
Well, if it's "much more complicated", then, yes, a parser generator would be helpful. But, since you don't show the actual format of your file, how could anybody know what might be the right tool for the job?
I use the free GOLD Parser Builder, which is incredibly easy to use and can generate the parser itself in many different languages. There are also samples for parsing expressions like these.
If the format of the file is up to the user, can you even define a grammar for it?
Seems like you just want a lexer at best. Using ANTLR just for the lexer part is possible, but would seem like overkill.

How would you go about parsing Markdown? [closed]

Edit: I recently learned about a project called CommonMark, which correctly identifies and deals with the ambiguities in the original Markdown specification. http://commonmark.org/ It has great C# library support.
You can find the syntax here.
The source that ships with the download is written in Perl, which I have no intention of honoring. It is riddled with regular expressions, and it relies on MD5 hashes to escape certain characters. Something is just wrong about that!
I'm about to hard code a parser for Markdown. What is experience with this?
If you don't have anything meaningful to say about the actual parsing of Markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution; that is, not a third-party library.)
To steer the answers a bit: regular expressions are meant to identify patterns, NOT to parse an entire grammar. That people even consider doing so is foobar.
If you think about Markdown, it's fundamentally based around the concept of paragraphs.
As such, a reasonable approach might be to split the input into paragraphs.
There are many kinds of paragraphs, for example, heading, text, list, blockquote, and code.
The challenge is thus to identify these paragraphs and in what context they occur.
I'll be back with a solution once I find one worthy of being shared.
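In the meantime, a sketch of that paragraph-first idea in Haskell (the classification rules are deliberately crude and cover only a few of the cases named above; real Markdown needs more context, such as setext underlines and lazy continuation lines):
import Data.List (isPrefixOf)

-- One constructor per paragraph kind mentioned above.
data Block = Heading String | Code [String] | Quote [String]
           | List [String]  | Text [String]
  deriving Show

-- Split the input on blank lines, then classify each chunk by how
-- its first line starts.
parseBlocks :: String -> [Block]
parseBlocks = map classify . filter (not . null) . chunks . lines
  where
    chunks = foldr step []
      where step l (p : ps)
              | null (words l) = [] : p : ps
              | otherwise      = (l : p) : ps
            step l []          = [[l]]
    classify p@(first : _)
      | "#"    `isPrefixOf` first = Heading first
      | "    " `isPrefixOf` first = Code p
      | ">"    `isPrefixOf` first = Quote p
      | any (`isPrefixOf` first) ["* ", "- "] = List p
    classify p = Text p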
The only Markdown implementation I know of that uses an actual parser is John MacFarlane’s peg-markdown. Its parser is based on a Parsing Expression Grammar parser generator called peg.
EDIT: Mauricio Fernandez recently released his Simple Markup Markdown parser, which he wrote as part of his OcsiBlog Weblog Engine. Because the parser is written in OCaml, it is extremely simple and short (268 SLOC for the parser, 43 SLOC for the HTML emitter), yet blazingly fast (20% faster than discount (written in hand-optimized C) and six hundred times faster than BlueCloth (Ruby)), despite the fact that it isn't even optimized for performance yet. Because it is only intended for internal use by Mauricio himself for his weblog, there are a few deviations from the official Markdown specification, but Mauricio has created a branch which reverts most of those changes.
I released a new parser-based Markdown Java implementation last week, called pegdown.
pegdown uses a PEG parser to first build an abstract syntax tree, which is subsequently written out to HTML. As such it is quite clean and much easier to read, maintain and extend than a regex based approach.
The PEG grammar is based on John MacFarlane's C implementation "peg-markdown".
Maybe something of interest to you...
If I were to try to parse Markdown (and its extension, Markdown Extra), I think I would use a state machine and parse it one char at a time, linking together internal structures representing bits of text as I go along; then, once everything is parsed, I would generate the output from the objects all strung together.
Basically, I'd build a mini-DOM-like tree as I read the input file.
To generate output, I would just traverse the tree and output HTML or anything else (PS, LaTeX, RTF, ...).
Things that can increase complexity:
You can mix HTML and Markdown, although the rule could be easy to implement: just ignore anything between two balanced tags and output it verbatim.
URLs and notes can have their references at the bottom of the text. A data structure for hyperlinks could simply record something like:
[my text to a link][linkkey]
results in a structure like:
URLStructure:
| InnerText : "my text to a link"
| Key : "linkkey"
| URL : <null>
Headers can be defined with an underline, which could force us to use a simple data structure for a generic paragraph and modify its properties as we read the file:
ParagraphStructure:
| InnerText : the current paragraph text
| (beginning of line until end of line).
| HeadingLevel : <null> or 1-4 when we can assess
| that paragraph heading level, if any.
Anyway, just some thoughts.
I'm sure there are many small details to take care of, and I'm pretty sure regexes could come in handy during the process.
After all, they were meant to process text.
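Rendered as actual types (in Haskell, just for illustration), the two pseudo-structures above might look like this; the field names are translated directly, and Maybe stands in for the <null> slots that a later pass fills in:
-- Link references: url stays Nothing until the "[linkkey]: http://..."
-- definition is found further down the document.
data LinkNode = LinkNode
  { innerText :: String
  , key       :: String
  , url       :: Maybe String
  } deriving Show

-- Paragraphs: headingLevel stays Nothing until an underline (or other
-- heading marker) lets us decide the level.
data ParagraphNode = ParagraphNode
  { paragraphText :: String
  , headingLevel  :: Maybe Int
  } deriving Show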
I'd probably read the syntax specification enough times to know it, and get a feel for how to parse it.
Reading the existing parser code is of course brilliant, both to see what seems to be the main source of complexity, and if any special clever tricks are being used. The use of MD5 checksumming seems a bit weird, but I haven't studied the code enough to understand why it's being done. A comment in a routine called _EscapeSpecialChars() states:
We're replacing each such character with its corresponding MD5 checksum value;
this is likely overkill, but it should prevent us from colliding with the escape
values by accident.
Replacing a single character by a full MD5 does seem extravagant, but perhaps it really makes sense.
Of course, it'd be clever to consider creating a "true" grammar for a tool such as Flex, to get out of the regex bog.
If Perl isn't your thing, there are Markdown implementations in at least 10 other languages. They probably don't all have 100% compatibility, but tend to be pretty close.
MarkdownPapers is another Java implementation whose parser is defined in a JavaCC grammar.
If you are using a programming language that has more than three other users, you should be able to find a library to parse it for you. A quick Google-ing reveals libraries for CL, Haskell, Python, JavaScript, Ruby, and so on. It is highly unlikely that you will need to reinvent this wheel.
If you really have to write it from scratch, I recommend writing a proper parser. With this technique, you won't have to escape things with MD5 hashes. (I agree that if you have to do something like this, it's time to reconsider your design.)
There are libraries available in a number of languages, including PHP, Ruby, Java, C#, and JavaScript. I'd suggest looking at some of these for ideas.
The best way to implement it depends on which language you wish to use; there will be idiomatic and non-idiomatic ways to do it.
Regexes work in Perl, because Perl and regexes are best friends.
Markdown is a JAWL (just another wiki language).
There are plenty of open-source wikis out there whose parser code you can examine; most use regexes.
Check out the ScrewTurn wiki: it has an interesting multi-pass formatter pipeline, a very nice technique (see /core/Formatter.cs and /core/FormatterPipeline.cs).
The best option is to use or join an existing project; these sorts of things are always much harder than they appear.
Here you can find a JavaScript implementation of Markdown. It also relies heavily on regular expressions, as this is just the fastest and easiest way to parse the text.
But it spares you the MD5 part.
I cannot help directly with the coding of the parsing, but maybe this link can help you one way or another.
