What tool to use for parsing text

Let's get right into this topic.
I have some output from a Cydia app called "AutoTouch":
touchDown(2, 634.4, 471.3);
usleep(66685.62);
touchUp(2, 635.4, 470.3);
usleep(365600.04);
Now, since I've already written some functions for myself, I want to parse that into something like this:
tapp(634, 471);
usleep(365600);
What simple language would you recommend I use to do that? It should be easy, but also powerful (comparing numbers and such hardcore stuff ^^), and it should work on OS X/Linux.
Thanks for your help, and I hope I used the word "parsing" correctly :)

On Linux/Unix/OS X, sed is your friend. You'll find a nice sed tutorial on Grymoire. Depending on your specific needs, you could use awk as an alternative; there is also a nice awk tutorial on Grymoire.
The answer to this Stack Overflow question should help you choose between these two tools.
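If your needs grow beyond a one-liner, the same transformation is only a few lines of Python, which also works out of the box on OS X and Linux. A minimal sketch, assuming the log is piped in on stdin and that every tap is a touchDown/usleep/touchUp triple followed by the delay you want to keep:

import re
import sys

call = re.compile(r'(\w+)\(([^)]*)\);')
in_tap = False  # True between touchDown and touchUp

for line in sys.stdin:
    m = call.match(line.strip())
    if not m:
        continue
    name, args = m.groups()
    nums = [float(a) for a in args.split(',')]
    if name == 'touchDown':
        # emit the tap at the touch-down coordinates, truncated to integers
        print('tapp(%d, %d);' % (nums[1], nums[2]))
        in_tap = True
    elif name == 'touchUp':
        in_tap = False
    elif name == 'usleep' and not in_tap:
        # keep only the delay that follows a completed tap
        print('usleep(%d);' % nums[0])

Run it as python convert.py < recording.txt (hypothetical filename); on your sample it prints exactly tapp(634, 471); and usleep(365600);.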

Related

Parsing Haskell preserving comments / formatting

I want to do some source code transformation (automatic import list cleanup) and I'd like to preserve comments and formatting. I've heard some things on and off about parsers that do this, I think for the GHC parser.
It looks like I might be able to do this with haskell-src-exts' Language.Haskell.Exts.Annotated and its SrcSpans by pulling things out of the file. I think the SrcSpanInfo only covers the parsed parts, but I could theoretically figure out the comments by looking at what's in between. However, it's not documented in much detail, there are no helper functions I can find, and it looks like a hassle; e.g., there's no easy way to print out a parsed expression including formatting and comments. So I think it's not meant to be used in this way; it's just so you can highlight code in the file or something. My impression is that the author meant to use annotations to support this, but never got around to it.
It looks like neither yi nor leksah does this. I have a feeling HaRe might, but it's not well documented. Is there a Haskell parser out there that does this?
haskell-src-exts recently got support for preserving comments, and it already records source spans. I'm not sure whether pretty-printing is supported, but you could probably get that working.
The GHC parser also does similar things.

I've heard that LaTeX is Turing complete. Are there any programs written in LaTeX?

It's possible to do interesting things with what would ordinarily be thought of as typesetting languages. For example, you can construct the Mandelbrot set using PostScript.
It is suggested in this MathOverflow question that LaTeX may be Turing-complete. This implies the ability to write arbitrary programs (although it may not be easy!). Does anyone know of any concrete example of such a program in LaTeX, which does something highly unusual with the language?
In issue 13 of The Monad Reader, Stephen Hicks writes about implementing the solution to an ICFP contest (involving Mars rover navigation) in TeX, with copious use of macros. Amusingly, the solution's output, when typeset, is a PostScript map of the rover's path.
Alternatively, Andrew Greene wrote a BASIC interpreter in TeX (more details). This may count as slightly perverse.
TeX macros can encode the S and K combinators directly, which already gives you a Turing-complete calculus:
\def\K#1#2{#1}  % K x y = x
\def\S#1#2#3{#1#3{#2#3}}  % S x y z = x z (y z)
The pgfmath library still amazes me. But on a more Turing-related note: it is possible to write an actual Turing machine in TeX, as per http://en.literateprograms.org/Turing_machine_simulator_(LaTeX). It's just a nifty way of using expansions in TeX.
PostScript is Turing-complete as well; if you read the manual, you'll be amazed by its general programming capabilities (at least, I was).
I'm not sure if this qualifies as programming per se, but I've recently started doing something a bit like object-oriented programming in LaTeX. (You don't need to know any maths to follow the following.) In recent papers, I've been writing about categories, which have objects and morphisms. Since there have been quite a few of those, I wanted a consistent style, so that, say, 𝒞 was a category with typical object C and typical morphism c. Then I'd also have 𝒟 with D and d. So I define a "class", say "category" (you need to be a mathematician to understand the joke there), declare that C is an instance of this class, and then have access to \ccat, \cobj, \cmor, and so forth. The reason for not doing \cat{c}, \obj{c}, \mor{c}, and so forth, is that sometimes these categories have special names, so after declaring the instance I can modify its name very easily: I simply redefine \ccat (well, actually \mathccat, since \ccat is a wrapper which selects \mathccat in math mode and \textccat in text mode). (Of course, it's a little more complicated than the above suggests, and the OO stuff really comes in useful when I want to define a new category as a variant of an old one; it can even deal with the case where the old one doesn't exist yet.)
Although it may not qualify as actual programming, I am using it in papers and do find it useful - the other answers (so far) have more of the feel of showing off the capabilities of LaTeX than of a sensible solution to a practical problem.
I know of someone who wrote the answer to an ACM contest problem in LaTeX.

Typeset a problem/solution pair (example) in LaTeX

I'd like to know if there is a way in LaTeX to show the following:
Example 1: problem statement here
Solution: solution here
and wrap that in a box so that it will be noticeable.
It seems like a common enough problem that there should be ready-made solutions.
Any suggestions would be much appreciated!
This can be done using the exercise package. For more information, look at the manual or a previous topic on this subject. A (modified) example from the manual:
% requires \usepackage{exercise} in the preamble
\begin{ExerciseList}
\Exercise Discuss\ldots
\Answer $\ldots$
\end{ExerciseList}

How would you go about parsing Markdown? [closed]

Edit: I recently learned about a project called CommonMark, which correctly identifies and deals with the ambiguities in the original Markdown specification (http://commonmark.org/). It has great C# library support.
You can find the syntax here.
The source that ships with the download is written in Perl, which I have no intention of honoring. It is riddled with regular expressions, and it relies on MD5 hashes to escape certain characters. Something is just wrong about that!
I'm about to hand-code a parser for Markdown. What is your experience with this?
If you don't have anything meaningful to say about the actual parsing of Markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution; that is, not a third-party library.)
To help a bit with the answers: regular expressions are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar.
If you think about Markdown, it's fundamentally based around the concept of paragraphs.
As such, a reasonable approach might be to split the input into paragraphs.
There are many kinds of paragraphs, for example, heading, text, list, blockquote, and code.
The challenge is thus to identify these paragraphs and in what context they occur.
I'll be back with a solution once I find one worthy of being shared.
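To make the paragraph-first idea concrete, here is a rough Python sketch (not a real Markdown parser; the classification rules are deliberately incomplete and would need to grow):

import re

def classify(lines):
    # crude block-level classification; real Markdown has many more rules
    first = lines[0]
    if first.startswith('#'):
        return 'heading'
    if first.startswith('>'):
        return 'blockquote'
    if re.match(r'([*+-]|\d+\.)\s', first):
        return 'list'
    if all(line.startswith('    ') for line in lines):
        return 'code'
    return 'text'

def blocks(text):
    # split the input into paragraphs on blank lines, then label each one
    for chunk in re.split(r'\n\s*\n', text):
        lines = [l for l in chunk.split('\n') if l != '']
        if lines:
            yield classify(lines), lines

Each (kind, lines) pair can then be handed to a per-kind handler for the inline spans, which is where the real context-sensitivity lives.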
The only Markdown implementation I know of that uses an actual parser is John MacFarlane's peg-markdown. Its parser is based on a Parsing Expression Grammar parser generator called peg.
EDIT: Mauricio Fernandez recently released his Simple Markup Markdown parser, which he wrote as part of his OcsiBlog weblog engine. Because the parser is written in OCaml, it is extremely simple and short (268 SLOC for the parser, 43 SLOC for the HTML emitter), yet blazingly fast (20% faster than Discount, written in hand-optimized C, and six hundred times faster than BlueCloth, written in Ruby), despite the fact that it isn't even optimized for performance yet. Because it is only intended for internal use by Mauricio himself for his weblog, there are a few deviations from the official Markdown specification, but Mauricio has created a branch which reverts most of those changes.
I released a new parser-based Markdown Java implementation last week, called pegdown.
pegdown uses a PEG parser to first build an abstract syntax tree, which is subsequently written out to HTML. As such, it is quite clean and much easier to read, maintain, and extend than a regex-based approach.
The PEG grammar is based on John MacFarlane's C implementation, peg-markdown.
Maybe something of interest to you...
If I were to try to parse Markdown (and its extension, Markdown Extra), I think I would use a state machine and parse it one character at a time, linking together internal structures representing bits of text as I go along; then, once everything is parsed, I'd generate the output from the objects all strung together.
Basically, I'd build a mini-DOM-like tree as I read the input file.
To generate the output, I would just traverse the tree and emit HTML or anything else (PS, LaTeX, RTF, ...).
Things that can increase complexity:
The fact that you can mix HTML and Markdown, although the rule could be easy to implement: just ignore anything between two balanced tags and output it verbatim.
URLs and notes can have their references at the bottom of the text. The data structure for hyperlinks could simply record something like:
[my text to a link][linkkey]
results in a structure like:
URLStructure:
| InnerText : "my text to a link"
| Key : "linkkey"
| URL : <null>
Headers can be defined with an underline, which could force us to use a simple data structure for a generic paragraph and modify its properties as we read the file:
ParagraphStructure:
| InnerText : the current paragraph text
| (beginning of line until end of line).
| HeadingLevel : <null> or 1-4 when we can assess
| that paragraph heading level, if any.
Anyway, just some thoughts.
I'm sure there are many small details to take care of, and I'm pretty sure that regexes could come in handy during the process.
After all, they were meant to process text.
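As one concrete example of those structures, here is a hedged two-pass Python sketch for the reference-style links mentioned above (the field names simply mirror the URLStructure sketch; nothing here is from a real implementation):

import re

link_use = re.compile(r'\[([^\]]+)\]\[([^\]]+)\]')         # [my text to a link][linkkey]
link_def = re.compile(r'^\s*\[([^\]]+)\]:\s*(\S+)', re.M)  # [linkkey]: http://example.com

def resolve_links(text):
    # pass 1: gather the reference definitions at the bottom of the text
    defs = {key.lower(): url for key, url in link_def.findall(text)}
    links = []
    # pass 2: build one structure per link use, filling in the URL when known
    for inner_text, key in link_use.findall(text):
        links.append({'InnerText': inner_text,
                      'Key': key,
                      'URL': defs.get(key.lower())})  # stays None if undefined
    return links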
I'd probably read the syntax specification enough times to know it, and get a feel for how to parse it.
Reading the existing parser code is of course brilliant, both to see what seems to be the main source of complexity, and if any special clever tricks are being used. The use of MD5 checksumming seems a bit weird, but I haven't studied the code enough to understand why it's being done. A comment in a routine called _EscapeSpecialChars() states:
We're replacing each such character with its corresponding MD5 checksum value;
this is likely overkill, but it should prevent us from colliding with the escape
values by accident.
Replacing a single character by a full MD5 does seem extravagant, but perhaps it really makes sense.
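The trick itself is easy to sketch, and the placeholder table shows why a long digest helps: the substitute string must never collide with anything already in the document. A hypothetical Python version (not Markdown.pl's actual code):

import hashlib

def escape_specials(text, chars='*_`'):
    # replace backslash-escaped characters with their MD5 digests
    table = {}
    for ch in chars:
        token = hashlib.md5(ch.encode()).hexdigest()
        table[token] = ch
        text = text.replace('\\' + ch, token)
    return text, table

def unescape_specials(text, table):
    # restore the original characters once all regex passes have run
    for token, ch in table.items():
        text = text.replace(token, ch)
    return text

A 32-character hex digest is wildly unlikely to appear in real input, which is exactly the "overkill" the comment admits to.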
Of course, it'd be cleverer to write a "true" syntax definition for a tool such as Flex, to get out of the regex bog.
If Perl isn't your thing, there are Markdown implementations in at least 10 other languages. They probably don't all have 100% compatibility, but tend to be pretty close.
MarkdownPapers is another Java implementation whose parser is defined in a JavaCC grammar.
If you are using a programming language that has more than three other users, you should be able to find a library to parse it for you. A quick Googling reveals libraries for CL, Haskell, Python, JavaScript, Ruby, and so on. It is highly unlikely that you will need to reinvent this wheel.
If you really have to write it from scratch, I recommend writing a proper parser. With this technique, you won't have to escape things with MD5 hashes. (I agree that if you have to do something like this, it's time to reconsider your design.)
There are libraries available in a number of languages, including PHP, Ruby, Java, C#, and JavaScript. I'd suggest looking at some of these for ideas.
It depends on which language you wish to use; there will be idiomatic and non-idiomatic ways to implement it in each.
Regexes work in Perl, because Perl and regexes are best friends.
Markdown is a JAWL (just another wiki language).
There are plenty of open-source wikis out there whose parser code you can examine. Most use regexes.
Check out the ScrewTurn wiki; it has an interesting multi-pass formatter pipeline, a very nice technique. See /core/Formatter.cs and /core/FormatterPipeline.cs.
The best approach is to use or join an existing project; these sorts of things are always much harder than they appear.
Here you can find a JavaScript implementation of Markdown. It also relies heavily on regular expressions, as this is just the fastest and easiest way to parse the text.
But it spares you the MD5 part.
I cannot help directly with the coding of the parsing, but maybe this link can help you one way or another.

How can I use NLP to parse recipe ingredients?

I need to parse recipe ingredients into amount, measurement, item, and description, as applicable to the line, e.g. "1 cup flour", "the peel of 2 lemons", "1 cup packed brown sugar", etc. What would be the best way of doing this? I am interested in using Python for the project, so I am assuming NLTK is the best bet, but I am open to other languages.
I actually do this for my website, which is now part of an open source project for others to use.
I wrote a blog post on my techniques, enjoy!
http://blog.kitchenpc.com/2011/07/06/chef-watson/
The New York Times faced this problem when they were parsing their recipe archive. They used an NLP technique called linear-chain conditional random fields (CRF). This blog post provides a good overview:
"Extracting Structured Data From Recipes Using Conditional Random Fields"
They open-sourced their code, but quickly abandoned it. I maintain the most up-to-date version of it and I wrote a bit about how I modernized it.
If you're looking for a ready-made solution, several companies offer ingredient parsing as a service:
Zestful (full disclosure: I'm the author)
Spoonacular
Edamam
I guess this is a few years late, but I was thinking of doing something similar myself and came across this, so I thought I might have a stab at it in case it's useful to anyone else in future.
Even though you say you want to parse free text, most recipes have a pretty standard format for their ingredient lists: each ingredient is on a separate line, and exact sentence structure is rarely all that important. The range of vocabulary is relatively small as well.
One way might be to check each line for words which might be nouns and words/symbols which express quantities. I think WordNet may help with seeing whether a word is likely to be a noun or not, but I've not used it myself. Alternatively, you could use http://en.wikibooks.org/wiki/Cookbook:Ingredients as a word list, though again, I wouldn't know exactly how comprehensive it is.
The other part is to recognise quantities. These come in a few different forms, but few enough that you could probably create a list of keywords. In particular, make sure you have good error reporting. If the program can't fully parse a line, get it to report back to you what that line is, along with what it has/hasn't recognised so you can adjust your keyword lists accordingly.
Aaanyway, I'm not guaranteeing any of this will work (and it's almost certain not to be 100% reliable), but that's how I'd start to approach the problem.
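For what it's worth, here is a hedged Python sketch of that line-by-line approach, using NLTK's WordNet interface for the is-this-probably-a-noun test and reporting lines it can't handle; the unit list is deliberately tiny and the whole thing is only a starting point:

from fractions import Fraction
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet') once

UNITS = {'cup', 'cups', 'tsp', 'tbsp', 'oz', 'lb', 'g', 'kg', 'ml', 'l', 'dash', 'pinch'}

def parse_line(line):
    amount, unit, nouns, leftover = None, None, [], []
    for tok in line.lower().split():
        if amount is None and tok.replace('/', '').replace('.', '').isdigit():
            amount = float(Fraction(tok))  # handles '2', '0.5' and '1/2'
        elif unit is None and tok in UNITS:
            unit = tok
        elif wn.synsets(tok, pos=wn.NOUN):
            nouns.append(tok)  # likely part of the ingredient name
        else:
            leftover.append(tok)
    if amount is None or not nouns:
        # the error reporting mentioned above: show what wasn't recognised
        print('could not fully parse %r (unrecognised: %s)' % (line, leftover))
    return amount, unit, ' '.join(nouns)

parse_line('1 cup flour') returns (1.0, 'cup', 'flour'); anything it can't make sense of gets reported, so you can tune the keyword lists as you go.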
This is an incomplete answer, but you're looking at writing a free-text parser, which, as you know, is non-trivial :)
Some ways to cheat, using knowledge specific to cooking:
Construct lists of words for the "adjectives" and "verbs", and filter against them
measurement units form a closed set, using words and abbreviations like {L., c, cup, t, dash}
instructions -- cut, dice, cook, peel. Things that come after this are almost certain to be ingredients
Remember that you're mostly looking for nouns, and you can take a labeled list of non-nouns (from WordNet, for example) and filter against them.
If you're more ambitious, you can look in the NLTK Book at the chapter on parsers.
Good luck! This sounds like a mostly doable project!
Can you be more specific about what your input is? If you just have input like this:
1 cup flour
2 lemon peels
1 cup packed brown sugar
It won't be too hard to parse it without using any NLP at all.
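Indeed, for input that regular, a single regular expression gets you most of the way. A minimal sketch (the unit alternation is obviously incomplete and would grow with your data):

import re

LINE = re.compile(r'(?P<amount>[\d./]+)\s+(?P<unit>cups?|tsp|tbsp|lbs?|oz)?\s*(?P<item>.+)')

for line in ['1 cup flour', '2 lemon peels', '1 cup packed brown sugar']:
    m = LINE.match(line)
    if m:
        print(m.group('amount'), '|', m.group('unit'), '|', m.group('item'))

This prints 1 | cup | flour, then 2 | None | lemon peels, then 1 | cup | packed brown sugar. The hard cases, like "the peel of 2 lemons", are where NLP starts to pay off.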
