I want to allow mathematical variables in my Lemon parser-driven app. For example, if the user enters x^2+y, I want to then be able to evaluate this for 100000 different pairs of values of x and y, hopefully without having to reparse each time. The only way I can think of to do that is to have the parser generate a tree of objects, which then evaluates the expression when given the input. Is there a better/simpler/faster way?
Performance may be an issue here. But I also care about ease of coding and code upkeep.
If you want the most maintainable code, evaluate the expression as you parse. Don't build a tree.
If you want to re-execute the expression a lot, and the expression is complicated, you'll need to avoid reparsing. Your options, in order of most to least maintainable: build a tree and evaluate it, generate threaded code and evaluate, generate native code and evaluate.
If the expressions are generally as simple as your example, a recursive descent hand-coded parser that evaluates on the fly will likely be very fast, and work pretty well, even for 100,000 iterations. Such parsers will likely take much less time to execute than Lemon.
That is indeed how you would typically do it, unless you want to generate actual (real or virtual) code. x and y would just be variables in your case so you would fill in the actual values and then call your Evaluate function to evaluate the expression. The tree node would then contain pointers to the variables x and y, and so on. No need to parse it for each pair of test values.
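For concreteness, here is a minimal sketch of that tree-of-objects approach, in Python for brevity (a Lemon-based app would build the same structure in C from the parser's semantic actions). The class and method names are illustrative, not any particular library's API:

class Num:
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value

class Var:
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]     # look up the current value

class BinOp:
    OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '^': lambda a, b: a ** b}
    def __init__(self, op, left, right):
        self.fn, self.left, self.right = BinOp.OPS[op], left, right
    def eval(self, env):
        return self.fn(self.left.eval(env), self.right.eval(env))

# x^2 + y, built once by the parser's semantic actions:
expr = BinOp('+', BinOp('^', Var('x'), Num(2)), Var('y'))

# ... then evaluated 100,000 times without reparsing:
results = [expr.eval({'x': x, 'y': x + 1}) for x in range(100000)]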
I'm looking for an algorithm to help me predict the next token given a string/prefix and a context-free grammar.
My first question is: what is the exact structure representing a CFG? It seems to be a tree, but what type of tree? I ask because the leaves are always ordered; is there such a thing as an ordered tree?
Maybe if I knew the correct structure, I could find an algorithm for bottom-up search.
If it is not exactly a search problem, then the closest thing it looks like is parsing the prefix string and then generating the next token. How do I do that?
Any ideas?
My current generated grammar is simple: it has no OR rules (except that when I decide to reuse the grammar for new sequences, it will have them). It is generated by the Sequitur algorithm and is a so-called SLG (single-line grammar). But if I generate it from many sequences, the top rule will look like this:
S : S1 z S3 | u S2 .. S5 S1 | S4 S2 .. |... | Sn
S1 : a b
S2 : h u y
...
i.e. a top-heavy SLG: except for the top rule, no rule has an OR (|).
As a side note, I'm thinking of ways to convert it to a Prolog and/or DCG program; maybe there is an easier way to do what I want there? What do you think?
TL;DR: In the abstract, this is a hard problem. But it can be pretty simple for a given grammar. Everything depends on the nature of the grammar.
The basic algorithm indeed starts by using some parsing algorithm on the prefix. A rough prediction can then be made by attempting to continue the parse with each possible token, retaining only those which do not produce immediate errors.
That will certainly give you a list which includes all of the possible continuations. But the list may also include tokens which cannot appear in a correct input. Indeed, it is possible that the correct list is empty (because the given prefix is not the prefix of any correct input); this will happen if the parsing algorithm is unable to correctly verify whether a token sequence is a possible prefix.
In part, this will depend on the grammar itself. If the grammar is LR(1), for example, then the LR(1) parsing algorithm can precisely identify the continuation set. If the grammar is LR(k) for some k>1, then it is theoretically possible to produce an LR(1) grammar for the same language, but the resulting grammar might be impractically large. Otherwise, you might have to settle for "false positives". That might be acceptable if your goal is to provide tab-completion, but in other circumstances it might not be so useful.
The precise data structure used to perform the internal parse and exploration of alternatives will depend on the parsing algorithm used. Many parsing algorithms, including the standard LR parsing algorithm whose internal data structure is a simple stack, feature a mutable internal state which is not really suitable for the exploration step; you could adapt such an algorithm by making a copy of the entire internal data structure (that is, the stack) before proceeding with each trial token. Alternatively, you could implement a copy-on-write stack. But the parser stack is not usually very big, so copying it each time is generally feasible. (That's what Bison does to produce expanded error messages with an "expected token" list, and it doesn't seem to trigger unacceptable runtime overhead in practice.)
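As a Python sketch of that trial-and-copy loop, assuming a hypothetical parser object whose stack can be snapshotted with copy.deepcopy and whose feed() raises SyntaxError on a token that cannot follow the prefix (both names are placeholders for whatever your parser exposes):

import copy

def possible_next_tokens(parser_state, vocabulary):
    # For each candidate token, snapshot the parser's internal stack and
    # see whether feeding the token succeeds; keep the ones that do.
    candidates = []
    for token in vocabulary:
        trial = copy.deepcopy(parser_state)   # copy the stack before the trial
        try:
            trial.feed(token)                 # assumed to raise SyntaxError
            candidates.append(token)
        except SyntaxError:
            pass                              # token cannot follow this prefix
    return candidates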
Alternatively, you could use some variant of CYK chart parsing, or a general algorithm such as Earley or GLR parsing, whose internal data structures can be implemented in a way which doesn't involve destructive modification. Such algorithms are generally used for grammars which are not LR(1), since they can cope with any CFG, although highly ambiguous grammars can take a long time to parse (proportional to the cube of the input length). As mentioned above, though, you will get false positives from such algorithms.
If false positives are unacceptable, then you could use some kind of heuristic search to attempt to find an input sequence which completes the trial prefix. This can in theory take quite a long time, but for many grammars a breadth-first search can find a completion within a reasonable time, so you could terminate the search after a given maximum time. This will not produce false positives, but the time limit might prevent it from finding the complete set of possible continuations.
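A sketch of that bounded breadth-first completion search, reusing the same hypothetical copy/feed interface plus an assumed accepted() end-of-input test, with a node budget standing in for the time limit:

from collections import deque
import copy

def has_completion(parser_state, vocabulary, max_nodes=10000):
    # Breadth-first search for any token sequence that completes the
    # trial prefix; gives up after max_nodes expansions (the "time limit").
    queue = deque([parser_state])
    expanded = 0
    while queue and expanded < max_nodes:
        state = queue.popleft()
        expanded += 1
        if state.accepted():                  # assumed end-of-input test
            return True
        for token in vocabulary:
            trial = copy.deepcopy(state)
            try:
                trial.feed(token)
                queue.append(trial)
            except SyntaxError:
                pass
    return False                              # none found within the budget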
I have created a grammar to read a file of equations and then created AST nodes for each rule. My question is: how can I do simplification or substitute values in the equations that the parser reads correctly? At which stage: before creating the AST nodes, or after?
Please provide me with ideas or tutorials to follow.
Thank you.
I'm assuming your equations are something like simple polynomials over real-valued variables, like X^2+3*Y^2.
You ask for two different solutions to two different problems that start with having an AST for at least one equation:
How to "substitute values" into the equation and compute the resulting value, e.g, for X==3 and Y=2, substitute into the AST for the formula above and compute 3^2+3*2^2 --> 21
How to do simplification: I assume you mean algebraic simplification.
The first problem of substituting values is fairly easy if you already have the AST. (If not, parse the equation to produce the AST first!) Then all you have to do is walk the AST, replacing every leaf node containing a variable name with the corresponding value, and then doing arithmetic on any parent nodes whose children now happen to be numbers; you repeat this until no more nodes can be arithmetically evaluated. Basically you wire simple arithmetic into a tree evaluation scheme.
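Here is a small Python sketch of that substitute-then-fold walk, over a tuple-encoded AST (the encoding is my own, chosen just for illustration):

import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '^': operator.pow}

def substitute(node, env):
    if isinstance(node, str):                 # a variable leaf
        return env.get(node, node)            # unknown variables stay symbolic
    if isinstance(node, tuple):               # an operator node
        return (node[0],) + tuple(substitute(c, env) for c in node[1:])
    return node                               # a numeric leaf

def fold(node):
    if not isinstance(node, tuple):
        return node
    op, *kids = node
    kids = [fold(k) for k in kids]
    if all(isinstance(k, (int, float)) for k in kids):
        return OPS[op](*kids)                 # all children are numbers: compute
    return (op, *kids)

ast = ('+', ('^', 'X', 2), ('*', 3, ('^', 'Y', 2)))   # X^2+3*Y^2
print(fold(substitute(ast, {'X': 3, 'Y': 2})))        # prints 21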
Sometimes your evaluation will reduce the tree to a single value, as in the example, and you can print the numeric result. My SO answer shows how to do that in detail. You can easily implement this yourself in a small project, even using JavaCC/JJTree appropriately adapted.
Sometimes the formula will end up in a state where no further arithmetic on it is possible, e.g., 1+x+y with x==0 and nothing known about y; then the result of such a substitution/arithmetic-evaluation process will be 1+y. Unfortunately, you will only have this as an AST... now you need to print out the resulting AST for the user to see the result. This is harder; see my SO answer on how to prettyprint a tree. This is considerably more work, but if you restrict your trees to just polynomial expressions, you can still do it in a small project. JavaCC will help you with parsing, but provides zero help with prettyprinting.
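For a taste of why, here is a deliberately naive printer over the same tuple encoding from the sketch above; it fully parenthesizes instead of tracking operator precedence, which is exactly the kind of work that makes real prettyprinting harder:

def to_text(node):
    # Naive: parenthesizes every operator node rather than reasoning
    # about precedence and associativity.
    if not isinstance(node, tuple):
        return str(node)
    op, left, right = node
    return "(" + to_text(left) + " " + op + " " + to_text(right) + ")"

# 1+x+y after substituting x==0 and folding comes back as ('+', 1, 'y'):
print(to_text(('+', 1, 'y')))                 # prints (1 + y)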
The second problem is much harder, because you must not only accomplish variable substitution and arithmetic evaluation as above, but you have to somehow encode knowledge of algebraic laws, and how to match those laws to complex trees. You might hardwire one or two algebraic laws (e.g., x+0 -> x; y-y -> 0) but hardwiring many laws this way will produce an impossible mess because of how they interact.
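A toy Python version of those two hardwired laws (over the same tuple AST as above) shows the shape of the problem; even here the rules have to be applied bottom-up, and every added law multiplies the interactions to consider:

def rewrite(node):
    if not isinstance(node, tuple):
        return node
    op, *kids = node
    kids = [rewrite(k) for k in kids]          # rewrite children bottom-up
    if op == '+' and kids[1] == 0:             # law: x + 0 -> x
        return kids[0]
    if op == '-' and kids[0] == kids[1]:       # law: y - y -> 0
        return 0
    return (op, *kids)

print(rewrite(('+', ('-', 'y', 'y'), 0)))      # (y-y)+0 -> 0
# Note ('+', 0, 'x') would NOT simplify: only x+0 is hardwired, not 0+x --
# exactly the kind of interaction mess that grows with every added law.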
JavaCC might form part of such an answer, but only a small part; the rest of the solution is hard enough so you are better off looking for an alternative rather than trying to build it all on top of JavaCC.
You need a more organized approach for this: a Program Transformation System (PTS). A typical PTS will let you specify a grammar for an arbitrary language (in your case, simply polynomials), automatically parse instances to ASTs, and regenerate valid text from the AST. A good PTS will let you write source-to-source transformation rules that it applies automatically to the instance AST; in your case you'd write down the algebraic laws as source-to-source rules and then the PTS does all the work.
An example is too long to provide here. But here I describe how to define formulas suitable for early calculus classes, and how to define algebraic rules that simplify such formulas, including applying some basic calculus derivative laws.
With sufficient/significant effort, you can build your own PTS on top of JavaCC/JJTree, but this is likely to take a few man-years. It is easier to get a PTS than to repeat all that work.
I want to parse some grammar like the following
OUTPUT data
GROUPBY key
TO location
USING object
The order of the GROUPBY/TO/USING clauses is allowed to vary, but each clause can occur at most once.
Is there a convenient or built-in way to parse this in FParsec? I read some questions and answers that mention Haskell Parsec's permute. There doesn't seem to be a permute in FParsec. If this is the way to go, how would I go about building a permute in FParsec?
I don't think there's a permutation parser in FParsec. I see a few directions you could take it though.
In general, what @FuleSnabel suggests is pretty sound, and probably simplest to implement. Don't make the parser responsible for asserting the property that each clause appears at most once. Instead parse each clause separately, allowing for duplicates, then inspect the resulting AST and error out if your property doesn't hold.
You could generate all permutations of your parsers and combine them with choice. Obviously this approach doesn't scale, but for three parsers I'd say it's fair game.
You could write your own primitive for parsing with a collection of parsers applied in any order. This would be a variant of many where in each step you choose one of the remaining parsers, then discard it; so in each step you choose from a shrinking list of parsers until you can't parse anymore, finally returning the results collected along the way. (A sketch of this primitive appears after this list of options.)
You could use user state to keep track of parsers already used and fail if a parser would be used twice within the same context. Not sure if this would yield a particularly nice solution - haven't really tried it before.
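To make the "shrinking list of parsers" idea concrete, here is a language-neutral sketch in Python rather than FParsec; parsers are modeled as functions from text to (value, rest) or None, which is my own simplification, not FParsec's actual parser type:

def perm_many(parsers, text):
    # Repeatedly try each remaining parser; when one succeeds, record its
    # result and drop it from the pool, so each runs at most once, in any order.
    pool, results = list(parsers), []
    progressed = True
    while pool and progressed:
        progressed = False
        for p in pool:
            outcome = p(text)
            if outcome is not None:
                value, text = outcome
                results.append(value)
                pool.remove(p)
                progressed = True
                break
    return results, text

def kw(word):                                  # toy parser for one keyword
    def p(text):
        t = text.lstrip()
        return (word, t[len(word):]) if t.startswith(word) else None
    return p

print(perm_many([kw("GROUPBY"), kw("TO"), kw("USING")], "TO GROUPBY USING"))
# prints (['TO', 'GROUPBY', 'USING'], '')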
The problem is the following:
I developed an expression evaluation engine that provides an XPath-like language so the user can build the expressions. These expressions are then parsed and stored as an expression tree. There are many kinds of expressions, including logical (and/or/not), relational (=, !=, >, <, >=, <=), arithmetic (+, -, *, /) and if/then/else expressions.
Besides these operations, the expression can have constants (numbers, strings, dates, etc) and also access to external information by using a syntax similar to XPath to navigate in a tree of Java objects.
Given the above, we can build expressions like:
/some/value and /some/other/value
/some/value or /some/other/value
if (<evaluate some expression>) then
<evaluate some other expression>
else
<do something else>
Since the then-part and the else-part of an if-then-else expression are expressions themselves, and everything is considered to be an expression, anything can appear there, including other if-then-else's, allowing the user to build large decision trees by nesting them.
As these expressions are built manually and prone to human error, I decided to build an automatic learning process capable of optimizing these expression trees based on the analysis of common external data. For example: in the first expression above (/some/value and /some/other/value), if the result of /some/other/value is false most of the times, we can rearrange the tree so this branch will be the left branch to take advantage of short-circuit evaluation (the right side of the AND is not evaluated since the left side already determined the result).
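That particular rearrangement can be as simple as sorting an AND node's operands by how often each evaluated to false in profiling runs; a hypothetical Python sketch (the rates are invented sample data):

def reorder_and(operands, false_rate):
    # Put the operand most likely to be false first, so it short-circuits
    # the rest of the AND as often as possible.
    return sorted(operands, key=false_rate, reverse=True)

# Hypothetical profiling data: how often each subexpression came out false.
rates = {'/some/value': 0.1, '/some/other/value': 0.9}
print(reorder_and(['/some/value', '/some/other/value'], rates.get))
# prints ['/some/other/value', '/some/value']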
Another possible optimization is to rearrange nested if-then-else expressions (decision trees) so the most frequent path taken, based on the most common external data used, will be executed sooner in the future, avoiding unnecessary evaluation of some branches most of the times.
Do you have any ideas on what would be the best or recommended approach/algorithm to use to perform this automatic refactoring of these expression trees?
I think what you are describing is compiler optimization, which is a huge subject covering everything from
inline expansion
dead-code elimination
constant propagation
loop transformation
Basically you have a lot of rewrite rules that are guaranteed to preserve the functionality of the code/XPath.
On the question of rearranging the nested if-else, I don't think you need to resort to machine learning.
One (I think optimal) approach would be to use Huffman coding of your paths.
Treat each path as a letter, encode the letters with Huffman coding, and you get a so-called Huffman tree. This tree will need the fewest evaluations when run on a (large enough) sample with the same distribution you built the Huffman tree from.
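A minimal Python sketch of building such a Huffman tree from observed path frequencies (the frequencies are invented sample data; counters are added so heapq never has to compare the tree tuples themselves):

import heapq

def huffman_tree(path_freqs):
    # Start with one (frequency, tiebreaker, path) entry per path.
    heap = [(f, i, path) for i, (path, f) in enumerate(path_freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)         # two least frequent subtrees
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (a, b)))
        counter += 1
    return heap[0][2]

# Hypothetical profiling counts for four decision-tree paths:
print(huffman_tree({'A': 50, 'B': 30, 'C': 15, 'D': 5}))
# prints ('A', (('D', 'C'), 'B')) -- the most frequent path sits nearest the root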
If you have restrictions on the "evaluate some expression" expressions, or they have different computational costs, etc., you probably need another approach.
And remember, as always when it comes to optimization you should be careful and only do things that really matter.
The title sums it up. Presumably anything that can be done with source-code-generating parser-generators (which essentially hard-code the grammar-to-be-parsed into the program) can be done with a configurable parser (which would maintain the grammar-to-be-parsed soft-coded as a data structure).
I suppose the hard-coded code-generated-parser will have a performance bonus with one less level of indirection, but the messiness of having to compile and run it (or to exec() it in dynamic languages) and the overall clunkiness of code-generation seems quite a big downside. Are there any other benefits of code-generating your parsers that I'm not aware of?
Most of the places I see code generation used is to work around limitations in the meta-programming ability of the languages (i.e. web frameworks, AOP, interfacing with databases), but the whole lex-parse thing seems pretty straightforward and static, not needing any of the extra metaprogramming dynamism that you get from code-generation. What gives? Is the performance benefit that great?
If all you want is a parser that you can configure by handing it grammar rules, that can be accomplished. An Earley parser will parse any context-free language given just a set of rules. The price is significant execution time: O(N^3), where N is the length of the input. If N is large (as it is for many parseable entities), you can end up with Very Slow parsing.
And this is the reason for a parser generator (PG). If you parse a lot of documents, Slow Parsing is bad news. Compilers are one place where people parse a lot of documents, and no programmer (or his manager) wants the programmer waiting for the compiler. There are lots of other things to parse: SQL queries, JSON documents, ... all of which have this "Nobody is willing to wait" property.
What PGs do is take many decisions that would otherwise have to occur at runtime (e.g., in an Earley parser), and precompute those results at parser-generation time. So an LALR(1) PG (e.g., Bison) will produce parsers that run in O(N) time, and that's obviously a lot faster in practical circumstances. (ANTLR does something similar for LL(k) parsers.) If you want full context-free parsing that is usually linear, you can use a variant of LR parsing called GLR parsing; this buys you the convenience of a "configurable" (Earley-style) parser, with much better typical performance.
This idea of precomputing in advance is generally known as partial evaluation: given a function F(x, y), and knowledge that x is always a certain constant x0, compute a new function F'(y) = F(x0, y) in which decisions and computations depending solely on the value of x are precomputed. F' usually runs a lot faster than F. In our case, F is something like generic parsing (e.g., an Earley parser), x is a grammar argument with x0 being a specific grammar, and F' is some parser infrastructure P plus the additional code/tables computed by the PG, such that F' = PG(x0) + P.
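As a toy Python illustration of that shape (nothing like a real parser generator; the trivial first-token dispatch table merely stands in for the LALR tables a real PG would emit):

def make_parser(grammar):                      # runs at "generation" time
    # Precompute a first-token -> rule dispatch table from the grammar
    # (the fixed argument x0); this is the work paid once, up front.
    table = {rhs[0]: (lhs, rhs) for lhs, rhs in grammar}
    def parse(tokens):                         # F'(y): only input-dependent work
        return table.get(tokens[0])            # O(1) dispatch, no grammar search
    return parse

grammar = [('stmt', ('if', 'expr', 'then', 'stmt')),
           ('stmt', ('print', 'expr'))]
parse = make_parser(grammar)                   # pay the grammar cost once
print(parse(['print', 'x']))                   # prints ('stmt', ('print', 'expr'))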
In the comments to your question, there seems to be some interest in why one doesn't just run the parser generator in effect at runtime. The simple answer is, it pays a significant part of the overhead cost you want to get rid of at runtime.