While computing FOLLOW sets, rules such as A -> aA can lead to infinite recursion. Is there any coding technique to avoid it?
Note that this is just an illustrative example; in practice such recursion can also arise indirectly.
Here is my sample C code for finding follow sets. The grammar is stored as an array of linked lists. Please tell me if the code is unclear at any point.
set findFollowSet(char nonTerminal[], Grammar G, hashTable2 h) //later assume that all first sets are already in the hashtable.
{
    LINK temp1 = find2(h, nonTerminal);
    set s = createEmptySet();
    set temp = createEmptySet();
    char lhs[80] = "\0";
    int i;

    //special case
    if (temp1->numRightSideOf == 0) //its not on right side of any grammar rule
        return insert(s, "$");

    for (i = 0; i < temp1->numRightSideOf; i++)
    {
        link l = G.rules[temp1->rightSideOf[i]];
        strcpy(lhs, l->symbol); //storing the lhs just in case the nonTerm appears on the rightmost end of the rule.
        printf("!!!!! %s\n", lhs);
        sleep(1);

        //finding nonTerminal in G
        while (l != NULL)
        {
            if (strcmp(l->symbol, nonTerminal) == 0)
                break;
            l = l->next;
        }

        //found the nonTerminal in G
        if (l->next != NULL)
        {
            temp = findFirstSet(l->next, G, h);
            temp = removeElement(temp, "EPSILON");
        }
        else //its on the rightmost end of the rule
            temp = findFollowSet(lhs, G, h);

        s = setUnion(s, temp);
        destroySet(temp);
    }
    return s;
}
FIRST and FOLLOW sets are defined recursively, so you need to compute the recursive closure. What this means in practice is that you don't find the FOLLOW set for a single non-terminal -- you find the FOLLOW sets for all the non-terminals simultaneously, by starting with all sets empty and going over the grammar adding symbols to the different sets, until no more symbols can be added to any set. So you end up with something like:
FOLLOW[*] = {};   // all FOLLOW sets start empty
done = false;
while (!done)
    done = true;
    for (R : each rule in the grammar)
        A = LHS[R];                 // the non-terminal on the left of rule R
        tmp = FOLLOW[A];
        for (S : each symbol in RHS[R], from right to left)
            if (S is terminal)
                tmp = {S}
            else
                if (!(FOLLOW[S] contains tmp))
                    done = false
                FOLLOW[S] |= tmp
                if (epsilon in FIRST[S])
                    tmp |= FIRST[S] - epsilon
                else
                    tmp = FIRST[S]
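For reference, here is a minimal, self-contained C sketch of that fixed-point loop. It deliberately does not reuse the linked-list Grammar/hashTable2 types from the question; it assumes a toy representation in which non-terminals are upper-case letters of a hard-coded expression grammar, any other character is a terminal, rules are strings of the form "E=TX", and sets are plain bitmaps indexed by character, so all of those names and choices are illustrative only. Nullability is tracked separately instead of putting EPSILON inside the FIRST sets.

/* Fixed-point FOLLOW computation over a tiny hard-coded grammar (sketch).
 * Non-terminals are 'A'..'Z'; any other character is a terminal.
 * "E=TX" means E -> T X; "X=" means X -> epsilon; '$' is end-of-input. */
#include <stdio.h>
#include <string.h>

#define NSYM 128

static const char *rules[] = { "E=TX", "X=+TX", "X=", "T=FY", "Y=*FY", "Y=", "F=(E)", "F=i" };
static const int nrules = sizeof rules / sizeof rules[0];

static int first[NSYM][NSYM], follow[NSYM][NSYM], nullable[NSYM];

static int isNonTerm(int c) { return c >= 'A' && c <= 'Z'; }

/* add every symbol of src to dst; returns 1 if dst changed */
static int addSet(int dst[], const int src[]) {
    int c, changed = 0;
    for (c = 0; c < NSYM; c++)
        if (src[c] && !dst[c]) { dst[c] = 1; changed = 1; }
    return changed;
}

int main(void) {
    int c, r, i, changed;

    for (c = 0; c < NSYM; c++)                 /* FIRST of a terminal is itself */
        if (!isNonTerm(c)) first[c][c] = 1;

    do {                                       /* fixed point: nullable and FIRST */
        changed = 0;
        for (r = 0; r < nrules; r++) {
            int lhs = rules[r][0], allNullable = 1;
            const char *rhs = rules[r] + 2;
            for (i = 0; rhs[i] && allNullable; i++) {
                changed |= addSet(first[lhs], first[(int)rhs[i]]);
                allNullable = nullable[(int)rhs[i]];
            }
            if (allNullable && !nullable[lhs]) { nullable[lhs] = 1; changed = 1; }
        }
    } while (changed);

    follow['E']['$'] = 1;                      /* FOLLOW(start symbol) gets the end marker */

    do {                                       /* fixed point: FOLLOW */
        changed = 0;
        for (r = 0; r < nrules; r++) {
            int tmp[NSYM];
            const char *rhs = rules[r] + 2;
            memcpy(tmp, follow[(int)rules[r][0]], sizeof tmp);  /* what can follow the whole RHS */
            for (i = (int)strlen(rhs) - 1; i >= 0; i--) {
                int s = rhs[i];
                if (!isNonTerm(s)) {                            /* terminal: it now precedes us */
                    memset(tmp, 0, sizeof tmp);
                    tmp[s] = 1;
                } else {
                    changed |= addSet(follow[s], tmp);
                    if (nullable[s]) addSet(tmp, first[s]);     /* S may vanish: keep old tmp too */
                    else             memcpy(tmp, first[s], sizeof tmp);
                }
            }
        }
    } while (changed);

    for (c = 'A'; c <= 'Z'; c++) {
        int t, any = 0;
        for (t = 0; t < NSYM; t++) any |= follow[c][t];
        if (!any) continue;
        printf("FOLLOW(%c) = {", c);
        for (t = 0; t < NSYM; t++) if (follow[c][t]) printf(" %c", t);
        printf(" }\n");
    }
    return 0;
}

Because every pass only ever adds symbols to sets, the loop is guaranteed to terminate, which is exactly how the fixed point sidesteps the infinite recursion of the A -> aA case.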
OK, I got the answer, but it's inefficient.
So if anyone wants to suggest a more efficient approach, please feel welcome.
Just store the recursion stack explicitly and, at each recursive call, check whether the entry already exists on the stack.
Mind you, you need to check the entire stack, not just its top.
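Sketched in C, the idea looks roughly like the following. This is not a drop-in patch for the findFollowSet in the question; it uses a made-up dependency table (deps) just to show where the explicit stack and the whole-stack check go, with FOLLOW(A) and FOLLOW(B) depending on each other so that an unguarded recursion would never terminate.

/* Toy illustration: guard a recursive FOLLOW-style walk with an explicit
 * stack of the non-terminals currently being expanded. deps[X] lists the
 * non-terminals whose FOLLOW set FOLLOW(X) depends on. */
#include <stdio.h>

static const char *deps[26];           /* deps['A' - 'A'] = "B", etc. */
static char stack[26];
static int depth = 0;

static int onStack(char nt) {
    for (int i = 0; i < depth; i++)    /* check the whole stack, not just its top */
        if (stack[i] == nt) return 1;
    return 0;
}

static void expandFollow(char nt) {
    if (onStack(nt)) {                 /* already being expanded: cut the cycle */
        printf("cycle detected at %c, stopping\n", nt);
        return;
    }
    stack[depth++] = nt;
    printf("expanding FOLLOW(%c)\n", nt);
    for (const char *d = deps[nt - 'A']; d && *d; d++)
        expandFollow(*d);
    depth--;
}

int main(void) {
    deps['A' - 'A'] = "B";             /* FOLLOW(A) depends on FOLLOW(B) ...   */
    deps['B' - 'A'] = "A";             /* ... and FOLLOW(B) depends on FOLLOW(A) */
    expandFollow('A');
    return 0;
}

Memoizing completed results (not just cutting cycles) is the natural next step; otherwise the same FOLLOW sets get recomputed many times.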
I'm working on a language similar to Ruby called Gaiman, and I'm using PEG.js to generate the parser.
Do you know if there is a way to implement heredocs with proper indentation?
xxx = <<<END
    hello
    world
END
the output should be:
"hello
world"
I need this because this code doesn't look very nice:
def foo(arg) {
    if arg == "here" then
        return <<<END
xxx
xxx
END
    end
end
this is a function where the user wants to return:
"xxx
xxx"
I would prefer the code to look like this:
def foo(arg) {
    if arg == "here" then
        return <<<END
        xxx
        xxx
        END
    end
end
If I simply trim all the lines, the user will not be able to use a string with leading spaces when they want to. Does anyone know if PEG.js allows this?
I don't have any code for heredocs yet; I just want to be sure that what I want is possible.
EDIT:
So I've tried to implement heredocs and the problem is that PEG doesn't allow back-references.
heredoc = "<<<" marker:[\w]+ "\n" text:[\s\S]+ marker {
return text.join('');
}
It says that the marker is not defined. As for trimming, I think I can use the location() function.
I don't think that's a reasonable expectation for a parser generator; few if any would be equal to the challenge.
For a start, recognising the here-string syntax is inherently context-sensitive, since the end-delimiter must be a precise copy of the delimiter provided after the <<< token. So you would need a custom lexical analyser, and that means that you need a parser generator which allows you to use a custom lexical analyser. (So a parser generator which assumes you want a scannerless parser might not be the optimal choice.)
Recognising the end of the here-string token shouldn't be too difficult, although you can't do it with a single regular expression. My approach would be to use a custom scanning function which breaks the here-string into a series of lines, concatenating them as it goes until it reaches a line containing only the end-delimiter.
Once you've recognised the text of the literal, all you need to normalise the spaces in the way you want is the column number at which the <<< starts. With that, you can trim each line in the string literal. So you only need a lexical scanner which accurately reports token position. Trimming wouldn't normally be done inside the generated lexical scanner; rather, it would be the associated semantic action. (Equally, it could be a semantic action in the grammar. But it's always going to be code that you write.)
When you trim the literal, you'll need to deal with the cases in which it is impossible, because the user has not respected the indentation requirement. And you'll need to do something with tab characters; getting those right probably means that you'll want a lexical scanner which computes visible column positions rather than character offsets.
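To make that concrete, here is a small C sketch of such a scanning helper (not peg.js, and not tied to any particular parser generator): it collects lines until it finds one consisting only of the end delimiter, and strips up to indent leading spaces from each line, where indent is the column at which the <<< token appeared. The function name, the space-only indentation handling, and the error convention are all assumptions made for the sake of the example.

/* Illustrative scanner for a here-string body: collect lines until a line
 * that contains only the end delimiter, stripping up to `indent` leading
 * spaces from every line (indent = column at which "<<<" appeared). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *scan_heredoc(const char *src, const char *delim, int indent) {
    char *out = calloc(strlen(src) + 1, 1);    /* output never exceeds input */
    size_t outlen = 0;
    while (*src) {
        const char *eol = strchr(src, '\n');
        size_t len = eol ? (size_t)(eol - src) : strlen(src);

        /* end-delimiter line: only spaces followed by the delimiter */
        const char *p = src;
        while (*p == ' ') p++;
        if (strncmp(p, delim, strlen(delim)) == 0 && (size_t)(p - src) + strlen(delim) == len)
            return out;

        /* strip up to `indent` leading spaces, keep anything beyond that */
        int skip = 0;
        while (skip < indent && skip < (int)len && src[skip] == ' ') skip++;
        if (outlen) out[outlen++] = '\n';
        memcpy(out + outlen, src + skip, len - skip);
        outlen += len - skip;

        if (!eol) break;
        src = eol + 1;
    }
    free(out);
    return NULL;                               /* unterminated here-string */
}

int main(void) {
    const char *body = "        hello\n        world\n    END\n";
    char *s = scan_heredoc(body, "END", 8);    /* "<<<" started at column 8 */
    printf("%s\n", s ? s : "(error)");
    free(s);
    return 0;
}

A production version would also have to decide what to do about tabs and about lines indented less than the <<< column, as noted above.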
I don't know if peg.js corresponds with those requirements, since I don't use it. (I did look at the documentation, and failed to see any indication as to how you might incorporate a custom scanner function. But that doesn't mean there isn't a way to do it.) I hope that the discussion above at least lets you check the detailed documentation for the parser generator you want to use, and otherwise find a different parser generator which will work for you in this use case.
Here is an implementation of heredocs in Peggy, the successor to PEG.js (which is not maintained anymore). This code was based on the GitHub issue.
heredoc = "<<<" begin:marker "\n" text:($any_char+ "\n")+ _ end:marker (
&{ return begin === end; }
/ '' { error(`Expected matched marker "${begin}", but marker "${end}" was found`); }
) {
const loc = location();
const min = loc.start.column - 1;
const re = new RegExp(`\\s{${min}}`);
return text.map(line => {
return line[0].replace(re, '');
}).join('\n');
}
any_char = (!"\n" .)
marker_char = (!" " !"\n" .)
marker "Marker" = $marker_char+
_ "whitespace"
= [ \t\n\r]* { return []; }
EDIT: the above didn't work with another piece of code after the heredoc; here is a better grammar:
{ let heredoc_begin = null; }

heredoc = "<<<" beginMarker "\n" text:content endMarker {
    const loc = location();
    const min = loc.start.column - 1;
    const re = new RegExp(`^\\s{${min}}`, 'mg');
    return {
        type: 'Literal',
        value: text.replace(re, '')
    };
}

__ = (!"\n" !" " .)
marker 'Marker' = $__+
beginMarker = m:marker { heredoc_begin = m; }
endMarker = "\n" " "* end:marker &{ return heredoc_begin === end; }
content = $(!endMarker .)*
I store various formulas in Postgres and I want to use those formulas in my code. It would look something like this:
var amount = 100;
var formula = '5/105'; // normally something I would fetch from Postgres
var total = amount * formula; // should return 4.76
Is there a way to evaluate the string in this manner?
As far as I'm aware, there isn't a formula solver package developed for Dart yet. (If one exists or gets created after this post, we can edit it into the answer.)
EDIT: Mattia in the comments points out the math_expressions package, which looks pretty robust and easy to use.
There is a way to execute arbitrary Dart code as a string, but it has several problems. A] It's very roundabout and convoluted; B] it becomes a massive security issue; and C] it only works if the Dart is compiled in JIT mode (so in Flutter this means it will only work in debug builds, not release builds).
So the answer is that unfortunately, you will have to implement it yourself. The good news is that, for simple 4-function arithmetic, this is pretty straightforward, and you can follow a tutorial on writing a calculator app like this one to see how it's done.
Of course, if all your formulas only contain two terms with an operator between them like in your example snippet, it becomes even easier. You can do the whole thing in just a few lines of code:
void main() {
  final amount = 100;
  final formula = '5/105';
  final pattern = RegExp(r'(\d+)([\/+*-])(\d+)');
  final match = pattern.firstMatch(formula);
  final value = process(num.parse(match[1]), match[2], num.parse(match[3]));
  final total = amount * value;
  print(total); // Prints: 4.761904761904762
}

num process(num a, String operator, num b) {
  switch (operator) {
    case '+': return a + b;
    case '-': return a - b;
    case '*': return a * b;
    case '/': return a / b;
  }
  throw ArgumentError(operator);
}
There are a few packages that can be used to accomplish this:
pub.dev/packages/function_tree
pub.dev/packages/math_expressions
pub.dev/packages/expressions
I used function_tree as follows:
double amount = 100.55;
String formula = '5/105*.5'; // From Postgres
final tax = amount * formula.interpret();
I haven't tried it, but using math_expressions it should look like this:
double amount = 100.55;
String formula = '5/105*.5'; // From Postgres
Parser p = Parser();
// Context is used to evaluate variables, can be empty in this case.
ContextModel cm = ContextModel();
Expression exp = p.parse(formula) * p.parse(amount.toString());
// or..
//Expression exp = p.parse(formula) * Number(amount);
double result = exp.evaluate(EvaluationType.REAL, cm);
// Result: 2.394047619047619
print('Result: ${result}');
Thanks to fkleon for the math_expressions help.
I'm giving Kotlin a go; coding away contentedly, I have an ArrayList of chars which I want to classify depending on how brackets are matched:
(abcde)  // ok, characters other than brackets can go anywhere
)abcde(  // ok, matching the brackets 'invertedly' is ok
(({()})) // ok
)()()([] // ok
([)]     // bad, can't have different overlapping bracket pairs
(((((    // bad, all brackets need to have a match
My solution came out like this (recursive):
//charList is a property
//Recursion starter-upper
private fun classifyListOfCharacters(): Boolean {
    var j = 0
    while (j < charList.size) {
        if (charList[j].isBracket()) {
            j = checkMatchingBrackets(j + 1, charList[j])
        }
        j++
    }
    return j == charList.size
}

private fun checkMatchingBrackets(i: Int, firstBracket: Char): Int {
    var j = i
    while (j < charList.size) {
        if (charList[j].isBracket()) {
            if (charList[j].matchesBracket(firstBracket)) {
                return j // Matched bracket, normal or inverted
            }
            j = checkMatchingBrackets(j + 1, charList[j])
        }
        j++
    }
    return j
}
This works, but is this how you do it in Kotlin? It feels like I've coded Java in Kotlin syntax.
I found this: "Functional languages are better at recursion". I've tried thinking in terms of manipulating functions and passing them down the recursion, but to no avail. I'd be glad to be pointed in the right direction with code or some pseudo-code of a possible refactoring.
(Omitted some extension methods regarding brackets, I think it's clear what they do)
Another, possibly simpler, approach to this problem is to maintain a stack of brackets while you iterate over the characters.
When you encounter another bracket:
If it matches the top of the stack, you pop the top of the stack;
If it does not match the top of the stack (or the stack is empty), you push it onto the stack.
If any brackets remain on the stack at the end, it means they are unmatched, and the answer is false. If the stack ends up empty, the answer is true.
This is correct, because a bracket at position i in a sequence can match another one at position j, only if there's no unmatched bracket of a different kind between them (at position k, i < k < j). The stack algorithm simulates exactly this logic of matching.
Basically, this algorithm could be implemented in a single for-loop:
val stack = Stack<Char>()
for (c in charList) {
    if (!c.isBracket())
        continue
    if (stack.isNotEmpty() && c.matchesBracket(stack.peek())) {
        stack.pop()
    } else {
        stack.push(c)
    }
}
return stack.isEmpty()
I've reused your extensions c.isBracket(...) and c.matchesBracket(...). The Stack<T> is a JDK class.
This algorithm hides the recursion and the brackets nesting inside the abstraction of the brackets stack. Compare: your current approach implicitly uses the function call stack instead of the brackets stack, but the purpose is the same: it either finds a match for the top character or makes a deeper recursive call with another character on top.
Hotkey's answer (using a for loop) is great. However, you asked for an optimized recursion solution. Here is an optimized tail recursive function (Note the tailrec modifier before the function):
tailrec fun isBalanced(input: List<Char>, stack: Stack<Char>): Boolean = when {
    input.isEmpty() -> stack.isEmpty()
    else -> {
        val c = input.first()
        if (c.isBracket()) {
            if (stack.isNotEmpty() && c.matchesBracket(stack.peek())) {
                stack.pop()
            } else {
                stack.push(c)
            }
        }
        isBalanced(input.subList(1, input.size), stack)
    }
}

fun main(args: Array<String>) {
    println("check: ${isBalanced("(abcde)".toList(), Stack())}")
}
This function calls itself until the input becomes empty and returns true if the stack is empty when the input becomes empty.
If we look at the decompiled Java equivalent of the generated bytecode, this recursion has been optimized to an efficient while loop by the compiler so we won't get StackOverflowException (removed Intrinsics null checks):
public static final boolean isBalanced(@NotNull String input, @NotNull Stack stack) {
    while(true) {
        CharSequence c = (CharSequence)input;
        if(c.length() == 0) {
            return stack.isEmpty();
        }
        char c1 = StringsKt.first((CharSequence)input);
        if(isBracket(c1)) {
            Collection var3 = (Collection)stack;
            if(!var3.isEmpty() && matchesBracket(c1, ((Character)stack.peek()).charValue())) {
                stack.pop();
            } else {
                stack.push(Character.valueOf(c1));
            }
        }
        input = StringsKt.drop(input, 1);
    }
}
I'm trying to write what I would think of as an extremely simple piece of code in Rascal: Testing if list A contains list B.
Starting out with some very basic code to create a list of strings
public list[str] makeStringList(int Start, int End)
{
  return [ "some string with number <i>" | i <- [Start..End]];
}
public list[str] toTest = makeStringList(0, 200000);
My first try was 'inspired' by the sorting example in the tutor:
public void findClone(list[str] In, str S1, str S2, str S3, str S4, str S5, str S6)
{
  switch(In)
  {
    case [*str head, str i1, str i2, str i3, str i4, str i5, str i6, *str tail]:
    {
      if(S1 == i1 && S2 == i2 && S3 == i3 && S4 == i4 && S5 == i5 && S6 == i6)
      {
        println("found duplicate\n\t<i1>\n\t<i2>\n\t<i3>\n\t<i4>\n\t<i5>\n\t<i6>");
      }
      fail;
    }
    default:
      return;
  }
}
Not very pretty, but I expected it to work. Unfortunately, the code runs for about 30 seconds before crashing with an "out of memory" error.
I then tried a better looking alternative:
public void findClone2(list[str] In, list[str] whatWeSearchFor)
{
  for ([*str head, *str mid, *str end] := In)
    if (mid == whatWeSearchFor)
      println("gotcha");
}
with approximately the same result (seems to run a little longer before running out of memory)
Finally, I tried a 'good old' C-style approach with a for-loop
public void findClone3(list[str] In, list[str] whatWeSearchFor)
{
  cloneLength = size(whatWeSearchFor);
  inputLength = size(In);
  if(inputLength < cloneLength) return [];
  loopLength = inputLength - cloneLength + 1;
  for(int i <- [0..loopLength])
  {
    isAClone = true;
    for(int j <- [0..cloneLength])
    {
      if(In[i+j] != whatWeSearchFor[j])
        isAClone = false;
    }
    if(isAClone) println("Found clone <whatWeSearchFor> on lines <i> through <i+cloneLength-1>");
  }
}
To my surprise, this one works like a charm. No out of memory, and results in seconds.
I get that my first two attempts probably create a lot of temporary string objects that all have to be garbage collected, but I can't believe that the only solution that worked really is the best solution.
Any pointers would be greatly appreciated.
My relevant eclipse.ini settings are
-XX:MaxPermSize=512m
-Xms512m
-Xss64m
-Xmx1G
We'll need to look to see why this is happening. Note that, if you want to use pattern matching, this is maybe a better way to write it:
public void findClone(list[str] In, str S1, str S2, str S3, str S4, str S5, str S6) {
  switch(In) {
    case [*str head, S1, S2, S3, S4, S5, S6, *str tail]: {
      println("found duplicate\n\t<S1>\n\t<S2>\n\t<S3>\n\t<S4>\n\t<S5>\n\t<S6>");
    }
    default:
      return;
  }
}
If you do this, you are taking advantage of Rascal's matcher to actually find the matching strings directly, versus your first example in which any string would match but then you needed to use a number of separate comparisons to see if the match represented the combination you were looking for. If I run this on 110145 through 110150 it takes a while but works and it doesn't seem to grow beyond the heap space you allocated to it.
Also, is there a reason you are using fail? Is this to continue searching?
It's an algorithmic issue like Mark Hills said. In Rascal some short code can still entail a lot of nested loops, almost implicitly. Basically every * splice operator on a fresh variable that you use on the pattern side in a list generates one level of loop nesting, except for the last one which is just the rest of the list.
In your code of findClone2 you are first generating all combinations of sublists and then filtering them using the if construct. So that's a correct algorithm, but probably slow. This is your code:
void findClone2(list[str] In, list[str] whatWeSearchFor)
{
  for ([*str head, *str mid, *str end] := In)
    if (mid == whatWeSearchFor)
      println("gotcha");
}
You see how it has a nested loop over In, because it has two effective * operators in the pattern. The code runs therefore in O(n^2), where n is the length of In. I.e. it has quadratic runtime behaviour for the size of the In list. In is a big list so this matters.
In the following new code, we filter first while generating answers, using fewer lines of code:
public void findCloneLinear(list[str] In, list[str] whatWeSearchFor)
{
  for ([*str head, *whatWeSearchFor, *str end] := In)
    println("gotcha");
}
The second * operator does not generate a new loop because it is not fresh. It just "pastes" the given list values into the pattern. So now there is actually only one effective * which generates a loop, namely the first one on head. That one makes the algorithm loop over the list. The second * tests whether the elements of whatWeSearchFor are all right there in the list after head (this is linear in the size of whatWeSearchFor), and the last * just completes the list, allowing more elements to follow.
It's also nice to know where the clone is sometimes:
public void findCloneLinear(list[str] In, list[str] whatWeSearchFor)
{
  for ([*head, *whatWeSearchFor, *_] := In)
    println("gotcha at <size(head)>");
}
Rascal does not have an optimising compiler (yet) which might internally transform your algorithms into equivalent optimised ones. So as a Rascal programmer you are still expected to know the effect of loops on your algorithm's complexity, and to know that * is a very short notation for a loop.
I've been trying to implement a BASIC language interpreter (in C/C++) but I haven't found any book or (thorough) article which explains the process of parsing the language constructs. Some commands are rather complex and hard to parse, especially conditionals and loops, such as IF-THEN-ELSE and FOR-STEP-NEXT, because they can mix variables with constants and entire expressions and code and everything else, for example:
10 IF X = Y + Z THEN GOTO 20 ELSE GOSUB P
20 FOR A = 10 TO B STEP -C : PRINT C$ : PRINT WHATEVER
30 NEXT A
It seems like a nightmare to be able to parse something like that and make it work. And to make things worse, programs written in BASIC can easily be a tangled mess. That's why I need some advice, read some book or whatever to make my mind clear about this subject. What can you suggest?
You've picked a great project - writing interpreters can be lots of fun!
But first, what do we even mean by an interpreter? There are different types of interpreters.
There is the pure interpreter, where you simply interpret each language element as you find it. These are the easiest to write, and the slowest.
A step up, would be to convert each language element into some sort of internal form, and then interpret that. Still pretty easy to write.
The next step, would be to actually parse the language, and generate a syntax tree, and then interpret that. This is somewhat harder to write, but once you've done it a few times, it becomes pretty easy.
Once you have a syntax tree, you can fairly easily generate code for a custom stack virtual machine. A much harder project is to generate code for an existing virtual machine, such as the JVM or CLR.
In programming, like most engineering endeavors, careful planning greatly helps, especially with complicated projects.
So the first step is to decide which type of interpreter you wish to write. If you have not read any of a number of compiler books (e.g., I always recommend Niklaus Wirth's "Compiler Construction" as one of the best introductions to the subject, and is now freely available on the web in PDF form), I would recommend that you go with the pure interpreter.
But you still need to do some additional planning. You need to rigorously define what it is you are going to be interpreting. EBNF is great for this. For a gentle introduction to EBNF, read the first three parts of a Simple Compiler at http://www.semware.com/html/compiler.html It is written at the high school level, and should be easy to digest. Yes, I tried it on my kids first :-)
Once you have defined what it is you want to be interpreting, you are ready to write your interpreter.
Abstractly, your simple interpreter will be divided into a scanner (technically, a lexical analyzer), a parser, and an evaluator. In the simple pure interpreter case, the parser and evaluator will be combined.
Scanners are easy to write, and easy to test, so we won't spend any time on them. See the aforementioned link for info on crafting a simple scanner.
Let's (for example) define your goto statement:
gotostmt -> 'goto' integer
integer -> [0-9]+
This tells us that when we see the token 'goto' (as delivered by the scanner), the only thing that can follow is an integer. And an integer is simply a string of digits.
In pseudo code, we might handle this as so:
(token is the current token, i.e. the element just returned by the scanner)
loop
    if token == "goto"
        goto_stmt()
    elseif token == "gosub"
        gosub_stmt()
    elseif token == .....
endloop

proc goto_stmt()
    expect("goto")  -- redundant, but used to skip over goto
    if is_numeric(token)
        -- now, somehow set the instruction pointer at the requested line
    else
        error("expecting a line number, found '%s'\n", token)
    end
end

proc expect(s)
    if s == token
        getsym()
        return true
    end
    error("Expecting '%s', found: '%s'\n", s, token)
end
See how simple it is? Really, the only hard thing to figure out in a simple interpreter is the handling of expressions. A good recipe for handling those is at: http://www.engr.mun.ca/~theo/Misc/exp_parsing.htm Combined with the aforementioned references, you should have enough to handle the sort of expressions you would encounter in BASIC.
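To give a flavour of what that expression code ends up looking like, here is a tiny, self-contained recursive-descent evaluator for integer expressions with + - * / and parentheses. It is only an illustration of the shape of the solution (the grammar, the names, and the lack of error handling are all simplifications), not part of the interpreter below.

/* Tiny recursive-descent expression evaluator:
 * expr -> term {(+|-) term}, term -> factor {(*|/) factor},
 * factor -> number | '(' expr ')'. */
#include <stdio.h>
#include <stdlib.h>

static const char *p;                  /* cursor into the expression text */

static long expr(void);

static long factor(void) {
    while (*p == ' ') p++;
    if (*p == '(') {
        p++;
        long v = expr();
        while (*p == ' ') p++;
        if (*p == ')') p++;            /* a real parser would report an error here otherwise */
        return v;
    }
    char *end;
    long v = strtol(p, &end, 10);
    p = end;
    return v;
}

static long term(void) {
    long v = factor();
    for (;;) {
        while (*p == ' ') p++;
        if (*p == '*')      { p++; v *= factor(); }
        else if (*p == '/') { p++; v /= factor(); }
        else return v;
    }
}

static long expr(void) {
    long v = term();
    for (;;) {
        while (*p == ' ') p++;
        if (*p == '+')      { p++; v += term(); }
        else if (*p == '-') { p++; v -= term(); }
        else return v;
    }
}

int main(void) {
    p = "1 + 2 * (3 + 4) - 10 / 2";
    printf("%ld\n", expr());           /* prints 10 */
    return 0;
}

Each precedence level gets its own little function, which is why adding comparison or logical operators to a BASIC interpreter is mostly a matter of adding one more layer.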
Ok, time for a concrete example. This is from a larger 'pure interpreter' that handles an enhanced version of Tiny BASIC (but big enough to run Tiny Star Trek :-) ).
/*------------------------------------------------------------------------
    Simple example, pure interpreter, only supports 'goto'
------------------------------------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <setjmp.h>
#include <ctype.h>

enum {False=0, True=1, Max_Lines=300, Max_Len=130};

char *text[Max_Lines+1];   /* array of program lines */
int textp;                 /* used by scanner - ptr in current line */
char tok[Max_Len+1];       /* the current token */
int cur_line;              /* the current line number */
int ch;                    /* current character */
int num;                   /* populated if token is an integer */
jmp_buf restart;

int error(const char *fmt, ...) {
    va_list ap;
    char buf[200];
    va_start(ap, fmt);
    vsprintf(buf, fmt, ap);
    va_end(ap);
    printf("%s\n", buf);
    longjmp(restart, 1);
    return 0;
}

int is_eol(void) {
    return ch == '\0' || ch == '\n';
}

void get_ch(void) {
    ch = text[cur_line][textp];
    if (!is_eol())
        textp++;
}

void getsym(void) {
    char *cp = tok;
    while (ch <= ' ') {
        if (is_eol()) {
            *cp = '\0';
            return;
        }
        get_ch();
    }
    if (isalpha(ch)) {
        for (; !is_eol() && isalpha(ch); get_ch()) {
            *cp++ = (char)ch;
        }
        *cp = '\0';
    } else if (isdigit(ch)) {
        for (; !is_eol() && isdigit(ch); get_ch()) {
            *cp++ = (char)ch;
        }
        *cp = '\0';
        num = atoi(tok);
    } else
        error("What? '%c'", ch);
}

void init_getsym(const int n) {
    cur_line = n;
    textp = 0;
    ch = ' ';
    getsym();
}

void skip_to_eol(void) {
    tok[0] = '\0';
    while (!is_eol())
        get_ch();
}

int accept(const char s[]) {
    if (strcmp(tok, s) == 0) {
        getsym();
        return True;
    }
    return False;
}

int expect(const char s[]) {
    return accept(s) ? True : error("Expecting '%s', found: %s", s, tok);
}

int valid_line_num(void) {
    if (num > 0 && num <= Max_Lines)
        return True;
    return error("Line number must be between 1 and %d", Max_Lines);
}

void goto_line(void) {
    if (valid_line_num())
        init_getsym(num);
}

void goto_stmt(void) {
    if (isdigit(tok[0]))
        goto_line();
    else
        error("Expecting line number, found: '%s'", tok);
}

void do_cmd(void) {
    for (;;) {
        while (tok[0] == '\0') {
            if (cur_line == 0 || cur_line >= Max_Lines)
                return;
            init_getsym(cur_line + 1);
        }
        if (accept("bye")) {
            printf("That's all folks!\n");
            exit(0);
        } else if (accept("run")) {
            init_getsym(1);
        } else if (accept("goto")) {
            goto_stmt();
        } else {
            error("Unknown token '%s' at line %d", tok, cur_line);
            return;
        }
    }
}

int main() {
    int i;
    for (i = 0; i <= Max_Lines; i++) {
        text[i] = calloc(sizeof(char), (Max_Len + 1));
    }
    setjmp(restart);
    for (;;) {
        printf("> ");
        while (fgets(text[0], Max_Len, stdin) == NULL)
            ;
        if (text[0][0] != '\0') {
            init_getsym(0);
            if (isdigit(tok[0])) {
                if (valid_line_num())
                    strcpy(text[num], &text[0][textp]);
            } else
                do_cmd();
        }
    }
}
Hopefully, that will be enough to get you started. Have fun!
I will certainly get beaten up for telling this... but:
First, I am actually working on a standalone library (as a hobby) that is made of:
a tokenizer, building a linear (flat) list of tokens from the source text, following the same sequence as the text (lexemes created from the text flow);
a hand-written parser (syntax analysis; pseudo-compiler).
There is no "pseudo-code" nor "virtual CPU/machine".
Instructions (such as 'return', 'if', 'for', 'while'..., as well as arithmetic expressions) are represented by a base C++ struct/class and are the object itself. The base object, which I call atom, has a virtual method named eval(), among other common members, and that method is the 'execution/branch' by itself. So whether I have an 'if' statement with its possible branchings (a single statement or a block of statements/instructions) for the true or false condition, it will be invoked through the base virtual atom::eval()... and so on for everything that is an atom.
Even 'objects' such as variables are atoms. eval() will simply return the value from a variant container held by the atom itself (a pointer referring either to the 'local' variant instance held by the atom, or to another variant held by an atom created in a given block/stack). So atoms are 'in-place' instructions/objects.
As of now, as an example, a chunk of not-really-meaningful 'code' like the one below just works:
r = 5!; // 5! : (factorial of 5)
Response = 1 + 4 - 6 * --r * ((3+5)*(3-4) * 78);
if (Response != 1){ /* '<>' also is not-equal op. */
    return r^3;
}
else{
    return 0;
}
Expressions (arithmetic) are built into a binary expression tree:
A = b+c;  =>

      =
     / \
    A   +
       / \
      b   c
So the 'instruction'/statement for an expression like the one above is the tree-entry atom, which in this case is the '=' (binary) operator.
The tree is built with atom::r0, r1, r2:

atom 'A':

    r0
    |
    A
   / \
  r1  r2
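If it helps to see that shape in code, here is a minimal C sketch of an eval-able node: each node evaluates itself, and an assignment like A = b + c is just the '=' node at the root of a small tree. This is only an illustration of the idea, not the atom class from the library described above.

/* Minimal analogue of an eval-able expression node: each node evaluates
 * itself, so "A = b + c" is the '=' node at the root of a small tree. */
#include <stdio.h>

typedef struct Node Node;
struct Node {
    char op;                 /* '=', '+', or 0 for a leaf */
    double value;            /* leaf value, or result slot for an '=' target */
    Node *left, *right;      /* roughly r1 / r2 in the description above */
};

static double eval(Node *n) {
    switch (n->op) {
    case '+': return eval(n->left) + eval(n->right);
    case '=': n->left->value = eval(n->right); return n->left->value;
    default:  return n->value;          /* leaf: just yield its value */
    }
}

int main(void) {
    Node b = {0, 2.0, NULL, NULL}, c = {0, 3.0, NULL, NULL};
    Node A = {0, 0.0, NULL, NULL};
    Node plus   = {'+', 0.0, &b, &c};
    Node assign = {'=', 0.0, &A, &plus};
    printf("A = %g\n", eval(&assign)); /* prints A = 5 */
    return 0;
}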
Regarding the 'full-duplex' mechanism between the C++ runtime and the 'script' library, I've made class_adaptor and adaptor<>:
ex.:
template<typename R, typename ...Args> adaptor_t<T,R, Args...>& import_method(const lstring& mname, R (T::*prop)(Args...)) { ... }
template<typename R, typename ...Args> adaptor_t<T,R, Args...>& import_property(const lstring& mname, R (T::*prop)(Args...)) { ... }
Second: I know there are plenty of tools and libs out there such as Lua, boost::bind<*>, QML, JSON, etc. But in my situation, I need to create my very own [edit] 'independent' [/edit] lib for "live scripting". I was scared that my 'interpreter' could take a huge amount of RAM, but I am surprised that it is not as big as using QML, JScript or even Lua :-)
Thank you :-)
Don't bother with hacking a parser together by hand. Use a parser generator. lex + yacc is the classic lexer/parser generator combination, but a Google search will reveal plenty of others.