I want to match a string that starts with '#' and then matches everything up to the next occurrence of the character that follows the '#'. This can be achieved with capturing groups like this: #(.)[^(?1)]*(?1) (EDIT: this regex is also erroneous). It should match #$foo$, fail to match #%bar&, and match only the first 6 characters of #"foo"bar.
But since (f)lex does not support capturing groups, what is the workaround here?
As you say, (f)lex does not support capturing groups, and it certainly doesn't support backreferences.
So there is no simple workaround, but there are workarounds. Here are a few possibilities:
You can read the input one character at a time using the input() function until you find the matching character (but you have to maintain your own buffer to store the characters, because characters read by input() are not added to the current token). This is not very efficient, since reading one character at a time is a bit clunky, but it's the only interface (f)lex offers. (The following snippet assumes you have some kind of expandable string builder; if you are using C++, you could just use a std::string.)
#.    { StringBuilder sb = string_builder_new();
        int delim = yytext[1];
        for (;;) {
          int next = input();
          if (next == delim) break;
          if (next == EOF)  { /* Signal error */ break; }
          string_builder_addchar(sb, next);  /* note: pass the builder itself */
        }
        yylval = string_builder_release(sb);
        return DELIMITED_STRING;
      }
Even less efficiently, but perhaps more conveniently, you can get (f)lex to accumulate the characters in yytext using yymore(), matching one character at a time in a start condition:
%x DELIMITED
%%
    int delim;

#.                 { delim = yytext[1]; BEGIN(DELIMITED); }
<DELIMITED>.|\n    { if (yytext[yyleng - 1] == delim) {
                       yytext[yyleng - 1] = '\0';  /* drop the closing delimiter */
                       yylval = strdup(yytext);
                       BEGIN(INITIAL);
                       return DELIMITED_STRING;
                     }
                     yymore();
                   }
<DELIMITED><<EOF>> { /* Signal unterminated string error */ }
The most efficient solution (in (f)lex) is to write one rule for each possible delimiter. While that's a lot of rules, they can easily be generated with a small script in whatever scripting language you prefer. And, actually, there are not that many rules, particularly if you don't allow alphabetic and non-printing characters to be delimiters. This has the additional advantage that if you want Perl-like parenthetic delimiters (#(Hello) instead of #(Hello(), you can just modify the individual pattern to suit (as I've done below). [Note 1] Since all the actions are the same, it might be easier to use a macro for the action, making it easier to modify.
/* Ordinary punctuation */
#:[^:]*: { yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }
#![^!]*! { yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }
#\.[^.]*\. { yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }
/* Matched pairs */
#<[^>]*> { yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }
#\[[^]]*] { yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }
/* Trap errors */
# { /* Report unmatched or invalid delimiter error */ }
If I were writing a script to generate these rules, I would use hexadecimal escapes for all the delimiter characters rather than trying to figure out which ones needed escapes.
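For instance, a throwaway generator along these lines would do (a C++ sketch, not from the original answer; it only emits the simple symmetric rules, so you would delete or adjust the matched-pair delimiters by hand afterwards):

#include <cctype>
#include <cstdio>

int main() {
    // Emit one (f)lex rule per printable, non-alphanumeric delimiter.
    // Hex escapes (\xNN) sidestep the question of which characters
    // would need escaping in a (f)lex pattern.
    for (int c = 0x21; c < 0x7F; ++c) {
        if (std::isalnum(c) || c == '#') continue;  // not usable as delimiters here
        std::printf("#\\x%02x[^\\x%02x]*\\x%02x  "
                    "{ yylval = strndup(yytext + 2, yyleng - 3); return DELIMITED_STRING; }\n",
                    c, c, c);
    }
    return 0;
}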
Notes:
Perl requires nested parentheses to be balanced in constructs like that. But you can't do that with regular expressions; if you wanted to reproduce Perl's behaviour, you'd need to use some variation on one of the other suggestions. I'll try to revisit this answer later to address that feature.
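For what it's worth, here is a rough sketch of what such a variation might look like, using a start condition and a depth counter (untested, not from the original answer, and it only covers the #(...) form):

%x PAREN_DELIM
%%
    int depth;

#\(                   { depth = 1; BEGIN(PAREN_DELIM); }
<PAREN_DELIM>\(       { ++depth; yymore(); }
<PAREN_DELIM>\)       { if (--depth == 0) {
                          yytext[yyleng - 1] = '\0';  /* drop the closing ')' */
                          yylval = strdup(yytext);
                          BEGIN(INITIAL);
                          return DELIMITED_STRING;
                        }
                        yymore();
                      }
<PAREN_DELIM>[^()]+   { yymore(); }
<PAREN_DELIM><<EOF>>  { /* Signal unterminated string error */ }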
I am trying to design a parser using Ragel, with C++ as the host language.
There is a particular case where a parameter can be defined in two formats:
a. Integer, e.g. SignalValue = 24
b. Hexadecimal, e.g. SignalValue = 0x18
I have the following code to parse such a parameter:
INT = ((digit+)$incr_Count) %get_int >!(int_error); #[0-9]
HEX = (([0].'x'.[0-9A-F]+)$incr_Count) %get_hex >!(hex_error); #[hexadecimal]
SIGNAL_VAL = ( INT | HEX ) %/getSignalValue;
However, with the parser definition above, only the integer values (format a) get recognized and parsed correctly.
If a hexadecimal number (e.g. 0x24) is provided, the value gets stored as '0'. No error is raised for hexadecimal numbers: the parser recognizes them, but the value stored is '0'.
I seem to be missing some minor detail of Ragel. Has anyone faced a similar situation?
The remaining part of the code:
// Global
int lInt = -1;

action incr_Count {
    iGenrlCount++;
}

action get_int {
    int channel = 0xFF;
    std::stringstream str;
    while(iGenrlCount > 0)
    {
        str << *(p - iGenrlCount);
        iGenrlCount--;
    }
    str >> lInt; // push the values
    str.clear();
}

action get_hex {
    std::stringstream str;
    while(iGenrlCount > 0)
    {
        str << std::hex << *(p - iGenrlCount);
        iGenrlCount--;
    }
    str >> lInt; // push the values
}

action getSignalValue {
    cout << "lInt = " << lInt << endl;
}
It's not a problem with your FSM (which looks fine for the task at hand); it's more of a C++ coding issue. Try this implementation of get_hex():
action get_hex {
    std::stringstream str;
    cout << "get_hex()" << endl;
    while(iGenrlCount > 0)
    {
        str << *(p - iGenrlCount);
        iGenrlCount--;
    }
    str >> std::hex >> lInt; // push the values
}
Notice that it uses str just as a string buffer and applies std::hex to the extraction (>>) from std::stringstream to int. So in the end you get:
$ ./a.out 245
lInt = 245
$ ./a.out 0x245
lInt = 581
Which is probably what you want.
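For the record, here is a minimal stand-alone C++ sketch (independent of Ragel, standard library only) showing why the position of std::hex matters:

#include <iostream>
#include <sstream>

int main() {
    int v1 = -1, v2 = -1;

    // Wrong: std::hex on insertion only affects integers written into the
    // buffer; the characters "0x245" are copied through unchanged, and the
    // later plain >> parses decimally, stopping at the 'x' and yielding 0.
    std::stringstream a;
    a << std::hex << "0x245";
    a >> v1;

    // Right: std::hex on extraction makes >> parse the buffered text as a
    // hexadecimal number (the 0x prefix is accepted).
    std::stringstream b;
    b << "0x245";
    b >> std::hex >> v2;

    std::cout << "wrong: " << v1 << ", right: " << v2 << std::endl;
    // prints: wrong: 0, right: 581
    return 0;
}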
Trying to do something similar to this question, except allowing underscores from the second character onwards, not just camel case.
I can test the parser in isolation successfully, but when it is composed into a higher-level parser, I get errors.
Take the following example:
#![allow(dead_code)]
#[macro_use]
extern crate nom;
use nom::*;

type Bytes<'a> = &'a [u8];

#[derive(Clone, PartialEq, Debug)]
pub enum Statement {
    IF,
    ELSE,
    ASSIGN(String),
    IDENTIFIER(String),
    EXPRESSION,
}

fn lowerchar(input: Bytes) -> IResult<Bytes, char> {
    if input.is_empty() {
        IResult::Incomplete(Needed::Size(1))
    } else if (input[0] as char) >= 'a' && 'z' >= (input[0] as char) {
        IResult::Done(&input[1..], input[0] as char)
    } else {
        IResult::Error(error_code!(ErrorKind::Custom(1)))
    }
}

named!(identifier<Bytes, Statement>, map_res!(
    recognize!(do_parse!(
        lowerchar >>
        // alt_complete! is not having the effect it's supposed to,
        // so the alternatives need to be ordered from shortest to longest
        many0!(alt!(
            complete!(is_a!("_"))
          | complete!(take_while!(nom::is_alphanumeric))
        )) >> ()
    )),
    |id: Bytes| {
        //println!("{:?}", std::str::from_utf8(id).unwrap().to_string());
        Result::Ok::<Statement, &str>(
            Statement::IDENTIFIER(std::str::from_utf8(id).unwrap().to_string())
        )
    }
));

named!(expression<Bytes, Statement>, alt_complete!(
    identifier //=> { |e: Statement| e }
    //| assign_expr //=> { |e: Statement| e }
  | if_expr //=> { |e: Statement| e }
));

named!(if_expr<Bytes, Statement>, do_parse!(
    if_statement: preceded!(
        tag!("if"),
        delimited!(tag!("("), expression, tag!(")"))
    ) >>
    //if_statement: delimited!(tag!("("), tag!("hello"), tag!(")")) >>
    if_expr: expression >>
    //else_statement: opt_res!(tag!("else")) >>
    (Statement::IF)
));

#[cfg(test)]
mod tests {
    use super::*;
    use IResult::*;
    //use IResult::Done;

    #[test]
    fn ident_token() {
        assert_eq!(identifier(b"iden___ifiers"), Done::<Bytes, Statement>(b"", Statement::IDENTIFIER("iden___ifiers".to_string())));
        assert_eq!(identifier(b"iden_iFIErs"), Done::<Bytes, Statement>(b"", Statement::IDENTIFIER("iden_iFIErs".to_string())));
        assert_eq!(identifier(b"Iden_iFIErs"), Error(ErrorKind::Custom(1))); // Supposed to fail since not valid
        assert_eq!(identifier(b"_den_iFIErs"), Error(ErrorKind::Custom(1))); // Supposed to fail since not valid
    }

    #[test]
    fn if_token() {
        assert_eq!(if_expr(b"if(a)a"), Error(ErrorKind::Alt)); // Should have passed
        assert_eq!(if_expr(b"if(hello)asdas"), Error(ErrorKind::Alt)); // Should have passed
    }

    #[test]
    fn expr_parser() {
        assert_eq!(expression(b"iden___ifiers"), Done::<Bytes, Statement>(b"", Statement::IDENTIFIER("iden___ifiers".to_string())));
        assert_eq!(expression(b"if(hello)asdas"), Error(ErrorKind::Alt)); // Should have been able to recognise an IF statement via expression parser
    }
}
How would you write a Parsing Expression Grammar in any of the following parser generators (PEG.js, Citrus, Treetop) that can handle Python/Haskell/CoffeeScript-style indentation:
Examples of a not-yet-existing programming language:
square x =
  x * x
cube x =
  x * square x
fib n =
  if n <= 1
    0
  else
    fib(n - 2) + fib(n - 1) # some cheating allowed here with brackets
Update:
Don't try to write an interpreter for the examples above. I'm only interested in the indentation problem. Another example might be parsing the following:
foo
  bar = 1
  baz = 2
tap
  zap = 3
# should yield (ruby style hashmap):
# {:foo => { :bar => 1, :baz => 2}, :tap => { :zap => 3 } }
Pure PEG cannot parse indentation.
But peg.js can.
I did a quick-and-dirty experiment (being inspired by Ira Baxter's comment about cheating) and wrote a simple tokenizer.
For a more complete solution (a complete parser) please see this question: Parse indentation level with PEG.js
/* Initializations */
{
  function start(first, tail) {
    var done = [first[1]];
    for (var i = 0; i < tail.length; i++) {
      done = done.concat(tail[i][1][0])
      done.push(tail[i][1][1]);
    }
    return done;
  }

  var depths = [0];

  function indent(s) {
    var depth = s.length;
    if (depth == depths[0]) return [];
    if (depth > depths[0]) {
      depths.unshift(depth);
      return ["INDENT"];
    }
    var dents = [];
    while (depth < depths[0]) {
      depths.shift();
      dents.push("DEDENT");
    }
    if (depth != depths[0]) dents.push("BADDENT");
    return dents;
  }
}

/* The real grammar */
start   = first:line tail:(newline line)* newline? { return start(first, tail) }
line    = depth:indent s:text { return [depth, s] }
indent  = s:" "* { return indent(s) }
text    = c:[^\n]* { return c.join("") }
newline = "\n" {}
depths is a stack of indentations. indent() gives back an array of indentation tokens and start() unwraps the array to make the parser behave somewhat like a stream.
peg.js produces for the text:
alpha
  beta
  gamma
    delta
epsilon
  zeta
 eta
theta
  iota
these results:
[
"alpha",
"INDENT",
"beta",
"gamma",
"INDENT",
"delta",
"DEDENT",
"DEDENT",
"epsilon",
"INDENT",
"zeta",
"DEDENT",
"BADDENT",
"eta",
"theta",
"INDENT",
"iota",
"DEDENT",
"",
""
]
This tokenizer even catches bad indents.
I think an indentation-sensitive language like that is context-sensitive. I believe PEG can only do context-free languages.
Note that, while nalply's answer is certainly correct that PEG.js can do it via external state (i.e. the dreaded global variables), it can be a dangerous path to walk down (worse than the usual problems with global variables). Some rules can initially match (and then run their actions), but parent rules can fail, thus invalidating the action run. If external state is changed in such an action, you can end up with invalid state. This is super awful, and could lead to tremors, vomiting, and death. Some issues and solutions to this are in the comments here: https://github.com/dmajda/pegjs/issues/45
So what we are really doing here with indentation is creating something like C-style blocks, which often have their own lexical scope. If I were writing a compiler for a language like that, I think I would try to have the lexer keep track of the indentation. Every time the indentation increases, it could insert a '{' token. Likewise, every time it decreases, it could insert a '}' token. Then writing an expression grammar with explicit curly braces to represent lexical scope becomes more straightforward.
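To make that concrete, here is a minimal, hypothetical C++ sketch of such a pre-pass (the braceify name and the details are my illustration, not any particular tool's API; it ignores tabs and does not diagnose bad dedents):

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Rewrite indentation as explicit '{' and '}' tokens so a conventional
// brace-based grammar can parse the result.
std::string braceify(std::istream& in) {
    std::vector<std::size_t> depths{0};  // stack of open indent levels
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line)) {
        std::size_t depth = line.find_first_not_of(' ');
        if (depth == std::string::npos) continue;  // skip blank lines
        if (depth > depths.back()) {               // deeper: open a block
            depths.push_back(depth);
            out << "{\n";
        }
        while (depth < depths.back()) {            // shallower: close blocks
            depths.pop_back();
            out << "}\n";
        }
        out << line.substr(depth) << '\n';
    }
    while (depths.size() > 1) {                    // close anything still open
        depths.pop_back();
        out << "}\n";
    }
    return out.str();
}

int main() {
    std::istringstream src("foo\n  bar = 1\n  baz = 2\ntap\n  zap = 3\n");
    std::cout << braceify(src);
    // prints: foo { bar = 1 baz = 2 } tap { zap = 3 }  (one token per line)
    return 0;
}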
You can do this in Treetop by using semantic predicates. In this case you need a semantic predicate that detects the closing of a white-space-indented block, due to the occurrence of another line with the same or less indentation. The predicate must count the indentation from the opening line, and return true (block closed) if the current line's indentation ends at the same or a shorter length. Because the closing condition is context-dependent, it must not be memoized.
Here's the example code I'm about to add to Treetop's documentation. Note that I've overridden Treetop's SyntaxNode inspect method to make it easier to visualise the result.
grammar IndentedBlocks
  rule top
    # Initialise the indent stack with a sentinel:
    &{|s| @indents = [-1] }
    nested_blocks
    {
      def inspect
        nested_blocks.inspect
      end
    }
  end

  rule nested_blocks
    (
      # Do not try to extract this semantic predicate into a new rule.
      # It will be memo-ized incorrectly because @indents.last will change.
      !{|s|
        # Peek at the following indentation:
        save = index; i = _nt_indentation; index = save
        # We're closing if the indentation is less or the same as our enclosing block's:
        closing = i.text_value.length <= @indents.last
      }
      block
    )*
    {
      def inspect
        elements.map{|e| e.block.inspect}*"\n"
      end
    }
  end

  rule block
    indented_line       # The block's opening line
    &{|s|               # Push the indent level to the stack
      level = s[0].indentation.text_value.length
      @indents << level
      true
    }
    nested_blocks       # Parse any nested blocks
    &{|s|               # Pop the indent stack
      # Note that under no circumstances should "nested_blocks" fail, or the stack will be mis-aligned
      @indents.pop
      true
    }
    {
      def inspect
        indented_line.inspect +
          (nested_blocks.elements.size > 0 ? (
            "\n{\n" +
            nested_blocks.elements.map { |content|
              content.block.inspect+"\n"
            }*'' +
            "}"
          )
          : "")
      end
    }
  end

  rule indented_line
    indentation text:((!"\n" .)*) "\n"
    {
      def inspect
        text.text_value
      end
    }
  end

  rule indentation
    ' '*
  end
end
Here's a little test driver program so you can try it easily:
require 'polyglot'
require 'treetop'
require 'indented_blocks'
parser = IndentedBlocksParser.new
input = <<END
def foo
  here is some indented text
    here it's further indented
    and here the same
      but here it's further again
    and some more like that
  before going back to here
    down again
back twice
and start from the beginning again
  with only a small block this time
END
parse_tree = parser.parse input
p parse_tree
I know this is an old thread, but I just wanted to add some PEG.js code to the answers. This code will parse a piece of text and "nest" it into a sort of AST-ish structure. It only goes one level deep and it looks ugly; furthermore, it does not really use the return values to create the right structure, but keeps an in-memory tree of your syntax and returns that at the end. This may well become unwieldy and cause some performance issues, but at least it does what it's supposed to.
Note: Make sure you have tabs instead of spaces!
{
  var indentStack = [],
      rootScope = {
        value: "PROGRAM",
        values: [],
        scopes: []
      };

  function addToRootScope(text) {
    // Here we wiggle with the form and append the new
    // scope to the rootScope.
    if (!text) return;
    if (indentStack.length === 0) {
      rootScope.scopes.unshift({
        text: text,
        statements: []
      });
    }
    else {
      rootScope.scopes[0].statements.push(text);
    }
  }
}

/* Add some grammar */

start
  = lines: (line EOL+)*
    {
      return rootScope;
    }

line
  = line: (samedent t:text { addToRootScope(t); }) &EOL
  / line: (indent t:text { addToRootScope(t); }) &EOL
  / line: (dedent t:text { addToRootScope(t); }) &EOL
  / line: [ \t]* &EOL
  / EOF

samedent
  = i:[\t]* &{ return i.length === indentStack.length; }
    {
      console.log("s:", i.length, " level:", indentStack.length);
    }

indent
  = i:[\t]+ &{ return i.length > indentStack.length; }
    {
      indentStack.push("");
      console.log("i:", i.length, " level:", indentStack.length);
    }

dedent
  = i:[\t]* &{ return i.length < indentStack.length; }
    {
      for (var j = 0; j < i.length + 1; j++) {
        indentStack.pop();
      }
      console.log("d:", i.length + 1, " level:", indentStack.length);
    }

text
  = numbers: number+ { return numbers.join(""); }
  / txt: character+ { return txt.join(""); }

number
  = $[0-9]

character
  = $[ a-zA-Z->+]

__
  = [ ]+

_
  = [ ]*

EOF
  = !.

EOL
  = "\r\n"
  / "\n"
  / "\r"
I have a string,
e.g.: "We prefer questions that can be answered; not just discussed"
Now I want to split this string on ";",
like
We prefer questions that can be answered
and
not just discussed
Is this possible in DXL?
I am learning DXL, so I don't have any idea whether strings can be split or not.
Note: This is not homework.
I'm sorry for necroing this post. Being new to DXL, I spent some time on the same challenge. I noticed that the implementations available on the web use different specifications of "splitting" a string. Loving the Ruby language, I missed an implementation that comes at least close to the Ruby version of String#split.
Maybe my findings will be helpful to somebody.
Here's a functional comparison of
Variant A: niol's implementation (which, at first glance, appears to be the same implementation usually found at Capri Soft),
Variant B: PJT's implementation,
Variant C: Brett's implementation, and
Variant D: my implementation (which, in my opinion, provides the correct functionality).
To eliminate structural differences, all implementations were wrapped in functions returning a Skip list or an Array.
Splitting results
Note that all implementations return different results, depending on their definition of "splitting":
string mellow yellow; delimiter ello
splitVariantA returns 1 elements: ["mellow yellow" ]
splitVariantB returns 2 elements: ["m" "llow yellow" ]
splitVariantC returns 3 elements: ["w" "w y" "" ]
splitVariantD returns 3 elements: ["m" "w y" "w" ]
string now's the time; delimiter (a single space)
splitVariantA returns 3 elements: ["now's" "the" "time" ]
splitVariantB returns 2 elements: ["" "now's the time" ]
splitVariantC returns 5 elements: ["time" "the" "" "now's" "" ]
splitVariantD returns 3 elements: ["now's" "the" "time" ]
string 1,2,,3,4,,; delimiter ,
splitVariantA returns 4 elements: ["1" "2" "3" "4" ]
splitVariantB returns 2 elements: ["1" "2,,3,4,," ]
splitVariantC returns 7 elements: ["" "" "4" "3" "" "2" "" ]
splitVariantD returns 7 elements: ["1" "2" "" "3" "4" "" "" ]
Timing
Splitting the string 1,2,,3,4,, with the pattern , for 10000 times on my machine gives these timings:
splitVariantA() : 406 ms
splitVariantB() : 46 ms
splitVariantC() : 749 ms
splitVariantD() : 1077 ms
Unfortunately, my implementation D is the slowest. Surprisingly, the regular-expression implementation C is pretty fast.
Source code
// niol, modified
Array splitVariantA(string splitter, string str){
    Array tokens = create(1, 1);
    Buffer buf = create;
    int str_index;
    buf = "";
    for(str_index = 0; str_index < length(str); str_index++){
        if( str[str_index:str_index] == splitter ){
            array_push_str(tokens, stringOf(buf));
            buf = "";
        }
        else
            buf += str[str_index:str_index];
    }
    array_push_str(tokens, stringOf(buf));
    delete buf;
    return tokens;
}
// PJT, modified
Skip splitVariantB(string s, string delimiter) {
    int offset
    int len
    Skip skp = create
    if ( findPlainText(s, delimiter, offset, len, false)) {
        put(skp, 0, s[0 : offset - 1])
        put(skp, 1, s[offset + 1 : ])
    }
    return skp
}
// Brett, modified
Skip splitVariantC (string s, string delim) {
    Skip skp = create
    int i = 0
    Regexp split = regexp "^(.*)" delim "(.*)$"
    while (split s) {
        string temp_s = s[match 1]
        put(skp, i++, s[match 2])
        s = temp_s
    }
    put(skp, i++, s[match 2])
    return skp
}
Skip splitVariantD(string str, string pattern) {
    if (null(pattern) || 0 == length(pattern))
        pattern = " ";
    if (pattern == " ")
        str = stringStrip(stringSqueeze(str, ' '));
    Skip result = create;
    int i = 0;    // index for searching in str
    int j = 0;    // index counter for result array
    bool found = true;
    while (found) {
        // find pattern
        int pos = 0;
        int len = 0;
        found = findPlainText(str[i:], pattern, pos, len, true);
        if (found) {
            // insert into result
            put(result, j++, str[i : i + pos - 1]);
            i += pos + len;
        }
    }
    // append the rest after last found pattern
    put(result, j, str[i:]);
    return result;
}
Quick join & split I could come up with. Seems to work okay.
int array_size(Array a){
    int size = 0;
    while( !null(get(a, size, 0) ) )
        size++;
    return size;
}

void array_push_str(Array a, string str){
    int array_index = array_size(a);
    put(a, str, array_index, 0);
}

string array_get_str(Array a, int index){
    return (string get(a, index, 0));
}

string str_join(string joiner, Array str_array){
    Buffer joined = create;
    int array_index = 0;
    joined += "";
    for(array_index = 0; array_index < array_size(str_array); array_index++){
        joined += array_get_str(str_array, array_index);
        if( array_index + 1 < array_size(str_array) )
            joined += joiner;
    }
    return stringOf(joined)
}

Array str_split(string splitter, string str){
    Array tokens = create(1, 1);
    Buffer buf = create;
    int str_index;
    buf = "";
    for(str_index = 0; str_index < length(str); str_index++){
        if( str[str_index:str_index] == splitter ){
            array_push_str(tokens, stringOf(buf));
            buf = "";
        }else{
            buf += str[str_index:str_index];
        }
    }
    array_push_str(tokens, stringOf(buf));
    delete buf;
    return tokens;
}
If you only split the string once, this is how I would do it:
string s = "We prefer questions that can be answered; not just discussed"
string sub = ";"
int offset
int len
if ( findPlainText(s, sub, offset, len, false)) {
/* the reason why I subtract one and add one is to remove the delimiter from the out put.
First print is to print the prefix and then second is the suffix.*/
print s[0 : offset -1]
print s[offset +1 :]
} else {
// no delimiter found
print "Failed to match"
}
You could also use regular expressions; refer to the DXL reference manual. Regular expressions would be better if you want to split the string on multiple delimiters, such as str = "this ; is an;example".
ACTUALLY WORKS:
This solution will split as many times as needed, or not at all if the delimiter doesn't exist in the string.
This is what I have used instead of a traditional "split" command.
It actually skips the creation of an array, and just loops through each string that would have been in the array, calling "someFunction" on each of those strings.
string s = "We prefer questions that can be answered; not just discussed"
// for this example, ";" is used as the delimiter
Regexp split = regexp "^(.*);(.*)$"
// while a ";" exists in s
while (split s) {
// save the text before the last ";"
string temp_s = s[match 1]
// call someFunction on the text after the last ";"
someFunction(s[match 2])
// remove the text after the last ";" (including ";")
s = temp_s
}
// call someFunction again for the last (or only) string
someFunction(s)
Sorry for necroing an old post; I just didn't find the other answers useful.
Perhaps someone will find this fused solution handy as well. It splits a string into a Skip, based on a delimiter, which can actually be more than one character long.
Skip splitString(string s1, string delimit)
{
    int offset, len
    Skip splited = create
    while(findPlainText(s1, delimit, offset, len, false))
    {
        put(splited, s1[0 : offset - 1], s1[0 : offset - 1])
        s1 = s1[offset + length(delimit) : length(s1) - 1]
    }
    if(length(s1) > 0)
    {
        put(splited, s1, s1)
    }
    return splited
}
I tried this out and it worked for me...
string s = "We prefer questions that can be answered,not just discussed,hiyas"
string sub = ","
int offset
int len
string s1=s
while(length(s1)>0){
if ( findPlainText(s1, sub, offset, len, false)) {
print s1[0 : offset -1]"\n"
s1= s1[offset+1:length(s1)]
}
else
{
print s1
s1=""
}
}
Here is a more complete implementation. It repeatedly splits the string by searching for a keyword.
pragma runLim, 10000

string s = "We prefer questions that can be answered,not just discussed,hiyas;
Next Line,Var1,Nemesis;
Next Line,Var2,Nemesis1;
Next Line,Var3,Nemesis2;
New,Var4,Nemesis3;
Next Line,Var5,Nemesis4;
New,Var5,Nemesis5;"

string sub = ","
int offset
int len
string searchkey = null
string curr = s
string nxt = s
string searchline = null
string Modulename = ""
string Attributename = ""
string Attributevalue = ""

while(findPlainText(curr, "Next Line", offset, len, false))
{
    int intlen = offset
    searchkey = curr[offset : length(curr)]
    if(findPlainText(searchkey, "Next Line", offset, len, false))
    {
        curr = searchkey[offset + 1 : length(searchkey)]
    }
    if(findPlainText(searchkey, ";", offset, len, false))
    {
        searchline = searchkey[0 : offset]
    }
    int counter = 0
    while(length(searchline) > 0)
    {
        if (findPlainText(searchline, sub, offset, len, false))
        {
            if(counter == 0)
            {
                Modulename = searchline[0 : offset - 1]
                counter++
            }
            else if(counter == 1)
            {
                Attributename = searchline[0 : offset - 1]
                counter++
            }
            searchline = searchline[offset + 1 : length(searchline)]
        }
        else
        {
            if(counter == 2)
            {
                Attributevalue = searchline[0 : length(searchline) - 2]
                counter++
            }
            searchline = ""
        }
    }
    print "Modulename=" Modulename " Attributename=" Attributename " Attributevalue= " Attributevalue "\n"
}