Using Xtext from the command line

I'm trying to build a standalone command-line flavor of Xtext:
Input: grammar file (grammar.xtext)
Output: interpreter jar file (interpreter.jar)
Usage:
java -jar interpreter.jar input.mydsl
If input.mydsl has incorrect syntax, the interpreter prints "ERROR" and exits.
If input.mydsl has correct syntax, it should be interpreted.
I'm looking for a complete Maven-based solution that ideally has the following features:
small grammar, for instance, calculator of ints and strings
interpretation of 5+8*2 -> 21
interpretation of "abcd"+"ef" -> "abcdef"
simple type checker (5 + "text" -> error)
simple Maven pom.xml that newbies can understand
easily ported to web (later)
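For reference, here is a minimal sketch of what such an interpreter's entry point could look like, assuming a grammar named MyDsl. The MyDslStandaloneSetup class is the setup class Xtext generates per grammar, so all names here are illustrative, not a complete solution:

import org.eclipse.emf.common.util.URI;
import org.eclipse.xtext.resource.XtextResource;
import org.eclipse.xtext.resource.XtextResourceSet;
import com.google.inject.Injector;

public class InterpreterMain {
    public static void main(String[] args) {
        // Standalone setup: registers the DSL without a running Eclipse instance
        Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        XtextResource resource = (XtextResource) resourceSet
                .getResource(URI.createFileURI(args[0]), true);
        // Syntax errors are collected on the resource while loading
        if (!resource.getErrors().isEmpty()) {
            System.out.println("ERROR");
            System.exit(1);
        }
        // The parsed model root; hand it to the interpreter / type checker here
        Object model = resource.getContents().get(0);
    }
}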

Related

Running Antlr4 parser with lexer grammar gets token recognition errors

I'm trying to create a grammar to parse Solr queries (only mildly relevant and you don't need to know anything about Solr to answer the question -- just know more than I do about ANTLR 4.7). I'm basing it on the QueryParser.jj file from Solr 6. I looked for an existing one, but there doesn't seem to be one that isn't old and out-of-date.
I'm stuck because when I try to run the parser I get "token recognition error"s.
The lexer I created uses lexer modes, which, as I understand it, means I need to have a separate lexer grammar file. So I have a parser file and a lexer file.
I whittled it down to a simple example to show what I'm seeing. Maybe someone can tell me what I'm doing wrong. Here's the parser (Junk.g4):
grammar Junk;
options {
language = Java;
tokenVocab=JLexer;
}
term : TERM '\r\n';
I can't use an import because of the lexer modes in the lexer file I'm trying to create (the tokens in the modes become "undefined" if I use an import). That's why I reference the lexer file with the tokenVocab parameter (as shown in the XML example on GitHub).
Here's the lexer (JLexer.g4):
lexer grammar JLexer;
TERM : TERM_START_CHAR TERM_CHAR* ;
TERM_START_CHAR : [abc] ;
TERM_CHAR : [efg] ;
WS : [ \t\n\r\u3000]+ -> skip;
If I copy the lexer code into the parser, then things work as expected (e.g., "aeee" is a term). Also, if I run the lexer file with grun (specifying tokens as the target), then the string parses as a TERM (as expected).
If I run the parser ("grun Junk term -tokens"), then I get:
line 1:0 token recognition error at: 'a'
line 1:1 token recognition error at: 'e'
line 1:2 token recognition error at: 'e'
line 1:3 token recognition error at: 'e'
[#0,4:5='\r\n',<'
'>,1:4]
I "compile" the lexer first, then "compile" the parser and then javac the resulting java files. I do this in a batch file, so I'm pretty confident that I'm doing this every time.
I don't understand what I'm doing wrong. Is it the way I'm running grun? Any suggestions would be appreciated.
Always trust your intuition! There is some naming convention internal to grun :-) See TestRig.java, c. lines 125 and 150. It would have been a lot nicer if some additional CLI args were also added.
When the lexer and parser are compiled separately, the grammar name (insofar as TestRig is concerned) would in your case be "Junk", and the two files must be named "JunkLexer.g4" and "JunkParser.g4". Accordingly, the header of the parser file JunkParser.g4 should be modified too:
parser grammar JunkParser;
options { tokenVocab=JunkLexer; }
... stuff
Now you can run your tests
> antlr4 JunkLexer
> antlr4 JunkParser
> javac Junk*.java
> grun Junk term -tokens
aeee
^Z
[#0,0:3='aeee',<TERM>,1:0]
[#1,6:5='<EOF>',<EOF>,2:0]
>
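The same check can also be run programmatically instead of through grun. A minimal sketch using the classes ANTLR generates from JunkLexer.g4 and JunkParser.g4 (this assumes the ANTLR 4.7 runtime, which provides CharStreams):

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

public class JunkDemo {
    public static void main(String[] args) {
        // Same input that grun received on stdin above
        CharStream input = CharStreams.fromString("aeee\r\n");
        JunkLexer lexer = new JunkLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        JunkParser parser = new JunkParser(tokens);
        parser.term(); // recognition errors, if any, go to stderr
    }
}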

OpenCV 3.1: CMake error if source or bin path contains "++"

If the source or binary path in CMake contain the character sequence "++" (without quotation marks) I get a CMake error when trying to create a project for OpenCV 3.1:
CMake Error at cmake/OpenCVUtils.cmake:76 (if):
if given arguments:
"G:/Desktop/C++ projects/project" "MATCHES" "^G:/Desktop/C++ projects/sources" "OR" "G:/Desktop/C++ projects/project" "MATCHES" "^G:/Desktop/C++ projects/project"
Regular expression "^G:/Desktop/C++ projects/sources" cannot compile
Call Stack (most recent call first):
CMakeLists.txt:437 (ocv_include_directories)
Apparently this line inside OpenCVUtils causes the problem:
if("${__abs_dir}" MATCHES "^${OpenCV_SOURCE_DIR}" OR "${__abs_dir}" MATCHES "^${OpenCV_BINARY_DIR}")
I noticed the problem because I have a folder called "C++ Projects" where I keep C++ projects and libraries. Does anyone know why the sequence causes the problem, and whether there is a quick way to fix it? I will also report this as a bug in the OpenCV bug tracker.
+ is a special character in pattern matching (see the documentation), and MATCHES indicates a pattern match.
Either the strings have to be escaped first, or, as the real fix, test whether ${OpenCV_SOURCE_DIR} (respectively ${OpenCV_BINARY_DIR}) appears at the beginning of ${__abs_dir}:
string(FIND "${__abs_dir}" "${OpenCV_SOURCE_DIR}" strPosSrc)
string(FIND "${__abs_dir}" "${OpenCV_BINARY_DIR}" strPosBin)
if(strPosSrc EQUAL 0 OR strPosBin EQUAL 0)
So basically it is a bug in OpenCV. Ask them to fix it.
Missing CMake feature
Overall I think it is a missing CMake feature that it does not provide a method to escape strings for use in regular expressions.
There are bugs that could be solved by such a function:
https://cmake.org/Bug/view.php?id=15908
https://cmake.org/Bug/view.php?id=10365

Showing full expected and value information when ?_assertEqual fails

I'm coding a unit test where a (rather lengthy) binary is generated, and I want to assert that the generated binary equals the one I expect to be generated. I'm running eunit through "rebar eunit".
Thing is, when this assertion fails, the output is abbreviated with "...", and I want to see the complete output so I can spot where the difference is.
I'm now using "?debugFmt()" as a temporary solution, but I'd like to know if there's an alternative to it (a config option or argument somewhere that can be applied to "?_assertEqual()" so the output is only shown when the assertion fails).
Thanks in advance!
EDIT: Following legoscia's answer, I'm including a test sample using a test generator, with multiple asserts:
can_do_something(SetupData) ->
% ... some code ...
[?_assertEqual(Expected1, Actual1), ?_assertEqual(Expected2, Actual2)].
The best I can think of for actually showing the value in the console is something like this:
Actual =:= Expected orelse ?assert(?debugFmt("~p is not ~p", [Actual, Expected]))
?debugFmt prints the message and returns ok, which is not true, so the inner assertion always fails when it is reached (that is, when the values differ).
Alternatively, to use it as a test generator, the entire thing can be put inside ?_assert:
?_assert(Actual =:= Expected orelse ?debugFmt("~p is not ~p", [Actual, Expected]))
The way I usually achieve this is by having Eunit output XML files (in "Surefire" format, AKA "Junit" format). The XML files have much higher limits for term print depth, and thus probably contain the information you need.
Add this to your rebar.config:
{eunit_opts,
[verbose,
%% eunit truncates output from tests - capture full output in
%% XML files in .eunit
{report,{eunit_surefire,[{dir,"."}]}}]}.
Then you can find the results for module foo in .eunit/TEST-foo.xml. I find the files quite readable in a text editor.
1). Open your eunit sources. In my system:
cd /usr/lib/erlang/lib/eunit-2.3.2/src
2). Edit eunit_lib.erl like this:
54c54
< format_exception(Exception, 20).
---
> format_exception(Exception, 99999).
3). sudo erlc -I ../include eunit_lib.erl
4). mv eunit_lib.beam ../ebin
5). Have a good day))
This PR introduces the print_depth option to eunit:test/2:
eunit:test(my_test, [{print_depth, 200}]).
It should be available starting from OTP-23.
Setting print_depth to a larger number will decrease truncation of the output.

Ninja build in Xtext

I'm trying to define a grammar for Ninja build files with Xtext.
There are three tricky points that I can't answer.
Indentation by tab:
How to handle indentation. A rule in a Ninja build file might have several variable definitions with preceding tab indentation (similar to Makefiles). This becomes a problem when the language has single-line comments, ignores whitespace, and delimits structure by tab indentation (Python, Make, ...):
cflags = -g
rule cc
command = gcc $cflags -c $in -o $out
Cross-referencing a reserved set of variable names:
There exists a set of reserved variables. Auto-complete should be able to reference both the reserved and the user-defined variables.
command = gcc $cflags -c $in -o $out
Autocompleting cross-referenced variable names that aren't separated by WS:
org.eclipse.xtext.common.Terminals hides WS tokens, and ID tokens are separated by whitespace. But in a Ninja script (similar to Makefiles), parsing should pick the longest matching variable name.
some_var = some_value
command = $some_var.h
Any ideas are appreciated. Thanks.
Check out the Xtext 2.8.0 release: https://www.eclipse.org/Xtext/releasenotes.html
The Whitespace-Aware Languages section states:
Xtext 2.8 supports languages in which whitespace is used to specify
the structure, e.g. using indentation to delimit code blocks as in
Python. This is done through synthetic tokens defined in the grammar:
terminal BEGIN: 'synthetic:BEGIN';
terminal END: 'synthetic:END';
These tokens can be used like other terminals in grammar rules:
WhitespaceAwareBlock:
BEGIN
...
END;
The new example language Home Automation available in the Eclipse examples (File → New → Example → Xtext Examples) demonstrates this concept. It allows code like the following:
Rule 'Report error' when Heater.error then
var String report
do
Thread.sleep(500)
report = HeaterDiagnostic.readError
while (report == null)
println(report)
More details are found in the documentation.

ANTLR 2.7 Get a Stream of Objects from the Parser

I'm using ANTLR 2.7.6 to parse the messy output of another application. Sadly, I do not have the ability to upgrade to ANTLR 3, even though it has been out for quite a while. A log file of the sort I will be parsing is better conceptualized as a list of objects than a tree of objects, and could be very large (>100 MB), so it is not practical to read it all into one AST. (My application is multithreaded and will process half a dozen to a dozen of these files at once, so memory will fill up quickly.) I want to be able to read out each of these objects as if from a stream, so I can process them one by one. Note that the objects themselves could be conceptualized as small trees. Is there a way to get my ANTLR parser to act like an object stream, an iterator, or something of that nature?
[See Javadoc for ANTLR 2.]
Edit: Here is a conceptual example of what I would like to do with the parser.
import java.io.FileReader;
import antlr.TokenStream;
import antlr.CharBuffer;
//...
FileReader fileReader = new FileReader(filepath);
TokenStream lexer = new MyExampleLexer(new CharBuffer(fileReader));
MyExampleParser parser = new MyExampleParser(lexer);
for (Object obj : parser)
{
processObject(obj);
}
Am I perhaps working with the wrong paradigm of how to use an ANTLR parser? (I realize that the parser does not implement Iterator, but that is conceptually the sort of behavior I'm looking for.)
AFAIK, ANTLR v2.x buffers the creation of tokens. The parser takes a TokenBuffer, which in turn takes a TokenStream. This TokenStream is polled through its nextToken() method whenever the parser needs more tokens.
In other words, if you provide the input source as a file, ANTLR does not read the entire file and tokenize it up front: tokens are created (and discarded) only when needed.
Note that I never worked with ANTLR 2.x, so I could be wrong. Have you observed something different? If so, how do you offer the source to ANTLR: as a file, or as a big string? If it's the latter, I recommend providing a file instead.
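To illustrate that pull model, here is a rough sketch against the ANTLR 2 runtime API, reusing the (hypothetical) MyExampleLexer from the question:

import antlr.CharBuffer;
import antlr.Token;
import antlr.TokenStream;

public class PullDemo {
    public static void main(String[] args) throws Exception {
        // The lexer is itself a TokenStream: each nextToken() call produces
        // exactly one token on demand; nothing is tokenized up front
        TokenStream lexer = new MyExampleLexer(
                new CharBuffer(new java.io.FileReader(args[0])));
        Token token;
        while ((token = lexer.nextToken()).getType() != Token.EOF_TYPE) {
            System.out.println(token.getText());
        }
    }
}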
EDIT
Let's say you want to parse a file that consists of lines with numbers, delimited by white spaces (which you want to ignore). You also want your parser to process the file line by line because collecting all numbers at once would result in memory problems.
You can do this by letting your main parser rule, parse, return a list of numbers for each line. If the EOF (end-of-file) is reached, you simply return null instead of a list.
A demo using ANTLR 2.7.6:
file: My.g
class MyParser extends Parser;
parse returns [java.util.List<Integer> numbers]
{
numbers = new java.util.ArrayList<Integer>();
}
: (n:Number {numbers.add(Integer.valueOf(n.getText()));})+ LineBreak
| EOF {numbers = null;}
;
class MyLexer extends Lexer;
Number
: ('0'..'9')+
;
LineBreak
: ('\r')? '\n'
;
Space
: (' ' | '\t') {$setType(Token.SKIP);}
;
file: Main.java
import antlr.*;
public class Main {
public static void main(String[] args) throws Exception {
MyLexer lexer = new MyLexer(new java.io.StringReader("1 2 3\n4 5 6 7 8\n9 10\n"));
MyParser parser = new MyParser(new TokenBuffer(lexer));
int line = 0;
java.util.List<Integer> numbers = null;
while((numbers = parser.parse()) != null) {
line++;
System.out.println("line " + line + " = " + numbers);
}
}
}
To run the demo on:
*nix
java -cp antlr-2.7.6.jar antlr.Tool My.g
javac -cp antlr-2.7.6.jar *.java
java -cp .:antlr-2.7.6.jar Main
or on:
Windows
java -cp antlr-2.7.6.jar antlr.Tool My.g
javac -cp antlr-2.7.6.jar *.java
java -cp .;antlr-2.7.6.jar Main
which will produce the following output:
line 1 = [1, 2, 3]
line 2 = [4, 5, 6, 7, 8]
line 3 = [9, 10]
Warning
Anyone trying this code, please note that this uses ANTLR 2.7.6. Unless you have a very compelling reason to use this version, it is highly recommended to use the latest stable version of ANTLR (v3.3 at the time of this writing).
