I have the FunctionDecl for the definition of a function; there is no separate declaration for this function.
For example:
int foo(char c, double d)
{
...
}
How do I get the signature (qualifiers, return type, function name, parameters) as a valid signature I could use to create a declaration?
I found that the easiest way is to use the lexer to get the signature of the function. Since I wanted to make a declaration out of a definition, I wanted the declaration to look exactly like the definition.
Therefore I defined a SourceRange from the start of the function to the beginning of the body of the function (minus the opening "{") and let the lexer give me this range as a string.
static std::string getDeclaration(const clang::FunctionDecl* D)
{
clang::ASTContext& ctx = D->getASTContext();
clang::SourceManager& mgr = ctx.getSourceManager();
    // Range from the start of the function to the start of its body (the "{" token).
    clang::SourceRange range(D->getSourceRange().getBegin(),
                             D->getBody()->getSourceRange().getBegin());
    llvm::StringRef s = clang::Lexer::getSourceText(
        clang::CharSourceRange::getTokenRange(range), mgr, ctx.getLangOpts());
    // Drop the trailing "{" (and the character before it), then terminate with ";".
    return s.substr(0, s.size() - 2).str().append(";");
}
This solution assumes that the FunctionDecl is a definition (i.e. that it has a body).
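For context, here is a minimal sketch of how such a helper might be driven from a RecursiveASTVisitor; the visitor class is my own illustration, not part of the original answer:

class DeclarationPrinter : public clang::RecursiveASTVisitor<DeclarationPrinter> {
public:
    bool VisitFunctionDecl(clang::FunctionDecl* D) {
        // Only process definitions, per the assumption above.
        if (D->doesThisDeclarationHaveABody())
            llvm::outs() << getDeclaration(D) << "\n"; // e.g. "int foo(char c, double d);"
        return true;
    }
};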
Maybe this is what you were looking for...
bool VisitDecl(Decl* D) {
auto k = D->getDeclKindName();
auto r = D->getSourceRange();
auto b = r.getBegin();
auto e = r.getEnd();
auto& srcMgr = Context->getSourceManager();
if (srcMgr.isInMainFile(b)) {
auto d = depth - 2u; // "depth" is a member of the visitor, maintained during traversal (not shown)
auto fname = srcMgr.getFilename(b);
auto bOff = srcMgr.getFileOffset(b);
auto eOff = srcMgr.getFileOffset(e);
llvm::outs() << std::string(2*d,' ') << k << "Decl ";
llvm::outs() << "<" << fname << ", " << bOff << ", " << eOff << "> ";
if (D->getKind() == Decl::Kind::Function) {
auto fnDecl = reinterpret_cast<FunctionDecl*>(D);
llvm::outs() << fnDecl->getNameAsString() << " ";
llvm::outs() << "'" << fnDecl->getType().getAsString() << "' ";
} else if (D->getKind() == Decl::Kind::ParmVar) {
auto pvDecl = reinterpret_cast<ParmVarDecl*>(D);
llvm::outs() << pvDecl->getNameAsString() << " ";
llvm::outs() << "'" << pvDecl->getType().getAsString() << "' ";
}
llvm::outs() << "\n";
}
return true;
}
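For orientation, VisitDecl would live in a RecursiveASTVisitor subclass that owns the Context and depth members used above; the surrounding class is not shown in the original answer, so this is only a sketch of one plausible setup:

class AstPrinter : public clang::RecursiveASTVisitor<AstPrinter> {
    clang::ASTContext* Context;
    unsigned depth = 2u; // adjusted while traversing (not shown here)
public:
    explicit AstPrinter(clang::ASTContext* Ctx) : Context(Ctx) {}
    bool VisitDecl(clang::Decl* D); // the function shown above
};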
Sample output:
FunctionDecl <foo.c, 48, 94> foo 'int (unsigned int)'
  ParmVarDecl <foo.c, 56, 69> x 'unsigned int'
  CompoundStmt <foo.c, 72, 94>
    ReturnStmt <foo.c, 76, 91>
      ParenExpr <foo.c, 83, 91>
        BinaryOperator <foo.c, 84, 17>
          ImplicitCastExpr <foo.c, 84, 84>
            DeclRefExpr <foo.c, 84, 84>
          ParenExpr <foo.c, 28, 45>
            BinaryOperator <foo.c, 29, 43>
              ParenExpr <foo.c, 29, 39>
                BinaryOperator <foo.c, 30, 12>
                  IntegerLiteral <foo.c, 30, 30>
                  IntegerLiteral <foo.c, 12, 12>
              IntegerLiteral <foo.c, 43, 43>
You will notice the reinterpret_cast<OtherDecl*>(D) casts. Decl is the base class for all the more specific AST declaration classes like FunctionDecl or ParmVarDecl, so reinterpreting the pointer works here and gives you access to that particular AST node's attributes. Since these more specific AST nodes inherit from NamedDecl and ValueDecl, obtaining the function name and the function type (signature) is simple. The same approach applies to the base class Stmt and its derived classes, such as the Expr classes.
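As a side note that goes beyond the original answer: the LLVM-idiomatic way to perform this downcast is llvm::dyn_cast, which checks the dynamic kind for you and returns nullptr on mismatch, so the explicit getKind() comparison becomes unnecessary:

if (auto* fnDecl = llvm::dyn_cast<clang::FunctionDecl>(D)) {
    llvm::outs() << fnDecl->getNameAsString() << " ";
    llvm::outs() << "'" << fnDecl->getType().getAsString() << "' ";
} else if (auto* pvDecl = llvm::dyn_cast<clang::ParmVarDecl>(D)) {
    llvm::outs() << pvDecl->getNameAsString() << " ";
    llvm::outs() << "'" << pvDecl->getType().getAsString() << "' ";
}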
I'm trying to run a parallel for loop with triSYCL. This is my code:
#define TRISYCL_OPENCL
#define OMP_NUM_THREADS 8
#define BOOST_COMPUTE_USE_CPP11
// standard libraries
#include <iostream>
#include <functional>
//deps
#include "CL/sycl.hpp"
struct Color
{
float r, g, b, a;
friend std::ostream& operator<<(std::ostream& os, const Color& c)
{
os << "(" << c.r << ", " << c.g << ", " << c.b << ", " << c.a << ")";
return os;
}
};
struct Vertex
{
float x, y;
Color color;
friend std::ostream& operator<<(std::ostream& os, const Vertex& v)
{
os << "x: " << v.x << ", y: " << v.y << ", color: " << v.color;
return os;
}
};
template<typename T>
T mapNumber(T x, T a, T b, T c, T d)
{
return (x - a) / (b - a) * (d - c) + c;
}
int windowWidth = 640;
int windowHeight = 720;
int main()
{
auto exception_handler = [](cl::sycl::exception_list exceptions) {
for (std::exception_ptr const& e : exceptions)
{
try
{
std::rethrow_exception(e);
} catch (cl::sycl::exception const& e)
{
std::cout << "Caught asynchronous SYCL exception: " << e.what() << std::endl;
}
}
};
cl::sycl::default_selector defaultSelector;
cl::sycl::context context(defaultSelector, exception_handler);
cl::sycl::queue queue(context, defaultSelector, exception_handler);
auto* pixelColors = new Color[windowWidth * windowHeight];
{
cl::sycl::buffer<Color, 2> color_buffer(pixelColors,
    cl::sycl::range<2>{(unsigned long) windowWidth, (unsigned long) windowHeight});
cl::sycl::buffer<int, 1> b_windowWidth(&windowWidth, cl::sycl::range<1>{1});
cl::sycl::buffer<int, 1> b_windowHeight(&windowHeight, cl::sycl::range<1>{1});
queue.submit([&](cl::sycl::handler& cgh) {
auto color_buffer_acc = color_buffer.get_access<cl::sycl::access::mode::write>(cgh);
auto width_buffer_acc = b_windowWidth.get_access<cl::sycl::access::mode::read>(cgh);
auto height_buffer_acc = b_windowHeight.get_access<cl::sycl::access::mode::read>(cgh);
cgh.parallel_for<class init_pixelColors>(
cl::sycl::range<2>((unsigned long) width_buffer_acc[0], (unsigned long) height_buffer_acc[0]),
[=](cl::sycl::id<2> index) {
color_buffer_acc[index[0]][index[1]] = {
mapNumber<float>(index[0], 0.f, width_buffer_acc[0], 0.f, 1.f),
mapNumber<float>(index[1], 0.f, height_buffer_acc[0], 0.f, 1.f),
0.f,
1.f};
});
});
std::cout << "cl::sycl::queue check - selected device: "
<< queue.get_device().get_info<cl::sycl::info::device::name>() << std::endl;
} // here the error appears
delete[] pixelColors;
return 0;
}
I'm building it with this CMakeLists.txt file:
cmake_minimum_required(VERSION 3.16.2)
project(acMandelbrotSet_stackoverflow)
set(CMAKE_CXX_STANDARD 17)
set(SRC_FILES
path/to/main.cpp
)
find_package(OpenCL REQUIRED)
set(Boost_INCLUDE_DIR path/to/boost)
include_directories(${Boost_INCLUDE_DIR})
include_directories(path/to/SYCL/include)
set(LIBS PRIVATE ${Boost_LIBRARIES} OpenCL::OpenCL)
add_executable(${PROJECT_NAME} ${SRC_FILES})
set_target_properties(${PROJECT_NAME} PROPERTIES DEBUG_POSTFIX _d)
target_link_libraries(${PROJECT_NAME} ${LIBS})
When I try to run it, I get this message:

libc++abi.dylib: terminating with uncaught exception of type trisycl::non_cl_error from path/to/SYCL/include/triSYCL/command_group/detail/task.hpp line: 278 function: trisycl::detail::task::get_kernel, the message was: "Cannot use an OpenCL kernel in this context"
I've tried turning mapNumber into a lambda inside the kernel, but that didn't make any difference. I've also tried adding this before the end of the scope to catch errors:
try
{
queue.wait_and_throw();
} catch (cl::sycl::exception const& e)
{
std::cout << "Caught synchronous SYCL exception: " << e.what() << std::endl;
}
but nothing was printed to the console except the error from before. I've also tried capturing the event returned by the queue.submit call and calling event.wait() before the end of the scope, but again I got exactly the same output.
Does anybody have an idea what else I could try?
The problem is that triSYCL is a research project that looks deeply at some aspects of SYCL without providing complete, generic SYCL support for end users. I have just clarified this in the project's README. :-(
Probably the problem here is that the OpenCL SPIR kernel has not been generated.
So you would first need to compile the specific (old) Clang & LLVM from triSYCL, as described in https://github.com/triSYCL/triSYCL/blob/master/doc/architecture.rst#trisycl-architecture-for-accelerator. But unfortunately there is no simple Clang driver that ties the specific Clang & LLVM pieces together to generate the kernels from the SYCL source. Right now it is done with some ad hoc, awful Makefiles (look around https://github.com/triSYCL/triSYCL/blob/master/tests/Makefile#L360), and even if you survive that, you might encounter some bugs...
The good news is that there are now several other implementations of SYCL which are much easier to use, more complete, and less buggy! :-) Look at ComputeCpp, DPC++, and hipSYCL, for example.
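For instance, hipSYCL compiles the same single-source SYCL code with its syclcc driver; a minimal sketch of a build invocation, where the exact flags and the "omp" target spelling are assumptions to verify against hipSYCL's documentation:

syclcc --hipsycl-targets=omp -O2 -o app main.cpp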
Preface: this may be a stupid, uninformed question.
I have a grammar I wrote with the pyparsing library (and the help of Stack Overflow posts) that parses nested expressions with parentheses, curly brackets, and square brackets. I'm curious what the productions in a grammar table for it would look like, and I was wondering if there is a way to generate such a table automatically for an arbitrary pyparsing context-free grammar.
For reference, the pyparsing grammar is defined here:
def parse_nestings(string, only_curl=False):
r"""
References:
http://stackoverflow.com/questions/4801403/pyparsing-nested-mutiple-opener-clo
CommandLine:
python -m utool.util_gridsearch parse_nestings:1 --show
Example:
>>> from utool.util_gridsearch import * # NOQA
>>> import utool as ut
>>> string = r'lambda u: sign(u) * abs(u)**3.0 * greater(u, 0)'
>>> parsed_blocks = parse_nestings(string)
>>> recombined = recombine_nestings(parsed_blocks)
>>> print('PARSED_BLOCKS = ' + ut.repr3(parsed_blocks, nl=1))
>>> print('recombined = %r' % (recombined,))
>>> print('orig = %r' % (string,))
PARSED_BLOCKS = [
('nonNested', 'lambda u: sign'),
('paren', [('ITEM', '('), ('nonNested', 'u'), ('ITEM', ')')]),
('nonNested', '* abs'),
('paren', [('ITEM', '('), ('nonNested', 'u'), ('ITEM', ')')]),
('nonNested', '**3.0 * greater'),
('paren', [('ITEM', '('), ('nonNested', 'u, 0'), ('ITEM', ')')]),
]
Example:
>>> from utool.util_gridsearch import * # NOQA
>>> import utool as ut
>>> string = r'\chapter{Identification \textbf{foobar} workflow}\label{chap:application}'
>>> parsed_blocks = parse_nestings(string)
>>> print('PARSED_BLOCKS = ' + ut.repr3(parsed_blocks, nl=1))
PARSED_BLOCKS = [
('nonNested', '\\chapter'),
('curl', [('ITEM', '{'), ('nonNested', 'Identification \\textbf'), ('curl', [('ITEM', '{'), ('nonNested', 'foobar'), ('ITEM', '}')]), ('nonNested', 'workflow'), ('ITEM', '}')]),
('nonNested', '\\label'),
('curl', [('ITEM', '{'), ('nonNested', 'chap:application'), ('ITEM', '}')]),
]
"""
import utool as ut # NOQA
import pyparsing as pp
def as_tagged(parent, doctag=None):
"""Returns the parse results as XML. Tags are created for tokens and lists that have defined results names."""
namedItems = dict((v[1], k) for (k, vlist) in parent._ParseResults__tokdict.items()
for v in vlist)
# collapse out indents if formatting is not desired
parentTag = None
if doctag is not None:
parentTag = doctag
else:
if parent._ParseResults__name:
parentTag = parent._ParseResults__name
if not parentTag:
parentTag = "ITEM"
out = []
for i, res in enumerate(parent._ParseResults__toklist):
if isinstance(res, pp.ParseResults):
if i in namedItems:
child = as_tagged(res, namedItems[i])
else:
child = as_tagged(res, None)
out.append(child)
else:
# individual token, see if there is a name for it
resTag = None
if i in namedItems:
resTag = namedItems[i]
if not resTag:
resTag = "ITEM"
child = (resTag, pp._ustr(res))
out += [child]
return (parentTag, out)
def combine_nested(opener, closer, content, name=None):
r"""
opener, closer, content = '(', ')', nest_body
"""
import utool as ut # NOQA
ret1 = pp.Forward()
_NEST = ut.identity
#_NEST = pp.Suppress
opener_ = _NEST(opener)
closer_ = _NEST(closer)
group = pp.Group(opener_ + pp.ZeroOrMore(content) + closer_)
ret2 = ret1 << group
if ret2 is None:
ret2 = ret1
else:
pass
#raise AssertionError('Weird pyparsing behavior. Comment this line if encountered. pp.__version__ = %r' % (pp.__version__,))
if name is None:
ret3 = ret2
else:
ret3 = ret2.setResultsName(name)
assert ret3 is not None, 'cannot have a None return'
return ret3
# Current Best Grammar
nest_body = pp.Forward()
nestedParens = combine_nested('(', ')', content=nest_body, name='paren')
nestedBrackets = combine_nested('[', ']', content=nest_body, name='brak')
nestedCurlies = combine_nested('{', '}', content=nest_body, name='curl')
nonBracePrintables = ''.join(c for c in pp.printables if c not in '(){}[]') + ' '
nonNested = pp.Word(nonBracePrintables).setResultsName('nonNested')
nonNested = nonNested.leaveWhitespace()
# if with_curl and not with_paren and not with_brak:
if only_curl:
# TODO figure out how to chain |
nest_body << (nonNested | nestedCurlies)
else:
nest_body << (nonNested | nestedParens | nestedBrackets | nestedCurlies)
nest_body = nest_body.leaveWhitespace()
parser = pp.ZeroOrMore(nest_body)
debug_ = ut.VERBOSE
if len(string) > 0:
tokens = parser.parseString(string)
if debug_:
print('string = %r' % (string,))
print('tokens List: ' + ut.repr3(tokens.asList()))
print('tokens XML: ' + tokens.asXML())
parsed_blocks = as_tagged(tokens)[1]
if debug_:
print('PARSED_BLOCKS = ' + ut.repr3(parsed_blocks, nl=1))
else:
parsed_blocks = []
return parsed_blocks
I am struggling to get my head around LPEG. I have managed to produce one grammar which does what I want, but I have been beating my head against this one and not getting far. The idea is to parse a document which is a simplified form of TeX. I want to split a document into:
Environments, which are \begin{cmd} and \end{cmd} pairs.
Commands which can either take an argument like so: \foo{bar} or can be bare: \foo.
Both environments and commands can have parameters like so: \command[color=green,background=blue]{content}.
Other stuff.
I also would like to keep track of line number information for error handling purposes. Here's what I have so far:
lpeg = require("lpeg")
lpeg.locale(lpeg)
-- Assume a lot of "X = lpeg.X" here.
-- Line number handling from http://lua-users.org/lists/lua-l/2011-05/msg00607.html
-- with additional print statements to check they are working.
local newline = P"\r"^-1 * "\n" / function (a) print("New"); end
local incrementline = Cg( Cb"linenum" )/ function ( a ) print("NL"); return a + 1 end , "linenum"
local setup = Cg ( Cc ( 1) , "linenum" )
nl = newline * incrementline
space = nl + lpeg.space
-- Taken from "Name-value lists" in http://www.inf.puc-rio.br/~roberto/lpeg/
local identifier = (R("AZ") + R("az") + P("_") + R("09"))^1
local sep = lpeg.S(",;") * space^0
local value = (1-lpeg.S(",;]"))^1
local pair = lpeg.Cg(C(identifier) * space ^0 * "=" * space ^0 * C(value)) * sep^-1
local list = lpeg.Cf(lpeg.Ct("") * pair^0, rawset)
local parameters = (P("[") * list * P("]")) ^-1
-- And the rest is mine
anything = C( (space^1 + (1-lpeg.S("\\{}")) )^1) * Cb("linenum") / function (a,b) return { text = a, line = b } end
begin_environment = P("\\begin") * Ct(parameters) * P("{") * Cg(identifier, "environment") * Cb("environment") * P("}") / function (a,b) return { params = a[1], environment = b } end
end_environment = P("\\end{") * Cg(identifier) * P("}")
texlike = lpeg.P{
"document";
document = setup * V("stuff") * -1,
stuff = Cg(V"environment" + anything + V"bracketed_stuff" + V"command_with" + V"command_without")^0,
bracketed_stuff = P"{" * V"stuff" * P"}" / function (a) return a end,
command_with =((P("\\") * Cg(identifier) * Ct(parameters) * Ct(V"bracketed_stuff"))-P("\\end{")) / function (i,p,n) return { command = i, parameters = p, nodes = n } end,
command_without = (( P("\\") * Cg(identifier) * Ct(parameters) )-P("\\end{")) / function (i,p) return { command = i, parameters = p } end,
environment = Cg(begin_environment * Ct(V("stuff")) * end_environment) / function (b,stuff, e) return { b = b, stuff = stuff, e = e} end
}
It almost works!
> texlike:match("\\foo[one=two]thing\\bar")
{
command = "foo",
parameters = {
{
one = "two",
},
},
}
{
line = 1,
text = "thing",
}
{
command = "bar",
parameters = {
},
}
But! First, I can't get the line number handling part to work at all. The function within incrementline is never fired.
I also can't quite work out how nested capture information is passed to handling functions (which is why I have scattered Cg, C, and Ct semi-randomly over the grammar). This means that only one item is returned from within a command_with:
> texlike:match("\\foo{text \\command moretext}")
{
command = "foo",
nodes = {
{
line = 1,
text = "text ",
},
},
parameters = {
},
}
I would also love to be able to check that the environment starts and ends match up, but when I tried to do so, my back references from "begin" were not in scope by the time I got to "end". I don't know where to go from here.
Late answer but hopefully it'll offer some insight if you're still looking for a solution or wondering what the problem was.
There are a couple of issues with your grammar, some of which can be tricky to spot.
Your line increment here looks incorrect:
local incrementline = Cg( Cb"linenum" ) /
function ( a ) print("NL"); return a + 1 end,
"linenum"
It looks like you meant to create a named capture group, not an anonymous group. The backcapture linenum is essentially being used like a variable. The problem is that, because this is inside an anonymous capture, linenum will not update properly -- function(a) will always receive 1 when called. You need to move the closing ) to the end so "linenum" is included:
local incrementline = Cg( Cb"linenum" /
function ( a ) print("NL"); return a + 1 end,
"linenum")
Relevant LPeg documentation for Cg capture.
The second problem is with your anything non-terminal rule:
anything = C( (space^1 + (1-lpeg.S("\\{}")) )^1) * Cb("linenum") ...
There are several things to be careful about here. First, a named Cg capture (from the incrementline rule, once it's fixed) doesn't produce anything unless it's in a table or you backreference it. The second major thing is that it has an ad hoc scope, like a variable. More precisely, its scope ends once you close it in an outer capture -- like what you're doing here:
C( (space^1 + (...) )^1)
Which means that by the time you reference its backcapture with * Cb("linenum"), it's already too late -- the linenum you really want has already closed its scope.
I always found LPeg's re syntax a bit easier to grok so I've rewritten the grammar with that instead:
local grammar_cb =
{
fold = pairfold,
resetlinenum = resetlinenum,
incrementlinenum = incrementlinenum, getlinenum = getlinenum,
error = error
}
local texlike_grammar = re.compile(
[[
document <- '' -> resetlinenum {| docpiece* |} !.
docpiece <- {| envcmd |} / {| cmd |} / multiline
beginslash <- cmdslash 'begin'
endslash <- cmdslash 'end'
envcmd <- beginslash paramblock? {:beginenv: envblock :} (!endslash docpiece)*
endslash openbrace {:endenv: =beginenv :} closebrace / &beginslash {} -> error .
envblock <- openbrace key closebrace
cmd <- cmdslash {:command: identifier :} (paramblock? cmdblock)?
cmdblock <- openbrace {:nodes: {| docpiece* |} :} closebrace
paramblock <- opensq ( {:parameters: {| parampairs |} -> fold :} / whitesp) closesq
parampairs <- parampair (sep parampair)*
parampair <- key assign value
key <- whitesp { identifier }
value <- whitesp { [^],;%s]+ }
multiline <- (nl? text)+
text <- {| {:text: (!cmd !closebrace !%nl [_%w%p%s])+ :} {:line: '' -> getlinenum :} |}
identifier <- [_%w]+
cmdslash <- whitesp '\'
assign <- whitesp '='
sep <- whitesp ','
openbrace <- whitesp '{'
closebrace <- whitesp '}'
opensq <- whitesp '['
closesq <- whitesp ']'
nl <- {%nl+} -> incrementlinenum
whitesp <- (nl / %s)*
]], grammar_cb)
The callback functions are straightforwardly defined as:
local function pairfold(...)
local t, kv = {}, ...
if #kv % 2 == 1 then return ... end
for i = #kv, 2, -2 do
t[ kv[i - 1] ] = kv[i]
end
return t
end
local incrementlinenum, getlinenum, resetlinenum do
local line = 1
function incrementlinenum(nl)
assert(not nl:match "%S")
line = line + #nl
end
function getlinenum() return line end
function resetlinenum() line = 1 end
end
Testing the grammar with a non-trivial TeX-like string spanning multiple lines:
local test1 = [[\foo{text \bar[color = red, background = black]{
moretext \baz{
even
more text} }
this time skipping multiple
lines even, such wow!}]]
This produces the following AST in Lua table format:
{
command = "foo",
nodes = {
{
text = "text",
line = 1
},
{
parameters = {
color = "red",
background = "black"
},
command = "bar",
nodes = {
{
text = " moretext",
line = 2
},
{
command = "baz",
nodes = {
{
text = "even ",
line = 3
},
{
text = "more text",
line = 4
}
}
}
}
},
{
text = "this time skipping multiple",
line = 7
},
{
text = "lines even, such wow!",
line = 9
}
}
}
And a second test for begin/end environments:
local test2 = [[\begin[p1
=apple,
p2=blue]{scope} scope foobar
\end{scope} global foobar]]
Which seems to give approximately what you're looking for:
{
{
{
text = " scope foobar",
line = 3
},
parameters = {
p1 = "apple",
p2 = "blue"
},
beginenv = "scope",
endenv = "scope"
},
{
text = " global foobar",
line = 4
}
}
I got stuck with my Xtext grammar definition. Basically, I'd like to define multiple parameters for a component. The component should contain at least one parameter definition: paramA OR paramB OR paramC OR (paramA AND paramB) OR (paramA AND paramC) OR (paramB AND paramC) OR (paramA AND paramB AND paramC).
Overall these are 7 cases, as you can see in my grammar definition:
Component:
'Define available parameters:' (
(newParamA = ParamA | newParamB = ParamB | newParamC = ParamC)
| (newParamA = ParamA & newParamB = ParamB)
| (newParamA = ParamA & newParamC = ParamC)
| (newParamB = ParamB & newParamC = ParamC)
| (newParamA = ParamA & newParamB = ParamB & newParamC = ParamC)
)
;
ParamA: ('paramA = ' paramA=Integer ';');
ParamB: ('paramB = ' paramB=Integer ';');
ParamC: ('paramC = ' paramC=Integer ';');
// Datatype
Integer returns ecore::EIntegerObject: '-'? INT;
Here is what works when I reduce my grammar to use only (newParamA = ParamA | newParamB = ParamB | newParamC = ParamC), i.e. without the other cases in the first code snippet:
Define available parameters:
paramA = 1;
...
Define available parameters:
paramB = 2;
...
Define available parameters:
paramC = 3;
But I'd like to be able to define multiple available parameters in my DSL, e.g.:
Define available parameters:
paramA = 1; paramB = 2;
...
Define available parameters:
paramB = 2; paramC = 3;
...
Define available parameters:
paramA = 1; paramB = 2; paramC = 3;
Any idea how to resolve that issue? Hope you can help me; I'd appreciate any help!
This is the error I get when generating the grammar from code snippet #1:
warning(200): ../my.packagename/src-gen/my/packagename/projectname/parser/antlr/internal/InternalMyDSL.g:722:1: Decision can match input such as "'paramC = ' '-' RULE_INT ';'" using multiple alternatives: 1, 3, 4, 5
As a result, alternative(s) 3,5,4 were disabled for that input
Semantic predicates were present but were hidden by actions.
...
4514 [main] ERROR enerator.CompositeGeneratorFragment - java.io.FileNotFoundException: ..\my.packagename.ui\src-gen\my\packagename\projectname\ui\contentassist\antlr\internal\InternalMyDSLParser.java (The system cannot find the file specified)
org.eclipse.emf.common.util.WrappedException: java.io.FileNotFoundException: ..\my.packagename.ui\src-gen\my\packagename\projectname\ui\contentassist\antlr\internal\InternalMyDSLParser.java (The system cannot find the file specified)
at org.eclipse.xtext.util.Files.readFileIntoString(Files.java:129)
at org.eclipse.xtext.generator.parser.antlr.AbstractAntlrGeneratorFragment.simplifyUnorderedGroupPredicates(AbstractAntlrGeneratorFragment.java:130)
at org.eclipse.xtext.generator.parser.antlr.AbstractAntlrGeneratorFragment.simplifyUnorderedGroupPredicatesIfRequired(AbstractAntlrGeneratorFragment.java:118)
at org.eclipse.xtext.generator.parser.antlr.XtextAntlrUiGeneratorFragment.generate(XtextAntlrUiGeneratorFragment.java:86)
Here is a workaround I've tried (which works), but it's not a solution because the keywords within the language have to change to avoid the parser error:
('newParamA1 = ' paramA1=Integer ';')
| ('newParamB1 = ' paramB1=Integer ';')
| ('newParamC1 = ' paramC1=Integer ';')
| (('newParamA2 = ' paramA2=Integer ';') & ('newParamB2 = ' paramB2=Integer ';'))
| (('newParamA3 = ' paramA3=Integer ';') & ('newParamC2 = ' paramC2=Integer ';'))
| (('newParamB3 = ' paramB3=Integer ';') & ('newParamC3 = ' paramC3=Integer ';'))
| (('newParamA4 = ' paramA4=Integer ';') & ('newParamB4 = ' paramB4=Integer ';') & ('newParamC4 = ' paramC4=Integer ';'))
I think what you really want is a validation that ensures that at least one parameter is given, on the semantic level rather than on the syntactic level. This will greatly simplify your grammar; e.g. you could just use
(newParamA = ParamA)? & (newParamB = ParamB)? & (newParamC = ParamC)?
(parentheses added for clarity)
Also note that it's generally a good idea to avoid spaces in keywords. You should prefer 'paramA' '=' over 'paramA ='. This will greatly improve the error handling in the lexer / parser.
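Applied to the parameter rules from the question, that keyword split would look roughly like this (a sketch of the suggested change, not code from the original answer):

ParamA: 'paramA' '=' paramA=Integer ';';
ParamB: 'paramB' '=' paramB=Integer ';';
ParamC: 'paramC' '=' paramC=Integer ';';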
What you want to do is something like this:
You want a simple grammar (as Sebastian described it):
(newParamA = ParamA)? & (newParamB = ParamB)? & (newParamC = ParamC)?
To make sure that at least one parameter is required, you can write your own validator, which could look like this:
class MyDSLValidator extends AbstractMyDSLValidator {
@Check
def void atLeastOneParameter(Component component) {
if (component.newParamA == null && component.newParamB == null && component.newParamC == null) {
error('requires at least one parameter definition', MyDSLPackage.Literals.COMPONENT__PARAMA);
}
}
}
The to_expr function leads to an error. Could you advise what is wrong below?
context z3_cont;
expr x = z3_cont.int_const("x");
expr y = z3_cont.int_const("y");
expr ge = ((y==3) && (x==2));
ge = swap_tree( ge );
where swap_tree is a function that should swap the operands of all binary operations. It is defined as follows:
expr swap_tree( expr e ) {
Z3_ast ee[2];
if ( e.is_app() && e.num_args() == 2) {
for ( int i = 0; i < 2; ++i ) {
ee[ 1 - i ] = swap_tree( e.arg(i) );
}
for ( int i = 0; i < 2; ++i ) {
cout <<" ee[" << i << "] : " << to_expr( z3_cont, ee[ i ] ) << endl;
}
return to_expr( z3_cont, Z3_update_term( z3_cont, e, 2, ee ) );
}
else
return e;
}
The problem is "referencing counting". A Z3 object can be garbage collected by the system if its reference counter is 0. The Z3 C++ API provides "smart pointers" (expr, sort, ...) for automatically managing the reference counters for us. Your code uses Z3_ast ee[2]. In the for-loop, you store the result of swap_tree(e.arg(0)) into ee[0]. Since the reference counter is not incremented, this Z3 object may be deleted when executing the second iteration of the loop.
Here is a possible fix:
expr swap_tree( expr e ) {
if ( e.is_app() && e.num_args() == 2) {
// using smart-pointers to store the intermediate results.
expr ee0(z3_cont), ee1(z3_cont);
ee0 = swap_tree( e.arg(0) );
ee1 = swap_tree( e.arg(1) );
Z3_ast ee[2] = { ee1, ee0 };
return to_expr( z3_cont, Z3_update_term( z3_cont, e, 2, ee ) );
}
else {
return e;
}
}
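For illustration, here is how the fixed swap_tree would be exercised with the expression from the question; this reuses the global z3_cont from the snippet above, and the printed form is my expectation rather than verified output:

expr x = z3_cont.int_const("x");
expr y = z3_cont.int_const("y");
expr ge = ((y == 3) && (x == 2));
ge = swap_tree(ge);                  // swaps operands at every binary node
std::cout << ge << std::endl;        // expected: (and (= 2 x) (= 3 y))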