Parsing of language fragments - Xtext

For better Unit testing of my language, I want to test every rule separately.
However, the ParseHelper can only parse input that fully corresponds to the defined grammar.
Consider a language like HTML. I want to test parsing paragraphs without having to nest them in html->head->body etc.
I think ANTLR offers similar possibilities.
Is this achievable in Xtext too?

GitHub issue https://github.com/eclipse/xtext-core/issues/16 addresses this feature. I guess the next release will do what you need.
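Until that lands, one possible workaround is to bypass ParseHelper and call the injected IParser with a specific entry rule taken from the generated grammar access. A minimal sketch, assuming an HTML-like grammar with a Paragraph rule; the MyDslGrammarAccess class and its getParagraphRule() accessor are placeholders for whatever Xtext generates for your language:

```java
import java.io.StringReader;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.parser.IParseResult;
import org.eclipse.xtext.parser.IParser;

import com.google.inject.Inject;

// Sketch of parsing a single rule instead of the whole document.
// MyDslGrammarAccess and getParagraphRule() are assumptions: Xtext generates
// one such accessor per parser rule of your grammar.
public class ParagraphFragmentTest {

    @Inject
    private IParser parser;

    @Inject
    private MyDslGrammarAccess grammarAccess; // hypothetical generated class

    public void parsesAParagraphOnItsOwn() {
        IParseResult result = parser.parse(
                grammarAccess.getParagraphRule(),        // entry rule = Paragraph
                new StringReader("<p>some text</p>"));   // fragment, no html/head/body

        if (result.hasSyntaxErrors()) {
            throw new AssertionError("paragraph fragment did not parse");
        }
        EObject paragraph = result.getRootASTElement();  // AST node of the fragment
    }
}
```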

Related

How to implement a syntax analyzer for a chatbot?

I am actually trying to make a simple chatbot for information retrieval purposes in Slack using Python, and I have come up with a Context-Free Grammar (CFG) for syntax checking. Now that I have a grammar, I want to create a parsing table / parse tree for this grammar to validate my input string. It would be really helpful if you could let me know some libraries/links/materials that can help me implement a parser to perform syntax checks for my chatbot.
Any help is appreciated. Thanks.
If you already wrote a context-free grammar, you can use the ChartParser of NLTK to parse any input, as described here: http://www.nltk.org/book/ch08.html
I think, however, that a hand-written grammar will not be robust enough to deal with the huge number of variations your users could write. Hand-written grammars went out of fashion decades ago due to their poor performance; in constituency parsing one rather uses treebanks to generate grammars.
Depending on what exactly you want to achieve, I suggest you also take a look at a dependency parser, e.g. from spaCy. They are faster and allow you to easily navigate from the predicate of the sentence to its subject and objects.

Profile Antlr grammar

I found this question here in which the OP asks for a way to profile an ANTLR grammar.
However, the answer is somewhat unsatisfying, as it is limited to grammars without actions and - even more importantly - it is an automated profiling that will (as I see it) use the default constructor of the generated lexer/parser to construct it.
I need to profile a grammar, that does contain actions and that has to be constructed using a custom constructor. Therefore I'd need to be able to instantiate the lexer + parser myself and then profile it.
I was unable to find any information on this topic. I know there is a profiler for IntelliJ but it works quite similar to the one described in the linked question's answer (maybe it's even the same).
Does anyone know how I can profile my grammar given these special needs? I don't need any fancy GUI. I'd be satisfied if the result were printed to the console or something like that.
To wrap it up: I'm searching for either a tool or a hint on how to write some code that lets me profile my ANTLR grammar (with self-instantiated lexer/parser).
Btw my target language is Java so I guess the profiler has to be in Java as well.
A good start is calling Parser.setProfile(true) and examining what you get from Parser.getParseInfo() after a parse run. I haven't yet looked closely at what the profiling result provides in detail, but providing profiling info for grammars, to help improve them, is on the todo list for my vscode extension for ANTLR4.
A hint for getting from the decision info to a specific rule: there's a decision number, which is an index into ATN.decisionToState. The DecisionState instance you get from that is an ATNState descendant, which lets you read ATNState.ruleIndex. The rule index can then be used with your parser's ruleNames property to find the name of that rule. The value is also what is used for the rule's enum entry.
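Putting both hints together, a minimal sketch of such a console profiler might look like this; MyLexer, MyParser and startRule() are placeholders for your generated classes and entry rule, and you can construct them with whatever custom constructor your actions need:

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.atn.DecisionInfo;
import org.antlr.v4.runtime.atn.DecisionState;

public class GrammarProfiler {

    public static void main(String[] args) throws Exception {
        // MyLexer/MyParser stand for your generated classes; instantiate them
        // yourself, including any custom constructor arguments.
        MyLexer lexer = new MyLexer(CharStreams.fromFileName(args[0]));
        MyParser parser = new MyParser(new CommonTokenStream(lexer));

        parser.setProfile(true);   // switch to the profiling ATN simulator
        parser.startRule();        // invoke your grammar's entry rule

        // One DecisionInfo per decision in the grammar.
        for (DecisionInfo info : parser.getParseInfo().getDecisionInfo()) {
            if (info.invocations == 0) {
                continue;
            }
            // Map the decision number back to the rule it belongs to.
            DecisionState state = parser.getATN().decisionToState.get(info.decision);
            String ruleName = parser.getRuleNames()[state.ruleIndex];

            System.out.printf("rule %-20s decision %-3d invocations %-6d time %d ns%n",
                    ruleName, info.decision, info.invocations, info.timeInPrediction);
        }
    }
}
```

Sorting the DecisionInfo entries by timeInPrediction or by the number of ambiguities quickly shows which rules are worth restructuring.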

Parsing a Programming Language and Identifying Components of it

I'm looking for steps/libraries/approaches to solve this problem statement.
Given a source file of a programming language, I need to parse it and subdivide it into components.
Example:
Given a Java file, I need to find the following in it:
list of imports
classes present in it
attributes in each class
methods in it - along with the parameters, if any
etc.
I need to extract these and store them separately.
Why I want to do this:
I want to build an inverted index on top of these components.
Example queries to the inverted index:
1. Find the list of files with class name: Sample
2. Find the positions where variable XXX is used within the class AAA.
I need to support queries like the above.
So my plan is: given a file, if I build these components from it, it would be easy to build an inverted index on top of them.
Example: Sample -- Class -- Sample.java (Keyword -- Component -- FileName)
I want to build an inverted index like the above.
I see this implemented in many IDEs like IntelliJ. What I'm interested in is how much effort it would take to build something like this. And I want to try implementing the same for at least one language.
Thanks in advance.
You can try to do this with "just" a parser; for your specific example, that might be enough.
But you'll need a parser for each language. If you stick to just Java, you can find Java parsers pretty easily; just reuse one, there is little point in you reinventing one more set of grammar rules to describe Java.
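For instance, here is a sketch using the open-source JavaParser library (one option among several) that pulls the imports, classes, attributes and methods out of a Java file and prints them as "Keyword -- Component -- FileName" index entries:

```java
import java.nio.file.Paths;

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
import com.github.javaparser.ast.body.FieldDeclaration;
import com.github.javaparser.ast.body.MethodDeclaration;

public class JavaComponentExtractor {

    public static void main(String[] args) throws Exception {
        String fileName = args[0];
        CompilationUnit cu = StaticJavaParser.parse(Paths.get(fileName));

        // Imports
        cu.getImports().forEach(imp ->
                System.out.println(imp.getNameAsString() + " -- Import -- " + fileName));

        // Classes, their attributes (fields) and methods
        for (ClassOrInterfaceDeclaration cls : cu.findAll(ClassOrInterfaceDeclaration.class)) {
            System.out.println(cls.getNameAsString() + " -- Class -- " + fileName);

            for (FieldDeclaration field : cls.getFields()) {
                field.getVariables().forEach(v ->
                        System.out.println(v.getNameAsString() + " -- Attribute -- " + fileName));
            }
            for (MethodDeclaration method : cls.getMethods()) {
                System.out.println(method.getNameAsString() + "("
                        + method.getParameters() + ") -- Method -- " + fileName);
            }
        }
    }
}
```

Storing those tuples in a map from keyword to (component, file) entries gives you the inverted index described in the question.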
For more than one language, this starts to get tricky. You can:
try to find a separate parser for each language. This may be sort of successful for mainstream languages; as you get to less well-known languages, parsers get a lot harder to find. If you succeed, you'll have the problem that the parsers are likely built on incompatible technologies; gluing them together so they collectively produce your index information is going to be a mess.
pick one parsing technology and get grammars for all the languages you care about. You have only two realistic choices: YACC/Bison, and ANTLR.
As a practical matter, YACC and Bison have been used to implement LOTS of languages... but the grammar files are not collected in one place, so they are hard to find. ANTLR at least has a single repository you can find at its web site. So that might kind of work.
It's going to be quite the effort to assemble all these into an integrated whole.
A complication is that you may want more than just raw syntax; you might want to know the meaning of the symbols, and for each symbol, precisely where it is defined in which file. After all, you want your index to be accurate at scale, and this will require differentiating foo the variable name from foo the function name. Arguably you need symbol tables.
As a general rule, this is where pure-parsing of languages breaks down;
there is serious Life After Parsing.
In that case, you want an integrated set of tools for extracting information from the different languages.
Our DMS Software Reengineering Toolkit is such a framework, and has some 40 languages predefined for it. We use something like OP's suggested process to build indexes of a code base for search tools based on DMS. Building something like DMS is an enormous effort.

What do people do with parsers, like ANTLR or JavaCC?

Out of curiosity, I wonder what people can do with parsers, how they are applied, and what people usually create with them?
I know they're widely used in the programming language industry, but I think that is just a tiny portion of their uses, right?
Besides special-purpose languages, my most ambitious use of a parser generator yet (with good old yacc back in C, and again later with pyparsing in Python) was to extract, validate and possibly alter certain meta-info from SQL queries -- parsing SQL properly is a real challenge (especially if you hope to support more than one dialect!-), a parser generator (and a lexer it sits on top of) at least remove THAT part of the job!-)
They are used to parse text....
To give a more concrete example, where I work we use lex/yacc to parse strings coming over sockets.
Also, the name should give you an idea of what javacc is used for (Java compiler compiler!).
Generally to parse Domain Specific Languages or scripting languages, or similar support for code snippets.
Previously I have seen it used to parse the command-line output of another software tool, so that the outer tool (VPN software) could reuse the base router IPSec code without modification, since a lot of what was being parsed was IP route tables and other structured, repeated text.
Using a parser allowed simple changes when the formatting changed, instead of trying to find and tweak a hand-written parser. And the output did change a few times over the life of the product.
I used parsers to help process +/- 800 Clipper source files into similar PRGs that could be compiled with Alaska Xbase 32.
You can use it to extend your favorite language by getting its language definition from their repository and then adding what you've always wanted to have. You can pass the regular syntax to your application and handle the extension in your own program.

How do I implement forward references in a compiler?

I'm creating a compiler with Lex and YACC (actually Flex and Bison). The language allows unlimited forward references to any symbol (like C#). The problem is that it's impossible to parse the language without knowing what an identifier is.
The only solution I know of is to lex the entire source, and then do a "breadth-first" parse, so higher level things like class declarations and function declarations get parsed before the functions that use them. However, this would take a large amount of memory for big files, and it would be hard to handle with YACC (I would have to create separate grammars for each type of declaration/body). I would also have to hand-write the lexer (which is not that much of a problem).
I don't care a whole lot about efficiency (although it still is important), because I'm going to rewrite the compiler in itself once I finish it, but I want that version to be fast (so if there are any fast general techniques that can't be done in Lex/YACC but can be done by hand, please suggest them also). So right now, ease of development is the most important factor.
Are there any good solutions to this problem? How is this usually done in compilers for languages like C# or Java?
It's entirely possible to parse it. Although there is an ambiguity between identifiers and keywords, lex will happily cope with that by giving the keywords priority.
I don't see what other problems there are. You don't need to determine whether identifiers are valid during the parsing stage. You construct either a parse tree or an abstract syntax tree (the difference is subtle, but irrelevant for the purposes of this discussion) as you parse. After that you build your nested symbol table structures by performing a pass over the AST you generated during the parse. Then you do another pass over the AST to check that the identifiers used are valid. Follow this with one or more additional passes over the AST to generate the output code, or some other intermediate data structure, and you're done!
EDIT: If you want to see how it's done, check the source code for the Mono C# compiler. It is actually written in C# rather than C or C++, but it does use a .NET port of Jay, which is very similar to yacc.
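As a minimal sketch of that declare-then-resolve idea, here's a deliberately tiny example; the Decl/Use "AST" and the flat symbol table are invented for illustration and ignore nesting:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A deliberately tiny "AST": a program is just a flat list of declarations and uses.
record Decl(String name) {}
record Use(String name) {}

public class ForwardReferenceCheck {

    public static void main(String[] args) {
        // Equivalent of:  use foo;  declare foo;  use bar;  -- the forward reference is fine.
        List<Object> ast = List.of(new Use("foo"), new Decl("foo"), new Use("bar"));

        // Pass 1: collect every declaration into the symbol table.
        Map<String, Decl> symbols = new HashMap<>();
        for (Object node : ast) {
            if (node instanceof Decl d) {
                symbols.put(d.name(), d);
            }
        }

        // Pass 2: resolve uses; their order relative to declarations no longer matters.
        List<String> errors = new ArrayList<>();
        for (Object node : ast) {
            if (node instanceof Use u && !symbols.containsKey(u.name())) {
                errors.add("undefined symbol: " + u.name());
            }
        }
        errors.forEach(System.out::println);   // prints: undefined symbol: bar
    }
}
```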
One option is to deal with forward references by just scanning and caching tokens till you hit something you know how to deal with (sort of like "panic-mode" error recovery). Once you have run through the full file, go back and try to re-parse the bits that didn't parse before.
As to having to hand-write the lexer: don't. Use lex to generate a normal lexer and just read from it via a hand-written shim that lets you go back and feed the parser from a cache as well as from what lex produces.
As to making several grammars: with a little fun with a preprocessor on the yacc file, you should be able to make them all from the same original source.
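To picture that token cache, here is a small, language-agnostic sketch (written in Java rather than C, with an invented Token type) of a shim that records tokens as they come from the lexer and can rewind to replay an unparsed region later:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Invented Token type and shim; in a real lex/yacc setup the shim would sit
// between yylex() and yyparse() rather than between two Java objects.
record Token(int type, String text) {}

public class ReplayableTokenSource {

    private final Iterator<Token> realLexer;   // the generated lexer
    private final List<Token> cache = new ArrayList<>();
    private int cursor = 0;

    public ReplayableTokenSource(Iterator<Token> realLexer) {
        this.realLexer = realLexer;
    }

    // Hand the parser the next token, remembering it so the region can be replayed.
    public Token next() {
        if (cursor < cache.size()) {
            return cache.get(cursor++);        // currently replaying a cached region
        }
        if (!realLexer.hasNext()) {
            return null;                       // end of input
        }
        Token t = realLexer.next();
        cache.add(t);
        cursor++;
        return t;
    }

    // Remember where an unparsed bit started...
    public int mark() {
        return cursor;
    }

    // ...and come back to it for the second attempt.
    public void rewindTo(int mark) {
        cursor = mark;
    }
}
```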
