How to generate Java code for Xbase XExpression using JvmModelInferrer? - xtext

I am trying to do the simplest example I can think of to use Xbase and the JvmModelInferrer, rather than writing a code generator. I've cut down the JVM language tutorial but I can't get correct Java code from an XExpression (or XBlockExpression). I've looked at answers like :-
How to JvmModelInferrer method body from XExpression and append boilerplate code
The specific error I am getting currently is that for an expression like 2+2, the code I generate is :-
return 2./* name is null */;
My grammar is :-
grammar org.xtext.example.mydsl.MyDsl with org.eclipse.xtext.xbase.Xbase
generate myDsl "http://www.xtext.org/example/mydsl/MyDsl"
Model:
    functions+=Function*
;

Function:
    'function' name=ID 'body' exp=XBlockExpression
;
and my JvmModelInferrer is :-
def dispatch void infer(Model element, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
    acceptor.accept(element.toClass("my.company.Functions")) [
        for (function : element.functions) {
            members += function.toMethod(function.name, typeRef(Object)) [
                body = function.exp
            ]
        }
    ]
}
For input :-
function TwoPlusTwo body {2+2}
The generated code is :-
package my.company;

public class Functions {
    public java.lang.Object TwoPlusTwo() {
        return 2./* name is null */;
    }
}
Am I making some completely basic error, or do I have some fundamental misunderstanding?
I'm using Windows 10, Eclipse 2019-12, Xtext 2.20.0, and the Corretto JVM.
Any help would be appreciated.

As suggested by Christian, the project needs to have the correct libraries added.
The Five Steps to Your JVM Language tutorial says this; I had just forgotten to do it :-
In the new workbench, create a Java project (File → New → Project… → Java Project). Xbase relies on a small runtime library on the class path. To add this, right-click on the project and go to Java Build Path → Libraries → Add Library and choose Xtend Library.
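With the Xbase runtime library on the class path, the '+' operator resolves and the inferrer produces ordinary Java. For the TwoPlusTwo example, the output should look roughly like this (a sketch; the exact boxing and formatting depend on the Xbase compiler version):

package my.company;

public class Functions {
    public Object TwoPlusTwo() {
        return Integer.valueOf((2 + 2));
    }
}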

Related

CoreNLP: provide POS tags

I have text that is already tokenized, sentence-split, and POS-tagged.
I would like to use CoreNLP to additionally annotate lemmas (lemma), named entities (ner), constituency and dependency parses (parse), and coreference (dcoref).
Is there a combination of commandline options and option file specifications that makes this possible from the command line?
According to this question, I can ask the parser to view whitespace as delimiting tokens, and newlines as delimiting sentences by adding this to my properties file:
tokenize.whitespace = true
ssplit.eolonly = true
This works well, so all that remains is to specify to CoreNLP that I would like to provide POS tags too.
When using the Stanford Parser standing alone, it seems to be possible to have it use existing POS tags, but copying that syntax to the invocation of CoreNLP doesn't seem to work. For example, this does not work:
java -cp *:./* -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props my-properties-file -outputFormat xml -outputDirectory my-output-dir -sentences newline -tokenized -tagSeparator / -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerMethod newCoreLabelTokenizerFactory -file my-annotated-text.txt
While this question covers programmatic invocation, I'm invoking CoreNLP from the command line as part of a larger system, so I'm really asking whether it is possible to achieve this with command-line options.
I don't think this is possible with command line options.
If you are willing to make a custom annotator and include it in your pipeline, you could go that route.
Here is some sample code:
package edu.stanford.nlp.pipeline;

import edu.stanford.nlp.ling.*;

import java.util.*;

/**
 * Strips a trailing POS tag from each token (e.g. "dog_NN") and stores it
 * as the token's PartOfSpeechAnnotation, so no tagger needs to be run.
 */
public class ProvidedPOSTaggerAnnotator {

    public String tagSeparator;

    public ProvidedPOSTaggerAnnotator(String annotatorName, Properties props) {
        tagSeparator = props.getProperty(annotatorName + ".tagSeparator", "_");
    }

    public void annotate(Annotation annotation) {
        for (CoreLabel token : annotation.get(CoreAnnotations.TokensAnnotation.class)) {
            int tagSeparatorSplitLength = token.word().split(tagSeparator).length;
            String posTag = token.word().split(tagSeparator)[tagSeparatorSplitLength - 1];
            String[] wordParts = Arrays.copyOfRange(token.word().split(tagSeparator), 0, tagSeparatorSplitLength - 1);
            String tokenString = String.join(tagSeparator, wordParts);
            // set the word with the POS tag removed
            token.set(CoreAnnotations.TextAnnotation.class, tokenString);
            // set the POS
            token.set(CoreAnnotations.PartOfSpeechAnnotation.class, posTag);
        }
    }
}
This should work if you provide your tokens with POS tags attached, separated by "_". You can change the separator with the forcedpos.tagSeparator property.
If you set customAnnotatorClass.forcedpos = edu.stanford.nlp.pipeline.ProvidedPOSTaggerAnnotator in the properties file, include the above class in your CLASSPATH, and then include "forcedpos" in your list of annotators after "tokenize", you should be able to pass in your own POS tags.
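Putting that together, the properties file might look roughly like this (a sketch; the annotator name forcedpos and the trailing annotator list are assumptions based on the description above):

tokenize.whitespace = true
ssplit.eolonly = true
customAnnotatorClass.forcedpos = edu.stanford.nlp.pipeline.ProvidedPOSTaggerAnnotator
forcedpos.tagSeparator = _
annotators = tokenize, ssplit, forcedpos, lemma, ner, parse, dcoref
# depending on the CoreNLP version you may also need enforceRequirements = false,
# since the custom annotator does not declare that it satisfies the pos requirement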
I may clean this up some more and actually include it in future releases for people!
I have not had time to actually test this code out; if you try it and find errors, please let me know and I'll fix it!

Get declarations from an external source in Xtext

I have the following grammar:
Model: declarations += Declaration* statements += Statement*;
Declaration: 'Declare' name=ID;
Statement: 'Execute' what=[Declaration];
With that I can write simple scripts like:
Declare step_forward
Declare turn_right
Declare turn_left
Execute step_forward
Execute turn_left
Execute step_forward
Now I want the Java program to provide all the declarations, so that the script only contains the Execute statements. I read about IGlobalScopeProvider, which seems to be the right tool for the job, but I have no idea how to add my data to it or how to make Xtext use it.
So, how can I provide declarations to my grammar from an external source?
Update
My goal was somewhat unclear, so I'll try to make it more concrete. I want to keep the declarations as simple Java objects, for instance:
List<Move> declarations = Arrays.asList(
    new Move("step_forward"),
    new Move("turn_right"),
    new Move("turn_left"));
and the script should be:
Execute step_forward
Execute turn_left
Execute step_forward
I'm not really sure what you are asking for. After thinking about it, I can derive the following possible questions:
1.) You want to split your script into two files. File A will only contain your declarations and File B will only contain statements, but any 'what' attribute will hold a reference to the declarations of File A.
This works out of the box with your grammar.
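For example, reusing the script from the question, File A would contain only the declarations and File B only the statements:

File A:
Declare step_forward
Declare turn_right
Declare turn_left

File B:
Execute step_forward
Execute turn_left
Execute step_forward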
2.) You have some Java source code which provides, for example, a 'Declare' interface, and you want the 'what' attribute to reference this interface or classes which implement it.
Updated answer: You should use Xbase within your language. There you can define that your 'what' attribute references any Java type, using the Xtypes rule 'JvmTypeReference'. The modifications you have to make within your grammar are not that difficult; I think it could look like this:
// Grammar now inherits from the Xbase grammar
// instead of the common terminals grammar
grammar org.xtext.example.mydsl.MyDsl with org.eclipse.xtext.xbase.Xbase
generate myDsl "http://www.xtext.org/example/mydsl/MyDsl"
Model:
    declarator=Declarator?
    statements+=Statement*;

Declarator:
    'Declare' name=ID;

Statement:
    'Execute' what=JvmTypeReference;
Then, you can refer to any Java type (the Java API, any linked API, user-defined types) by addressing it with its qualified name. It would look like this:
Referring to JVM types looks like this in an Xtext language. (Screenshot)
You can also validate whether the referenced JVM type is valid, e.g. whether it implements a desired interface, which I would define with a single, optional declarator in the model.
The referenced JVM type is checked for being a valid type. (Screenshot)
With Xbase it is very easy to infer a Java interface for this model element. Use the generated stub '...mydsl.MyDslJvmModelInferrer':
class MyDslJvmModelInferrer extends AbstractModelInferrer {

    @Inject extension JvmTypesBuilder
    @Inject extension TypeReferences

    def dispatch void infer(Model element, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
        acceptor.accept(
            element.declarator.toInterface('declarator.' + element.declarator.name) [
                members += it.toMethod("execute", TypesFactory.eINSTANCE.createJvmVoid.createTypeRef)[]
            ])
    }
}
It derives a single interface, named after the declarator, with only one method, 'execute()'.
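For the example declaration 'Declare step_forward', the inferred interface would correspond roughly to this Java (a sketch; the package name follows the 'declarator.' prefix used in the inferrer above):

package declarator;

public interface step_forward {
    void execute();
}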
Then, to implement static checks like this, you should use the generated stub '...mydsl.validation.MyDslValidator'. My example is very quick and dirty, but you should get the idea of it:
class MyDslValidator extends AbstractMyDslValidator {

    @Check
    def checkReferredType(Statement s) {
        val declarator = (s.eContainer as Model).declarator.name
        for (st : (s.what.type as JvmDeclaredType).superTypes) {
            if (st.qualifiedName.equals('declarator.' + declarator)) {
                return
            }
        }
        warning(s.what.simpleName + " doesn't implement the declarator interface " + declarator,
            MyDslPackage.eINSTANCE.statement_What)
    }
}
(I used the preferred Xtend programming language to implement the static checking!) The static check determines whether the given JvmTypeReference (which is a Java class from your project) implements the declared interface; otherwise it adds a warning to your DSL document.
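For illustration, a user class that would satisfy this check for 'Declare step_forward' could look like this (a hypothetical example; the class name and package are made up):

package my.project;

public class StepForward implements declarator.step_forward {
    @Override
    public void execute() {
        // perform the move
    }
}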
Hopefully this will answer your question.
Next update: Your idea will not work that well! You could simply write a template with Xtend for that without using Xbase, but I cannot imagine how to use it in a good way. The problem is, I assume, that you don't want to generate the whole class 'Move' and the whole execution process. I have played around a little bit trying to generate usable code and it seems to be hacky! Nevertheless, here is my solution:
Xtext generates the stub '...mydsl.generator.MyDslGenerator' for you with the method 'void doGenerate'. You have to fill this method. My idea is the following: first, you generate an abstract, generic Executor class with two generic parameters T and U. The Executor class has an abstract method 'executeMoves()' with return type T; if the execution should return nothing, use the non-primitive 'Void' class. It holds your list, but of the generic type U, which is defined as a subclass of the Move class.
The Move class is generated too, but only with a field to store the String; it then has to be subclassed. My 'MyDslGenerator' looks like this:
class MyDslGenerator implements IGenerator {

    static var cnt = 0

    override void doGenerate(Resource resource, IFileSystemAccess fsa) {
        cnt = 0
        resource.allContents.filter(typeof(Model)).forEach [ m |
            fsa.generateFile('mydsl/execution/Move.java', generateMove)
            fsa.generateFile('mydsl/execution/Executor' + cnt++ + '.java', m.generateExecutor)
        ]
    }

    def generateMove() '''
        package mydsl.execution;

        public class Move {
            protected String s;

            public Move(String s) {
                this.s = s;
            }
        }
    '''

    def generateExecutor(Model m) '''
        package mydsl.execution;

        import java.util.List;
        import java.util.Arrays;

        /**
         * The class Executor is abstract because the execution has to be implemented somewhere else.
         * The class Executor is generic because one does not know whether the execution has a return
         * value. If it has no return value, use the non-primitive type 'Void':
         * public class MyExecutor extends Executor_i<Void, Move> {...}
         */
        public abstract class Executor«cnt - 1»<T, U extends Move> {

            @SuppressWarnings("unchecked")
            private List<U> declarations = Arrays.<U>asList(
                «FOR Statement s : m.statements»
                    (U) new Move("«s.what.name»")«IF !m.statements.get(m.statements.size - 1).equals(s)»,«ENDIF»
                «ENDFOR»
            );

            /**
             * This method returns the list of moves.
             */
            public List<U> getMoves() {
                return declarations;
            }

            /**
             * The Executor class has to be extended and the extending class has to implement this
             * method.
             */
            public abstract T executeMoves();
        }
    '''
}
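The generated Executor then has to be subclassed by hand. A minimal sketch in plain Java (the class name MyExecutor and the println body are made up for illustration; Void is used because this execution returns nothing):

package mydsl.execution;

// Hypothetical subclass of the generated Executor0
public class MyExecutor extends Executor0<Void, Move> {

    @Override
    public Void executeMoves() {
        for (Move m : getMoves()) {
            System.out.println("Executing move: " + m);  // Move has no toString(), so this prints an object reference
        }
        return null;
    }
}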

Invoking java functions from RASCAL

Can I invoke Java functions from Rascal? I want to write a Rascal analyser, but I want to access CFG nodes by calling a Java function. Is this possible in Rascal? To put it simply, can I wrap an existing Java application and invoke it from Rascal?
Sure. It works as follows.
Treat your Rascal project in Eclipse as a Java project as well.
Add source code and libraries and make the stuff compile.
Learn about the pdb.values API (in particular IValueFactory).
In Rascal, write something like this:
@javaClass{com.mypackage.MyClass}
java int myFunction(str arg);
Then in Java:
package com.mypackage;

// pdb.values interfaces (package name assumed from the pdb.values library)
import org.eclipse.imp.pdb.facts.IString;
import org.eclipse.imp.pdb.facts.IValue;
import org.eclipse.imp.pdb.facts.IValueFactory;

public class MyClass {

    private final IValueFactory vf;

    public MyClass(IValueFactory vf) {
        this.vf = vf;
    }

    public IValue myFunction(IString x) {
        return vf.integer(-1);
    }
}
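To give a feel for the IValueFactory API, here is a second, hypothetical bridged method in the same class (the method name and behaviour are made up; it assumes a matching Rascal declaration such as java list[str] splitWords(str text)):

public IValue splitWords(IString text) {
    String[] parts = text.getValue().split("\\s+");
    IValue[] elems = new IValue[parts.length];
    for (int i = 0; i < parts.length; i++) {
        elems[i] = vf.string(parts[i]);   // wrap each Java String as a Rascal str value
    }
    return vf.list(elems);                // build a Rascal list value
}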

Is there any way to use JavaScript attribute by default?

I just want somehow to say "I want all methods in this project to use [JavaScript]".
Manually annotating every method is annoying.
F# 3 lets you mark a module with the ReflectedDefinition attribute (aka [JavaScript] in WebSharper) which marks all the methods underneath.
See More About F# 3.0 Language Features:
(Speaking of uncommon attributes, in F# 3.0, the [<ReflectedDefinition>] attribute can now be placed on modules and type definitions, as a shorthand way to apply it to each individual member of the module/type.)
I think Phil's answer is the way to go - when you can mark an entire module or type, it does not add too much noise and it also allows you to distinguish between server-side and client-side code in WebSharper.
Just for the record, the F# compiler is open source, so someone (who finds this issue important) could easily create a branch that adds an additional command-line option to override the setting. I think this is just a matter of adding the parameter and then setting the default value of the reflect flag in check.fs (here is the source on GitHub).
At the moment, the main F# repository does not accept contributions that add new features (see the discussion here), but it is certainly a good way to send a feature request to the F# team :-)
If you annotate all your code with the JavaScript attribute, the WebSharper compiler will try to translate everything to JavaScript. A rule of thumb in WebSharper development is to separate server-side and client-side code, so you can simply annotate the module/class containing client-side code instead of every function/member if you're targeting .NET 4.5.
namespace Website

open IntelliFactory.WebSharper

module HelloWorld =

    module private Server =
        [<Rpc>]
        let main() = async { return "World" }

    [<JavaScript>] // or [<ReflectedDefinition>]
    module Client =
        open IntelliFactory.WebSharper.Html

        let sayHello() =
            async {
                let! world = Server.main()
                JavaScript.Alert <| "Hello " + world
            }

        let btn =
            Button [Text "Click Me"]
            |>! OnClick (fun _ _ ->
                async {
                    do! sayHello()
                } |> Async.Start)

        let main() = Div [btn]

    type Control() =
        inherit Web.Control()

        [<JavaScript>]
        override __.Body = Client.main() :> _

How to convert a pojo to xml using "as" keyword

I have a requirement to send an object as XML to a web service. I already have the POJO; now I need to convert it to XML using Groovy. In Grails I have used the as keyword; what is the equivalent code to do this in Groovy?
Example Grails code:
import grails.converters.*
render Airport.findByIata(params.iata) as XML
A naive example of doing this with StreamingMarkupBuilder would be:
class Airport {
    String name
    String code
    int id
}

Writable pogoToXml( object ) {
    new groovy.xml.StreamingMarkupBuilder().bind {
        "${object.getClass().name}" {
            object.getClass().declaredFields.grep { !it.synthetic }.name.each { n ->
                "$n"( object."$n" )
            }
        }
    }
}

println pogoToXml( new Airport( name:'Manchester', code:'MAN', id:1 ) )
Which should print:
<Airport><name>Manchester</name><code>MAN</code><id>1</id></Airport>
The as keyword is actually part of the Groovy language spec. The part you are missing is the XML class that does the conversion. This is really just a fancy class that walks the POJO and writes the XML (possibly using MarkupBuilder).
Groovy does not have a built-in class like grails.converters.XML that makes it so easy. Instead, you'll need to manually build the XML using MarkupBuilder or StreamingMarkupBuilder.
Neither of these will automatically convert a POJO or POGO to XML, you'll have to either process this yourself manually, or use reflection to automate the process.
I'd suggest that you might be able to copy the Grails converter over, but it may have a lot of dependencies. Still, it's open source, so that might be a starting point if you need a more reusable component.
