How to convert a POJO to XML using the "as" keyword - Grails

I have a requirement to send an object as XML to a web service. I already have the POJO; now I need to convert it to XML using Groovy. In Grails I have used the as keyword. What is the equivalent code to do this in Groovy?
Example Grails code:
import grails.converters.*
render Airport.findByIata(params.iata) as XML

A naive example of doing this with StreamingMarkupBuilder would be:
class Airport {
    String name
    String code
    int id
}

Writable pogoToXml( object ) {
    new groovy.xml.StreamingMarkupBuilder().bind {
        "${object.getClass().name}" {
            object.getClass().declaredFields.grep { !it.synthetic }.name.each { n ->
                "$n"( object."$n" )
            }
        }
    }
}
println pogoToXml( new Airport( name:'Manchester', code:'MAN', id:1 ) )
Which should print:
<Airport><name>Manchester</name><code>MAN</code><id>1</id></Airport>

The as keyword is actually part of the Groovy language spec. The part you are missing is the XML class that does the conversion. This is really just a fancy class that walks the POJO and writes the XML (possibly using MarkupBuilder).
Groovy does not have a built-in class like grails.converters.XML that makes it so easy. Instead, you'll need to manually build the XML using MarkupBuilder or StreamingMarkupBuilder.
Neither of these will automatically convert a POJO or POGO to XML; you'll have to either process this yourself manually, or use reflection to automate the process.
I'd suggest that you might be able to copy the Grails converter over, but it may have a lot of dependencies. Still, it's open source, so that might be a starting point if you need a more reusable component.

Related

Grails 4: how to get a handle to artifacts in a custom command

I need to build a custom command in a Grails 4 application (https://docs.grails.org/4.0.11/guide/single.html#creatingCustomCommands), and I need to get a handle to some Grails services and domain classes which I will query as needed.
The custom command skeleton is quite simple:
import grails.dev.commands.*
import org.apache.maven.artifact.Artifact

class HelloWorldCommand implements GrailsApplicationCommand {
    boolean handle() {
        return true
    }
}
While the documentation says that a custom command has access to the whole application context, I haven't found any examples of how to get a handle on it and start accessing the various application artifacts.
Any hints?
EDIT: to add context and clarify the goal of the custom command, for further recommendations/best practices/etc.: the command reads data from a file in a custom format, persists the data, and writes reports in another custom format.
It will eventually be replaced by a recurring job, once the data is available on demand from a third-party REST API.
See the project at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli.
grails-app/services/marco/vittorini/orgeas/artifacts/cli/GreetingService.groovy
package marco.vittorini.orgeas.artifacts.cli

class GreetingService {
    String greeting = 'Hello World'
}
grails-app/commands/marco/vittorini/orgeas/artifacts/cli/HelloCommand.groovy
package marco.vittorini.orgeas.artifacts.cli

import grails.dev.commands.*

class HelloCommand implements GrailsApplicationCommand {

    GreetingService greetingService

    boolean handle() {
        println greetingService.greeting
        return true
    }
}
EDIT:
I have added a commit at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli/commit/49a846e3902073f8ea0539fcde550f6d002b9d89 which demonstrates accessing a domain class, which was part of the question I overlooked when writing the initial answer.

Multiple Generator calls on Xtext language

We have a project where we are using Xtext to define a grammar and generate a Java output file from this language.
In addition, we want to generate a kind of JSON output file, which has a different extension.
For this we want to split the generators, so as not to mix the Java generation with the JSON generation.
Is there any way we can call two IGenerator2 implementations when compiling the DSL grammar?
Example:
We have one language, common, which generates the Java file; for this we have one IGenerator2 named CommonGenerator, as standard.
Now we want to create a second generator, CommonLineageGenerator, for the JSON file.
I have read several threads and found the following:
component = org.eclipse.xtext.generator.GeneratorComponent {
    register = CommonStandaloneSetup {}
    outlet = {
        path = "${runtimeProject}/src-gen"
    }
}
component = org.eclipse.xtext.generator.GeneratorComponent {
    register = CommonStandaloneSetup2 {}
    outlet = {
        path = "${runtimeProject}/src-gen"
    }
}
where the StandaloneSetup classes create the injectors, the second one overriding the IGenerator2 binding:

// CommonStandaloneSetup
return Guice.createInjector(new CommonRuntimeModule())

// CommonStandaloneSetup2
return Guice.createInjector(new CommonRuntimeModule() {
    override Class<? extends IGenerator2> bindIGenerator2() {
        return CommonTracingGenerator;
    }
});
We are also using mwe2 to generate our language configuration.
When executing our compilation, it now seems that only one generator is picked up. Is there any way we can accomplish this? A wrapper is also a possibility, but we really want to avoid mixing the two types of generation.
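One way to realize that wrapper is a composite generator that delegates each callback to both real generators, so a single binding and a single GeneratorComponent suffice. This is a hypothetical sketch, assuming CommonGenerator and CommonLineageGenerator both implement IGenerator2 and are injectable; it is not tested against this setup:

import com.google.inject.Inject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.generator.IFileSystemAccess2;
import org.eclipse.xtext.generator.IGenerator2;
import org.eclipse.xtext.generator.IGeneratorContext;

// Fans every generator callback out to the Java and the JSON generator,
// keeping the two generation concerns in their own classes.
public class CompositeGenerator implements IGenerator2 {

    @Inject private CommonGenerator javaGenerator;
    @Inject private CommonLineageGenerator jsonGenerator;

    @Override
    public void beforeGenerate(Resource input, IFileSystemAccess2 fsa, IGeneratorContext context) {
        javaGenerator.beforeGenerate(input, fsa, context);
        jsonGenerator.beforeGenerate(input, fsa, context);
    }

    @Override
    public void doGenerate(Resource input, IFileSystemAccess2 fsa, IGeneratorContext context) {
        javaGenerator.doGenerate(input, fsa, context);
        jsonGenerator.doGenerate(input, fsa, context);
    }

    @Override
    public void afterGenerate(Resource input, IFileSystemAccess2 fsa, IGeneratorContext context) {
        javaGenerator.afterGenerate(input, fsa, context);
        jsonGenerator.afterGenerate(input, fsa, context);
    }
}

bindIGenerator2 would then return CompositeGenerator, and the duplicate GeneratorComponent entry in the mwe2 workflow could be dropped.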

CoreNLP: provide POS tags

I have text that is already tokenized, sentence-split, and POS-tagged.
I would like to use CoreNLP to additionally annotate lemmas (lemma), named entities (ner), constituency and dependency parses (parse), and coreference (dcoref).
Is there a combination of commandline options and option file specifications that makes this possible from the command line?
According to this question, I can ask the parser to view whitespace as delimiting tokens, and newlines as delimiting sentences by adding this to my properties file:
tokenize.whitespace = true
ssplit.eolonly = true
This works well, so all that remains is to specify to CoreNLP that I would like to provide POS tags too.
When using the Stanford Parser on its own, it seems to be possible to have it use existing POS tags, but copying that syntax to the invocation of CoreNLP doesn't seem to work. For example, this does not work:
java -cp *:./* -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props my-properties-file -outputFormat xml -outputDirectory my-output-dir -sentences newline -tokenized -tagSeparator / -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerMethod newCoreLabelTokenizerFactory -file my-annotated-text.txt
While this question covers programmatic invocation, I'm invoking CoreNLP from the command line as part of a larger system, so I'm really asking whether it is possible to achieve this with command-line options.
I don't think this is possible with command-line options alone.
If you want, you can make a custom annotator and include it in your pipeline.
Here is some sample code:
package edu.stanford.nlp.pipeline;

import edu.stanford.nlp.ling.*;
import java.util.*;

public class ProvidedPOSTaggerAnnotator {

    public String tagSeparator;

    public ProvidedPOSTaggerAnnotator(String annotatorName, Properties props) {
        tagSeparator = props.getProperty(annotatorName + ".tagSeparator", "_");
    }

    public void annotate(Annotation annotation) {
        for (CoreLabel token : annotation.get(CoreAnnotations.TokensAnnotation.class)) {
            int tagSeparatorSplitLength = token.word().split(tagSeparator).length;
            String posTag = token.word().split(tagSeparator)[tagSeparatorSplitLength - 1];
            String[] wordParts = Arrays.copyOfRange(token.word().split(tagSeparator), 0, tagSeparatorSplitLength - 1);
            String tokenString = String.join(tagSeparator, wordParts);
            // set the word with the POS tag removed
            token.set(CoreAnnotations.TextAnnotation.class, tokenString);
            // set the POS
            token.set(CoreAnnotations.PartOfSpeechAnnotation.class, posTag);
        }
    }
}
This should work if you provide your tokens with the POS tags attached, separated by "_". You can change the separator with the forcedpos.tagSeparator property.
If you set customAnnotatorClass.forcedpos = edu.stanford.nlp.pipeline.ProvidedPOSTaggerAnnotator in the properties file, include the above class in your CLASSPATH, and then include "forcedpos" in your list of annotators after "tokenize", you should be able to pass in your own POS tags.
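For example, a properties file along these lines should tie everything together. This is a sketch: the "/" separator and the file names are carried over from the question, and the annotator list assumes you want lemma, ner, parse, and dcoref downstream:

# register the custom annotator under the name "forcedpos"
customAnnotatorClass.forcedpos = edu.stanford.nlp.pipeline.ProvidedPOSTaggerAnnotator
forcedpos.tagSeparator = /
# run it right after tokenization and sentence splitting
annotators = tokenize, ssplit, forcedpos, lemma, ner, parse, dcoref
tokenize.whitespace = true
ssplit.eolonly = true

With that file and the annotator class on the classpath, the invocation should reduce to something like:

java -cp *:./* -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props my-properties-file -outputFormat xml -outputDirectory my-output-dir -file my-annotated-text.txt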
I may clean this up some more and actually include it in future releases for people!
I have not had time to actually test this code out; if you try it and find errors, please let me know and I'll fix it!

Dataflow output parameterized type to Avro file

I have a pipeline that successfully outputs an Avro file as follows:
@DefaultCoder(AvroCoder.class)
class MyOutput_T_S {
    T foo;
    S bar;
    Boolean baz;
    public MyOutput_T_S() {}
}

@DefaultCoder(AvroCoder.class)
class T {
    String id;
    public T() {}
}

@DefaultCoder(AvroCoder.class)
class S {
    String id;
    public S() {}
}

...

PCollection<MyOutput_T_S> output = input.apply(myTransform);
output.apply(AvroIO.Write.to("/out").withSchema(MyOutput_T_S.class));
How can I reproduce this exact behavior, except with a parameterized output MyOutput<T, S> (where T and S are both Avro code-able using reflection)?
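That is, the goal is a generic version of the output class, roughly like the following (same fields as above, now as type parameters; the @DefaultCoder annotation is omitted since reflection-based coding is exactly what breaks here):

class MyOutput<T, S> {
    T foo;
    S bar;
    Boolean baz;
    public MyOutput() {}
}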
The main issue is that Avro reflection doesn't work for parameterized types. So based on these responses:
Setting Custom Coders & Handling Parameterized types
Using Avrocoder for Custom Types with Generics
1) I think I need to write a custom CoderFactory, but I am having difficulty figuring out exactly how this works (I'm having trouble finding examples). Oddly enough, a completely naive coder factory appears to let me run the pipeline and inspect proper output using DataflowAssert:
cr.registerCoder(MyOutput.class, new CoderFactory() {
    @Override
    public Coder<?> create(List<? extends Coder<?>> componentCoders) {
        Schema schema = new Schema.Parser().parse("{\"type\":\"record\","
            + "\"name\":\"MyOutput\","
            + "\"namespace\":\"mypackage\","
            + "\"fields\":[]}");
        return AvroCoder.of(MyOutput.class, schema);
    }

    @Override
    public List<Object> getInstanceComponents(Object value) {
        MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
        List components = new ArrayList();
        return components;
    }
});
While I can successfully assert against the output now, I expect this will not cut it for writing to a file. I haven't figured out how I'm supposed to use the provided componentCoders to generate the correct schema, and if I try to just shove the schema of T or S into fields I get:
java.lang.IllegalArgumentException: Unable to get field id from class null
2) Assuming I figure out how to encode MyOutput: what do I pass to AvroIO.Write.withSchema? If I pass either MyOutput.class or the schema, I get type mismatch errors.
I think there are two questions (correct me if I am wrong):
How do I enable the coder registry to provide coders for various parameterizations of MyOutput<T, S>?
How do I write values of MyOutput<T, S> to a file using AvroIO.Write?
The first question is to be solved by registering a CoderFactory as in the linked question you found.
Your naive coder is probably allowing you to run the pipeline without issues because serialization is being optimized away. Certainly an Avro schema with no fields will result in those fields being dropped in a serialization+deserialization round trip.
But assuming you fill in the schema with the fields, your approach to CoderFactory#create looks right. I don't know the exact cause of the message java.lang.IllegalArgumentException: Unable to get field id from class null, but the call to AvroCoder.of(MyOutput.class, schema) should work for an appropriately assembled schema. If there is an issue with this, more details (such as the rest of the stack trace) would be helpful.
However, your override of CoderFactory#getInstanceComponents should return a list of values, one per type parameter of MyOutput. Like so:
@Override
public List<Object> getInstanceComponents(Object value) {
    MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
    return ImmutableList.of(myOutput.foo, myOutput.bar);
}
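As for assembling the schema in create, here is a rough sketch of how the component coders might be used. It assumes the registry hands you AvroCoder instances for T and S (so their schemas can be reused) and Avro 1.7.x as bundled with the Dataflow SDK; treat it as illustrative rather than tested:

import java.util.Arrays;
import java.util.List;
import org.apache.avro.Schema;

// Build the MyOutput record schema out of the component coders' schemas.
// The field names (foo, bar, baz) mirror the MyOutput fields shown earlier.
@Override
public Coder<?> create(List<? extends Coder<?>> componentCoders) {
    Schema fooSchema = ((AvroCoder<?>) componentCoders.get(0)).getSchema(); // schema for T
    Schema barSchema = ((AvroCoder<?>) componentCoders.get(1)).getSchema(); // schema for S
    Schema schema = Schema.createRecord("MyOutput", null, "mypackage", false);
    schema.setFields(Arrays.asList(
        new Schema.Field("foo", fooSchema, null, null),
        new Schema.Field("bar", barSchema, null, null),
        new Schema.Field("baz", Schema.create(Schema.Type.BOOLEAN), null, null)));
    return AvroCoder.of(MyOutput.class, schema);
}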
The second question can be answered using some of the same support code as the first, but otherwise is independent. AvroIO.Write.withSchema always explicitly uses the provided schema. It does use AvroCoder under the hood, but this is actually an implementation detail. Providing a compatible schema is all that is necessary - such a schema will have to be composed for each value of T and S for which you want to output MyOutput<T, S>.

BizTalk mapping Date to String

I'm working on a BizTalk project and use a map to create the new message.
Now I want to map a date field to a string.
I thought I could do it this way, with a Scripting Functoid using inline C#:
public string convertDateTime(DateTime param)
{
    return System.Xml.XmlConvert.ToString(param, "yyyyMMdd");
}
But this doesn't work and I receive an error. How can I do the conversion in the map?
It's a BizTalk 2006 project.
Without the details of the error you are seeing it is hard to be sure, but I'm quite sure that your map is failing because all the parameters within the BizTalk XSLT engine are passed as strings1.
When I try to run something like the function you provided as inline C# I get the following error:
Object of type 'System.String' cannot be converted to type 'System.DateTime'
Replace your inline C# with something like the following:
public string ConvertDateTime(string param1)
{
    DateTime inputDate = DateTime.Parse(param1);
    return inputDate.ToString("yyyyMMdd");
}
Note that the parameter type is now string, and you can then convert that to a DateTime and perform your string format.
As other answers have suggested, it may be better to put this helper method into an external class - that way you can get your code under test to deal with edge cases, and you also get some reuse.
1 The fact that all parameters in BizTalk XSLT are strings can be the source of a lot of gotchas - one other common one is with math calculations. If you return numeric values from your scripting functoids, BizTalk will helpfully convert them to strings to map them to the outbound schema, but will not so helpfully perform some very random rounding on the resulting values. Converting the return values to strings yourself within the C# will remove this risk and give you the expected results.
If you're using the mapper, you just need a Scripting Functoid (yes, using inline C#) and you should be able to do:

public string convertDateTime(DateTime param)
{
    return param.ToString("yyyyMMdd");
}
As far as I know, you don't need to call into the System.Xml namespace at all.
I'd suggest
public static string DateToString(DateTime dateValue)
{
    return String.Format("{0:yyyyMMdd}", dateValue);
}
You could also create an external lib, which would provide more flexibility and reusability:

public static string DateToString(DateTime dateValue, string formatPicture)
{
    string format = formatPicture;
    if (String.IsNullOrEmpty(formatPicture))
    {
        format = "{0:yyyyMMdd}";
    }
    return String.Format(format, dateValue);
}

public static string DateToString(DateTime dateValue)
{
    return DateToString(dateValue, null);
}
I tend to move every function I use twice inside an inline script into an external lib. It will give you well-tested code for all the edge cases your data may provide, because it's easy to create tests for these external lib functions, whereas it's hard to do good testing on inline scripts in maps.
This blog post addresses your problem:
http://biztalkorchestration.blogspot.in/2014/07/convert-datetime-format-to-string-in.html?view=sidebar
Given that maps in BizTalk are implemented as XSL stylesheets, when passing data into an msxsl scripting function, note that the data will be one of the types in the Equivalent .NET Framework Class (Types) column of this table. You'll note that System.DateTime isn't on the list.
For parsing xs:dateTime values, I've generally obtained the /text() node and then parsed the parameter from a System.String:
<CreateDate>
    <xsl:value-of select="userCSharp:GetDateyyyyMMdd(string(s0:StatusIdChangeDate/text()))" />
</CreateDate>
And then the C# script
<msxsl:script language="C#" implements-prefix="userCSharp">
    <![CDATA[
    public System.String GetDateyyyyMMdd(System.String p_DateTime)
    {
        return System.DateTime.Parse(p_DateTime).ToString("yyyyMMdd");
    }
    ]]>
</msxsl:script>
