I have a JBehave test which lists some expected results in an ExamplesTable:
Then result is :
|name|value|
|foo|2011-05-29|
|bar|baz|
And the object under test is something like:
class A {
    private Date foo;
    private String bar;
    /* ... */
}
How do I tell JBehave to consider the parameter for foo as a date? I would prefer to implement my own converter.
See the documentation on parameter converters. Note that if you are taking in an ExamplesTable object and calling get on a row, you need to convert the value yourself, or reuse an existing converter. Also look at JBEHAVE-398, which I've not tried yet but might help if you are using JBehave 3.2 or higher.
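For illustration, here is a minimal sketch of wiring a date converter into the configuration, written against the JBehave 3.x API (the StoryConfiguration holder class is an assumption, and the yyyy-MM-dd format is taken from the table above; addConverters also accepts your own ParameterConverter implementations):

import java.text.SimpleDateFormat;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.steps.ParameterConverters;
import org.jbehave.core.steps.ParameterConverters.DateConverter;

public class StoryConfiguration {
    // Builds a configuration whose converters turn "2011-05-29" into a java.util.Date.
    public static Configuration configuration() {
        return new MostUsefulConfiguration()
                .useParameterConverters(new ParameterConverters()
                        // addConverters also takes custom ParameterConverter implementations
                        .addConverters(new DateConverter(new SimpleDateFormat("yyyy-MM-dd"))));
    }
}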
I was testing my Dataflow pipeline using the DirectRunner from my Mac and got lots of WARNING messages like the one below. How can I get rid of them? There are so many that I cannot even see my own debug messages.
Thanks.
Apr 05, 2018 2:14:48 PM org.apache.beam.sdk.util.MutationDetectors$CodedValueMutationDetector verifyUnmodifiedThrowingCheckedExceptions
WARNING: Coder of type class org.apache.beam.sdk.coders.SerializableCoder has a #structuralValue method which does not return true when the encoding of the elements is equal.
Element com.apigee.analytics.platform.core.service.schema.EventRow@4a590d0b
It may help to ensure that all serialized values have proper equals() implementations since SerializableCoder expects them:
The structural value of the object is the object itself. The SerializableCoder should be only used for objects with a proper Object#equals implementation.
You can implement your own Coder for your POJOs. SerializableCoder does not guarantee a deterministic encoding, according to the docs:
SerializableCoder does not guarantee a deterministic encoding, as Java
serialization may produce different binary encodings for two equivalent
objects.
This article explains custom coders in detail.
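For illustration, here is a minimal sketch of such a coder, assuming a hypothetical EventRow with just an id and a timestamp (a real EventRow would need every field encoded):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.beam.sdk.coders.CustomCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;

// Deterministic coder for a hypothetical EventRow(String id, long timestamp).
public class EventRowCoder extends CustomCoder<EventRow> {

    private static final StringUtf8Coder STRING_CODER = StringUtf8Coder.of();
    private static final VarLongCoder LONG_CODER = VarLongCoder.of();

    @Override
    public void encode(EventRow value, OutputStream outStream) throws IOException {
        // Encode each field with a deterministic component coder.
        STRING_CODER.encode(value.getId(), outStream);
        LONG_CODER.encode(value.getTimestamp(), outStream);
    }

    @Override
    public EventRow decode(InputStream inStream) throws IOException {
        return new EventRow(STRING_CODER.decode(inStream), LONG_CODER.decode(inStream));
    }

    @Override
    public void verifyDeterministic() {
        // Field-by-field encoding with deterministic component coders is deterministic.
    }
}

You would then register it on the pipeline, e.g. pipeline.getCoderRegistry().registerCoderForClass(EventRow.class, new EventRowCoder()).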
I had this same problem. I was using SerializableCoder for a class implementing Serializable, and my tests were failing because PAssert's containsInAnyOrder() method was not using MyClass.equals() to evaluate object equality. The signature of my equals() method was:
public boolean equals(MyClass other) {...}
All I had to do to fix it was to define equals in terms of Object:
public boolean equals(Object other) {...}
This made the warnings go away, and made the tests pass.
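The distinction matters because equals(MyClass) merely overloads Object.equals instead of overriding it, so framework code holding an Object reference never calls it. A minimal sketch of the corrected version (the name field is a placeholder):

import java.io.Serializable;
import java.util.Objects;

public class MyClass implements Serializable {

    private String name; // placeholder field

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof MyClass)) return false;
        return Objects.equals(name, ((MyClass) other).name);
    }

    @Override
    public int hashCode() {
        // Always override hashCode alongside equals.
        return Objects.hash(name);
    }
}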
Just add Lombok's @EqualsAndHashCode annotation (https://projectlombok.org/features/EqualsAndHashCode):
@EqualsAndHashCode
public class YourRecord implements Serializable {
    ...
}
I am trying to understand when Orika uses converters to do the mapping versus direct casting.
I have the following mapping:
class A {
    Map<String, Object> props;
}

class B {
    String bStr;
    int bInt;
}
My mapping is defined as props['aStr'] => bStr and props['aInt'] => bInt.
When I look at the generated code, I see that for the String case, it uses a converter and calls its convert method for the transformation:
destination.setBStr("" + ((ma.glasnost.orika.Converter) usedConverters[0]).convert(
    (java.lang.Object) ((java.util.Map) source.getProps()).get("aStr"),
    (ma.glasnost.orika.metadata.Type) usedTypes[0]));
But, for the integer case it directly casts it like this:
destination.setBInt((java.lang.Integer) (java.lang.Object)
    ((java.util.Map) source.getProps()).get("aInt"));
The above line of code ends up throwing a ClassCastException.
To fix this issue, I was thinking along the lines of using a custom converter, but if the above line of code doesn't use the converter then that won't work.
Of course, I can always do this in my own custom mapper, but I am just trying to understand how the code is generated for type conversion.
Thanks!!
In Orika there are two stages: config time and runtime. As an optimization, Orika resolves all the converters used at config time and caches them into each generated mapper, so at runtime each one is accessible directly, in O(1). At config time, however, Orika searches the list of registered converters, in O(n), for one that "canConvert" between the two given types; canConvert is a method of the Converter interface.
This solution offers the best of both worlds:
A very flexible way to register a converter with arbitrary conditions
An efficient resolution and conversion operation at runtime
By default, Orika leverages the existence of toString() on every object to offer implicit coercion to String for every Object. The problem here is that there is no converter from Object to Integer.
This may be an issue of error reporting: ideally, Orika should report that an Object has to be converted to Integer and that no appropriate converter is registered.
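For illustration, a minimal sketch of registering such a converter, written against the Orika 1.5.x API (where convert also receives a MappingContext; older versions omit that parameter). The Integer.valueOf parsing is an assumption about what the conversion should do:

import ma.glasnost.orika.CustomConverter;
import ma.glasnost.orika.MapperFactory;
import ma.glasnost.orika.MappingContext;
import ma.glasnost.orika.impl.DefaultMapperFactory;
import ma.glasnost.orika.metadata.Type;

public class ObjectToIntegerRegistration {
    public static MapperFactory buildFactory() {
        MapperFactory mapperFactory = new DefaultMapperFactory.Builder().build();
        // Registered converters are matched via canConvert at config time,
        // then cached into the generated mapper for O(1) access at runtime.
        mapperFactory.getConverterFactory().registerConverter(
            new CustomConverter<Object, Integer>() {
                @Override
                public Integer convert(Object source, Type<? extends Integer> destinationType,
                                       MappingContext mappingContext) {
                    return source == null ? null : Integer.valueOf(source.toString());
                }
            });
        return mapperFactory;
    }
}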
Are there any plans to introduce attributes for classes, methods, and method parameters, something like C# attributes or Java annotations?
[Test]
class SomeClass
{
    [Test]
    someMethod()
}
or
@Test
class SomeClass
{
    @Test
    someMethod(@Test int param)
}
For many frameworks it would be very useful.
In Dart, they are called metadata / annotations. The syntax is quite close to Java's. Here's an example:
@Test testMethod() {}
In the Dart Specification you can read:
Metadata consists of a series of annotations, each of which begin with the character @, followed by a constant expression that starts with an identifier. It is a compile-time error if the expression is not one of the following:
A reference to a compile-time constant variable.
A call to a constant constructor.
[....]
Metadata can appear before a library, class, typedef, type parameter, constructor, factory, function, field, parameter, or variable declaration and before an import or export directive.
There are already some annotations predefined in dart:core, particularly @override, @deprecated and @proxy.
Dart already has annotations, similar to Java's in some ways; they're just not used in very many places yet, and they're not accessible from reflection yet either.
See this article: http://news.dartlang.org/2012/06/proposal-to-add-metadata-to-dart.html
Here's a brief introduction to the two metadata annotations currently available in the Dart meta library:
Dart Metadata is your friend.
This doesn't preclude you from using your own, but these are the two that have tooling integration with the Dart Editor.
I was trying to define a dynamic method using the GroovyDSL scripting capabilities in IntelliJ. The getting-started guide "Scripting IDE for DSL awareness" gives you a good idea of how to get started with this. The dynamic method definition in my DSL is more complex than the example though, and I am not quite sure how to build it. You can imagine it working like dynamic finder methods in Grails, except that you can combine an arbitrary number of criteria with a boolean And keyword in any order. None of the keywords are defined in the class that I am invoking the method on, but they could be defined in the GDSL file. The method name always starts with submitTransactionWith. Here's an example:
clientDSL.submitTransactionWithCriteria1AndCriteria2AndCriteria3(arg1, arg2, arg3)
I started out with this, but it only works for a single criterion and doesn't take into account that you can combine multiple criteria in any order.
def ctx = context(ctype: "my.client.ClientDSL")
contributor(ctx) {
    ['Criteria1', 'Criteria2', 'Criteria3'].each {
        method name: "submitTransactionWith${it}",
               type: 'java.util.Map',
               params: [args: 'java.util.Collection']
    }
}
I was wondering if there's special support for that kind of dynamic method. I'd also be interested in how the DSL for Grails is modeled internally in IntelliJ.
The Grails DSL is located in ${idea.home}/plugins/GrailsGriffon/lib/standardDsls.
It will probably help with your problem. They create string arrays of all the method-name combinations ahead of time, and then just iterate through them in their contributor, creating a method for each of those names.
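To illustrate that approach, here is a sketch of precomputing every ordered combination of the criteria from the question (shown in plain Java for clarity; the same enumeration can be done inline in the GDSL contributor before its each loop):

import java.util.ArrayList;
import java.util.List;

public class MethodNameCombinations {

    // Generates submitTransactionWith<C1>And<C2>... for every non-empty
    // ordered selection of distinct criteria.
    public static List<String> methodNames(List<String> criteria) {
        List<String> names = new ArrayList<>();
        permute(criteria, new ArrayList<>(), names);
        return names;
    }

    private static void permute(List<String> remaining, List<String> chosen, List<String> out) {
        if (!chosen.isEmpty()) {
            out.add("submitTransactionWith" + String.join("And", chosen));
        }
        for (int i = 0; i < remaining.size(); i++) {
            List<String> rest = new ArrayList<>(remaining);
            String next = rest.remove(i);
            chosen.add(next);
            permute(rest, chosen, out);
            chosen.remove(chosen.size() - 1);
        }
    }

    public static void main(String[] args) {
        // For 3 criteria this prints 15 names: 3 singles, 6 pairs, 6 triples.
        methodNames(List.of("Criteria1", "Criteria2", "Criteria3"))
                .forEach(System.out::println);
    }
}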
I'm using an interpreter for my domain-specific language rather than a compiler (despite the performance cost). I'm struggling to understand some of the concepts, though:
Suppose I have a DSL (in XML style) for a game so that developers can make building objects easily:
<building>
    <name> hotel </name>
    <capacity> 10 </capacity>
</building>
The DSL script is parsed; then what happens?
Does it execute an existing method for creating a new building? As I understand it, it does not simply transform the DSL into a lower-level language (as that would then need to be compiled).
Could someone please describe what an interpreter would do with the resulting parse tree?
Thank you for your help.
Much depends on your specific application details. For example, are name and capacity required? I'm going to give a fairly generic answer, which might be a bit overkill.
Assumptions:
All nested properties are optional
There are many nested properties, possibly of varying depth
This invites two ideas: structuring your interpreter as a recursive descent parser and using some sort of builder for your objects. In your specific example, you'd have a BuildingBuilder that looks something like this (in Java):
public class BuildingBuilder {
    public BuildingBuilder() { ... }
    public BuildingBuilder setName(String name) { ... return this; }
    public BuildingBuilder setCapacity(int capacity) { ... return this; }
    ...
    public Building build() { ... }
}
Now, when your parser encounters a building element, use the BuildingBuilder to build a building. Then add that object to whatever context the DSL applies to (city.addBuilding(building)).
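For instance, a minimal sketch using the JDK's built-in DOM parser (the City class and its addBuilding method are assumptions taken from the prose above):

import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class BuildingInterpreter {

    // Parses a <building> element and adds the result to the city context.
    public static void interpret(String xml, City city) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        Element building = doc.getDocumentElement(); // the <building> element

        Building b = new BuildingBuilder()
                .setName(text(building, "name"))
                .setCapacity(Integer.parseInt(text(building, "capacity")))
                .build();

        city.addBuilding(b);
    }

    private static String text(Element parent, String tag) {
        // trim() handles the whitespace in "<name> hotel </name>".
        return parent.getElementsByTagName(tag).item(0).getTextContent().trim();
    }
}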
Note that if name and capacity are exhaustive and always required, you can just create a Building by passing the two parameters directly. You can also construct the building and set the properties directly as they are encountered instead of using the builder (the Builder pattern is nice when you have many properties and those properties are both immutable and optional).
If this is in a non-object-oriented context, you'll end up implementing some sort of buildBuilding function that takes the current context and the inner XML of the building element. Effectively, you are building a recursive descent parser by hand, with an XML library providing the actual parsing of individual elements.
However you implement it, you will probably appreciate having a direct semantic mapping between the XML elements of your DSL and the methods/objects in your interpreter.