Persisting date and nested objects with JSON - Grails

I'm trying to save a nested person, which arrives as a JSON array, and Grails complains about requiring a Set.
Another problem I encountered is that another field, a date, is rejected as null even though it already contains a value.
What do I need to do before adding the params to my object, or do I have to change how my JSON is built? I'm trying to save a JSON post like this:
// relationship of Test
//static hasMany = [people: Person, samples: Sample]
def jsonParams = JSON.parse(request.JSON.toString())
def testInstance = new Test(jsonParams)
//Error requiring a Set
[Failed to convert property value of type 'org.codehaus.groovy.grails.web.json.JSONArray' to required type 'java.util.Set' for property 'people'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [com.Person] for property 'people[0]': no matching editors or conversion strategy found]]
//error saying its null
Field error in object 'com.Test' on field 'samples[2].dateTime': rejected value [null]; codes [com.Sample]
//...
"samples[0].dateTime_hour":"0",
"samples[0].dateTime_minute":"0",
"samples[0].dateTime_day":"1",
"samples[0].dateTime_month":"0",
"samples[0].dateTime_year":"-1899",
"samples[0]":{
"dateTime_minute":"0",
"dateTime_day":"1",
"dateTime_year":"-1899",
"dateTime_hour":"0",
"dateTime_month":"0"
},
"people":[
"1137",
"1141"
], //...

First off, this line is unnecessary:
def jsonParams = JSON.parse(request.JSON.toString())
The request.JSON can be directly passed to the Test constructor:
def testInstance = new Test(request.JSON)
I'm not sure what your Person class looks like, but I'm assuming those numbers (1137, 1141) are ids. If that is the case, then your JSON should work - there's a chance that passing the request.JSON directly could help. I tested your JSON locally and it had no problem associating the hasMany collection. I also used:
// JSON numbers rather than strings
"people": [1137, 1141]
// using a Person map with the id
"people": [
    { "id": 1137 },
    { "id": 1141 }
]
Both of these worked as well and are worth trying.
Concerning the null dateTime, I would rework your JSON and send the dateTime as a single field instead of splitting the value into hour/minute/day/etc. The default formats are yyyy-MM-dd HH:mm:ss.S and yyyy-MM-dd'T'hh:mm:ss'Z', but they can be redefined via the grails.databinding.dateFormats config setting (Config.groovy). There are other ways to do the binding as well (the @BindingFormat annotation), but it's going to be easiest to just send the date in a way that Grails can handle without additional configuration.
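To make that concrete, the setting might look like this (the two patterns below just restate the defaults; adjust them to match the dates you actually send):
// Config.groovy
grails.databinding.dateFormats = [
    'yyyy-MM-dd HH:mm:ss.S',
    "yyyy-MM-dd'T'hh:mm:ss'Z'"
]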
If you are dead set on splitting the dateTime into pieces, then you could use the @BindUsing annotation:
import org.grails.databinding.BindUsing // package is grails.databinding in later Grails versions

class Sample {
    @BindUsing({ obj, source ->
        def hour = source['dateTime_hour'] as int
        def minute = source['dateTime_minute'] as int
        def day = source['dateTime_day'] as int
        def month = source['dateTime_month'] as int
        def year = source['dateTime_year'] as int
        // build and return the value to bind from the pieces
        new GregorianCalendar(year, month, day, hour, minute).time
    })
    Date dateTime
}
An additional comment on your JSON: you seem to have samples[0] defined twice, and you are using two syntaxes for your internal collections (JSON arrays and indexed keys). I would personally stick with a single syntax to clean it up:
"samples": [
{"dateTime": "1988-01-01..."}
{"dateTime": "2015-10-21..."}
],"people": [
{"id": "1137"},
{"id": "1141"}
],

Related

Failure in serializing optional date type field to Avro regardless of null value or non-null value

We are using Avro 1.8.2 to serialize data with an optional date type field to be published to a topic.
record aRecord {
    /** Variable: lastUpdate
     * lastUpdate indicates the latest date and time the reference asset was updated
     */
    union { null, date } lastUpdate = null;
    /** Variable: businessDate
     * businessDate indicates the business date of the reference asset price
     */
    union { null, date } businessDate = null;
}
Ran into the following exception while using the Java class generated by the Avro tool to serialize the data:
Error serializing avro message
Caused by: org.apache.avro.AvroRuntimeException: Unknown datum type org.joda.time.LocalDate: 2021-09-17
at org.apache.avro.generic.GenericData.getSchemaName(GenericData.java:772)
at org.apache.avro.specific.SpecificData.getSchemaName(SpecificData.java:302)
at org.apache.avro.generic.GenericData.resolveUnion
Please note that this happens regardless of whether the value is null or non-null (as shown, the value 2021-09-17 also caused the exception).
We did the following investigation and experimentation but could not figure out why:
- Making the date field mandatory resolves the issue. This is because DATE_CONVERSION is added to the corresponding field in the Java class generated by the Avro tool. If the field is defined as optional with a default of null, DATE_CONVERSION is not added to the generated Java file.
- Using Avro 1.9.1 resolved the issue; unfortunately we must use Avro 1.8.2.
- We also tried a few other versions of kafka-avro-serializer and the Spring Boot Kafka framework. Nothing worked for us.
- Other projects that depend on Avro 1.8.2 seem to be able to handle this. We checked all the places we considered relevant, and all the code is the same, except that somehow they have DATE_CONVERSION in place in the Java file generated by the Avro tool (although the fields are defined exactly the same way in the avdl file).
- Debugging into GenericData.java, we found that if DATE_CONVERSION is in place for the optional date field, getSchemaName is not called at all. getSchemaName basically checks the type of the object: whether it's an Int, Record, String, etc. The date is a Joda logical type whose underlying type is int, as far as we understand.
So our questions are:
How do we make the Avro tool enable DATE_CONVERSION for an optional date type field using Avro 1.8.2?
If DATE_CONVERSION is not the key to resolving the issue, what's the best practice for serializing a date type field using Avro 1.8.2, given that the field can be null (the default) or non-null?
Thanks.
// Avro 1.8.x ships the Joda-based conversion as org.apache.avro.data.TimeConversions.DateConversion
SpecificData specificData = SpecificData.get();
// register the logical-type conversion so optional date fields resolve as dates rather than ints
specificData.addLogicalTypeConversion(new DateConversion());
DatumWriter<MessageClass> dw = new SpecificDatumWriter<MessageClass>(message.getSchema(), specificData);
DataFileWriter<MessageClass> dfw = new DataFileWriter<MessageClass>(dw);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
// write the Avro container bytes to memory, then publish them to Kafka
dfw.create(message.getSchema(), outputStream);
dfw.append(message);
dfw.close();
ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, key, outputStream.toByteArray());
return kafkaProducer.send(record, new Callback());
The above code fixed the issue. MessageClass is the Java class generated by the Avro tool. The message is written through a SpecificData instance on which new DateConversion() has been registered, and that DATE_CONVERSION is exactly what is needed for an optional date field during serialization.
Note that this solution is only needed as a workaround for Avro 1.8.
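A side note on that fix: SpecificData.get() returns the global singleton, so registering the conversion there affects every writer in the JVM. If that is undesirable, a private instance should work too (a sketch, not from the original answer):
// keep the DateConversion local to this writer instead of the JVM-wide singleton
SpecificData localData = new SpecificData();
localData.addLogicalTypeConversion(new DateConversion());
DatumWriter<MessageClass> writer = new SpecificDatumWriter<MessageClass>(message.getSchema(), localData);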

Neo4j-OGM/Spring-Data-Neo4j: Migrate property type from Integer to String

In a large database I have to change the data type of a property for a type of nodes from Integer to String (i.e. 42 to "42") in order to also support non-numerical IDs.
I've managed to do the migration itself and the property now has the expected type in the database.
I have verified this using the Neo4j Browser's ability to show the query result as JSON:
"graph": {
"nodes": [
{
"id": "4190",
"labels": [
"MyEntity"
],
"properties": {
"id": "225"
}
}
}
Note that the "id" property is different from the node's own (numerical) id.
In the corresponding Spring-Data-Neo4j 4 app, I adjusted the type of the corresponding property from Integer to String as well. I expected that to be enough; however, upon first loading an affected entity I now receive:
org.neo4j.ogm.exception.MappingException: Error mapping GraphModel to instance of com.example.MyEntity
[...]
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Can not set java.lang.String field de.moneysoft.core.model.base.UriEntity.transfermarktId to java.lang.Integer
at org.neo4j.ogm.entity.io.FieldWriter.write(FieldWriter.java:43)
at org.neo4j.ogm.entity.io.FieldWriter.write(FieldWriter.java:68)
at org.neo4j.ogm.context.GraphEntityMapper.writeProperty(GraphEntityMapper.java:232)
at org.neo4j.ogm.context.GraphEntityMapper.setProperties(GraphEntityMapper.java:184)
at org.neo4j.ogm.context.GraphEntityMapper.mapNodes(GraphEntityMapper.java:151)
at org.neo4j.ogm.context.GraphEntityMapper.mapEntities(GraphEntityMapper.java:135)
... 122 common frames omitted
Caused by: java.lang.IllegalArgumentException: Can not set java.lang.String field com.example.MyEntity.id to java.lang.Integer
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
at java.lang.reflect.Field.set(Field.java:764)
at org.neo4j.ogm.entity.io.FieldWriter.write(FieldWriter.java:41)
... 127 common frames omitted
I am not aware of Neo4j-OGM storing any kind of model or datatype (at least I don't see it in the graph). Why does it still believe that my property is an Integer?
Edit:
Node Entity after migration:
@NodeEntity
public class MyEntity
{
    @Property
    protected String name;

    @Property
    private String id;
}
I am not aware of any other relevant code.
Well, if the error you see looks implausible, it probably is.
After a good night's sleep, I realized that I had connected to the wrong database instance: not the one that was migrated and that I was looking at in the browser, but another one that contained an unmigrated state.
After connecting to the correct instance, everything worked as expected!

Grails convert Blob column to deserialize into object

Given the domain class below, I want to convert the Blob data into a Java object by deserializing the bytes. What approach should I follow? Do I need to specify some converter for GORM to invoke after fetching the data from the DB?
class SpringMessage {
    static mapping = {
        datasource 'staging_oracle'
        message type: 'blob', column: 'message_bytes'
        createdDate type: Date, column: 'created_date'
    }
    static constraints = {
    }

    String messageId
    Blob message // it holds the serialized bytes
    Date createdDate
}
Ideally I do not want to have a Blob property on the domain class. Instead I want to declare the actual Java class type (e.g. Foo message) and specify some type of converter in the mapping, i.e.:
static mapping = {
    message type: 'blob', converter: FooDeserializer
}
Note the converter argument for the message column in the mapping. Is there such a feature in Grails? Or any other feature that allows me to do some post-processing after the data is fetched via GORM?
I use Grails 2.3.3.
As of now, I do the deserialization outside the Grails-generated findBy methods. I was hoping that there would be some callback method that the findByXXX implementations call to do this conversion from Blob to Java object.
ObjectInputStream is = new ObjectInputStream(springMessage.message.getBinaryStream())
Message<?> message = is.readObject()
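As far as I know, the mapping block has no converter option, but GORM fires persistence events that can act as the callback described above. A minimal sketch using the afterLoad event, assuming the blob holds a standard serialized Java object; the transient payload property is hypothetical:
import java.sql.Blob

class SpringMessage {
    static mapping = {
        datasource 'staging_oracle'
        message type: 'blob', column: 'message_bytes'
        createdDate type: Date, column: 'created_date'
    }
    static transients = ['payload']

    String messageId
    Blob message
    Date createdDate

    Object payload // hypothetical holder for the deserialized object

    // GORM invokes afterLoad once the row has been fetched,
    // so every finder (findByXXX, get, list, ...) passes through here
    def afterLoad() {
        message?.binaryStream?.withStream { stream ->
            payload = new ObjectInputStream(stream).readObject()
        }
    }
}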

Elasticsearch default mapping

My current understanding:
Elasticsearch creates the mapping for an index the first time it receives JSON datasets. This mapping cannot be changed, but the datasets can be re-mapped.
Question:
Forget re-mapping. Is there any way to tell ES to behave by default as "consider everything that is not a date to be of string type"?
Also, will I be losing out on much if I do this?
Update:
I added the file config/mappings/_default/mapping.json with the following contents:
{
    "dynamic_templates": [
        {
            "template_1": {
                "match": "*",
                "match_mapping_type": "int",
                "mapping": {
                    "type": "string"
                }
            },
            "template_2": {
                "match": "*",
                "match_mapping_type": "long",
                "mapping": {
                    "type": "string"
                }
            }
        }
    ]
}
I also tried placing the following at config/default_mapping.json:
{
    "_default_": {
        "match": "*",
        "match_mapping_type": "int",
        "mapping": {
            "type": "string"
        }
    }
}
My motive is to get rid of the errors that crop up when int and long types change to string. Will this map all int and long values as strings across all indices that are created in the future? Do I need to nest this dynamic_templates key within _all?
Update II:
Adding this mapping file causes Elasticsearch to cough up:
[2014-02-04 10:48:34,396][DEBUG][action.admin.indices.create] [Her] [logstash-2014.02.04] failed to create
org.elasticsearch.index.mapper.MapperParsingException: mapping [mapping.json]
at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:312)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:298)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.Map
at org.elasticsearch.index.mapper.DocumentMapperParser.extractMapping(DocumentMapperParser.java:268)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:155)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:314)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:193)
at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:309)
... 5 more
2014-02-04 10:48:34 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2014-02-04 10:48:33 +0000 error_class="Net::HTTPServerException" error="400 \"Bad Request\"" instance=17509700
When you start from scratch, thus without a mapping, you rely on defaults. Every time you send a document, any fields that weren't mapped yet are automatically mapped based on their JSON type (and on conventions for dates). That said, if you send a field in your first document as a number and that same field becomes a string in your second document, the index operation for the second document will return an error.
There are APIs to manage mappings, which doesn't mean you have to declare all your fields: you can specify just the ones that should behave differently from the default. You can specify mappings while creating an index, use the put mapping API if the index already exists, or even include them in index templates for indices that have yet to be created.
Changing the mappings is possible, but only backwards-compatible changes can be applied. You can always add new fields, but you can't change the type or the analyzer of an existing field. What you could do in that case is try to make the change backwards compatible by using multi-fields; otherwise you need to reindex against the updated mappings.
As for your last question: if you index everything as a string, you lose what you can usually do with numbers, e.g. range queries. Whether this is feasible or not depends on your data and what you need to do with it.
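For what it's worth, the ClassCastException in the update above most likely comes from the mapping file's root being the dynamic_templates array itself rather than a type name; a default mapping file is expected to wrap everything in a _default_ type. Note also that match_mapping_type has no "int" value; JSON integers match as "long". A sketch of what the file could look like (the template name integers_as_strings is arbitrary):
{
    "_default_": {
        "dynamic_templates": [
            {
                "integers_as_strings": {
                    "match": "*",
                    "match_mapping_type": "long",
                    "mapping": {
                        "type": "string"
                    }
                }
            }
        ]
    }
}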

Groovy Dynamic List Interaction

I am using an older version of Grails (1.1.1) and I am working on a legacy application for a government client.
Here is my question (in pseudo form):
I have a domain that is a Book. It has a sub-domain of type Author associated with it (a 1:many relationship). The Author domain has firstName and lastName fields.
def c = Book.createCriteria()
def booklist = c.listDistinct {
    author {
        order('lastName', 'asc')
        order('firstName', 'asc')
    }
}
Let's say I have a list of fields I want to use for an Excel export later. This list has both the Author property path and the title of the column I want to use.
Map fields = ['author.lastName': 'Last Name', 'author.firstName': 'First Name']
How can I dynamically call the following code?
booklist.eachWithIndex { key, value ->
    println key.fields
}
The intent is that I can create my Map of fields and use a loop to display all data quickly without having to type all of the fields by hand.
Note: the period in the string 'author.lastName' also throws an error when trying to output key['author.lastName'].
I don't recall the version of Groovy that came with Grails 1.1, but there are a number of language constructs to do things like this. If it's an old version, some things may not be available - so your mileage may vary.
Map keys can be referenced with quoted strings, e.g.:
def map = [:]
map."person.name" = "Bob"
The above will have a key of person.name in the map.
Maps can contain anything, including mixed types in Groovy - so you really just need to work around string escapes or other special cases if you are using more complex keys.
You can also use a GString in the above:
def map = [:]
def prop = "person.name"
map."${prop}" = "Bob"
You can also get a map of property/value pairs off of a class dynamically via its properties field. E.g.:
class Person { String name; String location }
def bob = new Person(name: 'Bob', location: 'The City')
def properties = bob.properties
properties.each { println it }
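Tying this back to the question's map of fields: a dotted path such as 'author.lastName' can be walked one property at a time, which also sidesteps the key['author.lastName'] error. A sketch, assuming each Book in booklist has a singular author property:
Map fields = ['author.lastName': 'Last Name', 'author.firstName': 'First Name']
booklist.each { book ->
    fields.each { path, title ->
        // walk the dotted path one property at a time: book.author.lastName, etc.
        def value = path.split('\\.').inject(book) { obj, prop -> obj?."$prop" }
        println "$title: $value"
    }
}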
