I have a JSON record like the one below:
val warningJson = """{"p1":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499008876", "amount": 6094, "state": "SUCCESS"}","p2":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499017565", "amount": 547, "state": "SUCCESS"}","p3":"{"trasanction_id": 198, "customer_id": 27, "datetime": "1576499029116", "amount": 6824, "state": "SUCCESS"}"}"""
However, I want to parse it into an Avro data record, passing a schema like this:
val outputSchemaStringTestData = """{"type":"record","name":"Warning","namespace":"test","fields":[{"name":"p1","type":"string"},{"name":"p2","type":"string"},{"name":"p3","type":"string"}]}"""
Using this schema, I am able to create a generic record with the following code:
val genericRecord: GenericRecord = new GenericData.Record(outputSchemaTestData)
genericRecord.put("p1", """{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499008876", "amount": 6094, "state": "SUCCESS"}""")
genericRecord.put("p2", """{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499017565", "amount": 547, "state": "SUCCESS"}""")
genericRecord.put("p3", """{"trasanction_id": 198, "customer_id": 27, "datetime": "1576499029116", "amount": 6824, "state": "SUCCESS"}""")
However, when I use the same schema to parse warningJson with the code mentioned below (passing outputSchemaStringTestData), I get the error shown here:
com.fasterxml.jackson.core.JsonParseException: Unexpected character ('t' (code 116)): was expecting comma to separate Object entries
at [Source: (String)"{"p1":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499008876", "amount": 6094, "state": "SUCCESS"}","p2":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499017565", "amount": 547, "state": "SUCCESS"}","p3":"{"trasanction_id": 198, "customer_id": 27, "datetime": "1576499029116", "amount": 6824, "state": "SUCCESS"}"}"; line: 1, column: 11]
org.apache.avro.SchemaParseException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('t' (code 116)): was expecting comma to separate Object entries
at [Source: (String)"{"p1":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499008876", "amount": 6094, "state": "SUCCESS"}","p2":"{"trasanction_id": 197, "customer_id": 27, "datetime": "1576499017565", "amount": 547, "state": "SUCCESS"}","p3":"{"trasanction_id": 198, "customer_id": 27, "datetime": "1576499029116", "amount": 6824, "state": "SUCCESS"}"}"; line: 1, column: 11]
My code for the conversion:
package com.Izac.Cep.KafkaSourceAndSink;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.*;
import java.io.*;
public class JsonToAvro {
public static GenericRecord createGenericRecord(final String schemastr, final String json) throws Exception {
//Schema.Parser schemaParser = new Schema.Parser();
//Schema schema = schemaParser.parse(schemastr);
Schema schema= Schema.parse(schemastr);
byte[] avroByteArray = fromJasonToAvro(json, schemastr);
DatumReader<GenericRecord> reader1 = new GenericDatumReader<GenericRecord>(schema);
Decoder decoder1 = DecoderFactory.get().binaryDecoder(avroByteArray, null);
GenericRecord result = reader1.read(null, decoder1);
return result;
}
private static byte[] fromJasonToAvro(String json, String schemastr) throws Exception {
Schema schema = Schema.parse(schemastr);
InputStream input = new ByteArrayInputStream(json.getBytes());
DataInputStream din = new DataInputStream(input);
Decoder decoder = DecoderFactory.get().jsonDecoder(schema, din);
DatumReader<Object> reader = new GenericDatumReader<Object>(schema);
Object datum = reader.read(null, decoder);
GenericDatumWriter<Object> w = new GenericDatumWriter<Object>(schema);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
Encoder e = EncoderFactory.get().binaryEncoder(outputStream, null);
w.write(datum, e);
e.flush();
return outputStream.toByteArray();
}
}
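I suspect the root cause is that warningJson itself is not valid JSON: the double quotes inside the p1/p2/p3 values are not escaped, so Jackson reads "{" as the complete value of p1 and then chokes on the t of trasanction_id (hence the "Unexpected character ('t')" error). Below is a sketch of an escaped input that the jsonDecoder should accept (untested; only p1 is written out in full, p2 and p3 would follow the same pattern, and outputSchemaStringTestData is the schema string defined above):
// Raw JSON form: every quote inside the p1/p2/p3 values is escaped with a backslash,
// so the outer document is valid JSON whose fields are plain strings, as the schema expects:
//   {"p1":"{\"trasanction_id\": 197, \"customer_id\": 27, ...}", "p2":"...", "p3":"..."}
// The same document written as a Java string literal (each \" above becomes \\\" here):
String escapedWarningJson =
    "{\"p1\":\"{\\\"trasanction_id\\\": 197, \\\"customer_id\\\": 27, "
  + "\\\"datetime\\\": \\\"1576499008876\\\", \\\"amount\\\": 6094, \\\"state\\\": \\\"SUCCESS\\\"}\","
  + "\"p2\":\"{...}\",\"p3\":\"{...}\"}";
GenericRecord record = JsonToAvro.createGenericRecord(outputSchemaStringTestData, escapedWarningJson);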
I am new to RestAssured and am trying the following code:
@Test
public void testJsonPath() {
Response response = RestAssured
.given()
.param("id", "2172797")
.param("appid", "439d4b804bc8187953eb36d2a8c26a02")
.when()
.get("https://samples.openweathermap.org/data/2.5/weather");
String value = "weather[*].description";
System.out.println(value);
String data = response.then().contentType(ContentType.JSON).extract().path(value);
System.out.println(data);
}
JSON:
{
"coord": {
"lon": 145.77,
"lat": -16.92
},
"weather": [
{
"id": 802,
"main": "Clouds",
"description": "scattered clouds",
"icon": "03n"
}
],
"base": "stations",
"main": {
"temp": 300.15,
"pressure": 1007,
"humidity": 74,
"temp_min": 300.15,
"temp_max": 300.15
},
"visibility": 10000,
"wind": {
"speed": 3.6,
"deg": 160
},
"clouds": {
"all": 40
},
"dt": 1485790200,
"sys": {
"type": 1,
"id": 8166,
"message": 0.2064,
"country": "AU",
"sunrise": 1485720272,
"sunset": 1485766550
},
"id": 2172797,
"name": "Cairns",
"cod": 200
}
Getting the following error:
java.lang.IllegalArgumentException: Invalid JSON expression:
Script1.groovy: 1: unexpected token: ] @ line 1, column 36.
weather[*].description
Why do we get this error when using *? When I replace it with String value = "weather[0].description"; it works fine. Can someone please help me with this?
I have checked in Postman as well, and the API gives the correct output.
Note: when I test the expression (weather[*].description) on http://jsonpath.com/, it gives the same output.
If there is anything I am missing, please let me know, as I am new to this. Any help would be great.
It would also be great if someone could give me a brief overview of this and point me to the right references to get the best out of it.
Why do we get this error when using *? When replacing it with String value = "weather[0].description"; it works fine.
Rest Assured uses Groovy's GPath notation, which is not to be confused with Jayway's JsonPath syntax.
Official Documentation
GPath
JsonPath
Edit:
{"users":[{"firstName":"sijo","lastName":"john","subjectId":1,"id":1},{"firstName":"sonia","lastName":"sahay","subjectId":2,"id":2},{"firstName":"shreya","lastName":"sahay","subjectId":1,"id":3}],"subjects":[{"id":1,"name":"Devops"},{"id":2,"name":"SDET"}]}
For the above JSON you can use the code below to fetch all the lastName values:
JsonPath abc = new JsonPath(res);
String deal = abc.getString("users.lastName");
System.out.println(deal);
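Applied to the weather response from the question, no [*] wildcard is needed at all: GPath collects values across arrays implicitly. A quick sketch, assuming the response variable from the test above:
// GPath traverses the weather array implicitly, so this returns every description in it.
List<String> descriptions = response.jsonPath().getList("weather.description");
System.out.println(descriptions); // [scattered clouds]
// A single element can still be addressed by index, as in the working case from the question.
String first = response.jsonPath().getString("weather[0].description");
System.out.println(first); // scattered clouds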
I'm trying to create a structure for the following JSON object using Swift Decodable.
{
"template": [
{
"id": 8,
"question": "Favorite Color?",
"category": "Color",
"section": "Favorite Colors",
"is_active": 1,
},
[
{
"id": 14,
"question_id": 8,
"option_name": "Red",
"is_active": 1,
},
{
"id": 16,
"question_id": 8,
"option_name": "Orange",
"is_active": 1,
}
],
{
"id": 9,
"question": "What cars do you drive?",
"category": "Cars",
"section": "Favorite Cars",
"is_active": 1,
},
[
{
"id": 15,
"question_id": 9,
"option_name": "Toyota",
"is_active": 1,
},
{
"id": 18,
"question_id": 9,
"option_name": "Honda",
"is_active": 1,
},
{
"id": 19,
"question_id": 9,
"option_name": "BMW",
"is_active": 1,
}
]
]
}
I have something like:
public struct GameTemplate: Decodable {
    let question: String?
}

public struct Game: Decodable {
    let template: [GameTemplate]
}
For some reason, when I try to parse it, it doesn't work; I get an error stating that the struct is not a dictionary. I have tried casting the struct value, but that didn't work either. At this point I just need a nice, clean object after the JSON is decoded.
Your JSON format is not consistent.
Just take the first category, color:
{
"template": [
{
"id": 8,
"question": "Favorite Color?",
"category": "Color",
"section": "Favorite Colors",
"is_active": 1,
},
[
{
"id": 14,
"question_id": 8,
"option_name": "Red",
"is_active": 1,
},
{
"id": 16,
"question_id": 8,
"option_name": "Orange",
"is_active": 1,
}
],
]
}
The template is an array with a dictionary at index 0 and an array at index 1.
It can still be decoded, but that takes extra effort.
If possible, make the JSON data consistent and club each category into a single array element, as shown below:
{
"template": [
{
"id": 8,
"question": "Favorite Color?",
"category": "Color",
"section": "Favorite Colors",
"is_active": 1,
"subCategory": [
{
"id": 14,
"question_id": 8,
"option_name": "Red",
"is_active": 1,
},
{
"id": 16,
"question_id": 8,
"option_name": "Orange",
"is_active": 1,
}
]
}
]
}
and club the cars category in the same way.
It will then be easy for you to decode as:
public struct GameTemplate: Decodable {
    let question: String?
    let subCategory: [SubCategory]
}

public struct SubCategory: Decodable {
    let option_name: String?
}

public struct Game: Decodable {
    let template: [GameTemplate]
}
Hope this makes clear what I am trying to explain.
Hi, I'm getting JSON data with Alamofire, and it looks like this:
{
"prices": [
{
"id": 1,
"value": 1.327,
"stationId": 24,
"type": 0,
"score": 5
},
{
"id": 2,
"value": 1.319,
"stationId": 25,
"type": 0,
"score": 4
},...],
"stations": [
{
"id": 24,
"name": "...",
"address": "...",
"brandId": 1,
"location": ".."
},
{
"id": 25,
"name": "..",
"address": "..",
"brandId": 1,
"location": ".."
},..],
"brands": [
{
"id": 6,
"name": "AGIP"
},
{
"id": 2,
"name": "EKO"
}, ...]
How can I get all the data with "type": 0?
And then, once I have all the data with that type, how can I match the ids across prices, stations, and brands and put the result into an array or dictionary?
You could map your response using the AlamofireObjectMapper pod and then filter the mapped objects with a for loop.
I'm not sure of the best way of approaching this. I've got a generic system for requesting data objects from an API and returning JSON using ActiveModelSerializers, and it's been fantastically simple.
For one class I need to return data formatted differently from the normal class serialization, and I'm unsure of the best approach. No matter what, it seems I'm going to need an 'elsif' for this one class in the return (which pains me a little bit).
Currently I've got a class named 'curve', which a user has_many of, so requesting a user's curves gets me something like this:
[{
"id": 7,
"name": "A",
"angle": 30,
"date": "2017-05-23T01:52:00.589-04:00",
"direction": "left",
"top": "C1",
"bottom": "C4",
"risser": 3,
"sanders": 8
}, {
"id": 8,
"name": "B",
"angle": 0,
"date": "2017-05-23T01:52:56.107-04:00",
"direction": "right",
"top": "C5",
"bottom": "C6",
"risser": null,
"sanders": null
}, {
"id": 9,
"name": "A",
"angle": 22,
"date": "2017-05-25T01:56:00.656-04:00",
"direction": "right",
"top": "C3",
"bottom": "C5",
"risser": null,
"sanders": null
}, {
"id": 11,
"name": "C",
"angle": 3,
"date": "2017-05-26T01:57:08.078-04:00",
"direction": "right",
"top": "C4",
"bottom": "C7",
"risser": null,
"sanders": null
}]
But I actually need them grouped by name, like so:
[{
"name": "A",
"points": [{
"id": 7,
"name": "A",
"angle": 30,
"date": "2017-05-23T01:52:00.589-04:00",
"direction": "left",
"top": "C1",
"bottom": "C4",
"risser": 3,
"sanders": 8
}, {
"id": 9,
"name": "A",
"angle": 22,
"date": "2017-05-25T01:56:00.656-04:00",
"direction": "right",
"top": "C3",
"bottom": "C5",
"risser": null,
"sanders": null
}]
},
{
"name": "B",
"points": [{
"id": 8,
"name": "B",
"angle": 0,
"date": "2017-05-23T01:52:56.107-04:00",
"direction": "right",
"top": "C5",
"bottom": "C6",
"risser": null,
"sanders": null
}]
},
{
"name": "C",
"points": [{
"id": 11,
"name": "C",
"angle": 3,
"date": "2017-05-26T01:57:08.078-04:00",
"direction": "right",
"top": "C4",
"bottom": "C7",
"risser": null,
"sanders": null
}]
}
]
Now I know I can do group_by and fiddle with the response - but even then it won't be using the default Curve serializer. I could also create a custom GroupedCurve serializer and possibly process each curve with a CurveSerializer - but at that point isn't it just like getting the default array and doing a map, and constructing my own hashes?
What's the best / cleanest way of transforming the data from the top format to the bottom format?
UPDATE:
Something like the following 'does the job' but the job is dirty:
def self.reformat_curves(curves)
arr = []
curves.each do |i|
alpha = i[:name]
entry = arr.find{|chunk| chunk[:name]==alpha}
if entry.nil?
arr << {name: alpha, points: [i]}
else
entry[:points] << i
end
end
return arr
end
I have a simple Avro schema, from which I generated a Java class using the avro-maven-plugin.
The Avro schema is as follows:
{
"type": "record",
"name": "addressGeo",
"namespace": "com.mycompany",
"doc": "Best record address and list of geos",
"fields": [
{
"name": "version",
"type": "int",
"default": 1,
"doc": "version the class"
},
{
"name": "eventType",
"type": "string",
"default": "addressGeo",
"doc": "event type"
},
{
"name": "parcelId",
"type": "long",
"doc": "ParcelID of the parcel. Join parcelid and sequence with ParcelInfo"
},
{
"name": "geoCodes",
"type": {"type": "array", "items": "com.mycompany.geoCode"},
"doc": "Multiple Geocodes, with restrictions information"
},
{
"name": "brfAddress",
"type": ["null", "com.mycompany.address"],
"doc": "Address cleansed version of BRF"
}
]
}
If I construct a simple object using the builder and serialize it to JSON, I get the following output:
{
"version": 1,
"eventType": {
"bytes": [
97,
100,
100,
114,
101,
115,
115,
71,
101,
111
],
"length": 10,
"string": null
},
"parcelId": 1,
"geoCodes": [
{
"version": 1,
"latitude": 1,
"longitude": 1,
"geoQualityCode": "g",
"geoSourceTypeID": 1,
"restrictions": "NONE"
}
],
"brfAddress": {
"version": 1,
"houseNumber": "1",
"houseNumberFraction": null,
"streetDirectionPrefix": null,
"streetName": "main",
"streetSuffix": "street",
"streetDirectionSuffix": null,
"fullStreetAddress": "1 main street, seattle, wa, 98101",
"unitPrefix": null,
"unitNumber": null,
"city": "seattle",
"state": "wa",
"zipCode": "98101",
"zipPlusFour": null,
"addressDPV": "Y",
"addressQualityCode": "good",
"buildingNumber": "1",
"carrierRoute": "t",
"censusTract": "c",
"censusTractAndBlock": "b",
"dataCleanerTypeID": 1,
"restrictions": "NONE"
}
}
Note the output of the eventType field. It comes through as an array of bytes, whereas the type of the field is a CharSequence.
Any idea why serialization is doing this? It works fine for the other fields that are strings.
I am using google-gson to serialize the object to JSON.
You might be working with an older version of Avro that uses CharSequence. Ideally the string type should be the Java String type. I would suggest updating the Avro version, or have a look at this one: Apache Avro: map uses CharSequence as key
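If upgrading is not immediately possible, one workaround (a sketch, not tested against your generated class) is to let Avro write the JSON itself instead of Gson, so Utf8/CharSequence values come out as plain JSON strings. Note that Avro's JSON encoding wraps union fields such as brfAddress in an object keyed by the branch type name, so the output shape differs slightly from Gson's.
import java.io.ByteArrayOutputStream;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.io.JsonEncoder;
import org.apache.avro.specific.SpecificDatumWriter;

public class AddressGeoJson {
    // com.mycompany.addressGeo is assumed to be the class generated by the
    // avro-maven-plugin for the record above; generated classes expose getClassSchema().
    public static String toJson(com.mycompany.addressGeo record) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<com.mycompany.addressGeo> writer =
            new SpecificDatumWriter<>(com.mycompany.addressGeo.class);
        // The JSON encoder renders each field according to the schema, so string
        // fields are written as JSON strings rather than as Utf8 internals.
        JsonEncoder encoder =
            EncoderFactory.get().jsonEncoder(com.mycompany.addressGeo.getClassSchema(), out);
        writer.write(record, encoder);
        encoder.flush();
        return out.toString("UTF-8");
    }
}
Alternatively, if regenerating the class is an option, the avro-maven-plugin accepts a stringType configuration set to String, so the generated fields use java.lang.String and serialize cleanly with Gson.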