I have a simple schema as follows:
{
"name": "owner",
"type": "record",
"doc": "todo",
"fields": [
{ "name": "version", "type": "int", "default": 1},
{ "name": "name", "type": "string" },
{ "name": "age", "type": "int" },
]
}
However, when I use the avro-maven-plugin to generate a Java object from this specification, it does not set the default value of the version field to 1.
How do I make that happen?
Never mind, it works fine as is.
I was looking at the generated Java class and could not figure out where it was setting the default value to 1. But when I serialize the object to JSON, I see the default value in the output. Also, the getter returns the default value as well.
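In case it helps anyone else: the default is stored in the schema itself (the generated SCHEMA$ constant), not in a field initializer, and it is the builders that fill it in when the field is left unset. A minimal sketch with the generic API, assuming the schema above is saved as owner.avsc:
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;

public class DefaultValueCheck {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("owner.avsc")); // assumed file name

        GenericRecord owner = new GenericRecordBuilder(schema)
                .set("name", "Alex")
                .set("age", 30)
                .build();                          // "version" is never set explicitly

        System.out.println(owner.get("version"));  // prints 1, taken from the schema default
    }
}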
Related
I am facing a strange issue: I extracted the schema of an API response and added it as a JSON file to my Serenity project. While validating the schema, the test passed no matter what schema I provided; it only failed if I changed the data type of a key (for example, changing the name data type from string to integer).
Scenario:
My API response:
{
"name":"Alex",
"age" : 20,
"city":"New York"
}
My schema for this API (the test passes, which is correct):
{
"type": "object",
"properties": {
"name": {
"type": "string"
},
"age": {
"type": "integer"
},
"city": {
"type": "string"
}
},
"required": [
"name",
"age",
"city"
]
}
If I change the schema from correct to wrong, i.e. remove a key-value pair, the test still passes, which is not correct:
{
"type": "object",
"properties": {
"name": {
"type": "string"
},
"city": {
"type": "string"
}
},
"required": [
"name",
"city"
]
}
Moreover, if I put only "{ }" in the schema file, the test still passes.
The method I am using for validation is matchesJsonSchemaInClasspath.
The schema validation only checks that the data types of the values in the JSON response match the schema; it does not validate the values themselves. That is also why your looser schemas pass: an empty schema "{ }" imposes no constraints at all, and removing a key from "properties" and "required" merely removes constraints, since extra keys in the response are allowed unless the schema sets "additionalProperties": false. For validating the actual data there is another method in Serenity BDD: VerifyResponseData.ofTheresponse(jsonobj)
This works for me
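In case it helps, here is roughly how that validator is wired up with plain Rest-Assured (the endpoint and schema file name below are placeholders, not from the original question). To make a missing or unexpected key fail the test, the schema file itself has to be strict: list every key under "required" and set "additionalProperties": false.
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;
import io.restassured.RestAssured;

public class UserSchemaTest {
    public void responseMatchesSchema() {
        RestAssured.given()
                .when().get("https://example.org/api/user")                      // hypothetical endpoint
                .then().assertThat()
                .body(matchesJsonSchemaInClasspath("schemas/user-schema.json")); // schema file on the test classpath
    }
}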
I am completely new to Avro serialization and I am trying to get my head around how complex types are defined.
I am puzzled by how Avro generates the Enums in Java.
{
"type":"record",
"namespace":"com.example",
"name": "Customer",
"doc":"Avro Schema for our Customer",
"fields":[
{"name":"first_name","type":"string","doc":"First Name of Customer"},
{"name":"last_name","type":"string","doc":"Last Name of Customer"},
{"name":"automated_email","type":"boolean","doc":"true if the user wants marketing email", "default":true},
{
"name": "customer_type",
"type": ["null",
{
"name": "Customertype",
"type": "enum",
"symbols": ["OLD","NEW"]
}
]
}
]
}
Notice the customer_type field. If I include "null", then in my generated sources I get the correct enum type, which is:
private com.example.Customertype customer_type;
But the moment I remove the null value and define customer_type in the following way:
{
"name": "customer_type",
"type": [
{
"name": "Customertype",
"type": "enum",
"symbols": ["OLD","NEW"]
}
]
}
The declaration changes to:
private Object customer_type;
What does that "null" string signify ? Why is it important ?
I have tried looking through the AVRO specification but nothing has given me a clear cut answer why this is working the way it is.
I am using the AVRO Maven plugin.
Any beginner resources for AVRO will also be appreciated.
Thank you.
If you are going to remove the null, you should also remove the [ and ] brackets, because the type is no longer a union. The "null" in your original schema is what makes the field nullable: a union of ["null", Customertype] means the value may be either null or a Customertype symbol, and the code generator maps exactly that kind of two-branch union to the enum type itself. Any other union, including your single-element one, is mapped to a plain Object, which is why the declaration changed.
So your customer_type schema should look like this:
{
"name": "customer_type",
"type": {
"name": "Customertype",
"type": "enum",
"symbols": ["OLD","NEW"]
}
}
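As a side note, here is a rough sketch of how the nullable variant behaves with the generic API (the file name and values are made up for illustration; with the ["null", Customertype] union the field accepts either null or an enum symbol):
// imports: org.apache.avro.Schema, org.apache.avro.generic.GenericData,
//          org.apache.avro.generic.GenericRecord, org.apache.avro.generic.GenericRecordBuilder
Schema schema = new Schema.Parser().parse(new File("customer.avsc"));
Schema enumSchema = schema.getField("customer_type").schema().getTypes().get(1); // the non-null branch

GenericRecord customer = new GenericRecordBuilder(schema)
        .set("first_name", "Jane")
        .set("last_name", "Doe")
        .set("automated_email", true)
        .set("customer_type", new GenericData.EnumSymbol(enumSchema, "NEW"))     // or null
        .build();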
The complete schema is the following:
{
"type": "record",
"name": "envelope",
"fields": [
{
"name": "before",
"type": [
"null",
{
"type": "record",
"name": "row",
"fields": [
{
"name": "username",
"type": "string"
},
{
"name": "timestamp",
"type": "long"
}
]
}
]
},
{
"name": "after",
"type": [
"null",
"row"
]
}
]
}
I wanted to programmatically extract the following sub-schema:
{
"type": "record",
"name": "row",
"fields": [
{
"name": "username",
"type": "string"
},
{
"name": "timestamp",
"type": "long"
}
]
}
As you can see, the field "before" is nullable. I can extract its schema by doing:
schema.getField("before").schema()
But that schema is not a record: it is a union with "null" as the first branch, so I can't go inside it directly to fetch the schema of "row":
["null",{"type":"record","name":"row","fields":[{"name":"username","type":"string"},{"name":"tweet","type":"string"},{"name":"timestamp","type":"long"}]}]
I want to fetch the sub-schema because I want to create a GenericRecord out of it. Basically, I want to create two GenericRecords, "before" and "after", and add them to the main GenericRecord created from the full schema.
Any help will be highly appreciated.
Good news: if you have a union schema, you can go inside it and fetch the list of possible branches:
Schema unionSchema = schema.getField("before").schema();
List<Schema> unionSchemaContains = unionSchema.getTypes();
At that point, you can look through the list to find the branch whose type is Schema.Type.RECORD.
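Continuing from that snippet, a sketch of pulling out the record branch and then building the nested GenericRecords (the field values are placeholders):
Schema rowSchema = unionSchemaContains.stream()
        .filter(s -> s.getType() == Schema.Type.RECORD)
        .findFirst()
        .orElseThrow(() -> new IllegalStateException("union has no record branch"));

GenericRecord before = new GenericRecordBuilder(rowSchema)
        .set("username", "alex")
        .set("timestamp", 1234567890L)
        .build();

GenericRecord envelope = new GenericRecordBuilder(schema)
        .set("before", before)
        .set("after", null)
        .build();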
That is, is it possible to make a field required, similar to Protobuf:
message SearchRequest {
required string query = 1;
}
All fields are required in Avro by default. As mentioned in the official documentation, if you want to make something optional, you have to make it nullable by unioning its type with null, like this:
{ "namespace": "example.avro",
"type": "record",
"name": "User",
"fields": [
{"name": "name", "type": "string"},
{"name": "favorite_number", "type": ["int", "null"]},
{"name": "favorite_color", "type": ["string", "null"]}
]
}
In this example, name is required, while favorite_number and favorite_color are optional. I recommend spending some more time with the documentation.
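To make the distinction concrete, here is a small sketch with the schema above (assuming it is saved as user.avsc): every field still has to be given a value when building a record, but the union fields accept null, which is what "optional" means here.
// imports: org.apache.avro.Schema, org.apache.avro.generic.GenericRecord, org.apache.avro.generic.GenericRecordBuilder
Schema userSchema = new Schema.Parser().parse(new File("user.avsc"));

GenericRecord user = new GenericRecordBuilder(userSchema)
        .set("name", "Alex")           // required: a non-null string must be supplied
        .set("favorite_number", null)  // allowed, because the type is ["int", "null"]
        .set("favorite_color", "blue") // could also be null
        .build();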
I have two questions:
Is it possible to use the same reader to parse records that were written with two compatible schemas, e.g. where Schema V2 only has an additional optional field compared to Schema V1, and I want the reader to understand both? I think the answer here is no, but if yes, how do I do that?
I have tried writing a record with Schema V1 and reading it with Schema V2, but I get the following error:
org.apache.avro.AvroTypeException: Found foo, expecting foo
I used avro-1.7.3 and:
writer = new GenericDatumWriter<GenericData.Record>(SchemaV1);
reader = new GenericDatumReader<GenericData.Record>(SchemaV2, SchemaV1);
Here are examples of the two schemas (I have tried adding a namespace as well, but no luck).
Schema V1:
{
"name": "foo",
"type": "record",
"fields": [{
"name": "products",
"type": {
"type": "array",
"items": {
"name": "product",
"type": "record",
"fields": [{
"name": "a1",
"type": "string"
}, {
"name": "a2",
"type": {"type": "fixed", "name": "a3", "size": 1}
}, {
"name": "a4",
"type": "int"
}, {
"name": "a5",
"type": "int"
}]
}
}
}]
}
Schema V2:
{
"name": "foo",
"type": "record",
"fields": [{
"name": "products",
"type": {
"type": "array",
"items": {
"name": "product",
"type": "record",
"fields": [{
"name": "a1",
"type": "string"
}, {
"name": "a2",
"type": {"type": "fixed", "name": "a3", "size": 1}
}, {
"name": "a4",
"type": "int"
}, {
"name": "a5",
"type": "int"
}]
}
}
},
{
"name": "purchases",
"type": ["null",{
"type": "array",
"items": {
"name": "purchase",
"type": "record",
"fields": [{
"name": "a1",
"type": "int"
}, {
"name": "a2",
"type": "int"
}]
}
}]
}]
}
Thanks in advance.
I encountered the same issue. It might be a bug in Avro, but you can probably work around it by adding "default": null to the "purchases" field.
Check my blog for details: http://ben-tech.blogspot.com/2013/05/avro-schema-evolution.html
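For reference, a rough sketch of the intended round trip once "purchases" has a "default": null in V2 (schemaV1, schemaV2, and recordWrittenWithV1 are assumed to exist already). Note that GenericDatumReader takes the writer's schema first and the reader's schema second:
// imports: java.io.ByteArrayOutputStream, org.apache.avro.generic.*, org.apache.avro.io.*
GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schemaV1);
ByteArrayOutputStream out = new ByteArrayOutputStream();
BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(recordWrittenWithV1, encoder);
encoder.flush();

GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schemaV1, schemaV2); // writer schema, reader schema
BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
GenericRecord readAsV2 = reader.read(null, decoder); // "purchases" is filled with its default (null)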
You can do the opposite of it, i.e. write the data with schema 2 and read it with schema 1. At write time all of the writer schema's fields go into the file, and at read time any field the reader does not know about is simply skipped, so that works. But if you write fewer fields than the reader expects (and those fields have no defaults), the reader cannot fill in the missing fields and will give an error.
The best way is to maintain your schema versions in a registry, such as the Confluent Avro Schema Registry (a minimal producer-side sketch follows the takeaways below).
Key takeaways:
1. Unlike Thrift, Avro-serialized objects do not carry their schema.
2. Since no schema is stored in the serialized byte array, the reader has to be given the schema the data was written with.
3. Confluent Schema Registry provides a service for maintaining schema versions.
4. Confluent provides a cached schema client, which checks its cache before sending a request over the network.
5. The JSON schema in the "avsc" file is different from the schema held inside an Avro object.
6. All Avro objects extend GenericRecord.
7. During serialization: based on the Avro object's schema, a schema ID is requested from the Confluent Schema Registry.
8. The schema ID, an integer, is converted to bytes and prepended to the serialized Avro object.
9. During deserialization: the first 4 bytes are removed from the byte array and converted back to the integer schema ID.
10. The schema for that ID is requested from the Confluent Schema Registry, and the byte array is deserialized using it.
http://bytepadding.com/big-data/spark/avro/avro-serialization-de-serialization-using-confluent-schema-registry/
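A minimal producer-side sketch of that flow (broker address, registry URL, topic name, and customerRecord are placeholders; the serializer class comes from Confluent's kafka-avro-serializer artifact):
// imports: java.util.Properties, org.apache.kafka.clients.producer.*, org.apache.avro.generic.GenericRecord
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");

Producer<String, GenericRecord> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("customers", customerRecord)); // schema ID is looked up/registered and prepended automatically
producer.close();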