Avro schema not compatible - avro

I have a JSON Avro schema that is quite large. We need to evolve it and have added a lot more fields to it. I thought I had given them all default values, and we are using BACKWARD compatibility. When I try to evolve the schema I get the error "This schema is incompatible with the latest version".
Is there a way I can get a more intuitive error message that gives me some indication of why the schema won't evolve?
Thanks
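
One way to get a more detailed message than "This schema is incompatible with the latest version" is to run the same check locally with the Java Avro library. A sketch (file names are placeholders):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;

import java.io.File;

// Sketch: reproduce the BACKWARD check locally to see why it fails.
// File names are placeholders.
public class CompatCheck {
    public static void main(String[] args) throws Exception {
        Schema oldSchema = new Schema.Parser().parse(new File("old-schema.avsc"));
        Schema newSchema = new Schema.Parser().parse(new File("new-schema.avsc"));

        // BACKWARD compatibility = the new schema (reader) must be able to read
        // data written with the old schema (writer).
        SchemaPairCompatibility result =
                SchemaCompatibility.checkReaderWriterCompatibility(newSchema, oldSchema);

        System.out.println(result.getType());        // COMPATIBLE or INCOMPATIBLE
        System.out.println(result.getDescription()); // human-readable explanation of the result
    }
}
```

If the description isn't specific enough, recent Avro releases also break the result down per field via result.getResult().getIncompatibilities(). One classic gotcha worth checking first: for union-typed fields, Avro requires the default value to match the first branch of the union.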

Related

Saxon XSD or Schema parser in java

Is there any way we can parse a schema or XSD file using Saxon? I need to display all possible XPaths for a given XSD.
I found a way using org.apache.xerces, but I wanted to implement the logic in Saxon as it supports XSLT 3.0 (we want to use the same library for XSLT-related functionality as well).
Thanks in advance
Saxon-EE of course includes an XSD processor that parses schema documents. I think your question is not about the low-level process of parsing the documents, but about the higher-level process of querying the schemas once they have been parsed.
Saxon-EE offers several ways to access the components of a compiled schema programmatically.
You can export the compiled schema as an SCM file in XML format. This format isn't well documented but its structure corresponds very closely to the schema component model defined in the W3C specifications.
You can access the compiled schema from XPath using extension functions such as saxon:schema() - see http://www.saxonica.com/documentation/index.html#!functions/saxon/schema
You can also access the schema at the Java level: the methods are documented in the Javadoc, but they are really designed for internal use, rather than for the convenience of this kind of application.
Of course, getting access to the compiled schema doesn't by itself solve your problem of displaying all valid paths. Firstly, the set of all valid paths is in general infinite (because types can be recursive, and because of wildcards). Secondly, features such as substitution groups and types derived by extension make it challenging even when the result is finite. But in principle, the information is there: from an element name with a global declaration, you can find its type, and from its type you can find the set of valid child elements, and so on recursively.
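
To make the XPath option a little more concrete, here is a rough s9api sketch (Saxon-EE only; the schema file name is a placeholder, and the names queried out of the returned component model are assumptions, so check the documentation page linked above for the exact structure that saxon:schema() returns):

```java
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.SchemaManager;
import net.sf.saxon.s9api.XPathCompiler;
import net.sf.saxon.s9api.XdmItem;
import net.sf.saxon.s9api.XdmValue;

import javax.xml.transform.stream.StreamSource;
import java.io.File;

// Rough sketch (Saxon-EE): compile a schema, then query the compiled schema
// component model through the saxon:schema() extension function.
public class SchemaQuery {
    public static void main(String[] args) throws Exception {
        Processor processor = new Processor(true); // true = use licensed (EE) features
        SchemaManager schemaManager = processor.getSchemaManager();
        schemaManager.load(new StreamSource(new File("my-schema.xsd"))); // placeholder file

        XPathCompiler xpath = processor.newXPathCompiler();
        xpath.declareNamespace("saxon", "http://saxon.sf.net/");

        // The element/attribute names below are assumptions; they follow the SCM
        // format, so verify them against the Saxon documentation.
        XdmValue globalElements = xpath.evaluate(
                "saxon:schema()//*:elementDeclaration/@name/string()", null);
        for (XdmItem item : globalElements) {
            System.out.println(item.getStringValue());
        }
    }
}
```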

Do inferred mapping models always result in lightweight migrations?

We've had a few cases of our app seemingly having trouble migrating user data when an inferred mapping model has been used. The app has taken too long to complete the migration, and the migration has failed. Yes – we shouldn't be migrating during launch!
I'd like to know if it's possible that an inferred mapping model might not result in a lightweight migration. All the accounts I've read suggest that inferred mappings are necessarily lightweight, but I've not seen a strong statement that this is a guarantee.
The situation where we've had a problem involved a property, stored as external binary data (Allows External Storage was ticked in the schema editor), being deleted from the schema. I wondered if this particular migration, with its model inferred automatically, might still require a heavy migration where the whole database is drawn into memory.
Is there a way to tell if a specific inferred migration is heavyweight?
Unless you yourself define a custom mapping model, the migration is by definition lightweight. This is the only possible interpretation of the definitions of "lightweight" and "custom" migration in the documentation.
This is independent of the migration failures you have seen. Maybe some of your changes necessitate a custom migration, which is why the lightweight migration fails.

How can I migrate a Rails app from using MongoDB to PostgreSQL?

I have an existing Rails app that I've put about 100 hours of development into. I would like to push it up to Heroku, but I made the mistake of using MongoDB for all of my development work. Now I have no schemas or anything of the sort, and I'm trying to push out to Heroku and use PostgreSQL. Is there a way I can remove Mongoid and use Postgres? I've tried using DataMapper, but that seems to be doing more harm than good.
Use PostgreSQL's json data type and transform your Mongo collections into tables, where each table is just an id and a doc (json) column; then it's easy to move from one to the other.
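A rough sketch of that id + doc (jsonb) approach, written as a one-off program outside the Rails app (connection strings, database names and the conflict handling are placeholders; the same two loops work from any language that has a MongoDB driver and a PostgreSQL driver):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Rough sketch: copy every MongoDB collection into a Postgres table of the
// form (id text PRIMARY KEY, doc jsonb). Connection strings and database
// names are placeholders; collection names are trusted as-is here, so quote
// or sanitise them for anything beyond a one-off migration.
public class MongoToPostgres {
    public static void main(String[] args) throws Exception {
        try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017");
             Connection pg = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/myapp", "myapp", "secret")) {

            MongoDatabase db = mongo.getDatabase("myapp_development");

            for (String name : db.listCollectionNames()) {
                try (Statement ddl = pg.createStatement()) {
                    ddl.execute("CREATE TABLE IF NOT EXISTS " + name
                            + " (id text PRIMARY KEY, doc jsonb)");
                }

                MongoCollection<Document> coll = db.getCollection(name);
                String sql = "INSERT INTO " + name
                        + " (id, doc) VALUES (?, CAST(? AS jsonb)) ON CONFLICT (id) DO NOTHING";
                try (PreparedStatement insert = pg.prepareStatement(sql)) {
                    for (Document d : coll.find()) {
                        insert.setString(1, d.get("_id").toString()); // ObjectId -> string
                        insert.setString(2, d.toJson());              // whole document as JSON
                        insert.addBatch();
                    }
                    insert.executeBatch();
                }
            }
        }
    }
}
```

From there you can query the jsonb columns directly, or gradually normalise the documents into conventional ActiveRecord tables.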
Whether the migration is easy or hard depends on a very large number of things, including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages:
1. Ensure that all the Mongo data is consistent in structure with your RDBMS model and that the data structure versions are all the same.
2. Move your data. Expect that problems will be found and you will have to go back to step 1.
The primary problems you can expect are data validation problems because you are moving from a less structured data platform to a more structured one.
Depending on what you are doing regarding MapReduce you may have some work there as well.

How to fix records when models have been updated with MongoDB

I have a Rails project that uses MongoDB. The issue I am having is with records (documents) made from a previous version of a model (I'm getting klass errors, but just for the older records).
Is there a quick way to fix those MongoDB documents the Rails way, using some command?
Or is there a command I can run with Mongoid to open the specific model up in Mongo, so I can poke at the documents manually (removing unneeded associations)?
The concept of a schema migration would need to exist in Mongoid, and I don't think it does. If you have made simple changes like renaming or removing fields, then you can easily do that with an update statement, but for anything more complicated you will need to write code.
The code you will need to write will most likely need to go down to the driver level to alter the objects since the mapping layer is no longer compatible.
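As an illustration of the driver-level, rename/remove case, here is a sketch with the MongoDB Java driver (collection and field names are hypothetical; the mongo shell or Ruby driver equivalents are the same updateMany calls with $rename / $unset):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

// Sketch: patch old documents in place so they match the current model.
// Collection and field names below are placeholders.
public class FixOldDocuments {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("myapp_development").getCollection("users");

            // Rename a field that the model now expects under a new name.
            users.updateMany(Filters.exists("old_field"),
                             Updates.rename("old_field", "new_field"));

            // Remove a field / embedded association the model no longer has.
            users.updateMany(Filters.exists("legacy_association"),
                             Updates.unset("legacy_association"));
        }
    }
}
```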
In general you need to be careful when you make schema changes to your objects, since the server doesn't have a concept of a schema and can't enforce one. It is ultimately up to your code, or the framework you are using, to maintain compatibility.
This is generally an issue whenever you use a mapping layer without doing batch upgrades to keep everything at the same schema version, from the mapping layer's perspective.

Delphi Component Serialization

Has anyone run into issues serializing components into a file and reading them back, specifically in the area where the component vendor upgrades the VCL components? For example, a file serialized with Delphi X and then years later read back with Delphi Y. Do the serialization formats change, and if so, what can be done to prevent errors reading in the components when upgrading?
The built-in RTTI-based system for serializing published properties is vulnerable to changes in the components. Going forwards is manageable as long as old properties are kept in new objects, i.e. you leave the property interface as is, but can toss away the contents if you like. Going backwards is worse, as a property saved by a newer version can't be loaded by an older version, and that will be a problem.
There are components / libs (http://www.torry.net/quicksearchd.php?String=RTTI&Title=Yes) that can add serialization in XML format and this may help a bit as you can choose to skip content you don't know.
You still need to be mindful about how you design your published content and should probably find a way to "ignore but propagate" content that your current version doesn't understand. This will allow you to open and change a file in a newer format while attempting to keep the newer attributes, instead of stripping them.
Formats will definitely change, as vendors will add features to their components. Serialization simply loops over all published properties and saves them to a stream. When they are read back, each of the properties read from the stream is set back on the component. If a property no longer exists, you have a problem. I don't think you can do anything about that besides some basic exception handling.
Best way to guarantee compatibility is to do your own serialization.
Thanks for the reply. I was trying to avoid custom serialization and take advantage of each component's built-in serialization, but with the lack of any way to "patch" an upgrade to a new component format, I guess custom serialization is the only method.
