Does it make sense to use the schema registry with Avro SpecificRecord?

I am writing a consumer and want to use Avro's SpecificRecord for type safety, to avoid manually mapping fields out of a GenericRecord. As I understand it, it makes little sense for me to still integrate with the schema registry, because I could not do anything about a new schema in my consumer code anyway. Note that the schema registry has the full transitive compatibility mode enabled for any schema changes (I know that default values can still be changed and might lead to breaking changes).
Why use the schema registry together with SpecificRecord in consumer code?

I can answer my own question, because we tried it: it is impossible to use the schema registry together with SpecificRecord if you want schema evolution. Your code simply breaks at runtime because it cannot find the required version. Hence, the only way to work with Avro and schema evolution is via GenericRecord, and integrating with the schema registry in the SpecificRecord case provides no value.
Compare this with Protobuf, where you get both generated code and schema evolution.
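
The question is about the Java client, but for what it's worth, here is a minimal sketch of the GenericRecord-style path the answer describes, using Ruby's avro_turf gem against a Confluent Schema Registry. The gem choice, registry URL, schema name, and field name are assumptions for illustration, not from the question; encoding also assumes a local schemas/person.avsc file exists.

    require "avro_turf/messaging"

    avro = AvroTurf::Messaging.new(
      registry_url: "http://localhost:8081",  # assumed local registry
      schemas_path: "schemas"                 # local .avsc files, needed for encoding only
    )

    # Round-trip for illustration: encode registers/uses the writer schema...
    payload = avro.encode({ "full_name" => "Jane Doe" }, schema_name: "person")

    # ...and decode fetches the writer schema from the registry by the id
    # embedded in the payload, returning a plain Hash (GenericRecord-style),
    # so new compatible schema versions keep decoding.
    record = avro.decode(payload)
    puts record["full_name"]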

Related

How do I know if a Ruby on Rails application uses database partitioning?

I would like to disable my Rails application's Oracle database partitioning, but:
I don't know how to tell whether my app is using database partitioning.
I don't know how to find the places where my app uses partitioning, since I didn't write most of the application's code myself.
Can I just search the code base for the keyword 'partition' and look for any results where it appears in a raw SQL statement?
How should I go about this?
Thanks!
Update:
I have two answers below, and they seem to understand my question differently, so I am confused now as well. I want to disable the partitioning feature of my Oracle database (https://www.oracle.com/technetwork/database/options/partitioning/overview/index.html). Does that mean I cannot use the 'partition by' keyword (Oracle "Partition By" Keyword) anymore?
Partitioning is declared at the schema level. One would usually not expect the application code to need anything specific to use partitioning, since it is handled at the data-definition level. You can connect as the schema owner account and check the data dictionary views: USER_PART_TABLES for partitioned tables owned by the user, and USER_PART_INDEXES for the partitioned indexes.
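
If it helps, here is one way to run those data-dictionary checks from a Rails console; this is just a sketch, assuming the app's ActiveRecord connection is the schema owner:

    # Run inside `rails console`.
    conn = ActiveRecord::Base.connection

    # Oracle's data dictionary views for partitioned objects owned by this user:
    partitioned_tables  = conn.select_values("SELECT table_name FROM user_part_tables")
    partitioned_indexes = conn.select_values("SELECT index_name FROM user_part_indexes")

    puts "Partitioned tables:  #{partitioned_tables.join(', ')}"
    puts "Partitioned indexes: #{partitioned_indexes.join(', ')}"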

Grails database migration plugin

While reading about the database migration plugin in the book "The Definitive Guide to Grails 2", I came across a question. I understand this plugin is used to migrate an older schema to a newer one that the code base expects to work with. The one scenario I could immediately think of where this might be necessary is a code base that expects a newer schema trying to access properties in domain classes that are not there (null exceptions). I wanted to know if anyone can suggest other reasons for migrating the schema, so that I can sharpen my thinking on this. Thank you.
The Database Migration Grails plugin is a convenient way to update your database schema; it is not necessarily just for migrating to a completely different schema. The plugin is actually just a wrapper around Liquibase. It aims to integrate database management into your codebase, which makes it easily versioned and tracked with the rest of your code. It also lets you update your database in a controlled way (dbm-update on start), which works great for continuous deployment.
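
As a sketch of the typical workflow (command names as I recall them from the plugin's docs; treat the file names as assumptions):

    # Baseline an existing schema, then manage future changes as changesets:
    grails dbm-generate-changelog changelog.groovy   # snapshot the current schema
    grails dbm-changelog-sync                        # mark it as already applied
    grails dbm-gorm-diff add-new-field.groovy        # diff GORM classes vs. the DB
    grails dbm-update                                # apply pending changesets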

Which NoSQL database is suitable for this Rails application?

I'm building a Rails application that will be used to edit documents. Two people may edit the same document concurrently, but each on their own branch; neither can see the other's changes until they are ready to push and merge back into the master branch, and until they pull the latest changes from master into their own branch.
What's the best NoSQL DB or solution for this Rails application?
You could do all of this with the filesystem and git, so I'm not sure why you'd even need a database here except for auxiliary functions. There's nothing in your requirements that would favor one DB over another.
I'd go with whatever you know best. Even a regular SQL DB would have zero trouble handling this.
Are the documents stored as plain text, or encoded in a format (such that they need to be binary in the DB) that requires the server to decode them and perform the merge itself?
The scenario you are describing is somewhat document-collection oriented, in that you're effectively pushing the same type of metadata but potentially with different keys/values. You may be interested in something like CouchDB or MongoDB. Both express a document in a JSON-like fashion (BSON in MongoDB), allowing each document to have differing keys.
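
To make the "differing keys" point concrete, here is a minimal sketch with the Ruby mongo gem; the database, collection, and field names are made up for illustration:

    require "mongo"

    client = Mongo::Client.new(["127.0.0.1:27017"], database: "doc_editor")
    docs   = client[:documents]

    # Two revisions of the same document on different branches. Note that the
    # two documents carry different keys, which a document store happily allows.
    docs.insert_one(doc_id: 42, branch: "master", body: "Hello", version: 3)
    docs.insert_one(doc_id: 42, branch: "alice",  body: "Hello!",
                    parent_version: 3, draft: true)

    docs.find(doc_id: 42, branch: "alice").each { |doc| puts doc.inspect }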

DataMapper with legacy DB schema. Primary key via sequences table

UPDATE: I wrote a Sequence property type for DataMapper in the end. Take and use at your own risk ;) https://gist.github.com/959059
We're moving a large PHP web application, already in production, to Ruby on Rails. Our schema is far from compatible with ActiveRecord's defaults, and it's too large to simply migrate, so I've ditched ActiveRecord and started using DataMapper, which lets us hide the schema differences more easily. This is working well with some read-only tests I've done.
Now, one of the biggest incompatibilities with our schema is that we use ADODB and generate primary keys prior to the insert, using a sequences table (this is a common pattern), instead of with auto_increment.
Is there a way to tell DataMapper to generate IDs in the same way? I don't see mention of it in the docs.
We can't really switch the tables to use auto_increment because the size of the application means we're actually running a hybrid Rails/PHP setup with some proxying and session sharing so we can progressively migrate across, therefore the PHP application needs to keep working with the schema as-is (or with only minor changes).
I should really have posted that edit as an answer:
I wrote a Sequence property type for DataMapper in the end. Take and use at your own risk ;) https://gist.github.com/959059
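
In case the gist goes away, here is a rough sketch of the idea. This is not the gist's actual code; the table, column, and model names are assumptions, and a real version should take a lock or run inside a transaction to be safe under concurrency.

    require "dm-core"

    class Article
      include DataMapper::Resource

      property :id,    Integer, key: true   # not Serial: we assign the id ourselves
      property :title, String

      # Emulate ADODB-style id generation from a `sequences` table:
      # bump the per-table counter, then read the new value back.
      before :create do
        adapter = DataMapper.repository(:default).adapter
        adapter.execute("UPDATE sequences SET id = id + 1 WHERE name = 'articles'")
        self.id = adapter.select("SELECT id FROM sequences WHERE name = 'articles'").first
      end
    end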

Rails3 and legacy database

I'm wondering whether working with Rails 3 is a good idea when a huge, ugly legacy database (Oracle, SQL Server) is already there.
I only have experience with ActiveRecord; is there another ORM more suitable for this kind of job?
Cheers
ActiveRecord can still do the job: for example, there are directives that can be applied within your model to handle non-conventional table names and primary key names (multi-column PKs, if you have them, used to require some additional work; I'm not sure how true that still is in AR3).
For both Oracle and SQL Server you're going to need the relevant DB adapters; I don't think either is bundled with AR.
A lot of legacy-DB Rails work only needs read-only access. If that's the case, and you can get access to do so, you may find that defining views that are more "AR-friendly" and referencing those through your models makes life easier. If updating is going to be necessary, then either a usable primary key will be needed, or you'll have to consider dropping down to building and executing custom SQL, something that's fully supported in AR for occasions when the abstractions can't cope.
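
A minimal sketch of those directives (model, table, and column names are made up; note that on Rails 3.0/3.1 the older set_table_name / set_primary_key forms are used instead of the setters shown here):

    # Mapping a model onto a non-conventional legacy table and primary key:
    class LegacyOrder < ActiveRecord::Base
      self.table_name  = "TBL_ORDERS_V2"
      self.primary_key = "ORDER_NO"
    end

    # Pointing a model at an "AR-friendly" database view works the same way:
    class OrderSummary < ActiveRecord::Base
      self.table_name = "v_order_summaries"
    end

    # Dropping down to custom SQL when the abstractions can't cope:
    rows = LegacyOrder.find_by_sql(["SELECT * FROM TBL_ORDERS_V2 WHERE ORDER_NO = ?", 42])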
