I'm building an app that requires HIPAA compliance, which, to cut to the chase, means that I can't allow certain connections to be freely viewable in the database (namely, between patients and the recommendations made for them).
These tables are connected through the patients_recommendations table in my app, which worked well until I added the encryption via attr_encrypted. In an effort to cut down on the amount of encrypting and decrypting (and associated overhead), I'd like to be able to simply encrypt the patient_id in the patients_recommendations table. However, upon changing the data type to string and the column name to encrypted_patient_id, the app breaks with the following error when I try to reseed my database:
can't write unknown attribute `patient_id'
I assume this is because the join is looking for the column directly and not by going through the model (makes sense, using the model is probably slower). Is there any way that I can make Rails go through the model (where attr_encrypted has added the necessary helper methods)?
Update:
In an effort to find a work-around, I've tried adding a before_save to the model like so:
before_save :encrypt_patient_id
...
private
def encrypt_patient_id
  self.encrypted_patient_id = PatientRecommendation.encrypt(:patient_id, self.patient_id)
  self.patient_id = nil
end
This doesn't work either, however, and results in the same unknown attribute error. Either solution would work for me (though the first would address the primary problem). Any ideas why the before_save isn't being called when the record is created through an association?
You should probably store the PII data and the PHI data in separate DBs. Encrypt the PII data (including any associations to a provider or provider location) and hash all of the PHI data to make things easier. As long as there are no direct associations between the two, it can be acceptable to leave the PHI data unencrypted, since it is effectively anonymized.
Plan A
Don't set patient_id to nil in encrypt_patient_id, since that column no longer exists; the problem may go away.
Also, a callback that returns false will halt the callback chain, so put an explicit true at the end of the method.
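A minimal sketch of Plan A, assuming attr_encrypted still provides the virtual patient_id accessor used in the question (everything else here is illustrative):

before_save :encrypt_patient_id

private

def encrypt_patient_id
  # attr_encrypted's class-level helper, as already used in the question
  self.encrypted_patient_id = PatientRecommendation.encrypt(:patient_id, patient_id)
  true # return something truthy so the callback chain isn't halted
end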
Plan B, rethink your options
There are more options - from database-level transparent encryption (which formally encrypts the data on disk), to encrypted filesystems for storing certain tablespaces, to flat out encryption of data in the columns.
Encrypting the join columns sounds like a road to unhappiness, for a variety of reasons ranging from reporting issues to potentially severe performance issues when joins are necessary.
The trouble you're currently experiencing with the seed looks like the first bump on what promises to be a bad road (in this case ActiveRecord seems to be confused about how to handle your association: it tries to set patient_id on initialize and breaks).
The overhead of encrypting restricted data might not be as high as you think. I'm not sure how things go for HIPAA, but for PCI you're not exactly encouraged to render the protected data on screen, so encryption incurs only a small overhead because it happens relatively rarely (business-need-to-know, etc.).
Also, memory is probably considered 'not at rest and not in transit', so you could in theory cache some of the clear values for limited periods of time and thus save on decryption overhead.
Basically, encrypting the data might not be that bad, and encrypting keys in the database might be worse than you think.
I suggest we talk directly, I'm doing PCI DSS compliance stuff and this topic interests me.
Option: 1-way hashes for primary/foreign keys
PatientRecommendation would have a hash of patient_id - call it patient_hash - and Patient would be capable of generating the same patient_hash from its id. I'd suggest storing patient_hash in both tables: for Patient it would be the primary key for the join, and for PatientRecommendation it would be the foreign key.
Define the Rails relation in these terms and Rails will no longer be confused about your relation scheme:
has_many :patient_recommendations, primary_key: :patient_hash, foreign_key: :patient_hash
The result is cryptographically more robust and easy for the database to handle.
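A rough sketch of how that could look, assuming a patient_hash string column on both tables and an HMAC keyed with a secret held outside the database (all names beyond the question's models are illustrative):

require "openssl"

class Patient < ActiveRecord::Base
  has_many :patient_recommendations,
           primary_key: :patient_hash, foreign_key: :patient_hash

  # One-way, keyed hash of the id, so the mapping can't be rebuilt from the DB alone
  def self.hash_for(patient_id)
    OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), ENV.fetch("PATIENT_HASH_KEY"), patient_id.to_s)
  end

  after_create { update_column(:patient_hash, self.class.hash_for(id)) }
end

class PatientRecommendation < ActiveRecord::Base
  belongs_to :patient, primary_key: :patient_hash, foreign_key: :patient_hash
end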
If you're adamant about not storing the patient_hash in Patient, you could use a plain SQL statement to do the relation - less convenient but workable - something along the lines of this pseudo-SQL:
JOIN ON generate_hash(patient.id) = patient_recommendations.patient_hash
Oracle, for example, has an option to make functional indexes (think create index generate_hash(patient.id)) so this approach could be pretty efficient depending on your choice of database.
However, playing with join keys will complicate your life a lot, even with these measures. I'll expand on this post later with additional options.
Related
I have many different models (close to 20) that share some common attributes but also differ to some degree in others. STI seems attractive at first, but I have no idea how the various models will evolve over time with rapid product development.
A good parallel to our application that comes to mind is Yelp. How would Yelp manage something in Rails? All of the postings have some common attributes like "address". Yet, they differ quite a lot on others. For example, you have a reservation option for restaurants and maybe not for others. Restaurants also have a ton of other attributes like "Alcohol allowed" that don't apply to others. Doing this with STI will get out of hand pretty quickly.
So what's the next best option? HStore with Postgres? I am not comfortable using HStore for anything but small things. HStore solves some problems while introducing others, like lack of data types, lack of referential integrity checks, etc. I'd like a solid relational database as the foundation to build upon. So in the Yelp case, a restaurant model is probably where I am going. I've taken a look at suggestions like this one - http://mediumexposure.com/multiple-table-inheritance-active-record/ - but I am not happy to do so much monkey patching to get something so common going.
So I am wondering what other alternatives exist (if any), or should I just bite the bullet, grind my teeth and copy those common attributes into the 20 models? I am thinking my problems would come from the migration files rather than the code itself. For example, if I set up my migrations to loop through tables and set those attributes on each table, would I have mitigated the extent of the problem of having different models?
Am I overlooking something critical that might cause a ton of problems down the road with separate models?
I see a few options here:
Bite the bullet and create your 20 different models with a lot of the same attributes. It's possible that these models will drift over time - new fields added to one specific type - which with STI would eventually leave you with a 200-column table. Maybe they won't - the future is hard to see, especially with exploratory/agile software.
Store non referential fields in a NoSQL (document) database. Use your relational database for parts of the record that are relational (a user has many reviews and a review has one business), but keep the type specific stuff in a NoSQL database. Keep an external_document_id in your Rails models and external_record_id / external_record_type in your NoSQL document schema so you can still query all bars that allow smoking using whatever NoSQL ORM you end up using.
Create an Attributes model. An Attribute belongs_to :parent_object, polymorphic: true, with a key and a value field. With this approach you might have a base Business model, and each business has_many attributes. Certain (non-relational?) attributes of the business (allows_smoking) each become one Attribute record. An Attribute's key could be a string, or a numeral you have Ruby constants for. You're essentially using the Attribute entities to create a SQL version of option #2. It might be a good option, and I've used this myself for User or Profile models (although there are some performance hits to be aware of with this approach).
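A minimal sketch of that key/value option; the association is named custom_attributes here because attributes would collide with ActiveRecord's own method, and all names are illustrative:

class Attribute < ActiveRecord::Base
  # columns: key:string, value:string, parent_object_id:integer, parent_object_type:string
  belongs_to :parent_object, polymorphic: true
end

class Business < ActiveRecord::Base
  has_many :custom_attributes, class_name: "Attribute", as: :parent_object
end

bar = Business.create!(name: "Some Bistro")
bar.custom_attributes.create!(key: "allows_smoking", value: "false")
Attribute.where(key: "allows_smoking", value: "true").includes(:parent_object)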
I'd really worry about having that many (independent) models for something that sounds subclass-ey. It's possible you might be able to DRY up common behavior/methods by using Concerns (syntactic sugar over the mixin concept, see an awesome SO answer on concerns in Rails 4). You still have your (initial) migration problem, of course.
Adding another option here: Serialized LOB (PoEAA, p. 272). ActiveRecord allows you to do this to an object using serialize:
class User < ActiveRecord::Base
  serialize :preferences
end
user = User.create(preferences: { "background" => "black", "display" => "large" })
User.find(user.id).preferences # => { "background" => "black", "display" => "large" }
(Example code from ActiveRecord::Base docs.)
The important consequence to understand is that attributes stored in a Serialized LOB will not be indexable, and certainly not searchable in any performant manner. If you later discover that an attribute needs to be available as an indexed column, you'll most likely have to write a Ruby program to perform the transformation (though by default serialization is in YAML, so any YAML parser will suffice).
The advantage is that you don't have to make any technology changes to your stack in order to apply this pattern. It's also moderately easy - depending on the amount of data you have collected - to migrate away from this pattern later.
I am relatively new to Rails. I understand that Rails lets you play with your database values with much ease, but I am a little in the dark about which kinds of approach are easier on the database and which are not.
Here is a case in point. I have a model Appointment which belongs_to :user. In my code I can sometimes say process_user @appointment.user. When I write that, does it run a separate SELECT query on the database to retrieve that user? Is it more efficient to write process_user @appointment.user_id, where user_id is an attribute on the appointment, and then use the user_id value for my evaluation-related tasks, as long as I don't need the whole user object @appointment.user?
Frankly, from a peace-of-mind point of view, I just love being able to use process_user @appointment.user because it reads better, looks nicer and works better when preparing logic. But is it a performance-efficient way to do it?
You are perfectly fine using code like process_user @appointment.user, as ActiveRecord tries its best to minimize the number of database queries. Of course it does not handle all situations perfectly, but your example is a very basic one. There would probably be no immediate database query, and the object would only be loaded when its attributes are accessed.
If you notice performance problems in a running large-scale application and can track them down to ActiveRecord using profiling, it is probably time to optimize. Trying to pre-optimize from the very beginning would be against Rails' philosophy and will only result in ugly (and possibly even slower) code. Remember that the real performance bottlenecks are often in places where you would never expect them.
EDIT: As Winfield pointed out, optimizing the number of queries does not usually mean managing foreign keys or similar internals by yourself. There are quite a number of flags and options for DB access methods that allow you to control how your database is queried.
You can eagerly load your associated users with your Appointment models:
Appointment.all(:include => :user)
...which will join in the users or do a separate lookup for all the associated users in a single query.
This will then load the user association in advance (eagerly) so the user attribute is already populated with the object when you reference it, instead of having to stop and execute a separate query to look it up one by one (N+1 queries).
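For reference, a small sketch of the difference using the question's models and the newer includes syntax (Rails 3+); process_user is the hypothetical method from the question:

# Without eager loading: 1 query for appointments + 1 user query per appointment (N+1)
Appointment.all.each { |appointment| process_user appointment.user }

# With eager loading: the users are fetched up front, so the loop triggers no extra queries
Appointment.includes(:user).each { |appointment| process_user appointment.user }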
In my Rails application, I have a variety of database tables that contain user data. Some of these tables have a lot of rows (as many as 500,000 rows per user in some cases) and are queried frequently. Whenever I query any table for anything, the user_id of the current user is somewhere in the query - either directly, if the table has a direct relation with the user, or through a join, if they are related through some other tables.
Should I denormalize the user_id and include it in every table, for faster performance?
Here's one example:
Address belongs to user, and has a user_id
Envelope belongs to user, and has a user_id
AddressesEnvelopes joins an Address and an Envelope, so it has envelope_id and address_id -- it doesn't have user_id, but could get to it through either the envelope or the address (which must belong to the same user).
One common expensive query is to select all the AddressesEnvelopes for a particular user, which I could accomplish by joining with either Address or Envelope, even though I don't need anything from those tables. Or I could just duplicate the user id in this table.
Here's a different scenario:
Letter belongs to user, and has a user_id
Recepient belongs to Letter, and has a letter_id
RecepientOption belongs to Recepient, and has a recepient_id
Would it make sense to duplicate the user_id in both Recepient and RecepientOption, even though I could always get to it by going up through the associations, through Letter?
Some notes:
There are never any objects that are shared between users. An entire hierarchy of related objects always belongs to the same user.
The user owner of objects never changes.
Database performance is important because it's a data intensive application. There are many queries and many tables.
So should I include user_id in every table so I can use it when creating indexes? Or would that be bad design?
I'd like to point out that it isn't necessary to denormalize if you are willing to work with composite primary keys. Sample for the AddressesEnvelopes case:
user(
#user_id
)
address(
#user_id
, #address_num
)
envelope(
#user_id
, #envelope_num
)
address_envelope(
#user_id
, #address_num
, #envelope_num
)
(the # indicates a primary key column)
I am not a fan of this design if I can avoid it, but considering that you say all these objects are tied to a user, this type of design would make it relatively simple to partition your data (either logically, putting ranges of users in separate tables, or physically, using multiple databases or even machines).
Another thing that would make sense with this type of design is using clustered indexes (in MySQL, the primary key of an InnoDB table is a clustered index). If you ensure the user_id is always the first column in your index, then for each table all the data for one user will be stored close together on disk. This is great when you always query by user_id, but it can hurt performance if you query by another object (in which case duplication like you suggested may be a better solution).
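A hedged sketch of the address_envelope table as a Rails migration, with user_id leading the primary key; note that ActiveRecord models can't use composite primary keys out of the box, so you'd pair this with the composite_primary_keys gem or hand-written SQL joins:

class CreateAddressEnvelopes < ActiveRecord::Migration
  def up
    create_table :address_envelopes, id: false do |t|
      t.integer :user_id,      null: false
      t.integer :address_num,  null: false
      t.integer :envelope_num, null: false
    end
    # user_id first, so all of one user's rows cluster together (e.g. with InnoDB)
    execute "ALTER TABLE address_envelopes ADD PRIMARY KEY (user_id, address_num, envelope_num)"
  end

  def down
    drop_table :address_envelopes
  end
end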
At any rate, before you change the design, first make sure your schema is already optimized, and you have proper indexes on your foreign key columns. If performance really is paramount, you should simply try several solutions and do benchmarks.
As long as you a) get a measurable performance improvement, and b) know which parts of your database are real normalized data and which are redundant improvements, there is no reason not to do it!
Do you actually have a measured performance problem? 500,000 rows isn't a very large table. Your selects should be reasonably fast if they are not very complex and you have proper indexes on your columns.
I would first see if there are slow queries and try to optimize them with indexes. Only if that is not enough would I look into denormalization.
The denormalizations you suggest seem reasonable if you can't achieve the required performance by other means. Just make sure that you keep the denormalized fields up to date.
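One hedged way to keep those copied user_id columns up to date, using the Letter/Recepient chain from the question (the callback style is just one option):

class Recepient < ActiveRecord::Base
  belongs_to :letter
  # copy the owner down from the parent so the duplicated column is always filled
  before_validation { self.user_id ||= letter && letter.user_id }
  validates :user_id, presence: true
end

class RecepientOption < ActiveRecord::Base
  belongs_to :recepient
  before_validation { self.user_id ||= recepient && recepient.user_id }
  validates :user_id, presence: true
end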
How do people generate auto_incrementing integers for a particular user in a typical saas application?
For example, the invoice numbers for all the invoices for a particular user should be auto_incrementing and start from 1. The rails id field can't be used in this case, as it's shared amongst all the users.
Off the top of my head, I could count all the invoices a user has, and then add 1, but does anyone know of any better solution?
A typical solution for any relational database would be a table like
user_invoice_numbers (user_id int primary key clustered, last_id int)
and a stored procedure or a SQL query like
update user_invoice_numbers set last_id = last_id + 1 where user_id = #user_id
select last_id from user_invoice_numbers where user_id = #user_id
It will work for users (if each user has a few simultaneously running transactions) but will not work for companies (for example when you need companies_invoice_numbers) because transactions from different users inside the same company may block each other and there will be a performance bottleneck in this table.
The most important functional requirement you should check is whether your system is allowed to have gaps in invoice numbering or not. When you use a standard auto_increment, you allow gaps, because in most databases I know, when you roll back a transaction, the incremented number is not rolled back. With this in mind, you can improve performance using one of the following guidelines.
1) Exclude the procedure that you use for getting new numbers from long-running transactions. Let's suppose that the insert-invoice procedure is a long-running transaction with complex server-side logic. In this case you first acquire a new id, and then, in a separate transaction, insert the new invoice. If the latter transaction is rolled back, the auto-number will not decrease, but user_invoice_numbers will not be locked for long, so a lot of simultaneous users can insert invoices at the same time.
2) Do not use a traditional transactional database to store the last id for each user. When you need to maintain a simple list of keys and values, there are lots of small but fast database engines that can do that work for you (see any list of key/value databases; memcached is probably the most popular). In the past, I've seen projects where simple key/value storage was implemented using the Windows Registry or even a file system: there was a directory where each file name was the key and inside each file was the last id. This rough solution was still better than using a SQL table, because locks were issued and released very quickly and were not involved in the transaction scope.
Well, if this optimization seems overcomplicated for your project, forget about it for now, until you actually run into performance issues. In most projects the simple method with an additional table will work pretty fast.
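For a Rails flavour of that simple method, a hedged sketch (model and column names are made up; the row lock serializes concurrent inserts per user):

class UserInvoiceNumber < ActiveRecord::Base
  # columns: user_id:integer (with a unique index), last_id:integer (default 0)

  def self.next_for(user_id)
    counter = find_or_create_by!(user_id: user_id) # a creation race raises on the unique index; retry if needed
    counter.with_lock do                           # SELECT ... FOR UPDATE inside a transaction
      counter.update!(last_id: counter.last_id.to_i + 1)
    end
    counter.last_id
  end
end

# invoice.number = UserInvoiceNumber.next_for(current_user.id)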
You could introduce another table associated with your "users" table that tracks the most recent invoice number for a user. However, reading this value will result in a database query, so you might as well just get a count of the user's invoices and add one, as you suggested. Either way, it's a database hit.
If the invoice numbers are independent for each user/customer then it seems like having "lastInvoice" field in some persistent store (eg. DB record) associated with the user is pretty unavoidable. However this could lead to some contention for the "latest" number.
Does it really matter if we send a user invoices 1, 2, 3 and 5, and never send them invoice 4? If you can relax the requirement a bit, things get much simpler.
If the requirement is actually "every invoice number must be unique" then we can look at all the normal id-generating tricks, and these can be quite efficient.
Ensuring that the numbers are sequential adds to the complexity - does it add to the business benefit?
I've just uploaded a gem that should resolve your need (a few years late is better than never!) :)
https://github.com/alisyed/sequenceid/
Not sure if this is the best solution, but you could store the last invoice ID on the User and then use that to determine the next ID when creating a new Invoice for that User. But this simple solution may have integrity problems, so you will need to be careful.
Do you really want to generate the invoice IDs in an incremental format? Would this not open security holes (wherein, if a user can guess how invoice numbers are generated, they can change the number in a request, which may lead to information disclosure)?
I would ideally generate the numbers randomly and keep track of the used numbers. This also keeps the chance of collision low, since the numbers are allocated randomly over a large range.
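A small sketch of the random approach (the number column and range are assumptions; a unique index on that column is what actually guarantees no duplicates):

require "securerandom"

class Invoice < ActiveRecord::Base
  before_create :assign_random_number

  private

  def assign_random_number
    self.number = loop do
      candidate = SecureRandom.random_number(1_000_000_000) # up to 9 digits
      break candidate unless self.class.exists?(number: candidate)
    end
  end
end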
The DB load on my site is getting really high, so it is time for me to cache common queries that are called thousands of times an hour and whose results do not change.
So for instance on my city model I do the following:
def self.fetch(id)
  Rails.cache.fetch("city_#{id}") { City.find(id) }
end

def after_save
  Rails.cache.delete("city_#{self.id}")
end

def after_destroy
  Rails.cache.delete("city_#{self.id}")
end
So now when I call City.find(1), the first time I hit the DB, but the next 1000 times I get the result from memory. Great. But most of the calls to city are not City.find(1) but @user.city.name, where Rails does not use the fetch but queries the DB again... which makes sense, but it's not exactly what I want it to do.
I can do City.find(@user.city_id) but that is ugly.
So my question to you guys: what are the smart people doing? What is the right way to do this?
With respect to the caching, a couple of minor points:
It's worth using a slash to separate object type and id, which is the Rails convention. Even better, ActiveRecord models provide the cache_key instance method, which gives a unique identifier made of the table name and id, e.g. "cities/13".
One minor correction to your after_save filter: since you have the data on hand, you might as well write it back to the cache instead of deleting it. That saves you a trip to the database ;)
def after_save
  Rails.cache.write(cache_key, self)
end
As to the root of the question, if you're continuously pulling @user.city.name, there are two real choices:
Denormalize the user's city name to the user row: @user.city_name (keep the city_id foreign key). This value should be written at save time.
-or-
Implement your User.fetch method to eager load the city. Only do this if the contents of the city row never change (i.e. name etc.), otherwise you can potentially open up a can of worms with respect to cache invalidation.
Personal opinion:
Implement basic id based fetch methods (or use a plugin) to integrate with memcached, and denormalize the city name to the user's row.
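A hedged sketch of that denormalization, copying the name onto users and refreshing it whenever the city changes (column names are assumptions):

class User < ActiveRecord::Base
  belongs_to :city
  before_save :copy_city_name, if: :city_id_changed?

  private

  def copy_city_name
    self.city_name = city && city.name
  end
end

# @user.city_name now reads straight from the users row - no City query and no cache lookup.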
I'm personally not a huge fan of cached-model-style plugins; I've never seen one that saved a significant amount of development time and that I didn't grow out of in a hurry.
If you're getting way too many database queries it's definitely worth checking out eager loading (through :include) if you haven't already. That should be the first step for reducing the quantity of database queries.
If you need to speed up SQL queries on data that doesn't change much over time, you can use materialized views.
A matview stores the results of a query into a table-like structure of its own, from which the data can be queried. It is not possible to add or delete rows, but the rest of the time it behaves just like an actual table. Queries are faster, and the matview itself can be indexed.
At the time of this writing, matviews are natively available in Oracle DB, PostgreSQL, Sybase, IBM DB2, and Microsoft SQL Server. MySQL doesn't provide native support for matviews, unfortunately, but there are open source alternatives to it.
Here are some good articles on how to use matviews in Rails:
sitepoint.com/speed-up-with-materialized-views-on-postgresql-and-rails
hashrocket.com/materialized-view-strategies-using-postgresql
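For a concrete flavour, a hedged sketch of a PostgreSQL matview driven from Rails; the view name and query are invented for illustration:

class CreateCityStatsView < ActiveRecord::Migration
  def up
    execute <<-SQL
      CREATE MATERIALIZED VIEW city_stats AS
      SELECT cities.id AS city_id, cities.name, COUNT(users.id) AS users_count
      FROM cities
      LEFT JOIN users ON users.city_id = cities.id
      GROUP BY cities.id, cities.name
    SQL
  end

  def down
    execute "DROP MATERIALIZED VIEW city_stats"
  end
end

# A read-only model on top of the view
class CityStat < ActiveRecord::Base
  self.table_name = "city_stats"

  def readonly?
    true
  end
end

# Refresh from a cron or rake task whenever the underlying data changes:
# ActiveRecord::Base.connection.execute("REFRESH MATERIALIZED VIEW city_stats")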
I would go ahead and take a look at Memoization, which is now in Rails 2.2.
"Memoization is a pattern of
initializing a method once and then
stashing its value away for repeat
use."
There was a great Railscast episode on it recently that should get you up and running nicely.
Quick code sample from the Railscast:
class Product < ActiveRecord::Base
  extend ActiveSupport::Memoizable
  belongs_to :category

  def filesize(num = 1)
    # some expensive operation
    sleep 2
    12345789 * num
  end
  memoize :filesize
end
More on Memoization
Check out cached_model