Auditing and model lifecycle management for instances and their associations? - ruby-on-rails

I am trying to write an application to track legal case requests. The main model is Case, which has_many Subjects, Keywords, Notes, and Evidences (which, in turn, has_many CustodyLogs). Since the application is legal-related, there are some requirements that are out of the ordinary:
CRUD operations must be logged, including what the operation was, who the actor was, and when the operation occurred
There needs to be some way to validate the data (i.e. recording MD5 checksums of records)
Some data should be write-once (i.e. the app can create an audit log entry, but that log cannot be edited or deleted from within the application thereafter)
Changes to associated objects probably should be logged throughout the nesting. For example, adding a CustodyLog to a piece of Evidence should have a log for itself, a log for its Evidence, and a log for the parent Case. This is to ensure that the last update timestamp for the Case accurately reflects the real last update, and not just the last time that the Case model data itself changed.
I've got bits of this working, but I'm running into a problem. Authentication is being handled by an external web single-sign-on service, so the only visibility to the ID of the logged in user is in a request variable. If I put audit logging in the model, through a callback, for example, I can be fairly sure that all data modifications are logged, but the model has no visibility to the request variables, so I can't log the user ID. This also ensures that changes to the state machine (currently using state_machine plugin) get logged.
If, on the other hand, I put the audit logging in the application controller, I lose the ability to be sure that all CRUD operations are logged (code in the Case model calling Subject.create, for example, wouldn't be logged). I also think that I'd lose state changes.
Is there a way to be sure that all CRUD operations are logged throughout the association tree such that the user ID of the logged in user is recorded?

CRUD operations must be logged, including what the operation was, who the actor was, and when the operation occurred
This can be addressed with ActiveRecord callbacks and an attr_accessor field.
In any of the models that need to be logged, add the following.
attr_accessor :modifier_id, :modifier

validate :valid_user
before_validation :populate_modifier
before_save :write_save_attempted_to_audit_log
after_save :write_save_completed_to_audit_log

def populate_modifier
  self.modifier = User.find_by_id(modifier_id) unless modifier
end

def valid_user
  unless modifier
    errors.add(:modifier_id, "Unknown user attempted to modify this record")
    write_unauthorized_modification_to_audit_log
  end
end

def write_save_attempted_to_audit_log
  # announce that the user is attempting to save the record, with a timestamp;
  # use the changes hash (ActiveModel::Dirty) to show the modifications about to be made
end

def write_save_completed_to_audit_log
  # announce that the user has successfully changed the record, with a timestamp
end

def write_unauthorized_modification_to_audit_log
  # announce that a change was attempted without a valid user
end
Because you're likely to use this in a few models, you can abstract it into a plugin and add it only where needed with a method call like audit_changes. See any of the acts_as plugins for inspiration on how to accomplish this.
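A rough sketch of what that macro could look like as a module; the AuditsChanges name and layout are made up for illustration and are not an existing plugin:

module AuditsChanges
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # opt-in macro: call audit_changes in any model that should be audited
    def audit_changes
      attr_accessor :modifier_id, :modifier

      validate :valid_user
      before_validation :populate_modifier
      before_save :write_save_attempted_to_audit_log
      after_save :write_save_completed_to_audit_log

      include InstanceMethods
    end
  end

  module InstanceMethods
    # populate_modifier, valid_user, and the audit-log writers from above go here
  end
end

# Then, in each audited model:
class Subject < ActiveRecord::Base
  include AuditsChanges
  audit_changes
end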
In the controllers you will need to remember to set @thing.modifier = current_user before attempting to save.
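Something along these lines; the X-Remote-User header and the find_by_login lookup are assumptions about how your SSO layer exposes the identity:

class ApplicationController < ActionController::Base
  before_filter :set_current_user

  attr_reader :current_user

  private

  def set_current_user
    # the SSO layer places the authenticated login in a request header
    @current_user = User.find_by_login(request.headers["X-Remote-User"])
  end
end

class SubjectsController < ApplicationController
  def create
    @subject = Subject.new(params[:subject])
    @subject.modifier = current_user # feeds the audit callbacks in the model
    if @subject.save
      redirect_to @subject
    else
      render :new
    end
  end
end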
There needs to be some way to validate the data (i.e. recording MD5 checksums of records)
As for a checksum... of an operation? You could override inspect to print a string containing all the information in the record in a consistent fashion and then generate a checksum from that. While you're at it, you might as well add it to the audit log as part of the log-writing methods.
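For instance, a rough sketch of that idea (hashing the sorted attributes directly rather than overriding inspect); the audit_fingerprint name is made up here:

require "digest/md5"

def audit_fingerprint
  # serialize the attributes in a stable (sorted) order, then hash the result
  Digest::MD5.hexdigest(attributes.sort.to_h.inspect)
end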
Some data should be write-once (i.e. the app can create an audit log entry, but that log cannot be edited or deleted from within the application thereafter)
Write each audit log entry as a separate file with a deterministic name ("/logs/audits/#{self.class}/#{id}/#{timestamp}") and remove the write permission once it's saved: File.chmod(0555, access_log_file)
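A minimal sketch of that idea, assuming it lives in the audited model; the path layout and the write_audit_entry name are illustrative:

require "fileutils"

def write_audit_entry(text)
  dir = Rails.root.join("logs", "audits", self.class.name, id.to_s)
  FileUtils.mkdir_p(dir)
  path = dir.join(Time.now.utc.strftime("%Y%m%d%H%M%S%6N"))
  File.write(path, text)
  File.chmod(0444, path) # drop the write bits so the entry is read-only from here on
end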
Changes to associated objects probably should be logged throughout the nesting. For example, adding a CustodyLog to a piece of Evidence should have a log for itself, a log for its Evidence, and a log for the parent Case. This is to ensure that the last update timestamp for the Case accurately reflects the real last update, and not just the last time that the Case model data itself changed.
As for the 4th requirement: that will automatically get rolled into my solution for the first one if you use accepts_nested_attributes_for on any of your nested relationships, and :autosave => true for belongs_to relationships.
If you're saving checksums into audit logs, you can roll a check into the before_save method to ensure the object you're working on has not been tampered with, simply by fetching the latest audit log entry for the object and matching up the checksums.
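A sketch of that check, assuming an audit_logs association whose entries carry a checksum column, plus the audit_fingerprint helper above (both names are illustrative):

before_save :verify_last_checksum

def verify_last_checksum
  return true if new_record?
  last_entry = audit_logs.order(:created_at).last
  return true if last_entry.nil?
  # fingerprint the record as it currently exists in the database
  stored = Digest::MD5.hexdigest(self.class.find(id).attributes.sort.to_h.inspect)
  return true if last_entry.checksum == stored
  errors.add(:base, "record does not match its last audited checksum")
  false # halts the callback chain on Rails 3/4; use throw(:abort) on Rails 5+
end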

Related

Uniqueness validation with transaction

I use a transaction to save the Hotel model. Here is the code:
def init
  Hotel.transaction do
    @hotel.save!
    create_related_models
  end
end
I have a uniqueness validation on :name in Hotel. The validation doesn't work inside the transaction. What is the way of implementing database-related validations inside a transaction?
An explanation of why the validation does not work:
When a user submits the form, the request takes about 10s. If he clicks the save button again during that request, he will save two hotels with the same name (which is the issue). This is because the first transaction hasn't finished when the second transaction starts, so when the second one runs there is no hotel in the database yet and the validation returns true.
You will need to either:
Add a database constraint to prevent this behaviour, and catch and handle any error (see the sketch after this list). The application cannot have visibility of uncommitted RDBMS transactions; only the database can do that.
Add a locking mechanism in the application, which will be difficult if you are running on multiple threads (Heroku dynos?).
Greatly reduce the time taken for the transaction.
Move the creation of the related models outside of the transaction, and provide a mechanism for manually deleting the Hotel record if a problem arises during execution.
Remove the uniqueness validation.
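For the first option, a sketch of the unique-index-plus-rescue approach; the migration and error handling shown here are illustrative:

class AddUniqueIndexToHotelsName < ActiveRecord::Migration
  def change
    add_index :hotels, :name, :unique => true
  end
end

def init
  Hotel.transaction do
    @hotel.save!
    create_related_models
  end
rescue ActiveRecord::RecordNotUnique
  # the concurrent duplicate submission lost the race at the database level;
  # surface it like a validation failure instead of a 500
  @hotel.errors.add(:name, "has already been taken")
end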

How to properly enforce a conditional read-only record on Rails?

So a situation came up at work and I wanted to discuss it here because we could not get to an agreement between us:
We have two models, Order and Passport, which are related in a way that an Order has_one passport and a passport has_many orders. Whenever an order is completed, its associated passport must be 'locked', that is, turned into read-only (that information was already used to clear customs, so it can't be changed afterwards). We want to enforce that rule in the Passport model and we've thought of the following options:
Creating a validation. CONS: There will be records yielding valid? => false when technically the record is fine (although it can't be saved). For example, if other records have a validates_associated :passport on them, that could be a problem.
Overriding the readonly? method. CONS: This will raise an exception when trying to update that record, although you would expect that calling a save method won't ever raise one.
Creating a before_save callback. This has two flavors: either raise an exception (which is pretty much like the readonly? option) or add an #error and return false to stop the callback chain. CONS: Adding validation errors from outside a proper validation can be considered a bad practice. Also, you might find yourself calling valid? and getting true and then call save and get false.
This situation made us think a lot about the relationship between validations and Rails. What exactly does it mean for a record to be valid? Does it imply that the save will work?
I would like to listen to your opinions to learn about this scenario. Maybe the best approach is neither one of the three! Thanks!
What about marking the record as read-only by using the readonly! instance method? See the API.
You could do it in an after_initialize callback (overriding the constructor alone won't run for records loaded from the database), like:
class Passport < ActiveRecord::Base
  after_initialize :lock_if_used

  def lock_if_used
    readonly! if persisted? && orders.count > 0 # or similar
  end
end
I think there is an extra alternative. What you describe dictates that the Passport model can have some different states. I would consider using a state machine to describe the relevant orders status for the passport.
eg:
open
pending
locked
other_update_actions ...
With that in mind, all relevant order actions will trigger an event to the passport model and its state.
If it is possible to tie the update actions to certain events, then you could handle the read-only part in a more elegant way (as an incompatible state transition).
As an extra check you can always keep an ugly validator as a last resort to prevent the model from being updated without the state machine.
You can check the aasm gem for this.
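A minimal sketch with aasm; the state and event names are illustrative, and the model is assumed to have an aasm_state string column:

class Passport < ActiveRecord::Base
  include AASM

  aasm do
    state :open, :initial => true
    state :pending
    state :locked

    event :lock do
      transitions :from => [:open, :pending], :to => :locked
    end
  end

  # the last-resort guard mentioned above: refuse updates once locked
  before_update :ensure_not_locked

  def ensure_not_locked
    return true unless locked?
    errors.add(:base, "passport is locked")
    false # use throw(:abort) on Rails 5+
  end
end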

Race conditions in AR destroy callbacks

I seem to have a race condition in my Rails app. While deleting a user and all of the associated models that depend on it, new associated models are sometimes created by the user. User deletions can take a while if we're deleting a lot of content, so it makes sense that race conditions would exist here. This ends up creating models that point to a user that doesn't exist.
I've tried fixing this by creating a UserDeletion model, which acts as a sort of mutex lock. Before it starts deleting the user, it'll create a new UserDeletion record. When a user tries to create new content, it checks to make sure an associated UserDeletion record doesn't exist. After it's done, it deletes it.
This hasn't solved the problem, though, so I'm wondering how other people have handled similar issues with AR callbacks and race conditions.
First of all, when there is a lot of associated content, we moved to a manual delete process using SQL DELETE instead of Rails' destroy. (Though this may not work for you if you have introduced a lot of callback dependencies that do something after a record is destroyed.)
def custom_delete
  self.class.transaction do
    related_objects.delete_all
    related_objects_2.delete_all
    delete
  end
end
If you find yourself writing this all the time, you can wrap it in a class method that accepts a list of association names to delete.
class ActiveRecord::Base
  class << self
    def bulk_delete_related(*args)
      define_method "custom_delete" do
        ActiveRecord::Base.transaction do
          args.each do |field|
            send(field).delete_all
          end
          delete
        end
      end
    end
  end
end

class SomeModel < ActiveRecord::Base
  bulk_delete_related :related_objects, :related_objects2, :related_object
end
I inserted the class method into ActiveRecord::Base directly, but you should probably extract it into a module. Also, this only speeds things up; it does not resolve the original problem.
Secondly, you can introduce FK constraints (we did that to ensure integrity, as we do a lot of custom SQL). That way the User won't be deleted as long as there are linked objects, though it might not be what you want. To increase the effectiveness of this solution you can delegate user deletion to a background job that retries deleting the user until it actually can be deleted (no new objects dropped in).
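A sketch of the FK constraint idea; the table names are illustrative, and add_foreign_key is built into Rails 4.2+ (older apps used the foreigner gem or raw SQL):

class AddUserForeignKeys < ActiveRecord::Migration
  def change
    # with no ON DELETE action, the database refuses to delete a user
    # that still has dependent rows
    add_foreign_key :posts, :users
    add_foreign_key :comments, :users
  end
end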
Alternatively, you can go the other way around, as we did in some cases at my previous job: if it's more important to delete the user than to be sure there are no zombie records, use a sweep process to clean them up from time to time.
Finally, the truth is somewhere in the middle: apply constraints to relations that definitely need to be cleaned up before removing the user, and rely on the sweeper to remove the less important ones that shouldn't interfere with user deletion.
Problem is not trivial but it should be solvable to some extent.

Which rails ActiveRecord callback to sync with service (stripe) when creating a new record and still properly use errors?

I have a User and a StripeCustomer model. Every User embeds one and accepts_nested_attributes_for StripeCustomer.
When creating a new user, I always create a corresponding StripeCustomer and if you provide either a CC or a coupon code, I create a subscription.
In my StripeCustomer:
attr_accessible :coupon_id, :stripe_card_token
What I'd like to do is, if the coupon is invalid, do:
errors.add :coupon_id, "bad coupon id"
So that normal Rails controller patterns like:
if @stripe_customer.save
  ....
else
  ....
end
will just work. And be able to use normal rails field_with_errors stuff for handling a bad coupon.
So the question is, at which active record callback should I call Stripe::Customer.create and save the stripe_customer_token?
I had it on before_create, because I want it done only if you are really going to persist the record. But this does strange things with valid? and worse, if you are going to create it via a User, the save of User and StripeCustomer actually succeeds even if you do errors.add in the before_create callback! I think the issue is that the save will only fail if you add errors and return false at before_validation.
That last part I'm not sure if it is a mongoid issue or not.
I could move it to before_validation :on => :create but then it would create a new Stripe::Customer even if I just called valid? which I don't want.
Anyway, I'm generically curious about what the best practices are with any model that is backed by or linked to a record on a remote service and how to handle errors.
OK, here is what I did: I split the calls to Stripe into two callbacks, one at before_validation and one at before_create (or before_update).
In the before_validation, I do whatever I can to check that the uncontrolled inputs (coming directly from the user) are valid. In the Stripe case that just means the coupon code, so I check with Stripe that it is valid and add errors to :coupon_id as needed.
I wait until before_create/before_update to actually create/update customers with Stripe (I use the two instead of just before_save because I handle those two cases differently). If there is an error at that point, I just let the exception propagate instead of trying to add to errors after validation, which (a) doesn't really make sense and (b) sort of works but fails to prevent saves on nested models (in Mongoid anyway, which is very bad and strange).
This way I know by the time I get to persisting, that all the attributes are sound. Something could of course still fail but I've minimized my risk substantially. Now I can also do things like call valid? without worrying about creating records with stripe I didn't want.
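Roughly, the split looks like this; a sketch only, where the attribute names follow the question and the Stripe calls assume the stripe gem:

class StripeCustomer
  include Mongoid::Document
  embedded_in :user

  field :coupon_id
  field :stripe_card_token
  field :stripe_customer_token

  before_validation :check_coupon, :on => :create
  before_create :create_stripe_customer

  private

  def check_coupon
    return if coupon_id.blank?
    Stripe::Coupon.retrieve(coupon_id)
  rescue Stripe::InvalidRequestError
    errors.add(:coupon_id, "is not a valid coupon")
  end

  def create_stripe_customer
    customer = Stripe::Customer.create(:card => stripe_card_token, :coupon => coupon_id.presence)
    self.stripe_customer_token = customer.id
    # any Stripe error here is left to propagate rather than being swallowed
  end
end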
In retrospect this seems pretty obvious.
I'm not sure I totally understand the scenario. You wrote:
Every User embeds one and accepts_nested_attributes_for StripeUser
Did you mean StripeCustomer?
So you have a User that has a Customer that holds the coupon info?
If so, I think it should be enough to accept nested attributes for the customer in the user, put the validation in the customer code, and that's it.
See here
Let me know if I got your question wrong...

Record changes pend approval by a privileged user; Its like versioning combined with approvals

I have a requirement that certain attribute changes to records are not reflected in the user interface until those changes are approved. Further, if a change is made to an approved record, the user will be presented with the record as it exists before approval.
My first try...
was to go to a versioning plugin such as paper_trail, acts_as_audited, etc. and add an approved attribute to their version model. Doing so would not only give me the ability to 'rollback' through versions of the record, but also SHOULD allow me to differentiate between whether a version has been approved or not.
I have been working down this train of thought for a while now, and the problem I keep running into is on the user side. That is, how do I query for a collection of approved records? I could (and did try to) write some helper methods that get a collection of records and then loop over them to find an "approved" version of each record. My primary gripe with this is how quickly the number of database hits can grow. My next attempt was to do something as follows:
Version.
  where(:item_type => MyModel.name, :approved => true).
  group(:item_type).collect do |v|
    # like the 'reify' method of paper_trail
    v.some_method_that_converts_the_version_to_a_record
  end
So assuming that the some_method... call doesn't hit the database, we kind of end up with the data we're interested in. The main problem I ran into with this method is that I can't use this "finder" as a scope. That is, I can't append additional scopes to this lookup to narrow my results further. For example, my records may also have a cool scope that only shows records where :cool => true. Ideally, I would want to look up my records as MyModel.approved.cool, but here I guess I would have to get my collection of approved models and then loop over them for cool ones, which would at the very least result in a bunch of records being initialized in memory for no reason.
My next try...
involved creating a special type of "pending record" that basically holds "potential" changes to a record. So on the user end you would look up whatever you wanted as you normally would. Whenever a pending record is apply!(ed), it simply makes those changes to the actual record, and all's well... except about 30 minutes into it I realized that it all breaks down if an "admin" wishes to go back and contribute more to his change before approving it. I guess my only option would be either to:
Force the admin to approve all changes before making additional ones (that won't go over well... nor should it).
Try to read the changes out of the "pending record" model and apply them to the existing record without saving. Something about this idea just doesn't quite sound "right".
I would love someone's input on this issue. I have been wrestling with it for some time, and I just can't seem to find the way that feels right. I like to live by the "if it's hard to get your head around it, you're probably doing it wrong" mantra.
And this is kicking my tail...
How about, create an association:
class MyModel < ActiveRecord::Base
  belongs_to :my_model
  has_one :new_version, :class_name => 'MyModel'
  # ...
end
When an edit is made, you basically clone the existing object to a new one. Associate the existing object and the new one, and set a has_edits attribute on the existing object, the pending_approval attribute on the new one.
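A sketch of that cloning step, inside MyModel; the method and attribute names beyond the associations above are illustrative:

def create_pending_edit(new_attributes)
  pending = dup                       # copies the attributes but not the id
  pending.assign_attributes(new_attributes)
  pending.pending_approval = true
  pending.my_model = self             # link the draft back to the original
  transaction do
    pending.save!
    update_attribute(:has_edits, true)
  end
  pending
end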
How you treat the objects once the admin approves it depends on whether you have other associations that depend on the id of the original model.
In any case, you can reduce your queries to:
objects_pending_edits = MyModel.where("has_edits = true").all
then with any given one, you can access the new edits with obj.new_version. If you're really wanting to reduce database traffic, eager-load that association.
