I'm trying to generalize some logic that manipulates an ActiveRecord::Relation. The aim is to prevent authorization issues, so a flag needs to be set whenever a new ActiveRecord::Relation is instantiated or changed, but I'm not sure how to hook into the ActiveRecord::Relation internals. I think some pagination gems might have solved a similar problem, but I'm unsure.
The specific issue is that for Pundit we use something like:
policy_scope(Model)
Ignoring the specifics of exactly how policy_scope works (as it's pretty flexible), it might modify the query to use something like:
Model.where(user_id: current_user.id)
And yes, some care is needed to ensure it doesn't perform a union rather than an intersection on the ids, but that is another matter and is handled within the policy itself.
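For context, a typical Pundit scope resolves along these lines (this follows the pattern in Pundit's README; PostPolicy here is just an illustration, not code from my app):

# Illustrative only: PostPolicy and the user_id column follow Pundit's
# README pattern, not the actual app.
class PostPolicy < ApplicationPolicy
  class Scope < Scope
    def resolve
      scope.where(user_id: user.id)
    end
  end
end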
The goal is to restrict a model (or a database query in general) to a specific scope, and I'd like to add a check that verifies all database queries are scoped. One way to do this would be to automatically attach a flag of some sort to the query itself, unflag it when the query is scoped, and raise an error if the query is run while still flagged.
The problem I'm trying to solve is that unscoped database queries can be very problematic when it comes to multi-tenanting and other use cases.
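For what it's worth, Pundit's after_action :verify_policy_scoped already raises when policy_scope is never called in an action, which covers the controller side. What I'm imagining at the relation level would look roughly like the sketch below. Names like ScopedQueryGuard and mark_scoped! are made up, exec_queries is a private Rails internal that may change between versions, and a real version would need opt-outs for legitimately unscoped queries.

# Rough sketch only. ScopedQueryGuard and mark_scoped! are invented names;
# exec_queries is the private method Relation calls when it hits the database.
module ScopedQueryGuard
  # Called from policy_scope (or whatever helper applies the scoping).
  def mark_scoped!
    @scope_verified = true
    self
  end

  private

  def exec_queries(&block)
    raise "Unscoped query executed: #{to_sql}" unless @scope_verified
    super
  end
end

ActiveRecord::Relation.prepend(ScopedQueryGuard)

Since chaining methods like where build each new relation by cloning the current one, the instance-variable flag should survive further chaining, so marking the relation once after scoping would be enough.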
Related
I made a new Rails project and added Devise for user management. I also made tables like 'posts' and 'tags', which have a 'user_id' field because the data is per user.
Now I can make queries where I always include user_id as a filter. This works fine, but I'm afraid that someday I will forget to filter on user_id and a user will be able to see other users' data.
Is there a way in models to force a certain filter?
For some models, like 'posts' and 'tags', I would like to always filter on the current_user. Is there a way to do this automatically, or maybe raise an exception if I forget to filter on the user?
Any tips are welcome.
(I could use something like Apartment, but I prefer a single database/schema for now)
In your proxy method for params, you can use the require method to require the user_id field. Thus if you refrain from using params directly, as everybody should, your constraint will be enforced.
Another way is to have a before_action filter where you check your condition. That way, unless you purposefully exclude an action from this filter, your check will always be enforced (or a 422 will be returned).
Putting a condition on the model itself seems wrong to me: the model should not know what the access conditions are because access control is an orthogonal feature and should not be entangled with the model.
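To make that concrete, here is a minimal controller-level sketch, assuming Devise's current_user and a User has_many :posts association (the controller and actions are illustrative). Scoping every query through the association makes the user_id filter implicit:

class PostsController < ApplicationController
  before_action :authenticate_user!  # Devise

  def index
    @posts = current_user.posts  # implicit WHERE user_id = current_user.id
  end

  def show
    # Raises ActiveRecord::RecordNotFound for another user's post.
    @post = current_user.posts.find(params[:id])
  end
end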
In my Ruby on Rails project, I have a mailer that basically prepares a daily digest of things that happened in the system for a given user. In the mailer controller, I am gathering all the relevant records from the various models according to some common pattern (within a certain date, not authored by this user, not flagged, etc) and with minor differences from model to model.
There are half a dozen models involved here (and counting), and most of them have unified column names for certain things (like the date of publishing, or whether an item is flagged by an admin or not). Hence, the 'where's that go into each query are mostly the same. There are minor differences in conditions, but at least 2 or 3 conditions are exactly the same. I can easily assume there will be even more shared conditions between models, since we are just starting the feature and haven't figured out the eventual shape of the data yet.
I basically chain the 'where' calls on each model. It irritates me to have 6 lines of code so close to each other, spanning so far to the right of my code editor, and yet so similar. I dread the idea that at some point we will have to change one of the 'core' conditions and have to munge that many lines of code all at once.
What I'd love to do is move the core set of conditions that goes into each query into some sort of Proc or whatever, then simply call it on each model like a scope, and after that continue the 'where' chain with model-specific conditions. Much like a scope on each model.
What I am struggling with is how exactly to do that while keeping the code inside the mailer. I certainly know that I can declare a complex scope inside a concern, mix it into my models and start each query with that scope. However, this way the logic moves away from the mailer into the uncharted territory of model concerns, and it complicates each model with a scope that is currently only needed for one little mailer in a huge system. Also, some queries require a set of details from the User model, and I don't want each of my models to handle User.
I like the way scopes are defined in Active Record models via lambdas (like scope :pending, -> { where(approved: [nil, false]) }), and I was looking for a way to use similar syntax outside the model class and inside my mailer method (possibly with a tap or something like that), but I haven't found any good examples of such an approach.
So, is it possible to achieve? Can I collect the core 'where' calls inside some variable in my mailer method and apply them to many models, while still being able to continue the where chain after that?
The beauty of Arel, the technology behind ActiveRecord query-building, is that it's all completely composable, using ordinary Ruby.
Do I understand your question right that this is what you want to do?
def add_on_something(arel_scope)
  arel_scope.where("magic = true").where("something = 1")
end
add_on_something(User).where("more").order("whatever").limit(10)
add_on_something(Project.where("whatever")).order("something")
An ordinary Ruby method will do it; you don't need a special AR feature, because AR scopes are already composable.
You could do something like:
@report_a = default_scope(ModelA)
@report_b = default_scope(ModelB)

private

def default_scope(model)
  model.
    where(approved: [nil, false]).
    order(:created_at)
  # ...
end
In my app there are objects, and they belong to countries, regions, cities, types, groups, companies and other sets. Every set is rather simple: it has an id, a name, and sometimes some pointers to other sets, and it never changes. Some sets are small and I load them in a before_filter like this:
@countries = Country.all
@regions = Region.all
But then I call, for example,
offer.country.name
or
region.country.name
and my app performs a separate db query-by-id, although I've already loaded all of those records. I then tried querying through :include, but the queries generated by eager loading don't depend on whether I've already loaded the same data with another query-by-id or not.
So I want some cache. For example, I could generate hashes keyed by record ids in my before_filter and then call @countries[offer.country_id].name. In that case it seems I don't need eager loading, and it's easy to turn on Rails.cache here. But maybe there's some smart built-in Rails solution that doesn't require rewriting everything?
Caching lists of models like that won't cache the individual instances that come back through other models' associations.
The Rails team has worked on implementing Identity Maps in Rails 3.1 to solve this exact problem, but it is disabled by default for now. You can enable it and see if it works for your problem.
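In the meantime, the hash idea from the question is a one-liner with ActiveSupport's index_by:

# In the before_filter; index_by comes from ActiveSupport.
@countries = Country.all.index_by(&:id)
@regions = Region.all.index_by(&:id)

# Later: a plain hash lookup, no extra query per record.
@countries[offer.country_id].name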
I have an application where I would like to override the behavior of destroy for many of my models. The use case is that users may have a legitimate need to delete a particular record, but actually deleting the row from the database would destroy referential integrity that affects other related models. For example, a user of the system may want to delete a customer with whom they no longer do business, but transactions with that customer need to be maintained.
It seems I have at least two options:
Duplicate data into the necessary models, effectively denormalizing my data model so that deleted records won't affect related data.
Override the "destroy" behavior of ActiveRecord to do something like set a flag indicating the user "deleted" the record and use this flag to hide the record.
Am I missing a better way?
Option 1 seems like a horrible idea to me, though I'd love to hear arguments to the contrary.
Option 2 seems somewhat Rails-ish, but I'm wondering about the best way to handle it. Should I create my own parent class that inherits from ActiveRecord::Base, override the destroy method there, then inherit from that class in the models where I want this behavior? Should I also override finder behavior so records marked as deleted aren't returned by default?
If I did this, how would I handle dynamic finders? What about named scopes?
If you're not actually interested in seeing those records again, but only care that the children still exist when the parent is destroyed, the job is simple: add :dependent => :nullify to the has_many call to set references to the parent to NULL automatically upon destruction, and teach the view to deal with that reference being missing. However, this only works if you're okay with not ever seeing the row again, i.e. viewing those transactions shows "[NO LONGER EXISTS]" under company name.
If you do want to see that data again, then what you want has nothing to do with actually destroying records, since destroying a record means you never need to refer to it again. Hiding seems to be the way to go.
Instead of overriding destroy, since you're not actually destroying the record, it seems significantly simpler to put your behavior in a hide method that triggers a flag, as you suggested.
From there, whenever you want to list these records and only include visible records, one simple solution is to include a visible scope that doesn't include hidden records, and not include it when you want to find that specific, hidden record again. Another path is to use default_scope to hide hidden records and use Model.with_exclusive_scope { find(id) } to pull up a hidden record, but I'd recommend against it, since it could be a serious gotcha for an incoming developer, and fundamentally changes what Model.all returns to not at all reflect what the method call suggests.
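A minimal sketch of that approach, assuming a boolean hidden column (the column, model, and method names here are illustrative):

class Customer < ActiveRecord::Base
  scope :visible, -> { where(hidden: false) }

  def hide!
    update_attribute(:hidden, true)
  end
end

# Listings opt in to the scope explicitly:
Customer.visible.order(:name)
# Fetching a specific record, hidden or not, works as usual:
Customer.find(params[:id])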
I understand the desire to make the controllers look like they're doing things the Rails way, but when you're not really doing things the Rails way, it's best to be explicit about it, especially when it's really not that much of a pain to do so.
I wrote a plugin for this exact purpose, called paranoia. I "borrowed" the idea from acts_as_paranoid and basically re-wrote AAP using much less code.
When you call destroy on a record, it doesn't actually delete it. Instead, it will set a deleted_at column in your database to the current time.
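Usage is minimal; per the README, you add a deleted_at datetime column and call acts_as_paranoid (the Customer model here is just an example):

class Customer < ActiveRecord::Base
  acts_as_paranoid
end

customer.destroy       # sets deleted_at instead of deleting the row
Customer.only_deleted  # finds the soft-deleted records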
The README on the GitHub page should be helpful for installation & usage. If it isn't, then let me know and I'll see if I can fix that for you.
In a stats part of a Rails app, I have some custom SQL calls that are invoked with ActiveRecord::Base.connection.execute from the model code. They return various aggregates.
Some of these (identical) queries are run in a loop in the controller, and it seems that they aren't cached by the ActiveRecord query cache.
Is there any way to cache custom SQL queries within a single request?
Not sure if AR supports query caching for #execute; you might want to dig into the documentation.
Anyway, what you can do is use memoization, which means you'll keep the result yourself until the current request is over.
Do something like this in your model:
def repeating_method_with_execute
  @rs ||= ActiveRecord::Base.connection.execute(...)
end
This will basically run the query only the first time and then keep the result in @rs until the entire request is over.
If I'm not mistaken, Rails 2.x already has a memoize macro in ActiveSupport that does all of that automatically.
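For reference, that's the memoize macro from ActiveSupport::Memoizable (introduced around Rails 2.2 and later deprecated); a sketch, with an illustrative model, method name, and placeholder SQL:

class Report < ActiveRecord::Base
  extend ActiveSupport::Memoizable

  def daily_totals
    connection.execute("SELECT ...")  # placeholder SQL
  end
  memoize :daily_totals
end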
hope it helps