Let's say I want to update /groups/$groupID and /users/$userID in a single update call. The update should fail if any of the individual updates fails. I'm getting
Listener at / failed: permission_denied
when trying to update via
ref().update({
  'groups/...': 'value1',
  'users/...': 'value2'
})
probably because I have no rules defined at the root (/) level, only separately in each child (/groups, /users, etc.).
If I did define a write rule at the root level, wouldn't that mean that all the child write rules are overridden by it? I wouldn't want that to happen. I've also looked into the transaction API, but it seems that transactions are run 'on a reference', which in this case would again be the root reference.
Is there a different way of running a transaction that runs multiple updates?
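For reference: security rules cascade downward, so a permissive root-level .write would indeed override all the child rules, which is why that isn't the answer here. However, a multi-location update() is validated against the rules at each path it writes, so per-child rules should be sufficient. A sketch of what that might look like (the auth conditions are purely illustrative):

{
  "rules": {
    "groups": {
      "$groupID": { ".write": "auth != null" }
    },
    "users": {
      "$userID": { ".write": "auth.uid === $userID" }
    }
  }
}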
Related
I'm trying to generalize some logic that manipulates an ActiveRecord::Relation. The aim is to prevent authorization issues, so a flag needs to be set whenever a new ActiveRecord::Relation is instantiated or changed, but I'm not sure how to access the ActiveRecord::Relation internals. I think some pagination gems might have solved a similar problem, but I'm unsure.
The specific issue is that for Pundit we use something like:
policy_scope(Model)
Ignoring the specifics of exactly how policy_scope works (as it's pretty flexible), it might modify the query to use something like:
Model.where(user_id: current_user.id)
And yes, some care is needed to ensure it doesn't perform a union rather than an intersect on the ids, but that is another matter and handled within the policy itself.
In general terms: I want to scope a model or database query to a specific scope, and add a check that ensures and verifies that all database queries are scoped. One way this could be done would be to automatically attach a flag of some sort to the query itself and unflag it once it is scoped, with an error raised if the query is run while still flagged.
The problem I'm trying to solve here is that it can be very problematic if certain database queries are not scoped, especially for multi-tenancy and similar use cases.
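One possible direction, sketched under the assumption that patching ActiveRecord::Relation is acceptable: carry the flag as an instance variable on the relation and check it in exec_queries, the private method Relation uses to actually hit the database. All names here are hypothetical:

module ScopedQueryGuard
  # Called by policy_scope (or similar) once authorization scoping is applied
  def mark_policy_scoped!
    @policy_scoped = true
    self
  end

  private

  # exec_queries is where ActiveRecord::Relation actually runs the query
  def exec_queries(&block)
    raise "Unscoped query executed on #{klass.name}" unless @policy_scoped
    super
  end
end

ActiveRecord::Relation.prepend(ScopedQueryGuard)

Chained calls such as where and order spawn new relations via clone, so the instance variable is carried along; in practice you'd also want an allow-list for models that are legitimately tenant-free.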
I am looking for a Ruby/Rails tool that will help me accomplish the following:
I would like to store the following string, and ones similar to it, in my database. When an object is created, updated, deleted, etc., I want to run through all the strings, check to see if the CRUD event matches the conditions of the string, and if so, run the actions specified.
When a new ticket is created and its category=6, then notify user 1234 via email
I am planning to create an interface that builds these strings, so it doesn't need to be a human-readable string. If a JSONish structure is better, or a tool has an existing language, that would be fantastic. I'm kinda thinking something along the lines of:
{
  object_types: ['ticket'],
  events: ['created', 'updated'],
  conditions: 'ticket.category=6',
  actions: 'notify user',
  parameters: {
    user: 1234,
    type: 'email'
  }
}
So basically, I need the following:
Monitor CRUD events - It would be nice if the tool had a way to do this, but I can use Rails' ModelObservers here if the tool doesn't natively provide it
Find all matching "rules" - This is my major unknown...
Execute the requested method/parameters - Ideally, this would be defined in my Ruby code as classes/methods
Are there any existing tools that I should investigate?
Edit:
Thanks for the responses so far, guys! I really appreciate you pointing me down the right paths.
The use case here is that we have many different clients, with many different business rules. For the rules that apply to all clients, I can easily create those in code (using something like Ruleby), but for all of the client-specific ones, I'd like to store them in the database. Ideally, a rule could be written once, stored either in the code or in the DB, and then run (using something like Resque for performance).
At this point, it looks like I'm going to have to roll my own, so any thoughts as to the best way to do that, or any tools I should investigate, would be greatly appreciated.
Thanks again!
I don't think it would be a major thing to write something yourself to do this; I don't know of any gems that would do it (but it would be good if someone wrote one!).
I would tackle the project in the following way. My thinking is that you don't want to do the rule matching at the point the user saves, as it may take a while, and it could interrupt the user experience and/or slow up the server, so...
1. Use observers to store a record each time a CRUD event happens, or, to make things simpler, use the Acts as Audited gem, which does this for you.
1.5. Use a rake task run from your crontab to work through the latest changes, perhaps every minute; or you could use Resque, which does a good job of handling lots of jobs.
2. Create a set of tables which define the possible rules a user could select from, perhaps something like:
Table: Rule
  Name
  ForEvent (e.g. CRUD)
  TableInQuestion
  FieldOneName
  FieldOneCondition, etc.
  MethodToExecute
You can use a bit of metaprogramming to execute your method, and since your method knows your table name and record id, the affected row can be picked up.
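As a rough illustration of that dispatch, assuming the Rule columns proposed above map to table_in_question and method_to_execute attributes, and a hypothetical RuleHandlers module holding the actual methods:

# Look up the affected record from the rule's table name and the changed
# record's id, then call the handler named in MethodToExecute
def execute_rule(rule, record_id)
  record = rule.table_in_question.classify.constantize.find(record_id)
  RuleHandlers.send(rule.method_to_execute, record)
end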
Additional Notes
The best way to get going with this is to start simple, then work upwards. To get the simple version working first, I'd do the following:
1. Install acts_as_audited.
2. Add an additional field to the created audits table, :when_processed.
3. Create yourself a module in your /lib folder called something like ProcessRules (see the sketch after this list) which roughly does this:
   3.1. Grabs all unprocessed audit entries
   3.2. Marks them as processed (perhaps make another small audit table at this point to record events happening)
4. Now create a rules table which simply has a name and a condition statement; perhaps add a few sample ones to get going:
   Name: First | Rule Statement: 'SELECT 1 WHERE table.value = something'
5. Adapt your new ProcessRules module to execute that SQL for each changed entry (perhaps restricted to just the tables you are working with).
6. If the rule matches, add it to your log file.
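A rough sketch of what that module might look like, assuming acts_as_audited's Audit model plus the :when_processed column from step 2 (the matching logic is deliberately naive):

# lib/process_rules.rb -- a sketch, not a drop-in implementation
module ProcessRules
  def self.run
    Audit.find(:all, :conditions => { :when_processed => nil }).each do |audit|
      Rule.find(:all).each do |rule|
        # A non-nil result from the rule's SQL counts as a match
        if ActiveRecord::Base.connection.select_value(rule.rule_statement)
          Rails.logger.info("Rule '#{rule.name}' matched audit #{audit.id}")
        end
      end
      audit.update_attribute(:when_processed, Time.now)
    end
  end
end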
From here you can extrapolate the additional functionality you need, and perhaps ask another question about the metaprogramming side of dynamically calling methods, as this question is quite broad; I'm more than happy to help further.
I tend to think the best way to go about task processing is to set up the process nicely first, so it will work under any server load and situation, then plug in the custom bits.
You could make this abstract enough so that you can specify arbitrary conditions and rules, but then you'd be developing a framework/engine as opposed to solving the specific problems of your app.
There's a good chance that using ActiveRecord::Observer will solve your needs, since you can hardcode all the different types of conditions you expect, and then only put the unknowns in the database. For example, say you know that you'll have people watching categories, then create an association like category_watchers, and use the following Observer:
class TicketObserver < ActiveRecord::Observer
  # observe :ticket # not needed here, since it's inferred by the class name

  def after_create(ticket)
    ticket.category.watchers.each { |user| notify_user(ticket, user) }
  end

  # def after_update ... (similar)

  private

  def notify_user(ticket, user)
    # look up the user's stored email preferences
    # send an email if appropriate
  end
end
If you want to store the email preference along with the fact that the user is watching the category, then use a join model with a flag indicating that.
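For instance, the join model might look roughly like this (model and column names are illustrative):

# Join model carrying the per-user notification preference
class CategoryWatching < ActiveRecord::Base
  belongs_to :user
  belongs_to :category
  # columns: user_id, category_id, notify_by_email (boolean)
end

class Category < ActiveRecord::Base
  has_many :category_watchings
  has_many :watchers, :through => :category_watchings, :source => :user
end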
If you then want to abstract it a step further, I'd suggest using something like treetop to generate the observers themselves, but I'm not convinced that this adds more value than abstracting similar observers in code.
There's a Ruby & Rules Engines SO post that might have some info you'll find useful. There's another Ruby-based rules engine that you may want to explore as well: Ruleby.
Hope that this helps you start your investigation.
In my rails application, I have some code like this:
def foo
  if object_bar_exists
    raise "can't create bar twice!"
  end
  Bar.create
end
Which could be invoked by two different requests coming into the application server. If this code is run by two requests simultaneously, and they both run the if check at the same time, neither will find the other's bar, and 2 bars will be created.
What's the best way to create a "mutex" for "the collection of bars"? A special purpose mutex table in the DB?
update
I should emphasize that I cannot use a memory mutex here, because the concurrency is across requests/processes and not threads.
The best thing to do is perform your operations in a DB transaction. Because you will probably eventually have multiple application servers, quite possibly not sharing memory, you won't be able to create a mutex lock at the application level, especially if those application services are running on entirely different physical boxes. Here's how to accomplish the DB transaction:
ActiveRecord::Base.transaction do
  # Transaction code goes here.
end
If you want to ensure a rollback of the DB transaction, you'll need validations enabled on the Bar class so that an invalid save! raises and triggers the rollback:
ActiveRecord::Base.transaction do
  bar = Bar.new(params[:bar])
  bar.save!
end
If you already have a bar object in the DB, you can lock that object pessimistically like this:
ActiveRecord::Base.transaction do
  bar = Bar.find(1, :lock => true)
  # perform operations on bar
end
If all the requests are coming into the same machine, and the same Ruby virtual machine, you could use Ruby's built-in Mutex class: Mutex Docs.
If there are multiple machines or Ruby VMs, you will have to use a database transaction to create/get the Bar object, assuming it's stored in the DB somehow.
I would probably create a Singleton object in the lib directory, so that you only have one instance of the thing, and use Mutexes to lock on it.
This way, you can ensure only one access to the thing at any given point in time. Of course, any other requests will block on it, so that's something to keep in mind.
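A minimal sketch of that idea, with the usual caveat that it only protects threads within a single Ruby process:

require 'singleton'

# One instance per process; all requests handled by this process
# serialize on its mutex
class BarCreationLock
  include Singleton

  def initialize
    @mutex = Mutex.new
  end

  def synchronize(&block)
    @mutex.synchronize(&block)
  end
end

# Usage:
BarCreationLock.instance.synchronize do
  raise "can't create bar twice!" if object_bar_exists
  Bar.create
end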
For multiple machines, you'd have to store a token in a database and synchronize access to it: the token has to be queried and pulled out, with some kind of counter or version number to ensure two processes cannot take the token at the same time. Or use a centralized locking web service so that your token handling lives in one spot.
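A sketch of the database-token variant, assuming a dedicated one-row locks table and a hypothetical Lock model, using the same pessimistic locking shown above:

ActiveRecord::Base.transaction do
  # Blocks until any other transaction holding this row commits or rolls back
  Lock.find(1, :lock => true)  # row 1 reserved for "the collection of bars"
  raise "can't create bar twice!" if object_bar_exists
  Bar.create
end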
I have a Postgres database (9) that I am writing a trigger for. I want the trigger to set the modification time, and user id for a record. In Firebird you have a CONNECTIONID that you can use in a trigger, so you could add a value to a table when you connect to the database (this is a desktop application, so connections are persistent for the lifetime of the app), something like this:
UserId | ConnectionId
---------------------
544 | 3775
and then look up in the trigger that connectionid 3775 belongs to userid 544 and use 544 as the user that modified the record.
Is there anything similar I can use in Postgres?
You could use the process id. It can be retrieved with:
pg_backend_pid()
With this pid you can also use the pg_stat_activity table to get more information about the current backend, although you should already know everything, since you are using this backend.
Or better: just create a sequence, and retrieve one value from it for each connection:
CREATE SEQUENCE connectionids;
And then:
SELECT nextval('connectionids');
in each connection, to retrieve a connection-unique id.
One way is to use the custom_variable_classes configuration option. It appears to be designed to allow the configuration of add-on modules, but can also be used to store arbitrary values in the current database session.
Something along the lines of the following needs to be added to postgresql.conf:
custom_variable_classes = 'local'
When you first connect to the database you can store whatever information you require in the custom class, like so:
SET local.userid = 'foobar';
And later on you can retrieve this value with the current_setting() function:
SELECT current_setting('local.userid');
Adding an entry to a log table might look something like this:
INSERT INTO audit_log VALUES (now(), current_setting('local.userid'), ...)
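Tying this back to the trigger itself, here is a sketch of the trigger function, assuming hypothetical modified_at and modified_by columns on the table being tracked:

-- current_setting() reads the value stored by SET local.userid above
CREATE OR REPLACE FUNCTION set_modification_info() RETURNS trigger AS $$
BEGIN
  NEW.modified_at := now();
  NEW.modified_by := current_setting('local.userid');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_modifications
  BEFORE INSERT OR UPDATE ON your_table
  FOR EACH ROW EXECUTE PROCEDURE set_modification_info();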
While it may work for your desktop use case, note that process ID numbers roll over (32768 is a common upper limit), so using them as a unique key to identify a user can run into problems. If you ever end up with leftover data from a previous session in the table that tracks the user-to-process mapping, it can collide with a newer connection that gets assigned the same process id after rollover. It may be sufficient for your app to just aggressively clean out old mapping entries, perhaps at startup time, given how you've described its operation.
To avoid this problem in general, you need to make a connection key that includes an additional bit of information, such as when the session started:
SELECT procpid,backend_start FROM pg_stat_activity WHERE procpid=pg_backend_pid();
That has to iterate over all of the connections active at the time, so it does add a bit of overhead. It's possible to execute it a bit more efficiently starting in PostgreSQL 8.4:
SELECT procpid,backend_start FROM pg_stat_get_activity(pg_backend_pid());
But that only really matters if you have a large number of connections active at once.
Use current_user if you need the database user (though from reading your question, I'm not sure that's what you want).
I'm working on a Rails app that has one database per account. (I know this is a controversial approach in itself, but I'm confident it's the right one in this case.)
I'd like to automate entirely the process of creating a new user account, which means I need to be able to create a new database and populate it with some seed data programmatically from within a Rails app.
My question, then, is how best to do this? I don't think I can just run migrations from within the app (or, if I can, how?), and just running the straight SQL queries within the app with hardcoded CREATE TABLE statements seems a really unwieldy way of doing things. What approach should I take, then?
Thanks in advance for your help!
David
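For what it's worth, migrations can be run from inside a Rails process. A minimal sketch, assuming PostgreSQL and that the per-account database name is derived at runtime (all names are illustrative):

# Build a config for the new account's database from the current one
config = ActiveRecord::Base.configurations[Rails.env].merge('database' => "account_#{account.id}")

# Connect to a maintenance database to create the new one
ActiveRecord::Base.establish_connection(config.merge('database' => 'postgres'))
ActiveRecord::Base.connection.create_database(config['database'])

# Switch to the new database and run all pending migrations
ActiveRecord::Base.establish_connection(config)
ActiveRecord::Migrator.migrate('db/migrate')

# Seed data can then be inserted through the models as usual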
This is an approach that my application requires. The app provides a web front-end onto a number of remote embedded devices which in turn monitor sensors. Each embedded device runs a ruby client process which reads a config file to determine its setup. There is a need to be able to add a new sensor type.
The approach I have is that each sensor type has its own data table, which is written into by every device that has that sensor. So in order to be able to create a new sensor type, I need to be able to set up new tables.
One initial issue is that the remote embedded devices do not have a Rails app on them, so table name pluralization is a bad plan, as the pluralization rules are not accessible to the remote devices. I therefore set
ActiveRecord::Base.pluralize_table_names = false
in config/environment.rb
The data on each sensor type is held in a SensorType model, which has two fields: the sensor name and the config file contents.
Within the SensorType model class, there are methods for:
Parsing the config file to extract field names and types
Creating a migration to build a new model
Altering a particular field in the DB from a generic string to char(17) as it is a MAC address used for indexing
Altering the new model code to add appropriate belongs_to relationships
Building partial templates for listing the data in the table (a header partial and a line_item partial)
These methods are all bound together by a create_sensor_tables method which calls all of the above and performs the appropriate require or load statements to ensure the new model is immediately loaded. This is called from the create method in the SensorTypeController as follows:
# POST /device_types
# POST /device_types.xml
def create
  @sensor_type = SensorType.new(params[:sensor_type])

  respond_to do |format|
    if @sensor_type.save
      @sensor_type.create_sensor_tables
      flash[:notice] = 'SensorType was successfully created.'
      # etc.
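For what it's worth, the table-creation part of create_sensor_tables might look roughly like this sketch; the config parsing and template generation are elided, and all names are illustrative:

# Inside SensorType -- a sketch only, not the actual implementation
def create_sensor_tables
  fields = parse_config   # hypothetical: returns { 'field_name' => :type } pairs

  ActiveRecord::Base.connection.create_table(name) do |t|
    fields.each { |field, type| t.column(field, type) }
    t.column :mac_address, 'char(17)'   # the MAC address used for indexing
  end
  ActiveRecord::Base.connection.add_index(name, :mac_address)

  # Define and load the model class on the fly
  klass = Class.new(ActiveRecord::Base)
  klass.set_table_name(name)            # Rails 2/3-era API
  Object.const_set(name.camelize, klass)
end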