I am looking for a data structure or design pattern to use with Rails and ActiveRecord that provides a way to build configurable events and event triggers driven by real-time data.
While this is not the end usage for this sort of system, I believe the following example demonstrates what I am trying to do. Similar to a log monitoring system like Splunk, essentially what I am trying to do is create a system where I can take some attribute from an object, compare it to a desired value, and perform an action if the evaluation is true.
Is there a library to do this, or would something need to be rolled from scratch? The way I was thinking about designing this would be similar to the following:
I would have an Actor (not in the traditional concurrency sense) which would house the attributes that I want to compare against. Then I would have a Trigger model holding an actor_id, an attribute name (e.g. count), a comparator (<, <=, ==, >=, >), a value, and an action_id. The action_id would point to an object with a perform method that just houses the code that needs to run when the trigger fires.
So in the end the trigger would evaluate to something like:
action.perform if actor.send(attribute) comparator value
Another option, possibly a more standard one, seems to be developing a DSL (e.g. FQL for Facebook). Would this be a better and more flexible approach?
Is this something that a library can handle, or if not, is this a decent structure for a system like the one I am proposing?
EDIT: Looks like a DSL might be the most flexible way to go. Tutorials for writing DSL in Ruby
If I understand the question correctly, what you have written is nearly correct as it stands. You can send comparison operators as symbols, like so:
comparator = :>
action.perform if actor.send(attribute).send(comparator, value)
# action will perform if actor's attribute is greater than value
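To round that out, here is a minimal sketch (not an existing library) of the Trigger model described in the question. The column is called attribute_name here to avoid clashing with ActiveRecord internals, and the comparator is whitelisted before being sent as a message:
class Trigger < ActiveRecord::Base
  belongs_to :actor
  belongs_to :action

  COMPARATORS = %w[< <= == >= >].freeze
  validates :comparator, :inclusion => { :in => COMPARATORS }

  # Evaluate the stored condition against the actor and run the action.
  # Assumes value is stored as (or cast to) a type comparable to the attribute.
  def fire_if_due
    current = actor.send(attribute_name)
    action.perform if current.send(comparator, value)
  end
end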
As our app scales, we have plenty of tables with millions of records, and some of ActiveAdmin's conventions are starting to fail us.
The ActiveAdmin convention is to simply declare a bunch of different filters that operate on a particular ransackable scope, and ActiveAdmin will automatically apply all of those filters to the scope, in addition to applying the scope set by your Authorization adapter (e.g. from a CanCan ability).
But with millions of records in the table, this approach makes it painfully easy for any AdminUser to specify a set of filters that, when combined with each other, cause a pathologically expensive Postgres query.
The answer is not simply to "add more indexes"; for the purpose of this question please assume we know how to tune our database.
What WOULD help is if we could take full control over the process by which ActiveAdmin assembles all of the filter params into a final scope. In particular we want the ability to:
Enforce that a datetime range specified in the filter params is within acceptable boundaries (and to display the truncated range in the filter input on the next render of the index page).
Enforce a certain order/algorithm for how scopes are applied to construct a more finely tuned query, rather than the unordered free-for-all that AA defaults to. This is not a case of us wanting to "fight the Postgres planner", but rather of tuning the way AA constructs certain queries to prevent pathological combinations of filters from being applied (or of warning the user in a flash message that they need to refine their query to something more specific).
Preserve all other AA conventions that still work just fine, such as defining an index block with standard table rendering; continuing to use the same filter DSL would be nice too. All we want to override is the resource scope assembly.
I've poked around a bit, but I can't seem to find a surefire way to get this to work. Any ideas?
'Full control' could be a big topic, but I would start by overriding index and looking for the 'q' parameter, which is submitted from Filters::ViewHelper. Inspect the query parameters and modify or remove them if the query looks too expensive.
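For example, a rough sketch along those lines (the Order resource and the expensive_filter? heuristic are placeholders, not part of the ActiveAdmin API):
ActiveAdmin.register Order do
  controller do
    def index
      # Inspect the Ransack params before ActiveAdmin builds the scope and
      # drop any filter we consider too expensive for this table.
      if params[:q].present?
        params[:q] = params[:q].reject { |key, _| expensive_filter?(key) }
      end
      super
    end

    private

    # Placeholder heuristic; adapt to your own schema and indexes.
    def expensive_filter?(key)
      key.to_s.end_with?("_cont")
    end
  end
end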
I am wondering what is the best or most idiomatic way to expose all events in a window to a custom function. The following example is constructed following the stock price style examples used in the Esper online documentation.
Suppose we have the following Esper query:
select avg(price), custom_function(price) from OrderEvent#unique(symbol)
The avg(price) part returns an average of the most recent price for each symbol. Suppose we want custom_function to work in a similar manner, but it needs complex logic: it would want to iterate over every value in the window each time the result is needed (e.g. outlier detection methods might need such an algorithm).
To be clear, I'm requiring that the algorithm look something like:
custom_function(window):
    for each event in window:
        update calculation
and there is no clever way to update the calculation as events enter or leave the window.
A custom aggregation could achieve this by pushing and popping events into a set, but that becomes problematic when primitive types are used. It also feels wasteful, since presumably Esper already has the collection of events in the window, so we would prefer not to duplicate it.
The Esper docs mention many ways to customize things; see this Solution Pattern, for example. They also mention that the 'pull API' can iterate over all events in a window.
What approaches are considered best to solve this type of problem?
For access to all events at the same time use window(*) or prevwindow(*).
select MyLibrary.computeSomething(window(*)) from ...
Here computeSomething is a public static method of the class MyLibrary (or you can define it as a UDF).
For event-at-a-time access you could use an enumeration method. The aggregate method takes an initial value and an accumulator lambda. There is also an extension API for adding your own enumeration methods, which you could use as well.
select window(*).aggregate(..., (value, eventitem) => ...) from ...
See the Esper documentation for window(*), the enumeration method aggregate, and the enumeration method extension API.
In several places in a rather complex Rails app there are references to a particular kind of object; let's call them "apples". I'd like to change all of these user-facing references from "apples" to "oranges". This would be simple enough, except that I'd like to retain Apple as class, so I don't want to touch the myriad methods, variables, symbols, etc. that use the word "apple".
There are several orders of magnitude more instances of apple in the code proper than there are user-facing instances of "apple". My question is: How can I zero in on the relatively few displayed instances? Is there a way to perform a search on all and only what is displayed by a browser?
Unless you've taken a disciplined approach to separate your language from your code, such as using localization files, then no, there's no easy way to find instances of displayed text. How is a search supposed to differentiate between 'apple' used as a type column and 'apple' inserted into a page?
This is why you might want to take an approach where you don't embed language in your controllers and models. Instead you could create a helper method to describe them for you:
You have <%= pluralize_model(@apple, 10) %> left.
That method, if constructed properly, would render '10 apples' or whatever term you'd like to use for that type of object.
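One hypothetical way to build such a helper, leaning on Rails' I18n and the pluralize helper so that the user-facing word lives in a locale file rather than in the Apple class:
# app/helpers/application_helper.rb
module ApplicationHelper
  # With config/locales/en.yml containing
  #   en: { display_names: { apple: "orange" } }
  # pluralize_model(@apple, 10) renders "10 oranges".
  def pluralize_model(record, count)
    label = I18n.t("display_names.#{record.class.name.underscore}",
                   :default => record.class.name.humanize.downcase)
    pluralize(count, label)
  end
end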
I have a table with a couple of boolean columns.
A PurchaseOrder needs to be marked as complete (the first boolean) before it is invoiced (the second boolean).
I'd appreciate some pointers as to how to validate legal combinations of these booleans.
What I have in mind is something like,
:validates (!:complete and !:invoiced) or (:complete && :invoiced) or (:complete && !:invoiced)
Is this possible?
Use a custom validator
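A minimal sketch of that, using a validate method and the boolean columns from the question (the only illegal combination is invoiced without complete):
class PurchaseOrder < ActiveRecord::Base
  validate :invoiced_only_after_complete

  private

  def invoiced_only_after_complete
    if invoiced? && !complete?
      errors.add(:invoiced, "cannot be set before the order is complete")
    end
  end
end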
Sounds like your PurchaseOrder goes through several states during its life cycle, like a finite-state machine.
One thing we use in production systems is acts_as_state_machine, to help accomplish this behavior.
It allows you to define:
a series of states that an object can be in
the events that move the object between states
and finally which transitions between states are allowed and which are not. For example, you might want to allow your PurchaseOrder to go from complete to invoiced, but not backwards from invoiced to complete. acts_as_state_machine allows you to set this up in a declarative style.
acts_as_state_machine will allow you to define those complex behaviors. It takes a bit of reading to understand, but for systems like these it has been a life saver.
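Roughly, a sketch assuming the classic acts_as_state_machine DSL (adapt to whichever state-machine gem you actually use):
class PurchaseOrder < ActiveRecord::Base
  acts_as_state_machine :initial => :pending

  state :pending
  state :complete
  state :invoiced

  event :mark_complete do
    transitions :from => :pending, :to => :complete
  end

  event :invoice do
    # Only a complete order can be invoiced, and there is no way back.
    transitions :from => :complete, :to => :invoiced
  end
end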
I'm using Rails 3 (release candidate) and Active Record 3 (with meta_where), and I've just started to switch to Sequel because it seems to be much faster and offers some really great features.
I'm already using the active_model plugin (and some others).
As far as I know, I should use User[params[:id]] instead of User.find(params[:id]). But this doesn't raise if no record exists, and it doesn't convert the value to an integer (the type of the PK), so it ends up as a string in the WHERE clause. I'm not sure if this causes any performance issues. Does it harm identity_map? What's the best way to solve both of these issues?
Is there an easy way to flip the usage of associations like User.messages_dataset and User.messages, so that User.messages behaves like it does in Active Record (i.e. like User.messages_dataset does now)? I guess I'd use the #..._dataset form a lot but never need the array method, because I could just add .all?
I noticed that some of the same (complex) queries are sometimes executed several times within one action. Is there something like the Active Record query cache? (identity_map doesn't seem to help in these cases.)
Is there a to_sql I can call to get the raw SQL a dataset would produce?
You can either use:
User[params[:id].to_i] || raise(Sequel::Error)
Or write your own method that does something like that. Sequel supports non-integer primary keys, so it won't do the conversion automatically. It shouldn't have any effect on an identity map. Note that Sequel doesn't use an identity map unless you are using the identity_map plugin. I guess the best way is to write your own helper method.
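A hypothetical helper along those lines (Sequel itself does not ship this method):
class User < Sequel::Model
  # Convert the id to an integer and raise if no row matches.
  def self.find!(id)
    self[Integer(id)] || raise(Sequel::Error, "no User with id #{id.inspect}")
  end
end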
Not really. You can use the association_proxies plugin so that non-array methods are sent to the dataset instead of the array of objects. In general, you shouldn't be using the association dataset method much. If you are using it a lot, it's a sign that you should have an association for that specific usage.
There is no query cache, and there never will be one. You should write your actions so that the results of the first query are cached and reused.
Dataset#sql gives you the SELECT SQL for the dataset.
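For example (the exact quoting depends on the adapter):
User.where(:active => true).sql
# => something like: SELECT * FROM users WHERE (active IS TRUE)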