I have a Grails application that works with a legacy database.
Almost all of my tables have a column, let's say include. I want entities to appear in query results if and only if this column has a nonzero value.
Is there any way to specify this per class or application-wide in Grails, perhaps in a static mapping block? Currently I specify AND include == 1 whenever I make a database query.
I think this plugin will get you what you need.
I'm wondering how one might index a virtual attribute on a model with Thinking Sphinx. Given a Project model, suppose some instance method returns a boolean derived from information on another model, say User; the value is computed and does not exist as a column on the projects table in the database.
For example, suppose we have a method is_user_eligible such that we can query Project.first.is_user_eligible, and get a true or false response. This works in the ORM already.
How can I index this virtual attribute with Thinking Sphinx? I'm able to index virtual attributes in my Django project, which uses Haystack backed by Elasticsearch; there I simply put a @property decorator on the model method. I figured I should be able to do this with Rails/Thinking Sphinx too, yet I get all sorts of bizarre SQL errors when trying to index. I've tried various constructions in setting up my index (e.g. has vs. indexes) and all result in some sort of SQL error while indexing.
Is this possible with Thinking Sphinx? If so, how can I index a virtual attribute?
You've made it clear that the value is not available as a column on the projects table, but is it on an associated model instead? If so, then you could refer to it via the association:
has user.is_eligible, :as => :is_user_eligible
However, if it's not a column but can be determined within the context of the SQL query, then you can use a SQL snippet as the attribute definition (I know my example is rather contrived, but it should give you some idea):
has "(users.foo = 'bar' || users.baz = 'qux')",
:as => :is_user_eligible,
:type => :boolean
If you're referring to associations that aren't used elsewhere in the index definition, you can force the references, or provide a SQL join statement:
join users
# or through more than one association:
join users.addresses
# or via your own custom join:
join "INNER JOIN users ON users.project_id = projects.id"
But if you cannot determine this value via SQL at all, then the only way to do this with Thinking Sphinx is to use real-time indices instead of SQL-backed indices. This means that in your index definitions you refer to methods instead of associations and columns. So, your attribute would become:
has is_user_eligible, :type => :boolean
The type must be specified: SQL-backed indices can guess attribute types from column types, but real-time indices don't have that reference point.
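To illustrate, a minimal real-time index definition along these lines might look as follows (the file path follows Thinking Sphinx v3 conventions, and I'm assuming Project has a name attribute):
# app/indices/project_index.rb
ThinkingSphinx::Index.define :project, :with => :real_time do
  indexes name
  has is_user_eligible, :type => :boolean
end
The model then needs the real-time callback so saves are reflected in Sphinx:
class Project < ActiveRecord::Base
  after_save ThinkingSphinx::RealTime.callback_for(:project)
end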
I realise the link to the real-time indices feature is a blog post I wrote over two years ago. However, the feature certainly works - I and others have been using it in production for quite some time (including with Flying Sphinx).
On the topic of has vs indexes: if you want to use the value as a filter or for sorting, then it must be an attribute, and thus you should use the has method. However, if it's textual data that you expect search queries to match on, then it should be a field, and thus use the indexes method.
Certainly I'd recommend switching to real-time indices anyway: it removes the need for deltas and you get up-to-date Sphinx records without needing to run 'ts:index' regularly (or at all - use ts:generate should your data end up in an out-of-date state). But make sure you switch all index definitions to real-time, instead of having some real-time and others SQL-backed.
When writing a where condition on an Entity Framework context object, is there a shorthand way of testing all the columns, sort of like this:
context.TableName.Where(t => t.AnyColumn.Contains(...))
Or am I forced to test each column individually?
There is no out-of-the-box way to do that, but you can write your own method which uses reflection to get the list of your model's properties and check each of them.
I don't know that this is possible; you may have to search each field individually. But why not search for a value in a specific column instead of searching the whole table? It reduces the room for error and makes for a quicker query.
I have a Rails 3.2.x app with MongoDB and Mongoid.
I want to have a 'scope' defined in my model that would sort my 'users' by name.
The catch is that the sort should be case-insensitive, which is not supported by MongoDB itself.
Is there a way to do it correctly using scopes?
This is a classic case calling for a persisted calculated field pattern.
Say the field you want to sort on is called name. In your model, trap assignments to name and set an additional field you can query. For example:
def name=(value)
  self['name'] = value
  self['_lc_name'] = value.downcase
end
You don't need to define the calculated field as a Mongoid field.
Create an index on _lc_name for faster sorting, using index in Mongoid.
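Putting the pieces together, a minimal sketch might look like this (the scope name is my own invention, and the index call uses Mongoid 3 syntax; _lc_name needs no field declaration as noted above, assuming dynamic fields are enabled):
class User
  include Mongoid::Document

  field :name, :type => String

  index({ :_lc_name => 1 })

  scope :ordered_by_name, -> { order_by(:_lc_name.asc) }

  def name=(value)
    self['name'] = value
    self['_lc_name'] = value.to_s.downcase  # to_s guards against nil
  end
end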
You have to use the MongoDB aggregation framework to achieve this.
In short, you build a pipeline of operations (much like pipes in Unix), and you can chain any number of operations in it.
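For example, a pipeline that projects a lowercased copy of name and sorts on it might look like this (collection access shown Moped/Mongoid 3 style; the field names are assumptions):
User.collection.aggregate([
  { '$project' => { 'name' => 1, 'lc_name' => { '$toLower' => '$name' } } },
  { '$sort'    => { 'lc_name' => 1 } }
])
Note that this returns raw hashes rather than User documents, so it doesn't slot into a scope; that is why the persisted calculated field above is usually the cleaner option.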
I have Project and Entry as models. Projects can have many entries, and entries belong to only a project. Entries have dates.
One reporting requirement is to show Projects that have Entries for a particular month. I have been successful in using scopes to achieve this, i.e. Project.with_entries.on(param_the_month).
The issue is that I now want to display the entries for that month only, grouped by projects.
If I do projects.each do |p| and then query for the entries (p.entries), the returned entries are for all months, not just the month I specified.
While this is an obvious result, is there a way in Rails to simply return the entries for that month using my original chained scope?
Edit: I did misunderstand :)
Take 2: You can merge scopes across models. So if you can create a where-type scope on Entry to select entries from a given month, you can then try something like
Project.with_entries.on(param_the_month).merge(Entry.on(param_the_month))
I've called it on by analogy with your scope on Project - without seeing your data model I can't say how exactly to implement it.
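For instance, assuming Entry has a date column, such a scope might look like this sketch:
class Entry < ActiveRecord::Base
  belongs_to :project

  # month can be any date within the month of interest
  scope :on, lambda { |month|
    where(:date => month.beginning_of_month..month.end_of_month)
  }
end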
has_many associations also accept scopes, so you can do p.entries.your_scope to filter them. The downside is that this would require another database query for every project, which might be slow depending on the size of your database.
An alternative that does not require extra queries would be to fetch the entries already filtered, and then go upward to get the projects:
entries = Entry.my_conditions.includes(:project)
entries_by_project = entries.group_by(&:project)
Now you have a hash whose keys are the projects, and the values are only the entries of that project that pass your conditions.
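For example, to walk the grouped results (assuming Project has a name and Entry has a date):
entries_by_project.each do |project, entries|
  puts project.name
  entries.each { |entry| puts entry.date }
end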
You can add includes to your scope; that way it will not query for those records again, but will eager-load them when you use the Project scope.
scope :my_scope, includes(:entries).where(:active => true)
Say I wanted to allow an administrative user to add a field to an ActiveRecord Model via an interface in the Rails app. I believe the normal ActiveRecord::Migration code would be adequate for modifying the AR Model's table structure (something that would not be wise for many applications - I know). Of course, only certain types of fields could be added...in theory.
Obviously, the forms that add (or edit) records for this newly modified ActiveRecord Model would need to be built dynamically at run-time. A common form_for approach won't do. This discussion suggests it can only be accomplished with JavaScript.
http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/fc0b55fd4b2438a5
I've used Ruby in the past to query an object for its available methods. I seem to remember it was insanely slow. I'm too green with Ruby and Rails to know an elegant way to approach this. I hope someone here may. I'm also open to entirely different approaches to this problem that don't involve modifying the database.
To access the columns which are currently defined for a model, use the columns method - it will give you, for each column, its name, type and other information (such as whether it is a primary key, etc.)
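For example (with Entry standing in for your model):
Entry.columns.each do |c|
  puts "#{c.name}: #{c.type}#{' (primary key)' if c.primary}"
end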
However, modifying the schema at runtime is delicate.
The schema is pre-loaded (and cached, from the DB driver) by each model class when it is first loaded. In production mode, Rails only does this once per model, around startup.
In order to force Rails to refresh its cached schema following your modification, you should force Ruby to reload the affected model's class (pretty much what Rails does for you automatically, after each request, when running in development mode - see how to reload a class using remove_const followed by load.)
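As a rough sketch of that sequence (Entry standing in for the affected model, and RAILS_ROOT assuming an older Rails):
ActiveRecord::Migration.add_column :entries, :col1, :string

Object.send(:remove_const, :Entry)                 # forget the cached class
load File.join(RAILS_ROOT, 'app/models/entry.rb')  # re-evaluate the model file
Entry.columns.map(&:name)                          # now includes col1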
If you have a Mongrel cluster, you also have to inform the other processes in the cluster, which run in their own separate memory space, to also reload their model's classes (some clusters will allow you to create a 'restart.txt' file, which will cause an automatic soft-restart of all processes in your cluster with no additional work required on your behalf.)
Now, that having been said, depending on the actual problem you need to solve, you may not need to dynamically alter the schema after all. Instead of adding, say, columns col1, col2 and col3 to some table entries (model Entry), you can use a table called dyn_attribs, where Entry has_many :dyn_attribs, and where dyn_attribs has both a key column (which in this case can have values col1, col2 or col3) and a value column (which lists the corresponding values for col1, col2 etc.)
Thus, instead of:
my_entry = Entry.find(123)
col1 = my_entry.col1
# do something with col1
you would use:
my_entry = Entry.find(123, :include => :dyn_attribs)
dyn_attribs = my_entry.dyn_attribs.inject(HashWithIndifferentAccess.new) do |s, a|
  s[a.key] = a.value
  s
end
col1 = dyn_attribs[:col1]
# do something with col1
The above inject call can be factored away into the model, or even into a base class inherited from by all models that may require additional, dynamic columns/attributes (see Polymorphic associations on how to make several models share the same dyn_attribs table for dynamic attributes.)
UPDATE
Adding or renaming a column via a regular HTML form.
Assume that you have a DynAttrTable model representing a table with dynamic attributes, as well as a DynAttrDef defining the dynamic attribute names for a given table.
Run:
script/generate scaffold_resource DynAttrTable name:string
script/generate scaffold_resource DynAttrDef name:string dyn_attr_table_id:integer
rake db:migrate
Then edit the generated models:
class DynAttrTable < ActiveRecord::Base
  has_many :dyn_attr_defs
end

class DynAttrDef < ActiveRecord::Base
  belongs_to :dyn_attr_table
end
You may continue to edit the controllers and the views like in this tutorial, replacing Recipe with DynAttrTable, and Ingredient with DynAttrDef.
Alternatively, use one of the plugins reviewed here to automatically put the dyn_attr_tables and dyn_attr_defs tables under management by an automated interface (with all its bells and whistles), with virtually zero implementation effort on your behalf.
This should get you going.
Say I wanted to allow an administrative user to add a field to an ActiveRecord Model via an interface in the Rails app.
I've solved this sort of problem before by having an extra model called AdminAdditions. The table includes an id, an admin user id, a model name string, a type string, and a default value string.
I override the model's find and save methods to add attributes from its admin_additions, and save them appropriately when changed. The model table has a large text field, initially empty, where I save nondefault values of the added attributes.
Essentially the views and controllers can pretend that every attribute of the model has its own column. This means form_for and so on all work.
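A rough sketch of the pattern, with all names hypothetical (and defining accessors up front rather than overriding find and save, which ties in with the class-reloading discussion above; real code would also cast values per the type string):
class AdminAddition < ActiveRecord::Base
  # columns: admin_user_id, model_name, name, type_name, default_value
end

class Product < ActiveRecord::Base
  serialize :added_values, Hash  # the large text column, initially empty

  # a reader and writer per admin-added attribute, so form_for and
  # friends see them as ordinary model attributes
  AdminAddition.where(:model_name => 'Product').each do |addition|
    define_method(addition.name) do
      (added_values || {}).fetch(addition.name, addition.default_value)
    end
    define_method("#{addition.name}=") do |value|
      self.added_values = (added_values || {}).merge(addition.name => value)
    end
  end
end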
ActiveRecord::Migration.add_column(:users, :email, :string)
You could use Flex Attributes for this, though if you want to be able to search or order by these new columns you'll have to write (a lot of) custom SQL.
I have seen the dynamic alteration/migration of tables offered as a solution many times but I have never actually seen it implemented. There are many reasons why this solution is rarely implemented.
If the table is large, then it may (or will) be locked for extended periods of what is supposed to be up-time.
Why is your model changing dynamically? It is quite rare for a model's structure to need to change dynamically. It is more often an indication that you are trying to model something specific in a generalised way.
This is often an attempt at producing a "categorised" model that could be better solved by another approach.
DDL statements are often not permitted for the same database user that handles day-to-day DML requirements. Whilst they can be, and often are in the RoR arena, it is not always the "right" way to do it.
What are you trying to achieve here? A better understanding of the problem would probably reveal a more natural solution.
If you were doing this with PostgreSQL now, you could probably get away with a JSON-type field and just store whatever you need in the JSON hash.
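Sketching that out (assuming a Rails version with the native Postgres json column type; the column and model names are hypothetical):
class AddPropertiesToEntries < ActiveRecord::Migration
  def change
    add_column :entries, :properties, :json, :default => {}
  end
end

entry = Entry.find(123)
entry.properties = entry.properties.merge('col1' => 'some value')  # a new "field", no migration
entry.save!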