This is the first time I'm modelling a hierarchy within the same model (product categories).
I found a great post on this topic. Since I use Rails 4 & Postgres, which according to the article supports recursive querying (this is the first time I've heard this term), the "Adjacency List With Recursive Query" approach seems to be the way to go because it's both easy to model and fast to query.
The article suggests the acts_as_sane_tree gem, which supports recursive querying. This repo hasn't been updated for two years and I'm not sure whether it supports Rails 4. The project is a fork of the acts_as_tree gem, which supports Rails 4 and is well maintained.
Which gem should I use? And does the acts_as_tree gem support recursive querying to avoid expensive queries?
If you are in doubt about which gem to use, I always suggest taking a look at the Ruby Toolbox. It helps you evaluate whether a gem project is still active, how many developers use it, and a lot more. Why should you do that? You do not want to choose a gem that is no longer maintained. You want to use the tools the community uses and stay as close to the mainstream as possible. If you do not follow the community, you will run into problems when you need a bug fixed, need further documentation, or want to update your Rails version.
In this case, for nesting ActiveRecord models, awesome_nested_set and ancestry are good candidates. I would not choose the recursive-query implementation, because most databases do not support it. Unless you have a very good reason, it is not worth binding your app to a specific database management system.
Have you considered the ancestry gem?
"It exposes all the standard tree structure relations (ancestors, parent, root, children, siblings, descendants) and all of them can be fetched in a single SQL query."
I'd agree with the accepted answer on one point - it's good to go for gems that are well maintained.
On two points I disagree:
Firstly, just because a gem is popular doesn't make it the right choice, or even a good choice. Take the ancestry gem as an example. It's been around a long time and is popular, but it requires you to add a special column to your tables which it fills with magic voodoo (I'm very uncomfortable with that sort of thing). A gem like acts_as_recursive_tree, on the other hand, does all the same things as ancestry, also using single queries, but it only requires a parent_id column that holds the ID of the parent - probably what you already have before even hunting for a gem.
Another example - there was a gem for linking uploaded files to records. I chose to use it because it seemed the popular choice. But I ditched it as soon as I discovered it was actually modelling a many-to-many relationship, not with a joining table, but by putting a comma-separated list of IDs into a single field (can you believe it?).
Secondly, if the database you have chosen has cool features like recursive queries, then by all means use them - that's part of the reason you chose the superior database in the first place, isn't it? Unless you need your application to be database-agnostic, don't be scared of using the features your database provides. Guarding against the very unlikely possibility that sometime in the future you'll want to switch to a database with fewer features than your current one is certainly not worth the cost of avoiding the more powerful gems that use those features.
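To make the recursive-query point concrete, here is a minimal sketch of what it buys you with nothing more than a parent_id column: fetching an entire subtree in one round trip, using a Postgres WITH RECURSIVE query issued through ActiveRecord (table and column names are illustrative):

    class Category < ActiveRecord::Base
      # All descendants of this node, fetched in a single query.
      def descendants
        Category.find_by_sql([<<-SQL, id])
          WITH RECURSIVE subtree AS (
            SELECT * FROM categories WHERE parent_id = ?
            UNION ALL
            SELECT c.* FROM categories c
            INNER JOIN subtree s ON c.parent_id = s.id
          )
          SELECT * FROM subtree
        SQL
      end
    end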
Anyway, my recommendation is acts_as_recursive_tree. It's very easy to use, powerful, and actively maintained.
I'm extending an existing Rails app, and I have to add multi-tenant support to it. I've done some reading, and seeing how this app is going to be hosted on Heroku, I thought I could take advantage of Postgres' multi-schema functionality.
I've read that there seem to be some performance issues with backups when multiple schemas are in use, but the information I found felt a bit outdated. Does anyone know if this is still the case?
Also, are there any other performance issues, or caveats I should take into consideration?
I've already thought about adding a field to every table so I can use a single schema, and having that field reference the tenants table, but given the time window, multiple schemas seem like the best solution.
I use postgres schemas for a multi-tenancy site based on some work by Ryan Bigg and the Apartment gem.
https://leanpub.com/multi-tenancy-rails
https://github.com/influitive/apartment
I find that having separate schemas for each client is an elegant solution which provides a higher degree of data segregation. Personally, I find that performance improves, because Postgres can simply return all results from a table without having to filter on an 'owner_id'.
I also think it makes for simpler migrations and allows you to adjust individual customer data without making global changes. For example, you can add columns to specific customers' schemas and use feature flags to enable custom features.
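For reference, a minimal sketch of what the Apartment setup looks like (based on the gem's README at the time of writing; the Tenant model and schema names are placeholders):

    # config/initializers/apartment.rb
    Apartment.configure do |config|
      config.excluded_models = ["Tenant"]                       # stays in the public schema
      config.tenant_names    = lambda { Tenant.pluck(:subdomain) }
    end

    Apartment::Tenant.create("acme")     # creates the "acme" schema and migrates it
    Apartment::Tenant.switch!("acme")    # subsequent queries run against that schema
    Apartment::Tenant.switch("acme") do  # block form switches back afterwards
      Product.count
    end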
My main argument relating to performance would be that backup is a periodic process, whereas customer table scoping would be on every access. On that basis, I would take any performance hit on backup over slowing down the customer experience.
Sorry if the question is awkwardly phrased -- still a Mongo/Mongoid/Rails newbie here.
What I'm trying to ask:
In my development, I've been changing around the design of my Models as I go, adding some fields here, removing some fields there (one of the great things about MongoDB/Mongoid is that you can do this very quickly and easily!)
Everything is working, but in browsing through the development database, I've got some "detritus" -- documents with the old fields (and data) that aren't being used. It's no big deal other than to my garbage-collective sensibilities. I could, in theory, drop the DB and start from scratch, but that's messy.
Is there a utility / gem / etc. that will, essentially, look at the current document design and drop any fields in the live DB that don't match up to the data model?
I know this can be done manually, and I know about the mongoid migrations gems that are out there -- those are both good and, ultimately, more thorough solutions (which I'll look at).
For now, though, I'm wondering if there's a simple "quick shot" type of utility to simply sync up the DB and drop any fields that aren't explicitly specified in my models.
Thanks!
I don't know of any tools that do this for you. It's not a common ask because most people see this flexibility as a useful feature.
For your development database, you should probably clear it out and start over.
In production you have a couple choices:
Write your code to be robust against fields being missing and the database documents not matching your mongoid model. In other words, be prepared for the two to get out of sync.
Get in the habit of migrating your data every time you change the model. It's a bit of a hassle and not strictly necessary if you follow the first approach, but if the untidiness bothers you, this is a fine idea. See Managing mongoid migrations for strategies.
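If you do want a one-off cleanup pass rather than a full migration setup, something along these lines should work. This is only a sketch (untested, assumes Mongoid on the current mongo driver, and only samples a batch of documents to discover stored keys), so back up your data first:

    # Unset any stored keys that are no longer declared as fields on the model.
    def prune_undeclared_fields(model)
      declared = model.fields.keys                                    # includes "_id"
      stored   = model.collection.find.limit(1000).flat_map(&:keys).uniq
      extras   = stored - declared
      return if extras.empty?

      unset = extras.each_with_object({}) { |field, hash| hash[field] = "" }
      model.collection.update_many({}, "$unset" => unset)
    end

    prune_undeclared_fields(User)   # User is just an example model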
Is there a Rails 3 model versioning plugin that supports point-in-time queries (not just recovery) in an SQL database?
To be concrete: I have a table, documents. I want to be able to say, "as of 9/17/2010, which documents contained the text 'foo'?". This requirement seems to rule out all of the single-table versioning solutions like vestal_versions, and none of the other ones seem to have this feature either. All of the plugins I've looked at are documented as black boxes, so perhaps they store enough data internally to do this sort of query but you would never know it from the docs. In terms of Slowly Changing Dimensions, Type 2 is probably the sort of solution I'm looking for, although ultimately I'll use whatever works.
I also need to keep track of which user made changes, although that's probably possible to do outside of the versioning system too.
Is there one that I'm missing? Or am I using the wrong search term? Or do I get to roll my own?
I think you get to roll your own. If you really need to be able to perform queries on the versions, then the versions should probably be their own model, and making a versioning gem bend to fit your needs will be more painful than doing it yourself.
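If you do roll your own, a rough sketch of the Type 2 approach mentioned in the question: keep the versions in their own table with valid_from/valid_to timestamps and query it with a point-in-time scope (all names here are invented for illustration):

    class DocumentVersion < ActiveRecord::Base
      belongs_to :document
      belongs_to :author, :class_name => "User"   # tracks who made the change

      # Versions that were in effect at the given instant.
      scope :as_of, lambda { |time|
        where("valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)", time, time)
      }
    end

    # "As of 9/17/2010, which documents contained the text 'foo'?"
    DocumentVersion.as_of(Date.new(2010, 9, 17)).where("body LIKE ?", "%foo%")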
Sorry, and have fun :/
I'm considering using Sequel for some of my hairier SQL that I find too hard to craft in Active Record.
Are there any things I need to be aware of when using Sequel and ActiveRecord on the same project? (Besides the obvious ones like no AR validations in sequel etc...)
Disclaimer: I'm the Sequel maintainer.
Sequel is easy to use alongside or instead of ActiveRecord when using Rails. You do have to set up the database connection manually, but other than that, the usage is similar. Your Sequel model files go in app/models and work similarly to ActiveRecord models.
Setting up the database connection isn't tedious: it's generally one line in environment.rb to require Sequel, and a line in each environment file (development.rb, test.rb, production.rb) to do something like:
DB = Sequel.connect(...)
So it's only tedious if you consider 4 lines of setup code tedious.
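Concretely, the setup looks something like this (the connection URL is a placeholder):

    # config/environment.rb (or an initializer)
    require "sequel"

    # config/environments/development.rb -- one connect line per environment
    DB = Sequel.connect("postgres://localhost/myapp_development")

    # app/models/artist.rb -- Sequel::Model picks up the connection automatically
    class Artist < Sequel::Model
    end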
Using raw SQL generally isn't a problem unless you are targeting multiple databases. The main reason to avoid it is the increased verbosity. Sequel supports using raw SQL at least as easily as ActiveRecord, but the times where you need to use raw SQL are generally fairly rare in Sequel.
BTW, Sequel ships with multiple validation plugins. The validation_class_methods plugin is similar to ActiveRecord validations, using class methods. The validation_helpers plugin has a simpler implementation using instance level methods, but both can do roughly the same thing.
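For example, with the validation_helpers plugin (per Sequel's documentation):

    class Artist < Sequel::Model
      plugin :validation_helpers

      def validate
        super
        validates_presence [:name]
        validates_unique :name
      end
    end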
Finally, I'll say that if you already have working ActiveRecord code that does what you want, it's probably not worth the effort to port the code to Sequel unless you plan on adding features.
Personally, I wouldn't do it. Just managing connections more or less by hand would be tedious, for a start. If I felt Sequel was the stronger option, I'd be more inclined to hold off for Rails 3.0 (or perhaps start developing against Edge Rails), where it should be fairly easy to switch ORMs if Yehuda and co. are doing their stuff right. A lot more Merb-like than now, at least.
This was DHH's take on the subject (I'm not saying it should be taken as gospel truth, mind, but it is, so to speak, from the horse's mouth):
But Isn't SQL Dirty?

Ever since programmers started to layer object-oriented systems on top of relational databases, they've struggled with the question of how deep to run the abstraction. Some object-relational mappers seek to eradicate the use of SQL entirely, striving for object-oriented purity by forcing all queries through another OO layer.

Active Record does not. It was built upon the notion that SQL is neither dirty nor bad, just verbose in the trivial cases. The focus is on removing the need to deal with the verbosity in those trivial cases but keeping the expressiveness around for hard queries – the type SQL was created to deal with elegantly.

Therefore, you shouldn't feel guilty when you use find_by_sql() to handle either performance bottlenecks or hard queries. Start out using the object-oriented interface for productivity and pleasure, and then dip beneath the surface for a close-to-the-metal experience when you need to.
(The quote was found here; the original text is on p. 334 of AWDwR, the "hammock" book.)
I think that's reasonable.
Are we talking about something that find_by_sql can't handle? Or are we talking about complex non-SELECT stuff that execute can't deal with?
Any examples we could look at?
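For reference, the two escape hatches mentioned above look roughly like this (table and column names are only examples):

    # Hairy SELECTs: find_by_sql returns instantiated model objects.
    Report.find_by_sql(["SELECT * FROM reports WHERE created_at > ?", 1.week.ago])

    # Anything that doesn't map back to a model (or isn't a SELECT at all):
    ActiveRecord::Base.connection.execute("UPDATE reports SET archived = true WHERE created_at < '2009-01-01'")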
Our company loves reports that calculate obscure metrics--metrics that cannot be calculated with ActiveRecord's finders (except find_by_sql) and where ruport's ruby-based capabilities are just too slow.
Is there a plugin or gem or db adapter out there that will do large calculations in the database layer? What's your solution to creating intricate reports?
Although not database-agnostic, our solution is PL/pgSQL functions for the cases where Ruby and ActiveRecord become really slow.
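As a sketch of what that can look like (the function, table, and column names are invented for illustration, and it assumes orders.amount is numeric), you can define the function in a migration and call it from the report code:

    # Inside a migration's up/change method:
    execute <<-SQL
      CREATE OR REPLACE FUNCTION revenue_per_customer(since date)
      RETURNS TABLE(customer_id integer, total numeric) AS $$
      BEGIN
        RETURN QUERY
          SELECT o.customer_id, SUM(o.amount)
          FROM orders o
          WHERE o.ordered_at >= since
          GROUP BY o.customer_id;
      END;
      $$ LANGUAGE plpgsql;
    SQL

    # In the report code:
    rows = ActiveRecord::Base.connection.select_all(
      "SELECT * FROM revenue_per_customer(DATE '2009-01-01')"
    )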
Thoughtbot's Squirrel plugin adds a lot of Ruby-ish functionality to ActiveRecord's find method, with multi-layered conditionals, ranges, and nested model associations:
www.thoughtbot.com/projects/squirrel/
Is there anything inherent about your reports that prevents the use of an SQL view or stored procedure?
A technique I've found useful in one particular project is to create your SQL query (which may be quite complex) as a named view in the database, and then use
YourModel.connection.select_all(query)
to pull back the data. It's not an optimal approach; I'm keen to explore improvements to it.
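In practice that looks roughly like this (the view and columns are illustrative):

    # Inside a migration -- create the named view once:
    execute <<-SQL
      CREATE VIEW monthly_sales AS
        SELECT date_trunc('month', ordered_at) AS month, SUM(amount) AS total
        FROM orders
        GROUP BY 1
    SQL

    # In the report code:
    rows = YourModel.connection.select_all("SELECT * FROM monthly_sales ORDER BY month")
    rows.each { |row| puts "#{row['month']}: #{row['total']}" }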
Unfortunately, as you suggested, support for computing complex database-backed reports within Rails seems fairly limited.
It sounds as if your tables may be heavily normalized. At one place I worked, the amount of normalization we did was impacting our reporting needs, so we created some shadow tables that contained a bunch of the aggregate data and did reporting against those.
I agree with Neil N's comment that the question is a little vague, but perhaps this gets you moving in the right direction?
You might want to look at using DataMapper or Sequel for your ORM if you're finding that ActiveRecord lacks the expressiveness you need for complex queries. Switching away from ActiveRecord wouldn't be a decision to take lightly, but it might be worth investigating at least.