Retrieving x number of random objects - ruby-on-rails

I'm just trying to load 5 random objects in a rails controller
Thing.all(:limit => 5, :order => "RANDOM()")
Is that the least expensive way to do it?

Short answer: no.
What you have asked the db to do is: go order the entire thing table in a random order... then grab me five of them. If your thing table has a lot of rows... that's a very expensive operation.
A better option (if the ids are auto-increment and thus likely contiguous) is to generate a set of random ids within the id range of your thing table and fetch those individual things by id.
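A minimal sketch of that approach (not the poster's exact code; it assumes integer auto-increment ids, and retries because deleted rows leave gaps in the id range):
max_id = Thing.maximum(:id)
picked = {}
while picked.size < 5
  id = rand(max_id) + 1              # random id within the table's id range
  thing = Thing.find_by_id(id)       # returns nil if that id has been deleted
  picked[thing.id] = thing if thing  # keyed by id to avoid duplicates
end
things = picked.values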

Another concise option for small tables (note that Thing.all loads every record into memory before sampling):
Thing.all.sample(5)

Related

Merging without rewriting one table

I'm wondering about something that doesn't seem efficient to me.
I have 2 tables, one very large table DATA (millions of rows and hundreds of cols), with an id as primary key.
I then have another table, NEW_COL, with a variable number of rows (1 to millions) but always 2 columns: id and new_col_name.
I want to update the first table, adding the new column to it.
Of course, I know how to do it with a PROC SQL/left join, or a DATA step/merge.
Yet it seems inefficient: as far as I can tell from execution times (which may be wrong), both of these approaches rewrite the huge table completely, even when NEW_COL contains only 1 row (almost 1 min).
I tried doing it in two SQL statements, ALTER TABLE ADD COLUMN followed by UPDATE, but it's way too slow, as an UPDATE with a join doesn't seem efficient at all.
So, is there an efficient way to "add a column" to an existing table WITHOUT rewriting the huge table?
Thanks!
SAS datasets are row stores, not columnar stores like tables in some other databases. As such, adding rows is far easier and more efficient than adding columns. A key-joined view could be argued to be the most 'efficient' way to add a column to a data rectangle.
If you are adding columns so often that the 1 min resource cost is a problem, you may need to upgrade hardware with faster drives, a less contentious operating environment, or more memory and SASFILE if the new columns are often temporary in nature.
@Richard's answer is perfect. If you are adding columns on a regular basis then there is a problem with your design; you need to give more details on what you are doing so someone can suggest an alternative.
I would try a hash join; you can easily find code for a simple hash join. This is an efficient way of joining because in your case you have one large table and one small table: if the small table fits into memory, it is much better than a left join. I have done various joins this way and query run times were considerably lower (on the order of 10x).
With the ALTER TABLE approach you are rewriting the table, and it also takes a lock on the table, so nobody else can use it.
You should perform these joins when the workload is lighter, i.e. outside office hours; you may need to schedule the jobs at night, when more SAS resources are available.
Thanks for your answers guys.
To add some information: I don't have any constraints about table locking, load balancing or anything, as it's a "project tool" script I use.
The goal, in a 'starting point data generator' data-prep step, is to recompute an already existing column, or add a new one (less often but still quite regularly). I just don't want to "lose" time waiting for the whole table to be rewritten when I only need to update one column for specific rows.
When I monitor the server, the computation of the data and the joining step are very fast. But when I want to update only 1 row, I see the whole table being rewritten. Seems a waste of resources to me.
But it seems it's a mandatory step, so I can't do much about it.
Too bad.

Randomize Selections in a List of 100

This is a follow-up to this last question I asked: Sort Users by Number of Followers. That code is:
#ordered_users = User.all.sort{|a,b| b.followers.count <=> a.followers.count}
What I hope to accomplish is take the ordered users and get the top 100 of those and then randomly choose 5 out of that 100. Is there a way to accomplish this?
Thanks.
users_in_descending_order_of_followers = User.all.sort_by { |u| -u.followers.count }
sample_of_top = users_in_descending_order_of_followers.take(100).sample(5)
You can use sort_by which can be easier to use than sort, and combine take and sample to get the top 100 users and sample 5 of those users.
User.all.sort can "potentially" pose some problems in the long run, depending on the total number of users and the availability of resources, particularly memory. It is also a lot slower because you call .followers.count twice inside the sort block, which issues on the order of 2xN extra DB queries, N being the number of users. This is because User.all.sort immediately executes the User.all query, fetching all User records into memory, as opposed to your usual User.all, which is lazily loaded until you use .each (or better yet .find_each) somewhere down the line.
I suggest something like the below (I extended Deekshith's answer, referring to the other question you linked):
User.joins(:followers).group('users.id').order('count(followers.user_id) desc').limit(100).sample(5)
.joins, .group, .order, and .limit above all build up a single SQL query; that one query is executed, and .sample(5) (no longer SQL, just a plain Ruby method at this point) then picks 5 of the 100 returned records, yielding the result you need.
I would strongly consider using a counter cache on the User model, to hold the count of followers.
This would give a very small performance impact on adding or removing followers, and greatly increase performance when performing sorts:
User.order(followers_count: :desc)
This would be particularly noticeable if you wanted the top-n users by follower count, or finding users with no followers.
User.order(followers_count: :desc).limit(100).sample(5)
This method will out-perform others using count(*). Add an index on followers_count for best effect.
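A minimal sketch of that counter cache setup (the Follower model and column names are illustrative assumptions, not the poster's actual schema):
class AddFollowersCountToUsers < ActiveRecord::Migration
  def change
    add_column :users, :followers_count, :integer, default: 0, null: false
    add_index  :users, :followers_count
  end
end

class Follower < ActiveRecord::Base
  belongs_to :user, counter_cache: true  # keeps users.followers_count up to date on create/destroy
end
With that in place, User.order(followers_count: :desc) never has to touch the followers table at all.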

What is the difference between these two statements, and why would you choose them?

I'm a beginner at rails. And I've come to understand two different ways to return the same result.
What is the difference between these two? And what situation would require you to choose one from the other?
Example 1:
Object.find(:all).select {|c| c.name == "Foobar" }.size
Example 2:
Object.count(:conditions => ['name = ?', 'Foobar'])
FURTHER NOTE:
I seriously wish I could mark everyone's answer as correct for this one. Thank you so much. I just had a serious Rails affirmation.
Object.count always hits the DB, while a find(...)....size() call can avoid a query when the records are already loaded. There's a good discussion here:
http://rhnh.net/2007/09/26/counting-activerecord-associations-count-size-or-length
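A quick illustration of that difference, using the question's placeholder Object model (a sketch of the general behaviour, not code from the linked article):
objects = Object.find(:all)    # one query, loads every record into memory
objects.size                   # plain Array#size on what is already loaded - no query
Object.count                   # SELECT COUNT(*) FROM objects - always hits the DB
Object.count(:conditions => ['name = ?', 'Foobar'])   # count with a WHERE, still done in SQL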
Example 1:
This constructs a query:
SELECT * FROM objects
then turns all the records into a collection of objects in memory, iterates through every object to see if it meets the condition, and finally counts the number of elements that meet the condition.
Example 2:
This constructs a query:
SELECT count(id) FROM objects WHERE name = 'Foobar'
lets SQL do all the hard work and returns just an integer: the number of objects meeting the condition.
Usually you want option 2 - it's faster and uses less memory.
Example 1 will load all of your records from the DB (assuming Object is an ActiveRecord model), then use Ruby to reduce the set, and then return the size of that array. So this is potentially memory- and CPU-heavy - not good.
Example 2 performs the count in SQL, so all the heavy lifting is performed in the database, not in Ruby. Much better :)
In example 1, you are getting all objects from the datastore and then iterating over all of them, selecting the objects that have the name Foobar. Then you get the size of that array. Example 1 is the clear loser here.
Example 1 sql:
select * from whatever
# then iterate over entire array
Example two executes a where clause in SQL to the datastore.
select count(id) from whatever where name = 'foobar'
# The SQL above is accurate for SQL Server, but not necessarily for MySQL or SQLite3
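For what it's worth, in the Rails 3+ query interface the same count-in-SQL approach from example 2 would be written as follows (a sketch, not part of the original answers):
Object.where(name: 'Foobar').count
# => SELECT COUNT(*) FROM objects WHERE name = 'Foobar'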

Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

On Ruby on Rails, say, if the Actor model object is Tom Hanks, and the "has_many" fans is 20,000 Fan objects, then
actor.fans
gives an Array with 20,000 elements. Probably the elements are not pre-populated with values? Otherwise, fetching all 20,000 Fan objects from the DB could be extremely time consuming.
So it is on a "need to know" basis?
So does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it won't be able to optimize performance by doing sequential reads, which can be faster on the hard disk because those records could be in nearby sectors / platter layers. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements. But how will RoR know whether I am touching only a few random elements or all elements in random order?
Why would you want to fetch 20,000 records if you only use 2 of them? Then fetch only those two from the DB. If you want to list the fans, then you will probably use pagination - i.e. use limit and offset in your query, or some pagination gem like will_paginate.
I see no logical reason why you should go the way you are trying to. Explain a real situation so we can help you.
However, there is one thing you need to know when loading many associated objects from the DB - use :include, like
Actor.all(:include => :fans)
this will eager-load all the fans, so there will be only 2 queries instead of N+1, where N is the number of actors.
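To make that difference concrete, a small sketch using the question's models (illustrative only):
# Without :include - 1 query for the actors plus 1 count query per actor (N+1)
Actor.all.each { |actor| puts actor.fans.size }

# With :include - 2 queries total: one for the actors, one for all their fans
Actor.all(:include => :fans).each { |actor| puts actor.fans.size }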
Look at the SQL which is spewed out by the server in development mode, and that will tell you how many fan records are being loaded. In this case actor.fans will indeed cause them all to be loaded, which is probably not what you want.
You have several options:
Use a paginator as suggested by Tadas;
Set up another association with the fans table that pulls in just the ones you're interested in. This can be done either with a :conditions option on the has_many declaration, e.g.
has_many :fans, :conditions => "country_of_residence = 'UK'"
Specifying the full SQL to narrow down the rows returned with the :finder_sql option
Specifying the :limit option, which will, well, limit the number of rows returned.
All depends on what you want to do.
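A rough sketch of options 2 and 4 (Rails 2-era syntax to match the question; the extra association names and the country column are made up for illustration):
class Actor < ActiveRecord::Base
  has_many :fans
  # Option 2: a narrower association filtered with :conditions
  has_many :uk_fans, :class_name => 'Fan',
           :conditions => { :country_of_residence => 'UK' }
  # Option 4: cap how many rows the association will ever load
  has_many :recent_fans, :class_name => 'Fan',
           :order => 'created_at DESC', :limit => 10
end

actor.uk_fans       # fetches only the matching rows
actor.recent_fans   # fetches at most 10 rows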

The order of elements in an IQueryable varies every time I query with the same condition

I am working on an ASP.NET MVC application where we have to write our own code for paging.
And I see elements getting repeated on different pages. This is happening because the order of elements in the IQueryable varies randomly, and one has to query the database for each page.
I solved this problem by ordering the elements by creation date and then by the order I wanted. But it feels heavy. Is there a better way to solve this problem?
If you fetch rows from a database via IQueryable and order on one column, and there are rows that have the same value in that column, the order of those rows may vary. You need to do as you do now, that is, order by two columns: first the one you are really interested in and then a second one (like date). There should not be that big a performance hit, since the sorting is done in the database.
What you need to do is specify the ordering like this:
Context.Products
.Where(p => p.ProductID > 100)
.OrderBy(p => p.CategoryID)
.ThenBy(p => p.Date).ToList();
Notice the ThenBy, that will generate the correct SQL.
No, that is the correct way. A database might give back rows in a non-deterministic order if you do not tell it to order by something.
Have a look at: http://srtsolutions.com/blogs/billwagner/archive/2006/10/14/paging-output-with-linq2sql.aspx of Bill Wagner.
He uses Take() and Skip() to create paged output.

Resources