Postgresql error with Rails 3 using order("RANDOM()") - ruby-on-rails

I'm trying to query my db for records that are similar to the currently viewed record (based on taggings), which I have working, but I would like to randomize the order.
My development environment is MySQL, so I would do something like:
@tattoos = Tattoo.tagged_with(tags, :any => true).order("RAND()").limit(6)
which works, but my production environment is Heroku, which uses PostgreSQL, so I tried this:
@tattoos = Tattoo.tagged_with(tags, :any => true).order("RANDOM()").limit(6)
but I get the following error:
ActionView::Template::Error (PGError: ERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list
SELECT DISTINCT tattoos.* FROM "tattoos" JOIN taggings
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477 ON
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.taggable_id = tattoos.id AND
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.taggable_type = 'Tattoo' WHERE
(tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 3 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 4 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 5 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 24 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 205) ORDER BY RANDOM() LIMIT 6):

After analyzing the query more closely, I have to correct my first draft. The query would require a DISTINCT or GROUP BY the way it is.
The (possibly) duplicate tattoos.* come from first joining to (possibly) multiple rows in the table taggings. Your query engine then tries to get rid of such duplicates again by using DISTINCT - in a syntactically illegal way.
DISTINCT basically sorts the resulting rows by the resulting columns from left to right and picks the first for each set of duplicates. That's why the leftmost ORDER BY columns have to appear in the SELECT list.
MySQL is more permissive and allows the non-standard use of DISTINCT, but PostgreSQL throws an error.
ORMs often produce ineffective SQL statements (they are just crutches after all). However, if you use appropriate PostgreSQL libraries, such an illegal statement shouldn't be produced to begin with. I am no Ruby expert, but something's fishy here.
The query is also very ugly and inefficient.
There are several ways to fix it. For instance:
SELECT *
FROM (<query without ORDER BY and LIMIT>) x
ORDER BY RANDOM()
LIMIT 6
Or, better yet, rewrite the query with this faster, cleaner alternative doing the same:
SELECT ta.*
FROM tattoos ta
WHERE EXISTS (
SELECT 1
FROM taggings t
WHERE t.taggable_id = ta.id
AND t.taggable_type = 'Tattoo'
AND t.tag_id IN (3, 4, 5, 24, 205)
)
ORDER BY RANDOM()
LIMIT 6;
You'll have to implement it in Ruby yourself.
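If you'd rather not drop to raw SQL entirely, here is a minimal ActiveRecord sketch of the same EXISTS approach (assuming acts-as-taggable-on's taggings table; the tag_ids array is illustrative, taken from the error above):
# Hedged sketch: push the tag filter into an EXISTS subquery so no join
# (and therefore no DISTINCT) is needed, then ORDER BY RANDOM() is legal.
tag_ids = [3, 4, 5, 24, 205]  # illustrative; your tags would supply these
@tattoos = Tattoo.where(
  "EXISTS (SELECT 1 FROM taggings t
            WHERE t.taggable_id   = tattoos.id
              AND t.taggable_type = 'Tattoo'
              AND t.tag_id IN (?))", tag_ids
).order("RANDOM()").limit(6)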

Not sure about the random, as it should work.
But take note of http://railsforum.com/viewtopic.php?id=36581,
which has code that might suit you:
/lib/agnostic_random.rb
module AgnosticRandom
  def random
    case DB_ADAPTER
    when "mysql"      then "RAND()"
    when "postgresql" then "RANDOM()"
    end
  end
end
/initializers/extend_ar.rb (name doesn't matter)
ActiveRecord::Base.extend AgnosticRandom
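Usage would then look something like this (a sketch; it assumes the DB_ADAPTER constant from the forum snippet is defined somewhere, e.g. an initializer reading the adapter name from database.yml):
# Assumption, not part of the forum snippet:
# DB_ADAPTER = Rails.configuration.database_configuration[Rails.env]["adapter"]
@tattoos = Tattoo.tagged_with(tags, :any => true).order(Tattoo.random).limit(6)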

Related

Is there anyway to make a lesser impact on my database with this request?

For the analytics of my site, I'm required to extract the 4 states of my users.
@members = list.members.where(enterprise_registration_id: registration.id)
# This pulls roughly 10,000 records... which is evidently a huge data pull for Rails
# Member Load (155.5ms)
@invited = @members.where("user_id is null")
# Member Load (21.6ms)
@not_started = @members.where("enterprise_members.id not in (select enterprise_member_id from quizzes where quizzes.section_id IN (?)) AND enterprise_members.user_id in (select id from users)", @sections.map(&:id) )
# Member Load (82.9ms)
@in_progress = @members.joins(:quizzes).where('quizzes.section_id IN (?) and (quizzes.completed is null or quizzes.completed = ?)', @sections.map(&:id), false).group("enterprise_members.id HAVING count(quizzes.id) > 0")
# Member Load (28.5ms)
@completes = Quiz.where(enterprise_member_id: registration.members, section_id: @sections.map(&:id)).completed
# Quiz Load (138.9ms)
The operation returns a 503, meaning my app gives up on the request. Any ideas how I can refactor this code to run faster? Maybe with better join syntax? I'm curious how sites with larger datasets accomplish what seem like such trivial DB calls.
The answer is your indexes. Check your Rails logs (or the console in development mode) and copy the queries into your DB tool. Slap an EXPLAIN in front of the query and it will give you a breakdown. From there you can see what indexes you need to optimize the query.
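If you'd rather stay in the Rails console, relations can also print their plan directly (available since Rails 3.2):
# Prints EXPLAIN output for the query the relation would run
puts list.members.where(enterprise_registration_id: registration.id).explain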
For a quick pass, you should at least have these in your schema (a migration sketch follows the list):
enterprise_members: needs an index on enterprise_member_id
members: user_id
quizzes: section_id
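As a sketch, those indexes could be added with a migration along these lines; the table and column names are inferred from the queries above (enterprise_member_id lives on quizzes there) and may need adjusting to your actual schema:
class AddAnalyticsIndexes < ActiveRecord::Migration
  def change
    add_index :quizzes, :enterprise_member_id   # speeds up the NOT IN subquery
    add_index :quizzes, :section_id             # used by the section_id IN (?) filters
    add_index :enterprise_members, :user_id     # used by the "invited" query
  end
end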
As someone else posted, definitely look into adding indexes if needed. Some of how to refactor depends on what exactly you are trying to do with all these records. For the @members query, what are you using the @members records for? Do you really need to retrieve all attributes for every member record? If you are not using every attribute, I suggest only getting the attributes that you actually use for something; .pluck usage could be warranted. The 3rd and 4th queries look fishy. I assume you've run the queries in a console? Again, not sure what the queries are being used for, but I'll toss in that it is often useful to write raw SQL first and query the db directly. Then you can apply your findings to rewriting the ActiveRecord queries.
What is the .completed tagged on the end? Is it supposed to be there? The only thing I found close in the Rails API is .completed?. If it is a custom method, definitely look into it. You potentially also have a use case for scopes, as sketched below.
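Two small sketches of those suggestions; the Member model name is an assumption based on the "Member Load" log lines above:
# Pluck only what you actually use instead of loading full records:
member_ids = @members.pluck(:id)

# Move repeated raw-SQL conditions into named scopes:
class Member < ActiveRecord::Base
  scope :invited, -> { where(user_id: nil) }
end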
THIRD QUERY:
I unfortunately don't know Ruby on Rails, but from a PostgreSQL perspective, changing your "not in" to a left outer join should make it a little faster:
Your code:
enterprise_members.id not in (select enterprise_member_id from quizzes where quizzes.section_id IN (?)) AND enterprise_members.user_id in (select id from users)", @sections.map(&:id) )
Better version (in SQL):
select blah
from enterprise_members em
left outer join quizzes q
  on q.enterprise_member_id = em.id
 and q.section_id in (?)
join users u on u.id = em.user_id
where q.enterprise_member_id is null
Based on my understanding, this will allow Postgres to take both the enterprise_members table and the quizzes table and do a hash join. This is better than what it does now: right now it finds everything in the quizzes subquery, brings it into memory, and then tries to match it against enterprise_members.
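A rough ActiveRecord translation of that anti-join might look like this (a sketch only: the Member model name is assumed from the logs, the LEFT JOIN condition has to be written as a SQL fragment, and the users subquery is reduced to a NOT NULL check on the assumption that user_id is a valid foreign key):
# section_ids are integers, so interpolating them into the join string is safe here
section_ids = @sections.map(&:id).join(",")
@not_started = Member
  .joins("LEFT OUTER JOIN quizzes q
            ON q.enterprise_member_id = enterprise_members.id
           AND q.section_id IN (#{section_ids})")
  .where("enterprise_members.user_id IS NOT NULL")
  .where("q.enterprise_member_id IS NULL")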
FIRST QUERY:
You could also create a partial index on user_id for your first query. This will be especially good if there are a relatively small number of user_ids that are null in a large table. Partial index creation:
CREATE INDEX user_id_null_ix ON enterprise_members (user_id)
WHERE (user_id is null);
Anytime you query enterprise_members with something that matches the index's where clause, the partial index can be used and quickly limit the rows returned. See http://www.postgresql.org/docs/9.4/static/indexes-partial.html for more info.
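For Rails 4+ the same partial index can be expressed as a migration (a sketch; the :where option on add_index is supported for PostgreSQL):
class AddPartialIndexToEnterpriseMembers < ActiveRecord::Migration
  def change
    add_index :enterprise_members, :user_id,
              name:  "user_id_null_ix",
              where: "user_id IS NULL"
  end
end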
Thanks everyone for your ideas. I basically did what everyone said. I added indexes and reorganized how I called everything, but the major difference was using the pluck method. Here are my new stats:
@alt_members = list.members.pluck :id # 23ms
if list.course.sections.tests.present? && @sections = list.course.sections.tests
@quiz_member_ids = Quiz.where(section_id: @sections.map(&:id)).pluck(:enterprise_member_id) # 8.5ms
@invited = list.members.count('user_id is null') # 12.5ms
@not_started = ( @alt_members - ( @alt_members & @quiz_member_ids ) ).count # 0ms
@in_progress = ( @alt_members & @quiz_member_ids ).count # 0ms
@completes = ( @alt_members & Quiz.where(section_id: @sections.map(&:id), completed: true).pluck(:enterprise_member_id) ).count # 9.7ms
@question_count = Quiz.where(section_id: @sections.map(&:id), completed: true).limit(5).map{|quiz| quiz.answers.count}.max # 3.5ms

In an ActiveRecord relation with eager loading, how can you pluck the primary column without duplicates?

When I have an Active Record relation that preloads other model data like this:
input = Post.active.includes(:comments)
If I try to pluck the primary column I get duplicates:
output = input.pluck(:id)
(2.8ms) SELECT "posts"."id" FROM "posts" LEFT OUTER JOIN "comments" ON "comments"."post_id" = "posts"."id" WHERE "posts"."aasm_state" = 1
=> [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3]
Sure, I could just rewrite the first expression without 'includes':
input = Post.active
output = input.pluck(:id)
However, with the library I am using that is not an option (https://github.com/toptal/chewy -> a wrapper for ElasticSearch to help index ActiveRecord data). The first part (input) of the code is used outside of the gem, in the app code, while the last part is in the gem itself:
# app/something.rb
input = Post.active.includes(:comments)
# lib/something_else.rb
# target is the model defined in the input
ids = input.pluck("#{target.primary_key}")
The "includes" part is important because all the data is bulk imported to ElasticSearch, so eager loading is a must.
Another alternative I've tried is forking the gem and changing the mentioned code to this:
ids = input.pluck("distinct(#{target.table_name}.#{target.primary_key})")
That does provide the expected result, but it has (at least) two problems:
In SQL, 'distinct' should be used carefully, since it interacts with 'order by' and 'group by';
Performance penalty, because you are still running the LEFT OUTER JOINs that happen as a consequence of the eager loading. In fact, this is a huge issue for me, and my primary motivation for writing this post here at StackOverflow.
The gem maintainer posted a cleaner answer here:
input.except(:includes).pluck(:id)
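For completeness, a small demonstration of that approach against the example from the question (a sketch; the logged SQL and result shown in the comments are what you would expect from the query and output quoted above):
# Dropping the eager-load part only for the pluck call avoids the
# LEFT OUTER JOIN and therefore the duplicated ids.
input = Post.active.includes(:comments)
ids   = input.except(:includes).pluck(:id)
# SELECT "posts"."id" FROM "posts" WHERE "posts"."aasm_state" = 1
# => [1, 2, 3]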

Rails Postgres Error GROUP BY clause or be used in an aggregate function

In SQLite (development) I don't have any errors, but in production with Postgres I get the following error. I don't really understand the error.
PG::Error: ERROR: column "commits.updated_at" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: ...mmits"."user_id" = 1 GROUP BY mission_id ORDER BY updated_at...
^
: SELECT COUNT(*) AS count_all, mission_id AS mission_id FROM "commits" WHERE "commits"."user_id" = 1 GROUP BY mission_id ORDER BY updated_at DESC
My controller method:
def show
  @user = User.find(params[:id])
  @commits = @user.commits.order("updated_at DESC").page(params[:page]).per(25)
  @missions_commits = @commits.group("mission_id").count.length
end
UPDATE:
So I dug further into this PostgreSQL-specific annoyance, and I am surprised that this exception is not mentioned in the Ruby on Rails Guides.
I am using psql (PostgreSQL) 9.1.11
So from what I understand, I need to specify which columns should be used whenever I use the GROUP BY clause. I thought using SELECT would help, which can be annoying if you need to SELECT a lot of columns.
Interesting discussion here
Anyway, when I look at the error, every time the cursor points to updated_at. In the SQL query, Rails will always ORDER BY updated_at. So I have tried this horrible query:
@commits.group("mission_id, date(updated_at)")
        .select("date(updated_at), count(mission_id)")
        .having("count(mission_id) > 0")
        .order("count(mission_id)").length
which gives me the following SQL
SELECT date(updated_at), count(mission_id)
FROM "commits"
WHERE "commits"."user_id" = 1
GROUP BY mission_id, date(updated_at)
HAVING count(mission_id) > 0
ORDER BY updated_at DESC, count(mission_id)
LIMIT 25 OFFSET 0
the error is the same.
Note that no matter what it will ORDER BY updated_at, even if I wanted to order by something else.
Also I don't want to group the records by updated_at just by mission_id.
This PostgreSQL error is just misleading and comes with little explanation of how to solve it. I have tried many formulas from the StackOverflow sidebar; nothing works, and it's always the same error.
UPDATE 2:
So I got it to work, but it needs to group the updated_at because of the automatic ORDER BY updated_at. How do I count only by mission_id?
@missions_commits = @commits.group("mission_id, updated_at").count("mission_id").size
I guess you want to show the overall number of distinct Missions related to Commits; in any case it won't be the per-page number.
Try this:
@commits = @user.commits.order("updated_at DESC").page(params[:page]).per(25)
@missions_commits = @user.commits.distinct.count(:mission_id)
However, if you want to get the number of distinct Missions on the current page, I suppose it should be:
@missions_commits = @commits.collect(&:mission_id).uniq.count
Update
In Rails 3, distinct did not exist; counting distinct values has to be done this way instead:
@missions_commits = @user.commits.count(:mission_id, distinct: true)
See the docs for PostgreSQL GROUP BY here:
http://www.postgresql.org/docs/9.3/interactive/sql-select.html#SQL-GROUPBY
Basically, unlike SQLite (and MySQL), Postgres requires that any column selected or ordered on must appear in an aggregate function or the GROUP BY clause.
If you think it through, you'll see that this actually makes sense. SQLite/MySQL cheat under the hood and silently drop those fields (not sure that's technically what happens).
Or, thinking about it another way: if you are grouping by a field, what's the point of ordering on it? How would that even make sense, unless you also had an aggregate function on the ordered field?
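Applying that to the original question, one way to count only by mission_id is to build the grouped count from a relation that carries no ordering (a sketch based on the controller code above):
# Group the unordered association rather than @commits, which carries
# order("updated_at DESC") and pagination:
@missions_commits = @user.commits.group(:mission_id).count.length
# SELECT COUNT(*) AS count_all, mission_id FROM "commits"
# WHERE "commits"."user_id" = 1 GROUP BY mission_id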

PG::Error in GROUP BY clause

I couldn't think of a better way to refactor the below code (see this question), though I know it's very ugly. However, it's throwing a Postgres error (not with SQLite):
ActiveRecord::StatementInvalid:
PG::Error: ERROR:
column "articles.id" must appear in the GROUP BY clause or be used in an aggregate function
The query itself is:
SELECT "articles".*
FROM "articles"
WHERE "articles"."user_id" = 1
GROUP BY publication
Which comes from the following view code:
= @user.articles.group(:publication).map do |p|
  = p.publication
  = @user.articles.where("publication = ?", p.publication).sum(:twitter_count)
  = @user.articles.where("publication = ?", p.publication).sum(:facebook_count)
  = @user.articles.where("publication = ?", p.publication).sum(:linkedin_count)
In SQLite, this gives the output (e.g.) NYT 12 18 14 BBC 45 46 47 CNN 75 54 78, which is pretty much what I need.
How can I improve the code to remove this error?
When using GROUP BY you cannot SELECT fields that are not either part of the GROUP BY or used in an aggregate function. This is specified by the SQL standard, though some databases choose to execute such queries anyway. Since there's no single correct way to execute such a query they tend to just pick the first row they find and return that, so results will vary unpredictably.
It looks like you're trying to say:
"For each publication get me the sum of the twitter, facebook and linkedin counts for that publication".
If so, you could write:
SELECT publication,
sum(twitter_count) AS twitter_sum,
sum(linkedin_count) AS linkedin_sum,
sum(facebook_count) AS facebook_sum
FROM "articles"
WHERE "articles"."user_id" = 1
GROUP BY publication;
Translating that into ActiveRecord/Rails ... up to you, I don't use it. It looks like it's pretty much what you tried to write but ActiveRecord seems to be mangling it, perhaps trying to execute the sums locally.
Craig's answer explains the issue well. Active Record will select * by default, but you can override it easily:
@user.articles.select("publication, sum(twitter_count) as twitter_count").group(:publication).each do |row|
  p row.publication   # "BBC"
  p row.twitter_count # 45
end
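The same idea extended to all three counters, as a sketch (Active Record adds attribute readers for the SELECT aliases on the returned objects):
@user.articles
     .select("publication,
              SUM(twitter_count)  AS twitter_sum,
              SUM(facebook_count) AS facebook_sum,
              SUM(linkedin_count) AS linkedin_sum")
     .group(:publication)
     .each do |row|
  puts [row.publication, row.twitter_sum, row.facebook_sum, row.linkedin_sum].join(" ")
end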

Rails expanding fields with scope, PG does not like it

I have a model of Widgets. Widgets belong to a Store model, which belongs to an Area model, which belongs to a Company. At the Company model, I need to find all associated widgets. Easy:
class Widget < ActiveRecord::Base
  def self.in_company(company)
    includes(:store => {:area => :company}).where(:companies => {:id => company.id})
  end
end
Which will generate this beautiful query:
> Widget.in_company(Company.first).count
SQL (50.5ms) SELECT COUNT(DISTINCT "widgets"."id") FROM "widgets" LEFT OUTER JOIN "stores" ON "stores"."id" = "widgets"."store_id" LEFT OUTER JOIN "areas" ON "areas"."id" = "stores"."area_id" LEFT OUTER JOIN "companies" ON "companies"."id" = "areas"."company_id" WHERE "companies"."id" = 1
=> 15088
But I later need to use this scope in a more complex scope. The problem is that AR is expanding the query by selecting individual fields, which fails in PG because every selected field must appear in the GROUP BY clause or in an aggregate function.
Here is the more complex scope.
def self.sum_amount_chart_series(company, start_time)
  orders_by_day = Widget.in_company(company).archived.not_void.
    where(:print_datetime => start_time.beginning_of_day..Time.zone.now.end_of_day).
    group(pg_print_date_group).
    select("#{pg_print_date_group} as print_date, sum(amount) as total_amount")
end

def self.pg_print_date_group
  "CAST((print_datetime + interval '#{tz_offset_hours} hours') AS date)"
end
And this is the select it is throwing at PG:
> Widget.sum_amount_chart_series(Company.first, 1.day.ago)
SELECT "widgets"."id" AS t0_r0, "widgets"."user_id" AS t0_r1,<...BIG SNIP, YOU GET THE IDEA...> FROM "widgets" LEFT OUTER JOIN "stores" ON "stores"."id" = "widgets"."store_id" LEFT OUTER JOIN "areas" ON "areas"."id" = "stores"."area_id" LEFT OUTER JOIN "companies" ON "companies"."id" = "areas"."company_id" WHERE "companies"."id" = 1 AND "widgets"."archived" = 't' AND "widgets"."voided" = 'f' AND ("widgets"."print_datetime" BETWEEN '2011-04-24 00:00:00.000000' AND '2011-04-25 23:59:59.999999') GROUP BY CAST((print_datetime + interval '-7 hours') AS date)
Which generates this error:
PGError: ERROR: column
"widgets.id" must appear in the
GROUP BY clause or be used in an
aggregate function LINE 1: SELECT
"widgets"."id" AS t0_r0,
"widgets"."user_id...
How do I rewrite the Widget.in_company scope so that AR does not expand the select query to include every Widget model field?
As Frank explained, PostgreSQL will reject any query which doesn't return a reproducible set of rows.
Suppose you've a query like:
select a, b, agg(c)
from tbl
group by a
PostgreSQL will reject it because b is left unspecified in the group by statement. Run that in MySQL, by contrast, and it will be accepted. In the latter case, however, fire up a few inserts, updates and deletes, and the order of the rows on disk pages ends up different.
If memory serves, the implementation details are such that MySQL will actually sort by a, b and return the first b in the set. But as far as the SQL standard is concerned, the behavior is unspecified -- and sure enough, PostgreSQL does not always sort before running aggregate functions.
Potentially, this might result in different values of b in result set in PostgreSQL. And thus, PostgreSQL yields an error unless you're more specific:
select a, b, agg(c)
from tbl
group by a, b
What Frank highlighted is that, in PostgreSQL 9.1, if a is the primary key, then you can leave b unspecified -- the planner has been taught to ignore subsequent GROUP BY fields when an applicable primary key implies a unique row.
For your problem in particular, you need to keep your GROUP BY as it is, plus add every field that you select outside an aggregate, i.e. "widgets"."id", "widgets"."user_id", [snip] -- but not things like sum(amount), which are the aggregate function calls.
As an off-topic side note, I'm not sure how your ORM/model works, but the SQL it's generating isn't optimal. Many of those left outer joins look like they should be inner joins, which would also allow the planner to pick an appropriate join order where applicable.
PostgreSQL version 9.1 (beta at this moment) might fix your problem, but only if there is a functional dependency on the primary key.
From the release notes:
Allow non-GROUP BY columns in the query target list when the primary key is specified in the GROUP BY clause (Peter Eisentraut)
Some other database systems already allowed this behavior, and because of the primary key, the result is unambiguous.
You could run a test and see if it fixes your problem. If you can wait for the production release, this can fix the problem without changing your code.
Firstly, simplify your life by storing all dates in a standard time zone. Converting dates between time zones should really be done in the view as a user convenience. This alone should save you a lot of pain.
If you're already in production write a migration to create a normalised_date column wherever it would be helpful.
I propose that the other problem here is the use of raw SQL, which Rails won't poke around in for you. To avoid this, try using the gem called Squeel (aka MetaWhere 2): http://metautonomo.us/projects/squeel/
If you use this you should be able to remove hard coded SQL and let rails get back to doing its magic.
For example:
.select("#{pg_print_date_group} as print_date, sum(amount) as total_amount")
becomes (once you remove the need for normalising the date):
.select{sum(amount).as(total_amount)}
Sorry to answer my own question, but I figured it out.
First, let me apologize to those who thought I might be having an SQL or Postgres issue, it is not. The issue is with ActiveRecord and the SQL it is generating.
The answer is... use .joins instead of .includes. So I just changed the line in the top code and it works as expected.
class Widget < ActiveRecord::Base
  def self.in_company(company)
    joins(:store => {:area => :company}).where(:companies => {:id => company.id})
  end
end
I'm guessing that when using .includes, ActiveRecord is trying to be smart and use JOINS in the SQL, but it's not smart enough for this particular case and was generating that ugly SQL to select all associated columns.
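For reference, a quick way to confirm what SQL the scope now produces (a sketch; with joins, the grouped query should select only the grouped expression and the sums, which PostgreSQL accepts):
puts Widget.sum_amount_chart_series(Company.first, 1.day.ago).to_sql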
However, all the replies have taught me quite a bit about Postgres that I did not know, so thank you very much.
Sort in MySQL:
> ids = [11,31,29]
=> [11, 31, 29]
> Page.where(id: ids).order("field(id, #{ids.join(',')})")
In Postgres:
def self.order_by_ids(ids)
  order_by = ["case"]
  ids.each_with_index.map do |id, index|
    order_by << "WHEN id='#{id}' THEN #{index}"
  end
  order_by << "end"
  order(order_by.join(" "))
end
User.where(:id => [3,2,1]).order_by_ids([3,2,1]).map(&:id)
#=> [3,2,1]
