I have some raw SQL and I'm not sure whether it would be better as an ActiveRecord call or whether I should keep the raw SQL. Would this be easy to convert to ActiveRecord?
select *
from logs t1
where
log_status_id = 2 and log_type_id = 1
and not exists
(
select *
from logs t2
where t2.log_version_id = t1.log_version_id
and t2.log_status_id in (1,3,4)
and log_type_id = 1
)
ORDER BY created_at ASC
So something like this?:
Log.where(:log_status_id=>2, log_type_id => 1).where.not(Log.where.....)
You could do this using Arel. See "Rails 3: Arel for NOT EXISTS?" for an example.
Personally, I often find raw SQL to be more readable and maintainable than Arel queries, though. And I guess most developers are more familiar with it in general, too.
But in any case, your approach of separating the narrowing by log_status_id and log_type_id from the subquery is a good idea, even though your .where.not construct won't work as written.
This should do the trick however:
Log.where(log_status_id: 2, log_type_id: 1)
   .where("NOT EXISTS (
             select *
             from logs t2
             where t2.log_version_id = logs.log_version_id
               and t2.log_status_id in (1,3,4)
               and t2.log_type_id = logs.log_type_id)")
   .order(:created_at)
The only situation where this might become problematic is when you try to join this query to other queries, because the outer table will likely receive a different alias than logs.
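For reference, here is a rough, untested sketch of the Arel route mentioned above. It assumes a Log model and the Rails 5+ Arel API; the t2 alias and the status/type values are taken from your SQL.
logs = Log.arel_table
t2   = Arel::Table.new(:logs, as: 't2')  # self-join alias for the correlated subquery

# Another row for the same log_version_id whose status is 1, 3 or 4
subquery = t2.project(Arel.sql('1'))
             .where(t2[:log_version_id].eq(logs[:log_version_id]))
             .where(t2[:log_status_id].in([1, 3, 4]))
             .where(t2[:log_type_id].eq(1))

Log.where(log_status_id: 2, log_type_id: 1)
   .where(subquery.exists.not)
   .order(:created_at)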
So I want to translate this SQL query into Rails (and in this EXACT order). Suppose I have:
WITH sub_table as (
SELECT * FROM main_table ORDER BY id LIMIT 10 OFFSET 100
)
SELECT * FROM sub_table INNER JOIN other_table
ON sub_table.id = other_table.other_id
The important point here is the order of execution:
The LIMIT and OFFSET in the sub_table query MUST be applied first.
The join in the second statement should happen after.
So if the relations I have are called OtherTable and MainTable, would something like this work?
subTableRelation = MainTable.order(id: :asc).limit(10).offset(100)
subTableRelation.join(OtherTable, ....)
The main question here is how Rails Relation execution order impacts things.
While ActiveRecord does not provide CTEs in its high-level API, Arel will allow you to build this exact query.
Since you did not provide models and obfuscated the table names, I will build this completely in Arel for the time being.
sub_table   = Arel::Table.new('sub_table')
main_table  = Arel::Table.new('main_table')
other_table = Arel::Table.new('other_table')

# The CTE body: ORDER BY, LIMIT and OFFSET applied to main_table first
sub_table_query = main_table.project(Arel.star).take(10).skip(100).order(main_table[:id])

# "sub_table AS (...)" for the WITH clause
sub_table_alias = Arel::Nodes::As.new(Arel.sql(sub_table.name), sub_table_query)

# The outer query joins the CTE to other_table
query = sub_table.project(Arel.star)
                 .join(other_table).on(sub_table[:id].eq(other_table[:other_id]))
                 .with(sub_table_alias)

query.to_sql
Output:
WITH sub_table AS (
SELECT
*
FROM main_table
ORDER BY main_table.id
-- Output here will differ by database
LIMIT 10 OFFSET 100
)
SELECT
*
FROM sub_table
INNER JOIN other_table ON sub_table.id = other_table.other_id
If you are able to provide better context, I can provide a better solution, most likely resulting in an ActiveRecord::Relation object, which is likely to be preferable for chaining and model access purposes.
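As a quick illustration of executing the generated SQL, assuming a model exists for one of the obfuscated tables (OtherTable mapped to other_table is purely hypothetical here):
# Instantiate OtherTable objects from the Arel-generated SQL
rows = OtherTable.find_by_sql(query.to_sql)

# Or fetch plain rows without any model, as an ActiveRecord::Result
rows = ActiveRecord::Base.connection.select_all(query.to_sql)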
I have tried converting this plain SQL query to a Rails ActiveRecord query, but I am unable to do so.
select vote_shares.election_year as vs_election_name,
vote_shares.party as vs_party,
(sum(vote_shares.party_seats)/totals.total)*100 AS vs
from pcdemographics INNER JOIN vote_shares on vote_shares.pc_id = pcdemographics.pc_id,
(
SELECT vote_shares.election_name, sum(vote_shares.party_seats) as total
FROM `pcdemographics`
INNER JOIN vote_shares on vote_shares.pc_id = pcdemographics.pc_id
GROUP BY `election_name`
) AS totals
where vote_shares.election_name=totals.election_name
group by vote_shares.party,vote_shares.election_name;
This is what I have tried
@vssubquery = Pcdemographic.select('vote_shares.election_name, sum(vote_shares.party_seats) as total')
                           .joins('INNER JOIN vote_shares on vote_shares.pc_id = pcdemographics.pc_id')

Pcdemographic.select("vote_shares.election_year as vs_election_year,
                      vote_shares.party as vs_party,
                      (sum(vote_shares.party_seats)/'#{totals.total}')*100 AS vs")
             .from(@vssubquery, :totals)
             .joins("INNER JOIN vote_shares on vote_shares.pc_id = pcdemographics.pc_id and vote_shares.election_name='#{totals.election_name}'")
My answer might not be what you hoped for, but I recommend not using AR here; use Sequel (http://sequel.jeremyevans.net/) instead. It uses the concept of datasets, which I don't think has any equivalent in AR.
Disclaimer: nobody asked me to advertise it. I have used both AR and Sequel, and I found Sequel much better for performing complex queries and avoiding the N+1 problem.
Did you try the find_by_sql method?
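For example (a rough sketch, not a drop-in answer): the original query can be handed to find_by_sql on the Pcdemographic model as-is, and the selected aliases become attributes on the returned objects.
sql = <<~SQL
  SELECT vote_shares.election_year AS vs_election_name,
         vote_shares.party AS vs_party,
         (SUM(vote_shares.party_seats) / totals.total) * 100 AS vs
  FROM pcdemographics
  INNER JOIN vote_shares ON vote_shares.pc_id = pcdemographics.pc_id,
       (SELECT vote_shares.election_name, SUM(vote_shares.party_seats) AS total
        FROM pcdemographics
        INNER JOIN vote_shares ON vote_shares.pc_id = pcdemographics.pc_id
        GROUP BY election_name) AS totals
  WHERE vote_shares.election_name = totals.election_name
  GROUP BY vote_shares.party, vote_shares.election_name
SQL

results = Pcdemographic.find_by_sql(sql)
results.first.vs_party  # attributes follow the SELECT aliases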
I'm just beginning with Ruby on Rails and have a question about a slightly more complex query. So far I've done simple queries while following the Rails guides and it has worked really well.
Right now I'm trying to get some IDs from the database and then use those IDs to fetch the actual objects and do something with them. Getting them is a bit more complex than a simple Object.find call.
Here is what my query looks like:
select * from quotas q, requests r
where q.id=r.quota_id
and q.status=3
and r.text is not null
and q.id in
(
select A.id from (
select max(id) as id, name
from quotas
group by name) A
)
order by q.created_at desc
limit 1000;
This gives me 1000 IDs when I execute the query from a SQL manager, and I was thinking of obtaining the list of IDs first and then finding the objects by ID.
Is there a way to get these objects directly using this query, avoiding the ID lookup? I googled that you can execute a query like this:
ActiveRecord::Base.connection.execute(query);
Assuming Quota has_many :requests:
Quota.includes(:requests).
  references(:requests).   # needed on Rails 4+ so the string condition on requests is joined
  where(status: 3).
  where('requests.text is not null').
  where("quotas.id in (#{subquery_string_here})").
  order('quotas.created_at desc').limit(1000)
I'm by no means an expert, but most basic SQL functionality is baked into ActiveRecord. You might also want to look at the #group and #pluck methods for ways to eliminate the ugly string subquery; a sketch follows below.
Calling #to_sql on a relation object will show you the SQL statement it is equivalent to, and may help with your debugging.
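Here is a rough, untested sketch of that #group/#pluck idea, assuming (as the other answer does) that Quota has_many :requests. It runs two queries: first the latest id per name, then the outer relation.
latest_ids = Quota.group(:name).pluck(Arel.sql('MAX(id)'))  # one id per name

quotas = Quota.joins(:requests)
              .where(status: 3)
              .where.not(requests: { text: nil })
              .where(id: latest_ids)
              .order(created_at: :desc)
              .limit(1000)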
I would use find_by_sql for this. I wouldn't swear that this is exactly right, but as I recall you can pretty much plonk an SQL statement into a find_by_sql and the resulting columns will be returned as attributes of an array of objects of the class you call it on:
status = 3
Quota.find_by_sql(['
  select *
  from quotas q, requests r
  where q.id = r.quota_id
  and q.status = ?
  and r.text is not null
  and q.id in
  (
    select A.id from (
      select max(id) as id, name
      from quotas
      group by name) A
  )
  order by q.created_at desc
  limit 1000', status])
If you come to Rails as someone used to writing raw SQL, you're probably better off using this syntax than stringing together a bunch of ActiveRecord methods; the result is the same, so it's just a matter of what you find more readable.
Btw, you shouldn't use string interpolation (i.e. the #{variable} syntax) inside an SQL query. Use the '?' placeholder syntax instead (see my example) to avoid the potential for SQL injection.
I'm trying to query my db for records that are similar to the currently viewed record (based on taggings), which I have working, but I would like to randomize the order.
My development environment is MySQL, so I would do something like:
@tattoos = Tattoo.tagged_with(tags, :any => true).order("RAND()").limit(6)
which works, but my production environment is Heroku, which uses PostgreSQL, so I tried this:
@tattoos = Tattoo.tagged_with(tags, :any => true).order("RANDOM()").limit(6)
but I get the following error:
ActionView::Template::Error (PGError: ERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list
SELECT DISTINCT tattoos.* FROM "tattoos" JOIN taggings
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477 ON
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.taggable_id = tattoos.id AND
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.taggable_type = 'Tattoo' WHERE
(tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 3 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 4 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 5 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 24 OR
tattoos_taggings_color_fantasy_newschool_nerdy_tv_477.tag_id = 205) ORDER BY RANDOM() LIMIT 6):
After analyzing the query more closely, I have to correct my first draft. The query would require a DISTINCT or GROUP BY the way it is.
The (possibly) duplicate tattoos.* come from first joining to (possibly) multiple rows in the table taggings. Your query engine then tries to get rid of such duplicates again by using DISTINCT - in a syntactically illegal way.
DISTINCT basically sorts the resulting rows by the resulting columns from left to right and picks the first for each set of duplicates. That's why the leftmost ORDER BY columns have to match the SELECT list.
MySQL is more permissive and allows the non-standard use of DISTINCT, but PostgreSQL throws an error.
ORMs often produce inefficient SQL statements (they are just crutches, after all). However, if you use appropriate PostgreSQL libraries, such an illegal statement shouldn't be produced to begin with. I am no Ruby expert, but something's fishy here.
The query is also very ugly and inefficient.
There are several ways to fix it. For instance:
SELECT *
FROM (<query without ORDER BY and LIMIT>) x
ORDER BY RANDOM()
LIMIT 6
Or, better yet, rewrite the query with this faster, cleaner alternative doing the same:
SELECT ta.*
FROM tattoos ta
WHERE EXISTS (
SELECT 1
FROM taggings t
WHERE t.taggable_id = ta.id
AND t.taggable_type = 'Tattoo'
AND t.tag_id IN (3, 4, 5, 24, 205)
)
ORDER BY RANDOM()
LIMIT 6;
You'll have to implement it in Ruby yourself.
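One possible (untested) sketch of that rewrite in ActiveRecord, assuming a Tattoo model; the tag ids are the ones from the error message, and the EXISTS condition is passed as plain SQL with a named bind:
exists_sql = <<~SQL
  EXISTS (
    SELECT 1
    FROM taggings t
    WHERE t.taggable_id = tattoos.id
      AND t.taggable_type = 'Tattoo'
      AND t.tag_id IN (:tag_ids)
  )
SQL

@tattoos = Tattoo.where(exists_sql, tag_ids: [3, 4, 5, 24, 205])
                 .order(Arel.sql('RANDOM()'))  # Arel.sql keeps newer Rails from rejecting the raw string
                 .limit(6)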
Not sure about the random ordering, as it should work.
But take note of http://railsforum.com/viewtopic.php?id=36581
which has code that might suit you:
/lib/agnostic_random.rb
module AgnosticRandom
  def random
    case DB_ADAPTER
    when "mysql"      then "RAND()"
    when "postgresql" then "RANDOM()"
    end
  end
end
/initializers/extend_ar.rb (name doesn't matter)
ActiveRecord::Base.extend AgnosticRandom
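Hypothetical usage, assuming DB_ADAPTER is defined somewhere (for example from ActiveRecord::Base.connection.adapter_name.downcase):
@tattoos = Tattoo.tagged_with(tags, :any => true).order(Tattoo.random).limit(6)
# (on newer Rails, wrap the raw string: .order(Arel.sql(Tattoo.random)))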
I have a custom query that looks like this:
self.account.websites.find(:all,:joins => [:group_websites => {:group => :users}],:conditions=>["users.id =?",self])
where self is a User object.
I managed to generate the equivalent SQL for it.
Here is how it looks:
sql = "select * from websites INNER JOIN group_websites on group_websites.website_id = websites.id INNER JOIN groups on groups.id = group_websites.group_id INNER JOIN group_users ON (groups.id = group_users.group_id) INNER JOIN users on (users.id = group_users.user_id) where (websites.account_id = #{account_id} AND (users.id = #{user_id}))"
With a decent understanding of SQL and ActiveRecord, I assumed (and I think most would agree) that the result of the query above would take longer to obtain than the result of the find_by_sql(sql) one.
But surprisingly, when I ran the two, I found the ActiveRecord custom query beating find_by_sql in terms of load time.
Here are the test results:
ActiveRecord custom query load time:
Website Load (0.9ms)
Website Columns (1.0ms)
find_by_sql load time:
Website Load (1.3ms)
Website Columns (1.0ms)
I repeated the test again and again and the result still came out the same (with the custom query winning).
I know the difference isn't big, but I still can't figure out why a plain find_by_sql query is slower than the custom query.
Can anyone shed some light on this?
Thanks Anyway
Regards
Viren Negi
With the find case, the query is parameterized; this means the database can cache the query plan and will not need to parse and compile the query again.
With the find_by_sql case the entire query is passed to the database as a string. This means there is no caching that the database can do on the structure of the query, and it needs to be parsed and compiled on each occasion.
I think you can test this: try find_by_sql in this way (parameterized):
Website.find_by_sql(["select * from websites INNER JOIN group_websites on group_websites.website_id = websites.id INNER JOIN groups on groups.id = group_websites.group_id INNER JOIN group_users ON (groups.id = group_users.group_id) INNER JOIN users on (users.id = group_users.user_id) where (websites.account_id = ? AND (users.id = ?))", account_id, user_id])
Well, the reason is probably quite simple: with custom SQL, the SQL query is sent immediately to the db server for execution.
Remember that Ruby is an interpreted language, so Rails has to generate a new SQL query from the ORM meta language you used before it can be sent to the actual db server for execution. I would say the additional 0.1 ms is the time taken by the framework to generate the query.
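One way to sanity-check this (a sketch, with account and user_id assumed from the question) is to look at the SQL ActiveRecord actually generates before timing anything:
relation = account.websites
                  .joins(group_websites: { group: :users })
                  .where(users: { id: user_id })
puts relation.to_sql  # compare against the hand-written SQL string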