The executeUpdate method doesn't seem to support a LIMIT clause when deleting data, for example deleting 250 rows out of 1000 available. The following does not work properly; is LIMIT not supported in HQL?
Subscriber.executeUpdate("DELETE FROM Subscriber WHERE date_created <= :minDate LIMIT 250")
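LIMIT is not part of the HQL grammar for bulk DELETE statements, so a common workaround is a two-step pattern: select the ids of the first 250 matching rows (in HQL via setMaxResults), then delete by id. A minimal sketch of that pattern in plain SQL using SQLite (schema and dates are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscriber (id INTEGER PRIMARY KEY, date_created TEXT)")
conn.executemany("INSERT INTO subscriber (date_created) VALUES (?)",
                 [("2020-01-01",)] * 1000)

# Step 1: select the ids of the first 250 rows matching the date filter.
ids = [row[0] for row in conn.execute(
    "SELECT id FROM subscriber WHERE date_created <= ? ORDER BY id LIMIT 250",
    ("2021-01-01",))]

# Step 2: delete those rows by id.
conn.executemany("DELETE FROM subscriber WHERE id = ?", [(i,) for i in ids])
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM subscriber").fetchone()[0]
print(remaining)  # 750
```

In Hibernate the same shape would be a query with setMaxResults(250) returning ids, followed by a `DELETE ... WHERE id IN (:ids)` bulk update.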
I am trying to count distinct sessionIds from a measurement. sessionId being a tag, I count the distinct entries in a "parent" query, since distinct() doesn't work on tags.
In the subquery, I use group by sessionId limit 1 to still benefit from the index (if there is a more efficient technique I'm all ears, but I'd still like to understand what's going on here).
I have those two variants:
> select count(distinct(sessionId)) from (select * from UserSession group by sessionId limit 1)
name: UserSession
time count
---- -----
0 3757
> select count(sessionId) from (select * from UserSession group by sessionId limit 1)
name: UserSession
time count
---- -----
0 4206
To my understanding, those should return the same number, since group by sessionId limit 1 already returns distinct sessionIds (in the form of groups).
And indeed, if I execute:
select * from UserSession group by sessionId limit 1
I have 3757 results (groups), not 4206.
In fact, as soon as I put this in a subquery and re-select fields in a parent query, some sessionIds have multiple occurrences in the final result. Not all of them, since there are 17549 rows in total, but some do.
This suggests the limit 1 is partly working, but some sessionIds still get multiple entries when re-selected. Maybe some kind of undefined behaviour?
I can confirm that I get the same result.
In my experience using nested queries does not always deliver what you expect/want.
Depending on how you use this you could retrieve a list of all values for a tag with:
SHOW TAG VALUES FROM UserSession WITH KEY=sessionId
Or, to get the cardinality (the number of distinct values for a tag):
SHOW TAG VALUES EXACT CARDINALITY FROM UserSession WITH KEY=sessionId
This returns a single row with a single column, count, containing the number. You can remove the EXACT modifier if an approximate result is acceptable: see SHOW TAG VALUES CARDINALITY in the Influx documentation.
I'm having a hard time converting this SQL query to ActiveRecord. Here's my SQL.
SELECT distinct stc_term_gpa,
user_id,
lesley_id,
first,
last
FROM lesley_grades
ORDER BY user_id
and this returns 437 rows, which is what I want. I have no problems running this in SQL.
But when I try to run this in Rails in the Console:
LesleyGrade.select(:stc_term_gpa, :user_id, :lesley_id, :first, :last).distinct.order(:user_id)
I verified the count in rails
LesleyGrade.select(:stc_term_gpa, :user_id, :lesley_id, :first, :last).distinct.order(:user_id).count(:all)
It returns 1440 rows, which is not what I want...that's every data row in my DB.
What's going on?
UPDATE
This is strange:
When I run the active record query in Rails Console and get the count 1440
LesleyGrade.select(:stc_term_gpa, :user_id, :lesley_id, :first, :last).distinct.order(:user_id).count(:all)
I check the SQL in the console (the one that ActiveRecord produces) and this is what it produces:
SELECT DISTINCT
"lesley_grades"."stc_term_gpa",
"lesley_grades"."user_id",
"lesley_grades"."lesley_id",
"lesley_grades"."first",
"lesley_grades"."last"
FROM "lesley_grades"
ORDER BY "lesley_grades"."user_id" ASC
When I run the above SQL in a SQL client I get the 437 rows. Again a discrepancy.
HOWEVER, when I play with the SQL that Rails produces and add lesley_grades.id to the projection on my own, running the raw SQL in a client like this, I get 1440 rows (shouldn't it still return 437 even if I add the id?)
SELECT DISTINCT
"lesley_grades"."stc_term_gpa",
**"lesley_grades"."id"**,
"lesley_grades"."user_id",
"lesley_grades"."lesley_id",
"lesley_grades"."first",
"lesley_grades"."last"
FROM "lesley_grades"
ORDER BY "lesley_grades"."user_id" ASC
So I guess the question is: does ActiveRecord somehow sneak the id into the query, which is why I'm receiving 1440? How do I get my distinct 437 rows?
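The 1440-row result with id in the projection can be reproduced outside Rails: since id is unique per row, DISTINCT over a column list that includes it can never collapse any rows. A minimal sketch with SQLite (table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lesley_grades (id INTEGER PRIMARY KEY, user_id INT, stc_term_gpa REAL)")
# Two rows identical except for their auto-assigned ids, plus one distinct row.
conn.executemany(
    "INSERT INTO lesley_grades (user_id, stc_term_gpa) VALUES (?, ?)",
    [(1, 3.5), (1, 3.5), (2, 4.0)])

# DISTINCT over the data columns collapses the duplicate pair.
without_id = conn.execute(
    "SELECT DISTINCT user_id, stc_term_gpa FROM lesley_grades").fetchall()

# Adding the unique id makes every row distinct, so nothing collapses.
with_id = conn.execute(
    "SELECT DISTINCT id, user_id, stc_term_gpa FROM lesley_grades").fetchall()

print(len(without_id))  # 2
print(len(with_id))     # 3
```

So if anything (an ORM's count implementation included) slips a unique column into the DISTINCT projection, the row count reverts to the full table size.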
You can still use the find_by_sql method:
query = "SELECT distinct stc_term_gpa, user_id,lesley_id,first,last
FROM lesley_grades
ORDER BY user_id"
expectedResult = LesleyGrade.find_by_sql(query)
I'm having issues translating this SQL statement to activerecord / ruby friendly code. Note that the end_at date is really DateTime.now.
SELECT DISTINCT events.id FROM events, channels
WHERE events.channel_id = channels.id AND events.end_at >= '2015-02-11 22:55:04'
ORDER BY start_at ASC, id ASC LIMIT 40
Suggestions?
Edit: This problem originally comes from the fact that MySQL doesn't support nested limits in subqueries for the app I'm working on, so pagination combined with this query was causing an error:
# channels is an activerecord relation, order_by_schedule is a scope
Event.where(:channel_id => channels).where{ end_at >= DateTime.now }.order_by_schedule.limit(channels.count * event_limit)
I think you do not need to join channels, because you do not use them at all:
Event.select('DISTINCT id').
  where('end_at >= NOW()').
  order('start_at, id').
  limit(40)
I am using an sqlite3 database in my project, and I can retrieve data from it using the query "select * from tablename".
But I want to fetch records from the database in batches of one hundred: as I scroll the UITableView, I want to load the next 100 records each time.
I have tried the following things,
SELECT * FROM mytable ORDER BY record_date DESC LIMIT 100; - this retrieves only the first 100 records. When I scroll the table I want to fetch the next 100 records and show them.
Is it possible to do this? Please guide me.
You could simply use the OFFSET clause, but this would still force the database to compute all the records that you're skipping over, so it would become inefficient for a larger table.
What you should do is to save the last record_date value of the previous page, and continue with the following ones:
SELECT *
FROM MyTable
WHERE record_date < ?
ORDER BY record_date DESC
LIMIT 100
See https://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor for details.
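The steps above can be sketched with SQLite directly: fetch one page, remember the last record_date you showed, and pass it as the upper bound for the next page, so the database walks the index instead of skipping rows. Table and dates here are made up for illustration (and note that if record_date is not unique you'd add id as a tiebreaker):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, record_date TEXT)")
conn.executemany("INSERT INTO MyTable (record_date) VALUES (?)",
                 [(f"2015-01-{d:02d}",) for d in range(1, 31)])

PAGE = 10

def next_page(before):
    # Keyset pagination: continue strictly past the last seen record_date.
    return conn.execute(
        "SELECT id, record_date FROM MyTable "
        "WHERE record_date < ? ORDER BY record_date DESC LIMIT ?",
        (before, PAGE)).fetchall()

page1 = next_page("9999-99-99")   # first page: bound above every date
page2 = next_page(page1[-1][1])   # cursor = last record_date of page 1

print(page1[-1][1])  # 2015-01-21
print(page2[0][1])   # 2015-01-20
```

Each scroll event just calls next_page again with the last record_date currently displayed; no OFFSET is ever computed.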
I have a table with paginated data and this is the way I select data for each page:
@visitors = EventsVisitor
.select('visitors.*, events_visitors.checked_in, events_visitors.checkin_date, events_visitors.source, events_visitors.id AS ticket_id')
.joins(:visitor)
.order(order)
.where(:event_id => params[:event_id])
.where(filter_search)
.where(mode)
.limit(limit)
.offset(offset)
Also to build table pagination I need to know total count of records. Currently my solution for this is very rough:
total = EventsVisitor
.select('count(*) as count, events_visitors.*')
.joins(:visitor)
.order(order)
.where(:event_id => params[:event_id])
.where(filter_search)
.where(mode)
.first()
.count
So my question is as follows - What is the optimal ruby way to select limited data for the current page and total count of records?
I noticed that if I do @visitors.count, an additional SQL query is generated:
SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM `events_visitors` INNER JOIN `visitors` ON `visitors`.`id` = `events_visitors`.`visitor_id` WHERE `events_visitors`.`event_id` = 1 LIMIT 15 OFFSET 0) subquery_for_count
First of all, I do not understand the reason for sending an additional query to count data we already have; after loading @visitors from the database, we could count it in Ruby without another round trip to the DB.
Second, I thought there might be something like a .total_count method that generates a similar count(*) query, but without that useless limit/offset.
You should use except to strip the limit and offset:
http://guides.rubyonrails.org/active_record_querying.html#except .
See how kaminari does it
https://github.com/kaminari/kaminari/blob/92052eedf047d65df71cc0021a9df9df1e2fc36e/lib/kaminari/models/active_record_relation_methods.rb#L11
So it might be something like
total = @visitors.except(:offset, :limit, :order).count
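The idea, independent of ActiveRecord, is to run the same filtered query twice: once with LIMIT/OFFSET for the page, and once as a bare COUNT(*) with the limit/offset stripped for the total. A minimal sketch with SQLite (schema and counts are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events_visitors (id INTEGER PRIMARY KEY, event_id INT)")
conn.executemany("INSERT INTO events_visitors (event_id) VALUES (?)",
                 [(1,)] * 47 + [(2,)] * 5)

where, params = "event_id = ?", (1,)

# Page query: the shared filter, plus LIMIT/OFFSET for the current page.
page = conn.execute(
    f"SELECT id FROM events_visitors WHERE {where} LIMIT 15 OFFSET 0",
    params).fetchall()

# Total query: the same filter with no LIMIT/OFFSET, which is what
# except(:limit, :offset) produces on the Rails side.
total = conn.execute(
    f"SELECT COUNT(*) FROM events_visitors WHERE {where}", params).fetchone()[0]

print(len(page), total)  # 15 47
```

Counting the loaded page in Ruby would only ever give the page size (15 here), which is why the separate unlimited count is needed for pagination.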