I have a method in my Trial model that takes the last two digits of season_year and appends a unique number generated from a count method.
It all works fine and dandy, except when I delete a previous record.
For example, say I have created 1800, 1801 and 1802. When I delete 1800 and try to create a new record, I get the following in the log:
CACHE (0.0ms) SELECT COUNT(*) FROM "trials" WHERE "trials"."season_year" BETWEEN $1 AND $2 [["season_year", "2018-01-01"], ["season_year", "2018-12-31"]]
CACHE Trial Exists (0.0ms) SELECT 1 AS one FROM "trials" WHERE "trials"."trial_number" = $1 LIMIT $2 [["trial_number", 1802], ["LIMIT", 1]]
For some reason it keeps looping over the two queries above.
My desired outcome is for it to check whether a number exists and, if it does, move on to the next one; but it doesn't work once a previous record has been deleted.
What I'd like it to do is start at 00 every time and check whether that number has been used; if it has, move on to the next one, i.e. 01.
Model
class Trial < ApplicationRecord
  before_create :create_trial_number

  def count_records_from_same_year
    self.class.where(season_year: (season_year.beginning_of_year..season_year.end_of_year)).count
  end

  def create_trial_number
    loop do
      year = season_year.strftime("%y")
      self.trial_number = year.concat(sprintf('%02d', count_records_from_same_year))
      break unless self.class.where(trial_number: self.trial_number).exists?
    end
  end
end
It looks like the code you posted doesn't mutate any value to advance the loop. I assume you'd need to add a self.season_year += 1 (or increment some other counter) before the end of the block to achieve your goal.
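For what it's worth, a minimal sketch (my own, untested) of one way to advance the loop: compute the year's count once, then increment a local offset until a free number is found, instead of recomputing the same count on every pass.

def create_trial_number
  year = season_year.strftime("%y")
  offset = count_records_from_same_year
  loop do
    self.trial_number = year + format('%02d', offset)
    break unless self.class.where(trial_number: trial_number).exists?
    offset += 1 # move on to the next suffix instead of looping on the same value
  end
end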
Your method of creating a unique ID for the trial is highly ineffective.
You can use the id from the trials table as a unique identifier for the individual trial.
Alternatively you can let the database create an automatically incrementing column. MySQL uses the AUTO_INCREMENT keyword, Postgres uses the concept of sequences.
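If the year prefix is only cosmetic, a hedged sketch of that id-based approach: assign the public trial number after the row exists, so uniqueness comes from the primary key. The assign_trial_number callback name is my own, not from the original code.

class Trial < ApplicationRecord
  after_create :assign_trial_number

  private

  # e.g. "18" + zero-padded id; update_column skips validations and callbacks
  def assign_trial_number
    update_column(:trial_number, "#{season_year.strftime('%y')}#{format('%02d', id)}")
  end
end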
Related
I'm creating a filter for my Point model in a Ruby on Rails app. The app uses ActiveAdmin + Ransacker for filters. I wrote 3 methods to filter Points:
def self.filter_by_customer_bonus(bonus_id)
  Point.joins(:customer).where('customers.bonus_id = ?', bonus_id)
end

def self.filter_by_classificator_bonus(bonus_id)
  Point.joins(:setting).where('settings.bonus_id = ?', bonus_id)
end

def self.filter_by_bonus(bonus_id)
  Point.where(bonus_id: bonus_id)
end
Everything works fine, but I need to merge the results of the 3 methods into one array. When Point.count is large (over 1,000,000 on the production server, for example) it is too slow, so I need to merge all of them into one method. The problem is that I need to order the final merged array this way:
The result array should start with the results of the first method, then append the results of the second method, and then the third in the same way.
Is it possible to move these 3 SQL queries into 1 to make it work faster, and order it as I described above?
For example my Points are [1,2,3,4,5,6,7,8,9,10]
Result of first = [1,2,3]
Result of second = [2,3,4]
Result of third = [5,6,7]
After the merge I should get [1,2,3,4,5,6,7], but it should come from a single query, not three queries plus a merge. Hope you understand me :)
UPDATE:
The result of the first answer:
Point Load (8.0ms) SELECT "points".* FROM "points" INNER JOIN "customers" ON "customers"."number" = "points"."customer_number" INNER JOIN "managers" ON "managers"."code" = "points"."tp" INNER JOIN "settings" ON "settings"."classificator_id" = "managers"."classificator_id" WHERE "points"."bonus_id" = $1 AND "customers"."bonus_id" = $2 AND "settings"."bonus_id" = $3 [["bonus_id", 2], ["bonus_id", 2], ["bonus_id", 2]]
It returns an empty array.
You can union these using or (documentation):
def self.filter_trifecta(bonus_id)
  (
    filter_by_customer_bonus(bonus_id)
  ).or(
    filter_by_classificator_bonus(bonus_id)
  ).or(
    filter_by_bonus(bonus_id)
  )
end
Note: you might have to hoist those joins up to the first condition; I'm not sure if or will handle those joins well as-is.
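For what that hoisting might look like, a hedged sketch (untested, assumes Rails 5+): give every branch the same left_joins so the relations passed to or are structurally compatible, and points without a customer or setting row are not dropped.

def self.filter_trifecta(bonus_id)
  base = Point.left_joins(:customer, :setting)
  base.where(customers: { bonus_id: bonus_id })
      .or(base.where(settings: { bonus_id: bonus_id }))
      .or(base.where(bonus_id: bonus_id))
end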
The query below gives you all the results in a single query. If you have indexes on the foreign keys used here, it should be able to handle a million records.
The one provided earlier does an AND across all 3 conditions, that's why you had zero results; you need a UNION, and the query below should work. (Note: if you are using Rails 5, there is ActiveRecord syntax for this (or), which the earlier answer used.)
Updated:
Point.from(
  "(#{Point.joins(:customer).where(customers: { bonus_id: bonus_id }).to_sql}
    UNION
    #{Point.joins(:setting).where(settings: { bonus_id: bonus_id }).to_sql}
    UNION
    #{Point.where(bonus_id: bonus_id).to_sql})
   AS points")
Instead you can also use your 3 methods like below:
Point.from("(#{Point.filter_by_customer_bonus(bonus_id).to_sql}
UNION
#{Point.filter_by_classificator_bonus(bonus_id).to_sql}
UNION
#{Point.filter_by_bonus(bonus_id).to_sql}
) as points")
So here's the lay of the land:
I have an Applicant model which has_many Lead records.
I need to group leads by applicant email, i.e. for each specific applicant email (there may be 2+ applicant records with the same email) I need to get a combined list of leads.
I already have this working using an in-memory / N+1 solution.
I want to do this in a single query, if possible. Right now I'm running one query for each lead, which is maxing out the CPU.
Here's my attempt right now:
Lead.
  all.
  select("leads.*, applicants.*").
  joins(:applicant).
  group("applicants.email").
  having("count(*) > 1").
  limit(1).
  to_a
And the error:
Lead Load (1.2ms) SELECT leads.*, applicants.* FROM "leads" INNER
JOIN "applicants" ON "applicants"."id" = "leads"."applicant_id"
GROUP BY applicants.email HAVING count(*) > 1 LIMIT 1
ActiveRecord::StatementInvalid: PG::GroupingError: ERROR: column
"leads.id" must appear in the GROUP BY clause or be used in an
aggregate function
LINE 1: SELECT leads.*, applicants.* FROM "leads" INNER JOIN
"appli...
This is a Postgres-specific issue: the selected fields "must appear in the GROUP BY clause or be used in an aggregate function".
You can try this
Lead.joins(:applicant)
    .select('leads.*, applicants.email')
    .group('applicants.email, leads.id, ...')
You will need to list all the fields in the leads table in the GROUP BY clause (or at least all the fields that you are selecting).
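If listing every column gets unwieldy, a hedged alternative sketch (Postgres only) that satisfies the error's other option by aggregating the lead columns instead of grouping on them; the array_agg call and the Arel.sql wrapping are my additions, not part of the original answer:

Lead.joins(:applicant)
    .group("applicants.email")
    .having("count(*) > 1")
    .pluck(Arel.sql("applicants.email"), Arel.sql("array_agg(leads.id)"))

This returns one [email, lead_ids] pair per duplicated email rather than Lead objects.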
I would just get all the records and do the grouping in memory. If you have a lot of records, I would paginate them or batch them.
group_by_email = Hash.new { |h, k| h[k] = [] }
Applicant.eager_load(:leads).find_in_batches(batch_size: 10_000) do |batch|
  batch.each do |applicant|
    group_by_email[applicant.email].concat(applicant.leads.to_a)
  end
end
You need to use a .where rather than Lead.all. The reason it is maxing out the CPU is that you are trying to load every lead into memory at once. That said, I'm still missing what you actually want back from the query, so it would be tough for me to help you write it. Can you give more info about your associations and the expected result of the query?
I'm trying to work out how to calculate a sum across multiple records in Rails.
I feel like I've tried hundreds of different approaches, but I couldn't quite figure it out yet.
For example:
My helper returns 2 ids for products (I'm using a flash alert just to make the value visible):
view_context.time_plus.each
returns 1,2
But when I combine it with the lookup call and select multiple options, it only returns the last value instead of the sum of both:
view_context.time_plus.each do |i|
  flash[:alert] = [Service.find_by_price_id(i).time].sum
end
I can see in the logs that a call is made for both values:
Service Load (0.2ms) SELECT `services`.* FROM `services` WHERE `services`.`price_id` = 0 LIMIT 1
Service Load (0.2ms) SELECT `services`.* FROM `services` WHERE `services`.`price_id` = 1 LIMIT 1
find_by_column always returns only one record.
You can use a where condition for multiple ids, like this:
Model.where(column_name: [array_of_ids])
If view_context.time_plus returns an array of ids:
Service.where(price_id: view_context.time_plus).sum(:time)
You can try this
flash[:alert] = view_context.time_plus.map{|i| Service.find_by_price_id(i).time}.sum
This should work
You can use inject method:
sum = view_context.time_plus.inject(0) { |total, i| total + Service.find_by_price_id(i).time }
But since you are using Ruby on Rails, a better way is to use ActiveRecord and SQL:
sum2 = Service.where(price_id: [1,2]).sum(:time)
I know I can find the first user in my database from the command line with:
User.first
And I can find the last with:
User.last
My question is: how would I find the 11th user in the database?
You can use offset with order:
User.offset(10).order(:id).first
You can do:
User.limit(1).offset(10)
That reduces the work to a SQL statement that looks like this:
SELECT `users`.* FROM `users` LIMIT 1 OFFSET 10
Using all will require loading all the users into memory and then finding the 11th one in that array. Quite pricey.
You can do:
User.all[10]
User.all gives you an array of objects with indexes starting from 0. To access the 11th user you can do that.
Is there a way to compare an associated belongs_to record with an existing record without hitting the database? For example, I have a User and want to see if an Account belongs to that User. The simple way is account.user == user, but that loads the account's user from the database and then compares it with user. The DB call is:
SELECT `users`.* FROM `users` WHERE `users`.`id` = 1 LIMIT 1
Alternatively, although it's undocumented, it looks like I can do user.accounts.include?(account), but that calls:
SELECT 1 AS one FROM `accounts` WHERE `accounts`.`user_id` = 1 AND `accounts`.`id` = 1 LIMIT 1
Of course, I could just do account.user_id == user.id, but that doesn't feel very "railsy".
I don't think it's possible, because Rails associations are lazily loaded by default. The accounts row in the database only contains the user_id column, and that is all it knows about the user; whenever you call account.user, a separate database query is made to fetch the other User columns so the record can be compared with other User objects.
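That said, a small hedged sketch of how you might avoid the extra query in practice: reuse the association when it is already loaded, and otherwise fall back to the foreign-key comparison. belongs_to_user? is a hypothetical helper name, not a Rails API; association(:user).loaded? is standard ActiveRecord.

def belongs_to_user?(account, user)
  if account.association(:user).loaded?
    account.user == user       # user is already in memory, no query
  else
    account.user_id == user.id # compares the foreign key only, no query
  end
end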