I'm building a Rails app that has users and scores. I want the top half of the users to get paid out. I store a separate tiebreaker input for each user in case they happen to tie for last place (the last paid-out place). For example, if there are 8 users and 4th and 5th tie on points, then my tiebreaker should be called. I need help with this.
This is what I have tried:
First I am counting the users and determining the top half of the players:
theUsersCount = ParticipatingUser.where(game_id: game_id).size
numofWinners = theUsersCount / 2
Then I am taking the users and their scores, pushing them into an array, and showing only the top half of the users, i.e. those that won.
userscores.push("#{user.username}" => playerScore)
userscores[0..numofWinners].sort_by { |y| y[:score] }
But I am unsure of how to execute the tiebreaker if there is a tie for last place.
To get the user count you could use count rather than size: count runs a COUNT query in the database and returns the number, while length loads all the rows and counts them in Ruby:
user_count = ParticipatingUser.where(game_id: game_id).count
(Actually, the above is not quite right: you should use size, which smartly chooses between length and count depending on whether the relation is already loaded. Thanks #nzifnab.)
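For reference, the difference roughly looks like this:

users = ParticipatingUser.where(game_id: game_id)
users.count  # always runs SELECT COUNT(*) in the database
users.length # loads every record into memory, then counts the array
users.size   # uses length if the relation is already loaded, COUNT otherwise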
Now, find the score of the user in place user_count / 2:
minimal_score = ParticipatingUser.where(game_id: game_id).order(score: :desc).pluck(:score).take(user_count / 2).last
And take all the users with this score or more:
winning_users = ParticipatingUser.where(game_id: game_id).where('score >= ?', minimal_score).order(score: :desc)
Now check whether there are more winners than expected:
if winning_users.size > user_count/2
then break your ties:
tie_breaker(winning_users[user_count/2-1..-1])
All together:
user_count = ParticipatingUser.where(game_id: game_id).size
minimal_score = ParticipatingUser.where(game_id: game_id).order(score: :desc).pluck(:score).take(user_count / 2).last
winning_users = ParticipatingUser.where(game_id: game_id).where('score >= ?', minimal_score).order(score: :desc)
if winning_users.size > user_count / 2
  losers = tie_breaker(winning_users[user_count / 2 - 1..-1])
  winning_users -= losers
end
winning_users
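As for the tie_breaker itself, you haven't shown how the stored tiebreaker input works, but a minimal sketch could look like the following, assuming a numeric tiebreaker column on ParticipatingUser (hypothetical name) where the highest value wins; adapt it to your actual rule. It receives the users tied around the last paid place and returns the ones that lose:

def tie_breaker(tied_users)
  # the best stored tiebreaker keeps the last paid slot; everyone else loses
  sorted = tied_users.sort_by { |user| -user.tiebreaker.to_f }
  sorted.drop(1)
end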
Related
I have an app that saves high scores for its users. We want to show a leaderboard but we only want every user to show up once with their highest score. We have a HighScore model, which saves scores and has a few other fields (game type, game settings, etc). HighScore has a has_many_through relationship to User (through high_score_user) because a HighScore can have multiple users (in case of a game that's played with multiple players). Now I need a function that shows that leaderboard but I have yet to find a good way to write this code.
Currently I simply grab the top 500 scores, include the high_score_users, then iterate through them to filter out duplicate users. Once I have 10 scores, I return those scores. Obviously this is extremely suboptimal and it's very, very slow. Here's the code I have so far:
def self.unique_leaderboard(map_name, score_type, game_mode, game_type)
  used_scores = Rails.cache.fetch("leaderboards_unique/#{map_name}/#{score_type}/#{game_type}/#{game_mode}",
                                  expires_in: 10.minutes) do
    top500 = HighScore
    top500 = top500.for_map(map_name) if map_name
    top500 = top500.for_type(score_type) if score_type
    top500 = top500.for_gametype(game_type) if game_type
    top500 = top500.for_gamemode(game_mode) if game_mode
    top500 = top500.current_season
    top500 = top500.ranked(game_type)
    top500 = top500.includes(:high_score_users)
    top500 = top500.limit(500)

    used_scores = []
    top500.each do |score|
      break if used_scores.count >= 10
      next unless (used_scores.map do |used_score|
        used_score.high_score_users.map(&:user_id)
      end.flatten.uniq & score.high_score_users.map(&:user_id)).empty?
      used_scores << score
    end
    used_scores
  end

  HighScore.where(id: used_scores.map(&:id)).includes(:users).ranked(game_type)
end
I'm using Ruby on Rails with Postgres. How do I improve this code so it's not incredibly slow? I couldn't find a way to do it in SQL, nor could I find a way to do it properly with ActiveRecord.
I would bet that the major time killer is this code:
next unless (used_scores.map do |used_score|
  used_score.high_score_users.map(&:user_id)
end.flatten.uniq & score.high_score_users.map(&:user_id)).empty?
It is executed up to 500 times in the worst case, and each iteration is relatively heavyweight due to several unnecessary maps. All of this computational complexity exists just to track unique user_ids from the high scores (so the iteration can short-circuit as soon as 10 unique top scorers are selected).
So if you just replace
...
used_scores = []
top500.each do |score|
  ...
end
used_scores
...
with something like
top_scorers = Hash.new { |h, k| h[k] = [] }
top500.each do |score|
  break if top_scorers.size >= 10
  score.high_score_users.each do |user|
    top_scorers[user.id] << score
  end
end
top_scorers.values.flatten.uniq
it should become significantly faster already.
But honestly, fetching 500 high scores just to pick 10 top scorers seems odd anyway. This task can be handled perfectly well by the database itself. If <high scores SQL query> is your high-scores query (without the limit part), then something like this would do the job (pseudocode, just to illustrate the idea):
user_ids = select distinct(user_id) from <high scores SQL query> limit 10;
select * from <high scores SQL query> where user_id in <user_ids>
(this pseudocode can be "translated" into AR queries in different ways; that part is up to you)
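For example, a rough ActiveRecord version of that idea might look like this (a sketch only, reusing the scopes from the question; because Postgres is strict about SELECT DISTINCT combined with ORDER BY on other columns, the user ids are de-duplicated in Ruby here, which is still far cheaper than instantiating 500 full records):

scores = HighScore.for_map(map_name)
                  .for_type(score_type)
                  .for_gametype(game_type)
                  .for_gamemode(game_mode)
                  .current_season
                  .ranked(game_type)

# ids only, best scores first thanks to the ranked scope
top_user_ids = scores.joins(:high_score_users)
                     .pluck("high_score_users.user_id")
                     .uniq
                     .first(10)

leaderboard = scores.joins(:high_score_users)
                    .where(high_score_users: { user_id: top_user_ids })
                    .includes(:users)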
I've looked over some SO discussions here, and at least I haven't seen this perspective. I'm trying to write code to count bookings of a given resource, where I want to find the MINIMUM number of resources I need to fulfill all bookings.
Let's use an example of hotel rooms. Given that I have the following bookings
Chris: July 4-July 17
Pat: July 15-July 19
Taylor: July 10-July 11
Chris calls and would like to add some room(s) to their reservation for friends, and wonders how many rooms I have available.
Rooms_available = Rooms_in_hotel - Rooms_booked
The Rooms_booked is where I'm having trouble. It seems like most questions (and indeed my code) just look at overlapping dates. So it would do something like this:
Booking.where("booking_end >= :start_check AND booking_start <= :end_check", { start_check: "July 4, 2021".to_date, end_check: "July 7, 2021".to_date})
This code would return 3. Which means that if the hotel theoretically had 5 rooms, I would tell Chris that there were 2 more rooms left available.
However, while this method of counting is technically accurate, it misses the possibility of an efficient overlap. Namely that since Taylor checks out 4 days before Pat, they can both be "assigned" the same room. So technically, I can offer 3 more rooms to Chris.
So my question is how do I more accurately calculate Rooms_booked allowing for efficient overlap (i.e., efficient resource allocation)? Is there a query using ActiveRecord or what calculation do I impose on top of the existing query?
I don't think a single query can solve your problem (or it would be a very complex query).
My idea is to group (and count) bookings by (booking_start, booking_end), ordered by booking_start ASC, and then re-assign rooms. For example, if there are 2 bookings for July 15-July 19 and 3 bookings for July 10-July 11, we can only re-use rooms for 2 pairs, so we need 3 rooms (2 rooms each covering a July 10-July 11 booking followed by a July 15-July 19 booking, and 1 room for the remaining July 10-July 11 booking).
The re-assignment happens in code, not in the query (we can optimize by picking a narrow range of time).
# When Chris calls and would like to add some room(s) to their reservation for friends,
# and wonders how many rooms are available:
# pick (start_time, end_time) so that the time range is long enough to include
# bookings from {n} months ago, but it doesn't need to cover all time.
scope :booking_available_count, ->(start_time, end_time) {
  group_by_time =
    Booking.where("booking_start >= ? AND booking_end <= ?", start_time, end_time)
           .group(:booking_start, :booking_end).order(:booking_start).count
  # result: { [booking_start, booking_end] => 1, [booking_start, booking_end] => 2, ... }
  # ordered by booking_start ASC, so we can re-assign rooms from left to right as below

  booked = 0
  group_by_time.each do |(range_start, range_end), count|
    group_by_time.each do |(assign_start, _assign_end), assign_count|
      # a room can only be re-used by a booking that starts after this one ends
      next if range_end > assign_start
      count -= assign_count # re-assign
      break if count <= 0
    end
    booked += count if count > 0
  end

  # return the number of available rooms (a negative number is allowed)
  Room.count - booked
}
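For what it's worth, the minimum number of rooms can also be computed with the classic sweep over check-in/check-out events: it equals the peak number of bookings active at the same time. A sketch (it assumes a booking that ends on a given day frees the room for one starting that same day; adjust the tie-break if that's not your rule):

booked_ranges = Booking.where("booking_end >= ? AND booking_start <= ?", start_time, end_time)
                       .pluck(:booking_start, :booking_end)

# +1 at every check-in, -1 at every check-out; check-outs sort first on equal dates
events = booked_ranges.flat_map { |starts_on, ends_on| [[starts_on, 1], [ends_on, -1]] }
                      .sort_by { |time, delta| [time, delta] }

active = 0
rooms_booked = 0
events.each do |_time, delta|
  active += delta
  rooms_booked = active if active > rooms_booked
end

rooms_available = Room.count - rooms_booked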
I have an app that returns a bunch of records in a list in a random order with some pagination. For this, I save a seed value (so that refreshing the page will return the list in the same order again) and then use .order('random()').
However, say that out of 100 records, I have 10 records that have a preferred_level = 1 while all the other 90 records have preferred_level = 0.
Is there some way that I can place the preferred_level = 1 records first but still keep everything randomized?
For example, I have [A,1],[X,0],[Y,0],[Z,0],[B,1],[C,1],[W,0] and I hope I would get back something like [B,1],[A,1],[C,1],[Z,0],[X,0],[Y,0],[W,0].
Note that even the ones with preferred_level = 1 are randomized within themselves, just that they come before all the 0 records. In theory, I would hope whatever solution would place preferred_level = 2 before the 1 records if I were ever to add them.
------------
I had hoped it would be as intuitively simple as Model.all.order('random()').order('preferred_level DESC') but that doesn't seem to be the case. The second order doesn't seem to affect anything.
Any help would be appreciated!
This got the job done for me on a very similar problem.
select * from table order by preferred_level = 1 desc, random()
or I guess the Rails way
Model.order('preferred_level = 1 desc, random()')
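One caveat: on recent Rails versions, raw SQL fragments passed to order are flagged (deprecated and eventually disallowed) unless wrapped in Arel.sql. Ordering by the column itself, rather than the boolean preferred_level = 1, also keeps working if you later introduce preferred_level = 2:

Model.order(Arel.sql('preferred_level DESC, random()'))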
I have a query that loads thousands of objects and I want to tame it by using find_in_batches:
Car.includes(:member).where(:engine => "123").find_in_batches(batch_size: 500) ...
According to the docs, I can't have a custom sorting order: http://www.rubydoc.info/docs/rails/4.0.0/ActiveRecord/Batches:find_in_batches
However, I need a custom sort order of created_at DESC. Is there another method to run this query in chunks like it does in find_in_batches so that not so many objects live on the heap at once?
Hm, I've been thinking about a solution for this (I'm the person who asked the question). It makes sense that find_in_batches doesn't allow a custom order. Let's say you sort by created_at DESC and specify a batch_size of 500. The first loop covers rows 1-500, the second covers rows 501-1000, and so on. What if, before the 2nd loop runs, someone inserts a new record into the table? It would land at the top of the query results, everything would shift down by one, and your 2nd loop would repeat a record.
You could argue that created_at ASC would be safe then, but even that is not guaranteed if your app sets created_at explicitly.
UPDATE:
I wrote a gem for this problem: https://github.com/EdmundMai/batched_query
Since using it, the average memory usage of my application has HALVED. I highly suggest that anyone with similar issues check it out! And contribute if you want!
The slower, manual way is to do something like this:
count = Car.where(engine: "123").count
batches = count / 500
batches += 1 if count % 500 > 0

# This assumes ids increase along with created_at, so walking the ids downwards
# yields created_at DESC order.
last_id = nil
while batches > 0
  scope = Car.where(engine: "123").order(id: :desc).limit(500)
  scope = scope.where("id < ?", last_id) if last_id
  ids = scope.ids # plucks just the ids
  cars = Car.includes(:member).where(id: ids).order(id: :desc)
  # cars.each or cars.update_all
  # do your updating
  last_id = ids.last
  batches -= 1
end
Can you imagine how find_in_batches with sorting would work on 1M rows or more? It would sort all the rows for every single batch.
So I think it's better to reduce the number of sort operations. For example, with a batch size of 500 you can load only the IDs (with sorting) for N * 500 rows, and afterwards just load each batch of objects by those IDs. That way the number of sorted queries hitting the DB drops by a factor of N.
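A sketch of that idea (here all ids are plucked in one sorted query for simplicity; you could also pull them in chunks of N * 500 as described above):

# one sorted query for the ids only
sorted_ids = Car.where(engine: "123").order(created_at: :desc).pluck(:id)

sorted_ids.each_slice(500) do |batch_ids|
  cars_by_id = Car.includes(:member).where(id: batch_ids).index_by(&:id)
  cars = batch_ids.map { |id| cars_by_id[id] } # restore the created_at DESC order
  # process cars ...
end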
I have a model called Game in which I build up a scoped query.
Something like:
games = Game.scoped
games = games.team(team_name) if team_name
games = games.opponent(opponent_name) if opponent_name
total_games = games
I then calculate several subsets like:
wins = games.where("team_score > opponent_score").count
losses = games.where("opponent_score > team_score").count
Everything is great. Then I decided that I want to limit the original scope to show the last X number of games.
total_games = games.limit(10)
If there are 100 games that match what I want for total_games, and then I add .limit(10) - it gets the last 10. Great. But now calling
total_games.where("team_score > opponent_score").count
will reach back beyond the last 10, and into results that aren't part of total_games. Since adding .limit(10), I'll always get 10 total games, but also 10 wins, and 10 losses.
After typing this all out, I've realized that the cases where I want to use limit are for showing a smaller set of results - so I'll probably end up just looping through the results to calculate things like wins and losses (instead of doing separate queries as in my subsets above).
I tried this out when total_games had hundreds or thousands of results, and it's significantly slower to loop through than it is to just do separate queries for the subsets.
So, now I must know: what is the best way to limit a scoped query and then run further queries that restrict themselves to the results returned by the original .limit(x)?
I don't think you can do what you want without separating your query into two steps: first get the 10 games from total_games, making the DB query with all:
last_10_games = total_games.limit(10).all
then selecting from the resulting array and getting the size of the result:
wins = last_10_games.select { |g| g.team_score > g.opponent_score }.count
losses = last_10_games.select { |g| g.opponent_score > g.team_score }.count
I know this is not exactly what you asked for, but I think it's probably the most straightforward solution to the problem.
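If you'd rather keep the counting in the database, another option is to pin the limited set with an id subquery and run the aggregates against that (a sketch; Postgres accepts a LIMIT inside an IN subquery, while some MySQL versions do not, and you'll still need an order on total_games for "last 10" to mean anything):

last_10 = Game.where(id: total_games.limit(10).select(:id))

wins   = last_10.where("team_score > opponent_score").count
losses = last_10.where("opponent_score > team_score").count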