Where to keep site-wide counters in Rails

Newb developer here. I need to put some counters on the site that count the number of times certain songs are played. It's related to the Song model, but it doesn't have to be, and it isn't tied to any one song instance. Just wondering where I put a counter that behaves like a site-wide value?
if Subscription.where("band_id = ? and fan_id = ?", @band.id, @fan.id).any? && @fan.donating
  @song.total_sub_donate_plays += 1
  @song.total_month_sub_donate_plays += 1
  site_wide.counter_total += 1
  site_wide.counter_total_month += 1
end

If you have memcached/redis set up, then use Rails.cache:
http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#method-i-increment
Something like:
if Subscription.where("band_id = ? and fan_id = ?", @band.id, @fan.id).any? && @fan.donating
  @song.total_sub_donate_plays += 1
  @song.total_month_sub_donate_plays += 1
  Rails.cache.increment("stats/counters/total")
  Rails.cache.increment("stats/counters/#{Date.current.strftime("%Y/%m")}", 1, expires_in: 40.days)
end
Then later you can read them back:
Rails.cache.read("stats/counters/total")
Rails.cache.read("stats/counters/#{Date.current.strftime("%Y/%m")}")
Another option is to create aggregate data tables. This will give you more control in the long run.
Something like:
class DailySongData
  # event_date: date
  # sub_count: integer
  # listing_count: integer
end
This will help organize your data so you can learn from it later.
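As a sketch of that aggregate-table idea (DailySongData, event_date, and sub_count are just the hypothetical columns from above, not an existing schema), you could bump a per-day row inside the same conditional:
# Hypothetical daily rollup; assumes a daily_song_datas table with
# event_date:date, sub_count:integer, listing_count:integer.
row = DailySongData.find_or_create_by(event_date: Date.current)
row.increment!(:sub_count) # one more subscriber-donation play today
Monthly and all-time totals then become simple sums over event_date ranges instead of separate counters.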

Related

Getting all the pages from an API

This is something I struggle with, and whenever I do it, it seems to end up messy.
I'm going to ask the question in a very generic way as it's not a single problem I'm really trying to solve.
I have an API that I want to consume some data from, e.g. via:
def get_api_results(page)
  results = HTTParty.get("api.api.com?page=#{page}")
end
When I call it, I can retrieve a total:
results["total"] # => 237
The API limits the number of records I can retrieve in one call, say 20. So I need to call it a few more times.
I want to do something like the following, ideally breaking it into pieces so I can use things like delayed_job, etc.
def get_all_api_pages
  first_page = get_api_results(1)
  total = first_page["total"]
  results = [first_page]
  page = 2
  until (page - 1) * 20 >= total
    results << get_api_results(page)
    page += 1
  end
  results
end
I always feel like I'm writing rubbish whenever I try and solve this (and I've tried to solve it in a number of ways).
The above, for example, leaves me at the mercy of the API: if I hit an error at any point, it knocks out all the results I've collected so far.
Wondering if there is just a generally good, clean way of dealing with this situation.
I don't think you can get it much cleaner, because you only receive the total once you have called the API.
Have you tried building your own Enumerator for this? It encapsulates the ugly part. Here is a bit of sample code with a "mocked" API:
class AllRecords
  PER_PAGE = 50

  def each
    return enum_for(:each) unless block_given?
    current_page = 0
    total = nil
    while total.nil? || current_page * PER_PAGE < total
      current_page += 1
      page = load_page(current_page)
      total = page[:total]
      page[:items].each do |item|
        yield(item)
      end
    end
  end

  private

  def load_page(page)
    if page == 5
      {items: Array.new(37) { rand(100) }, total: 237}
    else
      {items: Array.new(50) { rand(100) }, total: 237}
    end
  end
end

AllRecords.new.each.each_with_index do |item, index|
  p index
end
You can surely clean that up a bit, but I think this is nice because it does not collect all the items first.
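To address the original worry about one API error wiping out everything collected so far, a possible variation (the retry count and the blanket StandardError rescue are arbitrary choices here, not anything the API dictates) is to retry each page independently inside load_page, so a transient failure only re-fetches that one page:
def load_page(page)
  attempts = 0
  begin
    HTTParty.get("api.api.com?page=#{page}") # same hypothetical endpoint as in the question
  rescue StandardError
    attempts += 1
    retry if attempts < 3 # give each page a few chances
    raise                 # then give up and surface the error
  end
end
Because the enumerator yields items page by page, a consumer that processes items as they arrive keeps everything it has already handled even if a later page ultimately fails.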

How do I get the desired output in my view?

I need some assistance. I'm working on creating a weekly schedule, but my output is not displaying the desired criteria. Everything is displaying, but it's not assigning the start and end time properly and I can't figure out why. I'm new to Ruby, so any help would be greatly appreciated. Below is the method from my controller and the output in the view.
def generate_schedule
  logger.info @generate_schedules
  # create hash for view
  @generate_schedules = Hash.new
  # find current required schedule
  schedule_structure = Availability.where(:addressable_type => 'CallCenter', :addressable_id => session[:employee].call_centers.first.id).all
  # find all available employee times
  employee_availability = Availability.where('priority > 0').where(:addressable_type => 'Employee', :addressable_id => session[:employee].call_centers.first.employees.collect(&:id)).all
  # create availability to requirement hash to determine order of days
  availability_to_requirement_hash = create_availability_to_requirement_hash(schedule_structure, employee_availability)
  # iterate through the hash day by day; the `a` value is of no use to us here
  availability_to_requirement_hash.each do |day, a|
    # select the employees with availability for the chosen day, sort them by priority, and then iterate through them to clear the requirements
    employee_availability.select { |b| b.day_increment == day }.sort_by { |c| c.priority }.group_by(&:addressable).each do |employee, availabilities|
      # select the start time for the current day defined from requirement hash
      start_time = schedule_structure.select { |b| b.day_increment == day && b.priority > 0 }.first.time_increment
      # define the length of the shift
      shift_length = employee.length_of_shift > availabilities.length ? availabilities.length : employee.length_of_shift
      # define the end time for the shift
      end_time = start_time + shift_length
      logger.info end_time
      # check if employee is already defined in hash; if present, add the shift as an array of day, start time and end time, otherwise add employee to hash with the array as value
      @generate_schedules.has_key?(employee) ? @generate_schedules[employee] << [day, start_time, end_time] : @generate_schedules[employee] = [[day, start_time, end_time]]
      logger.info schedule_structure
      # update the schedule structure to reflect the time removed // do not change database
      (1..shift_length).each do |d|
        schedule_structure.select { |e| e.day_increment == day && e.priority > 0 }.first.priority -= 1
      end
    end
  end
end
Here's an example of my output:
[screenshot: Example of View]
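For reference, and purely as an illustration of the structure the method above builds (not real data), @generate_schedules ends up keyed by employee, with each value an array of [day, start_time, end_time] triples, so a view could walk it roughly like this:
# Hypothetical shape after generate_schedule runs:
# { <Employee 1> => [[0, 9, 17], [1, 9, 13]], <Employee 2> => [[0, 13, 21]] }
@generate_schedules.each do |employee, shifts|
  shifts.each do |day, start_time, end_time|
    puts "#{employee.id}: day #{day}, #{start_time}-#{end_time}"
  end
end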

Retrieving only unique records with multiple requests

I have this "heavy_rotation" filter I'm working on. Basically it grabs tracks from our database based on certain parameters (a mixture of listens_count, staff_pick, purchase_count, to name a few)
An xhr request is made to the filter_tracks controller action. In there I have a flag to check if it's "heavy_rotation". I will likely move this to the model (cos this controller is getting fat)... Anyway, how can I ensure (in a efficient way) to not have it pull the same records? I've considered an offset, but than I have to keep track of the offset for every query. Or maybe store track.id's to compare against for each query? Any ideas? I'm having trouble thinking of an elegant way to do this.
Maybe it should be noted that a limit of 14 is set via Javascript, and when a user hits "view more" to paginate, it sends another request to filter_tracks.
Any help appreciated! Thanks!
def filter_tracks
  params[:limit] ||= 50
  params[:offset] ||= 0
  params[:order] ||= 'heavy_rotation'
  # heavy rotation filter flag
  heavy_rotation ||= (params[:order] == 'heavy_rotation')
  @result_offset = params[:offset]
  @tracks = Track.ready.with_artist
  params[:order] = "tracks.#{params[:order]}" unless heavy_rotation
  if params[:order]
    order = params[:order]
    order.match(/artist.*/) { |m|
      params[:order] = params[:order].sub /tracks\./, ''
    }
    order.match(/title.*/) { |m|
      params[:order] = params[:order].sub /tracks.(title)(.*)/i, 'LOWER(\1)\2'
    }
  end
  searched = params[:q] && params[:q][:search].present?
  @tracks = parse_params(params[:q], @tracks)
  @tracks = @tracks.offset(params[:offset])
  @result_count = @tracks.count
  @tracks = @tracks.order(params[:order], 'tracks.updated_at DESC').limit(params[:limit]) unless heavy_rotation
  # structure heavy rotation results
  if heavy_rotation
    puts "*" * 300
    week_ago = Time.now - 7.days
    two_weeks_ago = Time.now - 14.days
    three_months_ago = Time.now - 3.months
    # mix in top licensed tracks within last 3 months
    t = Track.top_licensed
    tracks_top_licensed = t.where(
      "tracks.updated_at >= :top",
      top: three_months_ago).limit(5)
    # mix in top listened-to tracks within last two weeks
    tracks_top_listens = @tracks.order('tracks.listens_count DESC').where(
      "tracks.updated_at >= :top",
      top: two_weeks_ago)
      .limit(3)
    # mix in top downloaded tracks within last two weeks
    tracks_top_downloaded = @tracks.order("tracks.downloads_count DESC").where(
      "tracks.updated_at >= :top",
      top: two_weeks_ago)
      .limit(2)
    # mix in 25% of staff picks added within 3 months
    tracks_staff_picks = Track.ready.staff_picks.
      includes(:artist).order("tracks.created_at DESC").where(
      "tracks.updated_at >= :top",
      top: three_months_ago)
      .limit(4)
    @tracks = tracks_top_licensed + tracks_top_listens + tracks_top_downloaded + tracks_staff_picks
  end
  render partial: "shared/results"
end
I think seeking an "elegant" solution is going to yield many diverse opinions, so I'll offer one approach and my reasoning. In this case I feel it's optimal and elegant to enforce uniqueness across query intersections by filtering the returned record objects, rather than trying to restrict the query itself to yield only unique results. For getting contiguous results for pagination, on the other hand, I would store the offset from each query and use it as the starting point for the next query, using instance variables or sessions depending on how the data needs to be persisted.
Here's a gist to my refactored version of your code with a solution implemented and comments explaining why I chose to use certain logic or data structures: https://gist.github.com/femmestem/2b539abe92e9813c02da
#filter_tracks holds a hash map, @tracks_offset, which the other methods can access and update; each of the query methods is responsible for adding its own offset key to @tracks_offset.
#filter_tracks also holds a collection of track ids, @result_track_ids, for tracks that already appear in the results.
If you need persistence, make @tracks_offset and @result_track_ids sessions/cookies instead of instance variables. The logic should be the same. If you use sessions to store the offsets and ids, remember to clear them when your user is done interacting with this feature.
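A minimal sketch of that session-backed variant, reusing the same names (nothing here is required by Rails beyond the standard session hash; adapt the keys to taste):
# In filter_tracks: seed working state from the session so each
# 'view more' request continues where the previous one left off.
@tracks_offset    = session[:tracks_offset]    ||= {}
@result_track_ids = session[:result_track_ids] ||= []

# When the user leaves the feature or starts a fresh filter:
def reset_heavy_rotation_state
  session.delete(:tracks_offset)
  session.delete(:result_track_ids)
end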
See below. Note that I refactored your #filter_tracks method to separate the responsibilities into nine different methods: #filter_tracks, #heavy_rotation, #order_by_params, #heavy_rotation?, #validate_and_return_top_results, and #tracks_top_licensed ... #tracks_top_<whatever>. This will make my notes easier to follow and your code more maintainable.
def filter_tracks
  # Does this need to be so high when JavaScript limits display to 14?
  @limit ||= 50
  @tracks_offset ||= {}
  @tracks_offset[:default] ||= 0
  @result_track_ids ||= []
  @order ||= params[:order] || 'heavy_rotation'

  tracks = Track.ready.with_artist
  tracks = parse_params(params[:q], tracks)
  @result_count = tracks.count

  # Checks for heavy_rotation filter flag
  if heavy_rotation? @order
    @tracks = heavy_rotation
  else
    @tracks = order_by_params
  end

  render partial: "shared/results"
end
All #heavy_rotation does is call the various query methods. This makes it easy to add, modify, or delete any one of the query methods as the criteria change, without affecting any other method.
def heavy_rotation
  week_ago = Time.now - 7.days
  two_weeks_ago = Time.now - 14.days
  three_months_ago = Time.now - 3.months

  tracks_top_licensed(date_range: three_months_ago, max_results: 5) +
    tracks_top_listens(date_range: two_weeks_ago, max_results: 3) +
    tracks_top_downloaded(date_range: two_weeks_ago, max_results: 2) +
    tracks_staff_picks(date_range: three_months_ago, max_results: 4)
end
Here's what one of the query methods looks like. They're all basically the same, but with custom SQL/ORM queries. You'll notice that I'm not setting the :limit parameter to the number of results I want the query method to return. Doing so would create a problem if one of the records returned is duplicated by another query method, e.g. if the same track was returned by staff_picks and top_downloaded; then I would have to make an additional query to get another record. That's not a wrong decision, just not the one I made.
def tracks_top_licensed(args = {})
  args = @default.merge args
  max = args[:max_results]
  date_range = args[:date_range]

  # Adds its own offset key to #filter_tracks' hash map @tracks_offset
  @tracks_offset[:top_licensed] ||= 0

  unfiltered_results = Track.top_licensed
    .where("tracks.updated_at >= :date_range", date_range: date_range)
    .limit(@limit)
    .offset(@tracks_offset[:top_licensed])

  top_tracks = validate_and_return_top_results(unfiltered_results, max)

  # Add the offset of your most recent query to the cumulative offset
  # so triggering 'view more'/pagination returns contiguous results
  @tracks_offset[:top_licensed] += top_tracks[:offset]

  top_tracks[:top_results]
end
In each query method, I'm cleaning the record objects through a custom method, #validate_and_return_top_results. The validator checks the record objects for duplicates against the @result_track_ids collection held by #filter_tracks. It then returns the number of records specified by its caller.
def validate_and_return_top_results(collection, max = 1)
  top_results = []
  i = 0 # offset incrementer
  until top_results.count >= max do
    # Stop early if the collection runs out of candidates
    break if collection[i].nil?
    # Checks if track has already appeared in the results
    unless @result_track_ids.include? collection[i].id
      # this will be returned to the caller
      top_results << collection[i]
      # this is the point of reference to validate your query method results
      @result_track_ids << collection[i].id
    end
    i += 1
  end
  { top_results: top_results, offset: i }
end

How to specify a minimum count on an association using active record

I have a Battle class which has many Participants. I'm using this class method to return the last Battle to be voted on:
def self.get_voteable_battle
  # return the battle whose submissions_deadline ended less than 3 days ago
  time_now = Time.now
  end_of_day = time_now.end_of_day
  return self.where(:submissions_deadline => end_of_day.ago(3.days)..time_now).first
end
But I also want to ensure that there are at least 2 participants.
I can add another condition like this:
def self.get_voteable_battle
  # return the battle whose submissions_deadline ended less than 3 days ago
  time_now = Time.now
  end_of_day = time_now.end_of_day
  battle = self.where(:submissions_deadline => end_of_day.ago(3.days)..time_now).first
  if battle && battle.participants.count > 1
    return battle
  else
    return nil
  end
end
But this would require another query, right? Is there a way to do it in one query using Active Record?
You could try
self.joins(:participants).where(:submissions_deadline => end_of_day.ago(3.days)..time_now).group('battles.id').having('count(participants.id) > ?', 1).last
It may need to be the battle id in the having clause instead, but one of the two will work.
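Folded back into the class method from the question (a sketch; it assumes the participants association and submissions_deadline column shown above, and the group is there so the having clause has something to aggregate over):
def self.get_voteable_battle
  time_now = Time.now
  end_of_day = time_now.end_of_day
  joins(:participants)
    .where(:submissions_deadline => end_of_day.ago(3.days)..time_now)
    .group('battles.id')
    .having('count(participants.id) > ?', 1)
    .first
end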

Calculating royalties based on ranges in Rails 3

I've built an application that tracks sales of books and is -- hopefully -- going to calculate author royalties.
Right now, I track sales in orders. Each order has_many :line_items. When a new line item is saved, I calculate the total sales of a given product, so I have a running total sales count.
Each author has multiple royalty rules based on their contract. For example, from 0 to 5,000 copies sold they get 10 percent; from 5,001 to 10,000 they get 20 percent. At first I was calculating the author's share per line item. It was working well, but then I realized that my app chooses which royalty rule to apply based on the total sales. If I post a big order, it's possible that the author's royalties would be calculated at the high royalty rate for the entire line item when in fact the royalty should be split between the lower and higher rates (as in, one line item pushes the total sales past a royalty rule break point).
So my question is how to best go about this. I've explored using ranges but this is all a little new to me and the code is getting a little complex. Here's the admittedly clunky code I'm using to pull all the royalty rules for a given contract into an array:
def royalty_rate
  @product = Product.find_by_id(product_id)
  @total_sold = @product.total_sold
  @rules = Contract.find_by_product_id(@product).royalties
  # ... where next?
end
@rules has :lower and :upper for each royalty rule, so for this product the first :lower would be 0 and the :upper would be 5000, then the second :lower would be 5001 and the second :upper would be 10,000, and so on.
Any help or ideas with this would be appreciated. It's actually the last step before I have a fully working version I can play with.
I was using this code below to pick out a specific rule based on the value of total_sold, but again, that has the effect of taking the cumulative sales and choosing the highest royalty rate instead of splitting them.
@rules = @contract.royalties.where("lower <= :total_sold AND upper >= :total_sold", {:total_sold => @total_sold}).limit(1)
Thanks in advance.
It sounds like you need to store the royalty calculation rules separately for each author -- or perhaps you have several schemes and each author is associated with one of them?
For the first case, perhaps something like this:
class Author
  has_many :royalty_rules
end

class RoyaltyRule
  belongs_to :author
  # columns :lower, :upper, :rate
end
So when an Author is added, you add rows to the RoyaltyRule model for each tier. Then you need a method to calculate the royalty:
class Author
  def royalty(product)
    units = product.total_sold
    amount = 0
    royalty_rules.each do |rule|
      case units
      when 0
        # nothing sold, nothing to add
      when Range.new(rule.lower, rule.upper)
        # reached the last applicable rule -- add the part falling within the tier
        amount += (units - rule.lower + 1) * rule.rate
        break
      else
        # add the full amount for the tier
        amount += (rule.upper - rule.lower + 1) * rule.rate
      end
    end
    amount
  end
end
And some specs to test:
describe Author do
  before(:each) do
    @author = Author.new
    @tier1 = mock('tier1', :lower => 1, :upper => 5000, :rate => 0.10)
    @tier2 = mock('tier2', :lower => 5001, :upper => 10000, :rate => 0.20)
    @tier3 = mock('tier3', :lower => 10001, :upper => 15000, :rate => 0.30)
    @author.stub(:royalty_rules) { [@tier1, @tier2, @tier3] }
  end

  it "should work for one tier" do
    product = mock('product', :total_sold => 1000)
    @author.royalty(product).should == 100
  end

  it "should work for two tiers" do
    product = mock('product', :total_sold => 8000)
    @author.royalty(product).should == (5000 * 0.10) + (3000 * 0.20)
  end

  it "should work for three tiers" do
    product = mock('product', :total_sold => 14000)
    @author.royalty(product).should == (5000 * 0.10) + (5000 * 0.20) + (4000 * 0.30)
  end

  # edge cases
  it "should be zero when units is zero" do
    product = mock('product', :total_sold => 0)
    @author.royalty(product).should == 0
  end

  it "should be 500 when units is 5000" do
    product = mock('product', :total_sold => 5000)
    @author.royalty(product).should == 500
  end

  it "should be 500.2 when units is 5001" do
    product = mock('product', :total_sold => 5001)
    @author.royalty(product).should == 500.2
  end
end
Notes: Author.royalty_rules needs to return the tiers sorted low to high. Also, the lowest tier starts with 1 instead of 0 for easier calculation.
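If the original per-line-item concern still matters (one big order straddling a tier boundary), one hedged way to reuse the cumulative calculation above is to credit each line item with the difference between the cumulative royalty after and before the order; royalty_for_units below is a hypothetical variant of #royalty that takes a unit count directly instead of reading product.total_sold:
class Author
  # Hypothetical helper: same tier walk as #royalty, driven by an explicit unit count.
  def royalty_for_units(units)
    amount = 0
    royalty_rules.each do |rule|
      case units
      when 0
        # nothing sold, nothing to add
      when Range.new(rule.lower, rule.upper)
        amount += (units - rule.lower + 1) * rule.rate
        break
      else
        amount += (rule.upper - rule.lower + 1) * rule.rate
      end
    end
    amount
  end

  # Royalty earned by a single line item, split correctly across tiers.
  def royalty_for_line_item(previous_total_sold, quantity)
    royalty_for_units(previous_total_sold + quantity) - royalty_for_units(previous_total_sold)
  end
end
So an order that pushes the total from 4,800 to 5,200 copies earns 200 units at the 10% rate and 200 at the 20% rate, with no special-case code.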
So, I don't get why you can't just calculate on total quantity sold?
I'll assume they don't need to know at the exact moment of the order, so why not calculate based on quantity sold as of yesterday?
E.g., run a rake task in the morning (let's say) that uses the following module, RoyaltyPayments, defined in lib/royalty_payments.rb.
Do something like:
module RoyaltyPayments
  def royalty_range(total_sold, product_price)
    # Tiered royalty on revenue (assuming royalties are a percentage of
    # units * price): 10% up to 5000 units, 20% on units 5001-10000,
    # 30% on everything above that.
    case total_sold
    when 0..5000
      total_sold * product_price * 0.10
    when 5001..10000
      (5000 * 0.10 + (total_sold - 5000) * 0.20) * product_price
    else
      (5000 * 0.10 + 5000 * 0.20 + (total_sold - 10000) * 0.30) * product_price
    end
  end
end
Then make lib/tasks/royalty_payments.rake.
In that file, put something like:
include RoyaltyPayments

namespace :royalty_payments do
  desc "Make royalty calculations"
  task :calculate_latest_totals => :environment do
    Product.all.each do |product|
      total_sold = product.total_sold
      royalty_range(total_sold, product.price)
    end
  end
end
Something like that.
You can build ranges from variables, e.g. min..max:
$irb
> min = 0
=> 0
> max = 5000
=> 5000
> min..max
=> 0..5000
> (min..max).class
=> Range
% is Numeric#modulo; see http://www.ruby-doc.org/core/classes/Numeric.html#M000969 for details.
