Rails: Track Points On A Weekly Basis - ruby-on-rails

In my current application, I need to track points on a weekly basis so that each user's point total resets to zero at the start of every week. I was planning on using the merit gem to track points: https://github.com/tute/merit
In my user's profile I have a field that stores the points. What I have been unable to locate is how I can have Rails automatically clear this field for all users.
I have come across some information (Rails reset single column); I think this may be the answer in terms of resetting it every Sunday at a set time -- but I am uncertain about this last part, and also about where the code would go (model or controller).
Also, I would welcome any suggestions if there is a better method.

You'd be better off making a Point model which belongs_to :user.
This will allow you to add as many points as you want, and you can then query the table on the created_at column to get a .count of the points for whatever timespan you want.
I can give you more info if you think it appropriate.
Models
One principle we live by is to split data out into as many focused models as possible.
You want each model to hold only its own data, which keeps db calls lean. I'm not super experienced with databases, but it's my opinion that having a lot of smaller models is more efficient than one huge model.
So in your question, you wanted to assign some points to a user. The "right" way to do this is to store every point event perpetually, which can only be done with its own model.
Points
#app/models/point.rb
class Point < ActiveRecord::Base
  belongs_to :user
end

#app/models/user.rb
class User < ActiveRecord::Base
  has_many :points
end
Points table could look like this:
points
id | user_id | value | created_at | updated_at
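A minimal migration for that table might look like this (a sketch; the default of 1 on value matches the suggestion below):
class CreatePoints < ActiveRecord::Migration
  def change
    create_table :points do |t|
      t.integer :user_id
      t.integer :value, :default => 1 # how many points this event is worth
      t.timestamps
    end
    add_index :points, :user_id
  end
end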
Saving
To save the points, you will literally just have to add extra records to the points table. The simplest way to achieve this will be to merge the params, like this:
#app/controllers/points_controller.rb
class PointsController < ApplicationController
  def new
    @point = Point.new
  end

  def create
    @point = Point.new(points_params)
    @point.save
  end

  private

  def points_params
    params.require(:points).permit(:value).merge(:user_id => current_user.id)
  end
end
You can define the "number" of points by setting the value column (which I'd give a default of 1). This is presumably how StackOverflow awards different numbers of points: by setting the value column differently ;)
Counting
To get weekly counts, you'll need some sort of scope that limits the points to a given week, like this:
#app/models/point.rb
def self.weekly
  where(:created_at => Time.current.beginning_of_week..Time.current.end_of_week)
end
That scopes the query to the current week; you can adjust the range to pull any week you like.
I'll happily sort out more of this for you if you let me know a little more about how you'd like to record / display the weekly stats. Is it going to be operated via a cron job or something?
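Once that's in place, a user's total for the current week is a one-liner (a sketch, assuming the value column above):
current_user.points.weekly.sum(:value)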

Based on your description, you might want to simply track the user's points and the time they got them. Then you can query for any 1-week period (or other periods, if you decide you want all-time, annual, etc.) and you won't lose historical data.
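For example (a sketch, assuming the Point model from the answer above):
# Any period works the same way: pass a range over created_at
week = 3.weeks.ago.beginning_of_week
user.points.where(:created_at => week..week.end_of_week).sum(:value) # one past week
user.points.where(:created_at => Time.current.beginning_of_year..Time.current.end_of_year).sum(:value) # annual
user.points.sum(:value) # all-time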

Related

How to create a class/scope method that orders objects based on values from an instance method?

I’ve been struggling for hours trying to develop a class or scope method, but I have only a beginner's knowledge of SQLite, and everything I’ve tried and read has thus far failed.
I’m trying to find a way to order lists by their average rating.
I have a has_many/belongs_to association between List and Rating: each List has_many Ratings. I then have an instance method that calculates a list’s average rating:
def average_rating
  self.ratings.average(:rating).to_i
end
I’m now trying to find a way to order lists by their average rating, but am not having any success. Following another post, I tried this method:
def self.highest_rating
  List.all.sort_by(&:average_rating)
end
But it simply returns all the lists in no particular order, with this query:
SELECT AVG("ratings"."rating") FROM "ratings" WHERE "ratings"."rated_id" = ?
I've thought of making average_rating an attribute on the List model, but have had difficulty even developing a scope method for that. If you could offer any advice or assistance I would greatly appreciate it!
First of all, you can only sort your query results by a column, not by a computed value. When you try to sort by average rating as you do now, it's effectively like:
List.all.sort_by { 5 } # you sort by the result of the method, which is just a number
which doesn't make much sense at the database level.
I believe you need to add an average_rating column to your List model. You can then calculate the average rating for every list, store that value in the column, and sort lists by that column like this:
def self.highest_rating
  List.order(:average_rating => :desc)
end
To calculate the average rating, you can use a callback that runs every time a new rating is created. It might look like:
class Rating < ActiveRecord::Base
  after_save :calculate_average

  private

  def calculate_average
    #your code
  end
end
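That callback body might look something like this (a sketch; it assumes Rating belongs_to :list, though your rated_id column suggests you may need to adjust the association):
class Rating < ActiveRecord::Base
  belongs_to :list

  after_save :calculate_average

  private

  # Recompute and store the list's average from all of its ratings
  def calculate_average
    list.update_attribute(:average_rating, list.ratings.average(:rating).to_i)
  end
end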

Retroactive changes to user records (point reconciliation)

I have three models:
Course
User
CourseCompletion
In addition to stuff like title and content, each course also has a point value. When a user completes a course, they are awarded the point value for the course, which is added to their total_points. CourseCompletion simply tracks which user has completed which courses (with columns user_id, course_id and completion_date).
One weakness with this data model is that if an admin user edits the point value of a course after a user has completed that course, the user's points are not updated. I need a way to do this retroactively. For example, if a user completes a course and earns 10 points, and then an admin changes the course to be worth 20 points, the user should have 20 points total in the end. I haven't done this sort of thing before - what would be the best approach?
My current plan is two-fold. In the first part, I make changes to the Course and CourseCompletion models:
Add a points_earned column to CourseCompletion that records how many points the user has earned for that completion.
Add a points_recently_changed column to Course. If a course's points are updated at any time, set this column to true for that course. (see my related question)
In the second part, a script or scheduled task runs once per day and does the following (sketched after the list):
Get all courses where points_recently_changed equals true
Find all course completions for those courses
Calculate the difference between course.points and course_completion.points_earned
Update the corresponding user's point total accordingly
Change course.points_recently_changed back to false.
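A sketch of that daily task as a rake file (hypothetical names; it assumes Course has_many :course_completions, CourseCompletion belongs_to :user, and users store total_points):
# lib/tasks/points.rake
namespace :points do
  desc "Reconcile user point totals after course point values change"
  task :reconcile => :environment do
    Course.where(:points_recently_changed => true).find_each do |course|
      course.course_completions.each do |completion|
        # Difference between the course's current value and what was awarded
        diff = course.points - completion.points_earned
        next if diff.zero?

        completion.user.increment!(:total_points, diff)
        # Record what the completion is now worth, so reruns stay correct
        completion.update_attribute(:points_earned, course.points)
      end
      course.update_attribute(:points_recently_changed, false)
    end
  end
end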
Are there any glaring problems with this approach, or is there a "Rails Way" of doing stuff like this?
Why don't you use ActiveRecord::Calculations to get the sum of the points across all related courses and store it in the column? Update the column each time the admin changes a course's points.
You can track changes in the points using ActiveModel::Dirty:
http://api.rubyonrails.org/classes/ActiveModel/Dirty.html
And calculate points using Calculations:
http://api.rubyonrails.org/classes/ActiveRecord/Calculations.html
As a possible solution:
class Course < ActiveRecord::Base
  after_save :update_user_points

  def update_user_points
    # after_save, so the sums below read the course's new point value
    User.all.each { |user| user.update_points } if points_changed?
  end
end

class User < ActiveRecord::Base
  def update_points
    update_attribute(:points, courses.sum(:points))
  end
end
Suggestion:
I dislike saving the points in the database, since it's derived data. I suggest you do the calculation each time the user logs in and keep it as a cached number with an expiry, so it gets recalculated each day.
I tried Jorge's suggestion but was not satisfied with it. I ended up going with a similar approach whereby I recalculate a user's points during the login process.
User.rb
def recalculate_points
  new_course_points = course_completion_courses.sum(:worth)
  update_attribute(:points, new_course_points)
end
session_helper.rb
def sign_in(user)
  ...
  current_user.recalculate_points
end
Points are still stored in the User table - simply caching them doesn't work because I do some reporting that needs that information to persist in the database.

How to apply class method scope without a data table column?

Disclaimer: first question, rails newbie, please be detailed in responses.
I’m trying to create a filter with a class method that is based off a simple equation performed on two columns in my data table. I can’t figure out how to get the query/filter going such that it filters results based on the results of my equation. Here is an abbreviated set up of my table:
t.integer "horizontal_length"
t.integer "thirty_day_flow_rate"
I want a filter based off this equation: ( thirty_day_flow_rate / horizontal_length ), so users could say, "show me all the wells with a 30 day flow rate greater than 'x' barrels per foot of length"
I have created a method in my model to hold the equation and it works fine when I call it on a Well object:
class Well < ActiveRecord::Base
  def flow_rate_per_foot
    # note: integer / integer truncates; use .to_f if you need decimals
    thirty_day_flow_rate / horizontal_length
  end
end
However, when I want to create a filter based on the results of the equation, I am not sure how to proceed. I tried something like this, with the minimum_flow_rate param passed in from the controller:
class Well < ActiveRecord::Base
  def self.flow_rate_filter(minimum_flow_rate)
    if minimum_flow_rate.present?
      where('flow_rate_per_foot >= ?', minimum_flow_rate)
    else
      Well.all
    end
  end
end
Obviously that does not work, because flow_rate_per_foot is not a column in my data table. How can I work around this? Is there another way to solve this problem? I want to do this type of filtering for a number of different equations and columns, so any solution needs to be flexible. Thanks!
For reference, my controller is shown below and other filters I have set up that run directly from my data table work properly:
def index
  @wells = Well.flow_rate_filter(params[:minimum_flow_rate])
end
Thanks!
You could try using scopes, or just query it manually, like:
@wells = Well.where("(thirty_day_flow_rate / horizontal_length) >= ?", params[:minimum_flow_rate])
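As a scope, that same filter might look like this (a sketch; it pushes the arithmetic into SQL, so no extra column is needed):
class Well < ActiveRecord::Base
  # Falls back to all wells when the param is blank
  scope :flow_rate_filter, lambda { |minimum_flow_rate|
    if minimum_flow_rate.present?
      where('(thirty_day_flow_rate / horizontal_length) >= ?', minimum_flow_rate)
    else
      all
    end
  }
end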
What I wanted to do initially does not seem possible.
I solved the problem by creating a new field in my table to hold the "flow_rate_per_foot" calculation that I had originally placed in the flow_rate_per_foot method.
Once the result of the calculation had its own field, I could then filter, search and sort based on the results, which was my overall goal.
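A sketch of that approach, assuming a migration plus a callback to keep the stored value current (names are illustrative):
class AddFlowRatePerFootToWells < ActiveRecord::Migration
  def change
    add_column :wells, :flow_rate_per_foot, :integer # use :decimal for fractional values
  end
end

class Well < ActiveRecord::Base
  before_save :store_flow_rate_per_foot

  private

  # Keeps the stored column in sync whenever a well is saved
  def store_flow_rate_per_foot
    self.flow_rate_per_foot = thirty_day_flow_rate / horizontal_length
  end
end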
Add this line in your model:
class Well < ActiveRecord::Base
  attr_accessor :minimum_flow_rate
  ...

Doing analytics on a large table in Rails / PostgreSQL

I have a "Vote" table in my database which is growing in size everyday, currently at around 100 million rows. For internal analytics / insights I used to have a rake task which would compute a few basic metrics, like the number of votes made daily in the past few days. It's just a COUNT with a where clause on the date "created_at".
This rake task was doing fine until I deleted the index on "created_at" because it seems that it had a negative impact on the app performance for all the other user-facing queries that didn't need this index, especially when inserting a new row.
Currently I don't have a lot of insights as to what is going on in my app and in this table. However I don't really want to add indexes on such a large table if it's only for my own use.
What else can I try?
Alternatively, you could sidestep the Vote table altogether and keep an external tally.
Every time a vote is cast, a separate tally class that keeps a running count of votes cast will be invoked. There will be one tally record per day. A tally record will have an integer representing the number of votes cast on that day.
Each increment call to the tally class will find a tally record for the current date (today), increment the vote count, and save the record. If no record exists, one will be created and incremented accordingly.
For example, let's have a class called VoteTally with two attributes: a date (date), and a vote count (integer), no timestamps, no associations. Here's what the model will look like:
class VoteTally < ActiveRecord::Base
  # Assumes the votes column defaults to 0, so increment!/decrement! work on fresh records
  def self.tally_up!
    find_or_create_by_date(Date.today).increment!(:votes)
  end

  def self.tally_down!
    find_or_create_by_date(Date.today).decrement!(:votes)
  end

  def self.votes_on(date)
    find_by_date(date).votes
  end
end
Then, in the Vote model:
class Vote < ActiveRecord::Base
  after_create :tally_up
  after_destroy :tally_down

  # ...

  private

  def tally_up ; VoteTally.tally_up! ; end
  def tally_down ; VoteTally.tally_down! ; end
end
These methods will get vote counts:
VoteTally.votes_on Date.today
VoteTally.votes_on Date.yesterday
VoteTally.votes_on 3.days.ago
VoteTally.votes_on Date.parse("5/28/13")
Of course, this is a simple example and you will have to adapt it to suit. This will result in an extra query during vote casting, but it's a hell of a lot faster than a where clause on 100M records with no index. Minor inaccuracies are possible with this solution, but I assume that's acceptable given the anecdotal nature of daily vote counts.
It's just a COUNT with a where clause on the date "created_at".
In that case the only credible index you can use is the one on created_at...
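If you do end up re-adding it, PostgreSQL can at least build the index without blocking writes (a sketch, assuming Rails 4+ for the :algorithm option):
class AddCreatedAtIndexToVotes < ActiveRecord::Migration
  # Required so the index can be built outside a transaction
  disable_ddl_transaction!

  def change
    # CONCURRENTLY avoids locking the table against inserts while building
    add_index :votes, :created_at, :algorithm => :concurrently
  end
end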
If write performance is an issue (methinks it's unlikely...) and you're using a composite primary key, clustering the table using that index might help too.
If the index really has an impact on write performance, and it's only a few people running statistics now and then, you might consider another general approach:
You could separate your "transaction processing database" from your "reporting database".
You would update your reporting database on a regular basis and create reporting-only indexes only there. What's more, report queries will not conflict with transaction-oriented traffic, and it doesn't matter how long they run.
Of course, this introduces a certain delay and increases system complexity. On the other hand, if you roll your reporting database forward on a regular basis, you can ensure that your backup scheme actually works.

A database design for variable column names

I have a situation that involves Companies, Projects, and Employees who write Reports on Projects.
A Company owns many projects, many reports, and many employees.
One report is written by one employee for one of the company's projects.
Companies each want different things in a report. Let's say one company wants to know about project performance and speed, while another wants to know about cost-effectiveness. There are 5-15 criteria, set differently by each company, which ALL apply to all of that company's project reports.
I was thinking about different ways to do this, but my current stalemate is this:
To the company table, add a text field criteria, which contains an array of the criteria desired, in order.
In the report table, have a company_id and columns criterion1, criterion2, etc.
I am completely aware that this is typically considered horrible database design - inelegant and inflexible. So, I need your help! How can I build this better?
Conclusion
I decided to go with the serialized option in my case, for these reasons:
My requirements for the criteria are simple - no searching or sorting will be required of the reports once they are submitted by each employee.
I wanted to minimize database load - the page where these will be implemented already carries a lot of overhead.
I want to avoid complicating my database structure for what I believe is a relatively simple need.
CouchDB and Mongo are not currently in my repertoire so I'll save them for a more needy day.
This would be a great opportunity to use NoSQL! Seems like the textbook use-case to me. So head over to CouchDB or Mongo and start hacking.
With conventional DBs you are slightly caught in the problem of how much to normalize your data:
A sort of "good" way (meaning very normalized) would look something like this:
class Company < ActiveRecord::Base
  has_many :reports
  has_many :criteria, :class_name => 'Criteria' # explicit, since Rails won't infer 'Criteria' from :criteria
end

class Report < ActiveRecord::Base
  belongs_to :company
  has_many :criteria_values
  has_many :criteria, :through => :criteria_values
end

class Criteria < ActiveRecord::Base # should be Criterion but whatever
  belongs_to :company
  has_many :criteria_values
  # one attribute 'name' (or 'type' and you can mess with STI)
end

class CriteriaValue < ActiveRecord::Base
  belongs_to :report
  belongs_to :criteria, :class_name => 'Criteria'
  # one attribute 'value'
end
This makes something very simple and fast in NoSQL a triple or quadruple join in SQL and you have many models that pretty much do nothing.
Another way is to denormalize:
class Company < ActiveRecord::Base
  has_many :reports
  serialize :criteria
end

class Report < ActiveRecord::Base
  belongs_to :company
  serialize :criteria_values

  def criteria
    self.company.criteria
  end

  # custom code here to validate that criteria_values correspond to criteria etc.
end
Related to that, a rather clever way of serializing at least the criteria (and maybe the values too, if they were all boolean) is to use bit fields. This basically gives you more or less easy migrations (hard to delete and modify, but easy to add) and searchability without any overhead.
A good plugin that implements this is Flag Shih Tzu, which I've used on a few projects and can recommend.
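A sketch of what that might look like (the flag names are illustrative; has_flags is the gem's macro and stores everything in a single integer column, :flags by default):
class Report < ActiveRecord::Base
  include FlagShihTzu

  has_flags 1 => :performance,
            2 => :speed,
            3 => :cost_effectiveness
end

# report.performance = true, report.speed?, etc. -- plus generated query scopes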
Variable columns (e.g. crit1, crit2, etc.).
I'd strongly advise against it. You don't get much benefit (it's still not very searchable since you don't know in which column your info is) and it leads to maintainability nightmares. Imagine your db gets to a few million records and suddenly someone needs 16 criteria. What could have been a complete no-issue is suddenly a migration that adds a completely useless field to millions of records.
Another problem is that a lot of the ActiveRecord magic doesn't work with this - you'll have to figure out what crit1 means by yourself - and if you want to add validations on these fields, that adds a lot of pointless work.
So to summarize: Have a look at Mongo or CouchDB and if that seems impractical, go ahead and save your stuff serialized. If you need to do complex validation and don't care too much about DB load then normalize away and take option 1.
Well, when you say "To company table, add text field criteria, which contains an array of the criteria desired in order" that smells like the company table wants to be normalized: you might break out each criterion in one of 15 columns called "criterion1", ..., "criterion15" where any or all columns can default to null.
To me, you are on the right track with your report table. Each row in that table might represent one report; and might have corresponding columns "criterion1",...,"criterion15", as you say, where each cell says how well the company did on that column's criterion. There will be multiple reports per company, so you'll need a date (or report-number or similar) column in the report table. Then the date plus the company id can be a composite key; and the company id can be a non-unique index. As can the report date/number/some-identifier. And don't forget a column for the reporting-employee id.
Any and every criterion column in the report table can be null, meaning (maybe) that the employee did not report on this criterion; or that this criterion (column) did not apply in this report (row).
It seems like that would work fine. I don't see that you ever need to do a join. It looks perfectly straightforward, at least to these naive and ignorant eyes.
Create a criteria table that lists the criteria for each company (company 1 .. * criteria).
Then, create a report_criteria table (report 1 .. * report_criteria) that lists the criteria for that specific report based on the criteria table (criteria 1 .. * report_criteria).
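A sketch of those associations (model names are illustrative; class_name is spelled out because the criteria/criterion inflection trips Rails up):
class Criterion < ActiveRecord::Base
  belongs_to :company
  has_many :report_criteria, :class_name => 'ReportCriterion'
end

class ReportCriterion < ActiveRecord::Base
  belongs_to :report
  belongs_to :criterion
  # holds the value reported for this criterion on this report
end

class Report < ActiveRecord::Base
  has_many :report_criteria, :class_name => 'ReportCriterion'
  has_many :criteria, :through => :report_criteria, :source => :criterion
end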
