Need Help: Model is not working - ruby-on-rails

So what I'm doing is I have a rake task that runs every day and decreases the days left on a subscription. Here is the rake task:
namespace :delete do
  desc 'Remove a day for premium subscription days left'
  task :premium_subscription_remove => :environment do
    PremiumSubscription.find_each do |premium_subscription|
      premium_subscription.premium_subscription_days_left -= 1
      premium_subscription.save
    end
  end
end
This rake task counts down the days left on the subscription. I then created a new model method that handles the days left once they hit zero: at that point the code should cause the subscription to auto renew. Here is the code for the renewal:
def self.renew_premium_subscription(user, premium_subscribe)
  if premium_subscribe.premium_subscription_days_left <= 0
    user.premium_subscriptions.where(:premium_subscribe_id => premium_subscribe.id).destroy_all
    if user.points >= premium_subscribe.premium_subscription_cost
      user.premium_subscriptions.where(:premium_subscribe_id => premium_subscribe.id).first_or_create
      user.points = user.points - premium_subscribe.premium_subscription_cost
      user.save
    end
  end
end
The problem I am having is that premium_subscription_days_left is at negative two and renew_premium_subscription has never been invoked. I tried putting random letters into it and the model hasn't given an error. How does the model method get called in order to do the renewal? I have put this code in the controller:
def renew_subscription
  PremiumSubscription.renew_premium_subscription(current_user, @user)
end
But that hasn't worked at all. If anybody knows how to get this thing working, that would be great. Thank you for the help :)
edit: Tried putting the update function inside of the rake task but that did not work at all.
edit 2: No such luck on getting this fixed. Anybody have a clue?
edit 3: So I thought about something: is there a way to automatically call a model method? I just need to get this thing working.
Here is an outline of what I did:
Created a rake task. This rake task will be called with whenever to count down to zero.
I created a method in the PremiumSubscription model that says when the count hits zero it will either renew the subscription or destroy it.
I have set the count down to zero and refreshed the page, but the subscription isn't updated or destroyed.
edit 4: So I learned that the controller needs to be triggered by a route. Is there any way to trigger this code when the page loads?

A couple of notes:
user.save
This can fail. If save is unsuccessful it will return false. Your method does not test the return value, so an unsuccessful save will go unnoticed -- nothing is written to the log file, no error is raised, nada. Either check the result of save or call save!, which will raise an exception.
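For example, either of these would surface a failed save (a minimal sketch using the record from the rake task above):
begin
  premium_subscription.save!   # raises ActiveRecord::RecordInvalid if validations fail
rescue ActiveRecord::RecordInvalid => e
  Rails.logger.error("Could not save subscription #{premium_subscription.id}: #{e.message}")
end

# or, without exceptions:
unless premium_subscription.save
  Rails.logger.error(premium_subscription.errors.full_messages.to_sentence)
end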
premium_subscription.premium_subscription_days_left -= 1
You might have a good reason for storing this value, but you should also consider storing the start date (or the expiration date) instead, and calculating the days left given the current date. Decrementing "days left" requires that the cron job runs when it is supposed to ... if it misses a day, the count will be off.
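For example, a rough sketch of the date-based approach, assuming an expires_on date column and that PremiumSubscription belongs_to :user and :premium_subscribe (names are illustrative; the renew method would then compare dates instead of a days-left counter):
class PremiumSubscription < ActiveRecord::Base
  def days_left
    [(expires_on - Date.today).to_i, 0].max
  end
end

# The daily rake task then only needs to renew whatever has expired; a missed run
# simply means the renewal happens on the next run instead of the counter drifting.
PremiumSubscription.where("expires_on <= ?", Date.today).find_each do |subscription|
  PremiumSubscription.renew_premium_subscription(subscription.user, subscription.premium_subscribe)
end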

Related

Expire cache based on saved value

In my app there is a financial overview page with quite a lot of queries. This page is refreshed once a month after executing a background job, so I added caching:
@iterated_hours = Rails.cache.fetch("productivity_data", expires_in: 24.hours) do
  FinancialsIterator.new.create_productivity_iterations(@company)
end
The cache must expire when the background job finishes, so I created a model CacheExpiration:
class CacheExpiration < ApplicationRecord
  validates :cache_key, :expires_in, presence: true
end
So in the background job a record is created:
CacheExpiration.create(cache_key: "productivity_data", expires_in: DateTime.now)
And the Rails.cache.fetch is updated to:
expires_in = get_cache_key_expiration("productivity_data")
@iterated_hours = Rails.cache.fetch("productivity_data", expires_in: expires_in) do
  FinancialsIterator.new.create_productivity_iterations(@company)
end
private def get_cache_key_expiration(cache_key)
  cache_expiration = CacheExpiration.find_by_cache_key(cache_key)
  if cache_expiration.present?
    cache_expiration.expires_in
  else
    24.hours
  end
end
So now the expiration is set to a DateTime; is this correct, or should it be a number of seconds? And is this the correct approach to make sure the cache is expired only once, when the background job finishes?
Explicitly setting an expires_in value is very limiting and error prone IMO. You will not be able to change the value once a cache value has been created (well you can clear the cache manually) and if ever you want to change the background job to run more/less often, you also have to remember to update the expires_in value. Additionally, the time when the background job is finished might be different from the time the first request to the view is made. As a worst case, the request is made a minute before the background job updates the information for the view. Your users will have to wait a whole day to get current information.
A more flexible approach is to rely on updated_at or in their absence created_at fields of ActiveRecord models.
For that, you can either rely on the CacheExpiration model you already created (it might already have the appropriate fields) or use the last of the "huge number of records" you create. Simply order them and take the last: SomeArModel.order(created_at: :desc).first
The benefit of this approach is that whenever the AR model you create is updated/created, your cache is busted and a new one will be created. There is no longer a coupling between the time a user calls the endpoint and the time the background job ran. And in case a record is created by any means other than the background job, it will also simply be handled.
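For example, a sketch of that approach (SomeArModel stands in for whatever record the background job writes):
latest = SomeArModel.order(created_at: :desc).first
# The record is expanded into the cache key, so a newer record means a fresh cache entry.
@iterated_hours = Rails.cache.fetch(["productivity_data", latest]) do
  FinancialsIterator.new.create_productivity_iterations(@company)
end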
ActiveRecord models are first class citizens when it comes to caching. You can simply pass them in as cache keys. Your code would then change to:
Rails.cache.fetch(CacheExpiration.find_by_cache_key("productivity_data")) do
  FinancialsIterator.new.create_productivity_iterations(@company)
end
But if at all possible, try to find an alternative model so you no longer have to maintain CacheExpiration.
Rails also has a guide on that topic

Need help on active record updating via rails

I have the following code in a rake file that will be run weekly.
now = Date.today
Order.where(("status NOT IN ('Completed','Canceled','Shipped') AND DATE(updated_at) <= ?"),(now-30)).update_all("status = '*'",'Pending Requestor')
The problem is it is throwing wrong number of arguments error.
looking at http://apidock.com/rails/ActiveRecord/Base/update_all/class
I tried
now = Date.today
Order.update_all("status = 'Pending Requester'",("status NOT IN ('Completed','Canceled','Shipped') AND DATE(updated_at) <= ?"),(now-30))
but that gives me a "wrong number of arguments (3 for 1)" error.
So what I need to do is I need to find all of the orders where the status is not in that list and the last time they were updated was beyond 30 days ago and automatically put them into a Pending Requester status.
Can someone help me with what I am getting wrong on this?
In your code, now is assigned Date.today, so now - 30 is simply the date 30 days ago.
Also, all of the extra parentheses you added aren't necessary. I've simplified your query below and written it out so that it is easy to understand.
Change your code to:
Order.where(
  "status NOT IN (?) AND updated_at <= ?", # Simplified the query
  %w(Completed Canceled Shipped),          # Can also be written as ['Completed', 'Canceled', 'Shipped']
  30.days.ago                              # Self-explanatory
).update_all(status: 'Pending Requestor')
where only takes one argument unless that argument contains question marks (?); for each question mark, it takes an additional argument whose value is substituted in.
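For example, each ? is filled, in order, by the arguments that follow the SQL fragment:
Order.where("status = ? AND updated_at <= ?", "Shipped", 30.days.ago)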
Bonus: When working with statuses in Rails, I suggest learning the enum convention (ActiveRecord enums). It's amazing!
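A minimal sketch of what that could look like (assuming Rails 4.1+ and an integer status column, rather than the string column used above):
class Order < ActiveRecord::Base
  enum status: { pending_requestor: 0, completed: 1, canceled: 2, shipped: 3 }
end

Order.completed            # scope returning all completed orders
order.pending_requestor!   # sets and saves the status
order.shipped?             # predicate check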

How can I update table when a user is locked after maximum login attempts?

In my Rails application I managed to lock users after a maximum number of failed login attempts using Devise's lockable module, but how can I update the table so that I can add an entry denoting that this user is locked, along with a timestamp?
I just don't know where to put that code!
I tried to create a file called "lockable.rb" in the initializers directory with the following code:
def lock_access!(opts = { })
  @user.is_lock = "Yes"
  @user.reason_of_deactivation = "Failed login attempt"
  @user.deactivated_date = DateTime.now
  @user.save
end
That didn't work out!
One potential solution I see here is that you could have a condition inside an after_save callback where you check if the user is locked.
If the user is locked, update the timestamp with the current time or updated_at.
This solution might have problems, as the callback would get executed every time save is called on the user object, thus updating the timestamp. Please take care to add enough conditions to prevent this from happening.
Also, please write tests around this, so that at some later point in time, when you revisit that part of the code, they will provide you with some context about the conditions.
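A rough sketch of that callback approach (is_lock and the other field names come from the question, locked_at is the column Devise's lockable module sets when it locks an account, and the guard condition is just an example):
class User < ActiveRecord::Base
  # devise :database_authenticatable, :lockable, ... (other modules omitted)
  after_save :record_lock, if: -> { locked_at.present? && is_lock != "Yes" }

  private

  def record_lock
    # update_columns skips validations and callbacks, so this will not re-trigger after_save
    update_columns(is_lock: "Yes",
                   reason_of_deactivation: "Failed login attempt",
                   deactivated_date: locked_at)
  end
end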
After an hour of research and testing I found a solution myself: I overrode lockable.rb from the Devise gem and added my code.
Created the file lockable.rb at lib/devise/models/lockable.rb:
def lock_access!(opts = { })
  super
  self.is_lock = "Yes"
  self.reason_of_deactivation = "Exceeded max login threshold"
  self.deactivated_date = DateTime.now
end
Closed.

Catching errors with Ruby Twitter gem, caching methods using delayed_job: What am I doing wrong?

What I'm doing
I'm using the twitter gem (a Ruby wrapper for the Twitter API) in my app, which is run on Heroku. I use Heroku's Scheduler to periodically run caching tasks that use the twitter gem to, for example, update the list of retweets for a particular user. I'm also using delayed_job so scheduler calls a rake task, which calls a method that is 'delayed' (see scheduler.rake below). The method loops through "authentications" (for users who have authenticated twitter through my app) to update each authorized user's retweet cache in the app.
My question
What am I doing wrong? For example, since I'm using Heroku's Scheduler, is delayed_job redundant? Also, you can see I'm not catching (rescuing) any errors. So, if Twitter is unreachable, or if a user's auth token has expired, everything chokes. This is obviously dumb and terrible because if there's an error, the entire thing chokes and ends up creating a failed delayed_job, which causes ripple effects for my app. I can see this is bad, but I'm not sure what the best solution is. How/where should I be catching errors?
I'll put all my code (from the scheduler down to the method being called) for one of my cache methods. I'm really just hoping for a bulleted list (and maybe some code or pseudo-code) berating me for poor coding practice and telling me where I can improve things.
I have seen this SO question, which helps me a little with the begin/rescue block, but I could use more guidance on catching errors, and on the higher-level "is this a good way to do this?" plane.
Code
Heroku Scheduler job:
rake update_retweet_cache
scheduler.rake (in my app)
task :update_retweet_cache => :environment do
  Tweet.delay.cache_retweets_for_all_auths
end
Tweet.rb, cache_retweets_for_all_auths method:
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
      # Actually build the cache - this is good - removing to keep this short
    end
  end
end
User.rb, twitter method:
def twitter
  authentication = Authentication.find_by_user_id_and_provider(self.id, "twitter")
  if authentication
    @twitter ||= Twitter::Client.new(:oauth_token => authentication.oauth_token, :oauth_token_secret => authentication.oauth_secret)
  end
end
Note: As I was posting this, I noticed that I'm finding all "twitter" authentications in the "cache_retweets_for_all_auths" method, then calling the "User.twitter" method, which specifically limits to "twitter" authentications. This is obviously redundant, and I'll fix it.
First, what is the exact error you are getting, and what do you want to happen when there is an error?
Edit:
If you just want to catch the errors and log them, then the following should work.
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    begin
      authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
        # Actually build the cache - this is good - removing to keep this short
      end
    rescue => e
      # Either create an object where the error is logged, or output it to whatever log you wish.
    end
  end
end
This way when it fails it will keep moving on to the next user but will still make a note of the error. Most of the time with Twitter it's just better to do something like this than to try to deal with each error on its own. I have seen so many weird things and random errors come out of the Twitter API that trying to track down every one almost always turns into a wild goose chase, though it is still good to keep a record just in case.
Next, for when you should use what.
You should use a scheduler when you need something to happen based on time only, and delayed jobs when it's based on a user action but the 'action' you are going to delay would take too long for a normal response. Sometimes you can just put the thing plainly in the controller, too.
So in other words
The scheduler will be fine as long as the time between updates, X, is greater than the time it takes for an update to happen, Y.
If X < Y then you might want to look at calling the logic from the controller when each individual entry is accessed, instead of trying to do them all at once. The idea being that you would only update an entry after a certain amount of time has passed. You could store the last update time either on the model itself, in a field like twitter_update_time, or in a redis or memcache instance at a unique key for the user/auth (a rough sketch of this follows below).
But if the individual update itself still takes too long, that's when you should do the above, but instead of doing the actual update, call a delayed job.
You could even set it up so that it only updates or calls the delayed job after a certain number of views, to further limit things.
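A rough sketch of that on-access approach, assuming a twitter_update_time datetime column on Authentication and a hypothetical per-authentication variant of the caching method:
# In the controller action that shows the tweets:
def show
  authentication = Authentication.find_by_user_id_and_provider(current_user.id, "twitter")
  if authentication && (authentication.twitter_update_time.nil? ||
                        authentication.twitter_update_time < 1.hour.ago)
    Tweet.delay.cache_retweets_for_auth(authentication.id)  # hypothetical per-auth method
    authentication.update_attribute(:twitter_update_time, Time.now)
  end
  # ... render the page as usual
end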
Possible Fancy Pants
Or if you want to get really fancy, you could still do it as a cron job but have a point system, based on views, that weights which entries should be updated. The idea being that certain actions would add points to certain users, and if their points are over a certain amount you update them and then remove their points. That way you could target the ones you think are the most important, or have the most traffic, or show up in the most search results, etc.
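A very rough sketch of that points idea, assuming an integer update_points column on Authentication and an arbitrary threshold:
class Authentication < ActiveRecord::Base
  UPDATE_THRESHOLD = 10

  # Call this from whatever actions should add weight to an entry (views, searches, etc.)
  def add_update_points!(weight = 1)
    increment!(:update_points, weight)
  end
end

# In the cron/rake task: refresh only the entries over the threshold, then reset their points
Authentication.where("update_points >= ?", Authentication::UPDATE_THRESHOLD).find_each do |auth|
  Tweet.delay.cache_retweets_for_auth(auth.id)  # hypothetical per-auth method, as in the sketch above
  auth.update_attribute(:update_points, 0)
end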
Next, a nitpicky thing.
http://api.rubyonrails.org/classes/ActiveRecord/Batches.html
You should be using
@authentications.find_each do |authentication|
instead of
@authentications.each do |authentication|
find_each pulls in only 1000 entries at a time, so if you end up with a lot of Authentications you don't pull a crazy number of entries into memory.

Implementing a schedular task using whenever gem in rails 3

I'm trying to implement a scheduled task that deletes any user in the users table who has been reported for abuse more than 5 times. To achieve this, in the user.rb file I have written a report_abuse_delete method that finds the users who have been reported more than 5 times and deletes their records from the database.
Here is my method in User model:
def report_abuse_delete
  @delete_abused_user = Abuse.find(:all, :conditions => ['count >= ?', 5])
  @delete_abused_user.each do |d|
    @abused_user = User.find(d.abuse_id)
    if @abused_user.delete
      render :text => "User account has been deleted. Reason: This user has been reported spam for more than 5 times"
      UserMailer.user_delete_five_spam_report(@user).deliver
    end
  end
end
And this is what I have written in the Scheduler.rb file
every 2.minutes do
  rake "log:clear", :environment => "development"
  runner "User.report_abuse_delete", :environment => "development"
end
As you can see, in the scheduler.rb file I'm trying to perform two functions: one clears my log every 2 minutes, and the other runs the report_abuse_delete method that I wrote in my model.
The issue I'm facing is that every 2 minutes my log gets cleared, but the method I wrote in the model is not getting invoked; I guess the functionality is not getting triggered. I have searched the web and checked every possible way, but I'm unable to figure out what the problem is.
Help me out please. Any kind of help is welcome and appreciated.
You've defined report_abuse_delete as a normal (that is, instance) method, but you're calling it as a class method. Try defining the method as def self.report_abuse_delete.
Also, I don't know if the render call will work: I haven't used this gem, but since you don't have any kind of user agent to see the text, I'm not sure what you'd expect it to do.
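For example, a sketch of that change (with the render removed, since there is no response to send from a runner, and the mailer given the deleted user rather than the undefined @user; both are assumptions on my part):
class User < ActiveRecord::Base
  def self.report_abuse_delete
    Abuse.find(:all, :conditions => ['count >= ?', 5]).each do |abuse|
      abused_user = User.find(abuse.abuse_id)
      if abused_user.delete
        UserMailer.user_delete_five_spam_report(abused_user).deliver
      end
    end
  end
end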
