I'm using the rufus-scheduler gem in my Ruby on Rails application to send emails in the background. My setup looks like this:
# config/initializers/rufus_scheduler.rb
scheduler = Rufus::Scheduler.new(lockfile: '.rufus-scheduler.lock')

scheduler.cron '0 2 * * fri' do
  UserMailer.send_some_emails
end
Any changes I make to the .send_some_emails class method aren't reflected in the rufus-scheduler task. How can I fix this? I don't want to restart the server every time I make a change!
Let's assume UserMailer.send_some_emails is defined in whatever/user_mailer.rb. Loading that file again inside the scheduled block means the latest version of the code runs each time, with no server restart needed:
scheduler = Rufus::Scheduler.new(:lockfile => '.rufus-scheduler.lock')

scheduler.cron '0 2 * * fri' do
  # re-read the file on every run so code changes are picked up
  load 'whatever/user_mailer.rb'
  UserMailer.send_some_emails
end
I'm trying to lock a part of my code using Redis, with Redlock library.
I've implemented it here:
def perform(task_id, task_type)
  lock_manager = Redlock::Client.new(["redis://127.0.0.1:6379"])
  lock_key = "task_runner_job_#{task_type}_#{task_id}"

  puts("locking! #{task_id}")
  lock_task = lock_manager.lock(lock_key, 6 * 60 * 1000)

  if lock_task.present?
    begin
      # Exec task...
      Services::ExecTasks::Run.call task_id
    ensure
      puts("unlocking! #{task_id}")
      lock_manager.unlock(lock_task)
    end
  else
    puts("Resource is locked! #{lock_key}")
  end
end
What I get when running multiple Sidekiq jobs at the same time is the following log output:
"locking! 520"
"locking! 520"
"unlocking! 520"
"unlocking! 520"
This happens when both of my 520 tasks, which should not be executed together, are called within 1 ms of each other.
Yet sometimes the lock works as expected.
I've checked Redis and it works just fine.
Any ideas? Thanks!
I am trying to run a cron job every 10 seconds that runs a piece of code. The approach I have used requires running the code and making it sleep for 10 seconds, but it seems to be drastically degrading the app's performance. I am using the whenever gem, which runs every minute and sleeps for 10 seconds between iterations. How can I achieve the same without using the sleep method? Following is my code.
every 1.minute do
  runner "DailyNotificationChecker.send_notifications"
end
class DailyNotificationChecker
  def self.send_notifications
    puts "Triggered send_notifications"
    expiry_time = Time.now + 57
    while Time.now < expiry_time
      if RUN_SCHEDULER == "true" || RUN_SCHEDULER == true
        process_notes
      end
      sleep 10 # seconds
    end
  end

  def self.process_notes
    notes = nil
    time = Benchmark.measure do
      Note.uncached do
        notes = Note.where(status: false)
        notes.update_all(status: true)
      end
    end
    puts "time #{time}"
  end
end
The objective of my code is to change the boolean status of records to true, which gets checked every 10 seconds. The table has 2 million records.
I suggest using Sidekiq background jobs for this. With the sidekiq-scheduler gem you can run ordinary Sidekiq jobs on whatever schedule you need. Bonus points for getting a web interface to manage and monitor the jobs via the Sidekiq gem.
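For example, a minimal sketch of how that could look; the worker name is a placeholder, and sidekiq-scheduler typically reads the schedule from a :schedule: section in sidekiq.yml:
# app/workers/daily_notification_checker_worker.rb (hypothetical)
class DailyNotificationCheckerWorker
  include Sidekiq::Worker

  def perform
    DailyNotificationChecker.process_notes
  end
end

# config/sidekiq.yml (read by sidekiq-scheduler)
# :schedule:
#   daily_notification_checker:
#     every: '10s'
#     class: DailyNotificationCheckerWorker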
You could use the clockwork gem. It runs in a separate process, and the configuration is pretty simple:
require 'clockwork'
include Clockwork
every(10.seconds, 'frequent.job') { DailyNotificationChecker.process_notes }
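If that goes into a standalone clock.rb, it also needs the Rails environment loaded so DailyNotificationChecker is available; a minimal sketch, assuming a standard Rails layout and starting it with bundle exec clockwork clock.rb:
# clock.rb
require 'clockwork'
require './config/boot'
require './config/environment'
include Clockwork

every(10.seconds, 'frequent.job') { DailyNotificationChecker.process_notes }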
I have a Rails application running with the Puma server. Is there any way we can see how many threads the application is currently using?
I was wondering about the same thing a while ago and came upon this issue. The author included the code they ended up using to collect those stats:
module PumaThreadLogger
  def initialize *args
    ret = super *args
    Thread.new do
      while true
        # Every X seconds, write out what the state of this dyno is in a format that Librato understands.
        sleep 5
        thread_count = 0
        backlog = 0
        waiting = 0
        # I don't do the logging or string stuff inside of the mutex. I want to get out of there as fast as possible
        @mutex.synchronize {
          thread_count = @workers.size
          backlog = @todo.size
          waiting = @waiting
        }
        # For some reason, even a single Puma server (not clustered) has two booted ThreadPools.
        # One of them is empty, and the other is actually doing work
        # The check above ignores the empty one
        if (thread_count > 0)
          # It might be cool if we knew the Puma worker index for this worker, but that didn't look easy to me.
          # The good news: By using the PID we can differentiate two different workers on two different dynos with the same name
          # (which might happen if one is shutting down and the other is starting)
          source_name = "#{Process.pid}"
          # If we have a dyno name, prepend it to the source to make it easier to group in the log output
          dyno_name = ENV['DYNO']
          if (dyno_name)
            source_name = "#{dyno_name}." + source_name
          end
          msg = "source=#{source_name} "
          msg += "sample#puma.backlog=#{backlog} sample#puma.active_connections=#{thread_count - waiting} sample#puma.total_threads=#{thread_count}"
          Rails.logger.info msg
        end
      end
    end
    ret
  end
end

module Puma
  class ThreadPool
    prepend PumaThreadLogger
  end
end
This code contains logic that is specific to Heroku, but the core of collecting @workers.size and logging it will work in any environment.
I'm parsing remote JSON data into MongoDB. Actually, I'm parsing dynamic JSON data, and I want to update MongoDB with that data every 30 seconds.
I'm parsing the JSON data like this:
require 'open-uri'
require 'json'
result = JSON.parse(open("url_of_json_service").read)
How can I update MongoDB every 30 seconds?
I'm using the rufus-scheduler gem and it's working fine.
In the Gemfile:
gem 'rufus-scheduler', :require => "rufus/scheduler"
In config/initializers/reminder_scheduler.rb:
scheduler = Rufus::Scheduler.start_new

scheduler.cron("0 5 * * *") do
  Model.send_reminder_email
end
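For the 30-second requirement specifically, rufus-scheduler's every method may be a better fit than cron. A minimal sketch, where the initializer name, the URL, and the MongoDB persistence call are placeholders for your own code:
# config/initializers/json_sync_scheduler.rb (hypothetical)
require 'open-uri'
require 'json'

scheduler = Rufus::Scheduler.new

scheduler.every '30s' do
  result = JSON.parse(URI.open("url_of_json_service").read)
  # persist `result` into MongoDB here, e.g. with a Mongoid model:
  # MyCollection.create!(result)
end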
Cron is a common solution for recurring jobs; from the cron job you can start either a script/runner Rails script or a rake task.
I should mention that with cron the finest granularity is 1 minute.
Another solution would be to create a background job which runs as a daemon and basically executes a continuous loop: load the remote JSON, sleep for 30 seconds, load the remote JSON, sleep, and so on.
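A minimal sketch of that daemon loop, where the URL and the persistence step are placeholders; in practice you would run it under the daemons gem or a process manager rather than by hand:
# a simple polling daemon, e.g. started with rails runner or the daemons gem
require 'open-uri'
require 'json'

loop do
  result = JSON.parse(URI.open("url_of_json_service").read)
  # write `result` to MongoDB here
  sleep 30
end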
Check out these Railscasts:
http://railscasts.com/episodes/164-cron-in-ruby-revised
http://railscasts.com/episodes/164-cron-in-ruby
http://railscasts.com/episodes/129-custom-daemon
Cron:
http://en.wikipedia.org/wiki/Cron
I would like to control my cron jobs through my administration page.
Basically I have my cron jobs in my database, and I would like to create my crontab "on the fly".
Here's an example:
require "#{RAILS_ROOT}/config/environment.rb"
Cron.all.each do |cron|
if cron.at.blank?
every eval(cron.run_interval) do
cron.cmd
end
else
every eval(cron.run_interval), :at => cron.time do
cron.cmd
end
end
end
every 1.day do
command "whenever --update-crontab"
end
But Whenever doesn't output any of the tasks inside the loop, only the "static" one.
0 0 * * * whenever --update-crontab
How can I make Whenever 'understand' my loop?
You probably need to move your eval statement higher up, for example:
eval <<-EVAL
  every #{cron.run_interval} do
    #{cron.cmd}
  end
EVAL
Assuming that your cron.cmd is something like 'command "ls -a"'.
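Applied to the loop from the question, the schedule.rb could then look roughly like this; a sketch that assumes cron.run_interval is a string such as "1.day" and cron.cmd is a complete Whenever job line like command "ls -a":
require "#{RAILS_ROOT}/config/environment.rb"

Cron.all.each do |cron|
  if cron.at.blank?
    eval <<-EVAL
      every #{cron.run_interval} do
        #{cron.cmd}
      end
    EVAL
  else
    eval <<-EVAL
      every #{cron.run_interval}, :at => "#{cron.time}" do
        #{cron.cmd}
      end
    EVAL
  end
end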