Does a Rake task need to run in the background using Resque?

I have this code in my rake task. It seems overkill, since the rake task is already being run as a cron job. I think I can safely take it out of Resque and run it directly, but I'm not sure if I've missed something.
desc "update daily sales"
task :daily_sales => :environment do
Resque.enqueue(DailySaleService.perform)
end

Yes, it's overkill. There is no reason to use background processing for a rake task; you use background processing to remove heavy lifting from the HTTP request/response cycle to provide users with a better front-end experience. It won't provide any value in a rake task.
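For example, a minimal sketch of the simplified task, assuming DailySaleService exposes the same class-level perform used above:

desc "update daily sales"
task :daily_sales => :environment do
  # Cron already runs this outside the request/response cycle, so call the service directly.
  DailySaleService.perform
end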

Related

Rails Async Active Job doesn't execute code, while inline does

Does the :async queue adapter actually do anything?
:inline, which is the default in Rails 4, processes jobs built with ActiveJob, uh... inline, in the current execution thread. :async shouldn't. It should use a concurrent-ruby thread pool so the job doesn't run in the current thread, and that's ideally what would be happening: it'd run perform outside of the current execution thread.
But nothing executes it.
I've pored through the docs, and the only thing I can fathom is that :async, unlike :inline, doesn't execute tasks, and expects you to build a system around execution locally. I have to manually call perform on all jobs in order to get them to execute locally. When I set the adapter to :inline, it works just fine without having to do that.
Is there some configuration issue I'm missing that's preventing :async from working correctly (something like ActionCable)?
Does it not work if executed from a rake task (or the console)?
It works fine with :sidekiq/:resque, but I don't want to be running these locally all the time.
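For reference, the adapter being switched here is just the ActiveJob setting, typically something like:

# config/application.rb
config.active_job.queue_adapter = :async   # or :inline, :sidekiq, :resque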
Rails by default comes with an "immediate runner" queuing implementation. That means that each job that has been enqueued will run immediately.
This is kind of what's cueing me in to there being something wrong. I have jobs that are sitting in a queue somewhere that just don't run. What could be stopping this?
This is what I discovered: :async hands jobs to a concurrent-ruby thread pool, and rake tasks aren't set up to handle that.
If you read the documentation, it says that with :async the queue lives only in memory and is cleared when the process ends.
Rails itself only provides an in-process queuing system, which only
keeps the jobs in RAM. If the process crashes or the machine is reset,
then all outstanding jobs are lost with the default async back-end.
Rake processes exit as soon as the task finishes. So, if you're doing any sort of data changes, the rake process won't stay open long enough for the job to run, which is why jobs run just fine under :inline, but not under :async.
So, I haven't figured out a way to keep a rake task open long enough to run something :async (while keeping the app on :async the entire time). I have to switch to :inline to run tasks, and then back to :async when I'm done, for the rest of my jobs. That's also why it works fine with :sidekiq or :resque: those store the job data in Redis and run it in separate worker processes, so nothing is lost when the rake task is over.
In order for rake tasks to work with :async locally, there's not much you can do other than run them as :inline until rake (as a task runner) understands how to stay open while asynchronous jobs are still pending. As a development-only concern this isn't really high priority, but if you're bashing your head on the table not understanding why tasks that enqueue jobs under the default :async adapter never actually run them, that's why.
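One rough way to do that per task (SomeImportJob is a hypothetical job name; this just forces the adapter for that one process) is:

task :import_stuff => :environment do
  # Force jobs enqueued inside this task to run in-process, in this thread.
  ActiveJob::Base.queue_adapter = :inline
  SomeImportJob.perform_later
end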
Here's what you can put at the end of a rake task to wait for the AsyncAdapter to finish processing before exiting the rake task:
if Rails.configuration.active_job.queue_adapter == :async
  # Dig the Concurrent::ThreadPoolExecutor out of the AsyncAdapter's scheduler...
  executor =
    ActiveJob::Base.queue_adapter.instance_eval do
      @scheduler.instance_eval { @async_executor }
    end
  # ...and spin until every job handed to the pool has completed.
  sleep(0.1) while executor.scheduled_task_count > executor.completed_task_count
end

Rails: How to manage rake tasks like migrations

I have a Rails app deployed over multiple instances, and too many rake tasks to run on the different instances, so it is hard to keep track of which rake tasks have already run and which ones remain.
Is there any way to manage this from the DB side, the way the schema_migrations table is managed by migrations? If yes, I'd also like to know how migrations work exactly.
Any suggestions?
The correct way: use deploy automation. Capistrano is a good choice. Then you'll never need to worry about things like running rake tasks by hand.
I think the rake tasks should have no side effects if you execute them multiple times, i.e. they should be idempotent. If a task is implemented that way, then there's no need to worry about which have been run and which have not.
If you want status tracking for rake tasks, a simple way is to implement a model that records the execution status of each rake task, and to update that model each time a task finishes.
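A minimal sketch of that idea, loosely modelled on how schema_migrations records applied migration versions (TaskExecution, run_once, and the backfill example are all hypothetical names):

# db/migrate/xxx_create_task_executions.rb
class CreateTaskExecutions < ActiveRecord::Migration[5.2]
  def change
    create_table :task_executions do |t|
      t.string :name, null: false
      t.datetime :executed_at
    end
    add_index :task_executions, :name, unique: true
  end
end

# app/models/task_execution.rb
class TaskExecution < ApplicationRecord
end

# helper for rake tasks: run the block only once per name, on any instance
def run_once(name)
  return if TaskExecution.exists?(name: name)
  yield
  TaskExecution.create!(name: name, executed_at: Time.current)
end

task :backfill_totals => :environment do
  run_once("backfill_totals_2017_01") do
    Order.find_each(&:recalculate_total!)   # stands in for the real work
  end
end

Migrations themselves work much the same way: each migration file carries a version number, and db:migrate only runs the files whose version is not yet recorded in schema_migrations.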
You can use resque-scheduler (https://github.com/resque/resque-scheduler) to manage and track your tasks.
You can use a progress bar gem to monitor the progress of a particular rake task.
As suggested above, automated deployment through Capistrano is a good option; you can manage the order in which rake tasks run in the cap script.

Find out whether specific rake task is currently running

Is there any way to find out whether a particular rake task is currently running, for example from a Rails controller? I have an extensive rake task which takes 5-6 hours to finish.
I need to see status of that rake task from frontend web interface, like:
Task "some operation" is running...
Also it would be nice to be able to hard-stop or start that rake task from within the frontend web interface.
I found a Railscast devoted to this, but the method described there only allows running a rake task from a controller, not stopping it or seeing its status.
If your job takes 5-6 hours to complete, you must run it in the background. To run long-running jobs in the background you can use one of these:
Resque
Sidekiq
And for tracking status of your jobs, you can use corresponding status tracking gems:
ResqueStatus
SidekiqStatus
Personally, I prefer sidekiq for its efficiency.
NOTE: All the above-mentioned gems have good enough docs. Kindly refer to those.
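For illustration, a rough sketch of the pattern (LongReportWorker and JobStatus are hypothetical names; the status gems above give you this kind of tracking without hand-rolling it):

class LongReportWorker
  include Sidekiq::Worker

  def perform(job_status_id)
    status = JobStatus.find(job_status_id)
    status.update!(state: "running", started_at: Time.current)
    # ... the 5-6 hour work that used to live in the rake task ...
    status.update!(state: "finished", finished_at: Time.current)
  rescue => e
    status&.update!(state: "failed", error: e.message)
    raise
  end
end

# in a controller action: kick the job off and remember its status record
status = JobStatus.create!(name: "some operation", state: "queued")
LongReportWorker.perform_async(status.id)
# the frontend then just reads JobStatus to render 'Task "some operation" is running...'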

Run a background job every few seconds

Say I have an application that needs to pull data from an API, but there is a limit to how often I can send a query (i.e., it caps at X requests / minute). In order to ensure I don't hit this limit, I want to add requests to a queue and have a background job that will pull X requests and execute them every minute. I'm not sure what the best method for this is in Rails, however. From what I gather, DelayedJob is the better library for my needs, but I don't see any support for only running X jobs a minute. Does anyone know if there is a preferred way of implementing functionality like this?
I'm a little late but I would like to warn against using the whenever gem in your situation:
Since you're using Ruby on Rails, the whenever gem will load the whole Rails environment each time cron invokes one of its entries.
Give rufus-scheduler a try.
Place the code below, for example, in config/initializers/cron_stuff.rb
require 'rufus/scheduler'

# The scheduler runs inside this Ruby process, so it keeps scheduling only while the process stays up.
scheduler = Rufus::Scheduler.start_new

scheduler.every '20m' do
  puts 'hello'
end
First, I would recommend using Sidekiq for processing background jobs. It's well supported and very simple to use. If you do use Sidekiq, then there is another gem, called Sidetiq, that will allow you to run recurring jobs.
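As a rough sketch of how that could respect the per-minute cap (PendingApiRequest, execute!, and the limit value are all hypothetical; scheduling the worker once a minute is left to Sidetiq or cron):

class ApiRequestDrainWorker
  include Sidekiq::Worker

  REQUESTS_PER_MINUTE = 30  # assumed API cap

  def perform
    # Drain at most REQUESTS_PER_MINUTE queued requests per run.
    PendingApiRequest.order(:created_at).limit(REQUESTS_PER_MINUTE).each do |request|
      request.execute!   # hypothetical method that actually calls the external API
      request.destroy
    end
  end
end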
Maybe you can try whenever: https://github.com/javan/whenever
Then you can add your tasks to the config/schedule.rb file, as below:
every 3.hours do
  runner "MyModel.some_process"
  rake "my:rake:task"
  command "/usr/bin/my_great_command"
end

Monitoring a rake task

I have a rake task running daily at a specified time, and I just want an alert email (or SMS) when it fails (or when the entire rake task hangs, or even the server hangs). Earlier I was using AlertGrid: I would send a signal to AlertGrid at the end of the rake task and configure AlertGrid to notify me in the absence of that signal, but I cannot continue with AlertGrid now. Does anyone know an alternative approach to this problem?
Or any other method to monitor a rake task and report when it fails to complete?
Thanks.
This might not be the best approach, but you could have your rake task delegate the actual work to Resque and just schedule the rake task via a cron job.
Resque has a very nice web admin that shows all failed jobs per queue; as for notification, you could add a custom Resque failure backend that sends out an email (and/or enqueues another job to send texts via the API of whatever SMS provider you use).
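A minimal sketch of such a failure backend, assuming a hypothetical FailureMailer (Resque::Failure::Redis is kept so failures still show up in the web admin):

require 'resque'
require 'resque/failure/multiple'
require 'resque/failure/redis'

class EmailOnFailure < Resque::Failure::Base
  def save
    # payload holds the job class and arguments; exception is the error that was raised
    FailureMailer.job_failed(payload, exception, queue).deliver_now
  end
end

Resque::Failure::Multiple.classes = [Resque::Failure::Redis, EmailOnFailure]
Resque::Failure.backend = Resque::Failure::Multiple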
