Task:
Delete an item asynchronously [Homework]
I have already configured Active Job with delayed_job in my Rails application, but I am still confused about performing async tasks in a Rails project.
Let's take an example:
I have an item to delete from the database, but I want to do it asynchronously. I also read about the perform_later and perform_now methods on delayed_job blogs. Here is my code, which is working fine:
Controller class
def destroy
  PostJob.perform_now(params[:id])
  respond_to do |format|
    format.xml { head :ok }
    format.js { render 'posts.js.erb' }
  end
end
Job class
class PostJob < ActiveJob::Base
  queue_as :default

  def perform(id)
    @post = Post.find(id)
    @post.destroy
  end
end
According to the official delayed_job documentation, I can add handle_asynchronously after a method definition to run it asynchronously. How can I implement that in this case?
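For reference, the pattern the delayed_job docs show looks roughly like this (the method name is the docs' placeholder, not my actual code):

def long_running_method
  # ...
end
handle_asynchronously :long_running_method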
My Questions:
When I look at the destroy method, it doesn't seem to delete the element asynchronously; every step written in destroy runs synchronously. Am I wrong?
If I'm not wrong, how can I implement the destroy method so that the post is deleted asynchronously?
Are backgrounding a task and a cron job the same thing?
Edit 1
After A Fader Darkly's suggestion, I changed perform_now to perform_later, which works perfectly as an async process, but it no longer deletes the entry from the table (the code itself is fine, because it works when I use perform_now).
Also, when I run the job manually with the following command, everything works fine:
rake jobs:work
Is there any way to execute a delayed_job task as soon as the queue gets new data?
If you change your destroy method to call:
PostJob.perform_later(params[:id])
it should happen asynchronously. If not, you have some more set-up to do.
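For clarity, this is the destroy action from the question with just that one change applied:

def destroy
  PostJob.perform_later(params[:id])
  respond_to do |format|
    format.xml { head :ok }
    format.js { render 'posts.js.erb' }
  end
end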
For your questions:
Yes, you are right, but what you say is a tautology: everything in that method is synchronous, because with perform_now the job queue isn't used. Thus destroy isn't deleting asynchronously.
See above.
Cron jobs work on the operating system level and are scheduled regularly for particular times. You could have a cron job working every minute, for example, or every day, or week (on a particular day at a particular time). They run from a schedule file called a crontab.
'Backgrounding' a task simply stops it from taking over the IO of your terminal session. So you can carry on using the terminal while the process runs in the background. Generally this is done on an ad-hoc basis, so you don't have to wait for a heavy operation to complete before going on to do different tasks.
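To illustrate the difference (paths and times here are made up), a crontab entry runs on a fixed schedule, while a trailing & backgrounds a one-off command:

# crontab: work off the queue every day at 02:30
30 2 * * * cd /var/www/myapp && bin/rake jobs:workoff >> log/cron.log 2>&1

# backgrounding: the & gives you your terminal back immediately
rake jobs:work &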
EDIT
Based on edits to the question, it sounds like the Delayed Job daemon needs to be started. From the instructions:
Note: For Rails 4 replace script/delayed_job with bin/delayed_job
When running a queue locally, omit the 'RAILS_ENV=production' part of commands.
Running Jobs
script/delayed_job can be used to manage a background process which will start working off jobs.
To do so, add gem "daemons" to your Gemfile and make sure you've run rails generate delayed_job.
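(In a typical setup that means Gemfile lines along these lines; the ActiveRecord backend gem is an assumption about your stack:)

gem 'delayed_job_active_record'
gem 'daemons'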
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
Use the --pool option to specify a worker pool. You can use this option multiple times to start different numbers of workers for different queues.
The following command will start 1 worker for the tracking queue,
2 workers for the mailers and tasks queues, and 2 workers for any jobs:
RAILS_ENV=production script/delayed_job --pool=tracking --pool=mailers,tasks:2 --pool=*:2 start
Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
Related
I have created a Rails application that runs a background process: it pings a server periodically and displays a graph of the response time. For this I am using a gem called crono, and I start the task from the command line with 'bundle exec crono'.
How can I run the background process automatically when the rails server starts without having to start it from the command line?
Also, is there a way to automatically refresh the page periodically so that it displays an updated graph?
Edit: This application will be deployed to production.
Edit: I still couldn't get this to work. Here's the folder structure:
application/config/
  ping_job.rb
  cronotab.rb
cronotab.rb uses the 'crono' gem to execute the task inside ping_job.rb every 5 seconds.
require 'typhoeus'

class PingJob
  def perform
    # task definition goes here
  end
end
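For reference, cronotab.rb is set up roughly like this, using crono's DSL (a sketch, so treat the exact period syntax as approximate):

# cronotab.rb
Crono.perform(PingJob).every 5.seconds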
I want to run the task defined in ping_job.rb automatically when the server starts. I am thinking of using the whenever gem. Any and all suggestions are welcome.
Put it in config/environment.rb, right under Rails.application.initialize!. That file is run to start up the Rails server, so anything placed there runs after the application is initialized.
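A minimal sketch of what that could look like; the background thread and the Rails::Server guard are my assumptions about how you'd scope it, not something the crono docs prescribe:

# config/environment.rb
require_relative 'application'

Rails.application.initialize!

# Only spawn the scheduler when booting an actual server process,
# not from rake tasks or the console.
if defined?(Rails::Server)
  Thread.new { system('bundle exec crono') }
end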
Some time ago I, like you, wanted to tie the start of a background process to the start of the Rails server. In the end I found out that it is a bad idea. I think the best solution is to create a deploy task that starts and restarts the process on each deploy. For example, Capistrano allows you to do something like this:
namespace :deploy do
  task :start do
    invoke 'my_process:start'
  end

  task :stop do
    invoke 'my_process:stop'
  end

  task :restart do
    invoke 'my_process:stop'
    invoke 'my_process:start'
  end
end

namespace :my_process do
  task :start do
    execute "some system command to start the process"
  end

  task :stop do
    execute "some system command to stop the process"
  end
end
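To run this on every deploy, hook it into the flow, e.g. (a Capistrano 3 style hook; adjust to your setup):

after 'deploy:publishing', 'my_process:restart'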
Never start your process in Rails initialization files. It might start the process several times when there are a few application workers on your server, or it might start the process when you open the Rails console, and so on.
I'm running Delayed Job with the pool option like:
bundle exec bin/delayed_job -m --pool=queue1 --pool=queue2 start
Will this spawn one or multiple Rails instances? (I.e. will it spawn one instance for all the pools, or will every pool get its own Rails instance?)
When testing locally it seemed to spawn only one Rails instance for all the pools, but I want to confirm this 100% (especially on production).
I tried using commands like these to see what the DJ processes were actually pointing to:
ps aux, lsof, pstree
Anyone know for sure how this works, or an easy way to find out? I started digging through the source code, but figured someone probably knows a quicker way.
Thanks!
It should spawn multiple processes; I'm not sure why you're seeing only one.
From the readme:
Use the --pool option to specify a worker pool. You can use this option multiple times to start different numbers of workers for different queues.
The following command will start 1 worker for the tracking queue, 2 workers for the mailers and tasks queues, and 2 workers for any jobs:
RAILS_ENV=production script/delayed_job --pool=tracking --pool=mailers,tasks:2 --pool=*:2 start
Further details after discussion in comments
The question mentions "Rails instances", but instance is a generic term. The word you're looking for is process. The text quoted from DelayedJob's readme uses the word worker, short for worker process. In Rails, you usually refer to server processes as just servers, and to worker processes as just workers.
The rails console, too, is just another process.
In Rails all these processes will load the whole application, but will do different things.
Server processes will wait for incoming HTTP requests and send back responses; worker processes will periodically poll a queue (DelayedJob uses the DB) and execute jobs; the console process will start a REPL and wait for input.
They will all have access to the same code (models, DB config, assets, view template, etc), but will have very different responsibilities.
I hope this makes things clearer.
After digging through the code, the short answer is:
Running something like this:
bundle exec bin/delayed_job -m --pool=queue1 --pool=queue2 start
will start ONE Rails process/instance for ALL the pools/queues you specify.
Details below if you want more explanation:
In the Command class, this method loops through the pools and sets up the workers:
def setup_pools
  worker_index = 0
  @worker_pools.each do |queues, worker_count|
    options = @options.merge(:queues => queues)
    worker_count.times do
      process_name = "delayed_job.#{worker_index}"
      run_process(process_name, options)
      worker_index += 1
    end
  end
end
which will run the following for each worker:
def run(worker_name = nil, options = {})
  Dir.chdir(root)
  Delayed::Worker.after_fork
  Delayed::Worker.logger ||= Logger.new(File.join(@options[:log_dir], 'delayed_job.log'))
  worker = Delayed::Worker.new(options)
  worker.name_prefix = "#{worker_name} "
  worker.start
end
Each worker is daemonized, but no new Rails processes are started; it just loops through each pool/queue in its own daemon.
You can see this in the "start" method:
def start
  loop do
    self.class.lifecycle.run_callbacks(:loop, self) do
      @realtime = Benchmark.realtime do
        @result = work_off
      end
    end
    # ... (rest of the loop omitted)
  end
end
If you want to start a separate Rails instance for each queue, you could use monit and do something like:
check process delayed_job_0
  with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.0.pid
  start program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job start -i 0"
  stop program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job stop -i 0"
  group delayed_job

check process delayed_job_1
  with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.1.pid
  start program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job start -i 1"
  stop program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job stop -i 1"
  group delayed_job
I have a rake task for use with Heroku Scheduler. My issue is that when I run the rake task on Heroku, the process never fully completes; instead it exits early.
What I am trying to do is perform an action on every member of an array of 4,500 items. The method looks like this on my Action model:
def my_rake_task(some_array)
  i = 0
  while i < some_array.count do
    puts "#{i} number in #{some_array[i]}"
    i += 1
  end
end
and is set up in scheduler.rake like:
desc "Run my rake task"
task :my_rake_task => :environment do
Action.my_rake_task
end
When I run this locally, everything is fine, and I see all 4,500 items in the array properly output. When I push to Heroku and check it via heroku run rake my_rake_task, it outputs anywhere from ~1,000 to ~1,600 of the lines and then stops.
My logs look like:
2015-01-17T16:44:28.259815+00:00 heroku[api]: Starting process with command `bundle exec rake my_rake_task` by my_user_name
2015-01-17T16:44:31.729533+00:00 heroku[run.8495]: Awaiting client
2015-01-17T16:44:31.765515+00:00 heroku[run.8495]: Starting process with command `bundle exec rake my_rake_task`
2015-01-17T16:44:32.155205+00:00 heroku[run.8495]: State changed from starting to up
2015-01-17T16:45:31.914275+00:00 heroku[run.8495]: State changed from up to complete
2015-01-17T16:45:31.896364+00:00 heroku[run.8495]: Process exited with status 0
Could this be because I'm only using one dyno, or is there some other reason? Any insight is much appreciated.
Based on the limited information you provided, I have a few suggestions to help ensure your tasks run as intended.
First, when you run rake tasks on Heroku, especially long-running ones, run them detached. This ensures the connection to your terminal does not interfere with the task; for instance, if you close your terminal before a non-detached task completes, the task is ended prematurely.
heroku run:detached rake my_rake_task
Secondly, if you are checking your logs and only see ~1,600 lines of output, make sure you are requesting more of the log:
heroku logs -n 5000
Third, you should be using delayed_job (https://github.com/collectiveidea/delayed_job), which will put everything in a queue, run tasks on worker dynos, give you error reporting, and keep full logs for each task/job. There is a RailsCast on Delayed Job here: http://railscasts.com/episodes/171-delayed-job
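As a sketch of that approach (the class name and my_array are illustrative, not from your code), the loop could move into a Delayed Job payload object:

# A plain Ruby job object; Delayed Job serializes it into the queue.
class ArrayPrinterJob < Struct.new(:some_array)
  def perform
    some_array.each_with_index do |item, i|
      puts "#{i} number in #{item}"
    end
  end
end

# Enqueue it from the rake task (or anywhere else):
Delayed::Job.enqueue ArrayPrinterJob.new(my_array)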
I want to start workers for the job right after a certain method. I start the application with the usual rails s, upload some stuff so the create method is invoked, and after create the :perform_analysis method is delayed: some data is inserted into the delayed_jobs table. Normally I start the workers by typing script/delayed_job start on the command line, but I would like the workers to start automatically, so that I type nothing.
model:
after_create :perform_analysis

def perform_analysis
  bla
end
handle_asynchronously :perform_analysis, :run_at => Proc.new { 5.minutes.from_now }
So I run the application with rails s, log in on my web page, and upload some files; after 5 minutes the jobs become due. Then the worker should start to work.
I have found this page that does almost what I want, but somehow the workers do not start at all, so schedule.rb is never run. Should I do something more than what that page describes?
Is there any other way to do it?
I recommend you take a look at Foreman (http://ddollar.github.com/foreman/) and have your Procfile declare a worker process:
web: bundle exec rails s
worker: bundle exec rake jobs:work
This way, a single foreman start command will start both the server and the worker, with the output of both presented in the same window.
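If you don't have Foreman yet, it installs as a plain gem; its README recommends installing it standalone rather than adding it to your Gemfile:

gem install foreman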
I followed the RailsCast that uses CollectiveIdea's fork, but I'm not able to get it to work. I created a new file in my /lib folder and included this:
class Device
  def deliver
    # my long running method
  end
  handle_asynchronously :deliver
end

device = Device.new
device.deliver
I run script/delayed_job, and that forks an app instance. Now:
There's no job activity going on. Nothing in the delayed_jobs table and nothing in the logs. Am I missing something here?
How do I set the interval at which the method should be run (e.g. every 30 seconds)?
I'm testing this in development mode (Rails 2.3.2) and will soon be moving it into production.
Thanks !
Do you see a process for the script/delayed_job that you ran? Do a ps aux | grep delayed_job and see if there is a process running.
AFAIK, you cannot set any time intervals using Delayed Job.
As a first step to diagnose the problem:
Stop your job workers
Launch a delayed job
Check whether it is present in the database.
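For steps 2 and 3, a quick check from script/console (this is the Rails 2.3 era; Device is the class from the question):

device = Device.new
device.deliver        # with handle_asynchronously this enqueues instead of running
Delayed::Job.count    # should now be at least 1 if enqueuing works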