how to make resque tasks into capistrano recipe - ruby-on-rails

I have two Resque commands that I'd like to implement in Capistrano so I can run them successfully on the server. I've checked by running them manually that they both work; however, if I try to keep them continuously running this way I'll end up with a broken pipe.
I'd like to be able to start resque:
QUEUE=* rake environment resque:work
and start resque-scheduler:
rake environment resque:scheduler
Does anybody know how I can implement this in my deploy.rb file?

Try the capistrano-resque gem which should do exactly this (it includes support for resque-scheduler).
After setting it up, you'll get these Capistrano tasks:
➔ cap -vT | grep resque
cap resque:status # Check workers status
cap resque:start # Start Resque workers
cap resque:stop # Quit running Resque workers
cap resque:restart # Restart running Resque workers
cap resque:scheduler:restart #
cap resque:scheduler:start # Starts Resque Scheduler with default configs
cap resque:scheduler:stop # Stops Resque Schedule
(I currently help maintain this gem, so if you have any trouble with it just file an issue and I'll take a look).
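Setup is only a few lines of configuration. A sketch based on the gem's README (server names and worker counts are placeholders; check the README for your Capistrano version):

```ruby
# Gemfile
gem 'capistrano-resque', require: false

# Capfile
require 'capistrano-resque'

# config/deploy.rb
role :resque_worker, 'app.example.com'
role :resque_scheduler, 'app.example.com'

set :workers, { '*' => 1 }          # one worker consuming all queues
set :resque_environment_task, true  # runs `rake environment resque:work`

# optionally hook the workers into the deploy flow
after 'deploy:restart', 'resque:restart'
```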

Related

Run Rails Rake task on Heroku Scheduler as detached to capture log output in Papertrail

A known problem with running Rails Rake tasks on Heroku is that they don't submit their logs to Papertrail since the one-off dynos push their output to the console by default. This is solved by running your dyno in "detached" mode by using heroku run:detached rake your:task. Unfortunately, the Heroku Scheduler appears to automatically run tasks as normal instead of in detached mode so these logs are lost.
How can you make the scheduler run a task in "detached" mode so these weekly/daily/hourly tasks get their logs captured by Papertrail as expected?
You can use Sidekiq. This gem will help you run processes on a schedule, and inside your Sidekiq worker you can run rake tasks:
https://github.com/mperham/sidekiq
Example:
class MySidekiqTask
  include Sidekiq::Worker

  def perform
    # Rake is not loaded in the worker process by default
    require 'rake'

    # Look up the application class and load its rake tasks
    application_name = Rails.application.class.parent_name
    application = Object.const_get(application_name)
    application::Application.load_tasks

    Rake::Task['db:migrate'].invoke
  end
end
A good guide on how to set up Sidekiq on a Heroku server:
https://itnext.io/sidekiq-overview-and-how-to-deploy-it-to-heroku-b8811fea9347
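Note that Sidekiq on its own does not schedule jobs; a scheduling extension such as the sidekiq-cron gem (an assumption here, not mentioned above) can enqueue the worker on a cron schedule. A minimal sketch, with the job name and schedule as placeholders:

```ruby
# config/initializers/sidekiq_cron.rb
# Assumes the sidekiq-cron gem is in the Gemfile.
require 'sidekiq/cron/job'

Sidekiq::Cron::Job.create(
  name:  'nightly-task',   # unique job name
  cron:  '0 2 * * *',      # every day at 02:00
  class: 'MySidekiqTask'   # the worker defined above
)
```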

How to run delayed jobs in production in Rails 4.2 without running rake jobs command?

In development mode, we use rake jobs:work. In the same way, in order to test in production mode, we use RAILS_ENV=production rake jobs:work.
Since my entire application runs behind an Apache/Nginx server, is there any option, such as a gem or some code that runs in the background, to process the jobs without running this command manually?
Delayed Job is great if you don't have Redis; if you are already using Redis, I would recommend Sidekiq over Delayed Job. The main difference is that Delayed Job is an SQL-based job worker while Sidekiq uses Redis.
Check out the Sidekiq: Getting Started guide for more information about using Sidekiq.
Delayed Job also comes with a script to run jobs in the background.
From the README: Running Jobs
script/delayed_job can be used to manage a background process which will start working off jobs.
To do so, add gem "daemons" to your Gemfile and make sure you've run rails generate delayed_job.
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
# Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
# Use the --pool option to specify a worker pool. You can use this option multiple times to start different numbers of workers for different queues.
# The following command will start 1 worker for the tracking queue,
# 2 workers for the mailers and tasks queues, and 2 workers for any jobs:
RAILS_ENV=production script/delayed_job --pool=tracking --pool=mailers,tasks:2 --pool=*:2 start
# Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
# or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
Rails 4: replace script/delayed_job with bin/delayed_job
Workers can be running on any computer, as long as they have access to the
database and their clock is in sync. Keep in mind that each worker will check
the database at least every 5 seconds.
You can also invoke rake jobs:work which will start working off jobs. You can
cancel the rake task with CTRL-C.
If you want to just run all available jobs and exit you can use rake jobs:workoff
Work off queues by setting the QUEUE or QUEUES environment variable.
QUEUE=tracking rake jobs:work
QUEUES=mailers,tasks rake jobs:work
A couple more things:
You should always specify which queues to run.
You'll also need to ensure the script is run when your app is deployed. (You can do this with Capistrano, Mina, Foreman, upstart and many other ways.)
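With Capistrano 3, for example, a restart hook might look like this. This is a sketch, not taken from any particular gem; the role name, path helpers, and queue names are assumptions to adapt:

```ruby
# config/deploy.rb (Capistrano 3)
namespace :delayed_job do
  desc 'Restart delayed_job workers'
  task :restart do
    on roles(:app) do
      within current_path do
        with rails_env: fetch(:rails_env, 'production') do
          # bin/delayed_job on Rails 4+, script/delayed_job on earlier versions
          execute :bundle, :exec, 'bin/delayed_job', '--queues=mailers,tasks', 'restart'
        end
      end
    end
  end
end

# restart workers after each successful deploy
after 'deploy:published', 'delayed_job:restart'
```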
Miad is correct, Sidekiq is likely what you are looking for, unless you are literally talking about using the delayed_job gem, which is another queue adapter. Sidekiq is basically a queue adapter that connects Rails' ActiveJob to Redis. You can create jobs with ActiveJob and kick them off by calling perform_later on your job class. This will enqueue them in a Sidekiq queue, pass them to Redis, and they will be performed asynchronously. Your code might look something like this:
in app/jobs/your_job.rb
class YourJob < ActiveJob::Base
  # specify the name of your queue
  queue_as :the_queue

  # you must define perform; this is where the async magic happens
  def perform(something)
    do_stuff_to(something)
  end
end
in app/models/place_where_job_is_kicked_off.rb
class PlaceWhereJobIsKickedOff
  def do_the_jobs
    Something.all.each do |something|
      # Enqueue each job to be performed as soon as the queueing system is free.
      # Concurrency is set in your sidekiq.yml.
      # Each job runs asynchronously, so watch out for race conditions.
      YourJob.perform_later(something)
    end
  end
end
in config/environments/production.rb
Rails.application.configure do
  # other production configs...

  # set the ActiveJob queue adapter to Sidekiq
  config.active_job.queue_adapter = :sidekiq

  # other production configs...
end
in config/sidekiq.yml
:verbose: true
:pidfile: tmp/pids/sidekiq.pid
:logfile: log/sidekiq.log
# up to 5 jobs can run simultaneously
:concurrency: 5
:queues:
  # queue names go here: [name, weight]
  - [the_queue, 5]
Make sure that you have Redis installed and running on your production server, and that Sidekiq is running. After adding the sidekiq gem to your Gemfile and running bundle install, run:
sidekiq -C path/to/sidekiq.yml -d (-e environment)
This will start Sidekiq as a daemon process.
I think what you are looking for is the sidekiq gem. It is used to run jobs asynchronously.
http://sidekiq.org

How to run Rails delayed_job with Passenger in production? [duplicate]

With the delayed_job gem (https://github.com/collectiveidea/delayed_job) in Rails, I am able to queue my notifications. But I don't quite understand how I can run the queued jobs on the production server. I know I can just run
$ rake jobs:work
in the console on my local server. As the documentation says,
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
# Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
# Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
# or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
My question is: how do I integrate it with my Rails app? I was thinking of creating a file called delayed_jobs.rb in config/initializers:
# in config/initializers/delayed_jobs
script/delayed_job start if Rails.env.production?
But I am not sure if this is the right way to do it. Thanks
The workers run as separate processes, not as part of your Rails application. The simplest way would be to run the rake task in a screen session to prevent it from quitting when you log out of the terminal session. But there are better ways:
You could use a system such as Monit or God, or run the worker script provided by delayed_job. You'll find more information in the answers to this question.
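For example, a God configuration to keep the worker alive might look like this. A sketch only: the paths and app location are placeholders:

```ruby
# config/delayed_job.god -- adjust paths for your deployment
rails_root = '/var/www/myapp/current'

God.watch do |w|
  w.name  = 'delayed_job'
  w.start = "cd #{rails_root} && RAILS_ENV=production bin/delayed_job start"
  w.stop  = "cd #{rails_root} && RAILS_ENV=production bin/delayed_job stop"
  w.pid_file = File.join(rails_root, 'tmp/pids/delayed_job.pid')
  w.behavior(:clean_pid_file)
  # restart the worker process if it dies
  w.keepalive
end
```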
In my experience I found my solution using the capistrano gem, which, in the words of the official docs,
It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows.
Basically it is a tool that helps you deploy your app, including all of those tasks like starting/stopping queues, migrating the database, bundling new gems, and all of the things that we usually do over an SSH connection.
Here is a beautiful tutorial about Capistrano with Webfaction as the host, and here is a nice module to blend Capistrano and delayed_job. In the end you should only be concerned with the development environment, because every time you need to deploy to production, you'll commit to your repository and then run
$ cap production deploy
which will manage the whole production environment: stopping/restarting those queues, restarting the app, installing gems, and everything else that you can perform through Capistrano's scripting.

How do you always have delayed job running on heroku?

I have an app on Heroku running delayed jobs. However at the moment I have to start the job queue running with the terminal command:
heroku rake jobs:work
...but this means when I shut down my terminal the app's delayed job queue shuts down too.
Is there a way I can get Heroku to just always start and run delayed job in the background when the app starts up? Without having to run the command each time and without having it directly linked to my terminal shell?
Thanks very much.
Edit:
It's on the Bamboo stack. After upping workers or running rake jobs:work, the delayed jobs run for a while, but then the queue just stops getting processed. There are no errors in the delayed job queue; the workers simply stop processing jobs. It has to be explicitly restarted every 5 or 10 minutes.
From the docs:
On Heroku's Aspen or Bamboo stack, use heroku workers 1
On the Cedar stack, you put this line in your Procfile:
worker: bundle exec rake jobs:work
And then do heroku scale worker=1.
We use the workless gem with our Heroku stack. It starts a worker when the delayed_job queue is greater than 0 and quits the worker when the queue goes back to 0.
It turns out that I was using the wrong rake gem version.
The following was causing issues with Rails 3 on Heroku:
gem 'rake', '0.9.2'
Pinning the gem to an older version fixed the issues, even though there were no errors in the log:
gem "rake", "0.8.7"
