I am a newbie to Ruby on Rails development.
Can someone please explain to me what this command line does: bundle exec rake jobs:work?
I don't understand what a worker is or what this command does.
Can someone give me some examples?
Thank you
In Ruby, because of the GIL (global interpreter lock), only one Ruby thread can run at a time (multithreading is supported, but threads only make progress concurrently while doing IO). To work around this and make things asynchronous, people use Sidekiq, DelayedJob, etc.
A worker, in this terminology, is a separate background Ruby process that processes the jobs (a.k.a. tasks) you put into a queue. If you use DelayedJob, bundle exec rake jobs:work starts such a process (other background-job gems use other commands).
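For illustration, here is a minimal sketch of how jobs get onto the queue with delayed_job; the LinkChecker and Report classes and their methods are made-up placeholders, while delay and handle_asynchronously are part of the delayed_job API:

# Enqueue a single call instead of running it inline
# (LinkChecker and #scan are hypothetical placeholders for your own code).
checker = LinkChecker.new
checker.delay.scan("http://example.com")

# Or mark a method so that every call to it is processed in the background
# (Report and #generate are also placeholders).
class Report
  def generate
    # ... slow work ...
  end
  handle_asynchronously :generate
end

Either way, the job is stored in the delayed_jobs table until a worker process started with bundle exec rake jobs:work picks it up and runs it.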
Related
I have a Rails 4.2 application and I want to execute some one-off code when the server starts.
My first approach was to use an initializer in config/initializers to run the code. But in that case the code is also executed for all rake tasks and for the Sidekiq process.
So I've created a rake task to run my code. Now I wonder how to execute it along with rails server startup. Of course, I can create a shell script that executes my rake task and then starts the server. But is there any Rails way to achieve this?
Foreman is another approach advised by SO, but it's not working for me, as my task is not a daemon and the process terminates immediately after completion. Apparently all the processes in a Procfile have to be daemonized.
This can be achieved with Foreman by running the one-off task in the same process as a daemonized task (e.g. rails server).
If the one-off task is rake lego:update_all,
then the corresponding Procfile is:
web: rake lego:update_all && rails s
sidekiq: bundle exec sidekiq
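For completeness, such a one-off task would live under lib/tasks; the body below is purely hypothetical and only shows the shape of the task:

# lib/tasks/lego.rake -- hypothetical body for the one-off task
namespace :lego do
  desc "One-off code to run before the web process starts"
  task update_all: :environment do
    # The :environment dependency loads the Rails app, so models are available here.
    puts "Running one-off startup code..."
  end
end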
With the delayed_job gem (https://github.com/collectiveidea/delayed_job) in Rails, I am able to queue my notifications. But I don't quite understand how I can run the queued jobs on the production server. I know I can just run
$ rake jobs:work
in a console on the local server. As the documentation says,
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
# Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
# Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
# or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
My question is how to integrate this with my Rails app. I was thinking of creating a file called delayed_jobs.rb in config/initializers:
# in config/initializers/delayed_jobs
script/delayed_job start if Rails.env.production?
But I am not sure if this is the right way to do it. Thanks
The workers run as separate processes, not as part of your Rails application. The simplest way would be to run the rake task in a screen session to prevent it from quitting when you log out of the terminal session. But there are better ways:
You could use a process-monitoring system such as monit or God, or run the worker script provided by delayed_job. You'll find more information in the answers to this question.
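As an illustration of the monitoring approach, here is a minimal God watch file; the application path is an assumption, and script/delayed_job needs the daemons gem to be installed:

# delayed_job.god -- minimal sketch; the app path is an assumption
RAILS_ROOT = "/var/www/myapp/current"

God.watch do |w|
  w.name     = "delayed_job"
  w.start    = "cd #{RAILS_ROOT} && RAILS_ENV=production script/delayed_job start"
  w.stop     = "cd #{RAILS_ROOT} && RAILS_ENV=production script/delayed_job stop"
  w.pid_file = "#{RAILS_ROOT}/tmp/pids/delayed_job.pid"
  w.behavior(:clean_pid_file)
  w.keepalive
end

You would then load it with god -c delayed_job.god, and God restarts the worker if it dies.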
In my experience I found my solution using the capistrano gem, which, in the words of the official doc,
It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows.
Basically it is a tool that helps you deploy your app, including all of those tasks like starting/stopping queues, migrating the database, bundling new gems, and all the other things we usually do over an ssh connection.
Here is a beautiful tutorial about capistrano with webfaction as the host. And here is a nice module to blend capistrano and delayed_job. In the end you only need to be concerned with the development environment, because every time you need to deploy to production, you commit to your repository and then
$ cap production deploy
Which will manage the whole production environment: stopping/restarting those queues, restarting the app, installing gems, and everything else you can perform through Capistrano's scripting.
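If you go this route, the collectiveidea delayed_job gem has historically shipped Capistrano (v2) recipes that define delayed_job:start/stop/restart tasks; assuming your versions include them, wiring them into config/deploy.rb looks roughly like this:

# config/deploy.rb -- sketch, assuming Capistrano 2 and delayed_job's bundled recipes
require "delayed/recipes"

after "deploy:stop",    "delayed_job:stop"
after "deploy:start",   "delayed_job:start"
after "deploy:restart", "delayed_job:restart"

With hooks like these in place, cap production deploy restarts the workers as part of every deploy.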
New to Rails and very new to Delayed Jobs.
Got one that's supposed to be triggered after 5 minutes. I finally got it to work so that if I run
rake jobs:work
in my terminal, the job starts up and works correctly. If I CTRL-C and exit that action in my terminal, the delayed job stops working correctly. This is one thing on my local server and another on Heroku, where I have to start up the delayed job using
heroku run rake jobs:work
I looked into the new Heroku toolbelt and downloaded the gem they suggest for worker maintenance, foreman, but when I run "foreman start", I get this error
ERROR: procfile does not exist
I don't know what a Procfile is, I'm afraid of breaking things after spending pretty much a day debugging my delayed_job actions, and I want to do this right to make sure it works instead of figuring out some hacky fix that breaks down later -- so I figured I should ask this question, however obnoxiously vague it may be.
Should I be using foreman for this? Or workless? (Saw that in another SO question). Where's my procfile? Should I do anything with it?
Thanks,
Sasha
You should be using a Procfile to set up your Heroku processes; this is the standard method Heroku uses to define and control them.
If you haven't used a Procfile up to this point, everything will probably still work, as Heroku adds some default processes when you push a Rails app, including both the web and worker processes. The default worker process is set to delayed_job.
Foreman was developed to let your local machine use the same approach, but, unlike the Heroku service, Foreman actually requires a Procfile to be present to control the services that are started when it runs, as it doesn't know how to set up defaults.
I would suggest creating a Procfile, placed in the root directory of your project, to ensure that your processes are set up and operate in the same manner on your local machine as on Heroku. If you want to mimic what Heroku sets up automatically, add the following to the Procfile, depending on whether you are using the Thin web server (which Heroku recommends) or not.
With Thin in your Gemfile:
web: bundle exec thin start -R config.ru -e $RACK_ENV -p $PORT
worker: bundle exec rake jobs:work
Without a special web server (e.g. you are using WEBrick, the Rails default):
web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work
Once this file is in place you can run foreman on your local machine and it will start your web server and delayed_job workers automatically.
Running through this process will only affect how delayed_job is started on the local machine. As you are running the exact same command, bundle exec rake jobs:work, that you are currently using, there should be no impact on your DJ actions either locally or on Heroku. Obviously some testing is required to make sure this is actually the case.
Workless is designed to scale workers on Heroku so that you don't have to pay for them when there is no work available. It has no bearing on the Procfile or on defining how to actually start a DJ worker process.
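As a side note on the "triggered after 5 minutes" part of the question: delayed_job lets you set run_at when enqueuing, roughly like this (reminder and #send_reminder are placeholders for your own code):

# Enqueue a job that no worker will pick up until 5 minutes from now.
reminder.delay(run_at: 5.minutes.from_now).send_reminder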
As far as I know, there are two versions of delayed_job:
the original (tobi's): https://github.com/tobi/delayed_job
collectiveidea's fork: https://github.com/collectiveidea/delayed_job
When using the collectiveidea version, you should start it as below:
# Runs two workers in separate processes.
$ RAILS_ENV=production script/delayed_job -n 2 start
I am not familiar with delayed_job on Heroku; please follow its instructions.
I have a Rails 3.x application with Resque. I run the resque command with:
nohup rake RAILS_ENV=production environment resque:work QUEUE='*' & >>/tmp/resque.log 2>> /tmp/resque.err.log
Every other day the process dies, but the two output files are always empty. Any other way of figuring out why the Resque process goes down?
Try the super awesome Pry console. It is similar to irb, only much more advanced.
You can use binding.pry inside your perform method, or preferably in a hook, which will open a Pry console you can use to debug. It has helped me a lot in similar situations.
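As an illustration (the job class and queue name are made up), a Resque failure hook with binding.pry would look roughly like this:

require 'pry'

# Hypothetical Resque job, shown only to illustrate where binding.pry can go.
class ImportJob
  @queue = :imports

  def self.perform(feed_id)
    # ... the actual work; a binding.pry here would pause every run ...
  end

  # Resque calls any class method whose name starts with `on_failure`
  # when perform raises, passing the exception and the job arguments.
  def self.on_failure_debug(exception, *args)
    binding.pry   # inspect `exception` and `args` interactively
  end
end

Note that an interactive breakpoint like this is only useful when the worker runs in the foreground in a terminal you can attach to (e.g. rake resque:work), not in a daemonized or nohup'd process.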