Automate a rake task to run on boot on heroku? - ruby-on-rails

Suppose there's a task
rake startupscript
that should run whenever the app boots, how can we automate that on heroku?
I know there's the Heroku Scheduler add-on, but that will run the task every 10 minutes instead of just once at boot. I also know of the Procfile and believe it could be a solution, although I do not yet know how to implement it (and, probably more importantly, I don't want to risk breaking anything else that can be configured via a Procfile, e.g. the web server). A lot of the Procfile docs focus on using it to configure web servers rather than app-level rake tasks.
How can a rake task be made to run at boot?

You can add something like this to your Procfile, before you start your application services:
# Run pre-release-tasks here
release: bundle exec rails db:migrate
# Then run your application
web: bundle exec puma -t 5:5 -p ${PORT:-3000} -e ${RACK_ENV:-development}
Anything defined under release will run after the build finishes and before your app's dynos are booted on the new release:
https://devcenter.heroku.com/articles/release-phase
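If the goal is to run the custom task from the question rather than (or alongside) migrations, the release entry can chain commands. A minimal sketch, assuming the task is named startupscript as in the question; keep in mind the release phase runs once per release (deploy, config change, rollback), not on every dyno restart:
release: bundle exec rails db:migrate && bundle exec rake startupscript
web: bundle exec puma -t 5:5 -p ${PORT:-3000} -e ${RACK_ENV:-development}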

Related

How do Heroku and Sidekiq run the jobs outside of my main dyno?

I'm confused how heroku and sidekiq work. My Procfile looks like:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e $RAILS_ENV
Now inside my Rails app I enqueue my Sidekiq jobs in my code like:
SomeWorker.perform_async(some.id)
Now will this automatically somehow make this process run in the worker dyno?
If yes, how does it just know to run this out of process?
It is confusing because when I am in my main git folder I can run heroku commands and I know these are for my web dyno, but how do I then see the logs for my worker dyno, or will they show up in the same dyno logs?
When you set up your Procfile, you're telling Heroku to set up 2 types of dynos: web and worker. It's likely that these use the same Rails app code but start up with different commands (bundle exec puma vs. bundle exec sidekiq). You can then scale however many VMs (dynos) you need for each type of process.
The glue that holds the two together is Redis. When you run SomeWorker.perform_async(some.id) from your web process, you're adding a record to Redis describing a job to run. Your worker process watches Redis for new records and then processes them.
The Heroku logs show logs from all running dynos. So you should see logs from both your web and worker processes mixed in together.
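To make that concrete, here is a minimal sketch of what SomeWorker might look like (the class body is illustrative, not from the question): perform_async only serializes the arguments into Redis, and the sidekiq process on the worker dyno pulls the job off the queue and calls perform.
class SomeWorker
  include Sidekiq::Worker

  # Runs inside the sidekiq process on the worker dyno, not in the web dyno.
  def perform(some_id)
    # Look up the record by id and do the slow work here.
  end
end

# Enqueued from the web process; this only writes a JSON payload to Redis:
# SomeWorker.perform_async(some.id)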

How to run Rails delayed_job with Passenger in production? [duplicate]

With the delayed_job gem (https://github.com/collectiveidea/delayed_job) in Rails, I am able to queue my notifications. But I don't quite understand how I can run the queued jobs on the production server. I know I can just run
$ rake jobs:work
in the console for the local server. As the documentation says,
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
# Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
# Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
# or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
My question is how to integrate it with my Rails app? I was thinking of creating a file called delayed_jobs.rb in config/initializers as:
# in config/initializers/delayed_jobs
script/delayed_job start if Rails.env.production?
But I am not sure if it is the right way to do it. Thanks
The workers run as separate processes, not as part of your Rails application. The simplest way would be to run the rake task in a screen session to prevent it from quitting when you log out of the terminal session. But there are better ways:
You would use a process supervisor such as Monit or God, or run the worker script provided by delayed_job. You'll find more information in the answers to this question.
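For reference, the worker script mentioned above is generated by delayed_job itself and relies on the daemons gem; a minimal setup sketch (per the collectiveidea README; on older Rails versions the generated path is script/delayed_job rather than bin/delayed_job):
# Gemfile
gem 'delayed_job_active_record'
gem 'daemons'   # required by the bin/delayed_job daemon wrapper

# then, from the shell:
#   rails generate delayed_job               # creates bin/delayed_job
#   RAILS_ENV=production bin/delayed_job start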
In my experience I found my solution using the Capistrano gem, which, in the words of the official docs,
It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows.
Basically it is a tool that helps you deploy your app, including all of those tasks like starting/stopping queues, migrating the database, bundling new gems, and all of those things that we usually do over an SSH connection.
Here is a beautiful tutorial about Capistrano with WebFaction as the host, and here is a nice module to blend Capistrano and delayed_job. In the end you should only be concerned with the development environment, because every time you need to deploy to production, you'll commit to your repository and then run
$ cap production deploy
which will manage the whole production environment: stopping/restarting those queues, restarting the app, installing gems, and everything else you can script the Capistrano way.
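If you don't want to pull in an extra module, a bare-bones Capistrano 3 hook can restart the workers after each deploy. This is an illustrative sketch only; the task name and the bin/delayed_job path are assumptions to adapt to your setup:
# config/deploy.rb
namespace :delayed_job do
  desc 'Restart delayed_job workers after a deploy'
  task :restart do
    on roles(:app) do
      within current_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, :exec, 'bin/delayed_job', 'restart'
        end
      end
    end
  end
end

after 'deploy:published', 'delayed_job:restart'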

How do I keep sidekiq always up on Heroku?

Whenever I run heroku run bundle exec sidekiq, I see all my background jobs being processed; however, I want them to keep running without me needing to be there. When I exit out of that terminal tab, Sidekiq stops working. How would I mitigate that?
Also, I've read something about procfiles and increasing workers. I don't know what procfiles are and I don't know how to increase workers either.
Basically, I'm a newbie trying to get sidekiq set up to run on Heroku for my Rails app. I want it to be running at all times.
Create a file named ./Procfile with the following in it:
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq
sidekiq on Heroku
more on Procfiles
foreman gem
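Note that the Procfile only defines the process types; after deploying it you still have to start at least one worker dyno (the dyno count below is just an example):
heroku ps:scale worker=1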

ruby run code in separate process from webapp

I have a file with a worker class (worker.rb) and I need to instantiate it in a separate process from the Rails application after receiving a command. I'm currently working on Windows.
So how do I do that?
P.S. Will that code work in a Unix/Linux environment?
Check out foreman
https://github.com/ddollar/foreman
You can put a Procfile in your Rails root with instructions for starting both your Rails server and your worker, and then run foreman start to launch them. Here is a sample Procfile:
web: bundle exec unicorn_rails -p 8088
scheduler: bundle exec rake resque:scheduler
worker: bundle exec rake resque:work
Foreman is compatible with both Windows and Linux, so it should work regardless of your platform.
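As a rough sketch of how the worker file fits in with the Procfile above (the class name, queue, and arguments are illustrative, not part of the question), a Resque-style worker might look like this:
# worker.rb
class Worker
  @queue = :default   # queue watched by the `rake resque:work` process

  def self.perform(command)
    # Handle the command sent over from the web app here.
  end
end

# Enqueued from the Rails app with:
# Resque.enqueue(Worker, 'some command')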

Delay Job Worker keeps turning off?

New to Rails and very new to Delayed Jobs.
Got one that's supposed to be triggered after 5 minutes. I finally got it to work so that if I run
rake jobs:work
in my terminal, the job starts up and works correctly. If I Ctrl-C and exit that process in my terminal, the delayed job stops working. That's one thing on my local server, but on Heroku I have to start up the delayed job using
heroku run rake jobs:work
I looked into the new Heroku Toolbelt and downloaded the gem it suggests for worker maintenance, foreman, but when I run "foreman start" I get this error:
ERROR: procfile does not exist
I don't know what a Procfile is, I'm afraid of breaking things after spending pretty much a day debugging my delayed_job actions, and I want to do this right to make sure it works instead of figuring out some hacky fix that breaks down later -- so I figured I should ask this question, however obnoxiously vague it may be.
Should I be using foreman for this? Or workless? (I saw that in another SO question.) Where's my Procfile? Should I do anything with it?
Thanks,
Sasha
You should be using a Procfile to set up your Heroku processes; this is the standard method Heroku uses to define and control them.
If you haven't utilised a Procfile to this point, everything will probably still work, as Heroku adds some default processes when you push a Rails app, including both the web and worker processes. The default worker process is set to delayed_job.
Foreman was developed to set up your local machine to use the same approach, but unlike the Heroku service, Foreman actually requires a Procfile to be present to control the services that are started when it runs, as it doesn't know how to set up defaults.
I would suggest creating a Procfile, placed in the root directory of your project, to ensure that your processes are set up and operating in the same manner on your local machine as on Heroku. If you want to mimic what Heroku sets up automatically, add the following to the Procfile, depending on whether or not you are using the Thin web server (which Heroku recommends).
With Thin in your gemfile:
web: bundle exec thin start -R config.ru -e $RACK_ENV -p $PORT
worker: bundle exec rake jobs:work
Without a special web server (e.g. you are using WEBrick, the Rails default):
web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work
Once this file is in place you can run foreman on your local machine and it will start your web server and delayed_job workers automatically.
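With that file in place, starting everything locally is a single command:
$ foreman start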
Running through this process will only affect how delayed_job is started on the local machine. As you are running the exact same command, bundle exec rake jobs:work, as you are currently using, there should be no impact on your dj actions either locally or on Heroku. Obviously some testing is required to make sure this is actually the case.
Workless is designed to scale workers on Heroku so that you don't have to pay for them when there is no work available. It has no bearing on the procfile or defining how to actually start a dj worker process.
As far as I know, there are 2 versions of delayed_job:
original (tobi's): https://github.com/tobi/delayed_job
collectiveidea's fork: https://github.com/collectiveidea/delayed_job
When using the collectiveidea version, you should start it as below:
# Runs two workers in separate processes.
$ RAILS_ENV=production script/delayed_job -n 2 start
I am not familiar with delayed_job on Heroku; please follow its instructions there.
