heroku restart/bounce kills background workers - ruby-on-rails

We have background (Resque) jobs on Heroku for our Ruby on Rails application.
When Heroku bounces the box, as it did yesterday, our background jobs are lost.
Our background jobs run for about 2 to 6 hours.
Is there any way to keep them running, or to restart them automatically after a failure?

In a nutshell, no, there's no easy way (if by easy you mean transparent to you, the app developer).
The best way to handle such interruptions is to save the job state properly on receipt of the SIGTERM signal from the dyno manifold. Your worker dyno will be reconstituted in a different physical location, where it can resume processing interrupted jobs.
While this involves extra development effort, the result is a more robust and resilient app (no matter what the underlying platform).
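One way to do that with Resque, sketched below, is to checkpoint progress as the job runs and re-enqueue from the last checkpoint when the signal arrives. The job class, each_chunk helper and offset checkpoint are hypothetical; the real mechanism is Resque::TermException, which the job process receives on SIGTERM when the worker is started with TERM_CHILD=1 (Resque 1.22+):

    # Procfile (example):
    #   worker: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=7 bundle exec rake resque:work QUEUE=*

    class LongCrunchJob
      @queue = :long_jobs

      def self.perform(record_id, start_offset = 0)
        offset = start_offset
        record = CrunchRecord.find(record_id)     # hypothetical model
        record.each_chunk(from: offset) do |chunk, chunk_offset|
          chunk.process!
          offset = chunk_offset                   # checkpoint progress as you go
        end
      rescue Resque::TermException
        # The dyno is being cycled: re-enqueue from the last checkpoint so the
        # replacement worker dyno resumes instead of starting over.
        Resque.enqueue(LongCrunchJob, record_id, offset)
      end
    end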

Related

Scaling Dyno worker size dynamically on Heroku Rails application

I am working on a project that launches a process via a Rails worker that is very resource-intensive; it can only be handled properly by a Performance worker on Heroku. 1X workers are killed because they use too much RAM, and 2X workers can barely handle the load, exceeding their RAM limit by up to 160%. A Performance worker does the job fine with no issues.
My question is: is there a way to dynamically switch the dyno size to Performance before a job initiates, and then scale it back down once the job is finished or the queue is empty?
I know HireFire exists, but to my knowledge that service only increases the number of workers based on queue length and the like. Another possible solution I thought about was using the Heroku API, which has a Dyno endpoint, to resize the worker dyno before the job starts and then resize it back down when the job ends.
Does anyone else have other recommendations, ideas or strategies for this issue?
Thanks!
The best way is the one you mentioned: use the Heroku Platform API to scale your Dyno size up before starting the job, and then down again afterwards.
This is because tools like HireFire only work by inspecting things like application response time, router queue, etc., so there's no way for them to know you're about to run some job and scale up just for that.
Depending on the specifics of your usage, you may be able to create a distinct dyno type in your Procfile that only runs this particular worker and is always scaled to Performance, but isn't always running. You could even run it with one-off runs instead of scaling it (this can also be done via the API, roughly equivalent to heroku run ...). That said, @rdegges' answer should certainly work.
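For the scale-up/scale-down route, a rough sketch with the platform-api gem might look like the following; the app name, dyno sizes and HeavyJob are placeholders, and the scale-down call belongs wherever you can tell the queue has drained (e.g. at the end of the job itself):

    require 'platform-api'   # gem 'platform-api'

    heroku    = PlatformAPI.connect_oauth(ENV['HEROKU_API_TOKEN'])
    app       = 'your-app-name'
    record_id = 42           # example argument for the job

    # Bump the worker formation to a performance dyno before enqueueing...
    heroku.formation.update(app, 'worker', 'size' => 'performance-m', 'quantity' => 1)
    HeavyJob.perform_async(record_id)   # hypothetical enqueue call

    # ...and scale back down from the job's ensure block (or a monitor) once
    # the job finishes or the queue is empty:
    # heroku.formation.update(app, 'worker', 'size' => 'standard-2x', 'quantity' => 1)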

Is it possible to redeploy a Heroku app without restarting some process types

I'm running a Rails app on Heroku, and I have defined a custom process type to perform some long-running jobs. Really long-running: a job can easily take an hour or more. I know it would be better to split it into small chunks, but that's quite problematic for this task.
The issue is that when I push a new version, Heroku restarts all the dynos (web, workers, long workers, everything). I wonder whether it is possible to restart only some process types, e.g. only the web dynos?
No, that isn't possible. The easiest and most scalable way around this would be to split your long-running jobs into smaller chunks.
That way, you would have a lot of very small jobs being processed very quickly. When your app is restarted, your worker process can safely restart too, since the restart won't interrupt a long-running job.
Alternatively, one-off dynos won't be restarted when your app is deployed.
Using the Heroku API, you can programmatically boot one-off dynos. With that, you could start a one-off dyno for each long-running job you need to process.
That job would be processed (for up to 24 hours, at which point the dyno is cycled), and you would be able to deploy your app without restarting it.
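As a sketch, booting such a one-off dyno with the platform-api gem could look like this (the app name and rake task are placeholders):

    require 'platform-api'

    heroku = PlatformAPI.connect_oauth(ENV['HEROKU_API_TOKEN'])

    # Start a one-off dyno for a single long-running job. It is not part of a
    # formation, so a deploy won't restart it (Heroku still cycles it after ~24h).
    heroku.dyno.create('your-app-name',
      'command' => 'bundle exec rake jobs:work_one[1234]'   # hypothetical rake task
    )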

Current Sidekiq job lost when deploying to Heroku

I have a Sidekiq job that runs for a while, and when I deploy to Heroku while the job is running, it can't finish within the few seconds it gets before shutdown.
That is fine, as the job is designed to be safely re-run if needed.
The problem is that the job gets lost (instead of being pushed back to Redis and run again after the deploy).
I found that it is advised to set :timeout: 8 on Heroku, and I tried it, but it had no effect (I also tried setting it to 5).
When there is an exception I get errors reported, but I don't see any, so I'm not sure what could be wrong.
Any tips on how to debug this?
The free version of Sidekiq will push unfinished jobs back to Redis after the timeout has passed (default: 8 seconds). Heroku gives a process 10 seconds to shut down. That means we have 2 seconds to get those jobs back to Redis or they will be lost. If your network is slow, if the Redis server is swapping, etc., that 2-second deadline might not be met and the jobs will be lost.
You were on the right track: one answer is to lower the timeout so you have a better chance of meeting that deadline. But network or swapping delay can't be predicted: even 5 seconds might not be enough time.
Under normal, healthy conditions things should work as designed. Keep your machines healthy (uncongested network, plenty of RAM) and the basic fetch should work well. Sidekiq Pro's reliable fetch feature is a fundamental redesign of how Sidekiq fetches jobs: it works around all of these issues by keeping jobs in Redis all the time so they can't be lost. But it comes with serious trade-offs too: it's more complicated, slower and more Redis-intensive than the basic fetch.
In short, I don't know why you are losing jobs but make sure your instances and Redis server are healthy and the latency is low.
https://github.com/mperham/sidekiq/wiki/Using-Redis#life-in-the-cloud
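For reference, the :timeout: setting is read from the YAML config file the Sidekiq process loads; a minimal sketch follows (values are examples, and depending on how the Procfile starts Sidekiq, the file may need to be passed explicitly with -C config/sidekiq.yml, otherwise the setting is silently ignored, which could explain the "no effect"):

    # config/sidekiq.yml
    :concurrency: 5
    :timeout: 5
    :queues:
      - default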
This is actually a feature of Sidekiq, designed to steer you toward the paid Pro version:
http://sidekiq.org/products/pro
RELIABILITY
More reliable message processing.
Cloud environments are noisy and unreliable. Seeing timeouts? Wild swings in latency or performance? Ruby VM crashes or processes disappearing?
If a Sidekiq process crashes while processing a job, that job is lost.
If the Sidekiq client gets a networking error while pushing a job to Redis, an exception is raised and the job is not delivered.
Sidekiq Pro uses Redis's RPOPLPUSH command to ensure that jobs will not be lost if the process crashes or gets a KILL signal.
The Sidekiq Pro client can withstand transient Redis outages or timeouts. It will enqueue jobs locally upon error and attempt to deliver those jobs once connectivity is restored.
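The RPOPLPUSH idea mentioned above can be illustrated generically (this is not Sidekiq Pro's actual code; the queue names and process_job are made up):

    require 'redis'   # redis-rb

    redis = Redis.new

    # Atomically move a job from the queue into a "working" list. If the process
    # crashes or is KILLed mid-job, the payload is still sitting in the working
    # list, so a recovery pass at boot can push it back onto the queue.
    if (job = redis.rpoplpush('queue:default', 'queue:default:working'))
      process_job(job)                             # hypothetical job handler
      redis.lrem('queue:default:working', 1, job)  # done: remove the backup copy
    end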
A deploy terminates all processes that belong to the user, so the job is lost. There is actually not much you can do about it.
As @mike-perham and @esse noted, Sidekiq is designed in a way that lets it lose jobs because of its fetching mechanism. Your options for getting around this are:
To buy Sidekiq Pro (although it has been reported to cause the same issue)
To write your own fetcher (but that would mean you cannot use most third-party libraries, as they will not work with your custom fetcher)
To mimic Sidekiq Pro's reliable fetch by backing up your job data. If you are up for going this way, check out the attentive_sidekiq gem, which does exactly that.

Do I need a worker dyno on Heroku?

I have not specifically added any background tasks to my Rails app. The app is going to be sending emails and also resizing images, but I haven't included any background-processing library like delayed_job or Resque, and for the time being I am not going to. So do I need a worker dyno?
If I add a worker dyno, would these tasks be automatically handled by the worker dyno?
Nothing happens by magic! Unless you add delayed_job or another gem for handling tasks in the background and explicitly write code to be performed in the background, it will not be.
When it comes to sending email and resizing images, using background workers for those tasks is encouraged, but it's not a must. As long as you don't see timeouts for requests, you should be OK.
If you decide to use delayed_job in the future, take a look at the workless gem: it autoscales the worker dyno when it's needed and then scales it back to zero. I use it in my projects and it saves me quite a bit of money.
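For a concrete picture of the "explicitly write code" part, here is a minimal sketch using delayed_job (mailer, model and method names are placeholders; a worker dyno would run rake jobs:work):

    # Without a background gem this runs inline, inside the web request:
    UserMailer.welcome_email(user).deliver

    # With delayed_job, the same work can be pushed to the worker dyno:
    UserMailer.delay.welcome_email(user)

    # Or mark a slow method (e.g. image resizing) to always run in the background:
    class Photo < ActiveRecord::Base
      def resize!
        # expensive image processing here
      end
      handle_asynchronously :resize!
    end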
If you are not running background processes you should be fine without a worker dyno.
But it also depends on your normal request handling. If your requests take too long (which might be the case with your image resizing), it can be useful to move that work into a background task.
Pretty good explanation of this can be found on the Heroku website:
https://devcenter.heroku.com/articles/background-jobs-queueing
Of course, the beauty of Heroku is its scalability, so if you see that your tasks are pushing response times too high, you can always add a worker dyno later and turn the task into a background task.

Fork delayed job from the app server?

Here's my simple ideal case scenario for when I'd like delayed job to run:
When the first application server (whether through mongrel or passenger) starts, it'll start my delayed job workers.
When the last running application server terminates, it'll kill all the delayed job workers.
The first part (starting) is doable, although I'm not sure what the "right" or "best" way to do it is. Just make a conditional (on process not already running) system call to delayed_job start?
The second part (terminating) -- well, I'm not sure whether it's doable, and I definitely have no idea how this could be accomplished.
Any thoughts or ideas?
Is there another way that you start/end delayed job workers that you think is best?
Side question:
The main questions above are for the production environment, which is the more difficult case because there are multiple app servers running at the same time. Could the same thing be done easily in the development environment (where there's guaranteed to be only one application server, not a cluster of them) by forking a child process to run the delayed job workers, one that would always terminate when the parent terminates? How would I go about doing this?
You could definitely pull off the terminating part with god.
Simply watch the app processes, and god will fire a callback when they have all stopped.
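For the development-only side question, a rough sketch of the fork-plus-at_exit approach could look like the following (the initializer path and rake task are just one way to wire it up, and it deliberately ignores the multi-server production case):

    # config/initializers/delayed_job_dev_worker.rb
    if Rails.env.development? && !defined?(Rails::Console)
      worker_pid = fork do
        exec('bundle exec rake jobs:work')   # replace the child with a DJ worker
      end

      at_exit do
        begin
          Process.kill('TERM', worker_pid)   # stop the worker when the server exits
          Process.wait(worker_pid)
        rescue Errno::ESRCH, Errno::ECHILD
          # worker already gone
        end
      end
    end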
