Are dynos restarted when an app is deployed? - ruby-on-rails

I've got some background jobs that run for a long time (hours).
If I deploy my app while those background jobs are running, will the dynos those jobs are attached to get restarted (thus killing the jobs)?
More specifically, those background jobs deal with downloading large files to /tmp...meaning if that dyno got restarted, it would interrupt the download.

Dynos are restarted when you deploy, yes.
More importantly though, if you are downloading to /tmp then a deployment would create a new slug with an empty /tmp so anything downloaded would no longer exist.

Dynos are restarted on deployment.
They are also cycled once a day by Heroku automatically. Dynos can be restarted whenever they stop responding, or stopped and moved to another network location, all automatically. And as John mentioned, each restart is a new instance, so all your previously downloaded files will be deleted.
You can look at some cases here - https://devcenter.heroku.com/articles/dynos#the-dyno-manager
Best practice on Heroku is not to rely on any writable files on the dynos.
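Since a restart can arrive at any time, one way to make long downloads survive it is to work in chunks and checkpoint progress somewhere durable. A minimal sketch (all names and sizes here are made up for illustration; Heroku sends SIGTERM shortly before killing a dyno):

```ruby
# Sketch: a long-running download loop that checkpoints its byte offset
# after each chunk, so a retried job can resume from durable storage
# instead of a wiped /tmp. The "download" and "checkpoint" are stubs.
CHUNK      = 5 * 1_048_576    # 5 MB per iteration keeps shutdown latency low
TOTAL_SIZE = 20 * 1_048_576   # pretend file size for this sketch

$shutdown = false
Signal.trap("TERM") { $shutdown = true }   # Heroku's restart signal

$checkpoint = 0                            # stand-in for a database row

def save_checkpoint(offset)                # would write to Postgres/Redis, not /tmp
  $checkpoint = offset
end

def download_chunk(offset)                 # stand-in for a ranged HTTP GET that
  [offset + CHUNK, TOTAL_SIZE].min         # streams bytes straight to S3
end

offset = $checkpoint                       # resume from the last durable checkpoint
until $shutdown || offset >= TOTAL_SIZE
  offset = save_checkpoint(download_chunk(offset))
end
```

If TERM arrives mid-file, the loop exits cleanly after the current chunk and the next run picks up at the saved offset rather than starting over.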

Related

How to detect orphaned sidekiq process after capistrano deploy?

We have a Rails 4.2 application that runs alongside a sidekiq process for long tasks.
Somehow, in a deploy a few weeks ago, something went south (the capistrano deploy process didn't effectively stop it; I wasn't able to figure out why) and an orphaned sidekiq process was left running, competing with the current one for jobs on the redis queue. Because that process's source was outdated, our application started giving random results (depending on which process captured the job), and we had a very hard time until we figured this out.
How can I stop this from happening ever again? I mean, I can ssh into the VPS after every deploy and run ps aux | grep sidekiq to check whether there is more than one, but that's not practical.
Use your init system to manage Sidekiq, not Capistrano.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
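In the meantime, the manual `ps` check can at least be automated. A sketch (function name and threshold are mine; sidekiq sets its procline to something like `sidekiq 6.5.1 myapp [0 of 10 busy]`, which is what the pattern matches):

```ruby
# Flag a host as suspect when more sidekiq processes are alive than expected.
# This only detects the orphan; the real fix is supervising sidekiq with
# your init system rather than starting it from Capistrano.
def orphaned_sidekiq?(ps_output, expected: 1)
  workers = ps_output.lines.grep(/sidekiq \d/)
  workers.size > expected
end

ps = <<~PS
  deploy  1001  0.5  2.1 ... sidekiq 6.5.1 myapp [0 of 10 busy]
  deploy  2002  0.4  2.0 ... sidekiq 5.2.7 myapp [3 of 10 busy]
PS

orphaned_sidekiq?(ps)               # => true, two workers competing for one queue
orphaned_sidekiq?(ps, expected: 2)  # => false
```

You could run something like this from a post-deploy hook or a cron job and alert when it returns true.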

Spin down unused Dokku containers (and spin them up upon access)

Heroku spins down containers for free accounts when the app isn't accessed for a day. For our system, deployed on Dokku, we have production, staging, as well as developer containers running the same app. Today I noticed a Dokku app hang indefinitely mid-deploy on our dev VM. After investigating, I discovered that the issue was due to insufficient VM memory. After I killed a few containers, the container started successfully. For reference, there are almost 60 containers deployed on our dev box now, but only about 5 of them are being actively used. Often, our devs deploy multiple versions of the same app when testing. Sometimes these apps are no longer needed (in which case we can simply remove them), but more often than not, they'll need to be accessed again a week or two later.
To save resources on our VMs, we would like to spin down dev containers, especially since there are likely to be multiple instances of the same app.
Is this possible with Dokku? If I simply stop containers that haven't been accessed for a while (using docker stop command), then the user accessing the app later will be greeted with a 404 page. What I would like to do instead is show the loading icon to the user until the container is spun up again.
This is not possible with plain dokku commands at the moment. One option is to use ps:stop and then, when nginx returns a 502 error, run a shell script that starts the application again. Of course, that will still show the 502 error to the first user who hits the app.
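The watchdog half of that idea is small enough to sketch. Assuming a stopped container manifests as a 502 from nginx (app names and the helper are illustrative; `dokku ps:start` is a real command):

```ruby
# Map an upstream HTTP status to the dokku command that would revive the app.
# A cron job or log tail could call this per app and shell out to the result.
def recovery_command(app, status)
  status == 502 ? "dokku ps:start #{app}" : nil
end

recovery_command("dev-myapp", 502)  # => "dokku ps:start dev-myapp"
recovery_command("dev-myapp", 200)  # => nil
```

This doesn't give you the "loading icon" behaviour the question asks for; for that you'd need a proxy layer in front that holds the request open while the container boots, which dokku doesn't provide out of the box.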

Sidekiq - Enqueued Job is running from old code

I have about 30 sidekiq jobs scheduled in the future (let's say one a day for the next 30 days).
I use capistrano for deployment. So I have 5 release directories at anytime. Let's say:
/var/www/release1/ (recent)
/var/www/release2/
/var/www/release3/
/var/www/release4/
/var/www/release5/
Let's say after a few days, I make a new release. Now the previously scheduled jobs still run from the old code. Is this expected? How can I fix this so that a job uses the latest release directory when it starts running rather than the one that was current when it was scheduled?
I'd just like to contribute an alternate answer for anyone who might get into this situation for another reason.
It happened to me that a zombie sidekiq process was running. So even when I stopped sidekiq manually and restarted it, another sidekiq process was left hanging around, running old code. It's therefore a good idea to run the unix htop command or ps aux | grep sidek and look for zombie processes.
This could be because sidekiq process didn't restart after a successful deployment.
Make sure your deployment process restarts sidekiq, and verify that the restart actually works; otherwise sidekiq processes will still be holding on to old code.
https://github.com/mperham/sidekiq/wiki/Deployment
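One way to wire that restart into the deploy itself is via the capistrano-sidekiq gem. A hedged sketch of a config/deploy.rb fragment (hook names are the gem's; check its README for your version, as newer releases use `install_plugin Capistrano::Sidekiq` instead of a bare require):

```ruby
# config/deploy.rb (sketch, assuming the capistrano-sidekiq gem)
require "capistrano/sidekiq"   # provides sidekiq:quiet / sidekiq:restart tasks

after "deploy:starting",  "sidekiq:quiet"    # stop picking up new jobs mid-deploy
after "deploy:published", "sidekiq:restart"  # boot workers against the new release
```

Quieting first means in-flight jobs finish on the old code, and everything that starts after the restart loads from the new current symlink.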

CSV file placed in public folder of ROR, its content gets erased daily automatically. (Heroku)

I have a CSV file placed in the public folder of my Heroku app, but the content of this file gets erased daily. What can be the issue, and how can I resolve it?
Like most PaaS providers, Heroku does not provide a persistent filesystem:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
In other words, anything you upload through the web will be lost whenever you deploy a new version or when your dyno gets restarted. Dynos get restarted frequently (emphasis mine):
The dyno manager restarts all your app’s dynos whenever you:
create a new release by deploying new code
change your config vars
change your add-ons
run heroku restart
Dynos are also restarted at least once per day, in addition to being restarted as needed for the overall health of the system and your app.
The recommended way to store user uploads on Heroku is to use something like Amazon S3:
AWS S3, or similar storage services, are important when architecting applications for scale and are a perfect complement to Heroku’s ephemeral filesystem.
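A sketch of that route using the aws-sdk-s3 gem (the bucket name and key scheme here are made up, and the upload assumes the gem plus AWS credentials are configured): generate the CSV wherever you like as a scratch file, then persist it off-dyno so restarts can't erase it.

```ruby
# Build a dated S3 key for a report file, then upload it off the dyno.
def timestamped_key(filename, time = Time.now)
  "reports/#{time.strftime('%Y-%m-%d')}/#{filename}"
end

def persist_csv(path, bucket: "my-app-uploads")    # bucket name is illustrative
  require "aws-sdk-s3"                             # assumes gem + credentials
  key = timestamped_key(File.basename(path))
  Aws::S3::Resource.new.bucket(bucket).object(key).upload_file(path)
  key
end

timestamped_key("daily.csv", Time.new(2024, 5, 1))  # => "reports/2024-05-01/daily.csv"
```

Anything that needs the file later reads it back from S3 by key instead of from public/, so the daily dyno cycling stops mattering.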

Rails Resque keeps running old code even after server restart

I've got the resque gem running, and for some reason it seems to be caching the job code. Rails is currently in the dev environment, and this persists even after a server restart.
I've tried changing the queue name but the same code continues to be run.
The only thing that works is creating an entirely new class with a different name and then calling that.
Is there a cache that can be cleared?
There are resque worker processes running in the background that hold whatever code they loaded at boot; restarting the Rails server doesn't touch them. You need to restart your resque workers for your changes to take effect.
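If you're not sure which worker processes are alive, Resque's own registry can tell you. A sketch (the helper names are mine; worker ids really do take the form `host:pid:queues`, and QUIT is Resque's graceful-shutdown signal, letting the current job finish):

```ruby
# Extract the pid from a Resque worker id, but only for workers on this host.
def local_worker_pid(worker_id, host)
  hostname, pid, _queues = worker_id.split(":")   # e.g. "web1:4242:default"
  hostname == host ? Integer(pid) : nil
end

def quit_local_workers!(host = `hostname`.strip)
  require "resque"                                # assumes Redis is reachable
  Resque.workers.each do |worker|
    pid = local_worker_pid(worker.id, host)
    Process.kill("QUIT", pid) if pid              # graceful: current job completes
  end
end

local_worker_pid("web1:4242:default", "web1")  # => 4242
local_worker_pid("web2:4242:default", "web1")  # => nil
```

Whatever supervises your workers then boots fresh processes, which load the new code.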
