Avoiding deployment gem-fail downtime while using Passenger, bundler & git-based deploys - ruby-on-rails

I'm deploying a Rails 3 app on Passenger 3.0.7 using capistrano and git-based deployment, similar to GitHub's setup:
https://github.com/blog/470-deployment-script-spring-cleaning -- this means the app operates entirely out of one directory, with no /releases/123456 and symlink switching.
If we've added any gems, our app starts throwing 500 errors during deployment: specifically during the "bundle:install" phase, before deploy:restart runs. The code on disk has already been updated, and Passenger appears to start using it before the newly required gems are installed, so those gems can't be found yet.
This is not caused by new workers being spun up, as I've tried setting the Passenger idle_time to 0 and max_instances and min_instances to the same value, so that workers are never spun down.
Running on Linux with ruby-ee 1.8.7-2011.03. Sample error from Passenger: https://gist.github.com/54794e85d2c799e4f697
I've also considered doing "two-directory" git-based deployment as a hack -- swapping in the new code once the bundle is complete. Ideas welcome.

Go with the two-directory deployment. Apart from avoiding the 500s during deployment, it will also act as a safety net if you need to roll back during or after a deployment.
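A rough sketch of what that swap can look like, assuming two checkouts under /var/www/myapp and a current symlink that Passenger serves (all paths and names here are illustrative):

    NEXT=/var/www/myapp/app-b              # whichever checkout is NOT currently live
    cd $NEXT
    git pull origin production
    bundle install --deployment            # the live directory is untouched while this runs
    ln -sfn $NEXT /var/www/myapp/current   # swap the symlink only after bundling succeeds
    touch /var/www/myapp/current/tmp/restart.txt

Because the bundle is built in the inactive checkout, Passenger never sees half-updated code, and rolling back is just pointing the symlink at the previous checkout.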

Related

Changes do not show up on production running Passenger

I've inherited a project whose production server is already running on Passenger.
After doing a normal deploy as described in the one-page docs left by the old team (just cap prod deploy), we are having constant problems. First of all, the code we pushed doesn't seem to be running. It should be adding new data to items in the database via rake tasks. The code is physically there, in the current folder, but it doesn't seem to be triggered.
I noticed that neither Rails nor its gems were installed when I tried a simple rails c over ssh. After installing everything and launching the code manually with a binding.pry added, the code did appear to run. But via the normally scheduled rake task it did not.
It looks like Passenger is running as a daemon, since there is no pid file in the tmp folder (as is usual for a Rails app).
1) Is there a chance that restarting the server will actually help? It was not restarted after the deploy, and I have no idea how to restart it without a pid.
2) passenger-config restart-app actually returns two servers. Can they collide and prevent the app from working normally? (Update: the servers are not the same; there is a single-letter difference in the names.)
Still, passenger-config restart-app doesn't seem to restart the server.
Sorry for the wall of text, but I've spent two nights trying to get this working and I'm still lost.
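For what it's worth, Passenger apps are normally restarted without a pid file; two standard options (the app path here is illustrative):

    # Restart one app by its path, as listed by passenger-config restart-app:
    passenger-config restart-app /var/www/myapp/current
    # Or the classic touch-file mechanism; it takes effect on the next request:
    touch /var/www/myapp/current/tmp/restart.txt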

How to set up pgbouncer with Rails on Heroku?

Heroku recently decreased the number of available connections to the production database (from 500 to 60). Open connections were consuming a lot of memory and causing problems, so it seems like a step in the right direction.
My app has more than 100 concurrent processes, all of which access the database at the same time. Heroku suggests using https://github.com/gregburek/heroku-buildpack-pgbouncer to fix this issue.
I wasn't able to find a proper guide on how to do this. I was able to install and enable the buildpack, but I have no idea what the configuration variables do or how they work. With the default configuration, I get tons of ActiveRecord::ConnectionTimeoutError errors.
Does anyone have experience with this? If so, could you please provide a step-by-step guide on how to do this properly and how to configure everything that needs configuring?
What version of Rails are you running? I just deployed pgbouncer to our production web app using these steps (for a Rails 3.2 app running on Ruby 2.0):
Create a .buildpacks file in the app's root directory that contains:
https://github.com/gregburek/heroku-buildpack-pgbouncer.git#v0.2.2
https://github.com/heroku/heroku-buildpack-ruby.git
I created a file called disable_prepared_statements_monkey_patch.rb in config/initializers that contained the code posted by cwninja in this thread: https://github.com/gregburek/heroku-buildpack-pgbouncer/pull/7 (a sketch of it follows)
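The gist of that patch is coercing the prepared_statements value, which Rails 3.2 reads from the pgbouncer-modified DATABASE_URL as the string "false" (truthy in Ruby), into a real boolean. This is a sketch of the workaround, not necessarily cwninja's exact code; see the linked PR for the original:

    # config/initializers/disable_prepared_statements_monkey_patch.rb
    # Sketch: coerce the string "false" from the URL into boolean false
    # so the adapter actually disables prepared statements.
    require 'active_record/connection_adapters/postgresql_adapter'

    class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
      alias_method :initialize_without_ps_fix, :initialize
      def initialize(connection, logger, connection_parameters, config)
        if config[:prepared_statements] == 'false'
          config = config.merge(:prepared_statements => false)
        end
        initialize_without_ps_fix(connection, logger, connection_parameters, config)
      end
    end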
Modified my Procfile to add
bin/start-pgbouncer-stunnel
before
bundle exec unicorn -p $PORT -c ./config/unicorn.rb
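so that the resulting Procfile web entry looks roughly like this (the wrapper simply prepends the original command; the unicorn config path is as in the original):

    web: bin/start-pgbouncer-stunnel bundle exec unicorn -p $PORT -c ./config/unicorn.rb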
executed
heroku config:set PGBOUNCER_PREPARED_STATEMENTS=false --app yourapp
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git --app yourapp
deployed the new .buildpacks and Procfile changes to production
It worked as advertised after that point. The first time I deployed, I neglected to disable prepared statements, which caused my application to blow up with lots of 'duplicate prepared statement' errors. If you're using Rails 4.1 or above, the monkey patch is apparently not necessary; however, there seems to be a bug in Rails 3.2 where pgbouncer's change to the database URL isn't parsed in a way that actually disables prepared statements.

Do I have to restart Apache after updating Ruby in order to make Passenger restart?

I need to update Ruby on the system. It's a minor update, and I'll do it by installing a new Ruby RPM that has a new patch applied.
My question is: do I have to restart Apache after updating Ruby on the system? Or is there another way to make Passenger reload Ruby?
I tried running a page that outputs RUBY_VERSION, RUBY_RELEASE_DATE, and RUBY_PATCHLEVEL to check this, but that doesn't work for me because the update is a new patch in the RPM, not in Ruby itself, so those constants are the same for the old and new versions.
Thanks
No, you don't need to restart Apache as a whole.
You do need to restart your application, though.
Passenger has an easy way to tell the application to restart: create a restart.txt file in the tmp directory inside your application.
The application will be restarted the next time a request is made to it.
So you might want to automatically request your application after deploying, to force the restart.
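For example (the path and hostname here are illustrative):

    touch /var/www/myapp/tmp/restart.txt
    curl -s http://myapp.example.com/ > /dev/null   # the first request triggers the restart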
I am pretty sure that, at least in Passenger 3, you do need to restart Apache.
After installing the new Ruby, you need to re-install the Passenger Apache module linked against your new Ruby (passenger-install-apache2-module). Then take the Apache config lines it shows you after installation and edit your Apache config file to include them (replacing the old lines pointing at your old Ruby). Then restart Apache.
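Those config lines are the standard Passenger snippet the installer prints; they look roughly like this (the paths are illustrative and depend on where your new Ruby and Passenger live):

    LoadModule passenger_module /opt/ruby/lib/ruby/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so
    PassengerRoot /opt/ruby/lib/ruby/gems/1.8/gems/passenger-3.0.7
    PassengerRuby /opt/ruby/bin/ruby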
Now, it's possible there's a different way to do this without restarting Apache in Passenger 4 (not yet an official final release, but in RC). Passenger 4 has features for running multiple Rubies simultaneously that might end up allowing this sort of thing; I'm not sure, I haven't looked into it much. But with Passenger 3 (the existing stable Passenger, which most people are probably still using), you've got to do as above.
You can restart a specific app running under Passenger with a restart.txt, as Damien MATHIEU says in another answer. But to change the version of Ruby that Passenger itself runs under and starts apps under, I'm pretty sure you need to restart Apache (after first reinstalling the Passenger Apache module and updating the Passenger Apache config).

How to Restart Rails Production Servers After Code Deployment w/o Downtime

In Rails, what is the best strategy for restarting app servers like Thin after a code deployment through a Capistrano script? I would like to be able to deploy code to production servers without fearing that a user might see the 500.html page.
I found this question while looking for an answer. Because I wanted to stick with Thin, none of the answers here suited my needs. This fixed it for me:
thin restart -e production --servers 3 --onebyone --wait 30
Unicorn is supposed to have rolling restarts built in. I have not set up a Unicorn stack yet, but http://sirupsen.com/setting-up-unicorn-with-nginx/ looks like a good start.
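Unicorn's built-in mechanism is signal-based; a rough sketch of the usual flow (the pid path and hook details depend on your config/unicorn.rb):

    # Spawn a new master (running the new code) alongside the old one:
    kill -USR2 $(cat tmp/pids/unicorn.pid)
    # A before_fork hook in config/unicorn.rb then typically sends QUIT to the
    # old master as the new workers come up, so no request is ever dropped.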
The way I run production servers is with Apache and Passenger. That's an industry-standard setup and will let you deploy new versions without downtime.
Once everything is set up correctly, all you have to do is go to the app directory
and create a file called restart.txt in the tmp dir.
Ex: touch tmp/restart.txt
read more here http://www.modrails.com/
http://jimneath.org/2008/05/10/using-capistrano-with-passenger-mod_rails.html
http://www.zorched.net/2008/06/17/capistrano-deploy-with-git-and-passenger/
http://snippets.dzone.com/posts/show/5466
HTH
sameera

How should I deploy a patch to a Passenger-based production Rails application without downtime?

I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (we use git) and the current process for doing this (you can assume there are no data migrations) is:
Perform git pull origin [production-branch-name] on the server
touch tmp/restart.txt to restart Passenger
This allows us to patch the server without having to put up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment': we still need to manually update the revision file, and the deployment doesn't appear in the Hoptoad or New Relic services we use.
Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? This deployment process seems to be fairly safe in that the new revision is deployed to a completely separate folder and only right at the end of the process is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.
No problems here doing cap production deploy. If the deployment fails, the previous release is still good; nothing will break, because the old release stays loaded (cached) in the current Passenger process. The touch tmp/restart.txt will pick up the new release, and all is good in the world.
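For reference, the restart step that answer relies on is usually wired into Capistrano (v2, which this era of question assumes) roughly like this:

    namespace :deploy do
      task :restart, :roles => :app, :except => { :no_release => true } do
        # Passenger reloads the app on the next request after this file is touched,
        # so the symlink switch and the restart together avoid dropped requests.
        run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
      end
    end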
