I am having issues using unicorn with a Capistrano deployment. From what I have been able to understand, Capistrano uses a scheme in which every release is deployed inside the releases directory under a unique name and, if the transaction was successful, a symlink named current is created that points to that release.
So I end up with a deployment directory such as:
/home/deployer/apps/sample_app/current
Then when I try to start unicorn from the binstubs directory, all the unicorn methods look for things in the following path (particularly in the configurator.rb module):
/home/deployer/apps/sample_app
I haven't been able to fully understand how unicorn sets the working_directory from here:
https://github.com/defunkt/unicorn/raw/master/lib/unicorn/configurator.rb
But I wanted to check with the community in case I am missing something obvious, since I am fairly new to this.
BTW, I am starting unicorn as follows:
APP_ROOT=/home/deployer/apps/sample_app/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="$APP_ROOT/bin/unicorn -D -E production -c $APP_ROOT/config/unicorn.rb"
TIA
This was set up via the working_directory param in the unicorn.rb config.
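For reference, a minimal unicorn.rb along those lines might look like this (the paths come from the question; the worker count is just an example):

# config/unicorn.rb
working_directory "/home/deployer/apps/sample_app/current"   # the current symlink, not a release dir
pid "/home/deployer/apps/sample_app/current/tmp/pids/unicorn.pid"
worker_processes 4
preload_app true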
My goal is to have sidekiq start when the server boots up (I'm using EC2 with an auto-scaling group). I know there are a few other posts regarding getting sidekiq to start with upstart on boot, but I don't believe mine has been addressed specifically.
I'm using this wiki - https://github.com/mperham/sidekiq/tree/master/examples/upstart/manage-many - and have placed the scripts at /etc/init/sidekiq.conf and /etc/init/sidekiq-manager.conf.
I've made a couple of small modifications as directed in /etc/init/sidekiq.conf:
in the # setuid apps and # setgid apps lines, replaced apps with ubuntu, which is the deployment user
changed export HOME=/home/apps to export HOME=/home/ubuntu
I also have a /etc/sidekiq.conf that includes the following line:
/home/ubuntu/app_dir, 2
Otherwise, these scripts are identical to those included in the referenced repo. I'm getting the following errors in my logs (/var/log/upstart):
/bin/bash: line 19: cd: 2: No such file or directory
Could not locate Gemfile
It appears that it's attempting to change to a directory other than /home/ubuntu/app_dir, at which point it's in the wrong directory and cannot find my Gemfile.
Is there somewhere else I need to specify a correct path to my app directory?
Thanks!
You can run sidekiq as an upstart job: create a sidekiq.conf file in the /etc/init/ directory and put the upstart code to run sidekiq in it.
Here is the complete script and a guide for making a sidekiq upstart job.
After creating this job, starting, stopping, and restarting sidekiq is easy with the sudo service command.
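For comparison, a minimal single-app /etc/init/sidekiq.conf might look roughly like this sketch. It is not the manage-many script from the wiki, and the user, app path, and Ruby setup are assumptions to adapt:

description "Sidekiq worker"
start on runlevel [2345]
stop on runlevel [06]
respawn
setuid ubuntu
setgid ubuntu
env HOME=/home/ubuntu
chdir /home/ubuntu/app_dir
exec bundle exec sidekiq -e production

After that, sudo service sidekiq start (or stop/restart) behaves as described.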
Heroku recently decreased the number of available connections to the production database (from 500 to 60). Open connections were consuming a lot of memory and causing problems, so it seems like a step in the right direction.
My app has more than 100 concurrent processes which all access the database at the same time. Heroku suggests using https://github.com/gregburek/heroku-buildpack-pgbouncer to fix this issue.
I wasn't able to find any proper guide on how to do this. I was able to install and enable the buildpack, but I have no idea what the configuration variables do or how they work. With the default configuration, I get tons of ActiveRecord::ConnectionTimeoutError errors.
Does anyone have experience with this? If so, could you please provide a step-by-step guide on how to do this properly and how to configure everything that needs to be configured?
What version of Rails are you running? I just deployed pgbouncer to our production web app using these steps (for a Rails 3.2 app running on Ruby 2.0):
Created a .buildpacks file in the app's root directory that contains the text
https://github.com/gregburek/heroku-buildpack-pgbouncer.git#v0.2.2
https://github.com/heroku/heroku-buildpack-ruby.git
I created a file called disable_prepared_statements_monkey_patch.rb in config/initializers that contained the code posted by cwninja in this thread: https://github.com/gregburek/heroku-buildpack-pgbouncer/pull/7
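The gist of that patch (a sketch of the idea only; the exact code is in the linked pull request, and the method names here are made up) is to force prepared_statements: false in the adapter config, since Rails 3.2 ignores the query param the buildpack appends to DATABASE_URL:

require "active_record/connection_adapters/postgresql_adapter"

class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  # wrap initialize so every new connection gets prepared statements disabled
  def initialize_with_forced_prepared_statements_off(connection, logger, connection_parameters, config)
    initialize_without_forced_prepared_statements_off(
      connection, logger, connection_parameters,
      config.merge(:prepared_statements => false)
    )
  end
  alias_method_chain :initialize, :forced_prepared_statements_off
end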
Modified my Procfile to add
bin/start-pgbouncer-stunnel
before
bundle exec unicorn -p $PORT -c ./config/unicorn.rb
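With that change, the web line in the Procfile ends up looking something like this (web: is the standard Heroku process name; the unicorn options are the ones shown above):

web: bin/start-pgbouncer-stunnel bundle exec unicorn -p $PORT -c ./config/unicorn.rb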
executed:
heroku config:set PGBOUNCER_PREPARED_STATEMENTS=false --app yourapp
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git --app yourapp
deployed the new .buildpacks and Procfile changes to production
It worked as advertised after that point. The first time I deployed, I neglected to disable prepared statements, which caused my application to blow up with lots of 'duplicate prepared statement' errors. If you're using Rails 4.1 or above, the monkey patch is apparently not necessary; however, there appears to be a bug in Rails 3.2 where it doesn't parse pgbouncer's change to the database URL in such a way that prepared statements are actually disabled.
I'm running unicorn and am trying to get zero downtime restarts working.
So far it is all awesome sauce: the master process forks and starts 4 new workers, then kills the old one, and everyone is happy.
Our scripts send the following command to restart unicorn:
kill -s USR2 `cat /www/app/shared/pids/unicorn.pid`
On the surface everything looks great, but it turns out unicorn isn't reloading production.rb. (Each time we deploy we change the config.action_controller.asset_host value to a new CDN container endpoint with our pre-compiled assets in it).
After restarting unicorn in this way the asset host is still pointing to the old release. Doing a real restart (ie: stop the master process, then start unicorn again from scratch) picks up the new config changes.
preload_app is set to true in our unicorn configuration file.
Any thoughts?
My guess is that your unicorns are being restarted in the old production directory rather than the new production directory -- in other words, if your working directory in unicorn.rb is <capistrano_directory>/current, you need to make sure the symlink happens before you attempt to restart the unicorns.
This would explain why stopping and starting them manually works: you're doing that post-deploy, presumably, which causes them to start in the correct directory.
When in your deploy process are you restarting the unicorns? You should make sure the USR2 signal is being sent after the new release directory is symlinked as current.
If this doesn't help, please gist your unicorn.rb and deploy.rb; it'll make it a lot easier to debug this problem.
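For example, a sketch of the relevant unicorn.rb pieces (the paths come from the question's pid file; the before_exec hook resetting BUNDLE_GEMFILE is a common companion fix, not something the question confirms):

# config/unicorn.rb
working_directory "/www/app/current"   # the symlink, not a release dir
pid "/www/app/shared/pids/unicorn.pid"
preload_app true

before_exec do |server|
  # point the re-exec'd master at the new release's Gemfile
  ENV["BUNDLE_GEMFILE"] = "/www/app/current/Gemfile"
end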
Keep in mind that your working directory in unicorn.rb should be:
/your/cap/directory/current
NOT:
File.expand_path("../..", __FILE__)
because symlinks and unicorn's forking do not mix well: the soft link gets resolved away. For example:
cd current   # current is a soft link to another directory
... ...
When we then ask for the working directory, we get the absolute path with the symlink resolved, not the path through "current", so unicorn keeps pointing at the old release across restarts.
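A quick Ruby demonstration of that resolution (hypothetical paths):

# assuming current -> releases/20140101120000
Dir.chdir("/home/deployer/apps/sample_app/current")
puts Dir.pwd
# => /home/deployer/apps/sample_app/releases/20140101120000 (the symlink is resolved away)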
In the development environment I use the following command to start a daemon after starting the server:
RAILS_ENV=development lib/daemons/mailer_ctl start
In the production environment, from the application directory, I would use:
lib/daemons/mailer_ctl start
Can I change the development.rb and production.rb files so the daemon would automatically be started? If not, is there another way to do this?
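For what it's worth, one way this could be wired up from an environment file is an after_initialize hook; a minimal sketch (mailer_ctl is the script from the question; note this would also fire for consoles and rake tasks):

# config/environments/development.rb
config.after_initialize do
  # start the mailer daemon once Rails has booted
  system("#{Rails.root}/lib/daemons/mailer_ctl start")
end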
I recommend that on your production server you use god (or something similar) to watch for the existence of a process, and start it if it does not exist.
http://god.rubyforge.org/
Monit is an alternative -- here's a good SO question on monit vs god
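With god, a minimal watch for this daemon might look like the following sketch (the watch name and paths are assumptions based on the question):

# mailer.god
God.watch do |w|
  w.name = "mailer"
  w.start = "/path/to/app/lib/daemons/mailer_ctl start"
  w.stop = "/path/to/app/lib/daemons/mailer_ctl stop"
  w.keepalive
end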
In Rails, what is the best strategy for restarting app servers like Thin after a code deployment through a Capistrano script? I would like to be able to deploy code to production servers without fearing that a user might see the 500.html page.
I found this question while looking for an answer. Because I wanted to stick with Thin, none of the answers here suited my needs. This fixed it for me:
thin restart -e production --servers 3 --onebyone --wait 30
Unicorn is supposed to have rolling restarts built in. I have not set up a unicorn stack yet, but http://sirupsen.com/setting-up-unicorn-with-nginx/ looks like a good start.
The way I set up production servers is with Apache and Passenger. That's an industry-standard setup and will allow you to deploy new versions without downtime.
Once everything is correctly set up, all you have to do is go to the app directory and create a file called restart.txt in the tmp dir:
touch tmp/restart.txt
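In a Capistrano deploy this is typically automated with a restart task along these lines (a Capistrano 2-style sketch, matching the linked posts):

# config/deploy.rb
namespace :deploy do
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end
end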
Read more here: http://www.modrails.com/
http://jimneath.org/2008/05/10/using-capistrano-with-passenger-mod_rails.html
http://www.zorched.net/2008/06/17/capistrano-deploy-with-git-and-passenger/
http://snippets.dzone.com/posts/show/5466
HTH
sameera