So far I had a simple application that only required the classic rails server to boot.
I have recently added the react_on_rails gem, and it requires booting a Node.js process alongside Rails to handle webpack and the JavaScript side of things.
So I understand I need the foreman gem, which can manage several processes at once. So far so good, but I'm still having trouble understanding how to deploy this enhanced app to my production environment (Phusion Passenger on Apache/Nginx).
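Locally I start everything with a Procfile.dev along these lines (the client line is a placeholder for whatever command the react_on_rails generator set up):

web: bundle exec rails server -p 3000
client: yarn run build:development --watch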
So, several questions:
Does Passenger handle the transition from rails s to foreman start -f Procfile.dev automatically?
If not, where do I set things up so Passenger works?
Side question: almost all Google results refer to Puppet when searching for Foreman on Passenger. Could anyone explain in one line what Puppet does and whether I really need it in production? So far everything runs smoothly on localhost with the foreman start -f Procfile.dev command, so I don't know where this is coming from...
I am deploying my application to the Amazon cloud using Capistrano, and I was expecting to have the Rails + Node.js setup on every autoscaled instance (with Passenger gracefully handling all of that). Am I thinking about this wrong?
In our production environment we use eye to manage the other processes related to the Rails app (Passenger runs from mod_passenger, while the workers are controlled by eye).
Here is an example of how to start 4 concurrent queue_classic workers:
APP_ROOT = File.expand_path(File.dirname(__FILE__))
APP_NAME = File.basename(APP_ROOT)

Eye.config do
  logger File.join(APP_ROOT, "log/eye.log")
end

Eye.application APP_NAME do
  working_dir File.expand_path(File.dirname(__FILE__))
  stdall 'log/trash.log'          # stdout/stderr logs for processes by default
  env 'RAILS_ENV' => 'production' # global env for each process
  trigger :flapping, times: 10, within: 1.minute, retry_in: 10.minutes

  group 'qc' do
    1.upto(4) do |i|
      process "worker-#{i}" do
        stdall "log/worker-#{i}.log"
        pid_file "tmp/pids/worker-#{i}.pid"
        start_command 'bin/rake qc:work'
        daemonize true
        stop_on_delete true
      end
    end
  end
end
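Assuming a config like the one above is saved as, say, config/app.eye on the server, you would load it with eye load config/app.eye and start the processes with eye start <app name>; eye info shows their current state. Adapt the path and invocation to however you wire this into your deploy.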
I used to be a .NET guy, and enjoyed using a nightly build system (continuous integration) called CruiseControl.NET to automatically deploy my applications to my staging environment each night.
Now that I've made the jump to Ruby on Rails and GitHub, I've found myself a little confused about how to set up an equivalent automated, nightly build system. I want to do things right, the Rails way, but I could use a push in the right direction.
Here's what I'm using...
Ruby on Rails 3.2.9 (with asset pipeline)
RVM
Apache + Passenger
MySQL
Ubuntu 12.04 (staging server OS)
GitHub (SCM)
I'm looking for a system/solution (ideally utilizing Capistrano) that fulfills these requirements:
Deploy the latest commit from my 'master' branch in my GitHub repository to my staging server.
One-click build on-demand: I want to click a button or link to force a deploy (or re-deploy) at any time
Have the capability to run custom commands on the staging server as part of the deploy (e.g. 'bundle install', restarting Apache, etc.)
Automatic deploy daily or after a commit on GitHub (optional)
And because I'd never ask this question without doing some research first, here are some relevant resources I've found, but haven't been able to decipher:
Automatic Deployment via Git
Deploying Rails 3 apps with Capistrano
Suggestions, anyone? Thanks!
David
Well, it turns out there is a lot of good information out there regarding how to use Capistrano for this purpose (including Prakash's reference), but none of it seems to be fully comprehensive.
After many hours of picking through guide after forum after Stack Overflow question, I managed to fulfill most of my goals. To save others time, I will try to summarize the answers and information I found in the context of my original question.
UPDATE:
It turns out I was looking for Jenkins all along: it's a perfect (and even improved) analog to the CruiseControl build-server application I had used before. Jenkins is a web application that kicks off builds on a schedule or, with plugins, on commit events. It doesn't include functionality to actually deploy my Rails app, so that's where Capistrano steps in. Using Jenkins' "Execute shell" build step to trigger Capistrano deploys, I was able to accomplish all of my goals above.
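In Jenkins, that build step ends up being little more than bundle install && bundle exec cap deploy run from the job's workspace (adjust the bundler flags and the Capistrano stage to your own setup).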
Take a look at this guide for more detail.
ORIGINAL POST:
First off, Capistrano can serve the purpose of a build system; however, it is not at all similar to CruiseControl.
CruiseControl is:
A web application that runs on a machine serving as a 'build server'
Any user may visit the web app to run builds; the user does not have to have anything set up or configured on their end... it's just a web page.
The interface also provides functions for logs, blame, scheduled builds, etc...
Capistrano is:
Not an application or service, but a gem that works like rake; it plays a role similar to the Ant and NAnt scripts CruiseControl uses for deploying
Capistrano is run from a user's local machine (typically from the Rails app folder) and requires configuration on their end to run
It directly connects from the user's machine to the remote server via SSH to perform deploys (this may not be preferable if you don't want to grant users SSH access to the remote server)
Commands are run from a console, e.g. cap deploy (this is as close as it gets to 'one-click' deploys)
It does produce logs, perform rollbacks, etc.
It does not kick off scheduled builds on its own (using cron was suggested)
In regards to my original requirements...
Deploy the latest commit from my 'master' branch in my GitHub repository to my staging server.
Configuring Capistrano's deploy.rb file will accomplish this.
One-click build on-demand: I want to click a button or link to force a deploy (or re-deploy) at any time
All Capistrano deploys are 'forced' in this sense: you run cap deploy in your console manually
Have the capability to run custom commands on the staging server as a part of the deploy (i.e. 'bundle install', restart Apache, etc...)
Configuring Capistrano's deploy.rb file will accomplish this. Most of these commands are included out of the box.
Automatic deploy daily or after a commit on GitHub (optional)
I haven't figured this one out yet... a cron job might be the best way to do this.
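For example, assuming the deploy can run non-interactively (SSH keys rather than the password prompt used below), a nightly crontab entry on a machine with the app checked out might look like this (path and time are illustrative):

0 2 * * * cd /path/to/myapp && bundle exec cap deploy >> log/nightly_deploy.log 2>&1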
Setting up Capistrano
Start with this tutorial from GitHub. In your Rails app folder, you should end up with a Capfile and a config/deploy.rb file.
To save you some time, copy and paste these files, and tweak the settings to your needs.
The files below are configured for:
Deployment to a test environment
GitHub
RVM based Rails
Using an explicit version of Ruby and gemset
Including untracked files (e.g. a database.yml file you didn't commit to SCM)
Using the Rails asset pipeline
Running migrate and seed with each deploy
Restarting Apache-based Passenger with each deploy
Capfile
# Set this if you use a particular version of Ruby or Gemset
set :rvm_ruby_string, 'ruby-1.9.3-p286@global'
#set :rvm_ruby_string, ENV['GEM_HOME'].gsub(/.*\//,"") # Read from local system
require "bundler/capistrano"
# Remove this line if you're not using RVM
require "rvm/capistrano"
load 'deploy'
# Remove this line if you are not using Rails' asset pipeline
load 'deploy/assets'
load 'config/deploy' # remove this line to skip loading any of the default tasks
config/deploy.rb
# BEGIN RUBY CONFIG
# You can manually override path variables here
# set :default_environment, {
# 'PATH' => "/usr/local/bin:/bin:/usr/bin:/bin:/<ruby-dir>/bin",
# 'GEM_HOME' => '<ruby-dir>/lib/ruby/gems/1.8',
# 'GEM_PATH' => '<ruby-dir>/lib/ruby/gems/1.8',
# 'BUNDLE_PATH' => '<ruby-dir>/lib/ruby/gems/1.8/gems'
# }
# This changes the default RVM bin path
# set :rvm_bin_path, "~/bin"
# If your remote server doesn't have a ~/.rvm directory, but is installed
# at the /usr/local/rvm path instead, use this line
set :rvm_type, :system
# END RUBY CONFIG
default_run_options[:pty] = true # Must be set for the password prompt from git to work
# BEGIN MULTIPLE ENVIRONMENT DEPLOYS
# Read the following URL if you need to deploy to different environments (test, development, etc)
# https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
# set :stages, %w(production test)
# set :default_stage, "test"
# require 'capistrano/ext/multistage'
# END MULTIPLE ENVIRONMENT DEPLOYS
# BEGIN APPLICATION VARS
set :application, "yourapp"
set :rails_env, 'test'
# END APPLICATION VARS
# BEGIN PATH DEFINITIONS
set(:releases_path) { File.join(deploy_to, version_dir) }
set(:shared_path) { File.join(deploy_to, shared_dir) }
set(:current_path) { File.join(deploy_to, current_dir) }
set(:release_path) { File.join(releases_path, release_name) }
# END PATH DEFINITIONS
# BEGIN SCM VARS
set :repository, "git@github.com:yourgithubuser/yourrepository.git" # Your clone URL
set :scm, "git"
set :scm_username, "yourgithubuser"
set :scm_password, proc{Capistrano::CLI.password_prompt('GitHub password:')} # The deploy user's password
set :branch, "master"
# END SCM VARS
# BEGIN SERVER VARS
set :user, "ubuntu" # The server's user for deploys
role :web, "dev.#{application}" # The location of your web server i.e. dev.myapp.com
role :app, "dev.#{application}" # The location of your app server i.e. dev.myapp.com
role :db, "dev.#{application}", :primary => true # The location of your DB server i.e. dev.myapp.com
set :deploy_to, "/home/#{user}/www/#{application}"
set :deploy_via, :remote_cache
# Uncomment this if you want to store your Git SSH keys locally and forward
# Else, it uses keys in the remote server's .ssh directory
# ssh_options[:forward_agent] = true
# END SERVER VARS
# BEGIN ADDITIONAL TASKS
before "deploy:start" do
deploy.migrate
deploy.seed
end
before "deploy:restart" do
deploy.migrate
deploy.seed
end
# Some files that the Rails app needs are too sensitive to store in SCM
# Instead, manually upload these files to your <Rails app>/shared folder
# then the following code can be used to generate symbolic links to them,
# so that your rails app may use them.
# In this example below, I link 'shared/config/database.yml' and 'shared/db/seed_data/moderators.csv'
before "deploy:assets:precompile" do
run ["ln -nfs #{shared_path}/config/settings.yml #{release_path}/config/settings.yml",
"ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml",
"mkdir -p #{release_path}/db/seed_data",
"ln -nfs #{shared_path}/db/seed_data/moderators.csv #{release_path}/db/seed_data/moderators.csv",
"ln -fs #{shared_path}/uploads #{release_path}/uploads"
].join(" && ")
end
namespace :deploy do
  # Define a seed task (call 'cap deploy:seed')
  desc "Reload the database with seed data"
  task :seed do
    run "cd #{current_path}; bundle exec rake db:seed RAILS_ENV=#{rails_env}"
  end

  # If you are using Passenger mod_rails, use these:
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
# END ADDITIONAL TASKS
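With both files in place, run cap deploy:setup once to create the directory structure on the server, then cap deploy (or cap deploy:cold for the very first deploy) whenever you want to push out a release.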
Yes. Capistrano is the tool you are looking for. Using Capistrano with the system cron should enable you to implement a build/deploy system with all four of the mentioned requirements. Rather easily at that.
The best resource to learn about Capistrano is the Deploying Rails book. It has two chapters on Capistrano, the first covering basic usage and the second covering some advanced concepts. There is also a Capistrano case study at the end which explores further config options with a real-life deploy script example.
FYI: I finished reading the two chapters yesterday in a couple of hours and tweeted about it ;-)
I'd say go with Capistrano for the deploy part.
The CI part (build when you do a git push) would be best implemented with a CI server, such as Travis (if you're on GitHub) or Jenkins if you have a private repo.
For the one-click build, go with either Capistrano directly (cap deploy from the command line) or a simple CI hook that runs all the tests and then runs cap deploy.
I am having issues using Unicorn with a Capistrano deployment. From what I have been able to understand, Capistrano uses a scheme in which every release is deployed inside the releases directory under a unique name and, if the transaction was successful, a symlink named current is created that points to that release.
So I end up with a deployment directory such as:
/home/deployer/apps/sample_app/current
Then, when I try to start Unicorn from the binstubs directory, all the Unicorn methods look for things in the following path (in particular in the configurator.rb module):
/home/deployer/apps/sample_app
I haven't been able to fully understand how Unicorn sets the working_directory from here:
https://github.com/defunkt/unicorn/raw/master/lib/unicorn/configurator.rb
But I wanted to check with the community in case I am missing something obvious, since I'm still fairly new to this.
BTW, I am starting Unicorn as follows:
APP_ROOT=/home/deployer/apps/sample_app/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="$APP_ROOT/bin/unicorn -D -E production -c $APP_ROOT/config/unicorn.rb"
TIA
This was fixed by setting the working_directory param in the unicorn.rb config.
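A minimal sketch of the relevant part of config/unicorn.rb, assuming the layout from the question:

# config/unicorn.rb
app_root = "/home/deployer/apps/sample_app/current"

# Point Unicorn at the Capistrano "current" symlink so that relative paths
# (pids, sockets, config) resolve inside the active release.
working_directory app_root
pid "#{app_root}/tmp/pids/unicorn.pid"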
I don't really understand what model of unix accounts/permissions is intended with Capistrano.
Let's say I've got a Rails app called Widget and I'll be deploying with Passenger. In general, pre-Capistrano, I want the entire ./widget directory to be owned by a user called 'widget'. Then, by default, Passenger will run the app process as user 'widget' too, because Passenger runs the app as the user that owns the files.
And the whole point of this is for that 'widget' account to have fairly limited permissions, right? Since a web app will be running under that account?
So since I want the files to be owned by 'widget', I tell cap
set :user, "widget"
But now when I run "cap deploy:setup", it wants to sudo from that account. There's no way the 'widget' account gets sudo privileges; the whole point is keeping this account's privileges limited.
Okay, I can tell cap not to use sudo... but then it won't actually have privs to do what it needs, maybe.
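(For reference, that's set :use_sudo, false in deploy.rb, if I understand Capistrano's options correctly.)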
I can find a workaround to this too. But I start thinking, why do I keep having to re-invent the wheel? I mistakenly thought the point of cap recipes was to give me some best practices here. Anyway... what do people actually do here?
Use one unix account for install, but then have cap somehow 'chown' it to something else? Use one unix account, but have something non-cap (puppet?) do enough setup so that account doesn't need to sudo to get things started? What? What am I missing?
You can avoid some of the headache by using Passenger (most commonly with Nginx) as your web server.
Then, to restart web services, the unprivileged widget user touches a file inside the app's tmp directory and Passenger automatically restarts the application when it sees that file.
This is enabled via the following in your config/deploy.rb:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
As for other privileged tasks like MySQL/DB administration, your database.yml provides the credentials necessary for the rake migration tasks.
So really the only time you would need something more privileged is for system-wide installation of gems, Ruby, or Rails updates, but a lot of that depends on how your production environment was set up.
Given Passenger + Nginx and separate DB credentials, you can disable sudo, see whether you hit any errors during your Capistrano deploy, and then pick up from there.
I need to set up a connection to an external service in my Rails app. I do this in an initializer. The problem is that the service library uses threaded delivery (which I need, because I can't have it bogging down requests), but the Unicorn forking life cycle kills that thread, so the workers never see it. One solution is to open a new connection on every request, but that is unnecessarily wasteful.
The optimal solution is to set up the connection in an after_fork block in the Unicorn config. The problem there is that the block doesn't get invoked outside of Unicorn, which means we can't exercise it in the development/testing environments.
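Roughly, the after_fork version looks like this (ServiceClient is a placeholder for the actual library):

# config/unicorn.rb
after_fork do |server, worker|
  # The delivery thread doesn't survive the fork from the master process,
  # so each worker establishes its own connection here.
  $service = ServiceClient.connect   # ServiceClient is a placeholder
end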
So the question is, what is the best way to determine whether a Rails app is running under Unicorn (either master or worker process)?
There is an entry in the Rack request env that is accessible in Rails (I know it exists in 3.0 and 3.1): check the value of env['SERVER_SOFTWARE']. You could just run a regex or string comparison against that value to determine which server you are running under.
I have a template in my admin area that iterates over the request env and spits out its contents.
Unicorn 4.0.1
env['SERVER_SOFTWARE'] => "Unicorn 4.0.1"
rails server (webrick)
env['SERVER_SOFTWARE'] => "WEBrick/1.3.1 (Ruby/1.9.3/2011-10-30)"
You can check for defined?(Unicorn), and in your Gemfile set gem "unicorn", require: false.
In fact you don't need the Unicorn library loaded in your Rails application:
the server is started by the unicorn command from the shell, so with require: false the constant is only defined when the app is actually running under Unicorn.
Checking for the Unicorn constant seems like a good solution, BUT it depends very much on whether require: false is set in the Gemfile. If it isn't (which is quite probable), the check can give a false positive.
I've solved it in a very straightforward manner:
# `config/unicorn.rb` (or alike):
ENV["UNICORN"] = "1"   # ENV values must be strings
...

# `config/environments/development.rb` (or alike):
...
# Log to stdout if the Web server is Unicorn.
if ENV["UNICORN"].to_i > 0
  config.logger = Logger.new(STDOUT)
end
Cheers!
You could check whether the Unicorn module has been defined with Object.constants.include?(:Unicorn) (on Ruby 1.9+ constants are returned as symbols, so checking for the string 'Unicorn' would always be false).
This is very specific to Unicorn, of course. A more general approach would be to have a method which sets up your connection and remembers that it has already done so. If it gets called multiple times, it simply returns without doing anything on subsequent calls. Then you call the method both in after_fork and in a before_filter in your application controller. If it has already run in after_fork it does nothing in the before_filter; if it hasn't, it does its thing on the first request and nothing on subsequent requests.
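A minimal sketch of that idea, where ServiceConnection and the client class are placeholders for whatever your library provides:

# lib/service_connection.rb (hypothetical helper)
module ServiceConnection
  def self.establish!
    # Memoized: only does the work once per process.
    @client ||= ExternalService::Client.new # placeholder for the real client
  end

  def self.client
    @client
  end
end

# config/unicorn.rb
after_fork { |server, worker| ServiceConnection.establish! }

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter { ServiceConnection.establish! }
end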
Inside config/unicorn.rb, define an ENV variable:
ENV['RAILS_STDOUT_LOG'] = '1'
worker_processes 3
timeout 90
and then ENV['RAILS_STDOUT_LOG'] will be accessible anywhere in the Rails app running inside the Unicorn workers.
My issue: I wanted to output all the logs (SQL queries) on the Unicorn workers, and not on any other workers on Heroku, so what I did was add an env variable in the Unicorn configuration file.
If you use unicorn_rails, the check below will help:
defined?(::Unicorn::Launcher)