Re-source .bashrc when restarting unicorn? - ruby-on-rails

I have some ENV variables that are sourced for the deploy user. (Similar to what Heroku recommends, but without using Heroku.)
My rails app depends on these for certain functions, for example, in application.rb:
config.action_mailer.default_url_options = { host: ENV['MY_HOST'] }
This is necessary because we have several staging hosts. Each host has MY_HOST defined to its correct hostname in .bashrc like so:
export MY_HOST="staging3.example.com"
This allows us to only use one rails staging environment, but still have each host's correct hostname used for testing, sending email, etc since this can be set on a per-machine basis.
Unfortunately it looks like when I restart Unicorn using USR2 it doesn't pick up changes to those variables. Doing a hard-stop and start will correctly load any changes.
I'm using preload_app = true, which I'm guessing has something to do with it. Any ideas?
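For context: on USR2, Unicorn re-execs the master process, and the new master inherits the environment the original master was started with, which is why later edits to .bashrc only take effect after a full stop/start. A commonly cited pattern (not the poster's actual config; the path below is a placeholder) is to pin anything that must be correct across re-execs explicitly in unicorn.rb:
# config/unicorn.rb -- illustrative sketch only
preload_app true

before_exec do |server|
  # The re-exec'd master keeps the old process environment, so values that
  # must survive a USR2 restart have to be set explicitly here.
  ENV["BUNDLE_GEMFILE"] = "/path/to/app/current/Gemfile"
end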

In the end I went away from this approach altogether in favor of loading my app config from an app_config.yml file. Ryan Bates covers this approach in Railscast #226.
The only thing I did differently is that I load a shared app_config.yml for each server I use. Since I'm using capistrano, I just symlink the file on deploy.
For example, on staging2 my shared/config/app_config.yml looks like this:
staging:
  host: "staging2.example.com"
... whereas on staging3 it looks like this:
staging:
  host: "staging3.example.com"
Now my application.rb has this line instead:
config.action_mailer.default_url_options = { host: APP_CONFIG[:host] }
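Since that line lives in application.rb, APP_CONFIG has to be defined before the config block is evaluated. A minimal sketch of how it could be loaded near the top of application.rb, along the lines of the Railscast approach (the exact loading code in the original setup may differ):
# Near the top of config/application.rb, after require 'rails/all' -- sketch only
APP_CONFIG = YAML.load_file(
  File.expand_path("../app_config.yml", __FILE__)
)[Rails.env].symbolize_keys # YAML keys are strings; the app accesses them as symbols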
I removed the actual config/app_config.yml from git so that it's not included on deploy. (I moved it to config/app_config.yml.template.) Then on deploy I use a Capistrano task to symlink shared/config/app_config.yml to config/app_config.yml:
namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
This strategy has these benefits over using ENV vars:
Deployed to all nodes via capistrano
We can do conditional hosts by simply changing the file on the appropriate server
Unicorn will get changes with USR2 since everything's done inside of rails
Everything's kept in one place, and the environment isn't affected by some other variables outside of the codebase

Related

Refresh .bashrc with new env variables

I have a production server running our Rails app, with ENV variables in there, formatted correctly. They show up in rails c, but we can't get them recognized in the running instance of the app.
Running puma, nginx on an ubuntu box.
What needs to be restarted every time we change .bashrc? This is what we do:
1. Edit .bashrc
2. . .bashrc
3. Restart puma
4. Restart nginx
They're still not recognized in the app, yet they show up in rails c. What are we missing?
edit:
Added the env variables to /etc/environment, based on suggestions from other posts saying that .bashrc only applies to specific shell sessions, which could be the cause. Supposedly /etc/environment is available to all users; this is mine. Still having the same issues:
Show up fine in rails c
Show up fine when I echo them in shell
Do not show up in application
export G_DOMAIN=sandboxbaa3b9cca599ff0.mailgun.org
export G_EMAIL=mailgun@sandboxbaa3ba3806d5b499ff0.mailgun.org
export GEL=support@xxxxxx.com
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
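One detail that may matter here: /etc/environment is read by pam_env rather than by a shell, so it expects plain KEY=value lines and does not understand the export keyword. The entries above, rewritten in that format, would look roughly like this:
G_DOMAIN=sandboxbaa3b9cca599ff0.mailgun.org
G_EMAIL=mailgun@sandboxbaa3ba3806d5b499ff0.mailgun.org
GEL=support@xxxxxx.com
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"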
edit:
In the app I render G_DOMAIN and G_EMAIL in plain HTML (this works in development with dotenv, but does not work once pushed to the Ubuntu production server):
ENV TEST<br>
G_DOMAIN: <%= ENV['G_DOMAIN'] %><br>
G_EMAIL: <%= ENV['G_EMAIL'] %>
However, the following env variables clearly are available (they're defined in both .bashrc and /etc/environment, just like the variables shown above), because our images upload to S3 with no issue in production.
production.rb
# Configuration for Amazon S3
:provider => 'AWS',
:aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
:aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
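For what it's worth, that fragment looks like a fog credentials hash; assuming a CarrierWave + fog setup (an assumption, not stated in the question), the surrounding context would be something like:
# Assumed context only -- the question does not say which uploader gem is in use.
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  }
  config.fog_directory = ENV['S3_BUCKET'] # hypothetical bucket variable
end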
edit2: could this be related to this puma issue?
https://github.com/puma/puma/commit/a0ba9f1c8342c9a66c36f39e99aeaabf830b741c
I was having a problem like this, also. For me, this only happens when I add a new environment variable.
Through this post and after some more googling, I've come to understand that the restart command for Puma (via the gem capistrano-puma) might not see new environment variables because the process forks itself when restarting rather than killing itself and being started again (this is a part of keeping the servers responsive during a deploy).
The linked post suggests using a YAML file that's only stored on your production server (read: NOT in source control) rather than relying on your deploy user's environment variables. This is how you can achieve it:
1. Insert this code in your Rails app's config/application.rb file:
config.before_configuration do
  env_file = File.join(Rails.root, 'config', 'local_env.yml')
  YAML.load(File.open(env_file)).each do |key, value|
    ENV[key.to_s] = value
  end if File.exist?(env_file)
end
2. Add this code to your Capistrano deploy script (config/deploy.rb):
desc "Link shared files"
task :symlink_config_files do
on roles(:app) do
symlinks = {
"#{shared_path}/config/local_env.yml" => "#{release_path}/config/local_env.yml"
}
execute symlinks.map{|from, to| "ln -nfs #{from} #{to}"}.join(" && ")
end
end
before 'deploy:assets:precompile', :symlink_config_files
3. Profit! With the code from step 1, your Rails application will load any keys you define in your server's Capistrano directory's ./shared/config/local_env.yml file into the ENV hash, and this will happen before the other config files like secrets.yml or database.yml are loaded. The code in step 2 makes sure that the file in ./shared/config/ on your server is symlinked to current/config/ (where the code in step 1 expects it to be) on every deploy.
Example local_env.yml:
SECRET_API_KEY_DONT_TELL: 12345abc6789
OTHER_SECRET_SHH: hello
Example secrets.yml:
production:
  secret_api_key: <%= ENV["SECRET_API_KEY_DONT_TELL"] %>
  other_secret: <%= ENV["OTHER_SECRET_SHH"] %>
This will guarantee that your environment variables are found, by not really using environment variables. Seems like a workaround, but basically we're just using the ENV object as a convenient global variable.
(The Capistrano syntax might be a bit old, but what's here works for me in Rails 5. I did have to update a couple of things from the linked post to get it working, and I'm new to Capistrano, so edits are welcome.)

Capistrano deploy to different path on same server

I am trying to deploy my application using Capistrano, but I want to deploy it to multiple paths on the same server. For example, on the first run I want to deploy it to this path:
set :deploy_to, '/home/a/some_path/'
Once the first one completes, it should run for the second path, which will be
set :deploy_to, '/home/b/some_path/'
and so on. Any suggestions on how I can achieve this? Right now my single-path deployment works fine.
In your config file:
set :deploy_to, ENV["DEPLOY_PATH"]
Then, to deploy, run the command setting the DEPLOY_PATH variable:
DEPLOY_PATH="my/path" cap production deploy
Using Capistrano 3.8.2, I monkeypatched lib/capistrano/dsl/paths.rb in my deploy.rb, but then I found that I needed more work to get the git wrapper set up right when there were different deploy users.
The result is at: https://gist.github.com/mcr/49e8c7034658120013c1fe49da77c2ac
But, I'm leaving the essence of the content here:
module Capistrano
  module DSL
    module Paths
      def deploy_to
        dir = @host.properties.fetch(:deploy_to) || fetch(:deploy_to)
        puts "For #{@host.hostname} deploy_to: #{dir}"
        dir
      end
    end
  end
end
(You can take the puts out and shorten it to a one-liner, but I found the extra debug output useful.)
One then does:
server "server.client1.example.com", user: "client1", roles: %w{app db web}, deploy_to: '/client1/app/foobar'
server "server.client2.example.com", user: "client2", roles: %w{app db web}, deploy_to: '/client2/app/foobar'
where server.client1.example.com and server.client2.example.com are CNAMEs or duplicate A/AAAA records for the same server. This also isolates the question of where each client is to DNS.

Capistrano creating new DB for every deployment

At the moment I have Capistrano deploying to a server which is set up as a development environment.
However, every time I run cap deploy it doesn't keep the database at all, so every deployment ends up with a fresh, completely empty database. I have to run cap deploy:migrations to set up the DB, but the issue is that there is an individual DB for each deployment.
I figure I could change database.yml to use a path such as ../../db/development.sqlite3 for the DB, but this would mean I then have to copy that change locally too, and moving my DB out of my project directory on my own laptop would be very inconvenient.
Is there a way to tell Capistrano to use a single DB location for every deployment yet still keep my DB in the same place locally? Setting the server to a production environment isn't an option at this stage, unfortunately. Something like being able to do:
development:
  adapter: sqlite3
  :on local
    database: db/development.sqlite3
  :on server
    database: /webapps/rails/shared/dev.sqlite3
  pool: 5
  timeout: 5000
(At this point it's probably also worth mentioning I'm very much still learning my way around Rails).
Any of your thoughts would be most appreciated, thank you. If the only option is to set the env to production then that will have to do, but if there's a way round it that lets me keep the server as a development server, that would be great.
Jack.
Add a step in Capistrano that runs before any database stuff to create a symbolic link, pointing whatever database file you want at the shared directory. This is how the log directory is set up for you. Something along these lines:
namespace :custom do
  task :symlink, :roles => :app do
    run "ln -nfs #{shared_path}/development.sqlite3 #{release_path}/db/development.sqlite3"
  end
end

after "deploy:create_symlink", "custom:symlink"
I think you're after multiple application environments: one for staging on the server, one for development locally and eventually one for production. For a good run through, try here.

Managing security for an open source rails 3 application stored at github

I'm new to Rails and open source, and with the app soon ready to deploy to a production environment, I have some security considerations.
How to handle database.yml is covered pretty well by how-to-manage-rails-database-yml.
But from my point of view there are more configuration settings in a normal Rails application that shouldn't be hosted in a public GitHub repository but still need to be deployed to production, e.g.:
devise.rb -> config.pepper
secret_token.rb -> Application.config.secret_token
capistrano -> deploy.rb
...
Adding config/****/* to .gitignore would not only prevent new developers from running bundle install, db:create, db:migrate and rails server, but would also make it harder to keep the production config up to date when a new gem with an initializer is installed.
Another possibility would be to add an environment.yml with the sensitive config, similar to database.yml, from which sensitive values in the initializers would be overridden?
This will make it easy to get up and running after a clean checkout and the production environment will be easy to maintain.
Any ideas how to approach my problems above?
I usually put "safe" data in these files, which will usually work for development purposes. But in production I symlink the files to another location with capistrano, like this:
invoke_command "ln -sf #{shared_path}/database.yml #{release_path}/config/database.yml"
So on the production server I have a bunch of files that override the files in source control. I don't even work with a database.yml.example, just some sane default database.yml that the developers agree to use in development and test.
For individual settings, like API keys, I usually create a config/settings.yml and read them from inside the initializer:
SETTINGS = YAML.load(IO.read(Rails.root.join("config", "settings.yml")))
YourApp::Application.config.secret_token = SETTINGS["secret_token"]
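The settings.yml itself is just flat key/value pairs; a hypothetical example with placeholder values (in production this is the symlinked, non-version-controlled copy):
# config/settings.yml -- placeholder values only
secret_token: "replace-with-a-long-random-hex-string"
some_api_key: "development-safe-default"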

Deploying a rails app to multiple locations

I'm trying to deploy the same rails app to two different locations with different app names, different logos, different stylesheets, etc.
I've got the code working based on an APP_NAME and a HOST_NAME variable I store in environments/production.rb. Now I need to actually deploy it, and I need a better solution than manually editing the environment file on the production machine.
The only way I can see to do it is to create a new production environment - e.g. production_app2 - and define APP_NAME and HOST_NAME differently in them. Is there a better way?
No no no! Don't edit the environment files. I mean, edit them as you need to for things that need to be configured the same for every deployment, but not for things that should be configurable between deployments.
For that, use configuration.
Throw a YAML file in config that looks something like this:
development:
  :app_name: App 1
  :host_name: something.com
test:
  :app_name: App 1
  :host_name: something.com
production:
  :app_name: App 1
  :host_name: something.com
Call it whatever makes sense. Let's say settings.yml.
Now load it with an initializer in config/initializers/settings.rb that looks like this:
SETTINGS = YAML.load_file("#{RAILS_ROOT}/config/settings.yml")[RAILS_ENV]
Now access your configuration like this:
SETTINGS[:app_name]
(If you don't want to change your existing code at all, inside config/initializers/settings.rb add lines that set your existing names like APP_NAME = SETTINGS[:app_name], etc.)
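Putting those pieces together, config/initializers/settings.rb would look roughly like this (a sketch; the constant names are taken from the question):
# config/initializers/settings.rb -- sketch only
SETTINGS = YAML.load_file("#{RAILS_ROOT}/config/settings.yml")[RAILS_ENV]

# Optional aliases so existing code that references the old constants keeps working:
APP_NAME  = SETTINGS[:app_name]
HOST_NAME = SETTINGS[:host_name]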
Note that this is one possible implementation of settings configuration, but even if another approach is taken it should be based on deployment-independent configuration. This can be much more easily and maintainably set up to persist between deployments and upgrades than mucking with environment files.
Again, to recap:
environment files are for configuration that is the same across all deployments
configuration files are for configuration that can change between deployments
Update
For Capistrano based deployments, this is what I use to symlink multiple configuration files in the new current from the shared directory (I think it originally came from an Ezra recipe from EngineYard):
after "deploy:update_code","deploy:symlink_configs"
namespace(:deploy) do
task :symlink_configs, :roles => :app, :except => {:no_symlink => true} do
configs = %w{ database settings }
configs.map! { |file| "ln -nfs #{shared_path}/config/#{file}.yml #{release_path}/config/#{file}.yml" }
run <<-CMD
cd #{release_path} && #{configs.join(' && ')}
CMD
end
end
I think that's a pretty good way.
Where I work, we define different environments (e.g. 'staging', 'production', 'production_backup', giving us staging.rb, production.rb and production_backup.rb files where you can define your specific APP_NAME and HOST_NAME values) and deploy to each of them using Capistrano. It works just fine.
This is a good link on it: http://www.egtheblog.com/?p=8
Because you are actually deploying to two different environments, it seems best to create two different environment files, each with their own settings. Make sure you pick descriptive names for your environment files, not just production2.
You could also store this information in the database, but I don't know if you're willing to accept such a dependency. I guess using a database would only make sense if the number of deployments is too large to manage easily with a few environment files.
