I don't really understand what model of unix accounts/permissions is intended with Capistrano.
Let's say I've got a Rails app called Widget and I'll be deploying with Passenger. In general, pre-Capistrano, I want the entire ./widget directory to be owned by a user called 'widget'. And then, by default, Passenger will run the app process as the 'widget' user too, because Passenger runs the app as the user that owns the files.
And the whole point of this is for that 'widget' account to have fairly limited permissions, right? Since a web app will be running under that account?
So since I want the files to be owned by 'widget', I tell cap:
set :user, "widget"
But now when I run "cap deploy:setup", it wants to 'sudo' from that account. No way does that 'widget' account get sudo privileges; the whole point is keeping this account's privileges limited.
Okay, I can tell cap not to use sudo... but then maybe it won't actually have the privileges to do what it needs.
I can find a workaround to this too. But I start thinking, why do I keep having to re-invent the wheel? I mistakenly thought the point of cap recipes was to give me some best practices here. Anyway... what do people actually do here?
Use one unix account for install, but then have cap somehow 'chown' it to something else? Use one unix account, but have something non-cap (puppet?) do enough setup so that account doesn't need to sudo to get things started? What? What am I missing?
You can avoid some of the headache by using Passenger, most commonly with Nginx as your web server.
Then, to restart the app, the unprivileged 'widget' user just touches a file under the app's path, and Passenger will automatically restart the application when it sees that file is present.
This is enabled via the following in your config/deploy.rb:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    # Touching tmp/restart.txt tells Passenger to reload the app on the next request
    run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
  end
end
As for other privileged tasks such as MySQL/DB administration, your database.yml provides the credentials necessary to handle the rake migration tasks.
So really the only time you would need something more privileged would be for system-wide installation of gems, Ruby, or Rails updates, but a lot of that depends on how your production environment was set up.
Given Passenger + Nginx and separate credentials for the DB, you can disable sudo, see whether you encounter any errors during your Capistrano deploy process, and pick up from there.
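For example, a minimal sketch of the relevant config/deploy.rb settings (Capistrano 2 style, to match the question; the deploy path is an assumption, so adjust it to your own layout):
# config/deploy.rb (Capistrano 2), sketch only
set :user, "widget"                    # SSH in as the unprivileged app owner
set :use_sudo, false                   # never prefix remote commands with sudo
set :deploy_to, "/home/widget/widget"  # assumed path the 'widget' user can write to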
I'm having trouble with Capistrano and Sidekiq's Monit tasks.
I set up a user for Capistrano and everything was going smoothly until I installed Sidekiq.
My problem is when I try to execute cap staging sidekiq:monit:config (sidekiq:monit:start has the same permission problem).
Every time I've tried, it "freezes" because it asks for the password.
Then I tried setting sidekiq_monit_use_sudo to false. It's OK, it doesn't use sudo, but then it doesn't have permission to copy /tmp/monit.conf into the /etc/monit/conf.d/ folder.
It's the first time I'm setting up a server and I'm kinda lost here =|
Maybe I should try to configure the Sidekiq Monit setup manually?
I'm using ruby 2.5 and these gems:
capistrano 3.10
capistrano-sidekiq 1.0
rails 5.1
Also I have the :pty config set to true as I don't feel comfortable not using a password.
Thank you!
You have a couple of options; I'll describe two: the right one and the easy/bad one.
Local User Monit
My personal usage of Monit is on a shared server on which I do not have root access. So I run Monit itself as a non-root user.
In order to do this, I compiled Monit with its prefix as $HOME/apps, so that the config files are in $HOME/apps/etc. This avoids the sudo issue. If you have access to the package manager and installed Monit that way, you can run monit as your user with the -c param to define where it should look for configuration files:
monit -c $HOME/config/monitrc
In order to get Capistrano to recognize the local monit, you will need some extra parameters in config/deploy.rb:
#set :monit_bin, '/usr/bin/monit' # Use this if you compile monit yourself.
set :sidekiq_monit_conf_dir, '/home/myuser/config/monit.d' # Feel free to customize.
set :sidekiq_monit_use_sudo, false
In the monitrc file you have defined with the -c option, you will need to make sure whatever folder you define in :sidekiq_monit_conf_dir is pulled in via includes:
include /home/myuser/config/monit.d/*.conf
Since I don't have an init system, I have Cron start Monit every 30 minutes, which is a noop if it is already running:
# Restart monit if it dies
*/30 * * * * $HOME/apps/bin/monit > /dev/null
If you have root access, you can improve upon this by having an init script (or systemd unit file) start Monit as your local user.
Bad option: give your user access to the conf dir
You can edit /etc/monit/monitrc to include your local user config directory as above. Similarly, you can allow your user to write to /etc/monit/conf.d. The major downside of these solutions is that you are now allowing your non-root user to create files which will be executed as root, opening a privilege-escalation vulnerability. If your user ever got compromised, you certainly don't want an easy way for the attacker to get to root.
I include this option mostly because it's commonly considered, and it should be avoided in the vast majority of cases (i.e. whenever you care about security). However, it might be useful in rare cases, such as when you have a short-term server for internal use only, behind a firewall with only trusted users, and you need to set it up in a hurry.
Does anyone have a good way to manage the app server with Capistrano? This seems to be a "left to your own devices" situation, and I've yet to see a good example of it.
There are basically two trains of thought I see.
1) Daemonize it as the deploy user. Pros: no system service, etc., so no permissions issues. However, this reeks, because if the machine is rebooted, blam, the system goes down.
2) Init scripts. Install an init script and use that to manage the server. This would survive reboots and allow for, say, /etc/init.d/myapp restart/stop/start control if you SSH'd in. This is decent apart from two reasons:
Most people manage it from Capistrano with sudo (I feel like Capistrano 3 discourages this)
I've yet to see a good Upstart (or similar) script that works with Unicorn
I'm experimenting with nginx + Unicorn. Nginx I have set up perfectly. I've added a site to sites-available and pointed upstream to /appserver/public. This works great: asset precompilation works fantastically, all is well, and I can redeploy and be served new assets. It's simple and works with the OS init process. However, I've lucked out, as the nginx config is basically static and nginx only has to serve static files.
The app server (Unicorn/Thin/Puma/whatever) is the part that's tripping me up. I would like it to reload the application on cap deploy, but I'm struggling to find a good enough example of this.
In summary: what is a simple way of having a Rails application survive reboots and reload when cap deploy is called?
If you use Passenger with your nginx (rather than Unicorn or Thin), you can restart the app after deployment by touching the tmp/restart.txt file:
task :restart do
  on roles(:app), in: :sequence, wait: 5 do
    execute :touch, release_path.join('tmp/restart.txt')
  end
end
To reload a Puma server after deploy, use capistrano3-puma:
Gemfile:
gem 'capistrano3-puma'
Capfile:
require 'capistrano/puma'
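Depending on your capistrano3-puma version you may also need to register the plugin in the Capfile, and you can tune a few settings in config/deploy.rb. A sketch (the values shown are illustrative assumptions; check the gem's README for your version):
# Capfile: newer capistrano3-puma releases register their tasks via a plugin
install_plugin Capistrano::Puma

# config/deploy.rb: optional tuning, values are just examples
set :puma_threads, [4, 16]
set :puma_workers, 2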
I am working on an open-source project that I want to create, deploying to a VPS.
I'm working with Rails 4 and Capistrano 3 on Ubuntu, both on the local machine and the server.
I have a configuration file named "application.yml" that includes basic information about the application, such as a Google Analytics script, the application name, mailer configurations, and such.
I want to create a task that would rename "application.example.yml" to "application.yml" before Capistrano runs deploy:migrate.
I wrote the task below and put it in config/deploy.rb:
before "deploy:migrate", "configure:application"
namespace :configure do
task :application do
run "#{try_sudo} cp #{current_path}/config/application.example.yml #{current_path}/config/application.yml"
end
end
(I know cp copies the file rather than renaming it, but the copy gets the new name, so that's fine.)
It does not seem to work. How can I rename "application.example.yml" before deploy:migrate runs?
GitHub Repository and deploy.rb
Also, as I move forward with my app I keep finding more and more stuff I know little or nothing about, and I would love it if someone experienced with Rails were willing to become my "mentor" by sharing their Skype (or any other communication service) with me so I could occasionally ask them my questions.
I have an nginx + Passenger stack for my Rails app.
Now, after each server restart, I need to run the following in a terminal in the project folder:
rake ts:start
But how can I automate it?
So that after each server restart Thinking Sphinx is started automatically, without my typing the command in a terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't figure out what else to try; please help me.
How can I do this? Any advice?
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
  begin
    # Probe the Thinking Sphinx connection
    ThinkingSphinx.search "test", :populate => true
  rescue Mysql2::Error => err
    puts ">>> ThinkingSphinx is unavailable. Trying to start .."
    MyApp::Application.load_tasks
    Rake::Task['ts:start'].invoke
  end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start Thinking Sphinx whenever the server boots up. You can see it as a gist:
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
Put this script into /etc/init.d/thinking_sphinx
Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.
The GitHub guys recently released their background processing app which uses Redis:
http://github.com/defunkt/resque
http://github.com/blog/542-introducing-resque
I have it working locally, but I'm struggling to get it working in production. Has anyone:
got a Capistrano recipe to deploy workers (control the number of workers, restart them, etc.)?
deployed workers to machine(s) separate from where the main app is running, and what settings were needed there?
gotten Redis to survive a reboot on the server (I tried putting it in cron, but no luck)?
worked resque-web (their excellent monitoring app) into your deploy?
Thanks!
P.S. I posted an issue on Github about this but no response yet. Hoping some SO gurus can help on this one as I'm not very experienced in deployments. Thank you!
I'm a little late to the party, but thought I'd post what worked for me. Essentially, I have god set up to monitor redis and resque. If they aren't running anymore, god starts them back up. Then, I have a rake task that gets run after a Capistrano deploy that quits my resque workers. Once the workers are quit, god will start new workers up so that they're running the latest codebase.
Here is my full writeup of how I use resque in production:
http://thomasmango.com/2010/05/27/resque-in-production
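As a rough illustration of that approach, a post-deploy rake task that asks local Resque workers to quit (so a monitor like god can respawn them on the new release) might look something like the sketch below. This is not the author's actual task; the file name, hostname check, and signal handling are assumptions.
# lib/tasks/resque.rake (sketch): gracefully quit Resque workers on this host
namespace :resque do
  desc "Send QUIT to local Resque workers so they exit after their current job"
  task :quit_workers => :environment do
    hostname = `hostname`.chomp
    Resque.workers.each do |worker|
      host, pid, _queues = worker.id.split(':')  # worker ids look like host:pid:queues
      next unless host == hostname
      begin
        Process.kill('QUIT', pid.to_i)           # god then restarts workers on the new code
      rescue Errno::ESRCH
        # the worker already exited; nothing to do
      end
    end
  end
end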
I just figured this out last night. For Capistrano you should use san_juan, and then I like using God to manage the deployment of workers. As for surviving a reboot, I am not sure, but I only reboot every 6 months so I am not too worried.
Although he suggests different ways of starting it, this is what worked easiest for me (within your deploy.rb):
require 'san_juan'
after "deploy:symlink", "god:app:reload"
after "deploy:symlink", "god:app:start"
As for managing where it runs, on another server, etc., he covers that in the configuration section of the README.
I use Passenger on my slice, so it was relatively easy, I just needed to have a config.ru file like so:
require 'resque/server'

run Rack::URLMap.new \
  "/" => Resque::Server.new
For my VirtualHost file I have:
<VirtualHost *:80>
  ServerName resque.server.com
  DocumentRoot /var/www/server.com/current/resque/public

  <Location />
    AuthType Basic
    AuthName "Resque Workers"
    AuthUserFile /var/www/server.com/current/resque/.htpasswd
    Require valid-user
  </Location>
</VirtualHost>
Also, a quick note: make sure you override the resque:setup rake task; it will save you lots of time when spawning new workers with God.
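The usual way to do that (a sketch of the common pattern, not necessarily the exact task used here) is to make resque:setup depend on the Rails environment so freshly spawned workers load the app:
# lib/tasks/resque.rake (sketch)
require 'resque/tasks'

# Load the full Rails environment before a worker boots, so jobs can use models
task "resque:setup" => :environment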
I ran into a lot of trouble, so if you need any more help, just post a comment.
Garrett's answer really helped; I just wanted to post a few more details. It took a lot of tinkering to get it right...
I'm also using Passenger, but with nginx instead of Apache.
First, don't forget that you need to install Sinatra; this threw me for a while.
sudo gem install sinatra
Then you need to make a directory for the thing to run, and it has to have a public and tmp folder. They can be empty but the problem is that git won't save an empty directory in the repo. The directory has to have at least one file in it, so I made some junk files as placeholders. This is a weird feature/bug in git.
I'm using the resque plugin, so I made the directory there (where the default config.ru is). It looks like Garrett made a new 'resque' directory in his rails_root. Either one should work. For me...
cd MY_RAILS_APP/vendor/plugins/resque/
mkdir public
mkdir tmp
touch public/placeholder.txt
touch tmp/placeholder.txt
Then I edited MY_RAILS_APP/vendor/plugins/resque/config.ru so it looks like this:
#!/usr/bin/env ruby
require 'logger'
$LOAD_PATH.unshift File.expand_path(File.dirname(__FILE__) + '/lib')
require 'resque/server'

use Rack::ShowExceptions

# Set AUTH_PASSWORD to your basic auth password to protect Resque.
AUTH_PASSWORD = "ADD_SOME_PASSWORD_HERE"
if AUTH_PASSWORD
  Resque::Server.use Rack::Auth::Basic do |username, password|
    password == AUTH_PASSWORD
  end
end

run Resque::Server.new
Don't forget to change ADD_SOME_PASSWORD_HERE to the password you want to use to protect the app.
Finally, I'm using Nginx so here is what I added to my nginx.conf
server {
  listen 80;
  server_name resque.seoaholic.com;
  root /home/admin/public_html/seoaholic/current/vendor/plugins/resque/public;
  passenger_enabled on;
}
And so it gets restarted on your deploys, probably with something like this in your deploy.rb:
run "touch #{current_path}/vendor/plugins/resque/tmp/restart.txt"
I'm not really sure if this is the best way; I've never set up Rack/Sinatra apps before. But it works.
This is just to get the monitoring app going. Next I need to figure out the god part.
Use these steps instead of configuring things at the web server level and editing the plugin:
# The steps needed to use resque-web within your application

# In routes.rb:
ApplicationName::Application.routes.draw do
  resources :some_controller_name
  mount Resque::Server, :at => "/resque"
end

# That's it, now you can access it from within your application, i.e.
# http://localhost:3000/resque

# To ensure that Resque::Server is loaded, add the :require option in the Gemfile:
gem 'resque', :require => "resque/server"

# To add basic HTTP authentication, add a resque_auth.rb file in the initializers folder
# and add these lines for security:
Resque::Server.use(Rack::Auth::Basic) do |user, password|
  password == "secret"
end

# That's it! :)
# Thanks to Ryan from RailsCasts for this valuable information:
# http://railscasts.com/episodes/271-resque?autoplay=true
https://gist.github.com/1060167