I am using Linode with Ubuntu 10.04 and Capistrano, Unicorn, & Nginx to deploy.
How do I do the equivalent of heroku run rake db:reset with this setup? Is it as simple as running cap deploy:cold again to rerun the migrations?
I've already deployed and want to drop the database and rerun all the migrations, but I'm not sure which commands to run with this setup to do so.
I wrote a tiny little file you can copy to run arbitrary rake tasks via Capistrano: http://jessewolgamott.com/blog/2012/09/10/the-one-where-you-run-rake-commands-with-capistrano/
Once it's set up, you can run:
cap sake:invoke task="db:reset"
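The linked post isn't reproduced here, but the idea is a generic Capistrano task that forwards a task name from the command line to rake on the server. A minimal sketch of what such a task might look like (Capistrano 2 syntax; the namespace and details are illustrative, not necessarily what the post uses):

namespace :sake do
  desc "Run an arbitrary rake task on the server, e.g. cap sake:invoke task='db:reset'"
  task :invoke do
    # the task= argument from the command line is available via ENV
    run "cd #{current_path} && bundle exec rake #{ENV['task']} RAILS_ENV=#{rails_env}"
  end
end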
For Capistrano 3, this resets the data without actually dropping the database. Run it with bundle exec cap db:reset:
namespace :db do
  desc 'Resets DB without create/drop'
  task :reset do
    on primary :db do
      within release_path do
        with rails_env: fetch(:stage) do
          execute :rake, 'db:schema:load'
          execute :rake, 'db:seed'
        end
      end
    end
  end
end
You could add the following to your deploy.rb file
namespace :custom do
  task :task do
    run "cd #{current_path} && bundle exec rake db:reset RAILS_ENV=#{rails_env}"
  end
end
Then run cap custom:task to clear the database.
If you are using Capistrano 3, consider using the capistrano-rails-collection.
You can also copy the code directly from the db.rake file in the repository.
Or, if you want a full-fledged solution to run all your rake tasks on a remote server, check out the Cape gem.
Related
Suppose I have a model method:
def self.do_something
How do I run this method after deploy in Capistrano? I've tried runner, and I've tried making a rake task, but nothing works. Can someone provide an example?
Thanks,
kevin
You have to load the Rails environment before calling your rake task from your deploy file; Capistrano doesn't do it for you:
run "cd #{current_path} ; RAILS_ENV=#{rails_env} bundle exec rake my:task"
I'm trying to run my seed file on a remote server using Capistrano. My deploy is OK, so there is no issue there. Here is the code for running the seed file in config/deploy.rb:
namespace :seed do
  desc "Run a task on a remote server."
  # run like: cap staging rake:invoke task=a_certain_task
  task :default do
    run("cd #{deploy_to}/current; /usr/bin/env bundle exec rake #{ENV['db:seed']} RAILS_ENV=#{rails_env}")
  end
end
I am invoking this task by running 'cap seed'.
What's weird is that it looks like tests are running when I run this. HERE is a snippet.
Maybe the problem is with the #{ENV['db:seed']} part. Shouldn't it just be db:seed? The environment variable db:seed doesn't exist, so you are calling a plain rake command with no task name.
Try this:
run("cd #{deploy_to}/current; /usr/bin/env bundle exec rake db:seed RAILS_ENV=#{rails_env}")
I've been doing Ruby on Rails development with ElasticSearch between two machines and it's starting to get a little annoying. My usual workflow is:
git pull
bundle install
rake db:migrate (or rake db:setup depending)
rails server
elasticsearch -f -D myconfig.xml
rake environment tire:import CLASS=MyObject FORCE=true
Is there any way I can add all of these commands to some kind of startup script in Rails to bring them all into one place? It would make bringing up a dev environment a lot easier every time I switch machines.
The best way I've found is to use the Foreman gem to kickstart your project and associated processes.
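Foreman starts every process listed in a Procfile at your project root with a single foreman start. A sketch of what that Procfile could look like for this workflow; the process names are just examples and the commands are lifted from the question, while one-off steps such as bundle install and rake db:migrate would still run beforehand:

web: bundle exec rails server
search: elasticsearch -f -D myconfig.xml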
It looks like you should do this in your deployment using Capistrano. Here is an example config/deploy.rb file:
[basic parts omitted]
after "deploy", "bundler:bundle_install"
after "bundler:bundle_install", "db:db_migrate"
after "deploy:db_migrate", "deploy:elastic_search_indexing"
namespace :bundler do
  desc 'call bundle install'
  task :bundle_install do
    run "cd #{deploy_to}/current && bundle install"
  end
end

namespace :db do
  desc 'fire the db migrations'
  task :db_migrate do
    run "cd #{deploy_to}/current && bundle exec rake db:migrate RAILS_ENV=\"production\""
  end
end

namespace :elasticsearch do
  desc 'run elasticsearch indexing via tire'
  task :index_classes do
    run "cd #{deploy_to}/current && bundle exec rake environment tire:import CLASS=YourObject FORCE=true"
  end
end
[rest omitted]
Make sure you have a config file on the target machine (Linux) in /etc/elasticsearch/elasticsearch.yml with contents like:
cluster:
  name: elasticsearch_server
network:
  host: 66.98.23.12
And the last point to mention is that you should create an initializer config/initializers/tire.rb:
if Rails.env == 'production'
  Tire.configure do
    url "http://66.98.23.12:9200"
  end
end
As you can see, this is the same IP address, but it is only used for the production environment. I assume that you access Elasticsearch locally (in development mode) via localhost; by default Elasticsearch listens on http://0.0.0.0:9200.
A good starting point, and also in-depth help, is provided by the awesome Ryan Bates and his Railscasts: http://railscasts.com/episodes?utf8=%E2%9C%93&search=capistrano
What's keeping you from putting it in a bash script and placing the script inside your RAILS_APP_HOME/scripts folder?
#!/bin/sh
git pull
bundle install
rake db:migrate
# start elasticsearch in the background so the remaining steps can run
elasticsearch -f -D myconfig.xml &
# give elasticsearch a moment to boot before importing
sleep 5
rake environment tire:import CLASS=MyObject FORCE=true
rails server
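If you go this route, remember to make the script executable and run it from the application root, for example (the file name here is just an example):

chmod +x scripts/dev_setup.sh
./scripts/dev_setup.sh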
I am using Ruby on Rails 3.0.9 and I would like to seed the production database in order to add some records without rebuilding the whole database (that is, without deleting all existing records, just adding the ones that don't exist yet). I would like to do that because the new data is needed to make the application work.
So, since I am using the Capistrano gem, I ran the cap -T command in the console in order to list all available commands and find out how I can accomplish this:
$ cap -T
=> ...
=> cap deploy:seed # Reload the database with seed data.
=> ...
I am not sure about the word "Reload" in the "Reload the database with seed data." description. So, my question is: if I run the cap deploy:seed command in the console on my local machine, will the seeding process delete all existing data in the production database and then repopulate it, or will that command just add the new data to that database, as I intend?
If you are using Bundler, then the Capistrano task should be:
namespace :deploy do
  desc "reload the database with seed data"
  task :seed do
    run "cd #{current_path}; bundle exec rake db:seed RAILS_ENV=#{rails_env}"
  end
end
and it might be placed in a separate file, such as lib/deploy/seed.rb, and included in your deploy.rb file using the following command:
load 'lib/deploy/seed'
This worked for me:
task :seed do
  puts "\n=== Seeding Database ===\n"
  on primary :db do
    within current_path do
      with rails_env: fetch(:stage) do
        execute :rake, 'db:seed'
      end
    end
  end
end
Capistrano 3, Rails 4
Using Capistrano 3, Rails 4, and SeedMigrations, I created a Capistrano seed.rb task under /lib/capistrano/tasks:
namespace :deploy do
  desc 'Runs rake db:seed for SeedMigrations data'
  task :seed => [:set_rails_env] do
    on primary fetch(:migration_role) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :rake, "db:seed"
        end
      end
    end
  end

  after 'deploy:migrate', 'deploy:seed'
end
My seed migrations are now completely separate from my schema migrations, and they run right after the db:migrate process. What a joy! :)
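If you use the Capfile generated by cap install, custom tasks under lib/capistrano/tasks are imported by a line like the one below. Note that the default glob only matches *.rake files, so a file named seed.rb would need a .rake extension (or an adjusted glob) to be picked up automatically:

# Capfile (Capistrano 3 default)
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }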
Try adding something like this in your deploy.rb:
namespace :deploy do
  desc "reload the database with seed data"
  task :seed do
    run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
  end
end
cap deploy:seed should basically be a reference to rake db:seed. It should not delete existing data unless you have told it to do so in your db/seeds.rb.
My best guess for the word "Reload" is that :seed is a stateless command; it does not automatically know where it left off, unlike regular Rails migrations. So technically you are always "reloading" the seed data every time you run it. A wild guess, but it sounds plausible, no?
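One way to make that behaviour explicit is to write db/seeds.rb so it is idempotent and only inserts what is missing. A rough sketch (the Role model and the role names are made up for illustration; the dynamic finder form shown here works on Rails 3.0, while newer Rails versions would use find_or_create_by(name: ...)):

# db/seeds.rb
['admin', 'editor'].each do |role_name|
  # creates the record only if no Role with that name exists yet,
  # so existing data is left untouched
  Role.find_or_create_by_name(role_name)
end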
Please see Javier Vidal's answer below.
After a discussion with the capistrano-rails gem authors, I decided to implement this kind of task in a separate gem. I think this helps follow the DRY idea instead of implementing the same task over and over again.
I hope it helps you: https://github.com/dei79/capistrano-rails-collection
I have a rails app that is not in the root directory of the repository. When it is deployed, some other static files are deployed with it in a parent directory. The structure is something like this:
root
-- otherstuff
-- railsapp
When I do a deployment with cap deploy:migrations, the Capistrano command that gets executed looks like this, which of course doesn't work:
cd /u/apps/minicart/releases/20100717215044; rake RAILS_ENV=staging db:migrate
How do I change this so that it will be:
cd /u/apps/minicart/releases/20100717215044/railsapp; rake RAILS_ENV=staging db:migrate
I made it work by adding a task that executes this command after deploy:finalize_update, but I would prefer to use the built-in method, plus my hacked version is executed with every deployment.
Any advice would be appreciated.
Tim
This turned out to be very simple.
I added a deploy namespace to my deploy.rb file and then redefined the migrate method. Now my method runs on cap deploy:migrations.
namespace :deploy do
  desc "Migrating the database"
  task :migrate, :roles => :app do
    run <<-CMD
      cd #{release_path}/minicart; RAILS_ENV=#{stage} rake db:migrate
    CMD
  end
end