At the moment I have Capistrano deploying to a server which is set up as a development environment.
However, every time I run cap deploy, it doesn't keep the database at all, so every deployment starts with a completely empty database. I have to run cap deploy:migrations to set up the DB, but that leaves me with a separate DB for each deployment.
I figure I could change database.yml to use a path such as ../../db/development.sqlite3 for the DB, but that would mean copying the change locally too, and moving my DB out of my project directory on my own laptop would be very inconvenient.
Is there a way to tell Capistrano to use a single DB location for every deployment yet still keep my DB in the same place locally? Setting the server to a production environment isn't an option at this stage, unfortunately. Something like being able to do:
development:
  adapter: sqlite3
  :on local
    database: db/development.sqlite3
  :on server
    database: /webapps/rails/shared/dev.sqlite3
  pool: 5
  timeout: 5000
(At this point it's probably also worth mentioning I'm very much still learning my way around Rails).
Any of your thoughts would be most appreciated, thank you. If the only option is to set the env to production then that will have to do, but if there's a way round it that lets me keep the server as a development server, that would be great.
Jack.
Add a step in Capistrano that runs before any database tasks and creates a symbolic link pointing whatever database file you want at the shared directory. This is how the log directory is set up for you. Something along these lines:
namespace :custom do
  task :symlink, :roles => :app do
    run "ln -nfs #{shared_path}/development.sqlite3 #{release_path}/db/development.sqlite3"
  end
end
after "deploy:create_symlink", "customs:symlink"
I think you're after multiple application environments: one for staging on the server, one for development locally, and eventually one for production. For a good run-through, try here.
I have a Rails app and want to split off my Postgres database to a remote, managed one rather than the standard local one. It seemed easy enough to configure; however, now I am trying to run my migrations against this new db and it's proving more difficult. I'm using Mina to deploy, which calls rake db:migrate as part of the deployment. It does not run the migrations, though: it says that all migrations are up to date, yet my create calls can't find the tables, so I assume the migrations have not run on the remote db.
What's the best way to accomplish this? Every other answer I've found involves adding something like an ActiveRecord::Base.establish_connection(db_params) call to the top of every migration and every model. This seems absurd; I have probably 75 migrations at this point. Is there no better way? Is this even the right approach, or could I also use the generated schema file somehow?
You can set up your database credentials in database.yml with something like this:
remote:
  adapter: postgresql
  host: your.remote.host
  database: yourdb
  username: user
  password: pass
  pool: 5
  timeout: 5000
  locale: en_US.UTF8
Then run your migrations like this:
RAILS_ENV=remote rails db:migrate
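If your Rails version supports it (4.1+), the entry can also point at an environment variable instead of committing credentials; REMOTE_DATABASE_URL is a name I'm making up here:
remote:
  url: <%= ENV['REMOTE_DATABASE_URL'] %>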
My goal is to have 2 databases and 2 deployments of Rails on the same server. I want the regular production server using the production database, and I want to be able to deploy to a different web address that uses a different database. That way I can push to the backup first and make sure all the migrations etc. work in the full environment, and then push to the main server.
The issue I seem to run into is that the database.yml file only lists the three standard environments. The Passenger environment will also assume that it's running in production and would migrate the main MySQL database even if I deploy the code to a different directory. What's the best way around this? I was wondering if it is simple or if it involves setting lots of variables in lots of places. Any suggestions would be great!
You can add other environments to database.yml as you see fit.
staging:
  adapter: postgresql
  host: mydb_host
  database: mydb_staging
  etc...
You can copy config/environments/production.rb to config/environments/staging.rb and leave it as is so the two environments are exactly the same, or tweak staging.rb as you see fit.
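In shell terms, that's just:
cp config/environments/production.rb config/environments/staging.rb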
Now you have a staging environment! Use it where appropriate, e.g.:
rake RAILS_ENV=staging db:migrate
I am not a Passenger expert, but I know that my shop has both staging and production instances of apps running on the same server under Passenger, so it can be done. Google can probably instruct you better on configuring that than I can.
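As a starting point, Passenger picks the Rails environment from a directive in the web server config; the exact directive name depends on your Passenger version, something like:
# nginx
passenger_app_env staging;
# Apache (older Passenger versions call this RailsEnv)
PassengerAppEnv staging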
We have inherited a Rails project hosted on Heroku. The problem is that I don't understand how to get a copy of database.yml or the various other config files that have been excluded via the .gitignore file.
Sure, I can look through the history, but that feels like a shot from the hip compared to what's actually on the production server. Right now we have to make changes in the staging environment, which is arduous and less efficient than having a local dev environment where we can really get under the hood. Obviously, having an exact copy of the config is a good place to start.
With other hosts it's easy to get access to all the files on the server using good ol' FTP.
This might be more easily addressed with some sort of git procedure that I am less familiar with.
Heroku stores config variables in the environment (ENV).
heroku config will display the list of these variables.
heroku config:get DATABASE_URL --app YOUR_APP
will return the database URL that Heroku has assigned to your app. From this string you can deduce the parameters necessary to connect to your Heroku application's database; it follows this format:
postgres://username:password@database_ip:port/database_name
This should provide you with all the values you'll need for the Heroku-side database.yml. It's a file that is created for you each time you deploy, and there is nothing in it that can't be derived from the URL above.
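For illustration, with entirely made-up values, a URL of postgres://abcdef:secret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d5fg2hj would translate to a database.yml entry like:
production:
  adapter: postgresql
  host: ec2-1-2-3-4.compute-1.amazonaws.com
  port: 5432
  database: d5fg2hj
  username: abcdef
  password: secret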
Gitignoring the database.yml file is good practice.
It's not something you can do: entries added to .gitignore have been excluded from source control. They never made it into Git.
As for database.yml, you can just create the file at config/database.yml with local settings; it should look something like:
development:
  adapter: postgresql
  host: 127.0.0.1
  username: somelocaluser
  database: somelocaldb
It's safe to say, though, that if the file is not in Git then it's not on Heroku, since Git is the only way to get files onto Heroku. Any config is likely to be in environment variables; heroku config will show you that.
In an effort to get our tests to run faster, I decided to use parallel_tests: https://github.com/grosser/parallel_tests
However, as is usually the case, this didn't go without issues: the tests were getting killed before finishing.
...killed.
Read on to see how I went about solving the issue.
After much troubleshooting, I had to understand exactly what was happening, or at least how parallel_tests was trying to run my tests.
parallel_tests creates a database per core. So if I have 4 cores available, it creates 4 test DBs. All the tests are then distributed evenly among the cores, and each core runs its share against its own DB.
To begin with, I wasn't using the right commands to set up the necessary DBs. Below is the order that worked for me.
Given your database.yml looks like this:
development:
  adapter: mysql2
  encoding: utf8
  database: homestars_dev
  username: root
  password:

test: &test
  adapter: mysql2
  encoding: utf8
  database: homestars_test
  username: root
  password:
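One thing this snippet doesn't show: the parallel_tests README has you append the process number to the test database name, so that each process gets its own database (homestars_test, homestars_test2, and so on). If your setup needs it, the test entry becomes:
test: &test
  adapter: mysql2
  encoding: utf8
  database: homestars_test<%= ENV['TEST_ENV_NUMBER'] %>
  username: root
  password: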
Create the DBs from database.yml and load the schema/structure into the dev DB:
rake db:setup
Create the test DBs, one per available core:
rake parallel:create
Copy the schema from the dev DB into each newly created test DB:
rake parallel:prepare
Seed each test DB:
rake parallel:seed
Run the tests:
rake parallel:rspec
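If you want to pin the number of processes rather than use every core, the gem also ships CLI wrappers; as far as I can tell, -n sets the process count:
parallel_rspec -n 4 spec/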
With this in place, parallel_tests started doing its thing correctly! However, there was still an issue causing tests to fail.
I had implemented a GC delay using a method similar to the one described at http://ariejan.net/2011/09/24/rspec-speed-up-by-tweaking-ruby-garbage-collection/
I had it tweaked to run every 10s.
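For reference, that style of deferred-GC helper looks roughly like this (a sketch adapted from the linked post, with the threshold set to the 10s tweak mentioned above):
# spec/support/deferred_garbage_collection.rb
class DeferredGarbageCollection
  # Run GC at most once per threshold interval (in seconds)
  DEFERRED_GC_THRESHOLD = (ENV['DEFER_GC'] || 10.0).to_f

  @@last_gc_run = Time.now

  def self.start
    GC.disable if DEFERRED_GC_THRESHOLD > 0
  end

  def self.reconsider
    if DEFERRED_GC_THRESHOLD > 0 && Time.now - @@last_gc_run >= DEFERRED_GC_THRESHOLD
      GC.enable
      GC.start
      GC.disable
      @@last_gc_run = Time.now
    end
  end
end

# Wired into RSpec:
RSpec.configure do |config|
  config.before(:all) { DeferredGarbageCollection.start }
  config.after(:all)  { DeferredGarbageCollection.reconsider }
end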
For some reason, 10s was about the time it took for each core to kill the tests! So I removed the lines that enable that GC hack. (With the hack removed, GC should still run after every test.)
And for some reason, that did it! Although I still cannot explain why, I'm happy to have found a solution and to understand the problem better.
Takeaway lessons: make sure your DBs are correctly set up before running the tests, and don't use GC hacks to delay collection (at least until we find out why doing so kills the processes).
Hope that helps somebody and if you have any further info, please chime in!
I have some ENV variables that are sourced for the deploy user. (Similar to what Heroku recommends, but without using Heroku.)
My Rails app depends on these for certain functions, for example, in application.rb:
config.action_mailer.default_url_options = { host: ENV['MY_HOST'] }
This is necessary because we have several staging hosts. Each host has MY_HOST set to its correct hostname in .bashrc, like so:
export MY_HOST="staging3.example.com"
This allows us to use only one Rails staging environment while still having each host's correct hostname used for testing, sending email, etc., since the value can be set per machine.
Unfortunately, it looks like when I restart Unicorn using USR2 it doesn't pick up changes to those variables. Doing a hard stop and start will correctly load any changes.
I'm using preload_app = true, which I'm guessing has something to do with it. Any ideas?
In the end I moved away from this approach altogether in favor of loading my app config from an app_config.yml file. Ryan Bates covers this approach in Railscast #226.
The only thing I did differently is that I load a shared app_config.yml on each server I use. Since I'm using Capistrano, I just symlink the file on deploy.
For example, on staging2 my shared/config/app_config.yml looks like this:
staging:
  host: "staging2.example.com"
... whereas on staging3 it looks like this:
staging:
  host: "staging3.example.com"
Now my application.rb has this line instead:
config.action_mailer.default_url_options = { host: APP_CONFIG[:host] }
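For reference, APP_CONFIG is defined in a small initializer along these lines (a sketch of the Railscast approach; symbolize_keys is assumed because of the :host symbol lookup above):
# config/initializers/load_app_config.rb
APP_CONFIG = YAML.load_file(Rails.root.join("config", "app_config.yml"))[Rails.env].symbolize_keys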
I removed the actual config/app_config.yml from Git so that it's not included on deploy. (I moved it to config/app_config.yml.template.) Then on deploy I use a Capistrano task to symlink shared/config/app_config.yml to config/app_config.yml:
namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
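The task also needs hooking into the deploy flow; with Capistrano 2, something along these lines works:
after "deploy:finalize_update", "deploy:symlink_app_config"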
This strategy has these benefits over using ENV vars:
Deployed to all nodes via Capistrano
We can do conditional hosts by simply changing the file on the appropriate server
Unicorn picks up changes on USR2, since everything's done inside of Rails
Everything's kept in one place, and the environment isn't affected by variables outside the codebase