Simple way to run rails migrations for a remote database

I have a Rails app and want to split off my Postgres database to a remote, managed one rather than the standard local one. It seemed easy enough to configure this; however, running my migrations against the new db is proving more difficult. I'm using Mina to deploy, which calls rake db:migrate as part of the deployment. It does not run the migrations, however: it says that all migrations are up to date, and my create calls can't find the tables, so I assume the migrations have not run on the remote db.
What's the best way to accomplish this? Every other answer I've found involves adding an ActiveRecord::Base.establish_connection(db_params) call to the top of every migration and every model. This seems absurd; I have probably 75 migrations at this point. Is there no better way? Is this even the right approach, or could I also use the generated schema file somehow?

You can set up your database credentials in database.yml with something like this:
remote:
  adapter: postgresql
  host: your.remote.host
  database: yourdb
  username: user
  password: pass
  pool: 5
  timeout: 5000
  locale: en_US.UTF8
Then run your migrations like this:
RAILS_ENV=remote rails db:migrate
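Alternatively, for a one-off run without adding a new environment, recent versions of Rails honour the DATABASE_URL environment variable, which overrides the connection settings in database.yml for the current environment. A minimal sketch, with placeholder credentials and host:

DATABASE_URL=postgres://user:pass@your.remote.host:5432/yourdb bundle exec rails db:migrate

The database.yml approach above is the better fit if you will be targeting the remote database regularly.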

Related

How to open a rails console to access multiple databases in rails 6?

I have multiple databases in my project and I want to open a rails console that can access both databases. Currently I can only fetch data from the default database.
In other words, how do I open a console for a specific database?
From the official guide:
bin/rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it (and figures out the command line parameters to give to it, too!). It supports MySQL (including MariaDB), PostgreSQL, and SQLite3.
You can also use the alias "db" to invoke the dbconsole: bin/rails db.
If you are using multiple databases, bin/rails dbconsole will
connect to the primary database by default. You can specify which
database to connect to using --database or --db:
bin/rails dbconsole --database=animals
So the command is
rails db --db=db_name_from_database_yml
For example, if you have this in your database.yml:
production:
  my_primary:
    adapter: postgresql
    database: some_db_name
In this case your command will be
bundle exec rails db --db=my_primary -e production
Why don't you have a read through the documentation: https://guides.rubyonrails.org/active_record_multiple_databases.html
You should be able to find just what you need. connects_to does the trick of switching databases, automatically or manually.
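As a sketch of what connects_to looks like in practice (Rails 6+; the :animals key is hypothetical and must match an entry under the current environment in your database.yml):

class AnimalsRecord < ApplicationRecord
  self.abstract_class = true
  # Route both reads and writes for this subtree of models to the "animals" database
  connects_to database: { writing: :animals, reading: :animals }
end

Models that belong in that database then inherit from AnimalsRecord instead of ApplicationRecord.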
Or
https://api.rubyonrails.org/classes/ActiveRecord/ConnectionHandling.html#method-i-connected_to
ActiveRecord::Base.connected_to(database: :mydb) do
  # Do something here...
end

Can Rails use 2 different databases in the production environment?

My goal is to have 2 databases and 2 deployments of Rails on the same server. I want the regular production server using the production database. Then I want to be able to deploy to a different web address that will use a different database. My goal is to be able to push to the backup first and make sure all the migrations etc. work in the full environment, and then push to the main server.
The issue I seem to run into is that the database.yml file only lists the three standard environments. The Passenger environment will also assume that it's running in production and would migrate the main MySQL database even if I deploy the code to a different directory. What's the best way around this? Is it simple, or does it involve setting lots of variables in lots of places? Any suggestions would be great!
You can add other environments to database.yml as you see fit.
staging:
  adapter: postgresql
  host: mydb_host
  database: mydb_staging
  etc...
You can copy config/environments/production.rb to config/environments/staging.rb and leave it as is so the two environments are exactly the same, or tweak staging.rb as you see fit.
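Concretely, that copy is just this, run from the application root:

cp config/environments/production.rb config/environments/staging.rb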
Now you have a staging environment! Use it where appropriate, e.g.:
rake RAILS_ENV=staging db:migrate
I am not a Passenger expert, but I know that my shop has both staging and production instances of apps running on the same server under Passenger, so it can be done. Google can probably instruct you better on configuring that than I can.
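For what it's worth, if the server runs Passenger under Apache, the environment each deployment boots in can be set per virtual host with the RailsEnv directive. A rough sketch, with hypothetical paths and hostname:

<VirtualHost *:80>
  ServerName staging.example.com
  DocumentRoot /var/www/myapp_staging/current/public
  # Boot this deployment in staging instead of Passenger's default (production)
  RailsEnv staging
</VirtualHost>

Under nginx the equivalent is the rails_env directive (or passenger_app_env in newer Passenger versions) in the server block.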

tests getting killed in the middle of a run using parallel_tests

In an effort to get our tests to run faster, I decided to use parallel_tests: https://github.com/grosser/parallel_tests
However, as is usually the case, this didn't go without issues: the tests were getting killed before finishing.
...killed.
Read on to see how I went about solving the issue.
After much troubleshooting, I had to understand exactly what was happening, or at least how parallel_tests was trying to run my tests.
Parallel_tests creates a database per core. So if I have 4 cores available, it creates 4 test dbs. All tests are then distributed evenly among the cores, and each core runs its share against its own db.
To begin with, I wasn't using the right commands to set up the necessary dbs. Below is the order that worked for me.
Given that your database.yml looks like this:
development:
  adapter: mysql2
  encoding: utf8
  database: homestars_dev
  username: root
  password:

test: &test
  adapter: mysql2
  encoding: utf8
  database: homestars_test
  username: root
  password:
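One detail from the parallel_tests README worth noting: the gem tells the per-core databases apart by appending TEST_ENV_NUMBER to the database name, so the test entry normally embeds it with ERB (database name taken from the example above):

test: &test
  adapter: mysql2
  encoding: utf8
  database: homestars_test<%= ENV['TEST_ENV_NUMBER'] %>
  username: root
  password:

The first process sees an empty TEST_ENV_NUMBER (so it uses homestars_test) and the others get 2, 3, 4, and so on.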
1. Create the dbs in database.yml and load the schema/structure into the dev db:
rake db:setup
2. Create test dbs based on the number of cores available:
rake parallel:create
3. Copy the schema from the dev db into each newly created test db:
rake parallel:prepare
4. Seed each test db:
rake parallel:seed
5. Run the tests:
rake parallel:rspec
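By default the gem uses all the cores it can find; if you want to pin the number of processes, the rake tasks accept it as an argument (the 4 below is just an example, and the task names follow the ones used above):

rake parallel:create[4]
rake parallel:prepare[4]
rake parallel:rspec[4]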
With this in place, parallel_tests started doing its thing correctly! However, there was still an issue that was causing tests to fail.
I had implemented GC delay using a method similar to http://ariejan.net/2011/09/24/rspec-speed-up-by-tweaking-ruby-garbage-collection/
I had it tweaked to run every 10s.
For some reason, 10s was about the time it took for each core's run to get killed! So I went and removed the lines that enable that GC hack (by doing that, GC still runs after every test).
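For context, the hook in question is a pair of RSpec configure lines along these lines (reconstructed from memory of the linked article, so treat the class and method names as approximate):

# spec_helper.rb, inside RSpec.configure do |config| ... end
config.before(:all) { DeferredGarbageCollection.start }      # disables GC until a time threshold passes
config.after(:all)  { DeferredGarbageCollection.reconsider } # re-enables and runs GC once it has

Deleting those two hooks restores Ruby's normal GC behaviour during the run.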
And that did it! Although I still cannot explain why, I'm happy to have found a solution and to understand the problem a bit better.
Take-away lessons: make sure your dbs are correctly set up before running the tests, and don't use GC hacks to delay garbage collection (at least until we find out why doing so kills the processes).
Hope that helps somebody, and if you have any further info, please chime in!

Rake aborted during postgresql migration

Rails/PostgreSQL newbie here, having real problems creating PostgreSQL-backed projects in Rails.
Briefly, I am following Ryan's excellent Railscasts episodes, in particular the one on deployment. I create a new project like this:
rails new blog -d postgresql
This generates a database.yml file similar to the following (comments stripped):
development:
  adapter: postgresql
  encoding: unicode
  database: blog_development
  pool: 5
  username: blog
  password:

test:
  adapter: postgresql
  encoding: unicode
  database: blog_test
  pool: 5
  username: blog
  password:

production:
  adapter: postgresql
  encoding: unicode
  database: blog_production
  pool: 5
  username: blog
  password:
Looks good so far.
However, on my development machine, whenever I try to run rake db:create:all, I get a message similar to the following:
rake aborted!
FATAL: role "blog" does not exist
My understanding is that this is because I haven't (or, rather, Rails hasn't) created a user called "blog" when the application was created. Therefore, I need to either:
1. Change the username for all environments to the system "superuser" I chose when I installed Homebrew (which works); or
2. Create a new superuser for each individual project I set up (e.g. a superuser "blog" in this instance).
Now, the problem is this - should I be creating a super user for each individual project I create? If so, how? The reason I ask is that Ryan's Railscasts episodes never actually mention this, so I'm not sure if this is the correct approach to be taking (it seems a bit long-winded), or if there is a problem with my PostgreSQL database installation (as I would have thought that the super user would be automatically set up along with the database when the application is created).
Additionally, so as to make deployment easier, I'd like to keep the overall database.yml file the same in both the development and production environments (although I admit to knowing even less about deployment, so maybe this isn't ideal).
Thanks!
Creating database users is up to you. Rails does not create database users.
Depending on your platform, you can use some kind of database management GUI to make it easier to create users and manage their rights. I use pgAdmin, which came with the Mac OS X installer for PostgreSQL. It is available for other platforms as well.
In development, many people use the same superuser for all projects. In production, I would advise you to have application-specific users with minimal privileges.
I don't exactly understand the last paragraph about the database.yml being the same for dev and prod envs, because that is precisely the point - one file for all environments. You should probably not use the same credentials for development and production.
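For completeness, creating the missing role from the error above is a one-liner, run as a Postgres superuser (the CREATEDB privilege is what lets rake db:create:all create the three databases):

createuser --createdb blog
# or, equivalently, from inside psql:
# CREATE ROLE blog WITH LOGIN CREATEDB;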

Capistrano creating new DB for every deployment

At the moment I have Capistrano deploying to a server which is set up as a development environment.
However, every time I run cap deploy, it doesn't keep the database at all, so every deployment ends up with a fresh, completely empty database. I have to run cap deploy:migrations to set up the DB, but the issue here is that there is an individual DB for each deployment.
I figure I could change database.yml to use a path such as ../../db/development.sqlite3 for the DB, but this would mean I then have to copy that change locally too, and moving my DB out of the project directory on my own laptop would be very inconvenient.
Is there a way to tell Capistrano to use a single DB location for every deployment yet still keep my DB in the same place locally? Setting the server to a production environment isn't an option at this stage, unfortunately. Something like being able to do:
development:
  adapter: sqlite3
  :on local
  database: db/development.sqlite3
  :on server
  database: /webapps/rails/shared/dev.sqlite3
  pool: 5
  timeout: 5000
(At this point it's probably also worth mentioning I'm very much still learning my way around Rails).
Any of your thoughts would be most appreciated, thank you. If the only option is to set the env to production then that will have to do, but if there's a way round it that lets me keep the server as a development server, that would be great.
Jack.
Add a step in Capistrano that runs before any database stuff to create a symbolic link for whatever database file you want, pointing into the shared directory. This is how the log directory is set up for you. Something along the lines of this:
namespace :custom do
  task :symlink, :roles => :app do
    run "ln -nfs #{shared_path}/development.sqlite3 #{release_path}/db/development.sqlite3"
  end
end
after "deploy:create_symlink", "custom:symlink"
I think you're after multiple application environments: one for staging on the server, one for development locally and eventually one for production. For a good run through, try here.
