I have two databases I need to work with: one Oracle on a remote box, and one MySQL on my local box. The Oracle DB only has SOME of the data I need, so I set up my models to use the MySQL database by default, and for the few models that needed Oracle, I did:
establish_connection "oracle_database"
Which worked just fine.
Unfortunately, I've just been informed that I can't rely on the remote Oracle database being available. My new requirement is that my system needs to use the Oracle database if it's available, and if it isn't, fall back to a local database with the same sorts of tables/columns/etc.
This seems like something Rails wasn't actually built to support. Am I going to be stuck manually editing my database.yml file so that "oracle_database" sometimes points at the remote DB and sometimes at the local one?
You could use begin/rescue to catch an exception while establishing the connection to the remote server, like this:
begin
  establish_connection "oracle_database"
rescue Exception => e
  logger.warn "Connecting to local database due to exception #{e.to_s}"
  establish_connection "local_database"
end
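One caveat: establish_connection on its own normally just registers the connection spec; Rails connects lazily on first use, so the rescue above may never fire. A sketch that forces a connection attempt up front so failures surface immediately (connection is the standard ActiveRecord class method):
begin
  establish_connection "oracle_database"
  connection # forces a real connection attempt instead of waiting for the first query
rescue Exception => e
  logger.warn "Falling back to local database due to exception #{e.to_s}"
  establish_connection "local_database"
end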
I think you should look at https://github.com/mixonic/ShardTheLove
This isn't as robust as I would like, but for now I found I could work around it with config variables:
if APP_CONFIG["use_remote"]
  establish_connection "oracle_database"
else
  establish_connection "local_database"
end
And then I can set the config var to "true" or "false" in my config file. I'd RATHER it change automatically based on whether the database is available, but for now this is working for me...
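For completeness, a minimal way to load such a flag from YAML (APP_CONFIG and the file name are assumptions here, not stock Rails):
# config/initializers/load_app_config.rb
APP_CONFIG = YAML.load_file(Rails.root.join("config", "app_config.yml"))[Rails.env]

# config/app_config.yml
# development:
#   use_remote: false
# production:
#   use_remote: true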
Related
In an app I'm working on, the production database is Oracle and the development db is sqlite.
Most of the app code is high-level ActiveRecord, but there is some custom SQL for reporting. This SQL varies depending on the back-end DB.
Rather than extending the ORM and adapters, or writing if statements throughout the application, is it possible to duck type the connection such that something like the below code is possible:
if Archive.connection.supports_function?("EXTRACT")
  Archive.select("extract(year from created_at)")...
else
  Archive.select("strftime('%Y', created_at)")...
end
I might be completely misunderstanding your requirement but you can check the adapter and change the code used for a method easily enough.
If you want to add a new extract method to activerecord that behaves in two ways for example:
# config/initializers/active_record_extract.rb
class ActiveRecord::Base
  def self.extract_agnostic(oracle_column, default_column)
    if ActiveRecord::Base.connection.instance_values["config"][:adapter].include?('oracle')
      return self.select("extract(#{oracle_column} from created_at)")
    end
    self.select("strftime('#{default_column}', created_at)")
  end
end
# Usage:
Archive.extract_agnostic("year", "%Y")
Obviously this isn't perfect but should get you started?
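As an aside, the public adapter_name method is a slightly cleaner way to check which adapter is active than digging through instance_values (just a sketch of the check itself):
if ActiveRecord::Base.connection.adapter_name =~ /oracle/i
  # build the Oracle-flavoured SQL
else
  # build the SQLite-flavoured SQL
end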
I don't think Rails can tell you whether your adapter understands a command, but you could always wrap the command you want in a begin/rescue. Note that relations are lazy, so the query has to actually execute inside the begin block for the rescue to catch anything:
begin
  self.select("extract(year from created_at)").to_a # force the query to run here
rescue # the above failed, try something else
  self.select("strftime('%Y', created_at)").to_a
end
Why can't you run an Oracle database for your development environment..? Scratch that - I don't want to know.
Use create_function to plug an extract() method into your SQLite:
http://rdoc.info/github/luislavena/sqlite3-ruby/SQLite3/Database#create_function-instance_method
(And good luck doing THAT in Oracle!;)
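If you go that route, here is a minimal sketch with the sqlite3 gem. Note that SQLite's parser won't accept the EXTRACT(year FROM col) keyword syntax, so this registers a hypothetical extract_year(col) function instead:
require "sqlite3"
require "time"

db = SQLite3::Database.new("db/development.sqlite3")

# Register a one-argument SQL function; the name extract_year is made up here.
db.create_function("extract_year", 1) do |func, timestamp|
  func.result = Time.parse(timestamp.to_s).year
end

db.execute("SELECT extract_year('2009-06-15 10:00:00')") # => [[2009]]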
How can we run a Ruby on Rails application with different database configurations?
In detail: I want to run multiple instances of a Rails app in production, each with a different database config. How is this possible?
I think you can duplicate the config in database.yml into different environments, like prod1, prod2, ... and then set the RAILS_ENV environment variable to match before starting up each respective server...
You can duplicate your database.yml file as DGM mentioned. However, the right way to do this would be to use a configuration management solution like Chef.
If you look at the guide for setting up a Rails stack, it includes a setup with 2 front-end web servers + 1 back-end DB server, which would cover your case of duplicating the database.yml file.
If you are able to control and configure each Rails instance, and you can afford wasting resources because of them being on standby, save yourself some trouble and just change the database.yml to modify the database connection used on every instance. If you are concerned about performance this approach won't cut it.
For models bound to a single unique table on only one database you can call establish_connection inside the model:
establish_connection "database_name_#{RAILS_ENV}"
As described here: http://apidock.com/rails/ActiveRecord/Base/establish_connection/class
You will have some models using tables from one database and other different models using tables from other databases.
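For example (model and database names are placeholders):
# app/models/legacy_item.rb -- bound to the secondary database
class LegacyItem < ActiveRecord::Base
  establish_connection "database_name_#{RAILS_ENV}"
end

# Models that don't call establish_connection keep using the default database.
class Item < ActiveRecord::Base
end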
If you have identical tables, common on different databases, and shared by a single model, ActiveRecord won't help you. Back in 2009 I required this on a project I was working on, using Rails 2.3.8. I had a database for each customer, and I named the databases with their IDs. So I created a method to change the connection inside ApplicationController:
def change_database database_id = params[:company_id]
  return if database_id.blank?
  configuration = ActiveRecord::Base.connection.instance_eval { @config }.clone
  configuration[:database] = "database_name_#{database_id}_#{RAILS_ENV}"
  MultipleDatabaseModel.establish_connection configuration
end
And added that method as a *before_filter* to all controllers:
before_filter :change_database
So for each action of each controller, when params[:company_id] is defined and set, it will change the database to the correct one.
To handle migrations I extended ActiveRecord::Migration, with a method that looks for all the customers and iterates a block with each ID:
class ActiveRecord::Migration
  def self.using_databases *args
    configuration = ActiveRecord::Base.connection.instance_eval { @config }
    former_database = configuration[:database]
    companies = args.blank? ? Company.all : Company.find(args)
    companies.each do |company|
      configuration[:database] = "database_name_#{company[:id]}_#{RAILS_ENV}"
      ActiveRecord::Base.establish_connection configuration
      yield self
    end
    configuration[:database] = former_database
    ActiveRecord::Base.establish_connection configuration
  end
end
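Usage inside a migration might then look like this (a sketch built on the method above; the table and column are placeholders):
class AddStatusToOrders < ActiveRecord::Migration
  def self.up
    using_databases do
      add_column :orders, :status, :string
    end
  end

  def self.down
    using_databases do
      remove_column :orders, :status
    end
  end
end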
Note that by doing this, it will be impossible to make queries within the same action against two different databases. You can call *change_database* again, but it will get nasty when you try to use methods that execute queries on objects no longer linked to the correct database. Also, obviously, you won't be able to join tables that belong to different databases.
To handle this properly, ActiveRecord would have to be considerably extended. There should be a plugin by now to help with this issue; a quick search gave me this one:
DB-Charmer: http://kovyrin.github.com/db-charmer/
I'm willing to try it. Let me know what works for you.
Well, we have to create multiple environments in your application:
create config/environments/production1.rb, which will be the same as config/environments/production.rb
then edit database.yml with the production1 settings and you are done.
start the server using rails s -e production1
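A matching database.yml entry might look like this (adapter and names are placeholders):
# config/database.yml
production1:
  adapter: mysql2
  database: myapp_production1
  username: myapp
  password: secret
  host: localhost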
I'm developing an application layer on top of a rails app developed by someone else.
His application uses a module called request_logger to write to a table, which worked fine under ruby 1.8/Rails 2 with the mysql gem, but in my ruby 1.9/Rails 3/mysql2 environment, ActiveRecord falls over, suggesting that the generated query is invalid.
It obviously is: all MySQL relation names are wrapped in double quotes instead of backticks.
The call to ActiveRecord itself just sets a bunch of attributes with
log.attributes = {
  :user_id => user_id,
  :controller => controller,
  # ...etc
}
and then calls
log.save
So I'm leaning towards it not being a dodgy invocation. Any suggestions?
mysql2 works fine for a lot of people, but it unashamedly sacrifices conformance to the MySQL C API for performance in the common tasks. Perhaps, if request_logger is low-level enough, it's expecting calls to exist which don't.
It's trivial to switch back to using mysql - give it a try, and if it works, stick with it. Remember to change both your Gemfile and your config/database.yml settings.
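Sketched out, the two changes would be (version constraints omitted, names as in a stock app):
# Gemfile
gem "mysql" # was: gem "mysql2"

# config/database.yml -- switch the adapter back in each environment:
# adapter: mysql (was: adapter: mysql2)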
It turned out to be what seems to be a change in behaviour between Rails 2 and 3 (we have the same setup working fine in Rails 2).
We use database.yml to specify an (empty) "master" database and then feed in our clients with shards+octopus.
The master db is sqlite for simplicity, and it seems that ActiveRecord was feeding requests formatted for SQLite to the mysql2 shards, regardless of their adapter type.
I need to run an Oracle script after connecting to the Oracle database using ActiveRecord.
I know about initializers, but they run only at application start. I need a hook where I can put code that runs after every new database connection is established.
This is needed to initialize some Oracle environment variables shared with other applications that use the same legacy database.
Any ideas?
Thanks in advance.
I found the solution:
Create the file /config/initializers/oracle.rb and put this code into it:
ActiveRecord::ConnectionAdapters::ConnectionPool.class_eval do
  def new_connection_with_initialization
    result = new_connection_without_initialization
    result.execute('begin Base_Pck.ConfigSession; end;')
    result
  end

  alias_method_chain :new_connection, :initialization
end
alias_method_chain lets you change a method (new_connection) without overriding it, by extending it.
Then you only need to change the script in the result.execute call.
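For reference, alias_method_chain is roughly shorthand for this pair of aliases, which is all the renaming it does under the hood:
alias_method :new_connection_without_initialization, :new_connection
alias_method :new_connection, :new_connection_with_initialization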
I have a migration that runs an SQL script to create a new Postgres schema. When creating a new database, Postgres by default creates a schema called 'public', which is the main schema we use. The migration that creates the new schema seems to work fine. The problem occurs after the migration has run: when Rails tries to update the 'schema_info' table it relies on, it says the table does not exist, as if it is looking for it in the new schema rather than in the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to do. More information is needed: change the names if you have to, but post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used, or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list, or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for dev and test: it would drop the one shared database, both schemas included, each time you run the unit tests!)
I actually started writing patches. The first one was for the index methods in the adapter, which didn't respect the search_path and produced duplicated indexes under some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done and didn't have the extra time needed to fix the problems I had with schemas.
I'm not sure I understand exactly what you're asking, but rake will be expecting to update the Rails schema version in the schema_info table. Check your database.yml config file; this is where rake will look to find the table to update.
Is it possible that you are migrating to a new Postgres schema while rake is still pointing at the old one? I'm not sure then that a standard Rails migration is what you need; it might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and database config, it will expect to update that environment at the end of the migration.
So you'll probably need to do this from outside the Rails stack, as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
You can use pg_power. It provides an additional migration DSL for creating PostgreSQL schemas, among other things.
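A schema migration with pg_power looks roughly like this (create_schema is the helper its README describes; double-check against the current docs):
class CreateDemographySchema < ActiveRecord::Migration
  def change
    create_schema "demography"
  end
end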