Access the model in production.rb (Rails 3)

I have a model called SystemSettings with a name and a value. It is where I store the majority of the configuration for my app. I need to be able to access it in production.rb inside my Rails 3.2 app. How would you go about doing this?

Since Rails config files such as production.rb are read before ActiveRecord is initialised, you would need to use a load callback:
Rails.application.configure do
  ActiveSupport.on_load(:active_record) do
    # This block runs once ActiveRecord has been loaded. `config` is not in
    # scope inside it, so go through Rails.application.config explicitly.
    # find_by_name is the Rails 3.2 dynamic finder (find_by(name: ...) is Rails 4+).
    Rails.application.config.custom_variable = SystemSettings.find_by_name("Foo").value
  end
end
But since the callback only executes later, once ActiveRecord is ready, you can't use the value immediately during boot; code that runs earlier will not see it, which is why your approach may be flawed and prone to ordering problems.
Unless you are building something like a CMS, where you need to provide a user interface for editing system settings, you are better off using environment variables. They are immediately available from memory and do not have the overhead of a database query.
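For example, a minimal sketch assuming a hypothetical CUSTOM_VARIABLE environment variable:
# config/environments/production.rb
Rails.application.configure do
  # Available as soon as this file is read; no database or load callback needed.
  config.custom_variable = ENV["CUSTOM_VARIABLE"]
end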
http://guides.rubyonrails.org/v3.2.9/initialization.html

Related

Best way to change the behavior depending on the environment variables in Rails

I sometimes face situations where it would be better to handle the flow differently depending on the environment (e.g. disabling some features).
For example:
In production you send an email if the process succeeded, but in the test and development environments you simply disable it.
For now, I just use an if-clause:
ActiveRecord::Base.transaction do
  itemable = create_invoiceitemable(each_line)
  next unless itemable.present?
  create_invoiceitem(invoice, itemable, each_line[:id])
end
ReceiptMailer.receipt(invoice[:uuid]).deliver_later if Rails.env.production?
Any ideas for better way to handle this?
It is impossible to give an exact answer, since this is heavily opinion-based, but you might find it useful to stub such methods:
# config/initializers/stubs.rb
unless Rails.env.production?
  # Intercept the class-level ReceiptMailer.receipt call outside production.
  ReceiptMailer.singleton_class.prepend(Module.new do
    def receipt(*args)
      Rails.logger.info "ReceiptMailer#receipt called with #{args.inspect}"
      Hashie::Mash.new(deliver_later: nil) # responds to deliver_later, so callers keep working
    end
  end)
end
Instead of relying on the name of the environment to check whether something should be activated, why not use an environment variable? You can set its value in the environment-specific file and check that value instead.
This way, if you deploy your app on Heroku for example, you can enable or disable the feature without touching the code or re-deploying every time, since these values are all available from the interface or the command line.
I personally think this is a good approach. There might be other good approaches as well.
You can use gems like dotenv to accomplish this. There might be other gems out there too.
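For example, a minimal sketch assuming a hypothetical SEND_RECEIPT_EMAILS flag loaded via dotenv:
# .env (read by the dotenv gem)
# SEND_RECEIPT_EMAILS=false

# anywhere in the app: the flag, not the environment name, decides the behaviour
if ENV["SEND_RECEIPT_EMAILS"] == "true"
  ReceiptMailer.receipt(invoice[:uuid]).deliver_later
end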
Hope this will help :)

How to disable class cache for part of Rails application

I am developing a Rails app for network automation. Part of the app consists of the logic that runs operations; the other part is the operations themselves. An operation is simply a Ruby class that performs several commands on a network device (router, switch etc.).
Right now, operations are simply part of the Rails app repo. But in order to make the development process more agile, I would like to decouple the app and the operations. I would have 2 repos - one for the app and one for the operations. App deploys would follow the standard procedure, but operations would sync every time something is pushed to master. What is more important, I don't want to restart the app after an operations repo update.
So my question is:
How do I exclude several classes (or namespaces) from being cached in a production Rails app - I mean, every time I call such a class it would be re-read from disk. What could be the potential dangers of doing so?
Some code example:
# Example operation - I would like to add or modify such classes without restarting the app
class FooOperation < BaseOperation
  def perform(host)
    conn = new_connection(host) # method from BaseOperation
    result = conn.execute("foo")
    if result =~ /Error/
      # retry, it's a known bug in device foo
      conn.execute("foo")
    else
      conn.exit
      return success # method from BaseOperation
    end
  end
end
# somewhere in the admin panel I would do this:
o = Operation.create(name: "Foo", class_name: "FooOperation")
o.id # => 123 # for the next example
# Ruby worker which actually runs an operation
class OperationWorker
  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # here, every time I load this, I want Ruby to look up the implementation on the filesystem, never cache it
    klass = operation.class_name.constantize
    klass.new.perform(host)
  end
end
I think you have quite a misunderstanding about how Ruby code loading and interpretation work!
The fact that Rails reloads classes at development time is kind of a "hack" to let you iterate on the code while the server has already loaded, parsed and executed parts of your application.
In order to do so, it has to implement quite some magic to unload your code and reload parts of it on change.
So if you want up-to-date code when executing an "operation", you are probably best off spawning a new process. That guarantees your new code is read and parsed properly and executed with a blank state.
Another thing you can do is use load instead of require, because load will actually re-read the source on subsequent calls. Keep in mind that subsequent calls to load just add to the code already present in the Ruby VM, so you need to make sure that every change is compatible with the already loaded code.
This could be circumvented by some clever instance_eval tricks, but I'm not sure that is what you want...
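A minimal sketch of the load-based approach, assuming the operations repo is checked out under a hypothetical operations/ directory and file names follow the usual underscore convention:
class OperationWorker
  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # Drop the stale constant so the fresh definition from disk wins, then re-read the file.
    Object.send(:remove_const, operation.class_name) if Object.const_defined?(operation.class_name)
    load Rails.root.join("operations", "#{operation.class_name.underscore}.rb").to_s
    operation.class_name.constantize.new.perform(host)
  end
end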

How to initialize a logger in rails 3?

I have read the documentation in this guide and in the Logger class. I wish to create a logger which writes logging information each day and, after let's say a week, automatically deletes the oldest log information each time.
logfile = File.open(Rails.root.join('log', "#{Date.today << 1}_custom.log"), 'a') # create log file
logfile.sync = true # automatically flushes data to file
CUSTOM_LOGGER = CustomLogger.new(logfile, 'daily') # constant accessible anywhere
Plus, I wish to create a custom logging, so for instance something that looks like this (format):
class MyLogger < Logger
  def format_message(severity, timestamp, progname, msg)
    "#{timestamp} : #{msg}\n"
  end
end
So basically, I would like a better idea of where to place everything and under which directory. For instance, where should MyLogger logically be placed? (Anywhere? A helper? Or under app/config/?)
Is that a valid way to implement this?
I made it work by putting everything in config/initializers, in a file named my_logger.rb. I'm still stuck on deleting/managing the log files.
Does the server handle that part with log rotation (I know there's logrotate on the Linux side)? Or can Rails handle that internally?
Where should MyLogger be logically placed?
Probably put it under /lib. You can then require it from the initializer where you set the custom logger.
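A minimal sketch of that layout (the file names and constant are just an assumption):
# lib/my_logger.rb
require 'logger'
class MyLogger < Logger
  def format_message(severity, timestamp, progname, msg)
    "#{timestamp} : #{msg}\n"
  end
end

# config/initializers/my_logger.rb
require Rails.root.join('lib', 'my_logger')
CUSTOM_LOGGER = MyLogger.new(Rails.root.join('log', 'custom.log'), 'daily')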
How can you periodically delete the oldest logging information?
There are countless ways you can do this, and the choice will depend on your constraints. You haven't said much about your constraints, so it's going to be hard to give you the just-right answer. E.g. you could clean up old logs every time you add a new log entry, you could run a cron job, you could install some non-Rails software that does log rotation and other log maintenance, you could use Papertrail, or, if you use Heroku, you could look at https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes.
Remember Rails is designed more to handle requests and respond to them in the context of that request than to run maintenance outside of the context of receiving a request. You could do maintenance as a side effect of every format_message call to MyLogger: check for the oldest log entries and, if you find any older than a week, delete them. You haven't given a constraint why you can't do this in-process, and if you're prototyping something early and portable, this would get you going fast.
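If count-based rotation is enough, Ruby's own Logger can already cap how many old files it keeps; a sketch (the path and limits are just an assumption):
# Keep at most 7 rotated files of ~10 MB each; Logger deletes anything beyond that on rotation.
CUSTOM_LOGGER = MyLogger.new(Rails.root.join('log', 'custom.log'), 7, 10 * 1024 * 1024)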

Ruby on rails with multiple production settings

How can we run a Ruby on Rails application with different database configurations?
In detail: I want to run multiple instances of a Rails app in production, with a different database config for each. How is that possible?
I think you can duplicate the config in database.yml into different environments, like prod1, prod2 ... and then set RAILS_ENV environment variable to match before starting up each respective server...
You can duplicate your database.yml file as DGM mentioned. However, the right way to do this would be to use a configuration management solution like Chef.
If you look at the guide for setting up a Rails stack, it covers a setup with 2 front-end web servers + 1 back-end DB server, which would include your case of duplicating the database.yml file.
If you are able to control and configure each Rails instance, and you can afford wasting resources because of them being on standby, save yourself some trouble and just change the database.yml to modify the database connection used on every instance. If you are concerned about performance this approach won't cut it.
For models bound to a single unique table on only one database you can call establish_connection inside the model:
establish_connection "database_name_#{RAILS_ENV}"
As described here: http://apidock.com/rails/ActiveRecord/Base/establish_connection/class
You will have some models using tables from one database and other different models using tables from other databases.
If you have identical tables, common on different databases, and shared by a single model, ActiveRecord won't help you. Back in 2009 I required this on a project I was working on, using Rails 2.3.8. I had a database for each customer, and I named the databases with their IDs. So I created a method to change the connection inside ApplicationController:
def change_database database_id = params[:company_id]
  return if database_id.blank?
  configuration = ActiveRecord::Base.connection.instance_eval { @config }.clone
  configuration[:database] = "database_name_#{database_id}_#{RAILS_ENV}"
  MultipleDatabaseModel.establish_connection configuration
end
And added that method as a before_filter to all controllers:
before_filter :change_database
So for each action of each controller, when params[:company_id] is defined and set, it will change the database to the correct one.
To handle migrations I extended ActiveRecord::Migration, with a method that looks for all the customers and iterates a block with each ID:
class ActiveRecord::Migration
  def self.using_databases *args
    configuration = ActiveRecord::Base.connection.instance_eval { @config }
    former_database = configuration[:database]
    companies = args.blank? ? Company.all : Company.find(args)
    companies.each do |company|
      configuration[:database] = "database_name_#{company[:id]}_#{RAILS_ENV}"
      ActiveRecord::Base.establish_connection configuration
      yield self
    end
    configuration[:database] = former_database
    ActiveRecord::Base.establish_connection configuration
  end
end
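A hypothetical migration using that helper could then look like this (table and column names are purely illustrative):
class AddUuidToInvoices < ActiveRecord::Migration
  def self.up
    # Run the same schema change against every customer database.
    using_databases do
      add_column :invoices, :uuid, :string
    end
  end
  def self.down
    using_databases do
      remove_column :invoices, :uuid
    end
  end
end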
Note that by doing this, it is impossible to make queries within the same action against two different databases. You can call change_database again, but it will get nasty when you try to use methods that execute queries on objects no longer linked to the correct database. Also, it is obvious you won't be able to join tables that belong to different databases.
To handle this properly, ActiveRecord would have to be considerably extended. There should be a plugin by now to help you with this issue. A quick search gave me this one:
DB-Charmer: http://kovyrin.github.com/db-charmer/
I'm willing to try it. Let me know what works for you.
Well, we have to create multiple environments in your application:
Create config/environments/production1.rb, which will be the same as config/environments/production.rb.
Then edit database.yml with the production1 settings and you are done.
Start the server using rails s -e production1
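A minimal sketch of the database.yml side, assuming a hypothetical second production database:
# config/database.yml
production:
  adapter: mysql2
  database: myapp_production

production1:
  adapter: mysql2
  database: myapp_production1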

Model-specific SQL logging in rails

In my rails application, I have a background process runner, model name Worker, that checks for new tasks to run every 10 seconds. This check generates two SQL queries each time - one to look for new jobs, one to delete old completed ones.
The problem with this is that the main log file gets spammed with each of those queries.
Can I direct the SQL queries spawned by the Worker model into a separate log file, or at least silence them? Overwriting Worker.logger does not work - it redirects only the messages that explicitly call logger.debug("something").
The simplest and most idiomatic solution is to silence the logger around the noisy calls:
logger.silence do
  do_something
end
See Logger#silence
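Applied to the Worker's polling, that might look like the following sketch (the method names are just for illustration):
class Worker < ActiveRecord::Base
  def check_for_work
    # Raise the log level for the duration of the block so the two polling queries are not logged.
    ActiveRecord::Base.logger.silence do
      find_new_jobs
      delete_completed_jobs
    end
  end
end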
Queries are logged at the adapter level, as I demonstrated here:
How do I get the last SQL query performed by ActiveRecord in Ruby on Rails?
You can't change that behavior unless you tweak the adapter with some really, really horrible hacks.
class Worker < ActiveRecord::Base
  def run
    # Raise the log level while the jobs run so the SQL debug lines are suppressed, then restore it.
    old_level, self.class.logger.level = self.class.logger.level, Logger::WARN
    run_outstanding_jobs
    remove_obsolete_jobs
  ensure
    self.class.logger.level = old_level
  end
end
This is a fairly familiar idiom. I've seen it many times, in different situations. Of course, if you didn't know that ActiveRecord::Base.logger can be changed like that, it would have been hard to guess.
One caveat of this solution: this changes the logger level for all of ActiveRecord, ActionController, ActionView, ActionMailer and ActiveResource. This is because there is a single Logger instance shared by all modules.
