How can I log an audit trail to the database? - ruby-on-rails

I want to write a logging model and controller to log everything a user does. I was thinking of creating a DB table with the originating IP address, the controller the user called, the view that was displayed, and params.inspect.
Is there a way to get at a variable telling me which controller is currently being used? What is the best way to implement this? Is there a gem that does all of this?
Thank you

If you set the log level in production to :debug, then every request is logged with the information you need. It might even be easier to write a parser for the production debug log.
If you prefer, you can also add a filter to your ApplicationController:
class ApplicationController < ActionController::Base
  before_filter :log_activity

  def log_activity
    if Rails.env == 'production'
      @logger ||= Logger.new("#{Rails.root}/log/activity.log")
      values = [
        request.remote_ip,
        controller_name,
        action_name,
        params.inspect
      ]
      @logger.info values.join(' | ')
    end
  end

  ...
end
This might not be complete if you want more verbose output, but it illustrates the idea.
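If you do want the trail in the database, as the question suggests, the same filter can build a row instead of a log line. A minimal, Rails-free sketch of assembling one entry (the audit_entry helper and its field names are assumptions, not part of the original answer; in a filter you would feed it request.remote_ip, controller_name, action_name and params, then save it via an ActiveRecord model):

```ruby
require 'json'

# Build one audit-trail entry as a plain hash. In a before_filter you would
# persist this with something like ActivityLog.create(entry) or append the
# JSON form to log/activity.log.
def audit_entry(ip, controller, action, params)
  {
    'ip'         => ip,
    'controller' => controller,
    'action'     => action,
    'params'     => params.inspect
  }
end

entry = audit_entry('203.0.113.5', 'posts', 'show', { 'id' => '1' })
puts entry.to_json
```

Storing params.inspect as a string keeps the table schema trivial at the cost of not being queryable; a serialized column would be the queryable alternative.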

Related

How to rescue or prevent rails database connection exception and allow controller action to continue

I am trying to catch database connection issues on a specific request and take a different action when the database is down.
for example:
config/routes.rb
get 'my_route' => 'my_controller#my_action'
app/controllers/my_controller.rb
class MyController < Public::ApplicationController
  def my_action
    begin
      url = database_lookup
    rescue Mysql2::Error => e
      url = fallback_lookup
    end
    redirect_to url
  end

  def database_lookup
    # get info from db
  end

  def fallback_lookup
    # lookup info in redis cache instead
  end
end
This might work in certain situations, however if the database goes down and a new request comes in, active record middleware raises an exception long before ever reaching the controller.
I have been messing with middleware to try to catch the error and do something else, but it's not looking too promising.
What I'm trying:
application.rb
config.middleware.insert_after 'ActionDispatch::RemoteIp', 'DatabaseExceptionHandler'
app/middleware/database_exception_handler.rb
class DatabaseExceptionHandler
  def initialize(app)
    @app = app
  end

  def call(env)
    @status, @headers, @response = @app.call(env)
    [@status, @headers, @response]
  rescue Mysql2::Error => e
    # note: if @app.call raised, these instance variables are all nil here
    [@status, @headers, @response]
  end
end
This is allowing me to catch connection exceptions that are raised when the request runs, but it doesn't help me much. I need to somehow get to a controller action still.
I think a simpler approach would be to skip all the active record connection nonsense for a specific controller action.
It seems silly to force a database connection for something that might not even need the database.
Does anyone have any better ideas than what I've come up with so far?
I think a simpler approach would be to skip all the active record
connection nonsense for a specific controller action.
Not doable, and not even a good idea. Rails does not process the configuration and middleware on a per-request basis, and it would not work with any kind of server that speeds things up by booting Rails in the background.
This is allowing me to catch connection exceptions that are raised
when the request runs, but it doesn't help me much. I need to somehow
get to a controller action still.
This is probably a fool's errand. If Rails bailed during the initialization process, the integrity of the system is probably compromised, and you can't just continue on like it's business as usual.
What you can do is set config.exceptions_app to customize the error pages. And get a less flaky database.
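For reference, pointing exceptions_app at the application's own routes is the usual shape of that customization. A sketch only; the ErrorsController and its route are assumptions, not from the answer:

```ruby
# config/application.rb -- let our own routes serve error responses
config.exceptions_app = self.routes

# config/routes.rb -- keep the action trivial so it cannot itself
# depend on the database that just went down
match '/500' => 'errors#internal_error'
```

The key caveat is the same one raised above: the error action must not touch ActiveRecord, or it will raise again while rendering the error page.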

How do I modify the request object before routing in Rails in a testable way?

So, I have a situation where I need to determine something about a request before it is dispatched to any of the routes. Currently, this is implemented using several constraints that all hit the database, and I want to reduce the database hit to one. Unfortunately, doing it inline in routes.rb doesn't work, because the local variables within routes.rb don't get refreshed between requests; so if I do:
# Database work occurs here, and is then used to create comparator lambdas.
request_determinator = RequestDeterminator.new(request)
constraint(request_determinator.lambda_for(:ninja_requests)) do
# ...
end
constraint(request_determinator.lambda_for(:pirate_requests)) do
# ...
end
This works great on the first request, but then subsequent requests get routed as whatever the first request was. (D'oh.)
My next thought was to write a Rack middleware to add the "determinator" to the env hash, but there are two problems with this: first, it doesn't seem to stick in the hash at all; second, specs don't even go through the Rack middleware, so there's no way to really test it.
Is there a simple mechanism I'm overlooking where I can insert, say, a hook for ActionDispatch to add something to the request, or just to say to Rails routing: "Do this before routing?"
I am using Rails 3.2 and Ruby 1.9.
One way to do this would be to store your determinator on the request's env object (which you have since ActionDispatch::Request is a subclass of Rack::Request):
class RequestDeterminator
  def initialize(request)
    @request = request
  end

  def self.for_request(request)
    request.env[:__determinator] ||= new(request)
  end

  def ninja?
    query_db
    # Verify ninjaness with @request
  end

  def pirate?
    query_db
    # Verify piratacity with @request
  end

  def query_db
    @result ||= begin
      # Some DB lookup here
    end
  end
end

constraint lambda { |req| RequestDeterminator.for_request(req).ninja? } do
  # Routes
end

constraint lambda { |req| RequestDeterminator.for_request(req).pirate? } do
  # Routes
end
That way, you just instantiate a single determinator which caches your DB request across constraint checks.
If you really want to intercept the request, try Rack, as it is the first thing to handle a request in any Rails app. Refer to http://railscasts.com/episodes/151-rack-middleware to understand how Rack works.
Hope it helps.
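The env-hash idea can be exercised without Rails at all, which also answers the testability worry: a Rack middleware is just an object with a call method, so specs can drive it with a bare lambda app. A sketch (the class name and the 'myapp.determinator' key are assumptions):

```ruby
# Memoize a per-request object in the Rack env so everything downstream,
# including routing constraints, reuses the same instance.
class DeterminatorMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    env['myapp.determinator'] ||= 'determined-once-per-request'
    @app.call(env)
  end
end

# Drive it with a plain lambda standing in for the rest of the stack:
app = DeterminatorMiddleware.new(->(env) { [200, {}, [env['myapp.determinator']]] })
status, _headers, body = app.call({})
puts "#{status} #{body.first}"
```

String keys are used here because the Rack spec reserves the env for string-keyed entries, which may be why a symbol key seemed not to "stick" when inspected through other tools.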

Preferred way to use sessions to avoid hitting the database in rails

I am trying to optimise a Rails app under heavy load that currently hits the database on every request. I am trying to do this by saving some info in the session so I don't need to go to the database every time. I'm currently doing something like this:
def set_audience
  if current_user
    session[:audience] ||= current_user.audience
  else
    session[:audience] ||= 'general'
  end
end
And then calling session[:audience] anywhere in my controllers and views. This seems fine, except that I'm seeing in the New Relic logs that sometimes the session is not set, and therefore the app gets a nasty nil.
I'm thinking I should use instance variables instead, maybe something like this:
def set_audience
  if current_user
    session[:audience] ||= current_user.audience
  end
  @audience = session[:audience]
  @audience = 'general' if @audience.blank?
end
And then calling @audience in my app.
Is this correct? I would like to make sure I'm used the preferred approach to this.
I think the standard approach here would be to use a helper method on ApplicationController:
class ApplicationController < ActionController::Base
  private

  def current_audience
    @current_audience ||= current_user.audience
  end
  helper_method :current_audience
end
This will work pretty much exactly like the current_user helper method in your controllers and views. Depending on the specifics of your application, you may want to add some more robust nil handling, but this is the basic idea.
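One way to add that nil handling, sketched with a plain object so it runs outside Rails (the AudienceHelper class and the 'general' fallback are stand-ins for the controller and its default; they are assumptions, not part of the answer):

```ruby
require 'ostruct'

# The defined? guard memoizes the lookup even when it legitimately returns
# nil, and the ternary supplies a default so views never see nil.
class AudienceHelper
  def initialize(user)
    @user = user
  end

  def current_audience
    return @current_audience if defined?(@current_audience)
    @current_audience = @user ? @user.audience : 'general'
  end
end

signed_in = AudienceHelper.new(OpenStruct.new(:audience => 'vip'))
anonymous = AudienceHelper.new(nil)
puts signed_in.current_audience   # "vip"
puts anonymous.current_audience   # "general"
```

The defined? guard matters because a plain ||= would re-run the lookup on every call whenever the memoized value is nil or false.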

rails auditor gem - audits.user_id may not be NULL

I'm using the auditor gem to track changes in my models, and I find it quite annoying that whenever I try to work from the console I get this error (I guess user_id is taken from current_user, which is not set there).
I'm trying to create some objects for development, and I have to do it from the dbconsole every time.
I use 'audit(:create, :update, :destroy)' and not 'audit!'.
Does anyone know if I can suppress these errors or disable the user_id null check? (I don't care that if I run the console in production and create an object, I'll have a NULL there.)
Many thanks,
Zach
I had the same issue. Problem was that I didn't have any attr_accessible declared. Read this: https://github.com/collectiveidea/audited#gotchas
edit:
Also, I had to define a method for current_user:
def current_user
  User.find_by_username 'root'
end
Why don't you just set current_user to a user object before your audit work?
current_user = User.first
# ...your other code here

default scoping confusion

UPDATED:
I am setting a default scope for some models at runtime, which seems to work locally in my development environment; my code is given below.
SET_OF_MODELS = [Event, Group, User]

@account = Account.find_by_subdomain(account_subdomain)
SET_OF_MODELS.each { |m| m.set_default_scope(@account.id) }

def set_default_scope(account_id)
  default_scope :conditions => { :account_id => account_id }
end
If I execute this code in the console with, say, @account1, User.first returns an @account1 user; but if I then repeat the code with @account2, User.first still returns an @account1 user instead of an @account2 one. The problem does not show up when running the app on my local server, only on the staging server.
My guess is that some state is being cached, but I'm not sure. Can someone explain in depth?
Thanks in advance
default_scope saves state in its class. That is harmful in a concurrent environment because it leads to race conditions, so you must isolate scope state between requests.
You can use around_filter
class ApplicationController < ActionController::Base
  around_filter :set_default_scope

  def set_default_scope
    @account = Account.find_by_subdomain(account_subdomain)
    opts = { :find => { :conditions => { :account_id => @account.id } } }
    Event.send(:with_scope, opts) do
      Group.send(:with_scope, opts) do
        User.send(:with_scope, opts) do
          yield
        end
      end
    end
  end
end
You can refactor the repeated .send(:with_scope, opts) into a class method like with_account_scope(account_id).
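That refactoring might look like the following sketch (the module name is an assumption, and the :find => :conditions hash follows the old-style with_scope API used in the answer):

```ruby
# Extend each model (Event.extend(AccountScoped), etc.) so the nesting in
# the around_filter reads Event.with_account_scope(@account.id) { ... }.
module AccountScoped
  def with_account_scope(account_id, &block)
    with_scope({ :find => { :conditions => { :account_id => account_id } } }, &block)
  end
end
```

Because the module only forwards to whatever with_scope the receiver defines, it can also be exercised against a stub class in a unit test, without loading ActiveRecord.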
Development differs from production. In production all classes are loaded once and cached, so you can't redefine the default scopes on each request.
In development the classes are loaded on each request, to allow easy development: each change you do in the code is visible/active on the next request.
If you really want to, you can disable this behaviour in production. It will make your whole site slower, but maybe that is not really an issue for you. To turn it off, edit config/environments/production.rb, find the line containing
config.cache_classes = true
and switch that to false.
Hope this helps.
There is nothing wrong with the above code; the problem was with the server used, i.e. Thin. It worked perfectly after replacing Thin with Mongrel. I think Thin wasn't executing set_default_scope more than once, except after loading the application.