How to test ActiveRecord without Rails? - ruby-on-rails

I have a project in which I use ActiveRecord to store information in a sqlite db file. I'm not using Rails, and AR seems to do the job perfectly. My question is: how exactly do I test my classes without hitting the db? I found some gems that would do the trick (FactoryGirl, UnitRecord), but they are meant to work with Rails.
class News < ActiveRecord::Base
  belongs_to :feed

  def delete_old_news_if_necessary
    time_limit = Settings::time_limit
    return if time_limit.zero?
    News.destroy_all("date < #{time_limit}")
  end

  def delete_news_for_feed(feed_id)
    News.destroy_all(:id => feed_id)
  end

  def news_for_feed(feed_id)
    News.find(feed_id)
  end
end
I read that I can do a column stub:
Column = ActiveRecord::ConnectionAdapters::Column
News.stubs(:columns).returns([Column.new(),...])
Is this the right way to do these tests? Also, when is it better to have a separate db just for testing: create it, run the tests, and then delete it?

If you want to avoid hitting the db in tests, I can recommend the mocha gem. It does stubbing and also lets you define expectations.
Edit: Regarding your question on when it is better to use a test db: I would say, whenever there is no reason against it. :)
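If you do go with a throwaway test db, a common pattern outside Rails is to point ActiveRecord at an in-memory SQLite database in the test helper, so nothing has to be created or deleted on disk. A minimal sketch, assuming the News model from the question (the column names are guesses):

require 'active_record'

# Throwaway in-memory database; it disappears when the test process exits.
ActiveRecord::Base.establish_connection(:adapter => 'sqlite3', :database => ':memory:')

# Define only the tables the tests need.
ActiveRecord::Schema.define do
  create_table :news do |t|
    t.integer  :feed_id
    t.datetime :date
  end
end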
Edit: For example, you can mock News.find like this in a test:
def test_news_for_feed
  # define your expectations:
  news = News.new
  News.expects(:find).with(1).returns(news)
  # call the method to be tested:
  News.new.news_for_feed(1)
end
At the same time this makes sure that find gets called exactly once. There are a lot more things Mocha can do for you; take a look at the documentation. Btw., it looks like these methods of yours should be class methods, no?
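For reference, a minimal sketch of what that class-method version could look like (same behaviour as the code in the question, just moved to the class level; this is only a suggestion):

class News < ActiveRecord::Base
  belongs_to :feed

  def self.news_for_feed(feed_id)
    find(feed_id)
  end

  def self.delete_news_for_feed(feed_id)
    destroy_all(:id => feed_id)
  end
end

# then: News.news_for_feed(1) instead of News.new.news_for_feed(1)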

Related

How to easily patch several ActiveRecord models against deadlocks

So I maintain a Rails app with more than 150 database tables, and we are experiencing deadlocks in several places.
After reading through this post https://hackernoon.com/troubleshooting-and-avoiding-deadlocks-mysql-rails-766913f3cfbc and getting a better understanding of the different situations, it seems one common pattern we have is unique indexes waiting on each other under concurrent locks.
So I am looking for a way to say in a model that it should not try to insert two records at a time, since MySQL will lock the table. I want it to be as easy as:
class BingoCard < ActiveRecord::Base
  protect_table_locks
end
Which would use a Redis-based lock to wrap around the create operations.
I already looked into this answer for ideas: Mutex for ActiveRecord Model
I plan on posting my own answer when I have it.
This is my draft implementation.
If there is enough interest, I will make it a gem.
# frozen_string_literal: true

module ActiveRecord
  module PersistenceRedisLock
    private

    def _create_record
      _lock_manager.lock(_locked_resource_id, _lock_duration) do |_lock_info|
        super
      end
    end

    def _locked_resource_id
      # TODO: make it a configurable option
      "PersistenceRedisLock#{self.class.table_name}"
    end

    def _lock_duration
      # TODO: make it a configurable option
      10.seconds # Maybe too long of a default, but this is a proof of concept for now
    end

    def _lock_manager
      @@_lock_manager ||= Redlock::Client.new [Ph::Redis.redis_url_for(:red_locks)]
    end
  end

  class Base
    def self.protect_table_locks
      self.prepend PersistenceRedisLock
    end
  end
end
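With the module prepended, any code path that ends in an INSERT for that model goes through the Redis lock. A quick illustration (the number attribute is just a placeholder):

class BingoCard < ActiveRecord::Base
  protect_table_locks
end

BingoCard.create!(:number => 7)    # acquires the Redis lock around the INSERT
BingoCard.new(:number => 7).save!  # same path: saving a new record calls _create_record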

Organizing API-Calls in callbacks

We maintain several Rails apps which all pose a similar problem that we don't have a really good solution to: all of these apps contain models that need to make an API call to an external service at some point in their lifecycle.
Possible cases:
A user is subscribed to a newsletter subscriber list when successfully created
Prices for an offer are synced with an external shopping system after updating
A product is updated in the search index after updating
What we experienced to NOT be a good solution: adding these calls to the after_* callbacks of the model, since that breaks tests fast, because every factory now has to deal with the API calls.
I'm looking for a good way to organize these API calls. How do you guys do this?
Ideas we came up with, which I consider not really ideal:
Moving those callbacks to the controller. Now they get easily forgotten when creating an object.
Spawning an asynchronous worker to handle the API call. Then every app - even a small one - needs to carry the overhead of a delayed job queue like Sidekiq (see the sketch below).
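For the second idea, the moving parts stay fairly small. A minimal sketch with Sidekiq and an after_commit callback (the worker name, Offer model, and ExternalShop client are all hypothetical):

require 'sidekiq'

# Hypothetical worker that pushes an offer's prices to the external shop system.
class SyncOfferPricesWorker
  include Sidekiq::Worker

  def perform(offer_id)
    offer = Offer.find(offer_id)
    ExternalShop.sync_prices(offer)  # hypothetical API client
  end
end

class Offer < ActiveRecord::Base
  # Enqueue only after the transaction commits, so the worker sees the new prices.
  after_commit :enqueue_price_sync, :on => :update

  private

  def enqueue_price_sync
    SyncOfferPricesWorker.perform_async(id)
  end
end

Tests then only need to assert that the job was enqueued, rather than stub the external API.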
If you are concerned about testing you could put the callback methods into a separate class and mock the callback class during testing. Here's an example using RSpec, given the following Foo and FooCallbacks classes:
class Foo < ActiveRecord::Base
  after_save FooCallbacks
end

class FooCallbacks
  def self.after_save(record)
    fail "Call to external API"
  end
end
You can write and successfully run a spec like this:
describe Foo do
  before do
    allow(FooCallbacks).to receive(:after_save)
  end

  it "should not invoke real APIs" do
    Foo.create
  end
end
This is how I did it in the end, following the advice:
In Foo:
class Foo < ActiveRecord::Base
  before_save Foo::DataSync
end
Foo::DataSync looks like this:
class Foo::DataSync
  def self.before_save(foo)
    ...do the API-Calls...
  end
end
Now for testing in rspec I added this:
To spec_helper.rb:
config.before(:each) do
  Foo::DataSync.stub(:before_save)
end
Note that config.before(:suite) will not work, since Foo::DataSync is not loaded at that time.
Now foo_spec.rb contains just this:
describe Foo do
  let(:foo) { create(:foo) }

  it "will sync its data before every save" do
    expect(Foo::DataSync).to receive(:before_save).with(foo)
    foo.save
  end
end
The Foo::DataSync can be tested like this:
describe Foo::DataSync do
  let!(:foo) { create(:foo) }

  before do
    Foo::DataSync.unstub(:before_save)
  end

  after do
    Foo::DataSync.stub(:before_save)
  end

  describe "#before_save" do
    ...my examples...
  end
end

Clearing class instance variables in between Rspec examples

I'm writing specs for a gem of mine that extends ActiveRecord. One of the things it has to do is set a class instance variable like so:
class MyModel < ActiveRecord::Base
  @foo = "asd"
end
Right now, when I set @foo in one it "should" {} block, it persists into the next one. I understand this is normal Ruby behavior, but I thought RSpec had some magic that cleaned everything out between specs. I'd like to know how I can re-use a single AR model for all my tests (since creating a bunch of tables would be a pain) while being sure that @foo is cleared between each test. Do I need to do this manually?
I wound up adding a method to my helper class that generates new classes with Class.new, so I could be sure that nothing was left over between tests.
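A minimal sketch of that helper idea (the helper name and table name are assumptions); because each example gets a brand-new anonymous subclass, no class-level state can leak between tests:

def fresh_model(table = "my_models")
  Class.new(ActiveRecord::Base) do
    self.table_name = table
  end
end

# in an example:
# model = fresh_model
# model.instance_variable_set(:@foo, "asd")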
You should simply make good use of the after :each block.
after(:each) do
  @foo = nil
end
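Note that the variable in question lives on the model class itself, so if a plain @foo = nil inside the example group is not enough, you can reset the class-level copy explicitly; this is plain Ruby, nothing RSpec-specific:

after(:each) do
  MyModel.instance_variable_set(:@foo, nil)
end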

DRY way of calling a method in every rails model

Along the same lines as this question, I want to call acts_as_reportable inside every model so I can do one-off manual reports in the console in my dev environment (with a dump of the production data).
What's the best way to do this? Putting acts_as_reportable if ENV['RAILS_ENV'] == "development" in every model is getting tedious and isn't very DRY at all. Everyone says monkey patching is the devil, but a mixin seems overkill.
Thanks!
For me the best way would be to add it to ActiveRecord::Base in an initializer. I believe acts_as_reportable is a mixin under the hood. By doing this, you will be able to call all the methods that come with acts_as_reportable in all your models, in the development environment only.
I would do it in the config/initializers directory, in a file called model_mixin.rb or whatever you wish.
class ActiveRecord::Base
  acts_as_reportable if (ENV['RAILS_ENV'] == "development")
end
Whether monkey patching is dirty depends on you and on how readable the code ends up being; in my opinion, use what you are comfortable with. The features are there to be used, and it always depends on the user.
What about creating a Reportable class and deriving all the models from it?
class Reportable < ActiveRecord::Base
  self.abstract_class = true  # no "reportables" table; this class only holds shared behaviour
  acts_as_reportable if ENV['RAILS_ENV'] == "development"
end

class MyModel < Reportable
end
I use a mixin for common methods across all my models:
module ModelMixins
  # Splits a comma-separated list of categories and associates them
  def process_new_categories(new_categories)
    unless new_categories.nil?
      for title in new_categories.split(",")
        self.categories << Category.find_or_create_by_title(title.strip.capitalize)
      end
      self.update_counter_caches
    end
  end
end
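Usage is then just an include in each model; for example (Article is a hypothetical model with a categories association and an update_counter_caches method):

class Article < ActiveRecord::Base
  include ModelMixins
  has_many :categories
end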
I considered doing it in other ways, but to me this seems to be the most legitimate way of DRYing up your models. A model equivalent of the ApplicationController would be a neat solution, though I'm not sure how you'd go about that, or whether there's a decent argument against having one.
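For what it's worth, the model equivalent of ApplicationController is essentially an abstract base class, which is what later Rails versions ship as ApplicationRecord. A sketch of the idea (the class name is just a convention):

class ApplicationModel < ActiveRecord::Base
  self.abstract_class = true  # no table expected for this class
  acts_as_reportable if ENV['RAILS_ENV'] == "development"
end

class Product < ApplicationModel
end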

How to call expire_fragment from Rails Observer/Model?

I've pretty much tried everything, but it seems impossible to use expire_fragment from models. I know you're not supposed to and it's non-MVC, but surely there must be some way to do it.
I created a module in lib/cache_helper.rb with all my expire helpers; each one is just a bunch of expire_fragment calls. I have all my cache sweepers set up under /app/sweepers and an "include CacheHelper" in my application controller, so expiring the cache from within the app via controllers works fine.
The thing is, I have some external daemons and especially some recurring cron tasks which call a rake task that calls a certain method. This method does some processing and inserts entries into the model, after which I need to expire the cache.
What's the best way to do this, given that I can't specify a cache sweeper within the model?
Straight-up observers seem to be the best solution, but then it complains about expire_fragment being undefined, etc. I've even tried including the ActionController caching classes into the observer, but that didn't work. I'd love some ideas on how to create a solution for this. Thanks.
Disclaimer: My rails is a bit rusty, but this or something like it should work
ActionController::Base.new.expire_fragment(key, options = nil)
The solution provided by Orion works perfectly. As an enhancement and for convenience, I've put the following code into config/initializers/active_record_expire_fragment.rb
class ActiveRecord::Base
  def expire_fragment(*args)
    ActionController::Base.new.expire_fragment(*args)
  end
end
Now, you can use expire_fragment on all instances of ActiveRecord::Base, e.g. User.first.expire_fragment('user-stats')
This is quite easy to do. You can implement Orion's suggestion, but you can also implement the broader technique illustrated below, which gives you access to the current controller from any model and for whichever purpose you decided to break MVC separation for (e.g. messing with the fragment cache, accessing current_user, generating paths/URLs, etc.)
In order to gain access to the current request's controller (if any) from any model, add the following to environment.rb or, much preferably, to a new plugin (e.g. create vendor/plugins/controller_from_model/init.rb containing the code below):
module ActiveRecord
  class Base
    protected

    def self.thread_safe_current_controller #:nodoc:
      Thread.current[:current_controller]
    end

    def self.thread_safe_current_controller=(controller) #:nodoc:
      Thread.current[:current_controller] = controller
    end

    # pick up the correct current_controller version
    # from @@allow_concurrency
    if @@allow_concurrency
      alias_method :current_controller, :thread_safe_current_controller
      alias_method :current_controller=, :thread_safe_current_controller=
    else
      cattr_accessor :current_controller
    end
  end
end
Then, in app/controllers/application.rb,
class ApplicationController < ActionController::Base
  before_filter { |controller|
    # all models in this thread/process refer to this controller
    # while processing this request
    ActiveRecord::Base.current_controller = controller
  }

  ...
Then, from any model,
if controller = ActiveRecord::Base.current_controller
  # called from within a user request
else
  # no controller is available, didn't get here from a request - maybe irb?
end
Anyhow, in your particular case you might want to inject code into your various ActiveRecord::Base descendants when the relevant controller classes load, so that the actual controller-aware code still resides in app/controllers/*.rb, but it is not mandatory to do so in order to get something functional (though ugly and hard to maintain.)
Have fun!
In one of my scripts I use the following hack:
require 'action_controller/test_process'

sweepers = [ApartmentSweeper]

ActiveRecord::Base.observers = sweepers
ActiveRecord::Base.instantiate_observers

controller = ActionController::Base.new
controller.request = ActionController::TestRequest.new
controller.instance_eval do
  @url = ActionController::UrlRewriter.new(request, {})
end

sweepers.each do |sweeper|
  sweeper.instance.controller = controller
end
Then, once the ActiveRecord callbacks are called, sweepers are able to call expire_fragment.
I'm a bit of a rails noob, so this may not be correct, or even helpful, but it seems wrong to be trying to call controller actions from within the model.
Is it not possible to write an action within the controller that does what you want and then invoke the controller action from within your rake task?
Why not have your external rake tasks call the expiry method on the controller? Then you're still being MVC compliant, and you aren't building in a dependence on some scoping hack, etc.
For that matter, why not just put all the daemon / external functionality in a controller and have rake / cron call that? It would be loads easier to maintain.
-- MarkusQ
Wouldn't it be easier and cleaner to just pass the current controller as an argument to the model method call? Like the following:
def delete_cascade(controller)
  self.categories.each do |c|
    c.delete_cascade(controller)
    controller.expire_fragment(%r{article_manager/list/#{c.id}.*})
  end
  PtSection.delete(self.id)
  controller.expire_fragment(%r{category_manager/list/#{self.id}.*})
end
You can access all public methods and properties of the controller from within the model.
As long as you do not modify the state of the controller, it should be fine.
This might not work for what you're doing, but you may be able to define a custom callback on your model:
class SomeModel < ActiveRecord::Base
  define_callback :after_exploded

  def explode
    ... do something that invalidates your cache ...
    callback :after_exploded
  end
end
You can then use a sweeper like you would normally:
class SomeModelSweeper < ActionController::Caching::Sweeper
  observe SomeModel

  def after_exploded(model)
    ... expire your cache
  end
end
Let me know if this is useful!
