I know some of you are already doubting my sanity with this. I have an ActiveRecord class that uses method_missing to dig inside a JSON attribute it has.
# app/models/request_interactor.rb
...
def method_missing(method_sym, *arguments, &block)
  return self.request_params[method_sym.to_s] if self.request_params[method_sym.to_s]
  super
end
The test looks like this:
before(:each) do
  @ri = RequestInteractor.create(result: { magic_school: true, magic_learnt: 'all things magical' }, request_params: { application_id: 34, school_id: 20, school_name: 'Hogwarts', course_name: 'Defence against the Dark Arts.' })
end
it 'should respond to attributes set in the request parameters' do
  expect(@ri).to respond_to(:school_name)
  expect(@ri.school_name).to eq('Hogwarts')
end
I tried binding inside the test: @ri.school_name will eq 'Hogwarts', but when it runs the respond_to matcher it fails, saying there is no such method! The dirty, dirty liar!
I tried doing something like this in the model:
def respond_to?(method, include_private = false)
  super || self.respond_to?(method, include_private)
end
But this raises a stack level too deep error, because of recursion, because of recursion... So now the fate of my day is in your hands! Enlighten me, O great ones: how would I test respond_to for methods handled by method_missing?
Use respond_to_missing?. More info here.
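A minimal sketch against the model in the question, assuming request_params is always a Hash with string keys (as the method_missing above implies):

  # app/models/request_interactor.rb
  def respond_to_missing?(method_sym, include_private = false)
    request_params.key?(method_sym.to_s) || super
  end

Ruby's default respond_to? consults respond_to_missing?, so RSpec's respond_to matcher now passes without any recursion.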
Now, with all this being said, your pattern will still look hackish if you ask me.
Refactors
Ruby has tons of ways to clean this up.
Use a delegation pattern
delegate :method_name, :to => :request_params
(check the other options in the docs). This solves your problem by defining a real method on your object, so respond_to? works and you avoid overriding method_missing.
Generate your access methods when setting request_params (meta-programming your accessors; see the sketch after this list).
Use OpenStruct, since it can be initialized with a Hash such as your request_params. If you add delegation on top, you should be cool.
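As a sketch of the second option (the after_initialize hook and key handling below are assumptions about your model):

  # app/models/request_interactor.rb
  after_initialize :define_request_param_readers

  private

  def define_request_param_readers
    (request_params || {}).each_key do |key|
      define_singleton_method(key) { request_params[key] }
    end
  end

Because these are real singleton methods, respond_to? (and RSpec's respond_to matcher) works with no extra overrides.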
Hope this helps.
Related
How could I write a test to find the last created record?
This is the code I want to test:
Post.order(created_at: :desc).first
I'm also using FactoryBot.
If you've called your method 'last_post':
def self.last_post
  Post.order(created_at: :desc).first
end
Then in your test:
it 'should return the last post' do
  expect(Post.last_post).to eq(Post.last)
end
On another note, the easiest way to write your code is simply
Post.last
And you shouldn't really be testing the outcome of Ruby methods (you should be making sure the correct Ruby methods are called), so if you did:
def self.last_post
  Post.last
end
Then your test might be:
it 'should send the last method to the post class' do
  expect(Post).to receive(:last)
  Post.last_post
end
You're not testing the outcome of the 'last' method call - just that it gets called.
The accepted answer is incorrect. Simply doing Post.last will order the posts by the ID, not by when they were created.
https://apidock.com/rails/ActiveRecord/FinderMethods/last
If you're using sequential IDs (and ideally you shouldn't be) then obviously this will work, but if not then you'll need to specify the column to sort by. So either:
def self.last_post
  order(created_at: :desc).first
end
or:
def self.last_post
  order(:created_at).last
end
Personally I'd look to do this as a scope rather than a dedicated method.
scope :last_created, -> { order(:created_at).last }
This allows you to create some nice chains with other scopes, such as if you had one to find all posts by a particular user/account, you could then chain this pretty cleanly:
Post.for_user(user).last_created
Sure you can chain methods as well, but if you're dealing with Query interface methods I feel scopes just make more sense, and tend to be cleaner.
If you wanted to test that it returns the correct record, in your test you could do something like:
let!(:last_created_post) { factory_to_create_post }

. . .

it "returns the correct post" do
  expect(Post.last_post).to eq(last_created_post)
end
If you wanted an even better test, you could create a couple of records before the last record, to verify that the method under test pulls the correct result and not just the result from a single record.
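A hedged sketch of that setup; the :post factory and its attributes are assumptions:

  let!(:older_posts)       { [3.days.ago, 2.days.ago].map { |t| create(:post, created_at: t) } }
  let!(:last_created_post) { create(:post, created_at: 1.hour.ago) }

  it "returns the most recently created post" do
    expect(Post.last_post).to eq(last_created_post)
  end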
I need to set the id parameter to a default value if it wasn't submitted with the form.
Is it ok to do something like this in Rails or does this violate any standards or cause possible issues?
if params[:cart][:cart_addresses_attributes]["0"][:id].blank?
  params[:cart][:cart_addresses_attributes]["0"][:id] = 1234 # default id
end
My implementation works with this logic, but I am not sure if this is the proper way to handle the issue.
There's a chance params[:record_type] is nil, which will lead to an undefined method error when you attempt to call [:id] on nil. Additionally, I'd find it a bit weird to directly mutate params, even though you technically can do that. I'd consider using Strong Parameters processing methods like so (I've added a full action, which isn't in your sample, to give more context on how this would be used):
def create
  @record_type = RecordType.new(record_type_params)

  if @record_type.save
    redirect_to @record_type
  else
    render :new
  end
end

def record_type_params
  params.require(:record_type).permit(:id).reverse_merge(id: 1234)
end
The reverse_merge call is a way to merge the user-supplied parameters into your defaults. This accomplishes what you're after in what I would consider a more conventional way and doesn't mutate params.
def cart_params
  params.require(:cart).permit(:cart_addresses_attributes => [:id]).tap do |p|
    p[:cart_addresses_attributes]["0"][:id] ||= 1234
  end
end
if params[:record_type][:id].nil? # or replace ".nil?" with "== nil"
  params[:record_type][:id] = 1234
end
Personally, this is the way I prefer to do it. Some ways are more efficient than others, but if that works for you, I'd roll with it.
I have done some googling on it but to no avail.
My scenario is that I have a helper method that takes some options and then renders a commonly used graphical element on the page through a partial, tweaked by the options passed in. Many of these options have overridable defaults set by the helper, based on the first two required arguments, object and context.
def my_helper(object, context, options = {})
  defaults = { ... }
  defaults[:foo] = "bar" if object.is_a?(SomeObject)
  defaults[:ping] = "pong" if context.eql?(:some_context)
  ...
  render partial: '/path/to/partial', locals: defaults.merge(options)
end
While the context is all nice and dandy, I have decided to move away from looking at the object class and, where plausible, use respond_to? instead. What I want to avoid, though, is having multiple if object.respond_to?(:foo?) && object.foo? checks; I would rather use something like if object.respond_to_and_send(:foo?), which would return nil if the object does not respond to the method.
Update
I forgot to mention that this is a Rails 3.2 application, which is a shame since the updated try method in Rails 4, as mentioned in Holger Just's answer, is exactly what I need.
You can use try like this:
object.try(:foo?)
It will check whether object responds to foo? and, if so, call the method. If object does not respond to the method, try will return nil.
See the documentation for details.
The answer to your question is no. Update: object.try(:foo?) works in Rails 4, as Holger Just suggests above.
There are workarounds though. One of them is to create a module with the method you need and extend your context object with it.
module SmartResponder
  def respond_to_and_send(method, *args, &block)
    public_send(method, *args, &block) if respond_to?(method)
  end
end

context = [1, 2, 3]
context.extend(SmartResponder)

context.respond_to_and_send(:to_s)
# => "[1, 2, 3]"
context.respond_to_and_send(:to_sssss)
# => nil
context.respond_to_and_send(:inject, 100) { |x, y| x + y }
# => 106
object.send(:foo) rescue nil
This should do the trick. Be careful of introducing subtle bugs with this technique (for example, if the method call itself raises an error, you will get nil and the actual exception will be hidden).
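A quick sketch of that pitfall; Widget and foo are hypothetical:

  class Widget
    def foo
      raise ArgumentError, "a genuine bug inside foo"
    end
  end

  Widget.new.send(:foo) rescue nil
  # => nil -- the ArgumentError is silently swallowed, so the real bug never surfaces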
This is probably one of the things that all new users find out about Rails sooner or later. I just realized that Rails updates all fields declared with the serialize keyword on every save, without checking whether anything inside them actually changed. In a way that is the sensible thing to do for a generic framework.
But is there a way to override this behavior? If I can keep track of whether the values in a serialized field have changed or not, is there a way to prevent it from being included in the update statement? I tried using update_attributes and limiting the hash to the fields of interest, but Rails still updates all the serialized fields.
Suggestions?
Here is a similar solution for Rails 3.1.3.
From: https://sites.google.com/site/wangsnotes/ruby/ror/z00---topics/fail-to-partial-update-with-serialized-data
Put the following code in config/initializers/
ActiveRecord::Base.class_eval do
  class_attribute :no_serialize_update
  self.no_serialize_update = false
end
ActiveRecord::AttributeMethods::Dirty.class_eval do
  def update(*)
    if partial_updates?
      if self.no_serialize_update
        super(changed)
      else
        super(changed | (attributes.keys & self.class.serialized_attributes.keys))
      end
    else
      super
    end
  end
end
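With that initializer in place, usage is the same idea as in the next answer; ModelItem is a placeholder name here:

  model_item = ModelItem.find(params[:id])
  model_item.no_serialize_update = true
  model_item.update_attributes(:title => "only the changed, non-serialized columns are written")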
Yes, that was bugging me too. This is what I did for Rails 2.3.14 (or lower):
# config/initializers/nopupdateserialize.rb
module ActiveRecord
  class Base
    class_attribute :no_serialize_update
    self.no_serialize_update = false
  end
end

module ActiveRecord2
  module Dirty
    def self.included(receiver)
      receiver.alias_method_chain :update, :dirty2
    end

    private

    def update_with_dirty2
      if partial_updates?
        if self.no_serialize_update
          update_without_dirty(changed)
        else
          update_without_dirty(changed | (attributes.keys & self.class.serialized_attributes.keys))
        end
      else
        update_without_dirty
      end
    end
  end
end

ActiveRecord::Base.send :include, ActiveRecord2::Dirty
Then in your controller use:
model_item.no_serialize_update = true
model_item.update_attributes(params[:model_item])
model_item.increment!(:hits)
model_item.update_attribute(:nonserializedfield, "update me")
etc.
Or define it in your model if you do not expect any changes to the serialized field once created (but update_attribute(:serialized_field, "update me") still works!):
class Model < ActiveRecord::Base
  serialize :serialized_field

  def no_serialize_update
    true
  end
end
I ran into this problem today and ended up hacking my own serializer together with a getter and setter. First I renamed the field to #{column}_raw and then used the following code in the model (for the media attribute in my case).
require 'json'
...
def media=(media)
  self.media_raw = JSON.dump(media)
end

def media
  JSON.parse(media_raw) if media_raw.present?
end
Now partial updates work great for me, and the field is only updated when the data is actually changed.
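For example (a sketch; record stands for any instance of the model above):

  record.media = { "kind" => "video", "length" => 120 }
  record.save                    # media_raw is written because it changed

  record.media = record.media    # re-assigning the same structure dumps the same JSON string
  record.changed?                # => false, so a partial update leaves media_raw alone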
The problem with Joris' answer is that it hooks into the alias_method_chain chain, disabling all the links added after it (like update_with_callbacks, which explains why callbacks stop being triggered). I'll try to make a diagram to make it easier to understand.
You may start with a chain like this
update -> update_with_foo -> update_with_bar -> update_with_baz
Notice that update_without_foo points to update_with_bar, and update_without_bar points to update_with_baz.
Since you can't directly modify update_with_bar per the inner workings of alias_method_chain you might try to hook into the chain by adding a new link (bar2) and calling update_without_bar, so:
alias_method_chain :update, :bar2
Unfortunately, this will get you the following chain:
update -> update_with_bar2 -> update_with_baz
So update_with_foo is gone!
So, knowing that alias_method_chain won't let you redefine the _with methods, my solution so far has been to redefine update_without_dirty and do the attribute selection there.
Not quite a solution but a good workaround in many cases for me was simply to move the serialized column(s) to an associated model - often this actually was a good fit semantically anyway.
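A hedged sketch of that layout; the model and column names are made up:

  class Item < ActiveRecord::Base
    has_one :item_settings, :dependent => :destroy
  end

  class ItemSettings < ActiveRecord::Base
    belongs_to :item
    serialize :preferences  # the serialized data now lives on its own row,
                            # so saving Item no longer rewrites it
  end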
There is also discussion in https://github.com/rails/rails/issues/8328.
I have the following controller test case:
def test_showplain
  Cleaner.expects(:parse).with(@somecontent)
  Cleaner.any_instance.stubs(:plainversion).returns(@returnvalue)
  post :showplain, {:content => @somecontent}
end
This works fine, except that I want the "stubs(:plainversion)" to be an "expects(:plainversion)".
Here's the controller code:
def showplain
  Cleaner.parse(params[:content]) do |cleaner|
    @output = cleaner.plainversion
  end
end
And the Cleaner is simply:
class Cleaner
  ### other code and methods ###

  def self.parse(content)
    cleaner = Cleaner.new(content)
    yield cleaner
    cleaner.close
  end

  def plainversion
    ### operate on @content and return ###
  end
end
Again, I can't figure out how to reliably test the "cleaner" that is made available from the "parse" method. Any suggestions?
This is a little tricky. The easiest approach will be to break the problem into two pieces: the testing of the controller and the testing of Cleaner.parse.
You have the testing of the controller covered; just remove your expectation around the plainversion call.
Then, separately, you want to test the Cleaner.parse method.
cleaner = Cleaner.new('x')
cleaner.expects(:close)
Cleaner.expects(:new).returns(cleaner)

called_with = nil
Cleaner.parse('test') do |p|
  called_with = p
end

assert_equal called_with, cleaner
That said, it's not very clear what's going on there, which makes me think there is a simpler version of this. Can Cleaner just be a simple function that takes a string and returns another one? Skip all the yielding and variable scoping? That would be much easier to test.
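A sketch of that simpler shape; the plain_version name is made up:

  class Cleaner
    def self.plain_version(content)
      cleaner = new(content)
      result = cleaner.plainversion
      cleaner.close
      result
    end
  end

  # The controller no longer needs a block, which makes it trivial to stub:
  def showplain
    @output = Cleaner.plain_version(params[:content])
  end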
You might find the documentation for Mocha::Expectation#yields useful.
I've made an attempt at showing how you might do what you want in this gist. Note that I had to tweak the code a bit to get it into a self-contained runnable test.
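For reference, a hedged sketch of the yields approach against the question's code (this is an illustration, not the contents of the linked gist):

  def test_showplain
    cleaner = mock('cleaner')
    cleaner.expects(:plainversion).returns(@returnvalue)

    # Stub the class method and have it yield our mock in place of the real cleaner.
    Cleaner.expects(:parse).with(@somecontent).yields(cleaner)

    post :showplain, {:content => @somecontent}
    assert_equal @returnvalue, assigns(:output)
  end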