This one is going to be a bit interesting. After noting the peculiar behavior of let, I've decided to try to make a direct equivalent of Lisp's let for RSpec by binding instance variables to example groups.
Here's the code for this so far:
module RSpec
  module Core
    module MemoizedHelpers
      module ClassMethods
        def let_local(name, &block)
          raise "#let_local called without a block" if block.nil?
          # Binding to string instances, fix
          current_example = caller[0]
          # Attempt to find ivar, if not, make it.
          meth = -> {
            current_example.instance_variable_get("@#{name}") ||
              current_example.instance_variable_set("@#{name}", block.call)
          }
          MemoizedHelpers.module_for(self).send(:define_method, name, meth)
          before(:all) { __send__(name) }
        end
      end
    end
  end
end
The problem is that, while it technically works for nested examples, I'm throwing ivars onto a string. I know why it currently works, but man is that hackish... How can I get hold of the current example group that the method will be run inside?
This is more of a thought exercise to see if it can be done.
There are definite performance reasons for something like this, though, when used correctly (and frozen). The use case: if you write tests in a functional manner, this let_local, like the original let, will not get in the way of running tests in parallel, and it will not try to rebuild the object repeatedly (think expensive instantiations).
Granted, this can already be done with a before(:all) ivar, but this may be a cleaner way to go about it.
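One direction I've been toying with (an untested sketch, not working code): since let_local is a class method on the example group, self at definition time is the example group class itself, so the memoized value could live in a class-level ivar on that group instead of on a string pulled out of caller:

module RSpec
  module Core
    module MemoizedHelpers
      module ClassMethods
        def let_local(name, &block)
          raise "#let_local called without a block" if block.nil?
          group = self # the example group class this let_local is declared on
          MemoizedHelpers.module_for(self).send(:define_method, name) do
            if group.instance_variable_defined?("@#{name}")
              group.instance_variable_get("@#{name}")
            else
              group.instance_variable_set("@#{name}", block.call)
            end
          end
          before(:all) { __send__(name) }
        end
      end
    end
  end
end

Nested groups that redefine let_local would get their own module and their own class-level ivar, so shadowing should still behave like the example below.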
Example test code using it:
describe 'Single local, multiple nested example, same local name' do
  let_local(:a) { Person.new('Doctor', 900) }

  it 'will be 900' do
    expect(a.age).to eq(900)
  end

  it 'will be named Doctor' do
    expect(a.name).to eq('Doctor')
  end

  context 'Doc got old' do
    let_local(:a) { Person.new('Doctor', 1000) }

    it 'should now be 1000' do
      expect(a.age).to eq(1000)
    end

    context 'And older still!' do
      let_local(:a) { Person.new('Doctor', 1100) }

      it 'will now be 1100' do
        expect(a.age).to eq(1100)
      end
    end

    it 'will still be 1000' do
      expect(a.age).to eq(1000)
    end
  end

  it 'will still be 900' do
    expect(a.age).to eq(900)
  end
end
The overall intent is to emulate this type of behavior in Lisp:
(let ((x 1))
  (write-line (write-to-string x))    ; prints 1
  (let ((x 2))
    (write-line (write-to-string x))) ; prints 2
  (write-line (write-to-string x)))   ; prints 1
Any tips or ideas?
You can already emulate that behavior using standard Ruby. For example,
def let(pr)
  pr.call
end

let ->(x = 1, y = 2) {
  p [x, y]
  let ->(x = 3, y = 4) {
    p [x, y]
  }
  p [x, y]
}
Output:
[1, 2]
[3, 4]
[1, 2]
I am new to Ruby, and today I found some different behaviour of class_eval with a string versus a block. For example:
class A
end

class C
  A.class_eval("print Module.nesting") # [A, C]
  A.class_eval { print Module.nesting } # [C]
end
As you can see, in the case of the string Module.nesting prints [A, C], while in the case of the block it prints only [C].
Could you please tell me the reason for this?
In the first case, you pass a string to class_eval, and this class_eval is invoked on the class A. Hence, when the expression is evaluated, Module.nesting (which needs to produce its nesting levels) finds itself inside an A, which in turn is evaluated inside a C.
In the second case, you pass a block, which is similar to a proc object. The effect is comparable to having a
class C
  p = Proc.new { print Module.nesting }
  do_something(p)
end
The Proc represents a closure, i.e. its context is the one in which the Proc was created. It is clear that the nesting here is only [C], and this does not change if you evaluate p inside do_something.
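A small illustration of the same point (my own sketch, not from the question): constant lookup inside a class_eval'd block also follows the block's lexical nesting, while a class_eval'd string resolves constants inside the receiver:

class A
  X = 'in A'
end
X = 'at top level'

A.class_eval("X")  # => "in A"          (the string is evaluated with nesting [A])
A.class_eval { X } # => "at top level"  (the block keeps the nesting of its definition site)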
This is a good thing. Imagine the following situation:
def f(p)
  x = 'f'
  p.call
end

def g
  x = 'g'
  p = Proc.new { puts x }
  f(p)
end

g # prints "g"
Because the binding for p is created inside method g, the x referenced in the block refers to the local variable x inside g, although f has a local variable of the same name. Hence, g is printed here. In the same way, the nesting at the point of block definition is what is reproduced in your example.
def some_helper(exam)
  x = 1
  y = 2
  if condition1
    x = 3
    y = 4
  end
  if condition2
    x = 5
    y = 6
  end
  return_something_base_on_x_y(x, y)
end

def return_something_base_on_x_y(x, y)
  return "1/2" if x == 1 && y == 2
  return "3/4" if x == 3 && y == 4
  return "5/6" if x == 5 && y == 6
end
I will call it in a view like this:
some_helper(exam) # exam is an object
How can I write an RSpec test for some_helper? Can I write something like below, only testing the arguments passed to the method?
describe "#some_helper" do
let(:exam) { Exam.create exam_params }
context "condition 1" do
it do
expect "some_helper" already call return_something_base_on_x_y with arguments(1,2) inside them
expect "some_helper" already call return_something_base_on_x_y with arguments(3,4) inside them
expect "some_helper" already call return_something_base_on_x_y with arguments(5,6) inside them
end
end
end
Can I avoid writing something like
expect(some_helper(exam)).to eq "123" # and 456.
Because if the conditions get more complex, I have to work out the whole list of return_something_base_on_x_y results.
You can set expectations on a method before it is called by using a double:
it "sets an expectation that a method should be called"
obj = double('obj')
expect(obj).to recive(:foo).with('bar')
obj.foo('bar')
end
The example is failed if obj.bar is not called.
You can set expectations on an object after the call is done by using spies:
obj = spy('obj')
obj.foo('bar')
expect(obj).to have_received(:foo).with('bar')
This allows you to structure your tests following the arrange, act, assert pattern (or given, when, then in BDD terms).
Can I avoid writing something like
expect(some_helper(exam)).to eq "123" # and 456.
Yes, but it might actually degrade your tests. Stubbing can mask bugs and makes your tests more about verifying the implementation (how the code does its job) than the behavior (the result).
Stubbing is most suitable when the object you're testing touches an application boundary or is not deterministic (for example, a method that generates random values).
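Applied to the helper in the question, a rough sketch might look like the following (assuming a Rails helper spec, where rspec-rails provides the helper object the method is mixed into; Exam, exam_params and the method names come from the question, and the argument values assume condition1 sets x to 3 and y to 4):

describe "#some_helper", type: :helper do
  let(:exam) { Exam.create(exam_params) }

  it "passes the expected arguments when condition1 holds" do
    # Stub the collaborator method on the helper, then spy on it after the call.
    allow(helper).to receive(:return_something_base_on_x_y)

    helper.some_helper(exam)

    expect(helper).to have_received(:return_something_base_on_x_y).with(3, 4)
  end
end

Whether that is better than asserting on the returned string is exactly the trade-off described above.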
I want to test an iterator using RSpec. It seems to me that the only applicable yield matcher is yield_successive_args (according to https://www.relishapp.com/rspec/rspec-expectations/v/3-0/docs/built-in-matchers/yield-matchers); the other matchers handle only a single yield.
But yield_successive_args fails if the values are yielded in a different order than specified.
Is there any method or nice workaround for testing an iterator that yields in any order?
Something like the following:
expect { |b| array.each(&b) }.to yield_multiple_args_in_any_order(1, 2, 3)
Here is the matcher I came up with for this problem; it's fairly simple and should work with a good degree of efficiency.
require 'set'

RSpec::Matchers.define :yield_in_any_order do |*values|
  expected_yields = Set[*values]
  actual_yields = Set[]

  match do |blk|
    blk[->(x) { actual_yields << x }] # ***
    expected_yields == actual_yields  # ***
  end

  failure_message do |actual|
    "expected to receive #{surface_descriptions_in expected_yields} "\
    "but #{surface_descriptions_in actual_yields} were yielded."
  end

  failure_message_when_negated do |actual|
    "expected not to have all of "\
    "#{surface_descriptions_in expected_yields} yielded."
  end

  def supports_block_expectations?
    true
  end
end
I've highlighted the lines containing most of the important logic with # ***. It's a pretty straightforward implementation.
Usage
Just put it in a file, under spec/support/matchers/, and make sure you require it from the specs that need it. Most of the time, people just add a line like this:
Dir[File.dirname(__FILE__) + "/support/**/*.rb"].each {|f| require f}
to their spec_helper.rb, but if you have a lot of support files and they aren't all needed everywhere, this can get to be a bit much, so you may want to require it only where it is used.
Then, in the specs themselves, the usage is like that of any other yielding matcher:
class Iterator
  def custom_iterator
    (1..10).to_a.shuffle.each { |x| yield x }
  end
end

describe "Matcher" do
  it "works" do
    iter = Iterator.new
    expect { |b| iter.custom_iterator(&b) }.to yield_in_any_order(*(1..10))
  end
end
This can be solved in plain Ruby using a set intersection of arrays:
array1 = [3, 2, 4]
array2 = [4, 3, 2]
expect(array1).to eq(array1 & array2)

# for an enumerator:
enumerator = array1.each
expect(enumerator.to_a).to eq(enumerator.to_a & array2)
The intersection (&) returns the items that are present in both collections, preserving the order of the first array (note that it also removes duplicates).
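Another plain option, not from the answers above: collect the yielded values yourself and compare them with RSpec's built-in match_array matcher, which ignores ordering (reusing the Iterator class from the usage example above):

it "yields 1..10 in some order" do
  yielded = []
  Iterator.new.custom_iterator { |x| yielded << x }
  expect(yielded).to match_array((1..10).to_a)
end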
My goal is to replace methods in the String class with other methods that do additional work (this is for a research project). This works for many methods by writing code in the String class similar to
alias_method :center_OLD, :center

def center(*args)
  r = self.send(*([:center_OLD] + args))
  # do some work here
  # return something
end
For some methods, I need to handle a Proc as well, which is no problem. However, for the scan method, invoking it has the side effect of setting special global variables from the regular expression match. As documented, these variables are local to the thread and the method.
Unfortunately, some Rails code makes calls to scan that make use of the $& variable. That variable gets set inside my version of the scan method, but because it's local, it doesn't make it back to the original caller that uses the variable.
Does anyone know a way to work around this? Please let me know if the problem needs clarification.
If it helps at all, all the uses I've seen so far of the $& variable are inside a Proc passed to the scan function, so I can get the binding for that Proc. However, the user doesn't seem to be able to change $& at all, so I don't know how that will help much.
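To make the frame-local behaviour concrete, here is a small illustration of my own (not part of the code in question):

def match_inside
  "abc" =~ /b/
  $& # => "b" inside this method's frame
end

match_inside # => "b"
$&           # => nil at the call site (assuming no earlier match in this frame); the match data does not propagate up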
Current Code
class String
  alias_method :scan_OLD, :scan

  def scan(*args, &b)
    begin
      sargs = [:scan_OLD] + args
      if b.class == Proc
        r = self.send(*sargs, &b)
      else
        r = self.send(*sargs)
      end
      r
    rescue => error
      puts error.backtrace.join("\n")
    end
  end
end
Of course I'll do more things before returning r, but even this is problematic, so for simplicity we'll stick with this. As a test case, consider:
"hello world".scan(/l./) { |x| puts x }
This works fine both with and without my version of scan. With the "vanilla" String class this produces the same thing as
"hello world".scan(/l./) { puts $&; }
Namely, it prints "ll" and "ld" and returns "hello world". With the modified string class it prints two blank lines (since $& was nil) and then returns "hello world". I'll be happy if we can get that working!
You cannot set $&, because it is derived from $~, the last MatchData.
However, $~ can be set and that actually does what you want.
The trick is to set it in the block binding.
The code is inspired by the old Ruby implementation of Pathname.
(The new code is in C and does not need to care about Ruby frame-local variables.)
class String
  alias_method :scan_OLD, :scan

  def scan(*args, &block)
    sargs = [:scan_OLD] + args
    if block
      self.send(*sargs) do |*bargs|
        Thread.current[:string_scan_matchdata] = $~
        eval("$~ = Thread.current[:string_scan_matchdata]", block.binding)
        yield(*bargs)
      end
    else
      self.send(*sargs)
    end
  end
end
The saving of the thread-local (well, actually fiber-local) variable seems unnecessary since it is only used to pass the value and the thread never reads any other value than the last one set. It probably is there to restore the original value (most likely nil, because the variable did not exist).
One way to avoid thread-locals altogether is to create a setter for $~ as a lambda (though it does create a lambda on each call):
self.send(*sargs) do |*bargs|
  eval("lambda { |m| $~ = m }", block.binding).call($~)
  yield(*bargs)
end
With any of these, your example works!
I wrote some simple code simulating the problem:
"hello world".scan(/l./) { |x| puts x }
"hello world".scan(/l./) { puts $&; }
class String
  alias_method :origin_scan, :scan

  def scan *args, &b
    args.unshift :origin_scan
    @mutex ||= Mutex.new
    begin
      self.send *args do |a|
        break if !block_given?
        @mutex.synchronize do
          p $&
          case b.arity
          when 0
            b.call
          when 1
            b.call a
          end
        end
      end
    rescue => error
      p error, error.backtrace.join("\n")
    end
  end
end
"hello world".scan(/l./) { |x| puts x }
"hello world".scan(/l./) { puts $& }
And I found the following: the value of the variable $& changes across the :call invocation, i.e. just before :call, $& contains a valid value, but inside the block it becomes invalid. I guess this happens because of stack and variable restoration when the frame context changes, since the :call frame probably cannot access :scan's frame-local state.
I see two options: the first is to avoid using the special global variables in these method redefinitions, and the second is to dig more deeply into the Ruby sources.
Why does
a = [].tap do |x|
  x << 1
end
puts "a: #{a}"
work as expected
a: [1]
but
b = [].tap do |x|
  x = [1]
end
puts "b: #{b}"
doesn't
b: []
?
The reason why the second snippet does not change the array is the same as the reason why this snippet:
def foo(x)
  x = [1]
end

a = []
foo(a)
does not change the variable a. The variable x in your code is local to the scope of the block, and because of that you can assign anything to it, but the assignment won't be visible outside (Ruby passes object references by value, so rebinding a parameter or block variable never affects the caller's variable).
Of course, blocks also close over the local variables of the scope where they were declared, so this will work:
def foo(x)
  yield(x)
end

b = []
foo(123) do |x|
  b = [1]
end
p b # outputs [1]
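To make the distinction explicit, here is a small sketch of my own contrasting mutation and reassignment inside tap:

c = [].tap do |x|
  x.concat([1, 2]) # mutates the array object itself, so it is visible outside the block
  x = [3, 4]       # rebinds only the block-local x, so it is invisible outside the block
end
p c # outputs [1, 2]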
The first snippet puts 1 at the end of an empty array; the second merely reassigns the block variable, so the array itself never changes. To replicate the first result while still modifying the array in place, you could do:
b = [].tap do |x|
  x.unshift(1)
end
This is just an example; have a look at the methods you can call on an Array by typing:
Array.instance_methods.sort
All the best and Good luck
This is slightly unrelated -- but that [].tap idiom is horrible. You should not use it. Even many of the people who used it in Rails code now admit it's horrible and no longer use it.
Do not use it.