Creating one parameter for multiple DB columns in ActiveRecord Rails [duplicate] - ruby-on-rails

Ruby setters—whether created by (c)attr_accessor or manually—seem to be the only methods that need self. qualification when accessed within the class itself. This seems to put Ruby alone in the world of languages:
All methods need self/this (like Perl, and I think JavaScript)
No methods require self/this (C#, Java)
Only setters need self/this (Ruby?)
The best comparison is C# vs Ruby, because both languages support accessor methods which work syntactically just like class instance variables: foo.x = y, y = foo.x. C# calls them properties.
Here's a simple example; the same program in Ruby then C#:
class A
  def qwerty; @q; end                  # manual getter
  def qwerty=(value); @q = value; end  # manual setter; attr_accessor generates the same pair
  def asdf; self.qwerty = 4; end       # "self." is necessary in ruby?
  def xxx; asdf; end                   # we can invoke non-setters w/o "self."
  def dump; puts "qwerty = #{qwerty}"; end
end
a = A.new
a.xxx   # calls asdf, which runs self.qwerty = 4
a.dump  # prints "qwerty = 4"
Take away the self. in self.qwerty = 4 and it fails (Ruby 1.8.6 on Linux & OS X): the bare assignment quietly creates a new local variable instead of invoking the setter. Now C#:
using System;

public class A {
    public A() {}
    int q;
    public int qwerty {
        get { return q; }
        set { q = value; }
    }
    public void asdf() { qwerty = 4; } // C# setters work w/o "this."
    public void xxx() { asdf(); }      // and are just like other methods
    public void dump() { Console.WriteLine("qwerty = {0}", qwerty); }
}

public class Test {
    public static void Main() {
        A a = new A();
        a.xxx();
        a.dump();
    }
}
Question: Is this true? Are there other occasions besides setters where self is necessary? I.e., are there other occasions where a Ruby method cannot be invoked without self?
There are certainly lots of cases where self becomes necessary. This is not unique to Ruby, just to be clear:
using System;

public class A {
    public A() {}
    public int test { get { return 4; } }
    public int useVariable() {
        int test = 5;
        return test;
    }
    public int useMethod() {
        int test = 5;
        return this.test;
    }
}

public class Test {
    public static void Main() {
        A a = new A();
        Console.WriteLine("{0}", a.useVariable()); // prints 5
        Console.WriteLine("{0}", a.useMethod());   // prints 4
    }
}
The same ambiguity is resolved in the same way. But, subtle as it is, I'm asking about the case where
a method has been defined, and
no local variable has been defined, and
we encounter
qwerty = 4
which is ambiguous—is this a method invocation or a new local variable assignment?
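To make the two readings concrete, here is a minimal Ruby sketch (class and method names are mine, purely for illustration):

class Widget
  def qwerty=(value)  # a setter exists...
    @q = value
  end

  def update
    qwerty = 4        # ...but this creates a new local variable
    self.qwerty = 4   # only this form invokes the setter
  end
end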
@Mike Stone
Hi! I understand and appreciate the points you've made and your
example was great. Believe me when I say, if I had enough reputation,
I'd vote up your response. Yet we still disagree:
on a matter of semantics, and
on a central point of fact
First I claim, not without irony, we're having a semantic debate about the
meaning of 'ambiguity'.
When it comes to parsing and programming language semantics (the subject
of this question), surely you would admit a broad spectrum of the notion
'ambiguity'. Let's just adopt some random notation:
ambiguous: lexical ambiguity (lex must 'look ahead')
Ambiguous: grammatical ambiguity (yacc must defer to parse-tree analysis)
AMBIGUOUS: ambiguity knowing everything at the moment of execution
(and there's junk between the last two levels too). All these categories are resolved by
gathering more contextual info, looking more and more globally. So when you
say,
"qwerty = 4" is UNAMBIGUOUS in C#
when there is no variable defined...
I couldn't agree more. But by the same token, I'm saying
"qwerty = 4" is un-Ambiguous in ruby
(as it now exists)
"qwerty = 4" is Ambiguous in C#
And we're not yet contradicting each other. Finally, here's where we really
disagree: Either ruby could or could not be implemented without any further
language constructs such that,
For "qwerty = 4," ruby UNAMBIGUOUSLY
invokes an existing setter if there
is no local variable defined
You say no. I say yes; another Ruby could exist which behaves exactly like the current one in every respect, except that "qwerty = 4" defines a new variable when no setter and no local exists, invokes the setter if one exists, and assigns to the local if one exists. I fully accept that I could be wrong. In fact, a reason why I might be wrong would be interesting.
Let me explain.
Imagine you are writing a new OO language with accessor methods looking
like instances vars (like ruby & C#). You'd probably start with
conceptual grammars something like:
var = expr // assignment
method = expr // setter method invocation
But the parser-compiler (not even the runtime) will puke, because even after all the input is grokked there's no way to know which grammar is pertinent. You're faced with a classic choice. I can't be sure of the details, but basically Ruby does this:
var = expr // assignment (new or existing)
// method = expr, disallow setter method invocation without .
That is why it's un-Ambiguous, while C# does this:
symbol = expr // push 'symbol=' onto parse tree and decide later
// if local variable is def'd somewhere in scope: assignment
// else if a setter is def'd in scope: invocation
For C#, 'later' is still at compile time.
I'm sure Ruby could do the same, but 'later' would have to be at runtime, because, as Ben points out, you don't know until the statement is executed which case applies.
My question was never intended to mean "do I really need the 'self.'?" or "what potential ambiguity is being avoided?" Rather, I wanted to know why this particular choice was made. Maybe it's not performance. Maybe it just got the job done, or it was considered best to always allow a one-liner local to override a method (a pretty rare requirement)...
But I'm sort of suggesting that the most dynamic language might be the one which postpones this decision the longest, and chooses semantics based on the most contextual info: so if you have no local and you defined a setter, it would use the setter. Isn't this why we like Ruby, Smalltalk, and Objective-C: because method invocation is decided at runtime, offering maximum expressiveness?

Well, I think the reason this is the case is because qwerty = 4 is ambiguous—are you defining a new variable called qwerty or calling the setter? Ruby resolves this ambiguity by saying it will create a new variable, thus the self. is required.
Here is another case where you need self.:
class A
  def test
    4
  end

  def use_variable
    test = 5
    test
  end

  def use_method
    test = 5
    self.test
  end
end
a = A.new
a.use_variable # returns 5
a.use_method # returns 4
As you can see, the access to test is ambiguous, so the self. is required.
Also, this is why the C# example is actually not a good comparison: in C# you declare variables in a way that is syntactically distinct from using the setter. If you had defined a variable in C# with the same name as the accessor, you would need to qualify calls to the accessor with this., just like in the Ruby case.

The important thing to remember here is that Ruby methods can be (un)defined at any point, so to intelligently resolve the ambiguity, every assignment would need to run code to check whether there is a method with the assigned-to name at the time of assignment.
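In other words, a bare assignment would have to behave something like the following sketch at runtime (hypothetical semantics, not what Ruby actually does):

# Hypothetical expansion of a bare `qwerty = 4` under setter-first semantics:
if respond_to?(:qwerty=)
  send(:qwerty=, 4)  # a setter is currently defined, so invoke it
else
  qwerty = 4         # otherwise create or assign the local variable
end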

Because otherwise it would be impossible to set local variables at all inside of methods. variable = some_value is ambiguous. For example:
class ExampleClass
  attr_reader :last_set

  def method_missing(name, *args)
    if name.to_s =~ /=$/
      @last_set = args.first
    else
      super
    end
  end

  def some_method
    some_variable = 5 # Set a local variable? Or call method_missing?
    puts some_variable
  end
end
If self wasn't required for setters, some_method would raise NameError: undefined local variable or method 'some_variable'. As-is though, the method works as intended:
example = ExampleClass.new
example.blah = 'Some text'
example.last_set #=> "Some text"
example.some_method # prints "5"
example.last_set #=> "Some text"

Related

Way to defensive check value assigned to public const variable in immutable class in C++17?

Coming back to C++ after a hiatus in Java. Attempting to create an immutable object and after working in Java, a public const variable seems the most sensible (like Java final).
public:
const int A;
All well and good, but if I want to defensively check this value, how might I go about it? The code below seems strange to me but, unlike Java final members, I can't seem to set A in the constructor body after defensive checks (compiler error).
MyObj::MyObj(int a) : A(a) {
    if (a < 0)
        throw invalid_argument("must be positive");
}
A public const variable for A seems like a clearer, cleaner solution than a getter only with a non const int behind it, but open to that or other ideas if this is bad practice.
Your example as it stands should work fine:
class MyObj {
public:
    const int var;
    MyObj(int var) : var(var) {
        if (var < 0)
            throw std::invalid_argument("must be positive");
    }
};
If you intend that MyObj will always be immutable, then a const member is
probably fine. If you want the variable to be immutable in general, but still have the possibility to overwrite the entire object with an assignment, then better to have a private variable with a getter:
class MyObj {
    int var;
public:
    MyObj(int var) : var(var) {
        if (var < 0)
            throw std::invalid_argument("must be positive");
    }
    int getVar() const { return var; }
};

// now allows
MyObj a(5);
MyObj b(10);
a = b;
Edit
Apparently, what you want to do is something like
MyObj(int var) {
    if (var < 0)
        throw std::invalid_argument("must be positive");
    this->var = var;
}
This is not possible; once a const variable has a value it cannot be changed. Once the body ({} bit) of the constructor starts, const variables already have a value, though in this case the value is "undefined" since you're not setting it (and the compiler is throwing an error because of it).
Moreover, there's actually no point to this. There is no efficiency difference between setting the variable after the checks or before them, and no external observer will be able to see the difference regardless, since the throw statement will unwind the stack, destroying the half-constructed object straight away.
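If the goal is simply to run the checks before the const member receives its value, one common alternative (my sketch, not part of the answer above) is to validate in a static helper called from the member initializer list:

#include <stdexcept>

class MyObj {
    // Validate first, then hand the result to the const member.
    static int checked(int a) {
        if (a < 0)
            throw std::invalid_argument("must be positive");
        return a;
    }
public:
    const int A;
    explicit MyObj(int a) : A(checked(a)) {}
};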
Generally the answer by N. Shead is the regular practice - but you can also consider:
Create domain-specific types and use them instead of general primitives. E.g., if your field is a telephone number, have a type TelephoneNumber which, in its constructor (or factory), taking a string, does all the telephone number validation you'd like (and throws on invalid). Then you write something like:
class Contact {
    const TelephoneNumber phone_;
public:
    Contact(string phone) : phone_(phone) { ... }
    ...
When you do this the constructor for TelephoneNumber taking a string argument will be called when initializing the field phone_ and the validation will happen.
Using domain-specific types this way is discussed on the web under the name "primitive obsession" as a "code smell".
(The problem with this approach IMO is that you pretty much have to use it everywhere, and from the start of your project; otherwise you start having explicit (or implicit) casting all over the place, your code looks like crap, and you can never be sure whether the value you have has been validated or not. If you're working with an existing codebase it is nearly impossible to retrofit completely, though you might start using it for particularly important or ubiquitous types.)
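For concreteness, a minimal sketch of what such a domain type might look like (the validation rule here is a placeholder assumption):

#include <stdexcept>
#include <string>

class TelephoneNumber {
    std::string value_;
public:
    // Left implicit so that Contact(string) above can initialize phone_
    // directly; making this explicit would be a defensible choice too.
    TelephoneNumber(const std::string& v) : value_(v) {
        // Placeholder check: real validation would verify digits, length, etc.
        if (v.empty())
            throw std::invalid_argument("telephone number must not be empty");
    }
    const std::string& str() const { return value_; }
};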
Create validation methods that take and return some value, and which perform the necessary validation, throwing when invalid and otherwise returning the argument. Here's an example validator:
string ValidatePhoneNumber(string v) {
    // <some kind of validation throwing on invalid...>
    return v;
}
And use it as follows:
class Contact {
    const string phone_;
public:
    Contact(string phone) : phone_(ValidatePhoneNumber(phone)) { ... }
I've seen this used when an application or library is doing so much validation of domain-specific types that a small library of these domain-specific validator methods has been built up and code readers are used to them. I wouldn't really consider it idiomatic, but it does have the advantage that the validation is right out there in the open where you can see it.

F# Val without Self Identifier

Just curious why F# has:
member val Foo = ... with get, set
While omitting the self identifier (e.g. this.).
It is still an instance property. Maybe I am the only one confused when using it, but it bothered me enough to ask whoever knows how the language was defined.
With this syntax, the property is almost totally auto-implemented -- all you provide is the initialization code, which essentially runs as part of the constructor.
One of the best-practice guard rails F# puts in place is that it does not let you access instance members before the instance is fully initialized. (wow, crazy idea, right?).
So you would have no use for a self-identifier in auto-props, anyways, since the only code you get to write is init code that can't touch instance members.
Per the MSDN docs (emphasis mine):
Automatically implemented properties are part of the initialization of
a type, so they must be included before any other member definitions,
just like let bindings and do bindings in a type definition. Note that
the expression that initializes an automatically implemented property
is only evaluated upon initialization, and not every time the property
is accessed. This behavior is in contrast to the behavior of an
explicitly implemented property. What this effectively means is that
the code to initialize these properties is added to the constructor of
a class.
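A quick way to observe the "evaluated only upon initialization" behavior described above (a small sketch; the printfn marks when the initializer runs):

type B() =
    // The initializer runs once, as part of the constructor.
    member val X = (printfn "initializing X"; 10) with get, set

let b = B()  // prints "initializing X" exactly once, here
let v = b.X  // accessing the property afterwards prints nothing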
Btw, if you try to be a smartass and use the class-level self-identifier to get around this, you'll still blow up at runtime:
type A() as this =
    member val X = this.Y + 10 with get, set
    member this.Y = 42

let a = A()
System.InvalidOperationException: The initialization of an object or value resulted in an object or value being accessed recursively before it was fully initialized.
at Microsoft.FSharp.Core.LanguagePrimitives.IntrinsicFunctions.FailInit()
at FSI_0013.A.get_Y()
at FSI_0013.A..ctor()
at <StartupCode$FSI_0014>.$FSI_0014.main#()
Edit: Worth noting that in upcoming C# 6, they also now allow auto-props with initializers (more F# features stolen for C#, shocker :-P), and there is a similar restriction that you can't use the self-identifier:
class A
{
    // error CS0027: Keyword 'this' is not available in the current context
    public int X { get; set; } = this.Y + 10;
    public int Y = 42;
    public A() { }
}

Unable to use protected events in F#

Let's say we have the following C# class
public class Class1
{
    protected event EventHandler ProtectedEvent;
    protected virtual void OverrideMe() { }
}
It seems to be impossible to use the ProtectedEvent in F#.
type HelpMe() as this =
    inherit Class1()
    do
        printfn "%A" this.ProtectedEvent
    member x.HookEvents() =
        printfn "%A" x.ProtectedEvent
    member private x.HookEvents2() =
        printfn "%A" x.ProtectedEvent
    override x.OverrideMe() =
        printfn "%A" x.ProtectedEvent
In this example I have attempted to call printfn on it; there are multiple ways to hook up events in F#, and I wanted to make clear that it is simply referencing the event at all that causes the problem.
In each of the cases above the compiler complains with the following error
A protected member is called or 'base' is being used. This is only
allowed in the direct implementation of members since they could
escape their object scope.
I understand this error, what causes it, and its purpose. Usually the workaround is to wrap the call in a private member, which works fine with methods, but that does not seem to work with events. No matter what I try, it seems to be impossible to use protected events in F# unless I resort to doing something with reflection, or make some changes to the base class (which in my case is not possible).
Note that I have also tried all possible combinations of using base, this and x.
Am I doing something wrong ?
I suspect that there is something about the code that the compiler generates behind the scenes when you treat the event as a first-class value that later confuses it (i.e. some hidden lambda function that makes the compiler think it cannot access the protected member). I'd say that this is a bug.
As far as I can see, you can work around it by using the add_ProtectedEvent and remove_ProtectedEvent members directly (they do not show up in auto-completion, but they are there and are accessible; they are protected, but calling them is a direct method call, which is fine):
type HelpMe() =
    inherit Class1()
    member x.HookEvents() =
        let eh = System.EventHandler(fun _ _ -> printfn "yay")
        x.add_ProtectedEvent(eh)
    override x.OverrideMe() =
        printfn "hello"
This compiled fine for me. It is a shame that you cannot use the protected event as a first-class value, but this at least lets you use it...
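Presumably the same direct-call trick works for unsubscribing through the matching compiler-generated remove_ProtectedEvent accessor; a sketch along those lines (my assumption, not verified in the answer):

type HelpMe2() =
    inherit Class1()
    // Keep a reference to the handler so the same delegate can be removed later.
    let eh = System.EventHandler(fun _ _ -> printfn "yay")
    member x.HookEvents() = x.add_ProtectedEvent(eh)
    member x.UnhookEvents() = x.remove_ProtectedEvent(eh)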

Using mirrors, how can I get a reference to a class's method?

Say I have an instance of a class Foo, and I want to grab a list of all of its methods that are annotated a certain way. I want to have a reference to the method itself, so I'm not looking to use reflection to invoke the method each time, just to grab a reference to it the first time.
In other words, I want to do the reflection equivalent of this:
class Foo {
  a() { print("a"); }
}

void main() {
  var f = new Foo();
  var x = f.a; // Need reflective way of doing this
  x(); // prints "a"
}
I have tried using InstanceMirror#getField, but methods are not considered fields so that didn't work. Any ideas?
As far as I understand reflection in Dart, there's no way to get the actual method as you wish to. (I'll very happily delete this answer if someone comes along and shows how to do that.)
The best I can come up with to ameliorate some of what you probably don't like about using reflection to invoke the method is this:
import 'dart:mirrors';

class Foo {
  a() { print("a"); }
}

void main() {
  var f = new Foo();
  final fMirror = reflect(f);
  final aSym = new Symbol('a');
  final x = () => fMirror.invoke(aSym, []);
  x(); // prints "a"
}
Again, I know that's not quite what you're looking for, but I believe it's as close as you can get.
Side note: getField invokes the getter and returns the result -- it's actually fine if the getter is implemented as a method. It doesn't work for you here, but for a different reason than you thought.
What you're trying to get would be described as the "closurized" version of the method. That is, you want to get the method as a function, where the receiver is implicit in the function invocation. There isn't a way to get that from the mirror. You could get a methodMirror as
reflect(foo).type.methods[const Symbol("a")]
but you can't invoke the result.

Unexpected Validate behavior with Moq

Moq has been driving me a bit crazy on my latest project. I recently upgraded to version 4.0.10827, and I'm noticing what seems to me to be a new behavior.
Basically, when I call my mocked function (MakeCall, in this example) in the code I am testing, I am passing in an object (TestClass). The code I am testing makes changes to the TestClass object before and after the call to MakeCall. Once the code has completed, I then call Moq's Verify function. My expectation is that Moq will have recorded the complete object that I passed into MakeCall, perhaps via a mechanism like deep cloning. This way, I will be able to verify that MakeCall was called with the exact object I am expecting it to be called with. Unfortunately, this is not what I'm seeing.
I attempt to illustrate this in the code below (hopefully, clarifying it a bit in the process).
I first create a new TestClass object. Its Var property is set to "one".
I then create the mocked object, mockedObject, which is my test subject.
I then call the MakeCall method of mockedObject (by the way, the Machine.Specifications framework used in the example allows the code in the When_Testing class to be read from top to bottom).
I then test the mocked object to ensure that it was indeed called with a TestClass with a Var value of "one". This succeeds, as I expected it to.
I then make a change to the original TestClass object by re-assigning the Var property to "two".
I then proceed to attempt to verify if Moq still thinks that MakeCall was called with a TestClass with a value of "one". This fails, although I am expecting it to be true.
Finally, I test to see if Moq thinks MakeCall was in fact called by a TestClass object with a value of "two". This succeeds, although I would initially have expected it to fail.
It seems pretty clear to me that Moq is only holding onto a reference to the original TestClass object, allowing me to change its value with impunity, adversely affecting the results of my testing.
A few notes on the test code. IMyMockedInterface is the interface I am mocking. TestClass is the class I am passing into the MakeCall method and therefore using to demonstrate the issue I am having. Finally, When_Testing is the actual test class that contains the test code. It is using the Machine.Specifications framework, which is why there are a few odd items ('Because of', 'It should...'). These are simply delegates that are called by the framework to execute the tests. They should be easily removed and the contained code placed into a standard function if that is desired. I left it in this format because it allows all Validate calls to complete (as compared to the 'Arrange, Act Assert' paradigm). Just to clarify, the below code is not the actual code I am having problems with. It is simply intended to illustrate the problem, as I have seen this same behavior in multiple places.
using Machine.Specifications;
// Moq has a conflict with MSpec as they both have an 'It' object.
using moq = Moq;

public interface IMyMockedInterface
{
    int MakeCall(TestClass obj);
}

public class TestClass
{
    public string Var { get; set; }

    // Must override Equals so Moq treats two objects with the
    // same value as equal (instead of comparing references).
    public override bool Equals(object obj)
    {
        if ((obj == null) || (obj.GetType() != this.GetType()))
            return false;
        TestClass t = obj as TestClass;
        if (t.Var != this.Var)
            return false;
        return true;
    }

    public override int GetHashCode()
    {
        int hash = 41;
        int factor = 23;
        hash = (hash ^ factor) * Var.GetHashCode();
        return hash;
    }

    public override string ToString()
    {
        return MvcTemplateApp.Utilities.ClassEnhancementUtilities.ObjectToString(this);
    }
}

[Subject(typeof(object))]
public class When_Testing
{
    // TestClass is set up to contain a value of 'one'
    protected static TestClass t = new TestClass() { Var = "one" };
    protected static moq.Mock<IMyMockedInterface> mockedObject = new moq.Mock<IMyMockedInterface>();

    Because of = () =>
    {
        mockedObject.Object.MakeCall(t);
    };

    // Test One
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: Moq does verify that MakeCall was called with a TestClass with a value of 'one'.
    // Result: This is correct.
    It should_verify_that_make_call_was_called_with_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Update the original object to contain a new value.
    It should_update_the_test_class_value_to_two = () =>
        t.Var = "two";

    // Test Two
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: The Verify call fails, claiming that MakeCall was never called with a TestClass instance with a value of 'one'.
    // Result: This is incorrect.
    It should_verify_that_make_call_was_called_with_a_class_containing_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Test Three
    // Expected: Moq should fail to verify that MakeCall was called with a TestClass with a value of 'two'.
    // Actual: Moq actually does verify that MakeCall was called with a TestClass with a value of 'two'.
    // Result: This is incorrect.
    It should_fail_to_verify_that_make_call_was_called_with_a_class_containing_a_value_of_two = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "two" }), moq.Times.Once());
}
I have a few questions regarding this:
Is this expected behavior?
Is this new behavior?
Is there a workaround that I am unaware of?
Am I using Verify incorrectly?
Is there a better way of using Moq to avoid this situation?
I thank you humbly for any assistance you can provide.
Edit:
Here is one of the actual tests and SUT code that I experienced this problem with. Hopefully it will act as clarification.
// This is the MVC Controller Action that I am testing. Note that it
// makes changes to the 'searchProjects' object before and after
// calling 'repository.SearchProjects'.
[HttpGet]
public ActionResult List(int? page,
    [Bind(Include = "Page, SearchType, SearchText, BeginDate, EndDate")]
    SearchProjects searchProjects)
{
    int itemCount;
    searchProjects.ItemsPerPage = profile.ItemsPerPage;
    searchProjects.Projects = repository.SearchProjects(searchProjects,
        profile.UserKey, out itemCount);
    searchProjects.TotalItems = itemCount;
    return View(searchProjects);
}
// This is my test class for the controller's List action. The controller
// is instantiated in an Establish delegate in the 'with_project_controller'
// class, along with the SearchProjectsRequest, SearchProjectsRepositoryGet,
// and SearchProjectsResultGet objects which are defined below.
[Subject(typeof(ProjectController))]
public class When_the_project_list_method_is_called_via_a_get_request
    : with_project_controller
{
    protected static int itemCount;
    protected static ViewResult result;

    Because of = () =>
        result = controller.List(s.Page, s.SearchProjectsRequest) as ViewResult;

    // This test fails, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate and ItemsPerPage
    It should_call_the_search_projects_repository_method = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsRepositoryGet,
            s.UserKey, out itemCount), moq.Times.Once());

    // This test succeeds, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate, ItemsPerPage,
    // Projects and TotalItems
    It should_call_the_search_projects_repository_method_with_the_result_object = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsResultGet,
            s.UserKey, out itemCount), moq.Times.Once());

    It should_return_the_correct_view_name = () =>
        result.ViewName.ShouldBeEmpty();

    It should_return_the_correct_view_model = () =>
        result.Model.ShouldEqual(s.SearchProjectsResultGet);
}
/////////////////////////////////////////////////////
// Here are the values of the three test objects
/////////////////////////////////////////////////////

// This is the object that is returned by the client.
SearchProjects SearchProjectsRequest = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page
};

// This is the object I am expecting the repository method to be called with.
SearchProjects SearchProjectsRepositoryGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage
};

// This is the complete object I expect to be returned to the view.
SearchProjects SearchProjectsResultGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage,
    Projects = new List<Project>() { GetProjectRequest() },
    TotalItems = TotalItems
};
Ultimately, your question is whether a mocking framework should take snapshots of the parameters you use when interacting with the mocks so that it can accurately record the state the system was in at the point of interaction rather than the state the parameters might be in at the point of verification.
I would say this is a reasonable expectation from a logical point of view. You are performing action X with value Y. If you ask the mock "Did I perform action X with value Y", you expect it to say "Yes" regardless of the current state of the system.
To summarize the problem you are running into:
You first invoke a method on a mock object with a reference type parameter.
Moq saves information about the invocation along with the reference type parameter passed in.
You then ask Moq if the method was called one time with an object equal to the reference you passed in.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and answers yes.
You then modify the object that you passed as the parameter to the method call on the mock.
The memory space of the reference Moq is holding in its history changes to the new value.
You then ask Moq if the method was called one time with an object that isn't equal to the reference it's holding.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and reports no.
To attempt to answer your specific questions:
Is this expected behavior?
I would say no.
Is this new behavior?
I don't know, but it's doubtful the project would have at one time had behavior that facilitated this and was later modified to only allow the simple scenario of only verifying a single usage per mock.
Is there a workaround that I am unaware of?
I'll answer this two ways.
From a technical standpoint, a workaround would be to use a Test Spy rather than a Mock. By using a Test Spy, you can record the values passed and use your own strategy for remembering the state, such as doing a deep clone, serializing the object, or just storing the specific values you care about to be compared against later.
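For example, with Moq itself you can approximate a Test Spy by capturing the state you care about at the moment of the call, via Setup and Callback. This is a sketch of mine, not from the answer; it assumes the moq alias from the question's code and System.Collections.Generic for List:

var captured = new List<string>();
var mockedObject = new moq.Mock<IMyMockedInterface>();

// Record Var at the moment of each call, so later mutations of the
// original TestClass instance cannot rewrite the recorded history.
mockedObject
    .Setup(m => m.MakeCall(moq.It.IsAny<TestClass>()))
    .Callback<TestClass>(tc => captured.Add(tc.Var))
    .Returns(0);

mockedObject.Object.MakeCall(new TestClass() { Var = "one" });
// captured[0] == "one", regardless of what happens to the object afterwards.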
From a testing standpoint, I would recommend that you follow the principle "Use The Front Door First". I believe there is a time for state-based testing as well as interaction-based testing, but you should try to avoid coupling yourself to the implementation details unless the interaction is an important part of the scenario. In some cases, the scenario you are interested in will be primarily about interaction ("Transfer funds between accounts"), but in other cases all you really care about is getting the correct result ("Withdraw $10"). In the case of the specification for your controller, this seems to fall into the query category, not the command category. You don't really care how it gets the results you want as long as they are correct. Therefore, I would recommend using state-based testing in this case. If another specification concerns issuing a command against the system, there may still end up being a front door solution which you should consider using first, but it may be necessary or important to do interaction based testing. Just my thoughts though.
Am I using Verify incorrectly?
You are using the Verify() method correctly, it just doesn't support the scenario you are using it for.
Is there a better way of using Moq to avoid this situation?
I don't think Moq is currently implemented to handle this scenario.
Hope this helps,
Derek Greer
http://derekgreer.lostechies.com
http://aspiringcraftsman.com
@derekgreer
First, you can avoid the conflict between Moq and MSpec by declaring
using Machine.Specifications;
using Moq;
using It = Machine.Specifications.It;
Then you'll only need to prefix with Moq. when you want to use Moq's It, for example Moq.It.IsAny<>().
Onto your question.
Note: This is not the original answer but an edited one after the OP added some real example code to the question
I've been trying out your sample code and I think it's got more to do with MSpec than Moq. Apparently (and I didn't know this either), when you modify the state of your SUT (System Under Test) inside an It delegate, the changes get remembered. What is happening now is:
The Because delegate is run.
The It delegates are run, one after the other. If one changes the state, the following It will never see the setup from the Because. Hence your failed test.
I've tried marking your spec with the SetupForEachSpecificationAttribute:
[Subject(typeof(object)), SetupForEachSpecification]
public class When_Testing
{
    // Something, Something, something...
}
The attribute does as its name says: it will run your Establish and Because before every It. Adding the attribute made the spec behave as expected: three successes, one failure (the verification with Var = "two").
Would the SetupForEachSpecificationAttribute solve your problem or is resetting after every It not acceptable for your tests?
FYI: I'm using Moq v4.0.10827.0 and MSpec v0.4.9.0
Free tip #2: If you're testing ASP.NET MVC apps with Mspec you might want to take a look at James Broome's MSpec extensions for MVC
