I'm having a weird issue with the configureMetadataStore.
My model:
public class SourceMaterial {
    public List<Job> Jobs { get; set; }
}
public class Job {
    public SourceMaterial SourceMaterial { get; set; }
}
public class JobEditing : Job { }
public class JobTranslation : Job { }
Module for configuring Job entities:
angular.module('cdt.request.model').factory('jobModel', ['breeze', 'dataService', 'entityService', modelFunc]);

function modelFunc(breeze, dataService, entityService) {
    function Ctor() {
    }
    Ctor.extend = function (modelCtor) {
        modelCtor.prototype = new Ctor();
        modelCtor.prototype.constructor = modelCtor;
    };
    Ctor.prototype._configureMetadataStore = _configureMetadataStore;
    return Ctor;

    // constructor
    function jobCtor() {
        this.isScreenDeleted = null;
    }

    function _configureMetadataStore(entityName, metadataStore) {
        metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
    }

    function jobInitializer(job) { /* do stuff here */ }
}
Module for configuring JobEditing entities:
angular.module('cdt.request.model').factory('jobEditingModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {
    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }
    jobModel.extend(Ctor);
    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobEditing', metadataStore);
    }
}
Module for configuring JobTranslation entities:
angular.module('cdt.request.model').factory('jobTranslationModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {
    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }
    jobModel.extend(Ctor);
    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobTranslation', metadataStore);
    }
}
Then models are configured like this:
JobEditingModel.configureMetadataStore(dataService.manager.metadataStore);
JobTranslationModel.configureMetadataStore(dataService.manager.metadataStore);
Now when I call createEntity for a JobEditing, the instance is created and at some point breeze calls setNpValue and adds the newly created Job to the navigation property (np) SourceMaterial.
That's all fine, except that it is added twice!
It happens when rawAccessorFn(newValue); is called. In fact it is called twice.
And if I add a new type of job (hence I register a new type with the metadataStore), then the new Job is added three times to the navigation property.
I can't see what I'm doing wrong. Can anyone help?
EDIT
I've noticed that if I change:
metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
to
metadataStore.registerEntityTypeCtor(entityName, null, jobInitializer);
Then everything works fine again! So the problem is registering the same jobCtor function. Should that not be possible?
Our Bad
Let's start with a Breeze bug, recently discovered, in the Breeze "backingStore" model library adapter.
There's a part of that adapter that is responsible for rewriting the data properties of the entity constructor so that they become observable and self-validating; it kicks in when you register a type with registerEntityTypeCtor.
It tries to keep track of which properties it has already rewritten. The bug is that it records the fact of the rewrite on the EntityType rather than on the constructor function. Consequently, every time you registered a new type, it failed to realize that it had already rewritten the properties of the base Job type, and re-wrapped them again.
This was happening to you. Every derived type that you registered re-wrapped/re-wrote the properties of the base type (and of its base type, etc).
In your example, a base class Job property would be re-written 3 times and its inner logic executed 3 times if you registered three of its sub-types. And the problem disappeared when you stopped registering constructors of sub-types.
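Here's a minimal sketch of the bookkeeping difference (illustrative only, not actual Breeze source; the wrapProperties stub stands in for the property-rewriting step):

function wrapProperties(ctor) { /* rewrite data properties as observable/self-validating */ }

// Buggy bookkeeping: the "already rewritten" flag lives on the EntityType,
// so every newly registered type re-wraps the shared base-class constructor.
function registerBuggy(entityType, ctor) {
    if (entityType._propsRewritten) return;
    entityType._propsRewritten = true;    // flag is per-type, not per-ctor
    wrapProperties(ctor);                 // re-wraps Job's properties every time
}

// Fixed bookkeeping: record the rewrite on the constructor function itself,
// so a constructor shared by several types is only ever wrapped once.
function registerFixed(entityType, ctor) {
    if (ctor._propsRewritten) return;
    ctor._propsRewritten = true;          // flag travels with the ctor
    wrapProperties(ctor);
}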
We're working on a revised Breeze "backingStore" model library adapter that won't have this problem and, coincidentally, will behave better in test scenarios (that's how we found the bug in the first place).
Your Bad?
Wow that's some hairy code you've got there. Why so complicated? In particular, why are you adding a one-time MetadataStore configuration to the prototypes of entity constructor functions?
I must be missing something. The code to register types is usually much smaller and simpler. I get that you want to put each type in its own file and have it self-register. The cost of that (as you've written it) is enormous bulk and complexity. Please reconsider your approach. Take a look at other Breeze samples, Zza-Node-Mongo for example.
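For contrast, here is a minimal sketch of the usual registration pattern, assuming jobInitializer is in scope; each constructor can still live in its own file, and no one-time configuration needs to be threaded through prototypes:

function Job() { this.isScreenDeleted = null; }
function JobEditing() { }
JobEditing.prototype = new Job();
function JobTranslation() { }
JobTranslation.prototype = new Job();

// one-time registration at startup
var metadataStore = dataService.manager.metadataStore;
metadataStore.registerEntityTypeCtor('Job', Job, jobInitializer);
metadataStore.registerEntityTypeCtor('JobEditing', JobEditing);
metadataStore.registerEntityTypeCtor('JobTranslation', JobTranslation);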
Thanks for reporting the issue. Hang in there with us. A fix should be arriving soon ... I hope in the next release.
I've encountered a dependency injection scenario which I cannot find a way through.
We currently have an Azure function.
We are using dependency injection via the FunctionsStartup attribute.
That all works fine, until I get asked to make it work for multiple environments.
The tester found it too onerous to deploy to 7 different environments, so I was asked to re-jig the function so that it runs (in a loop) across those environments.
That means 7 different IConfigurations and, somehow, 7 separate compartmentalised IoC registrations of services.
I can't think of a way of doing that without significantly restructuring the way abstractions are resolved. Even if you set up registrations in a loop and inject an IEnumerable of a service, when the container goes to resolve a child dependency it just pulls the last one registered, rather than the one meant to correlate with the current item being iterated.
So, something like this (using Autofac):
Registration
foreach (var configuration in configurations)
{
    containerBuilder.Register<ICosmosDbService<AccountUsage>>(sp =>
    {
        var dbConfig = CosmosDBHelper.GetProjectDatabaseConfig(configuration.Value, Project.Jupiter);
        return CosmosClientInitializer<AccountUsage>.Initialize(dbConfig);
    }).As<ICosmosDbService<AccountUsage>>();
}
Usage
private readonly IEnumerable<IAccountUsageService> _accountUsageService;

public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
{
    _accountUsageService = accountUsageService;
}

[FunctionName("JobScheduler")]
public async Task Run([TimerTrigger("0 */2 * * * *")] TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"Job Scheduler Timer trigger function executed at: {DateTime.Now}");
    try
    {
        foreach (var usageService in _accountUsageService)
        {
            var logs = await usageService.GetCurrentAccountUsage("gfkjdsasjfa");
            // ...
        }
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Job scheduler failed");
    }
}
I realise this kind of DI usage is not ideal (and does not even work).
Is there a way to structure an Azure Function such that it can execute for different configurations in a compartmentalised manner? Or is this really just fighting against the technology?
You've got a couple of ways to do this: either inject the right dependencies into the function constructor, or resolve them dynamically using a service-locator type approach with a named instance.
Let's consider the second approach and what it would mean for your implementation. As you demonstrated, you'd be looping through your instances, resolving the dependency you want to use, then invoking it:
foreach (var usageService in _accountUsageService)
{
    var logs = await usageService.GetCurrentAccountUsage("named-instance");
    logs.DoSomething();
}
This is technically possible, but now you're doing batch processing: you're doing more than one piece of work triggered by a single event (the timer), which means you have to deal with a couple of extra problems. What should you do if there's a failure with one of the instances, and what if one of the instances is running slowly?
Ideally, you want functions to do the smallest bit of work they can and complete quickly - you don't want failure or slowness with one particular instance impacting the other instances. By breaking it down to the smallest piece of work (think: one event trigger does one piece of work), you can take advantage of the functions runtime for things like retries on failures, and threading and concurrency are now handled for you by the runtime.
There are then a couple of ways you could do this: a) multiple function signatures and a service-resolver approach, e.g.
public class JobScheduler
{
    private readonly IEnumerable<IAccountUsageService> _accountUsageService;

    public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
    {
        _accountUsageService = accountUsageService;
    }

    [FunctionName("FirstInstance")]
    public async Task FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        // GetNamedInstance is a stand-in for whatever named-resolution strategy you choose
        var logs = await _accountUsageService.GetNamedInstance("instance-a");
        logs.DoSomething();
    }

    [FunctionName("SecondInstance")]
    public async Task SecondInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        var logs = await _accountUsageService.GetNamedInstance("instance-b");
        logs.DoSomething();
    }
}
or b), multiple classes with the necessary dependencies injected
public class JobSchedulerFirstInstance
{
    private readonly ILogs _logs;

    public JobSchedulerFirstInstance(ILogs logs)
    {
        _logs = logs;
    }

    [FunctionName("FirstInstance")]
    public Task FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        _logs.DoSomething();
        return Task.CompletedTask;
    }
}
I'd personally lean towards the multiple-classes approach and register named instances with my container. A bit of extra wire-up work is needed, but you'll end up with lots of small, similar-looking classes that are basically just plumbing for the functions runtime to execute.
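For what it's worth, a sketch of that wiring with Autofac named registrations (the instance names and the shape of the configurations collection are assumptions based on your example):

// Register one named ICosmosDbService<AccountUsage> per environment.
foreach (var configuration in configurations)
{
    var name = configuration.Key; // e.g. "instance-a"
    containerBuilder.Register(ctx =>
    {
        var dbConfig = CosmosDBHelper.GetProjectDatabaseConfig(configuration.Value, Project.Jupiter);
        return CosmosClientInitializer<AccountUsage>.Initialize(dbConfig);
    }).Named<ICosmosDbService<AccountUsage>>(name);
}

// Each small function class then receives exactly the instance it needs.
containerBuilder.RegisterType<JobSchedulerFirstInstance>()
    .WithParameter(
        (pi, ctx) => pi.ParameterType == typeof(ICosmosDbService<AccountUsage>),
        (pi, ctx) => ctx.ResolveNamed<ICosmosDbService<AccountUsage>>("instance-a"));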
Whenever I need to pass data down the reactive chain I end up doing something like this:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto)
            .flatMap(dtoMono -> {
                Mono<String> result = // remote call returning a Mono
                return Mono.zip(Mono.just(dtoMono), result);
            })
            .flatMap(tup2 -> {
                // do something that requires foo and result and returns a Mono
                return doSomething(tup2.getT1().getFoo(), tup2.getT2());
            });
}
Given the below sample Dto class:
class Dto {
    private String foo;

    public String getFoo() {
        return this.foo;
    }
}
Because it often gets tedious to zip the data all the time to pass it down the chain (especially a few levels down) I was wondering if it's ok to simply reference the dto directly like so:
public Mono<String> doFooAndReferenceParam(Dto dto) {
    Mono<String> result = // remote call returning a Mono
    return result.flatMap(r -> {
        // do something that requires foo and result and returns a Mono
        return doSomething(dto.getFoo(), r);
    });
}
My concern about the second approach is this: assuming a subscriber subscribes to this Mono on a thread pool, would I need to guarantee that Dto is thread-safe? (The example above is simple because it just carries a String, but what if it doesn't?)
Also, which one is considered "best practice"?
Based on what you have shared, you can simply do the following:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto.getFoo());
}
The way you are using zip in the first option doesn't serve any purpose. Similarly, the 2nd option will not work either: once the mono is empty, the next flatMap will not be triggered.
The case is simple if:
The reference data is available from the beginning (i.e. before the creation of the chain), and
The chain is created for processing at most one event (i.e. starts with a Mono), and
The reference data is immutable.
Then you can simply refer to the reference data in a parameter or local variable – just like in your second solution. This is completely okay, and there are no concurrency issues.
Using mutable data in reactive flows is strongly discouraged. If you had a mutable Dto class, you might still be able to use it (assuming proper synchronization) – but this will be very surprising to readers of your code.
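To make the immutability condition concrete, here is a minimal sketch (remoteCall and doSomething stand in for your calls): with a final field the Dto is safely published, so referencing it inside the chain is fine regardless of which thread the subscriber runs on.

final class Dto {
    private final String foo; // final field: safely published, no synchronization needed

    Dto(String foo) { this.foo = foo; }

    public String getFoo() { return foo; }
}

public Mono<String> doFoo(Dto dto) {
    Mono<String> result = remoteCall(); // remote call returning a Mono<String>
    return result.flatMap(r -> doSomething(dto.getFoo(), r));
}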
We need to be able to rollback a complex transaction in a service, without throwing an exception to the caller. My understanding is that the only way to achieve this is to use withTransaction.
The question is:
why do I have to call this on a domain object, such as Books.withTransaction
What if there is no relevant domain object, what is the consequence of picking a random one?
Below is more or less what I am trying to do. The use case is for withdrawing from an account and putting it onto a credit card. If the transfer fails, we want to rollback the transaction, but not the payment record log, which must be committed in a separate transaction (using RequiresNew). In any case, the service method must return a complex object, not an exception.
someService.groovy
class SomeService {
    @NotTransactional
    SomeComplexObject someMethod() {
        SomeDomainObject.withTransaction { status ->
            DomainObject ob1 = new DomainObject().save()
            LogDomainObject ob2 = insertAndCommitLogInNewTransaction()
            SomeComplexObject ob3 = someAction()
            if (!ob3.worked) {
                status.setRollbackOnly() // only rollback ob1, not ob2!
            }
            return ob3
        }
    }
}
The above is flawed - I assume "return ob3" won't return ob3 from the method, as it's in a closure. Not sure how to communicate from inside a closure to outside it.
To your primary question: you can pick a random domain object if you want, it won't do any harm. Or, if you prefer, you can find the current session and open a transaction on that instead:
grailsApplication.sessionFactory.currentSession.withTransaction { /* Do the things */ }
Stylistically I don't have a preference here. Others might.
Not sure how to communicate from inside a closure to outside it.
In general this could be hard; withTransaction could in principle return anything it wants, no matter what its closure argument returns. But it turns out that withTransaction returns the value returned by its closure. Here, watch:
groovy> println(MyDomainObject.withTransaction { 2 + 2 })
4
By convention, all withFoo methods which take a closure should work this way, precisely so that you can do the thing you're trying to do.
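For illustration, a sketch of how such a withFoo method is typically written (setUpFoo and tearDownFoo are hypothetical):

def withFoo(Closure body) {
    def foo = setUpFoo()
    try {
        return body(foo)   // hand the closure's return value back to the caller
    } finally {
        tearDownFoo(foo)
    }
}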
I'm assuming this question was from a grails 2 application and this problem from 2015 has been fixed before now.
I can't find this in any of the Grails 2 documentation, but services have a magic transactionStatus variable injected into their methods (at least in Grails 2.3.11).
You can just leave all the annotations off and use that injected variable.
class SomeService {
    SomeComplexObject someMethod() {
        DomainObject ob1 = new DomainObject().save()
        LogDomainObject ob2 = insertAndCommitLogInNewTransaction()
        SomeComplexObject ob3 = someAction()
        if (!ob3.worked) {
            transactionStatus.setRollbackOnly() // transactionStatus is magically injected
        }
        return ob3
    }
}
This feature is in Grails 2, but not documented. It is documented in Grails 3.
https://docs.grails.org/latest/guide/services.html#declarativeTransactions
Search for transactionStatus.
I was able to follow the instructions on adding data; that part was easy and understandable. But when I tried to follow the instructions for editing data, I was completely lost.
I am following the todo sample, which works quite well, but when I tried to apply the same principle to my own project, nothing worked.
In my controller, I have the following:
function listenForPropertyChanged() {
    // Listen for property change of ANY entity so we can (optionally) save
    var token = dataservice.addPropertyChangeHandler(propertyChanged);

    // Arrange to remove the handler when the controller is destroyed
    // which won't happen in this app but would in a multi-page app
    $scope.$on("$destroy", function () {
        dataservice.removePropertyChangeHandler(token);
    });

    function propertyChanged(changeArgs) {
        // propertyChanged triggers a save attempt UNLESS the property is the 'Id',
        // because THEN the change is actually the post-save Id-fixup
        // rather than user data entry, so there is actually nothing to save.
        if (changeArgs.args.propertyName !== 'Id') { save(); }
    }
}
The problem is that any time I change a control on the view, the propertyChanged callback function never gets called.
Here's the code from the service:
function addPropertyChangeHandler(handler) {
    // Actually adds any 'entityChanged' event handler;
    // call handler when an entity property of any entity changes
    return manager.entityChanged.subscribe(function (changeArgs) {
        var action = changeArgs.entityAction;
        if (action === breeze.EntityAction.PropertyChange) {
            handler(changeArgs);
        }
    });
}
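For completeness, the matching remove function is just a thin wrapper over unsubscribe (a sketch, assuming the token is the key returned by the subscribe call above):

function removePropertyChangeHandler(token) {
    // 'token' is the subscription key returned by manager.entityChanged.subscribe
    return manager.entityChanged.unsubscribe(token);
}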
If I put a break point on the line:
var action = changeArgs.entityAction;
In my project it never reaches there; in the todo sample, it does! Execution completely skips the whole thing and just loads the view afterwards. So none of my callback functions ever run; really, nothing is subscribed.
Because of this, when I try to save changes, the manager.hasChanges() is always false and nothing happens in the database.
I've been trying for at least 3 days to get this to work, and I'm completely dumbfounded by how complicated this whole issue has been for me.
Note: I'm using John Papa's HotTowel template. I tried to follow the Todo editing functionality to a tee... and nothing is working the way I'd like it to.
Help would be appreciated.
The whole time I thought the problem was on the JavaScript client side of things. It turned out that editing doesn't work when you create projected DTOs.
So in my server side, I created a query:
public IQueryable<PersonDTO> getPerson() {
    return (from _person in ContextProvider.Context.Queries
            select new PersonDTO
            {
                Id = _person.Id,
                FirstName = _person.FirstName,
                LastName = _person.LastName
            }).AsQueryable();
}
This just projected a DTO to send off to the client, and it did work for fetching data and populating things, so it is NOT wrong. Using it, I was able to add and fetch items, but there was no information that let the entity manager know about the items. When creating an item, the entity manager's createEntity let me tell it which entity type to use.. in my case:
manager.createEntity(person, initializeValues);
Maybe if there were a "manager.getEntity", that would help?
Anyways, I changed the above query to get it straight from the source:
public IQueryable<Person> getPeople() {
    return ContextProvider.Context.People;
}
Note ContextProvider is:
readonly EFContextProvider<PeopleEntities> ContextProvider =
    new EFContextProvider<PeopleEntities>();
So the subscribe method in the JavaScript only sees entities retrieved straight from the context object.. interesting. I just wish I hadn't spent 4 days on this.
Moq has been driving me a bit crazy on my latest project. I recently upgraded to version 4.0.10827, and I'm noticing what seems to me to be a new behavior.
Basically, when I call my mocked function (MakeCall, in this example) in the code I am testing, I am passing in an object (TestClass). The code I am testing makes changes to the TestClass object before and after the call to MakeCall. Once the code has completed, I then call Moq's Verify function. My expectation is that Moq will have recorded the complete object that I passed into MakeCall, perhaps via a mechanism like deep cloning. This way, I will be able to verify that MakeCall was called with the exact object I am expecting it to be called with. Unfortunately, this is not what I'm seeing.
I attempt to illustrate this in the code below (hopefully, clarifying it a bit in the process).
I first create a new TestClass object. Its Var property is set to "one".
I then create the mocked object, mockedObject, which is my test subject.
I then call the MakeCall method of mockedObject (by the way, the Machine.Specifications framework used in the example allows the code in the When_Testing class to be read from top to bottom).
I then test the mocked object to ensure that it was indeed called with a TestClass with a Var value of "one". This succeeds, as I expected it to.
I then make a change to the original TestClass object by re-assigning the Var property to "two".
I then proceed to attempt to verify if Moq still thinks that MakeCall was called with a TestClass with a value of "one". This fails, although I am expecting it to be true.
Finally, I test to see if Moq thinks MakeCall was in fact called by a TestClass object with a value of "two". This succeeds, although I would initially have expected it to fail.
It seems pretty clear to me that Moq is only holding onto a reference to the original TestClass object, allowing me to change its value with impunity, adversely affecting the results of my testing.
A few notes on the test code. IMyMockedInterface is the interface I am mocking. TestClass is the class I am passing into the MakeCall method and therefore using to demonstrate the issue I am having. Finally, When_Testing is the actual test class that contains the test code. It is using the Machine.Specifications framework, which is why there are a few odd items ('Because of', 'It should...'). These are simply delegates that are called by the framework to execute the tests. They should be easily removed and the contained code placed into a standard function if that is desired. I left it in this format because it allows all Validate calls to complete (as compared to the 'Arrange, Act Assert' paradigm). Just to clarify, the below code is not the actual code I am having problems with. It is simply intended to illustrate the problem, as I have seen this same behavior in multiple places.
using Machine.Specifications;
// Moq has a conflict with MSpec as they both have an 'It' object.
using moq = Moq;
public interface IMyMockedInterface
{
    int MakeCall(TestClass obj);
}

public class TestClass
{
    public string Var { get; set; }

    // Must override Equals so Moq treats two objects with the
    // same value as equal (instead of comparing references).
    public override bool Equals(object obj)
    {
        if ((obj == null) || (obj.GetType() != this.GetType()))
            return false;
        TestClass t = obj as TestClass;
        if (t.Var != this.Var)
            return false;
        return true;
    }

    public override int GetHashCode()
    {
        int hash = 41;
        int factor = 23;
        hash = (hash ^ factor) * Var.GetHashCode();
        return hash;
    }

    public override string ToString()
    {
        return MvcTemplateApp.Utilities.ClassEnhancementUtilities.ObjectToString(this);
    }
}
[Subject(typeof(object))]
public class When_Testing
{
    // TestClass is set up to contain a value of 'one'
    protected static TestClass t = new TestClass() { Var = "one" };
    protected static moq.Mock<IMyMockedInterface> mockedObject = new moq.Mock<IMyMockedInterface>();

    Because of = () =>
    {
        mockedObject.Object.MakeCall(t);
    };

    // Test One
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: Moq does verify that MakeCall was called with a TestClass with a value of 'one'.
    // Result: This is correct.
    It should_verify_that_make_call_was_called_with_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Update the original object to contain a new value.
    It should_update_the_test_class_value_to_two = () =>
        t.Var = "two";

    // Test Two
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: The Verify call fails, claiming that MakeCall was never called with a TestClass instance with a value of 'one'.
    // Result: This is incorrect.
    It should_verify_that_make_call_was_called_with_a_class_containing_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Test Three
    // Expected: Moq should fail to verify that MakeCall was called with a TestClass with a value of 'two'.
    // Actual: Moq actually does verify that MakeCall was called with a TestClass with a value of 'two'.
    // Result: This is incorrect.
    It should_fail_to_verify_that_make_call_was_called_with_a_class_containing_a_value_of_two = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "two" }), moq.Times.Once());
}
I have a few questions regarding this:
Is this expected behavior?
Is this new behavior?
Is there a workaround that I am unaware of?
Am I using Verify incorrectly?
Is there a better way of using Moq to avoid this situation?
I thank you humbly for any assistance you can provide.
Edit:
Here is one of the actual tests and SUT code that I experienced this problem with. Hopefully it will act as clarification.
// This is the MVC Controller Action that I am testing. Note that it
// makes changes to the 'searchProjects' object before and after
// calling 'repository.SearchProjects'.
[HttpGet]
public ActionResult List(int? page, [Bind(Include = "Page, SearchType, SearchText, BeginDate, EndDate")]
    SearchProjects searchProjects)
{
    int itemCount;
    searchProjects.ItemsPerPage = profile.ItemsPerPage;
    searchProjects.Projects = repository.SearchProjects(searchProjects,
        profile.UserKey, out itemCount);
    searchProjects.TotalItems = itemCount;
    return View(searchProjects);
}
// This is my test class for the controller's List action. The controller
// is instantiated in an Establish delegate in the 'with_project_controller'
// class, along with the SearchProjectsRequest, SearchProjectsRepositoryGet,
// and SearchProjectsResultGet objects which are defined below.
[Subject(typeof(ProjectController))]
public class When_the_project_list_method_is_called_via_a_get_request
    : with_project_controller
{
    protected static int itemCount;
    protected static ViewResult result;

    Because of = () =>
        result = controller.List(s.Page, s.SearchProjectsRequest) as ViewResult;

    // This test fails, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate and ItemsPerPage
    It should_call_the_search_projects_repository_method_with_the_request_object = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsRepositoryGet,
            s.UserKey, out itemCount), moq.Times.Once());

    // This test succeeds, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate, ItemsPerPage,
    // Projects and TotalItems
    It should_call_the_search_projects_repository_method_with_the_result_object = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsResultGet,
            s.UserKey, out itemCount), moq.Times.Once());

    It should_return_the_correct_view_name = () =>
        result.ViewName.ShouldBeEmpty();

    It should_return_the_correct_view_model = () =>
        result.Model.ShouldEqual(s.SearchProjectsResultGet);
}
/////////////////////////////////////////////////////
// Here are the values of the three test objects
/////////////////////////////////////////////////////
// This is the object that is returned by the client.
SearchProjects SearchProjectsRequest = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page
};

// This is the object I am expecting the repository method to be called with.
SearchProjects SearchProjectsRepositoryGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage
};

// This is the complete object I expect to be returned to the view.
SearchProjects SearchProjectsResultGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage,
    Projects = new List<Project>() { GetProjectRequest() },
    TotalItems = TotalItems
};
Ultimately, your question is whether a mocking framework should take snapshots of the parameters you use when interacting with the mocks so that it can accurately record the state the system was in at the point of interaction rather than the state the parameters might be in at the point of verification.
I would say this is a reasonable expectation from a logical point of view. You are performing action X with value Y. If you ask the mock "Did I perform action X with value Y", you expect it to say "Yes" regardless of the current state of the system.
To summarize the problem you are running into:
You first invoke a method on a mock object with a reference type parameter.
Moq saves information about the invocation along with the reference type parameter passed in.
You then ask Moq if the method was called one time with an object equal to the reference you passed in.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and answers yes.
You then modify the object that you passed as the parameter to the method call on the mock.
The object behind the reference Moq is holding in its history is mutated to the new value.
You then ask Moq if the method was called one time with an object that isn't equal to the one it's holding.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and answers no.
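In other words, Moq's history holds an alias to the same object, which a plain C# sketch makes obvious:

var t = new TestClass { Var = "one" };
var alias = t;     // what Moq keeps in its invocation history is just another reference
t.Var = "two";
// alias.Var is now "two" as well - the "recorded" argument changed retroactively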
To attempt to answer your specific questions:
Is this expected behavior?
I would say no.
Is this new behavior?
I don't know, but it's doubtful the project at one time had behavior that facilitated this and was later modified to support only the simpler scenario of verifying against the live reference.
Is there a workaround that I am unaware of?
I'll answer this two ways.
From a technical standpoint, a workaround would be to use a Test Spy rather than a Mock. By using a Test Spy, you can record the values passed and use your own strategy for remembering the state, such as doing a deep clone, serializing the object, or just storing the specific values you care about to be compared against later.
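With Moq itself, one way to approximate a spy is to snapshot the state you care about at call time using Callback, then assert on the captured values later (a sketch against your sample types, not a drop-in fix):

var captured = new List<string>();
var mock = new moq.Mock<IMyMockedInterface>();

// Capture Var at the moment of each call, before the caller can mutate it.
mock.Setup(m => m.MakeCall(moq.It.IsAny<TestClass>()))
    .Callback<TestClass>(tc => captured.Add(tc.Var));

mock.Object.MakeCall(new TestClass { Var = "one" });
// Even if the original object changes afterwards, captured[0] remains "one".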
From a testing standpoint, I would recommend that you follow the principle "Use The Front Door First". I believe there is a time for state-based testing as well as interaction-based testing, but you should try to avoid coupling yourself to the implementation details unless the interaction is an important part of the scenario. In some cases, the scenario you are interested in will be primarily about interaction ("Transfer funds between accounts"), but in other cases all you really care about is getting the correct result ("Withdraw $10"). In the case of the specification for your controller, this seems to fall into the query category, not the command category. You don't really care how it gets the results you want as long as they are correct. Therefore, I would recommend using state-based testing in this case. If another specification concerns issuing a command against the system, there may still end up being a front door solution which you should consider using first, but it may be necessary or important to do interaction based testing. Just my thoughts though.
Am I using Verify incorrectly?
You are using the Verify() method correctly, it just doesn't support the scenario you are using it for.
Is there a better way of using Moq to avoid this situation?
I don't think Moq is currently implemented to handle this scenario.
Hope this helps,
Derek Greer
http://derekgreer.lostechies.com
http://aspiringcraftsman.com
@derekgreer
First, you can avoid the conflict between Moq and MSpec by declaring
using Machine.Specifications;
using Moq;
using It = Machine.Specifications.It;
Then you'll only need to prefix with Moq. when you want to use Moq's It, for example Moq.It.IsAny<>().
Onto your question.
Note: This is not the original answer but an edited one after the OP added some real example code to the question
I've been trying out your sample code and I think it has more to do with MSpec than Moq. Apparently (and I didn't know this either), when you modify the state of your SUT (System Under Test) inside an It delegate, the changes get remembered. What is happening now is:
The Because delegate is run.
The It delegates are run, one after the other. If one changes the state, the following It will never see the setup from the Because. Hence your failed test.
I've tried marking your spec with the SetupForEachSpecificationAttribute:
[Subject(typeof(object)), SetupForEachSpecification]
public class When_Testing
{
// Something, Something, something...
}
The attribute does as its name says: it will run your Establish and Because before every It. Adding the attribute made the spec behave as expected: 3 successes, one failure (the verification with Var = "two").
Would the SetupForEachSpecificationAttribute solve your problem or is resetting after every It not acceptable for your tests?
FYI: I'm using Moq v4.0.10827.0 and MSpec v0.4.9.0
Free tip #2: If you're testing ASP.NET MVC apps with MSpec, you might want to take a look at James Broome's MSpec extensions for MVC.