How to pass data down the reactive chain - project-reactor

Whenever I need to pass data down the reactive chain I end up doing something like this:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto)
        .flatMap(dtoMono -> {
            Mono<String> result = // remote call returning a Mono
            return Mono.zip(Mono.just(dtoMono), result);
        })
        .flatMap(tup2 -> {
            // do something that requires foo and result and returns a Mono
            return doSomething(tup2.getT1().getFoo(), tup2.getT2());
        });
}
Given the below sample Dto class:
class Dto {
    private String foo;

    public String getFoo() {
        return this.foo;
    }
}
Because it often gets tedious to zip the data all the time to pass it down the chain (especially a few levels down), I was wondering if it's OK to simply reference the dto directly, like so:
public Mono<String> doFooAndReferenceParam(Dto dto) {
    Mono<String> result = // remote call returning a Mono
    return result.flatMap(res -> {
        // do something that requires foo and the result and returns a Mono
        return doSomething(dto.getFoo(), res);
    });
}
My concern about the second approach: assuming a subscriber subscribes to this Mono on a thread pool, would I need to guarantee that Dto is thread-safe? (The example above is simple because it just carries a String, but what if it isn't?)
Also, which one is considered "best practice"?

Based on what you have shared, you can simply do the following:
public Mono<String> doFooAndPassDtoAsMono(Dto dto) {
    return Mono.just(dto.getFoo());
}
The way you are using zip in the first option doesn't serve any purpose. Similarly, the second option will not work either: once the Mono is empty, the next flatMap will not be triggered.

The case is simple if:
- the reference data is available from the beginning (i.e. before the creation of the chain), and
- the chain is created for processing at most one event (i.e. it starts with a Mono), and
- the reference data is immutable.
Then you can simply refer to the reference data in a parameter or local variable, just like in your second solution. This is completely okay, and there are no concurrency issues.
Using mutable data in reactive flows is strongly discouraged. If you had a mutable Dto class, you might still be able to use it (assuming proper synchronization) – but this will be very surprising to readers of your code.
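If the reference data only becomes available somewhere inside the chain (so it cannot be captured from a method parameter), nesting the dependent flatMap keeps both values in scope without resorting to zip. A minimal sketch, where loadDto(), callRemote() and doSomething() are hypothetical stand-ins for the calls described in the question:
public Mono<String> doFooWithNestedFlatMap() {
    // sketch only: loadDto() and callRemote() are hypothetical placeholders
    return loadDto()                                  // Mono<Dto> produced inside the chain
        .flatMap(dto -> callRemote()                  // Mono<String> remote call
            .flatMap(result ->
                doSomething(dto.getFoo(), result)));  // 'dto' is still in scope here
}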


grails withTransaction - why is it on a domain object?

We need to be able to rollback a complex transaction in a service, without throwing an exception to the caller. My understanding is that the only way to achieve this is to use withTransaction.
The questions are:
- Why do I have to call this on a domain object, such as Books.withTransaction?
- If there is no relevant domain object, what is the consequence of picking a random one?
Below is more or less what I am trying to do. The use case is for withdrawing from an account and putting it onto a credit card. If the transfer fails, we want to rollback the transaction, but not the payment record log, which must be committed in a separate transaction (using RequiresNew). In any case, the service method must return a complex object, not an exception.
someService.groovy
class SomeService {
    @NotTransactional
    SomeComplexObject someMethod() {
        SomeDomainObject.withTransaction { status ->
            DomainObject ob1 = new DomainObject().save()
            LogDomainObject ob2 = insertAndCommitLogInNewTransaction()
            SomeComplexObject ob3 = someAction()
            if (!ob3.worked) {
                status.setRollbackOnly() // only rollback ob1, not ob2!
            }
            return ob3
        }
    }
}
The above is flawed - I assume "return ob3" won't return ob3 from the method, as it's in a closure. I'm not sure how to communicate from inside a closure to the outside.
To your primary question: you can pick a random domain object if you want, it won't do any harm. Or, if you prefer, you can find the current session and open a transaction on that instead:
grailsApplication.sessionFactory.currentSession.withTransaction { /* Do the things */ }
Stylistically I don't have a preference here. Others might.
Not sure how to communicate from inside a closure to outside it.
In general this could be hard; withTransaction could in principle return anything it wants, no matter what its closure argument returns. But it turns out that withTransaction returns the value returned by its closure. Here, watch:
groovy> println(MyDomainObject.withTransaction { 2 + 2 })
4
By convention, all withFoo methods which take a closure should work this way, precisely so that you can do the thing you're trying to do.
I'm assuming this question was from a Grails 2 application, and that this problem from 2015 has been fixed by now.
I can't find this in any of the Grails 2 documentation, but services have a magic transactionStatus variable injected into their methods (at least in Grails 2.3.11).
You can just leave all the annotations off and use that injected variable.
class SomeService {
    SomeComplexObject someMethod() {
        DomainObject ob1 = new DomainObject().save()
        LogDomainObject ob2 = insertAndCommitLogInNewTransaction()
        SomeComplexObject ob3 = someAction()
        if (!ob3.worked) {
            transactionStatus.setRollbackOnly() // transactionStatus is magically injected.
        }
        return ob3
    }
}
This feature is in Grails 2, but not documented. It is documented in Grails 3:
https://docs.grails.org/latest/guide/services.html#declarativeTransactions
(search for transactionStatus).

Lazy load property in IoC service

An IoC service should retrieve information from the DB that:
- never changes during the session (it may well be a constant value), and
- is time-expensive to retrieve.
Should I use the "lazy load" pattern in this case, for a property that encapsulates access to the service method? For example:
public interface IMyDbConstantService {
    string MyDbConstant { get; }
    string GetMyDbConstant();
}

public class MyDbConstantService : IMyDbConstantService {
    public string MyDbConstant
    {
        get
        {
            // Implementing lazy load pattern
            return myDbConstant ?? (myDbConstant = this.GetMyDbConstant());
        }
    }

    public string GetMyDbConstant() {
        // time expensive operation
    }
}
I assume myDbConstant is a field? You don't declare it anywhere. I would make it a private static field; you may also need to add locks around updates to it.
I would also remove the getter in the interface and implementation, and just put the lazy initialization directly in the GetMyDBConstant method.
As far as whether this is a good idea: you haven't provided enough information to say one way or another. Does it make more sense to load it on first use, or can you load it at application startup, possibly in a background thread?

breeze: creating inheritance in client-side model

I'm having a weird issue with the configureMetadataStore.
My model:
class SourceMaterial {
    List<Job> Jobs { get; set; }
}

class Job {
    public SourceMaterial SourceMaterial { get; set; }
}

class JobEditing : Job {}

class JobTranslation : Job {}
Module for configuring Job entities:
angular.module('cdt.request.model').factory('jobModel', ['breeze', 'dataService', 'entityService', modelFunc]);

function modelFunc(breeze, dataService, entityService) {
    function Ctor() {
    }

    Ctor.extend = function (modelCtor) {
        modelCtor.prototype = new Ctor();
        modelCtor.prototype.constructor = modelCtor;
    };

    Ctor.prototype._configureMetadataStore = _configureMetadataStore;

    return Ctor;

    // constructor
    function jobCtor() {
        this.isScreenDeleted = null;
    }

    function _configureMetadataStore(entityName, metadataStore) {
        metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
    }

    function jobInitializer(job) { /* do stuff here */ }
}
Module for configuring JobEditing entities:
angular.module('cdt.request.model').factory('jobEditingModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {
    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }

    jobModel.extend(Ctor);

    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobEditing', metadataStore);
    }
}
Module for configuring JobTranslation entities:
angular.module('cdt.request.model').factory('jobTranslationModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {
    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }

    jobModel.extend(Ctor);

    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobTranslation', metadataStore);
    }
}
Then the models are configured like this:
JobEditingModel.configureMetadataStore(dataService.manager.metadataStore);
JobTranslationModel.configureMetadataStore(dataService.manager.metadataStore);
Now when I call createEntity for a JobEditing, the instance is created and at some point, breeze calls setNpValue and adds the newly created Job to the np SourceMaterial.
That's all fine, except that it is added twice!
It happens when rawAccessorFn(newValue); is called. In fact it is called twice.
And if I add a new type of job (hence I register a new type with the metadataStore), then the new Job is added three times to the np.
I can't see what I'm doing wrong. Can anyone help?
EDIT
I've noticed that if I change:
metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
to
metadataStore.registerEntityTypeCtor(entityName, null, jobInitializer);
Then everything works fine again! So the problem is registering the same jobCtor function. Should that not be possible?
Our Bad
Let's start with a Breeze bug, recently discovered, in the Breeze "backingStore" model library adapter.
There's a part of that adapter which is responsible for rewriting the data properties of the entity constructor so that they become observable and self-validating; it kicks in when you register a type with registerEntityTypeCtor.
It tries to keep track of which properties it has rewritten. The bug is that it records the fact of the rewrite on the EntityType rather than on the constructor function. Consequently, every time you registered a new type, it failed to realize that it had already rewritten the properties of the base Job type and re-wrapped them again.
This was happening to you. Every derived type that you registered re-wrapped/re-wrote the properties of the base type (and of its base type, etc).
In your example, a base class Job property would be re-written 3 times and its inner logic executed 3 times if you registered three of its sub-types. And the problem disappeared when you stopped registering constructors of sub-types.
We're working on a revised Breeze "backingStore" model library adapter that won't have this problem and, coincidentally, will behave better in test scenarios (that's how we found the bug in the first place).
Your Bad?
Wow that's some hairy code you've got there. Why so complicated? In particular, why are you adding a one-time MetadataStore configuration to the prototypes of entity constructor functions?
I must be missing something. The code to register types is usually much smaller and simpler. I get that you want to put each type in its own file and have it self-register. The cost of that (as you've written it) is enormous bulk and complexity. Please reconsider your approach. Take a look at other Breeze samples, Zza-Node-Mongo for example.
Thanks for reporting the issue. Hang in there with us. A fix should be arriving soon ... I hope in the next release.

How to map between push parser and pull parser

I have implemented a push parser that reads a data stream and emits tokens on selected content via a callback handler. This abstract technique is also known as the observer pattern (with the callback handler known as the observer) and is used, for instance, in SAX for parsing XML.
The opposite design pattern (is there a name for it?) is to pull the next data token, as is done for instance in XML parsing with StAX.
One can easily build a push parser on top of a pull parser by looping over it:
// push
parser.parse( callback: handler );
// pull
while( token = parser.next ) {
    handler(token)
}
But how do I map a push parser to a pull parser?
To adapt a push parser into a pull parser, you have to collect several events (possibly all of them, depending on what is being parsed and the order of the elements being pushed) into Event objects, and then allow those Events to be pulled.
We can use XML as an example and adapt a SAXHandler into a StAX parser. We also have to implement the methods for XMLStreamReader for iterating over the StAX XMLEvents.
I've never used StAX but it looks like it stores the current state in the XMLStreamReader object. Each call to reader.next() updates the state and the returned values from reader.getName() and reader.getText() etc are updated accordingly.
We can do this several ways, from parsing the entire thing into memory first and then iterating through what we've stored, to more complicated techniques such as using multiple threads to parse the XML and blocking the read of the next tag until the user calls next().
For simplicity, I'll just show storing everything in memory
class SAXHandler extends DefaultHandler implements XMLStreamReader {
    // StAX Event objects
    List<XMLEvent> events = new ArrayList<>();
    int counter = 0;
    // StAX current tag name and text data, updated with calls to next()
    private String name, text;

    @Override
    // Triggered when the start of a tag is found.
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes)
            throws SAXException {
        // create a new XMLEvent for the start of the new tag
        XMLEvent newEvent = ....
        events.add(newEvent);
    }

    // other SAX methods implemented similarly
    ...
Now for the StAX methods:
    @Override
    public XMLEvent next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        XMLEvent next = events.get(counter);
        counter++;
        // update our content
        this.name = next.name;
        this.text = next.text;
        ...
        return next;
    }

    @Override
    public boolean hasNext() {
        return counter < events.size();
    }

    ...

    @Override
    public String getName() {
        return name;
    }

    @Override
    public String getText() {
        return text;
    }
}
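The thread-based alternative mentioned above can be sketched with a worker thread and a bounded blocking queue as the hand-off: the push parser's callbacks publish events, and the pulling side blocks in next() until one arrives. This is only an outline, not part of the SAXHandler above; the class name and the use of a plain Object event type are placeholders:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: the push parser runs on a worker thread and publishes events;
// next() blocks until an event arrives or the stream ends.
class PushToPullAdapter {
    private static final Object END = new Object();   // marks end of input
    private final BlockingQueue<Object> queue = new ArrayBlockingQueue<>(64);

    // called from the push parser's callbacks (worker thread)
    void publish(Object event) throws InterruptedException {
        queue.put(event);                              // blocks if the consumer lags behind
    }

    // called once the push parser has finished
    void finished() throws InterruptedException {
        queue.put(END);
    }

    // called by the pulling side; returns null when the input is exhausted
    Object next() throws InterruptedException {
        Object event = queue.take();
        return event == END ? null : event;
    }
}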
Hope this helps
What I think you are looking for is Control Inversion, which is not easy in languages which are tied to a stacklike execution model.
C is not quite welded to execution stacks, so you could do this with the (deprecated) Posix getcontext/setcontext/makecontext, or slightly more portably with threads.
In other languages, it is easier, if no less mind-bending. See Scheme's call/cc primitive, this piece of Lua ancient history, or take a look at Python generators (although the latter is not able to invert control without help from the function whose control is to be inverted.)

Unexpected Validate behavior with Moq

Moq has been driving me a bit crazy on my latest project. I recently upgraded to version 4.0.10827, and I'm noticing what seems to me to be a new behavior.
Basically, when I call my mocked function (MakeCall, in this example) in the code I am testing, I am passing in an object (TestClass). The code I am testing makes changes to the TestClass object before and after the call to MakeCall. Once the code has completed, I then call Moq's Verify function. My expectation is that Moq will have recorded the complete object that I passed into MakeCall, perhaps via a mechanism like deep cloning. This way, I will be able to verify that MakeCall was called with the exact object I am expecting it to be called with. Unfortunately, this is not what I'm seeing.
I attempt to illustrate this in the code below (hopefully, clarifying it a bit in the process).
I first create a new TestClass object. Its Var property is set to "one".
I then create the mocked object, mockedObject, which is my test subject.
I then call the MakeCall method of mockedObject (by the way, the Machine.Specifications framework used in the example allows the code in the When_Testing class to be read from top to bottom).
I then test the mocked object to ensure that it was indeed called with a TestClass with a Var value of "one". This succeeds, as I expected it to.
I then make a change to the original TestClass object by re-assigning the Var property to "two".
I then proceed to attempt to verify if Moq still thinks that MakeCall was called with a TestClass with a value of "one". This fails, although I am expecting it to be true.
Finally, I test to see if Moq thinks MakeCall was in fact called by a TestClass object with a value of "two". This succeeds, although I would initially have expected it to fail.
It seems pretty clear to me that Moq is only holding onto a reference to the original TestClass object, allowing me to change its value with impunity, adversely affecting the results of my testing.
A few notes on the test code. IMyMockedInterface is the interface I am mocking. TestClass is the class I am passing into the MakeCall method and therefore using to demonstrate the issue I am having. Finally, When_Testing is the actual test class that contains the test code. It is using the Machine.Specifications framework, which is why there are a few odd items ('Because of', 'It should...'). These are simply delegates that are called by the framework to execute the tests. They should be easily removed and the contained code placed into a standard function if that is desired. I left it in this format because it allows all Validate calls to complete (as compared to the 'Arrange, Act Assert' paradigm). Just to clarify, the below code is not the actual code I am having problems with. It is simply intended to illustrate the problem, as I have seen this same behavior in multiple places.
using Machine.Specifications;
// Moq has a conflict with MSpec as they both have an 'It' object.
using moq = Moq;

public interface IMyMockedInterface
{
    int MakeCall(TestClass obj);
}
public class TestClass
{
    public string Var { get; set; }

    // Must override Equals so Moq treats two objects with the
    // same value as equal (instead of comparing references).
    public override bool Equals(object obj)
    {
        if ((obj == null) || (obj.GetType() != this.GetType()))
            return false;
        TestClass t = obj as TestClass;
        if (t.Var != this.Var)
            return false;
        return true;
    }

    public override int GetHashCode()
    {
        int hash = 41;
        int factor = 23;
        hash = (hash ^ factor) * Var.GetHashCode();
        return hash;
    }

    public override string ToString()
    {
        return MvcTemplateApp.Utilities.ClassEnhancementUtilities.ObjectToString(this);
    }
}
[Subject(typeof(object))]
public class When_Testing
{
    // TestClass is set up to contain a value of 'one'
    protected static TestClass t = new TestClass() { Var = "one" };
    protected static moq.Mock<IMyMockedInterface> mockedObject = new moq.Mock<IMyMockedInterface>();

    Because of = () =>
    {
        mockedObject.Object.MakeCall(t);
    };

    // Test One
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: Moq does verify that MakeCall was called with a TestClass with a value of 'one'.
    // Result: This is correct.
    It should_verify_that_make_call_was_called_with_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Update the original object to contain a new value.
    It should_update_the_test_class_value_to_two = () =>
        t.Var = "two";

    // Test Two
    // Expected: Moq should verify that MakeCall was called with a TestClass with a value of 'one'.
    // Actual: The Verify call fails, claiming that MakeCall was never called with a TestClass instance with a value of 'one'.
    // Result: This is incorrect.
    It should_verify_that_make_call_was_called_with_a_class_containing_a_value_of_one = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "one" }), moq.Times.Once());

    // Test Three
    // Expected: Moq should fail to verify that MakeCall was called with a TestClass with a value of 'two'.
    // Actual: Moq actually does verify that MakeCall was called with a TestClass with a value of 'two'.
    // Result: This is incorrect.
    It should_fail_to_verify_that_make_call_was_called_with_a_class_containing_a_value_of_two = () =>
        mockedObject.Verify(o => o.MakeCall(new TestClass() { Var = "two" }), moq.Times.Once());
}
I have a few questions regarding this:
Is this expected behavior?
Is this new behavior?
Is there a workaround that I am unaware of?
Am I using Verify incorrectly?
Is there a better way of using Moq to avoid this situation?
I thank you humbly for any assistance you can provide.
Edit:
Here is one of the actual tests and SUT code that I experienced this problem with. Hopefully it will act as clarification.
// This is the MVC Controller Action that I am testing. Note that it
// makes changes to the 'searchProjects' object before and after
// calling 'repository.SearchProjects'.
[HttpGet]
public ActionResult List(int? page, [Bind(Include = "Page, SearchType, SearchText, BeginDate, EndDate")]
    SearchProjects searchProjects)
{
    int itemCount;
    searchProjects.ItemsPerPage = profile.ItemsPerPage;
    searchProjects.Projects = repository.SearchProjects(searchProjects,
        profile.UserKey, out itemCount);
    searchProjects.TotalItems = itemCount;
    return View(searchProjects);
}
// This is my test class for the controller's List action. The controller
// is instantiated in an Establish delegate in the 'with_project_controller'
// class, along with the SearchProjectsRequest, SearchProjectsRepositoryGet,
// and SearchProjectsResultGet objects which are defined below.
[Subject(typeof(ProjectController))]
public class When_the_project_list_method_is_called_via_a_get_request
    : with_project_controller
{
    protected static int itemCount;
    protected static ViewResult result;

    Because of = () =>
        result = controller.List(s.Page, s.SearchProjectsRequest) as ViewResult;

    // This test fails, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate and ItemsPerPage
    It should_call_the_search_projects_repository_method = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsRepositoryGet,
            s.UserKey, out itemCount), moq.Times.Once());

    // This test succeeds, as it is expecting the 'SearchProjects' object
    // to contain:
    // Page, SearchType, SearchText, BeginDate, EndDate, ItemsPerPage,
    // Projects and TotalItems
    It should_call_the_search_projects_repository_method = () =>
        s.Repository.Verify(r => r.SearchProjects(s.SearchProjectsResultGet,
            s.UserKey, out itemCount), moq.Times.Once());

    It should_return_the_correct_view_name = () =>
        result.ViewName.ShouldBeEmpty();

    It should_return_the_correct_view_model = () =>
        result.Model.ShouldEqual(s.SearchProjectsResultGet);
}
/////////////////////////////////////////////////////
// Here are the values of the three test objects
/////////////////////////////////////////////////////

// This is the object that is returned by the client.
SearchProjects SearchProjectsRequest = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page
};

// This is the object I am expecting the repository method to be called with.
SearchProjects SearchProjectsRepositoryGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage
};

// This is the complete object I expect to be returned to the view.
SearchProjects SearchProjectsResultGet = new SearchProjects()
{
    SearchType = SearchTypes.ProjectName,
    SearchText = GetProjectRequest().Name,
    Page = Page,
    ItemsPerPage = ItemsPerPage,
    Projects = new List<Project>() { GetProjectRequest() },
    TotalItems = TotalItems
};
Ultimately, your question is whether a mocking framework should take snapshots of the parameters you use when interacting with the mocks so that it can accurately record the state the system was in at the point of interaction rather than the state the parameters might be in at the point of verification.
I would say this is a reasonable expectation from a logical point of view. You are performing action X with value Y. If you ask the mock "Did I perform action X with value Y", you expect it to say "Yes" regardless of the current state of the system.
To summarize the problem you are running into:
You first invoke a method on a mock object with a reference type parameter.
Moq saves information about the invocation along with the reference type parameter passed in.
You then ask Moq if the method was called one time with an object equal to the reference you passed in.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and answers yes.
You then modify the object that you passed as the parameter to the method call on the mock.
The memory space of the reference Moq is holding in its history changes to the new value.
You then ask Moq if the method was called one time with an object that isn't equal to the reference it's holding.
Moq checks its history for a call to that method with a parameter that matches the supplied parameter and reports no.
To attempt to answer your specific questions:
Is this expected behavior?
I would say no.
Is this new behavior?
I don't know, but it's doubtful that the project at one time had behavior that facilitated this and was later modified to allow only the simple scenario of verifying a single usage per mock.
Is there a workaround that I am unaware of?
I'll answer this two ways.
From a technical standpoint, a workaround would be to use a Test Spy rather than a Mock. By using a Test Spy, you can record the values passed and use your own strategy for remembering the state, such as doing a deep clone, serializing the object, or just storing the specific values you care about to be compared against later.
From a testing standpoint, I would recommend that you follow the principle "Use The Front Door First". I believe there is a time for state-based testing as well as interaction-based testing, but you should try to avoid coupling yourself to the implementation details unless the interaction is an important part of the scenario. In some cases, the scenario you are interested in will be primarily about interaction ("Transfer funds between accounts"), but in other cases all you really care about is getting the correct result ("Withdraw $10"). In the case of the specification for your controller, this seems to fall into the query category, not the command category. You don't really care how it gets the results you want as long as they are correct. Therefore, I would recommend using state-based testing in this case. If another specification concerns issuing a command against the system, there may still end up being a front door solution which you should consider using first, but it may be necessary or important to do interaction based testing. Just my thoughts though.
Am I using Verify incorrectly?
You are using the Verify() method correctly; it just doesn't support the scenario you are using it for.
Is there a better way of using Moq to avoid this situation?
I don't think Moq is currently implemented to handle this scenario.
Hope this helps,
Derek Greer
http://derekgreer.lostechies.com
http://aspiringcraftsman.com
@derekgreer
First, you can avoid the conflict between Moq and MSpec by declaring
using Machine.Specifications;
using Moq;
using It = Machine.Specifications.It;
Then you'll only need to prefix with Moq. when you want to use Moq's It, for example Moq.It.IsAny<>().
Onto your question.
Note: This is not the original answer but an edited one after the OP added some real example code to the question
I've been trying out your sample code and I think it has more to do with MSpec than with Moq. Apparently (and I didn't know this either), when you modify the state of your SUT (System Under Test) inside an It delegate, the change gets remembered. What is happening now is:
The Because delegate is run.
The It delegates are run, one after the other. If one of them changes the state, the following It delegates no longer see the state set up in the Because. Hence your failed test.
I've tried marking your spec with the SetupForEachSpecificationAttribute:
[Subject(typeof(object)), SetupForEachSpecification]
public class When_Testing
{
    // Something, Something, something...
}
The attribute does as its name says: it will run your Establish and Because before every It. Adding the attribute made the spec behave as expected: three successes, one failure (the verification with Var = "two").
Would the SetupForEachSpecificationAttribute solve your problem or is resetting after every It not acceptable for your tests?
FYI: I'm using Moq v4.0.10827.0 and MSpec v0.4.9.0
Free tip #2: If you're testing ASP.NET MVC apps with MSpec, you might want to take a look at James Broome's MSpec extensions for MVC.
