I started using Project Reactor. Does anyone know how I can pass thread-local variables from one thread to another? I saw some methods in Hooks.java but could not figure out the recommended way of doing this. Can someone point me to some documentation or a code snippet showing how to do it? Thanks.
I have a working example in this github repository based on the spring-cloud-sleuth's implementation: https://github.com/gumartinm/JavaForFun/tree/master/SpringJava/WebReactive/spring-webreactive-reactor-context-enrich
The key classes are ContextCoreSubscriber.java, SubscriberContext.java, ThreadContextEnrichmentAutoConfiguration.java and UsernameFilter.java.
ContextCoreSubscriber.java: enables you to fill the Mapped Diagnostic Context (MDC).
SubscriberContext.java: helper class for inserting data into Reactor's Context.
ThreadContextEnrichmentAutoConfiguration.java: in charge of configuring Reactor's hooks via Hooks.onEachOperator.
UsernameFilter.java: example where we want to register the username information based on an HTTP header.
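The repository wires this up globally through Hooks.onEachOperator. As a rough illustration of the underlying idea (not the repository's code), here is a minimal sketch of copying a value from the subscriber Context into the MDC, using the plain Context API available in recent Reactor versions (3.4+, where contextWrite and deferContextual exist); the key name and username value are made up:

import org.slf4j.MDC;
import reactor.core.publisher.Mono;
import reactor.util.context.Context;

public class ReactorContextMdcSketch {

    private static final String USERNAME_KEY = "username";

    public static void main(String[] args) {
        Mono<String> pipeline = Mono
                // Read the username out of the subscriber Context and copy it into the MDC
                // right before the work runs, so logging inside this step can see it.
                .deferContextual(ctx -> {
                    MDC.put(USERNAME_KEY, ctx.get(USERNAME_KEY));
                    return Mono.fromCallable(() -> "handled request for " + MDC.get(USERNAME_KEY));
                })
                // Context writes are visible upstream of this call, so the write sits at the
                // end of the chain (in the repository, UsernameFilter does this from an HTTP header).
                .contextWrite(Context.of(USERNAME_KEY, "some-user"));

        System.out.println(pipeline.block());
    }
}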
Reactor doesn't guarantee that the processing done by a Flux or Mono chain of operators will stick executing on a single thread. On the contrary, it performs work-stealing and lets the user switch execution context.
As such, using ThreadLocal is not a good fit for Reactor.
There is currently work being done for 3.1.0 towards providing an equivalent, at least for library authors that use Reactor, but nothing definite is in place yet.
Keep your eyes peeled for 3.1.0; that should be the main theme of that release (and will probably be the focus of the second upcoming milestone, M2).
I'm writing a data pipeline using Reactor and Reactor Kafka and use Spring's Message<> to save
the ReceiverOffset of each ReceiverRecord in the headers, so that I can call ReceiverOffset.acknowledge() when processing finishes. I also have the out-of-order commit feature enabled.
When processing an event fails, I want to be able to log the error, write to another topic that represents all the failure events, and commit to the source topic. I'm currently solving that by returning Either<Message<Error>, Message<MyPojo>> from each processing stage; that way the stream is not stopped by exceptions, and I'm able to keep the original event headers and eventually commit the failed messages at the bottom of the pipeline.
The problem is that each step of the pipeline gets Either<> as input and needs to filter out the previous errors and apply its logic only to the Either.right, which gets cumbersome, especially when working with buffers, where the operator gets List<Either<>> as input. So I want to keep my business pipeline clean and receive only Message<MyPojo> as input, while also not missing errors that need to be handled.
I read that sending those error messages to another channel or stream is a solution for that.
Spring Integration uses that pattern for error handling, and I also read an article (link to article) that solves this problem in Akka Streams using divertTo().
I couldn't find documentation or code examples of how to implement that in Reactor.
Is there any way to use the Spring Integration error channel with Reactor, or any other ideas for implementing this?
Not familiar with Reactor per se, but you can keep the stream linear. The trick, since Vavr's Either is right-biased, is to use flatMap, which takes a function from Message<MyPojo> to Either<Message<Error>, Message<MyPojo>>. If the Either coming in is a right (i.e. a Message<MyPojo>), the function gets invoked; otherwise the value just gets passed through.
// Apologies if the Java is atrocious... haven't written Java since pre-Java 8
incomingEither.flatMap(
    myPojoMessage -> ... // compute a new Either
);
Presumably at some point you want to do something (publish to a dead-letter topic, tickle metrics, whatever) with the Message<Error> case, so for that, orElseRun will come in handy.
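To make that concrete, here is a minimal, self-contained sketch using Vavr's Either directly. MyPojo, ProcessingError and the enrich stage are made-up stand-ins for the question's Message<MyPojo>/Message<Error> and processing steps:

import io.vavr.control.Either;

public class EitherPipelineSketch {

    // Made-up stand-ins for Message<MyPojo> and Message<Error> from the question.
    record MyPojo(String payload) {}
    record ProcessingError(String reason) {}

    // One processing stage: it only ever sees a MyPojo; a failure becomes a left.
    static Either<ProcessingError, MyPojo> enrich(MyPojo pojo) {
        if (pojo.payload().isEmpty()) {
            return Either.left(new ProcessingError("empty payload"));
        }
        return Either.right(new MyPojo(pojo.payload() + "-enriched"));
    }

    public static void main(String[] args) {
        Either<ProcessingError, MyPojo> incoming = Either.right(new MyPojo("event"));

        // flatMap keeps the pipeline linear: stages are written against MyPojo only,
        // and once a left appears it short-circuits past the remaining stages.
        Either<ProcessingError, MyPojo> result = incoming
                .flatMap(EitherPipelineSketch::enrich)
                .flatMap(EitherPipelineSketch::enrich);

        // Terminal handling of the error case, e.g. publish to a dead-letter topic.
        result.orElseRun(error -> System.err.println("dead-letter: " + error.reason()));

        // And the happy path, e.g. acknowledge the offset.
        result.forEach(ok -> System.out.println("commit offset for: " + ok.payload()));
    }
}

Presumably the same terminal orElseRun (or a fold) would sit once at the bottom of the real pipeline, right before the offsets are acknowledged.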
I want to build an application server with Dart. The HttpServer in the dart:io library is certainly a good starting point for that. But I struggle with the task of "deploying" an application without restarting the server process.
To be more precise: I want to have something like a servlet container in Java, like Tomcat, into which I can easily deploy or redeploy an application without restarting the container. I thought I could utilize the mirror system, which in principle allows me to load a library and its contained classes from the filesystem. But unfortunately it seems that I cannot re-load the library. When I, for example, add a new class to the library, or change the code of an existing class, reflecting on the library again without restarting the Dart process does not show the changes. Only when I stop the process and restart it do the changes become visible.
So: is there a way to scrub the mirror system and let it load the library and its classes again, within the same Dart process?
I think isolates are a good fit for this requirement.
I haven't used them myself much yet but as far as I know you can load and unload them dynamically.
The documentation is not very extensive yet.
A few things I found:
https://api.dartlang.org/apidocs/channels/stable/dartdoc-viewer/dart-isolate
Recent documentation about Dart Isolates
https://www.youtube.com/watch?v=TQJ1qnrbTwk
https://www.youtube.com/watch?v=4GlK-Ln7HAc
So, yes, it is possible in Dart to dynamically (re-)load Dart files at runtime. Every new isolate has its own MirrorSystem. If you want to reload a Dart file, you must create a new isolate and use the MirrorSystem of that isolate to iterate over the contents of the libraries known to it. If your Dart file is part of a library known to the MirrorSystem, all functions and classes contained in that file are loaded and reflected anew.
This solution has some drawbacks. First, it is quite heavyweight: programming inter-isolate communication is cumbersome, and it remains to be seen whether memory consumption increases with each reload. Second, the solution is not really dynamic: isolates load only libraries that are "known" at design time. They must be directly or indirectly imported into the Dart file that contains the static function that is called when the isolate is created.
Two ideas for how the situation could be improved:
1. It would help if the spawn and spawnUri methods of Isolate could take a list of additional libraries as a parameter, to be included in the MirrorSystem of the isolate.
2. The classloaders in Java are independent of processes and threads; they just load classes (see the sketch below). Why isn't this possible in Dart?
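For comparison, this is roughly the Java behaviour being alluded to: a brand-new class loader over the same directory re-reads the class files, so a recompiled class can be loaded into the running JVM without restarting it. A minimal sketch, with a hypothetical directory and class name:

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class ClassReloadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical directory containing compiled classes, e.g. /tmp/plugins/example/Plugin.class.
        URL classesDir = Paths.get("/tmp/plugins/").toUri().toURL();

        // First load: this loader reads the current .class file from disk.
        try (URLClassLoader first = new URLClassLoader(new URL[] { classesDir }, null)) {
            Class<?> plugin = first.loadClass("example.Plugin");
            System.out.println("loaded " + plugin + " via " + plugin.getClassLoader());
        }

        // ...recompile example.Plugin on disk, then simply build a brand-new loader:
        // the same JVM process picks up the new bytecode without a restart.
        try (URLClassLoader second = new URLClassLoader(new URL[] { classesDir }, null)) {
            Class<?> plugin = second.loadClass("example.Plugin");
            System.out.println("reloaded " + plugin + " via " + plugin.getClassLoader());
        }
    }
}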
I just played around a bit with Lua and tried the Koneki Eclipse plugin, which is quite nice. The problem is that when I make changes in a function I'm currently debugging, the changes do not take effect when I save them. So I'm forced to restart the application. It would be so nice if I could make changes in the debugger and have them take effect on the fly, as for example in Smalltalk, or to some extent as with hot code replacement in Java. Does anybody have a clue whether this is possible?
It is possible to some degree with some limitations. I've been developing an IDE/debugger that provides this functionality. It gives you access to a remote console to execute commands in the context/environment of your running application. The IDE also supports live coding, which reloads modified code as you make changes to it; see demos here.
The main limitation is that you can't modify a currently running function (at least without changes to Lua VM). This means that the effect of your changes to the currently running function will only be seen after you exit and re-enter that function. It works well for environments that call the same function repeatedly (for example a game engine calling draw), but may not work in your case.
Another challenge is dealing with upvalues (values that are created outside of your function and are referenced inside it). There are methods to "read" the current upvalues and re-create them when the (new) function is created, but it requires some code analysis to find which functions will be recreated, query them for upvalues, get the current values, and then create a new environment with those upvalues and assign the proper values to them. My current implementation doesn't do this, which means you need to use global variables as a workaround.
There was also a relevant discussion just the other day on the Lua mailing list.
My app includes a back-end server, with many transactions which must be carried out in the background. Many of these transactions require several bits of code to run one after another.
For example, do a query, use the result to do another query, create a new back-end object, then return a reference to the new object to a view controller object in the foreground so that the UI can be updated.
A more specific scenario would be to carry out a sequence of AJAX calls in order, similar to this question, but in iOS.
This sequence of tasks is really one unified piece of work. I did not find existing facilities in iOS that allowed me to cleanly code this sequence as a "unit of work". Likewise I did not see a way to provide a consistent context for the "unit of work" that would be available across the sequence of async tasks.
I recently had to do some JavaScript and had to learn to use the Promise concept that is common in JS. I realized that I could adapt this idea to iOS and objective-C. The results are here on Github. There is documentation, code and unit tests.
A Promise should be thought of as a promise to return a result object (id) or an error object (NSError) to a block at a future time. A Promise object is created to represent the asynchronous result. The asynchronous code delivers the result to the Promise and then the Promise schedules and runs a block to handle the result or error.
If you are familiar with Promises on JS, you will recognize the iOS version immediately. If not, check out the Readme and the Reference.
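For readers who don't know the JS version of the concept, the core idea - deliver a result or error to a placeholder now, and run a handler block when it arrives - can be sketched with Java's CompletableFuture. This is only an analogy, not the Objective-C library from the link:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PromiseConceptSketch {
    public static void main(String[] args) {
        ScheduledExecutorService background = Executors.newSingleThreadScheduledExecutor();

        // The "promise": a placeholder for a result that does not exist yet.
        CompletableFuture<String> promise = new CompletableFuture<>();

        // Some asynchronous work eventually delivers the result (or an exception) to the promise.
        background.schedule(() -> { promise.complete("query result"); }, 100, TimeUnit.MILLISECONDS);

        // The handler block runs once the result or the error arrives.
        promise.whenComplete((result, error) -> {
            if (error != null) {
                System.err.println("failed: " + error);
            } else {
                System.out.println("got: " + result);
            }
        });

        promise.join();         // wait so the demo prints before we shut down
        background.shutdown();
    }
}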
I've used most of the usual suspects, and I have to say that for me, Grand Central Dispatch is the way to go.
Apple obviously care enough about it to re-write a lot of their library code to use completion blocks.
IIRC, Apple have also said that GCD is the preferred implementation for multitasking.
I also remember that some of the previous options have been re-implemented using GCD under the hood, so if you're not already attached to something else, go GCD!
BTW, I used to find writing the block signatures a real pain, but if you just hit return when the placeholder is selected, it does all that for you. What could be sweeter than that?
This question describes two approaches to solving a sophisticated architectural problem related to ASP.NET MVC. Unfortunately our team is quite new to this technology and we haven't found any solid sources of information on this particular topic (except overviews where it's said that MVC is more about separation than componentization). So for now we are hesitating: is our solution appropriate, or is there a different, obvious way to solve this problem?
We have a requirement to make an ASP.NET MVC-based design with componentization in mind. The Razor view engine is also a requirement for us. The key feature here is that any level of controller nesting is expected (obviously through the Html.Action directive within .cshtml). Any controller could potentially obtain its data through a web service call (the final design can break this limitation, as described below).
The issue is that the data must be obtained in an async and maximally parallel fashion. E.g. if two backend calls within the controllers are independent, they must be performed in parallel.
At first glance the usage of async MVC controllers could solve all the problems. But there is a hidden caveat: a nested controller can be specified only within .cshtml (within a view), and a .cshtml view is parsed only after the original controller has finished its own async execution. So all the async operations within the nested controller will be performed in a separate async slot and therefore not in parallel with the parent controller. This is a limitation of the synchronous nature of .cshtml processing.
After a deep investigation we revealed that two options are available.
1) Have only one parent async controller which retrieves all the data and puts it into a container (a dictionary or whatever). The nested controllers aren't allowed to perform any backend calls; instead, they get a reference to the initialized container with the results of all the backend calls. But this way the consumer of the framework must differentiate between parent and child controllers, which is not a brilliant solution.
2) Retrieve all the data from the backends within a special async HttpModule. This module initializes the same container, which resides, for instance, within HttpContext. Obviously, in this case none of the controllers are allowed to perform any backend calls, but they all have a unified internal structure (in comparison with option #1).
As for now we think that option #2 is more desirable, but we are more interested in a solid, community-adopted way to solve this problem in real enterprise-level MVC projects.
Any links/comments are welcome.
[UPD] The requirement for any level of controller nesting came from our customer, who wants a system of fully reusable MVC components that can be combined in any sequence with any level of nesting - as is already done in the existing WebForms-based implementation. It is a business rule of the existing app that the components can be combined in any way, so we are not aiming to break this rule. As for now we think that such a component is a combination of "controller + view + metadata", where the "metadata" part describes the backend calls to be performed in scenario 1 or 2.
Why are you considering async calls here? Keep in mind that if your async calls exist so that the ASP.NET threads don't all get used up while the DB takes a while to return, then as soon as new requests come in they too will go to the DB, increasing its workload and in turn gaining you nothing.
To be honest though, I'm having a hard time following exactly what you have in mind here. Nested controllers for...?
"The key feature here is that any level of controller’s nesting is expected"
I think I (we?) need a bit more information on that part here.
However, the warning on async still stands :)
"E.g. if two backend calls within the controllers are independent they must be performed in parallel."
If they are truly independent, you might be able to use asynchronous JavaScript calls from the client and achieve some degree of parallelism that way.