Why doesn't MassTransit `ServiceCollectionBusConfigurator` add `IConsumer` to `DI` via `AddConsumer`? - dependency-injection

I am working on a Blazor Server .NET 5 project which uses the nice MassTransit 7.0.5-develop2976 framework (thanks Chris, by the way!).
I am curious why MassTransit's ServiceCollectionBusConfigurator doesn't add consumers to DI when I call .AddConsumer<T>(). As a result, I am getting an "Unable to resolve consumer type..." exception like the one below. See the workaround further down.
For instance, the similar ServiceCollectionMediatorConfigurator does register my IConsumer<> type when I add it via that configurator.
Here is an example of the exception I get when MassTransit tries to resolve my consumer (in my case it happens when a recurring scheduled job is triggered, but that doesn't matter):
MassTransit.ConsumerException: Unable to resolve consumer type 'SomeMyConsumer'.
at MassTransit.ExtensionsDependencyInjectionIntegration.ScopeProviders.DependencyInjectionConsumerScopeProvider.MassTransit.Scoping.IConsumerScopeProvider.GetScope[TConsumer,T](ConsumeContext`1 context)
at MassTransit.Scoping.ScopeConsumerFactory`1.Send[TMessage](ConsumeContext`1 context, IPipe`1 next)
at MassTransit.Pipeline.Filters.ConsumerMessageFilter`2.GreenPipes.IFilter<MassTransit.ConsumeContext<TMessage>>.Send(ConsumeContext`1 context, IPipe`1 next)
The question: what is the reason for such behaviour from an architectural perspective? Am I missing anything?
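For reference, the registration in question looks roughly like this (a simplified sketch; transport and endpoint configuration are omitted, and SomeMyConsumer stands in for my real consumer):
services.AddMassTransit(busConfigurator =>
{
    // busConfigurator is backed by ServiceCollectionBusConfigurator.
    // My expectation was that this call alone would also register
    // SomeMyConsumer itself in the IServiceCollection.
    busConfigurator.AddConsumer<SomeMyConsumer>();

    // ... transport / endpoint configuration omitted ...
});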

As a workaround, if you explicitly add your consumer to the IServiceCollection as in the example below, the error disappears, of course:
services.AddScoped<SomeMyConsumer>();
... or even just register the same consumer via ServiceCollectionMediatorConfigurator as well:
services.AddMediator(configurator =>
{
    configurator.AddConsumer<SomeMyConsumer>();
});
P.S. IMHO, similar abstractions should behave similarly if they use the same method signature, shouldn't they?

Related

Is there a method in QuickFIX for returning an execution report acknowledgement message?

I have initiator and acceptor applications in Java. I'm using the FIX 4.2 protocol.
I'm sending Execution Reports via the acceptor and receiving them with the initiator. There's no problem there. What I need is to return an execution report acknowledgement message (type: BN) to the acceptor. In the FIX 4.2 standard there are no BN messages, so I will probably add those fields to the data dictionary myself.
I checked the QuickFIX user manual. There are some example methods for sending messages:
void sendOrderCancelRequest() throws SessionNotFound
{
    quickfix.fix41.OrderCancelRequest message = new quickfix.fix41.OrderCancelRequest(
        new OrigClOrdID("123"),
        new ClOrdID("321"),
        new Symbol("LNUX"),
        new Side(Side.BUY));
    message.set(new Text("Cancel My Order!"));
    Session.sendToTarget(message, "TW", "TARGET");
}
Should I write a method like the one above and call it inside the onMessage method? How can I respond to these messages?
QF does not automatically do this for you.
You will need to implement your own logic to create the ack message and send it.
And yes, you are correct that you will need to add BN and its fields to your DataDictionary. I would then recommend that you re-generate the QF/j source and rebuild the library so that you can have proper BN message/field classes. (The QF/j documentation should be able to guide you with this.)

IncompatibleWorkflowDefinition when canceling the workflow execution

I am testing workflow cancellation logic with the Flow library. The code cancels the workflow within the decider code, but it throws IncompatibleWorkflowDefinition:
com.amazonaws.services.simpleworkflow.flow.worker.IncompatibleWorkflowDefinition: Unknown DecisionId [type=EXTERNAL_WORKFLOW, id=735] The possible causes are nondeterministic workflow definition code or incompatible change in the workflow definition.
I don't understand why it breaks the logic. Can someone explain why it makes the workflow nondeterministic? The code is like this:
@Override
public void dosomething(final Input input) {
    checkInput();
    cancelCurrentWorkflow();
    asyncMethod();
}

private void cancelCurrentWorkflow() {
    contextProvider.getDecisionContext().getWorkflowClient().requestCancelWorkflowExecution(
        contextProvider.getDecisionContext().getWorkflowContext().getWorkflowExecution());
}

@Asynchronous
asyncMethod()
A workflow cancelling itself doesn't make sense. Cancellation is usually an operation invoked from the outside using the SWF requestCancelWorkflowExecution API.
If you need to cancel a certain part of the workflow code, use the TryCatchFinally.cancel method.
BTW, are you aware of Cadence Workflow, which is an open source reincarnation of SWF? It has a much more developer-friendly Java client that doesn't use code generation or AspectJ. It also allows writing blocking synchronous code inside the workflow.

How does one inject dependencies like a logger, database connection, or SHA256 generator in Iron? [duplicate]

In writing my tests, I'd like to be able to inject a connection into the request so that I can wrap the entire test case in a transaction (even if there is more than one request in the test case).
I've attempted to do this using a BeforeMiddleware which I can link in my test cases to insert a connection, as such:
pub type DatabaseConnection = PooledConnection<ConnectionManager<PgConnection>>;

pub struct DatabaseOverride {
    conn: DatabaseConnection,
}

impl BeforeMiddleware for DatabaseOverride {
    fn before(&self, req: &mut Request) -> IronResult<()> {
        req.extensions_mut().entry::<DatabaseOverride>().or_insert(self.conn);
        Ok(())
    }
}
However, I'm encountering a compile error in trying to do this:
error: the trait bound `std::rc::Rc<diesel::pg::connection::raw::RawConnection>: std::marker::Sync` is not satisfied [E0277]
impl BeforeMiddleware for DatabaseOverride {
^~~~~~~~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::rc::Rc<diesel::pg::connection::raw::RawConnection>` cannot be shared between threads safely
note: required because it appears within the type `diesel::pg::PgConnection`
note: required because it appears within the type `r2d2::Conn<diesel::pg::PgConnection>`
note: required because it appears within the type `std::option::Option<r2d2::Conn<diesel::pg::PgConnection>>`
note: required because it appears within the type `r2d2::PooledConnection<r2d2_diesel::ConnectionManager<diesel::pg::PgConnection>>`
note: required because it appears within the type `utility::db::DatabaseOverride`
note: required by `iron::BeforeMiddleware`
error: the trait bound `std::cell::Cell<i32>: std::marker::Sync` is not satisfied [E0277]
impl BeforeMiddleware for DatabaseOverride {
^~~~~~~~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::cell::Cell<i32>` cannot be shared between threads safely
note: required because it appears within the type `diesel::pg::PgConnection`
note: required because it appears within the type `r2d2::Conn<diesel::pg::PgConnection>`
note: required because it appears within the type `std::option::Option<r2d2::Conn<diesel::pg::PgConnection>>`
note: required because it appears within the type `r2d2::PooledConnection<r2d2_diesel::ConnectionManager<diesel::pg::PgConnection>>`
note: required because it appears within the type `utility::db::DatabaseOverride`
note: required by `iron::BeforeMiddleware`
Is there a way around this with diesel's connections? I've found several examples on Github to do this using the pg crate, but I'd like to keep using diesel.
This answer will certainly solve the problem, but it's not optimal. As mentioned, you can't share a single connection as it's not thread safe. However, while wrapping it in a Mutex makes it thread-safe, it would force all the server threads to use a single connection. Instead, you want to use a connection pool.
You can accomplish this with the r2d2 and r2d2-diesel crates. This will establish multiple connections as needed, and reuse them when possible in a thread safe manner.
Since there isn't enough code provided for me to reproduce your issue, I've made this:
use std::cell::Cell;
trait Middleware: Sync {}
struct Unsharable(Cell<bool>);
impl Middleware for Unsharable {}
fn main() {}
which has the same error:
error: the trait bound `std::cell::Cell<bool>: std::marker::Sync` is not satisfied [E0277]
impl Middleware for Unsharable {}
^~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::cell::Cell<bool>` cannot be shared between threads safely
note: required because it appears within the type `Unsharable`
note: required by `Middleware`
You can solve the problem by changing the type to make it cross-thread compatible:
use std::sync::Mutex;
struct Sharable(Mutex<Unsharable>);
impl Middleware for Sharable {}
Note that Rust has done a very good thing for you: it prevented you from using a type that is unsafe to be called in multiple threads.
In writing my tests, I'd like to be able to inject a connection into the request so that I can wrap the entire test case in a transaction (even if there is more than one request in the test case).
I'd suggest that it's possible an architectural change would be even better. Separate the domains of "web framework" from your "database". The authors of Growing Object-Oriented Software, Guided by Tests (a highly recommended book) advocate for this style.
Pull apart your code such that there is a method that simply accepts some type that can start / end a transaction, write the interesting stuff there, and test it thoroughly. Then have just enough glue code in the web layer to create a transaction object, then call the next layer down.

Output Spring-WS Generated WSDL Location

This seems like a simple question to me:
I have a project where I automatically generate a Spring-WS WSDL, something like this:
<sws:dynamic-wsdl id="service"
                  portTypeName="Service"
                  locationUri="/Service/"
                  targetNamespace="http://location.com/Service/schemas/Mos">
    <sws:xsd location="classpath:/META-INF/Service.xsd"/>
</sws:dynamic-wsdl>
Is there a way, on application context startup, to output the generated address of the WSDL, including context, location, etc.? This would be handy: if our integration tests start to fail, we can check whether the location of the WSDL has changed.
As far as I know, you can find the WSDL at http://yourHost/yourServletContext/beanId.wsdl. In your case, beanId is 'service'.
Check out 3.7. Publishing the WSDL in the Spring-WS documentation for more information about this subject.
If you plan to expose your XSDs as well, the beanId.xsd format (or, in my case, the method name in the @Configuration class) will be used. For instance:
private ClassPathResource exampleXsdResource = new ClassPathResource("example.xsd");

@Bean
public SimpleXsdSchema example() {
    return new SimpleXsdSchema(exampleXsdResource);
}
This exposes an XSD at http://yourHost/yourServletContext/example.xsd.

MvcIntegrationTestFramework or an alternative updated for ASP.NET MVC 3

I'm interested in using Steve Sanderson’s MvcIntegrationTestFramework or a very similar alternative with ASP.NET MVC 3 Beta.
Currently when compiling MvcIntegrationTestFramework against MVC 3 Beta I get the following error due to changes in MVC:
Error 6
'System.Web.Mvc.ActionDescriptor.GetFilters()' is obsolete: '"Please call System.Web.Mvc.FilterProviders.Providers.GetFilters() now."' \MvcIntegrationTestFramework\Interception\InterceptionFilterActionDescriptor.cs Line 18
Questions
Can anybody provide the MvcIntegrationTestFramework working for ASP.NET MVC 3 Beta?
--- and / or ---
Are there similar alternatives you would recommend?
EDIT #1: Note that I have e-mailed Steve, the creator of MvcIntegrationTestFramework, hoping for some feedback there as well.
EDIT #2 & #3: I have received a message from Steve. Quoted for your reference:
I haven't needed to use that project with MVC 3, so sorry, I don't have an updated version of it. As far as I'm aware it should be possible to update it to work on MVC 3, but you'd need to figure that out perhaps by inspecting the MVC 3 source code to notice any changes in how actions, filters, etc are invoked now. If you do update it, and if you decide to adopt it as an ongoing project (e.g., putting it on Github or similar), let me know and I'll post a link to it! (Thanks Steve!)
EDIT #4: Honestly, I had a quick stab at using System.Web.Mvc.FilterProviders.Providers.GetFilters() but didn't get anywhere fast, and after simply adding the [Obsolete] attribute I found that there was an error in the internals of the MVC requests. Has anybody else had a dabble?
EDIT #5: Please comment if you are using an alternative Integration Test Framework with MVC 3.
Have a look at my fork:
https://github.com/JonCanning/MvcIntegrationTestFramework/
I realize this is not the answer you're looking for, but Selenium or WatiN may be of use to you as an alternative to the Integration Test Framework.
Selenium will let you record tests as NUnit code so you can integrate them with your existing test projects, and your tests can then validate the DOM similarly to the Integration Test Framework. The advantage is that Selenium tests can be executed with a variety of browsers.
The key caveat is that Selenium needs your app to be deployed on a web server; I'm not sure if that's a show stopper for you.
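To give a rough idea of the shape such a test takes, here is a minimal sketch using the Selenium WebDriver .NET bindings with NUnit (the URL, page element, and ChromeDriver are illustrative assumptions; recorded tests may look different):
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class HomePageBrowserTests
{
    private IWebDriver driver;

    [SetUp]
    public void StartBrowser()
    {
        // Launches a real browser; the site must already be deployed and reachable.
        driver = new ChromeDriver();
    }

    [TearDown]
    public void StopBrowser()
    {
        driver.Quit();
    }

    [Test]
    public void HomePage_Renders_A_Heading()
    {
        driver.Navigate().GoToUrl("http://localhost:8080/");
        IWebElement heading = driver.FindElement(By.TagName("h1"));
        Assert.That(heading.Text, Is.Not.Empty);
    }
}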
I thought I would share my experiences with using MvcIntegrationTestFramework in an ASP.NET MVC 4 project. In particular, the ASP.NET MVC 4 project was a Web Role for a Windows Azure Cloud Service.
The example project from Jon Canning's fork worked for me (although I did change the System.Web.Mvc assembly reference from 3.0.0.0 to 4.0.0.0, which required a bunch of editing in the web.config file to get the tests to run and pass), but I got an error whenever I tried to run the same tests against an Azure ASP.NET MVC 4 Web Role project. The error was:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
The inner exception was:
System.InvalidOperationException: This method cannot be called during the application's pre-start initialization phase.
I started wondering how an Azure Web Role project based on ASP.NET MVC 4 was different to a normal ASP.NET MVC 4 project, and how such a difference would cause this error. I did a bit of searching on the web but didn't come across anybody trying to do the same thing that I was doing. Soon enough I managed to realise that it was to do with the Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener. Part of the role of this class seems to be to ensure that the web role is running in a hosted service or the Development Fabric (you'll see a message to this effect if you switch the startup project from the cloud service project to the web role project inside a cloud service solution, and then try to debug).
The solution? I removed the corresponding listener from the Web.config file of my Web Role project:
<configuration>
  ...
  <system.diagnostics>
    <trace>
      <listeners>
        <!-- Remove this next 'add' element -->
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
  ...
</configuration>
I was then able to run integration tests as normal against my Web Role project. I did, however, add the listener to the Web.Debug.config and Web.Release.config transformation files so that everything was still the same for normal deploying and debugging.
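For what it's worth, the transform entry ends up roughly like this (a sketch assuming the standard XML-Document-Transform syntax; adjust it to match how your transform files are set up):
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.diagnostics>
    <trace>
      <listeners>
        <!-- Re-insert the Azure trace listener for builds deployed the normal way. -->
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics"
             xdt:Transform="Insert">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>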
Maybe that will help somebody looking to use the MvcIntegrationTestFramework for Azure development.
EDIT
I just realised that this solution might be a bit of a 'hack' because it might not let you do integration testing on application code that relates to Azure components (e.g. the special Azure caching mechanisms perhaps). That said, I haven't come across any issues to do with this yet, although I also haven't really written that many integration tests yet either...
I used Jon Canning's updated version (https://github.com/JonCanning/MvcIntegrationTestFramework/) and it solved my problem very well for controller methods that only accept value types and strings, but did not work for those that accepted classes.
It turns out there was an issue with the code for the updated MvcIntegrationTestFramework.
I figured out how to fix it, but don't know where else to post the solution, so here it is:
A simple sample to show how it works is:
[TestMethod]
public void Account_LogOn_Post_Succeeds()
{
    string loginUrl = "/Account/LogOn";
    appHost.Start(browsingSession =>
    {
        var formData = new RouteValueDictionary
        {
            { "UserName", "myusername" },
            { "Password", "mypassword" },
            { "RememberMe", "true" },
            { "returnUrl", "/myreturnurl" },
        };
        RequestResult loginResult = browsingSession.Post(loginUrl, formData);
        // Add your test assertions here.
    });
}
The call to browsingSession.Post would ultimately cause the NameValueCollectionConversions.ConvertFromRouteValueDictionary(object anonymous) method to be called, and the code for that was:
public static class NameValueCollectionConversions
{
    public static NameValueCollection ConvertFromObject(object anonymous)
    {
        var nvc = new NameValueCollection();
        var dict = new RouteValueDictionary(anonymous); // ** Problem 1
        foreach (var kvp in dict)
        {
            if (kvp.Value == null)
            {
                throw new NullReferenceException(kvp.Key);
            }
            if (kvp.Value.GetType().Name.Contains("Anonymous"))
            {
                var prefix = kvp.Key + ".";
                foreach (var innerkvp in new RouteValueDictionary(kvp.Value))
                {
                    nvc.Add(prefix + innerkvp.Key, innerkvp.Value.ToString());
                }
            }
            else
            {
                nvc.Add(kvp.Key, kvp.Value.ToString()); // ** Problem 2
            }
        }
        return nvc;
    }
}
There were two problems:
1. The call to new RouteValueDictionary(anonymous) would cause the "new" RouteValueDictionary to be created, but instead of 4 keys there were only three, one of which was an array of the 4 items.
2. When it hit this line: nvc.Add(kvp.Key, kvp.Value.ToString()), the kvp.Value was an array, and the ToString() gave:
"System.Collections.Generic.Dictionary'2+ValueCollection[System.String,System.Object]"
The fix (to my specific issue) was to change the code as follows:
var dict = anonymous as RouteValueDictionary; // creates it properly
if (null == dict)
{
    dict = new RouteValueDictionary(anonymous);
}
After I made this change, my model class would bind properly, and all was well.
