For simplicity's sake, I have a WinForms application:
main function opens FormA
FormA has a button that creates and opens FormB
FormB needs to send something to the server, so it creates an instance of the WonderService class, which has this capability.
WonderService needs an ILogger to log success/failure and an IConfig to read the service URL.
Now, the question is: if I want my system to be designed with IoC in mind, how do I pass ILogger and IConfig down the chain when FormA and FormB don't need them?
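One common answer is a composition root: only the entry point knows about ILogger and IConfig, and the forms receive only what they themselves need (FormB gets a ready-made WonderService; FormA gets a factory for FormB). A minimal sketch of that idea in Java, with all names (ILogger, IConfig, WonderService, the forms) mirroring the question and the bodies purely illustrative:

```java
// Hypothetical interfaces mirroring the question's dependencies.
interface ILogger { void log(String msg); }
interface IConfig { String serviceUrl(); }

class WonderService {
    private final ILogger logger;
    private final IConfig config;
    WonderService(ILogger logger, IConfig config) { this.logger = logger; this.config = config; }
    String send(String payload) {
        logger.log("sending to " + config.serviceUrl());
        return config.serviceUrl() + "/" + payload;
    }
}

// FormB depends only on WonderService, not on its transitive dependencies.
class FormB {
    private final WonderService service;
    FormB(WonderService service) { this.service = service; }
    String onSendClicked() { return service.send("hello"); }
}

// FormA receives a factory for FormB; it never sees ILogger or IConfig.
class FormA {
    private final java.util.function.Supplier<FormB> formBFactory;
    FormA(java.util.function.Supplier<FormB> formBFactory) { this.formBFactory = formBFactory; }
    FormB openFormB() { return formBFactory.get(); }
}

public class CompositionRoot {
    // The one place that knows every dependency: main/wire builds the graph.
    public static FormA wire() {
        ILogger logger = msg -> System.out.println("[log] " + msg);
        IConfig config = () -> "https://example.test/api";
        return new FormA(() -> new FormB(new WonderService(logger, config)));
    }
    public static void main(String[] args) {
        System.out.println(CompositionRoot.wire().openFormB().onSendClicked());
    }
}
```

With a DI container the factory lambda is replaced by the container's own factory support, but the shape is the same: the forms never mention ILogger or IConfig.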
I am developing an application with ZF2.5. I need to build an SSE (Server-Sent Events) module, but I can't manage to do that using a controller: it does not keep my connection alive (of type text/event-stream). So I am doing this in a separate PHP file, but I need authentication there, and I need to reach Zend's service manager from this file, "outside" the Zend environment.
Is that possible? Any suggestions?
Yes, you can do this from within ZF2, but it is not easy. The basis of SSE is that the connection is kept open, so you need something like a while(true) loop in PHP to keep the process running.
A controller is terminated when its action finishes, and only then is the response sent; you have to fit this logic into a controller. Next, the response handler in ZF2 buffers all output and then sends it at once. You need to rework the ZF2 output-buffering flow so you can send data directly from your controller without buffering, because otherwise your while(true) loop never sends any data until you break out of it.
So the short answer: almost anything is possible in ZF2, including your needs. But it is not that straightforward.
The alternative is to load the service manager in your stand-alone script. This is also perfectly possible. Using the application config, merged with the other configs, you build the complete configuration and provision the SM with it. Once it is instantiated, you can fully utilize its services.
Even so, instantiating the SM on its own can be hard. It is easier to instantiate the application and grab the SM from it:
$app = Zend\Mvc\Application::init(include 'config/application.config.php');
$sm = $app->getServiceManager();
Note you don't `run()` the app, only bootstrap it!
When the client calls a controller action, a new thread is created for work that takes a long time. The view is immediately returned to the user. The user should stay informed about the progress of the work, so SignalR is used.
How can I send updates only to the user who called the controller?
If I create a new thread, the HTTP context gets lost, so I don't know how to tell SignalR which client it should send the information to.
When you spawn your thread you should pass to it a user identifier, and then from the thread get a hub context and call something like:
var context = GlobalHost.ConnectionManager.GetHubContext<YourHub>();
context.Clients.User(userId).whatever();
on it. By default the user id would match the user name you get from your principal before calling the thread (so your HTTP context is still valid), but you can also check the IUserIdProvider interface for alternate ways of handling it.
If the nature of the long-running operation allows it (i.e. you don't need to render a view or anything else specific to MVC), you can implement your long-running work inside the hub method itself (always use Task<T> and await to do that) and report progress back to the client as shown here. The sample code is missing the client-side part; for that, take a look at this SO question.
This approach has another benefit. If the controller action performing the long-running operation uses ASP.NET Session (the default behavior), no other MVC actions/requests can run on the server until the long-running request finishes, because of the session lock - take a look here for more details. SignalR, on the other hand, does not use Session by default, so that is one less problem to worry about...
Oh, BTW: do not create your own threads - it's very inefficient. Use ThreadPool.QueueUserWorkItem or, better, the Task<T> API...
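The advice is not .NET-specific: hand the captured user id to pooled work instead of spawning a thread per request. A hedged sketch of the same hand-off pattern in Java, using an ExecutorService (all names here are illustrative, not part of SignalR):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PooledWork {
    // A shared pool reuses threads instead of paying the thread-creation
    // cost on every request (the point made above about ThreadPool/Task).
    static final ExecutorService pool = Executors.newFixedThreadPool(4);

    static Future<String> startLongRunningWork(String userId) {
        // Capture the user id BEFORE handing off: request-scoped context
        // (the HTTP context in the question) is not available in the pool.
        return pool.submit(() -> {
            Thread.sleep(50); // stand-in for the long-running operation
            return "done for " + userId;
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(startLongRunningWork("alice").get());
        pool.shutdown();
    }
}
```

The key design point is the same as in the answer above: everything the background work needs from the request (here just `userId`) is passed in explicitly at submission time.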
I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business-domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will carry a tenant context. However, some infrastructure messages will not, particularly those for automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work with no tenant context, because the way I interact with the database differs depending on whether I have a tenant context. The unit of work is built while spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it; specifically, I need to be able to peek at it while building the dependencies. I intended to use a tag interface to mark tenant-management messages apart from normal business-domain messages, but any way of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at Web API's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous when I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage() but I need a way of correlating it to the current unit of work I am spinning up and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if (MessageContext.Message is ITenantInfrastructureMessage)
    database = new Database(...);
else
    database = new TenantDatabase(..., MessageContext.Headers.TenantId);
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope, so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value, since it is registered as part of building the child container for the message scope.
It works extremely well, we even built up extension methods to make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
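The container specifics above are Autofac's, but the shape of the idea - build a scope per message, read the tenant id from the headers before the consumer exists, and let everything resolved in that scope share one TenantContext - can be sketched generically. A hypothetical illustration in Java (none of these names come from MassTransit):

```java
import java.util.Map;
import java.util.function.Function;

// Shared, immutable per-message tenant context.
class TenantContext {
    final String tenantId;
    TenantContext(String tenantId) { this.tenantId = tenantId; }
}

// Stand-in for a child lifetime scope: dependencies built through it
// all receive the same scoped TenantContext.
class MessageScope {
    final TenantContext tenant;
    MessageScope(TenantContext tenant) { this.tenant = tenant; }
    <T> T resolve(Function<TenantContext, T> factory) { return factory.apply(tenant); }
}

class ScopeFactory {
    // Analogous to intercepting at the consumer-factory extension point:
    // peek at the headers before the consumer is constructed, and fall
    // back to an infrastructure context when no tenant header is present.
    static MessageScope scopeFor(Map<String, String> headers) {
        String tenantId = headers.getOrDefault("TenantId", "infrastructure");
        return new MessageScope(new TenantContext(tenantId));
    }
}

public class TenantScoping {
    public static void main(String[] args) {
        MessageScope scope = ScopeFactory.scopeFor(Map.of("TenantId", "acme"));
        String db = scope.resolve(t -> "TenantDatabase(" + t.tenantId + ")");
        System.out.println(db);
    }
}
```

In the real system the "factory" side would be the container registration for the unit of work, so consumers never mention the tenant at all - exactly the retrofit goal stated in the question.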
How do I access my Remote Data Module (RDM) instance from another unit at runtime? (The RDM is single-instance.) When I create a normal Data Module descendant, Delphi creates a variable for it in the same unit (e.g. MyDM: TMyDM), but when I create an RDM descendant there is no variable.
I'm trying to set the provider of a TClientDataSet created at runtime in another unit to a TDataSetProvider in my RDM, but I can't find a reference to my RDM's instance.
I also tried to do it at design time, but while I had no problem setting the Connection property of a TSQLQuery in the same unit to a TSQLConnection in that RDM, I was not able to set the TClientDataSet's provider, because no providers from the RDM appear in the TClientDataSet's provider list.
First you need to set the RemoteServer property of your client dataset, assigning it an instance of the TLocalConnection component (which should be placed on your remote data module, since you are not using it remotely). The remote data module's unit has to be in the uses clause of the unit with the client dataset, of course.
Then you can assign the ProviderName property of your client dataset.
I did some study on TRemoteDataModule and learned that it is dedicated to supporting COM application servers.
The fact that you don't have a variable for your RDM is because you are not supposed to access it like a regular DM. The application server will instantiate the RDM in response to a remote call, just like any COM application, and it will be destroyed when no more references to that RDM exist.
Since the life cycle of that object depends on the client, not the server, holding a reference to it in the server is highly dangerous: you never know when it is valid. Besides, more than one instance may exist at a given moment, one for each client accessing that object.
Considering that, I believe it is very reasonable to tell you that it is impossible to access the RDM after it is created to perform the correction you intend to make.
If you really need to put the TDataSetProvider in a different unit, then my best suggestion is to make the RDM look for that provider in some kind of provider pool service. Doing it like this will let you find the provider you need every time a new RDM is instantiated, and only when it is instantiated.
In your place, I would add a handler to the OnCreate event of the RDM, and in that handler I would call a method like TProviderPool.GetProvider. That method would give me a provider, and I would assign its name to the ProviderName property of the CDS.
I am developing a J2EE application on Glassfish 3.1 which will use one external library that relies heavily on URLs. The URLs will use a custom protocol (e.g. db:123 where 123 is ID of a record in a database). I am having doubts on how to implement the URL protocol handler (and its URLConnection implementation), because this protocol handler will use EJBs to fetch data from the database.
The db protocol handler needs to be registered globally at JVM startup through the -Djava.protocol.handler.pkgs flag. (I couldn't find a better way to do this in Glassfish.) In any case, because it is registered at JVM startup, it has no knowledge of any EJBs that it may call at the moment of opening URL streams. So what I did is create a singleton registry class with which database handlers can register themselves. This registry is used by the URLConnection whenever a stream is requested (it searches the registry for a database handler and uses it to fetch the data).
Then, an EJB will register itself as database handler in a #PostConstruct method. This way, any time a db:XXX URL is used, EJBs will be called indirectly to fetch the data.
Somehow I feel that this design is a bit dirty, but I'm very limited because of custom URL handlers. If you have any suggestions or tips that you can give me, that would be great.
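For what it's worth, the registry-plus-handler design described above can be sketched in a few classes. All names here are illustrative (the real version would have the EJB call the register method from its @PostConstruct), and the handler is passed to the URL constructor directly so the sketch runs without the -Djava.protocol.handler.pkgs flag:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.util.concurrent.ConcurrentHashMap;

// Callback that the EJB implements; it fetches the record for an id.
interface DataFetcher { byte[] fetch(String id); }

// Singleton registry: the handler is stateless and looks up a fetcher at
// connect time, so registration can happen long after JVM startup.
class FetcherRegistry {
    private static final ConcurrentHashMap<String, DataFetcher> fetchers = new ConcurrentHashMap<>();
    static void register(String scheme, DataFetcher f) { fetchers.put(scheme, f); }
    static DataFetcher get(String scheme) { return fetchers.get(scheme); }
}

class DbUrlConnection extends URLConnection {
    DbUrlConnection(URL url) { super(url); }
    @Override public void connect() { }
    @Override public InputStream getInputStream() throws IOException {
        DataFetcher f = FetcherRegistry.get("db");
        if (f == null) throw new IOException("no fetcher registered for db:");
        // For db:123 the default URL parsing leaves "123" in the path.
        return new ByteArrayInputStream(f.fetch(getURL().getPath()));
    }
}

class DbHandler extends URLStreamHandler {
    @Override protected URLConnection openConnection(URL u) { return new DbUrlConnection(u); }
}

public class DbProtocolDemo {
    public static void main(String[] args) throws Exception {
        // In the real system the EJB would do this in @PostConstruct.
        FetcherRegistry.register("db", id -> ("record " + id).getBytes());
        URL url = new URL(null, "db:123", new DbHandler());
        try (InputStream in = url.openStream()) {
            System.out.println(new String(in.readAllBytes()));
        }
    }
}
```

The decoupling that feels "dirty" is also what makes this testable: the connection fails cleanly when nothing is registered yet, which mirrors the startup-ordering situation in the question.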