I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system with no prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages that will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work uncontexted by a tenant, because the way I interact with the database differs depending on whether I have tenant context. The unit of work is built in the process of spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically, I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant-management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage(), but I need a way of correlating it to the current unit of work I am spinning up, and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if (MessageContext.Message is ITenantInfrastructureMessage)
{
    database = new Database(...);
}
else
{
    var tenantId = MessageContext.Headers.TenantId;
    database = new TenantDatabase(..., tenantId);
}
I am working in C#/.NET, using MassTransit with RabbitMQ and Autofac, with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value, since it's registered as part of building the child container for the message scope.
It works extremely well. We even built extension methods to make it easy for developers registering consumers to specify "tenant context providers" that map a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
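To make the shape of this concrete, here is a minimal sketch of the scope-per-message pattern, assuming the hypothetical TenantContext, IDatabase/Database/TenantDatabase, and ITenantInfrastructureMessage types from the question. It is not MassTransit's exact IConsumerFactory<T> signature, just an illustration of peeking at the message while building the child scope:

using System.Collections.Generic;
using Autofac;

public class TenantConsumerFactory<TConsumer>
{
    private readonly ILifetimeScope _rootScope;

    public TenantConsumerFactory(ILifetimeScope rootScope)
    {
        _rootScope = rootScope;
    }

    public void Consume<TMessage>(TMessage message, IDictionary<string, string> headers)
        where TMessage : class
    {
        // Peek at the message *before* the consumer's dependencies are built,
        // and register the appropriate unit of work in a child scope.
        using (var messageScope = _rootScope.BeginLifetimeScope(builder =>
        {
            if (message is ITenantInfrastructureMessage)
            {
                // Infrastructure message: uncontexted unit of work.
                builder.RegisterType<Database>().As<IDatabase>();
            }
            else
            {
                // Business message: register the tenant context so that
                // TenantDatabase (and anything else that needs the tenant)
                // can take it as a constructor dependency.
                builder.RegisterInstance(new TenantContext(headers["TenantId"]));
                builder.RegisterType<TenantDatabase>().As<IDatabase>();
            }
        }))
        {
            var consumer = messageScope.Resolve<TConsumer>();
            // ...dispatch the message to the consumer here...
        }
    }
}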
Related
I have a simple question. I'm new to Microsoft's UnityContainer. I'm writing an ASP.NET MVC application with Unity for DI.
Do I have a different CONTAINER for each user connected to my web app, or is the CONTAINER the same for all users?
And if I register an object's lifetime with ContainerControlledLifetimeManager, does that mean the object is always the same within a single user session?
I hope that's clear.
Thanks,
Christian
Lifetime refers to the life of the object created by the DI process. Per-request means each request gets its own object. If the object depends on the current user, query-string values on that request, or the values/presence of request headers, a per-request lifetime is appropriate. If you have settings that vary based on the location of your service, for example values read from web.config, then the container is most likely created in global.asax and these objects can live as long as the container lives.
A concrete example:
You have a service as part of your site and you are migrating to vNext of that service. Users can opt in by clicking a link that includes a parameter like &myService=vNext to see the new behavior. Your factory method uses the value of this parameter to select vNow or vNext for each request.
Here's some pseudo code to get you started:
container.RegisterInstance<IProductFactory>("enterprise", new EnterpriseProductFactory());
container.RegisterInstance<IProductFactory>("retail", new RetailProductFactory());
container.RegisterVersionedServiceFactory<IProductFactorySettings, IProductFactory>();
In this example, RegisterVersionedServiceFactory is an extension method that does nothing but decide which of the IProductFactory instances to use for the current request. The factory selects which instance (there are only two for the life of the service) to use for this request (of which there may be thousands per second).
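For illustration, the hypothetical RegisterVersionedServiceFactory extension could be shaped something like this, using Unity's named registrations and InjectionFactory; the query-string key and the version-to-name mapping are assumptions, not a real API:

using System.Web;
using Microsoft.Practices.Unity;

public static class VersionedServiceFactoryExtensions
{
    public static void RegisterVersionedServiceFactory<TSettings, TService>(
        this IUnityContainer container)
        where TService : class
    {
        // TSettings mirrors the original call; a real implementation might
        // read the opt-in rules from it rather than hard-coding them here.
        container.RegisterType<TService>(new InjectionFactory(c =>
        {
            // Assumed opt-in flag carried on the current request.
            var optedIn = HttpContext.Current != null
                && HttpContext.Current.Request.QueryString["myService"] == "vNext";

            // Map the flag to one of the named instances registered above.
            return c.Resolve<TService>(optedIn ? "enterprise" : "retail");
        }));
    }
}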
This pattern is what makes a very large site you probably used recently both very stable and very flexible. New versions of services are rolled out using this exact same pattern to help keep the site very stable.
I'm reading the Grails docs on services, which make numerous mentions of transactions/transactionality without really defining what a transactional service method is.
Given the nature of services, they frequently require transactional behaviour.
What exactly does this mean? Are transactional methods only those that use JPA/JDBC to communicate with a relational DB, or do they apply to anything covered by JTA?
Is there any reason why I wouldn't just make a service class @Transactional in case it evolves to use a transaction some day? In other words, are there performance concerns with making all service methods transactional?
Grails services are transactional by default - if you don't want a service to be transactional, you need to remove all @Transactional annotations (both Grails' @grails.transaction.Transactional and Spring's @org.springframework.transaction.annotation.Transactional) and add
static transactional = false
If you haven't disabled transactions with the transactional property and have no annotations, the service works the same as if it were annotated with Spring's annotation. That is, at runtime Spring creates a CGLIB proxy of your class and registers an instance of the proxy as the Spring bean, and it delegates to an instance of your actual class to do the database access and business logic. This lets the proxy intercept all public method calls and start a new transaction, join an existing one, etc.
The newer Grails annotation has all of the same settings as the Spring annotation, but it works a bit differently. Instead of triggering the creation of a single proxy, each method is rewritten by an AST transform during compilation, essentially creating a mini proxy for each method (this is obviously a simplification). This is better because the database access and transaction semantics are the same, but if you call one annotated method from another annotated with different settings, the different settings will be respected. With a proxy, it's a direct call inside the delegate instance, and the proxy is bypassed; since the proxy has all of the logic to create a new transaction or apply other settings, the two methods will use the first method's settings. With the Grails annotation, every method works as expected.
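A hypothetical example of the difference (the service, methods, and settings are illustrative):

import grails.transaction.Transactional
import org.springframework.transaction.annotation.Propagation

class PaymentService {

    @Transactional
    def processPayment(long paymentId) {
        // ... business logic and database access ...
        audit(paymentId)
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    def audit(long paymentId) {
        // With the Grails annotation, this really runs in a new transaction,
        // even though it is called from processPayment() in the same class.
        // With the Spring annotation, the internal call would bypass the
        // proxy and simply join processPayment()'s transaction.
    }
}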
There is a small performance hit for transactional methods, and it can accumulate if there are a lot of calls and/or a lot of traffic. Before your code runs, a transaction is started (assuming one isn't already active); to do this, a connection must be retrieved from the pool (DataSource) and configured to turn off autocommit and apply the various transaction settings (isolation, timeout, read-only, etc.). But the Grails DataSource is actually a smart wrapper around the "real" one. It doesn't get a real JDBC Connection until you start a query, so all of the configuration settings are cached until then and "replayed" on the real connection. If the method doesn't do any database work (either because it never does, or because it exits early based on some condition before the db access code fires), then there's basically no database cost. But if it does, then things work as expected.
Don't rely on this DataSource proxying logic though - it's best to be explicit about which services are transactional and which aren't, and within each service which methods are transactional and which aren't. The best way to do this is by annotating methods as needed, or adding a single annotation at the class level if all methods use the same settings.
You can get more info in this talk I did about transactions in Grails.
First, if your performance concerns are due to the fact that your services are transactional, then you have reached nirvana. I say this because there are going to be plenty of other bottlenecks in your application long before this is a major (or even minor) concern. So, don't fret about that.
Typically in Grails, a transaction relates to the transactional state of a database connection or Hibernate session, though it could be anything managed by JTA with the proper Spring configuration.
In simple terms, it usually means (by default) a database transaction.
I know that services in Grails are singletons by default.
Is it bad practice to use private fields in a controller/service? Could anyone explain why?
Controllers are not singletons by default; they are created for each request. Services are, by default, singletons. It's not bad practice to use private fields in services; it's fairly common for services to have private fields that hold configuration state at runtime.
I suspect your concern is about using private fields as a means of storing state for a particular request within a service, which is obviously bad considering there could be N requests being serviced by the service at once. As long as you are using private fields to control the service from an application perspective, and not a request perspective, you will be fine.
Edit (further information)
As stated, services can and often do have private members. However, you should never use them to store information about the current request being processed; since the service is a singleton, that would cause interleaving issues. Only use private members to store information that is visible across all requests, typically the configuration settings of the service itself.
It's best to make your services stateless with regard to the requests they are processing. Any state you need should be encapsulated in the parameters and input/output of your service methods. Services should act on data, not the other way around.
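A hypothetical illustration of the distinction:

class ReportService {

    // Fine: application-level configuration, identical for every request
    private int maxRows = 1000

    // Bad: per-request state in a singleton - concurrent requests would
    // overwrite each other's values:
    // private String currentUserName

    String buildReport(String userName) {
        // Per-request data arrives as a parameter and stays on the stack
        "Report for ${userName} (max ${maxRows} rows)"
    }
}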
I've been doing some research on BPEL for about two weeks now and still don't quite get it.
I have deployed the HelloWorld sample in ODE and have also managed to deploy this other one.
My intention was to do something like the second example but with my own real WS deployed and working.
I'm now at the point of having a process with no errors and correctly deployed in ODE with the following structure:
I have started the project from a service definition, importing my Multiply.wsdl. The Designer has composed the import tag into the MultiplyProcessArtifacts.wsdl next to the PartnerLinkTypes, all automagically, so I assume all namespaces, etc. are OK.
There are a few concepts I misunderstand in order to make all of this work:
In my original Multiply.wsdl I have
<soap:address location="http://localhost:8080/WS-multiply/multiply"/>
but ODE tells me my soap:address must have the form host:port/ode/processes..
This doesn't sound reasonable to me since my WS could be implemented anywhere outside my ODE_HOME.
The second example I mentioned before explains how the Designer supposedly creates a "Caller.wsdl", which in fact has the function I desire: to be a "wrapper" WSDL providing the BPEL process with entry and exit points. The issue is that the Designer does not generate that interface. Am I supposed to create it myself? Do I have to create it at all?
If that third WSDL is really needed, is it the one I would have to call if I wanted to test the whole process?
It looks like your partner WSDL is associated with the myRole of a partnerLink. PartnerLinks and partnerLink types are a concept in BPEL that is used to define dual interfaces, in the sense that if a partner A wants to communicate with a BPEL process as a buyer, it needs to provide a certain set of functionality that the process can use for further communication (e.g. sending a shipment confirmation to the buyer). Thus, a partnerLink has two roles: the myRole is the portType (aka interface) that the process itself provides, while the partnerRole refers to a portType the process expects to be implemented by the partner. MyRoles must, of course, be implemented by the BPEL process, and thus need to have an endpoint that is exposed by the BPEL engine. PartnerRoles can be bound to arbitrary endpoints. This happens in the deployment descriptor, which is the deploy.xml in ODE.
I guess you can fix your process by assigning your partner WSDL to a partner role.
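For illustration, the binding could look roughly like this in ODE's deploy.xml (the process, service, port, and partnerLink names here are hypothetical): the provide element exposes the myRole endpoint under /ode/processes, while invoke binds the partnerRole to your externally hosted Multiply service:

<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://example.com/multiply/process"
        xmlns:wns="http://example.com/multiply">
  <process name="pns:MultiplyProcess">
    <active>true</active>
    <!-- myRole: exposed by the engine at http://host:port/ode/processes/... -->
    <provide partnerLink="clientPartnerLink">
      <service name="pns:MultiplyProcessService" port="MultiplyProcessPort"/>
    </provide>
    <!-- partnerRole: your own WS, hosted anywhere, e.g.
         http://localhost:8080/WS-multiply/multiply -->
    <invoke partnerLink="multiplyPartnerLink">
      <service name="wns:MultiplyService" port="MultiplyPort"/>
    </invoke>
  </process>
</deploy>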
I hope http://thiliniishaka.blogspot.com/2012/10/develop-ws-bpel-process-using-wso2.html
and http://thiliniishaka.blogspot.com/2012/10/part-2-developing-ws-bpel-process-using.html may help you to resolve the aforementioned queries.
Thanks
Thilini
It's mandatory to have ode.war deployed on the Tomcat server. Tomcat creates a directory structure like the one below, and you need to configure your endpoint with the complete path /ode/processes:
c:\apache-tomcat-7.0.55\webapps\ode\WEB-INF\processes\BPEL_WS\
I have one more doubt about RuntimeStore.
I am able to exchange strings using RuntimeStore,
but I want objects to be exchanged as well.
Example: there are 3 independent applications: A, B, and C.
A creates an object of C and shares it with B using RuntimeStore, and then B uses the same object and invokes the methods or data of C.
Can we do something like this using RuntimeStore?
I couldn't find a way.
If you have any ideas, please share them with me.
Thanks.
The Runtime Store can be used for inter-application communication. As long as your two applications maintain the same data schema, you shouldn't have a problem allowing for upgrades.
There is an example at the link that should help you get it going.
The Runtime Store can be used to exchange data, but it's better if the receiver also knows when the data was exchanged. You could use a GlobalEventListener or the Notifications Manager for this purpose. Using these, you can express interest in certain types of events and register a listener (the same way as an action listener on a button). When such an event occurs, you can read the data from the Runtime Store. The callback of the GlobalEventListener itself can accommodate the data exchange as well.
Hope it helps!
Here is an example for you to check out. Actually, this is an honest example of IPC that really answers your question. You might also want to consider the security of the data you are supposedly exchanging.
I am not sure what your intention and objective are. Let me present you with two different scenarios and provide plausible solutions.
Scenario 1: there exists a third-party application A, and you are authoring application B. Your application is interested in invoking some services of A, or some protected/sensitive/personal features. If this is the case, you can use the specific, predefined, and well-known permissions that are defined here, and then ask the system to grant you those permissions. The system in turn informs the user that application B is requesting permission for a list of operations, and a UI is presented to the user to request the permissions. If the user grants them, your application will in turn be granted access; otherwise, your application will be denied permission.
Scenario 2: you are the sole author of two applications A and B and have complete control over both. Your objective here is to exchange some data securely, even in the presence of rogue applications that can sniff data. In this case, you can use the Application Manager's postGlobalEvent and a GlobalEventListener to signal when exactly to exchange the data. You can then use the RuntimeStore to exchange the data; to secure this exchange, you can sign the data with your keys before placing it in the runtime store. Only entities that can provide the right credentials will be granted access to your data in the runtime store. This is called controlled access to private data:
RuntimeStore.getRuntimeStore().put( MY_DATA_ID, new ControlledAccess( myHashtable, codeSigningKey ) ); // in application A
Hashtable myHashtable = (Hashtable) RuntimeStore.getRuntimeStore().get( MY_DATA_ID, codeSigningKey ); // in application B
The notification between applications A and B can also be facilitated by the Notifications Manager.
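A rough sketch of the signaling side, assuming a hypothetical shared event id (imports and error handling omitted):

// Shared by both applications; the id value here is hypothetical.
static final long DATA_READY_EVENT = 0x8a2f4c1e9d3b7650L;

// Application A: publish the signed data, then signal application B.
RuntimeStore.getRuntimeStore().put( MY_DATA_ID, new ControlledAccess( myHashtable, codeSigningKey ) );
ApplicationManager.getApplicationManager().postGlobalEvent( DATA_READY_EVENT );

// Application B: listen for the signal, then read from the store.
Application.getApplication().addGlobalEventListener( new GlobalEventListener() {
    public void eventOccurred( long guid, int data0, int data1, Object obj0, Object obj1 ) {
        if ( guid == DATA_READY_EVENT ) {
            Hashtable data = (Hashtable) RuntimeStore.getRuntimeStore().get( MY_DATA_ID, codeSigningKey );
            // ...use the data...
        }
    }
} );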
So, let us know what exactly it is you are trying to accomplish, and we can direct you to code examples accordingly.