I know that services in Grails are singletons by default.
Is it bad practice to use private fields in a controller/service? Could anyone explain why?
Controllers are not singletons by default; they are created for each request. Services are, by default, singletons. It's not bad practice to use private fields in services. It's fairly common for services to have private fields that hold configuration state at runtime.
I suspect your concern is about using private fields as a means of storing state for a particular request within a service, which is obviously bad considering there could be N requests being serviced by the service at once. As long as you use private fields to control the service from an application perspective and not a request perspective, you will be fine.
Edit (further information)
As stated, services can and often do have private members. However, you should never use them to store information about the current request being processed; since the service is a singleton, that would cause interleaving issues. Only use private members to store information that applies across all requests. Typically these will be configuration settings of the service itself.
It's best to make your services stateless with regard to the requests they are processing. Any state you need should be encapsulated in the parameters or input/output of your service methods. Services should act on data, not the other way around.
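To make the distinction concrete, here is a minimal sketch (Java for illustration; Grails services are written in Groovy, but the rule is the same for any singleton bean). The class and field names are invented:

```java
// A singleton service: the only field is configuration that applies to
// every request. Per-request data (userId, requestedRows) travels through
// method parameters and the return value, never through fields.
class ReportService {
    // Shared across all requests -- safe, because it is effectively
    // configuration and never changes per request.
    private final int maxRows;

    ReportService(int maxRows) {
        this.maxRows = maxRows;
    }

    // Request state comes in as parameters and leaves as the return
    // value; nothing about this particular call is stored on the object.
    String buildReport(String userId, int requestedRows) {
        int rows = Math.min(requestedRows, maxRows);
        return userId + ":" + rows;
    }
}
```

Because no field is written during a call, any number of concurrent requests can share the one instance without interleaving problems.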
Related
I have a simple question. I'm new to Microsoft's UnityContainer. I'm writing an ASP.NET MVC application with Unity for DI.
Do I get a different container for each user connected to my web app, or is the container the same for all users?
And if I resolve the lifetime of an object with ContainerControlledLifetimeManager, does that mean the object is always the same within one user session?
I hope the question is clear.
Thanks,
Christian
Lifetime refers to the life of the object created by the DI process. Per-request means each request gets its own object. If the object depends on the current user, on query string values of that request, or on the values/presence of request headers, a per-request lifetime is appropriate. If you have settings that vary based on the location of your service (for example, values read from web.config), then the container is most likely created in Global.asax and these objects can live as long as the container lives.
A concrete example:
You have a service as part of your site and you are migrating to vNext of that service. Users can opt in by clicking a link that includes a parameter like &myService=vNext to see the new behavior. Your factory method uses the value of this parameter to select vNow or vNext for each request.
Here's some pseudo code to get you started:
container.RegisterInstance<IProductFactory>("enterprise", new EnterpriseProductFactory());
container.RegisterInstance<IProductFactory>("retail", new RetailProductFactory());
container.RegisterVersionedServiceFactory<IProductFactorySettings, IProductFactory>();
In this example, RegisterVersionedServiceFactory is an extension method that does nothing but decide which of the IProductFactory instances to use for the current request. The factory provides the current instance (there are only two for the life of the service) to use for this request (thousands per second).
This pattern is what makes a very large site you probably used recently both very stable and very flexible. New versions of services are rolled out using this exact same pattern to help keep the site very stable.
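As a rough illustration of the pattern (plain Java rather than Unity, with invented names), the two factory instances live for the life of the application while a small selector picks one per request:

```java
import java.util.Map;

// Two long-lived factory implementations, one per service version.
interface ProductFactory {
    String version();
}

class VNowProductFactory implements ProductFactory {
    public String version() { return "vNow"; }
}

class VNextProductFactory implements ProductFactory {
    public String version() { return "vNext"; }
}

// Stand-in for the "versioned service factory" registration: both
// instances exist for the life of the app; selection happens per request.
class VersionedFactorySelector {
    private final ProductFactory vNow = new VNowProductFactory();
    private final ProductFactory vNext = new VNextProductFactory();

    // queryParams stands in for the request's query string; users opt in
    // with ?myService=vNext, everyone else gets the current version.
    ProductFactory select(Map<String, String> queryParams) {
        return "vNext".equals(queryParams.get("myService")) ? vNext : vNow;
    }
}
```

The container-lifetime objects (the factories) are cheap to share; only the per-request decision is computed thousands of times per second.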
I'm reading the Grails docs on services, which make numerous mentions of transactions/transactionality but never really define what a transactional service method is.
Given the nature of services, they frequently require transactional behaviour.
What exactly does this mean? Are transactional methods only those that use JPA/JDBC to communicate with a relational DB, or do they apply to anything covered by JTA?
Is there any reason why I wouldn't just make a service class @Transactional in case it evolves to use a transaction some day? In other words, are there performance concerns to making all service methods transactional?
Grails services are transactional by default - if you don't want a service to be transactional, you need to remove all @Transactional annotations (both Grails' @grails.transaction.Transactional and Spring's @org.springframework.transaction.annotation.Transactional) and add
static transactional = false
If you haven't disabled transactions with the transactional property and have no annotations, the service works the same as if it were annotated with Spring's annotation. That is, at runtime Spring creates a CGLIB proxy of your class and registers an instance of the proxy as the Spring bean, and it delegates to an instance of your actual class to do the database access and your business logic. This lets the proxy intercept all public method calls and join an existing transaction, start a new one, etc.
The newer Grails annotation has all of the same settings as the Spring annotation, but it works a bit differently. Instead of triggering the creation of a single proxy, each method is rewritten by an AST transform during compilation, essentially creating a mini proxy for each method (this is obviously a simplification). This is better because the database access and transaction semantics are the same, but now calling one annotated method from another annotated with different settings works: the second method's settings are respected. With a proxy, that inner call is a direct call inside the delegate instance, and the proxy is bypassed; since the proxy holds all of the logic to create a new transaction or apply other settings, both methods end up running with the first method's settings. With the Grails annotation, every method works as expected.
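The proxy-bypass problem can be illustrated with a hand-rolled stand-in for the Spring proxy (a sketch, not Spring's actual machinery):

```java
// The real business class. outer() calls inner() directly on `this`,
// so the wrapping proxy never sees the inner call.
class Target {
    int directInnerCalls = 0;

    void outer() {
        inner(); // direct call -- bypasses any proxy
    }

    void inner() {
        directInnerCalls++;
    }
}

// Stand-in for a Spring-style transactional proxy: it "starts a
// transaction" before delegating each public call to the target.
class TransactionProxy {
    final Target target = new Target();
    int transactionsStarted = 0;

    void outer() {
        transactionsStarted++; // transaction with outer()'s settings
        target.outer();        // inner() runs inside it, unseen by the proxy
    }

    void inner() {
        transactionsStarted++; // only external callers ever reach this
        target.inner();
    }
}
```

Calling `proxy.outer()` runs `inner()` exactly once, yet only one transaction is started: `inner()`'s own settings never get a chance to apply, which is precisely what the per-method AST transform avoids.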
There is a small performance hit involved for transactional methods, and this can accumulate if there are a lot of calls and/or a lot of traffic. Before your code runs, a transaction is started (assuming one isn't active); to do this, a connection must be retrieved from the pool (DataSource) and configured: autocommit is turned off and the various transaction settings (isolation, timeout, read-only, etc.) are applied. But the Grails DataSource is actually a smart wrapper around the "real" one. It doesn't get a real JDBC connection until you start a query, so all of the configuration settings are cached until then and are "replayed" on the real connection. If the method doesn't do any database work (either because it never does, or because it exits early based on some condition before the db access code fires), then there's basically no database cost. But if it does, then things work as expected.
Don't rely on this DataSource proxying logic though - it's best to be explicit about which services are transactional and which aren't, and within each service which methods are transactional and which aren't. The best way to do this is by annotating methods as needed, or adding a single annotation at the class level if all methods use the same settings.
You can get more info in this talk I did about transactions in Grails.
First, if your main performance concern is that your services are transactional, then you have reached nirvana. I say this because there are going to be plenty of other bottlenecks in your application long before this is a major (or even minor) concern. So don't fret about that.
Typically in Grails, a transaction relates to the transactional state of a database connection or Hibernate session, though it could be anything managed by JTA with the proper Spring configuration.
In simple terms, it usually means (by default) a database transaction.
I'm building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work uncontexted by any tenant, because the way I interact with the database differs depending on whether I have tenant context. The unit of work is built while spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically, I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant-management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at Web API's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage(), but I need a way of correlating it to the current unit of work I am spinning up, and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if MessageContext.Message is ITenantInfrastructureMessage:
    database = new Database(...)
else:
    tenantId = MessageContext.Headers.TenantId
    database = new TenantDatabase(..., tenantId)
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value, since it's registered as part of building the child container for the message scope.
It works extremely well. We even built extension methods to make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
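A rough sketch of the idea, in Java with invented names (TenantContext, MessageEnvelope, TenantScopeFactory) rather than real MassTransit/Autofac APIs: peek at the message before building the unit of work, and carry the tenant decision in a context object that the rest of the dependency graph consumes:

```java
import java.util.Map;
import java.util.Optional;

// Registered in the per-message child scope; consumers and their
// dependencies resolve this instead of inspecting the message themselves.
class TenantContext {
    final Optional<String> tenantId; // empty = infrastructure unit of work
    TenantContext(Optional<String> tenantId) { this.tenantId = tenantId; }
}

// Minimal stand-in for a consumed message plus its transport headers.
class MessageEnvelope {
    final Object message;
    final Map<String, String> headers;
    MessageEnvelope(Object message, Map<String, String> headers) {
        this.message = message;
        this.headers = headers;
    }
}

// The tag interface from the question, marking tenant-management messages.
interface TenantInfrastructureMessage {}

class TenantScopeFactory {
    // Decide, per message, which kind of unit of work to build.
    TenantContext contextFor(MessageEnvelope envelope) {
        if (envelope.message instanceof TenantInfrastructureMessage) {
            return new TenantContext(Optional.empty());
        }
        return new TenantContext(
            Optional.ofNullable(envelope.headers.get("TenantId")));
    }
}
```

In the real system this decision runs while building the child lifetime scope, so by the time the consumer and its dependencies are resolved, the right database/unit of work is already in place.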
I'm planning on implementing a single-page application in Rails/AngularJS which also has some pieces that are exposed as a "public" API. My question is, what's the best way to architect the two APIs in such an application? E.g. Is it wise to have them both housed/versioned in the same namespace, or should they be kept separate somehow?
This is relatively new territory for me, but at first blush it seems like the best approach would be to provide a single API covering both internal and external needs, then control which pieces are available via some kind of authorization system based on the provided token.
Is this the right direction, or would you recommend some other path?
FWIW, I will give you my opinion.
CAVEAT: I'm not a rails guy so I'm coming at this from nodejs/expressjs land.
There are many ways to skin this cat, but I'll just say that you are headed in the right direction. If you want to look at a very opinionated way to do things in node (one some people might hate), see this: https://github.com/DaftMonk/fullstack-demo/blob/master/server/api/user/index.js. Here you see this bit:
var router = express.Router();
router.get('/', auth.hasRole('admin'), controller.index);
router.delete('/:id', auth.hasRole('admin'), controller.destroy);
router.get('/me', auth.isAuthenticated(), controller.me);
router.put('/:id/password', auth.isAuthenticated(), controller.changePassword);
router.get('/:id', auth.isAuthenticated(), controller.show);
router.post('/', controller.create);
These routes correspond to calls to http://serverurl/api/user/ etc. Obviously, these all check authentication, but you could easily create a resource route that didn't need to check authentication before passing control to the controller and (eventually) sending back a resource.
The approach this takes is to have middleware on the server check for auth tokens to make sure the client can call the API. Without making you look into the code too much, I'll just give you a basic rundown:
client(requests Auth)->server(approves passes back token)->client(stores token)
LATER:
client(requests api call, sends token in request)->server(passes request to middleware that checks the token to make sure it's kosher)->server(sends back resource and token)->client(uses resource and stores token)
then the whole thing repeats.
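The token check in that flow is language-agnostic; here is a minimal sketch of the middleware decision (in Java for illustration, with an invented token-to-role lookup):

```java
import java.util.Map;

// Stand-in for auth middleware: given the token sent with a request and
// the role a route requires, decide what status to return before any
// controller code runs. The token store is invented for illustration.
class AuthMiddleware {
    private final Map<String, String> tokenToRole; // issued at login

    AuthMiddleware(Map<String, String> tokenToRole) {
        this.tokenToRole = tokenToRole;
    }

    // 200: known token with the required role (pass to controller)
    // 403: known token, wrong role
    // 401: unknown/missing token
    int check(String token, String requiredRole) {
        String role = tokenToRole.get(token);
        if (role == null) return 401;
        return role.equals(requiredRole) ? 200 : 403;
    }
}
```

A public route simply skips this check and hands control straight to the controller, which is all "public API" means in this scheme.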
As far as whether to have separate APIs vs. one namespace, I don't have a very strong opinion. It really depends on how you structure your app. If you know in advance which resources will be public, then it's probably easy to create a namespaced API.
Angular can easily adapt to multiple API calls. You can create services for your public vs. private HTTP calls (or whatever way you decide to call the API).
Hope this was somewhat helpful! Sorry it's not railsy! (But nodejs/express is awesome!)
From One DbContext per web request... why?
My understanding is that a DbContext instance should not be shared across concurrent web requests, and definitely not across threads.
But how about sharing it across non-concurrent web requests?
Due to thread agility (What is the meaning of thread-agility in ASP.Net?), am I right that a thread can handle more than one web request before it dies?
If so, is it safe to dependency inject a DbContext instance for each thread?
The reason is that I'm using Unity, which does not include a per-request lifetime option.
From MVC, EF - DataContext singleton instance Per-Web-Request in Unity, I think I could use a custom LifetimeManager; I'm just wondering if it is safe and sufficient to use PerThreadLifetimeManager.
is it safe to dependency inject a DbContext instance for each thread?
It depends. If your idea is to have one DbContext per web request, and the consistency of your application depends on it, having one DbContext per thread is a bad idea, since a single web request can still be served by multiple threads and thus get multiple DbContext instances. And since ASP.NET pools threads, instances that are cached per thread will live for the duration of the entire application, which is very bad for a DbContext (as explained here).
On the other hand, you might be able to come up with a caching scheme that ensures that a single DbContext is used for a single web request and is returned to a pool when the request is finished, so other web requests could pick it up. This is basically the way connection pooling in .NET works. But since DbContext instances cache data, that data becomes stale very quickly; even if you were able to come up with a thread-safe solution, your system would still behave inconsistently, since at seemingly random moments old data would be shown to the user while a following request shows new data.
I think it is possible to clear the DbContext's cache at the beginning of a web request, but that would basically be the same as creating a new DbContext for that request, with the downside of much slower performance.
I'm just wondering if it is safe and sufficient to use PerThreadLifetimeManager.
No, it isn't safe because of the reasons described above.
But it is actually quite easy to register a DbContext on a per-web request basis:
container.RegisterType<MyApplicationEntities>(new InjectionFactory(c => {
    var context = (MyApplicationEntities)HttpContext.Current.Items["__dbcontext"];
    if (context == null) {
        context = new MyApplicationEntities();
        HttpContext.Current.Items["__dbcontext"] = context;
    }
    return context;
}));
If you use multiple DbContexts per request, you can end up with a multithreaded application with a high chance of losing data integrity. One web request can be viewed as one transaction; but with PerThreadLifetimeManager you would have multiple distinct transactions which are unrelated, though perhaps they should be related. For example, posting a form with a lot of data can end up saving to multiple database tables, and with two or more independent contexts it can happen that one insert succeeds while another fails, leaving you with inconsistent data.
The other important thing is that the ASP.NET infrastructure uses thread pooling, so every started thread will be reused after the request has finished, and if something went wrong in one request it can affect another one. That is why it is not recommended to use any thread-local storage (statics tied to the thread) in a thread-pool environment: we cannot control the threads' lifetime.
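The thread-reuse hazard is easy to demonstrate (self-contained Java sketch; the concept applies equally to ASP.NET's thread pool): with a single-thread pool, a value stored in a ThreadLocal while handling one request is still there when the same pooled thread picks up the next one.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Shows why per-thread state is dangerous with pooled threads: a
// single-thread pool reuses the same thread, so the value set while
// "handling" the first request leaks into the second one.
class ThreadLocalLeakDemo {
    static final ThreadLocal<String> currentUser = new ThreadLocal<>();

    // Simulates handling a request: returns whatever the previous request
    // left behind on this thread, then stores this request's user.
    static String handleRequest(String user) {
        String leftover = currentUser.get();
        currentUser.set(user);
        return leftover;
    }

    static String[] run() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            String first = pool.submit(() -> handleRequest("alice")).get();
            String second = pool.submit(() -> handleRequest("bob")).get();
            return new String[] { first, second };
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

The first "request" sees a clean thread, but the second one observes state left over from the first, which is exactly the cross-request leakage a per-thread DbContext would suffer.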