Some service types have fairly clear lifetime requirements. For example, if one is using Entity Framework in an ASP.NET app, it is clear that the DbContext lifetime should be tied to the request. However, some services are "stateless": they do not store any state, and instead just forward calls to their dependencies. A simple case would be a query object:
public class MyQuery : IQuery<SomeQuery, SomeResponse>
{
    private readonly IRepository<MyTable> Repository; // Injected via constructor

    public SomeResponse Query(SomeQuery query)
    {
        return Repository.All()
            .Where(r => r.Field == query.Field)
            .Select(r => new SomeResponse { Field = r.Field })
            .Single();
    }
}
This particular class has no requirements itself, other than those imposed by its dependencies. What guidelines should be used to determine the lifetime to choose for this type of object? Should one use a transient lifetime whenever possible? Or should one have the longest lifetime possible? And why?
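For concreteness, this is roughly what the two choices look like in registration terms (a sketch assuming Autofac, which the answers below happen to use; only one of the two registrations would actually be kept):

var builder = new ContainerBuilder();

// Option 1: transient - a new MyQuery per resolve, the safest default for stateless services.
builder.RegisterType<MyQuery>()
       .As<IQuery<SomeQuery, SomeResponse>>()
       .InstancePerDependency();

// Option 2: singleton - one MyQuery for the whole container; only safe if MyQuery and
// all of its dependencies (here IRepository<MyTable>) are thread-safe and have no
// lifetime requirements of their own.
builder.RegisterType<MyQuery>()
       .As<IQuery<SomeQuery, SomeResponse>>()
       .SingleInstance();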
I have a widely used cache interface in a web application with the implementation currently registered as SingleInstance.
This current cache implementation assumes single threaded initialization, but once initialized is immutable, so is safely shared across multiple threads.
However, this means that currently, if the underlying values change, the cache doesn't get updated until the application is restarted. While updating the underlying values is rare, we would now like to provide application behavior that modifies the underlying values, and then tells the cache to refresh.
I could modify the cache implementation to use locking, or perhaps utilize one of the .NET concurrent collections to safely update the cache values.
However, I'm wondering if Autofac provides a capability that would allow me to swap out the registered instance for a new instance on the next request, so that the cache implementation itself would not need to be modified.
So the ideal behavior would be that when we modify the underlying values, we trigger the creation of a new cache instance. Once the instance has finished initializing, all in-progress requests continue with the old cache instance, and any new HTTP request scopes resolve to the updated instance.
Does Autofac provide a built-in way to support this scenario?
You can never safely replace a singleton registered instance in your container. Once other singleton components depend on it, they will simply hold a reference to the old instance; replacing the instance in the container means that components created after the replacement will refer to the new instance, while existing components keep referring to the old one. This will hardly ever be the behavior you want, and will most likely cause bugs.
My advice is to never change your container's registrations once the application is running. It quickly becomes very hard to reason about whether the configuration is correct and thread-safe. For instance, what if you replace the instance while the object graph for another thread is being resolved? That object graph could end up holding references to both the old and the new instance.
Instead, solve this problem at the application level. First of all, you need two APIs; one for reading the cache, and a second for updating the cache. Both can be implemented using the same component though:
// Very simplified version of what you actually might need
interface ICache { CacheObject Get(); }
interface ICacheUpdater { void Set(CacheObject o); }
A simplistic implementation could look like this:
sealed class Cache : ICache, ICacheUpdater
{
    private static CacheObject instance;

    public void Set(CacheObject o) => instance = o;
    public CacheObject Get() => instance;
}
This implementation might work, but if the cache is retrieved multiple times within the same request, it is possible to read both the old and the new value within that request (since another thread can call Set in between). If that is a problem, you can change the implementation to the following:
sealed class HttpCache : ICache, ICacheUpdater
{
    private static readonly object key = typeof(HttpCache);
    private static CacheObject instance;

    private static IDictionary items => HttpContext.Current.Items;

    public void Set(CacheObject o) => instance = o;

    // The first read in a request snapshots the current instance into
    // HttpContext.Items; later reads in that request reuse the snapshot.
    public CacheObject Get() =>
        (CacheObject)(items[key] ?? (items[key] = instance));
}
In this implementation an extra reference to the cache object is stored in the HttpContext.Items dictionary. This ensures that the same instance is always returned for the duration of a single (web) request.
This example assumes you are running a web application, but you can easily imagine a similar solution for a different application type.
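As a rough sketch of the wiring (assuming Autofac, the container asked about; HttpCache, ICache and ICacheUpdater are the types from the snippets above):

// One shared component serves both roles: consumers that only read depend on
// ICache, while the code that refreshes the underlying values depends on
// ICacheUpdater.
builder.RegisterType<HttpCache>()
       .As<ICache>()
       .As<ICacheUpdater>()
       .SingleInstance();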
To update a component registered as a single instance, you can have a registration like this:
builder.RegisterType<ServiceProvider>().SingleInstance();
builder.Register(c => c.Resolve<ServiceProvider>().Service).As<IService>();
and ServiceProvider like this:
public class ServiceProvider
{
    public ServiceProvider()
    {
        this.Service = new Service();
    }

    public IService Service { get; set; }
}
To update the instance, you only have to do this:
container.Resolve<ServiceProvider>().Service = newInstance;
The second part of the question may be more difficult:
Once the instance has finished initializing, all in-progress requests continue with the old cache instance, and any new HTTP request scopes resolve to the updated instance.
What you want is to inject a single-instance registration into a specific scope. To do this, you can use the ChildLifetimeScopeBeginning event to set the instance for the whole lifetime of the scope.
builder.RegisterType<ServiceProvider>().Named<ServiceProvider>("root").SingleInstance();
builder.RegisterType<ServiceProvider>().InstancePerRequest();
builder.Register(c => c.Resolve<ServiceProvider>().Service).As<IService>();
IContainer container = builder.Build();
container.ChildLifetimeScopeBeginning += (sender, e) =>
{
    ServiceProvider scopeServiceProvider = e.LifetimeScope.Resolve<ServiceProvider>();
    ServiceProvider rootServiceProvider = container.ResolveNamed<ServiceProvider>("root");
    scopeServiceProvider.Service = rootServiceProvider.Service;
};
To change the global IService instance, you have to resolve the "root" named ServiceProvider:
scope.ResolveNamed<ServiceProvider>("root").Service = newInstance;
and to change the IService instance for the current scope only, resolve a normal ServiceProvider:
scope.Resolve<ServiceProvider>().Service = newInstance;
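As a sketch of the resulting behavior (MatchingScopeLifetimeTags.RequestLifetimeScopeTag is Autofac's tag for request scopes, needed here because the default ServiceProvider is registered InstancePerRequest):

// Publish a new instance for future scopes.
container.ResolveNamed<ServiceProvider>("root").Service = newInstance;

// Scopes that are already running keep the Service they were given when they began;
// a scope beginning after the update receives the new instance through the
// ChildLifetimeScopeBeginning handler shown above.
using (var scope = container.BeginLifetimeScope(
    MatchingScopeLifetimeTags.RequestLifetimeScopeTag))
{
    IService current = scope.Resolve<IService>(); // resolves to newInstance
}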
The Java and .NET worlds are rich in open source frameworks and libraries. We all like to use Spring and Hibernate almost everywhere.
Everyone agrees that Hibernate is a very handy tool.
What can Hibernate do? Basically, Hibernate can track changes to our domain objects and persist only the modified data to the database; that is it.
Basically, that is everything we want. I want to load some records from the database, make some modifications to them, call transaction.commit(), and have all modifications persisted.
That is excellent, right?
But what about the web world? In web applications the database session must be closed.
I cannot load some domain objects, wait for the user to make modifications through HTTP, and persist those objects afterwards.
We have to use detached objects or DTOs. How does that work?
The user makes modifications in the HTML browser, Spring MVC automatically transfers those modifications to our customized DTO objects using MVC model binding,
and then we put in some programming effort to transfer the modifications from the DTO objects to the Hibernate domain objects; only then do we persist them.
For example, we have a web form that updates a customer's address, and another form that updates the customer's details.
We must have two different business layer methods, UpdateAddress() and UpdateDetails(), and both methods must accept some kind of DTO:
one represents the address information, the other represents the details information.
We also have custom logic that transfers data from those two DTOs to the domain class 'Customer'.
Yes, of course, instead of DTO objects we could reuse our domain classes, but that does not make it simpler.
In both cases we still have to implement custom logic that transfers the modifications to the persistent objects.
I cannot persist a detached object right away, because domain classes usually have lots and lots of properties representing numerous relations; for example, Customer has an Orders property. When I update the customer's address I don't want to update its orders.
Is there a beautiful, universal way of mapping modifications from the MVC model to domain objects without writing a lot of custom code and without the risk of overwriting too many fields?
It's good practice to have a data access layer, which translates into having a repository for each domain object / entity. Furthermore, all repositories share common code, so you naturally end up with an abstract repository:
public abstract class AbstractRepository<E extends BaseModel> implements Repository<E> {

    @PersistenceContext
    private EntityManager entityManager;

    private Class<E> entityClass;

    public AbstractRepository(Class<E> entityClass) {
        this.entityClass = entityClass;
    }

    protected EntityManager getEM() {
        return entityManager;
    }

    protected TypedQuery<E> createQuery(String jpql) {
        return createQuery(jpql, entityClass);
    }

    protected <T> TypedQuery<T> createQuery(String jpql, Class<T> typeClass) {
        return getEM().createQuery(jpql, typeClass);
    }

    @Override
    public E merge(E entity) {
        return getEM().merge(entity);
    }

    @Override
    public void remove(E entity) {
        getEM().remove(entity);
    }

    @Override
    public E findById(long id) {
        return getEM().find(entityClass, id);
    }
}
It's also good practice to have a service layer where you are to create, update and delete instances of an entity (where you could pass through a DTO to the create and update methods if you so desire).
...
@Inject
private CustomerRepository customerRepository;

public Customer createCustomer(CustomerDto customerDto) {
    Customer customer = new Customer();
    customer.setEmail(customerDto.getEmail());
    ...
    return customerRepository.merge(customer);
}

public Customer updateCustomerAddress(Customer customer, String address) {
    customer.setAddress(address);
    return customerRepository.merge(customer);
}
...
So it's up to you how many update methods you want. I would typically group them into common operations such as updating the customer's address, where you would pass the customer Id and the updated address from the front end (probably via ajax) to your controller listening on a specific endpoint. This endpoint is where you would use the repository to find the entity first by Id and then pass it to your service to do the address update for example.
Lastly, you need to ensure that the data actually gets persisted, so in Spring you can add the @Transactional annotation either to your Spring MVC controller or to the service that does the persisting. I'm not aware of any best practices around this, but I prefer adding it to my controllers so that you're always guaranteed to have a transaction no matter what service you are in.
I have a StructureMap config that looks something like:
cfg.For<ICacheOrder>().Use<CacheOrder>().Ctor<int>().Is(context => LoginHelper.LoginID);
cfg.For<ICacheProduct>().Use<CacheProduct>().Ctor<int>().Is(context => LoginHelper.LoginID);
cfg.For<ISQLOrder>().Use<SQLOrder>().Ctor<int>().Is(context => LoginHelper.LoginID);
cfg.For<ISQLProduct>().Use<SQLProduct>().Ctor<int>().Is(context => LoginHelper.LoginID);
Via constructor injection, a chain of objects can be created, with some needing an int LoginID that is determined at the time of creation. The static LoginHelper determines the LoginID.
Presently in my config, LoginHelper is called for every created object. Is there a way, perhaps via StructureMap's IContext, for LoginID to be "remembered" and only determined once within a chain of creation?
I know that I could refactor and create an ILogin interface/concrete that StructureMap could construct and cache - but I'd prefer my various layers to be concerned only with a simple int LoginID.
Although it's okay to inject primitive configuration values into your services, when you repeatedly inject that same primitive into multiple services, you are missing an abstraction. This is clearly the case with your configuration.
The solution is to let those services depend on an abstraction rather than a primitive value. For instance:
public interface ICurrentUser
{
    int LoginID { get; }
}
And you can create a rather simple implementation as follows:
public class CurrentUserImpl : ICurrentUser
{
    public CurrentUserImpl()
    {
        this.LoginID = LoginHelper.LoginID;
    }

    public int LoginID { get; private set; }
}
This means that you will have to change the constructors of CacheOrder, CacheProduct, SQLOrder and SQLProduct, but when you do this, your configuration gets much more maintainable:
cfg.For<ICacheOrder>().Use<CacheOrder>();
cfg.For<ICacheProduct>().Use<CacheProduct>();
cfg.For<ISQLOrder>().Use<SQLOrder>();
cfg.For<ISQLProduct>().Use<SQLProduct>();
The problem of "remembering a param literal" now goes away immediately, because we can now register the ICurrentUser as follows:
cfg.For<ICurrentUser>().Use<CurrentUserImpl>();
The default lifecycle in StructureMap is per request (per object graph), so the same instance is injected into all objects in a single object graph.
Another option is to register it using the HttpContext lifecycle, but that of course only works when running an ASP.NET web application.
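That alternative could look roughly like this (a sketch; HttpContextScoped() is the classic StructureMap helper for the HttpContext lifecycle, and newer versions expose the same lifecycle through the StructureMap.Web package):

// One ICurrentUser per HTTP request; every object in every graph resolved
// during that request sees the same LoginID.
cfg.For<ICurrentUser>().HttpContextScoped().Use<CurrentUserImpl>();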
This is more of a design concern.
I'm building an app and I have created my repository pattern structure as follows:
My Core namespace is the DAL/Repository/BusinessLogic layers assembly.
By the way, I am using the Dapper.NET micro ORM as my data connection; that's why you will see an extension method on my SqlConnection object.
For my data access, I have created a base repository class:
namespace Core
{
    public class BaseRepository<T> : IDisposable where T : BaseEntity
    {
        protected SqlConnection conn = null;

        #region Constructors

        public BaseRepository() : this("LOCAL")
        {
        }

        public BaseRepository(string configurationKey = "LOCAL")
        {
            conn = new SqlConnection(ConfigurationManager.ConnectionStrings[configurationKey].ConnectionString);
        }

        #endregion

        #region IDisposable

        public void Dispose()
        {
            conn.Dispose();
        }

        #endregion

        /// <summary>
        /// Returns a list of entities.
        /// </summary>
        /// <typeparam name="T">BaseEntity type</typeparam>
        /// <param name="sproc">Optional parameter, stored procedure name.</param>
        /// <returns>A list of BaseEntity instances.</returns>
        protected virtual IEnumerable<T> GetListEntity(string sproc = null)
        {
            string storedProcName = string.Empty;
            if (sproc == null)
            {
                storedProcName = "[dbo].sp_GetList_" + typeof(T).ToString().Replace("Core.", string.Empty);
            }
            else
            {
                storedProcName = sproc;
            }

            IEnumerable<T> items = new List<T>();
            try
            {
                conn.Open();
                items = conn.Query<T>(storedProcName,
                    commandType: CommandType.StoredProcedure);
            }
            finally
            {
                conn.Close();
            }
            return items;
        }
    }
}
And for each entity that I have, let's say ExtendedUser or Messages, I am creating its own Interface-Class pair like this:
namespace Core
{
    public class ExtendedUserRepository : BaseRepository<UsersExtended>, IExtendedUserRepository
    {
        public ExtendedUserRepository() : this("PROD")
        {
        }

        public ExtendedUserRepository(string configurationKey) : base(configurationKey)
        {
        }

        public UsersExtended GetExtendedUser(string username)
        {
            var list = GetListEntity().SingleOrDefault(u => u.Username == username);
            return list;
        }

        public UsersExtended GetExtendedUser(Guid userid)
        {
            throw new NotImplementedException();
        }

        public List<UsersExtended> GetListExtendedUser()
        {
            throw new NotImplementedException();
        }
    }
}
etc.
The above code is just one of the entities: ExtendedUser.
The question is: should I create an Interface-ClassThatImplementsInterface pair for each entity that I have, or should I have only one repository class and one IRepository interface with all the methods for all of my entities?
I don't think you need to create an interface without a reason. I don't even see why you need a base repository class here. I would even say this is not a repository but a DAL (Data Access Layer), but that is an argument over definitions.
I think a good DAL implementation should decouple the database structure from the business logic structure - but hard-coding an sp_GetList_XXXEntityNameXXX pattern, or passing stored procedure names in from outside the DAL, is not decoupling.
You are very optimistic, or your application is really simple, if you think all entity lists are obtained in one way and you will always need the full set of entities in the business logic without any parameters.
Separating interface from implementation is only needed if you plan to replace/wrap different implementations, or to mix a few interfaces in one class. Otherwise it is not required.
Don't think in terms of entities when creating repositories. A repository contains business logic and should be built around scenarios of usage. Having classes like yours is more about a Data Access Layer - and a DAL is built around the queries you need in the business logic. You would probably never need a list of ALL users at once - but you would very often need a list of active users, privileged users, and so on.
It is really hard to predict which queries you will need, so I prefer to start designing from the business logic and add DAL methods along the way (see the sketch below).
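For instance, a scenario-focused DAL method could look roughly like this (a sketch reusing the question's Dapper-based BaseRepository; the table name and IsActive column are invented for illustration):

public class UserRepository : BaseRepository<UsersExtended>
{
    // Query shaped by a business scenario ("active users") rather than by the entity.
    public IEnumerable<UsersExtended> GetActiveUsers()
    {
        conn.Open();
        try
        {
            return conn.Query<UsersExtended>(
                "SELECT * FROM dbo.UsersExtended WHERE IsActive = 1");
        }
        finally
        {
            conn.Close();
        }
    }
}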
A system with a complex domain model often benefits from a layer, such as the one provided by Data Mapper (165), that isolates domain objects from details of the database access code. In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated.
http://martinfowler.com/eaaCatalog/repository.html
With a generic repository you can have common methods like findBy(array('id' => 1)), findOneBy(array('email' => 'john@bar.com')), findById(), findAll() and so on. Indeed, it will have one interface.
You will always have to create a concrete implementation that indicates which domain object is managed by the repository, in order to accomplish something like getUsersRepository().findAll().
Moreover, if you need more complicated queries, you can create new methods on the concrete implementation, such as findMostActiveUsers(), and then reuse them across your application.
Now, answering your question:
Your application will expect at least one interface (the generic one, with the common methods). But if you end up with specific methods, like the one I just mentioned, you would be better off with an additional interface per repository (e.g. RepositoryInterface and UsersRepositoryInterface).
With that in mind, you will only depend on the repository interfaces. The query construction will be encapsulated by the concrete implementation, so you will be able to change your repository implementation (e.g. to a full-blown ORM) without affecting the rest of your application.
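A minimal sketch of that split (the method and entity names only mirror the examples mentioned above and are illustrative):

// Generic contract shared by all repositories.
public interface IRepository<T>
{
    T FindById(long id);
    IEnumerable<T> FindAll();
}

// Entity-specific contract for the scenario-driven queries.
public interface IUsersRepository : IRepository<User>
{
    IEnumerable<User> FindMostActiveUsers(int count);
}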
I have been looking at various dependency injection frameworks for .NET as I feel the project I am working on would greatly benefit from it. While I think I have a good grasp of the capabilities of these frameworks, I am still a little unclear on how best to introduce them into a large system. Most demos (understandably) tend to be of quite simple classes that have one or two dependencies.
I have three questions...
First, how do you deal with those common but uninteresting dependencies, e.g. ILog, IApplicationSettings, IPermissions, IAudit. It seems overkill for every class to have these as parameters in their constructor. Would it be better to use a static instance of the DI container to get these when they are needed?
MyClass(ILog log, IAudit audit, IPermissions permissions, IApplicationSettings settings)
// ... versus ...
ILog log = DIContainer.Get<ILog>();
Second, how do you approach dependencies that might be used, but may be expensive to create. Example - a class might have a dependency on an ICDBurner interface but not want the concrete implementation to be created unless the CD Burning feature was actually used. Do you pass in interfaces to factories (e.g. ICDBurnerFactory) in the constructor, or do you again go with some static way of getting directly to the DI Container and ask for it at the point it is needed?
Third, suppose you have a large Windows Forms application, in which the top level GUI component (e.g. MainForm) is the parent of potentially hundreds of sub-panels or modal forms, each of which may have several dependencies. Does this mean that MainForm should be set up to have as dependencies the superset of all the dependencies of its children? And if you did so, wouldn't this end up creating a huge self-inflating monster that constructs every single class it could ever need the moment you create MainForm, wasting time and memory in the process?
Well, while you can do this as described in other answers, I believe there is a more important thing to address regarding your example: you are probably violating the SRP by having a class with that many dependencies.
What I would consider in your example is breaking the class up into a couple of more coherent classes with focused concerns, so that the number of their dependencies falls.
Nikola's law of SRP and DI
"Any class having more than 3
dependencies should be questioned for
SRP violation"
(To avoid a lengthy answer here, I posted my detailed thoughts on IoC and SRP in a blog post.)
First: Add the simple dependencies to your constructor as needed. There is no need to add every type to every constructor; just add the ones you need, and if you need another one, expand the constructor. Performance should not be a big concern, as most of these types are likely to be singletons and therefore already created after the first call. Do not use a static DI container to create other objects. Instead, register the DI container in itself so it can resolve itself as a dependency. So something like this (assuming Unity for the moment):
IUnityContainer container = new UnityContainer();
container.RegisterInstance<IUnityContainer>(container);
This way you can just add a dependency on IUnityContainer and use that to create expensive or seldom needed objects. The main advantage is that it is much easier when unit testing as there are no static dependencies.
Second: No need to pass in a factory class. Using the technique above you can use the DI container itself to create expensive objects when needed.
Third: Add the DI container and the lightweight singleton dependencies to the main form and create the rest through the DI container as needed. It takes a little more code, but as you said, the startup cost and memory consumption of the main form would go through the roof if you created everything at startup time.
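A rough sketch of that third point (still assuming Unity; SettingsPanel is a made-up child panel type):

public partial class MainForm : Form
{
    private readonly IUnityContainer container;
    private readonly ILog log;

    // Only the container and the cheap singletons are constructor-injected.
    public MainForm(IUnityContainer container, ILog log)
    {
        this.container = container;
        this.log = log;
        InitializeComponent();
    }

    private void OnShowSettingsClicked(object sender, EventArgs e)
    {
        // Expensive children are resolved only when the user actually opens them.
        var panel = container.Resolve<SettingsPanel>();
        panel.Show();
    }
}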
First:
You could inject these objects, when needed, as members (properties) instead of through the constructor; see the sketch below. That way you don't have to make changes to the constructor as your usage changes, and you also don't need to use a static.
Second:
Pass in some sort of builder or factory.
Third:
Any class should only have those dependencies that it itself requires. Subclasses should be injected with their own specific dependencies.
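A sketch of the member (property) injection idea from the first point (ILog is the interface named in the question; the Write call is only assumed for illustration, and whether the property is filled automatically depends on the container you pick):

public class ReportGenerator
{
    // Filled by the container (or by a test) after construction,
    // instead of being a constructor parameter.
    public ILog Log { get; set; }

    public void Generate()
    {
        Log?.Write("Generating report...");
    }
}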
I have a similar case related to the "expensive to create and might be used" scenario: in my own IoC implementation, I'm adding automagic support for factory services.
Basically, instead of this:
public SomeService(ICDBurner burner)
{
}
you would do this:
public SomeService(IServiceFactory<ICDBurner> burnerFactory)
{
}
and then, when the burner is actually needed:
ICDBurner burner = burnerFactory.Create();
This has two advantages:
Behind the scenes, the service container that resolved your service is also used to resolve the burner, if and when it is requested
This alleviates the concerns I've seen before in this kind of case where the typical way would be to inject the service container itself as a parameter to your service, basically saying "This service requires other services, but I'm not going to easily tell you which ones"
The factory object is rather easy to make, and solves a lot of problems.
Here's my factory class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using LVK.IoC.Interfaces;
using System.Diagnostics;
namespace LVK.IoC
{
/// <summary>
/// This class is used to implement <see cref="IServiceFactory{T}"/> for all
/// services automatically.
/// </summary>
[DebuggerDisplay("AutoServiceFactory (Type={typeof(T)}, Policy={Policy})")]
internal class AutoServiceFactory<T> : ServiceBase, IServiceFactory<T>
{
#region Private Fields
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private readonly String _Policy;
#endregion
#region Construction & Destruction
/// <summary>
/// Initializes a new instance of the <see cref="AutoServiceFactory<T>"/> class.
/// </summary>
/// <param name="serviceContainer">The service container involved.</param>
/// <param name="policy">The policy to use when resolving the service.</param>
/// <exception cref="ArgumentNullException"><paramref name="serviceContainer"/> is <c>null</c>.</exception>
public AutoServiceFactory(IServiceContainer serviceContainer, String policy)
: base(serviceContainer)
{
_Policy = policy;
}
/// <summary>
/// Initializes a new instance of the <see cref="AutoServiceFactory<T>"/> class.
/// </summary>
/// <param name="serviceContainer">The service container involved.</param>
/// <exception cref="ArgumentNullException"><paramref name="serviceContainer"/> is <c>null</c>.</exception>
public AutoServiceFactory(IServiceContainer serviceContainer)
: this(serviceContainer, null)
{
// Do nothing here
}
#endregion
#region Public Properties
/// <summary>
/// Gets the policy that will be used when the service is resolved.
/// </summary>
public String Policy
{
get
{
return _Policy;
}
}
#endregion
#region IServiceFactory<T> Members
/// <summary>
/// Constructs a new service of the correct type and returns it.
/// </summary>
/// <returns>The created service.</returns>
public IService<T> Create()
{
return MyServiceContainer.Resolve<T>(_Policy);
}
#endregion
}
}
Basically, when I build the service container from my service container builder class, all service registrations are automatically given another co-service implementing IServiceFactory for that service, unless the programmer has explicitly registered one for that service him/herself. The above service is then used, with one parameter specifying the policy (which can be null if policies aren't used).
This allows me to do this:
var builder = new ServiceContainerBuilder();
builder.Register<ISomeService>()
    .From.ConcreteType<SomeService>();

using (var container = builder.Build())
{
    using (var factory = container.Resolve<IServiceFactory<ISomeService>>())
    {
        using (var service = factory.Instance.Create())
        {
            service.Instance.DoSomethingAwesomeHere();
        }
    }
}
Of course, a more typical use would be with your CD burner object. In the code above I would normally just resolve the service directly, but it is an illustration of what happens.
So with your CD burner service instead:
var builder = new ServiceContainerBuilder();
builder.Register<ICDBurner>()
    .From.ConcreteType<CDBurner>();
builder.Register<ISomeService>()
    .From.ConcreteType<SomeService>(); // constructor used at the top of this answer

using (var container = builder.Build())
{
    using (var service = container.Resolve<ISomeService>())
    {
        service.Instance.DoSomethingHere();
    }
}
Inside the service, you now have a factory service that knows how to resolve your CD burner service upon request. This is useful for the following reasons:
You might want to resolve more than one service at the same time (burn two discs simultaneously?)
You might not need it, and it could be costly to create, so you only resolve it if needed
You might need to resolve, dispose, resolve, dispose, multiple times, instead of hoping/trying to clean up an existing service instance
You're also flagging in your constructor which services you need and which ones you might need
Here's two at the same time:
using (var service1 = container.Resolve<ISomeService>())
using (var service2 = container.Resolve<ISomeService>())
{
    service1.Instance.DoSomethingHere();
    service2.Instance.DoSomethingHere();
}
Here's two after each other, not reusing the same service:
using (var service = container.Resolve<ISomeService>())
{
    service.Instance.DoSomethingHere();
}

using (var service = container.Resolve<ISomeService>())
{
    service.Instance.DoSomethingElseHere();
}
First:
You might approach it by creating a container class to hold your "uninteresting" dependencies (ILog, ICache, IApplicationSettings, etc.), use constructor injection to inject that, and then, inside the constructor, hydrate the fields of the service from it. I'm not sure I'd like that, but it's a possibility; a rough sketch of what it could look like follows below.
Alternatively, you might like to use the new IServiceLocator common interface (http://blogs.msdn.com/gblock/archive/2008/10/02/iservicelocator-a-step-toward-ioc-container-service-locator-detente.aspx) instead of injecting the dependencies?
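One way to read that first suggestion (a sketch; CommonServices is a made-up bundle class holding the question's cross-cutting interfaces):

public class CommonServices
{
    public ILog Log { get; set; }
    public IAudit Audit { get; set; }
    public IPermissions Permissions { get; set; }
    public IApplicationSettings Settings { get; set; }
}

public class MyClass
{
    private readonly ILog log;
    private readonly IAudit audit;

    // A single injected bundle; the constructor hydrates only the fields it cares about.
    public MyClass(CommonServices services)
    {
        log = services.Log;
        audit = services.Audit;
    }
}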
Second:
You could use setter injection for the optional/on-demand dependencies. Personally, though, I would go for injecting factories and newing instances up from there on demand; a sketch of the factory-delegate flavor of that follows below.
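For example, several containers (Autofac among them) can inject a Func&lt;T&gt; as an auto-generated factory, so the expensive ICDBurner is only created when it is actually used (a sketch; BurnService and the Burn() call are illustrative):

public class BurnService
{
    private readonly Func<ICDBurner> burnerFactory;

    public BurnService(Func<ICDBurner> burnerFactory)
    {
        this.burnerFactory = burnerFactory;
    }

    public void BurnDisc()
    {
        // The concrete burner is only created when this method actually runs.
        ICDBurner burner = burnerFactory();
        burner.Burn();
    }
}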
To partially answer my first question, I've just found a blog post by Jeremy Miller showing how StructureMap and setter injection can be used to auto-populate public properties of your objects. He uses ILogger as an example:
var container = new Container(r =>
{
    r.FillAllPropertiesOfType<ILogger>().TheDefault.Is
        .ConstructedBy(context => new Logger(context.ParentType));
});
This means that any classes with an ILogger property, e.g.:
public class ClassWithLogger
{
    public ILogger Logger { get; set; }
}

public class ClassWithLogger2
{
    public ILogger Logger { get; set; }
}
will have their Logger property automatically set up when constructed:
container.GetInstance<ClassWithLogger>();