In ASP.NET MVC, the ActionResult class, which is the base for all results returned by controller action methods, is defined as an abstract class with a single method (© Microsoft):
public abstract void ExecuteResult(ControllerContext context);
Can you think of any specific reasons for this design? Specifically, it seems a bit weird to me that
there is no IActionResult interface,
and that the class would not be required at all if there were such an interface.
After all, if this were an interface instead of an abstract class, there would be no need to extend a base class in order to create a new ActionResult - one would just have to implement IActionResult properly. In a world, err, language, without multiple inheritance, this advantage would seem quite important to me.
Interfaces are great for allowing a class to implement multiple contracts, such as when you know that a type must be two different things. In some cases, this can encourage creating a type that has too many responsibilities.
Action results have a single responsibility and it didn't seem like there would be any scenario where you need an object to be both an action result and something else. Even if you did, it's possible to do via composition. So in this case, we went with an abstract base class to give us greater flexibility to make changes after we RTM, if necessary.
However, if there's a specific scenario we're blocking in which an interface would be preferable, we'll consider it. We can always do it later in a manner that's not breaking.
You can even do it yourself by writing your own action invoker, which only requires you to implement IActionInvoker (an interface) and that invoker could check for your own IActionResult rather than ActionResult.
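To make that concrete, here is a rough sketch of such an invoker (not how MVC actually implements it), assuming a hypothetical IActionResult interface and ignoring filters, parameter binding and async actions:
using System.Reflection;
using System.Web.Mvc;

// Hypothetical interface the invoker will look for instead of ActionResult.
public interface IActionResult
{
    void ExecuteResult(ControllerContext context);
}

public class InterfaceAwareActionInvoker : IActionInvoker
{
    public bool InvokeAction(ControllerContext controllerContext, string actionName)
    {
        ControllerBase controller = controllerContext.Controller;

        // Naive lookup: assumes a single, parameterless action of that name.
        MethodInfo action = controller.GetType().GetMethod(actionName);
        if (action == null)
            return false;

        object returned = action.Invoke(controller, null);

        // Dispatch on the interface rather than the ActionResult base class.
        IActionResult result = returned as IActionResult;
        if (result == null)
            return false;

        result.ExecuteResult(controllerContext);
        return true;
    }
}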
I'm gonna guess it's because they were anticipating that ActionResult would gain methods and properties over the life of the CTP/beta. If it were an interface, every change to IActionResult would break existing code. Adding another method to the abstract base class wouldn't cause any problems.
You implement interfaces and you inherit from abstract classes.
For me it is the difference between "being of a type" and "acting like a type".
Since C# doesn't support multiple inheritance, you are forced to define your class as an ActionResult, instead of something that merely acts as an ActionResult.
Compare it to the EventArgs class. Why does it make sense to inherit from EventArgs rather than implement an IEventArgs interface? Because an EventHandler carries something around of type EventArgs, not something that merely acts like EventArgs.
I know this isn't exactly what you are looking for, but for hahas I opened up the MVC3 source, changed ActionResult to IActionResult, ran a couple of find-and-replaces, and everything built fine.
This means to me that ActionResult is an abstract class for an API reason. Maybe it's as simple as the MVC team wanting you to be able to use fields, or not wanting to give people the ability to do crazy IActionResult, ISomething, IMyNuttyThing combinations.
A scenario where I think IActionResult would help is dependency injection. I would like to have one set of controllers shared between a SPA and a Razor UI. In configuration I would like to set the application type dynamically and have my controllers look like this:
public ActionResult Get()
{
    var customer = new Customer();
    return View(customer);
}
I would like the concrete result to be determined at runtime based on the application type, i.e. a JsonResult or a ViewResult.
The object I pass into the result for both cases is going to be the same.
Does that make sense? Or is this possible without an IActionResult?
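For what it's worth, here is a hedged sketch of one way that scenario could work against the existing ActionResult base class, by injecting a result-producing service instead of an IActionResult. IResultFactory, both factories and the controller wiring are hypothetical, and constructor injection into controllers is assumed to be set up through a DI-aware controller factory or dependency resolver:
using System.Web.Mvc;

public class Customer { /* model from the question */ }

// Hypothetical abstraction over "how this application renders a model".
public interface IResultFactory
{
    ActionResult Create(object model);
}

// Registered for the SPA configuration: always return JSON.
public class JsonResultFactory : IResultFactory
{
    public ActionResult Create(object model)
    {
        return new JsonResult { Data = model, JsonRequestBehavior = JsonRequestBehavior.AllowGet };
    }
}

// Registered for the Razor configuration: return the default view for the action.
public class ViewResultFactory : IResultFactory
{
    public ActionResult Create(object model)
    {
        return new ViewResult { ViewData = new ViewDataDictionary(model) };
    }
}

public class CustomerController : Controller
{
    private readonly IResultFactory _resultFactory;

    public CustomerController(IResultFactory resultFactory)
    {
        _resultFactory = resultFactory;
    }

    public ActionResult Get()
    {
        var customer = new Customer();

        // The concrete result type is decided by whichever factory was injected.
        return _resultFactory.Create(customer);
    }
}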
I have read several answers online about abstractions, abstract classes, interfaces, DI, and loose coupling, but none of them answers my question. I grouped these topics because they are all related to achieving abstraction. I have a reasonable understanding of each of them, but I do not yet fully understand the details or how they relate to each other.
Generally speaking, interfaces are used to make classes loosely coupled: they define a set of functions and fields that implementing classes must provide. The idea of making classes loosely coupled is to remove direct dependencies between them.
For instance, if we make a change to one of these classes, we do not need to change other places, which keeps the code maintainable. The only good example I can think of for achieving loose coupling is DI. So when we say interfaces make classes loosely coupled, do we mean passing an interface as a dependency?
Please continue reading; the rest of the question will clarify this further.
A question here is: if we are going to use DI and pass interfaces as dependencies, then why not pass a class as a dependency instead? Maybe I need further clarification about interfaces before answering that question, so I will explain further.
The main idea of interfaces is to establish a contract with the classes that are going to implement them, meaning we define functions and fields that implementers are forced to provide. But the idea of an interface as a contract is still not clear to me: if we force a developer to implement an interface called Server that has methods to turn the server on and off, but the developer forgets to actually call the turn-off method, then what is the point of the contract?
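To make the contract point concrete: a hypothetical server interface like the one below only guarantees that every implementation provides these members; it cannot force callers to actually invoke TurnOff at the right time.
// The contract guarantees these members exist on every implementation;
// it says nothing about when, or whether, callers actually invoke them.
public interface IServer
{
    void TurnOn();
    void TurnOff();
}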
Further, my understanding is that this all falls under the concept of abstraction, which means we do not need to worry about details, only the abstraction. Does that mean that, when building an application, we should first create the classes/structure without code, for example using UML?
Further, why would we use an abstract class over an interface, given that an abstract class has similarities to an interface, such as declaring a function without a body?
Coming back to interfaces and DI: we can inject interfaces as dependencies, but why? Can we not inject a class itself? Is it not easier to use a class as a dependency, where we can access all of its functions, or is that not the idea of an interface? Can somebody help with this? I only understand one use case for why we should use DI. Example:
// Class1's constructor creates Class2 itself (tight coupling).
public class Class1
{
    public Class1()
    {
        Class2 class2 = new Class2(1, 1, 1);
    }
}
The above example is not maintainable: if we add a new parameter to Class2's constructor, we need to modify every place that creates a Class2; if we use DI, we don't. Are there any other reasons?
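For comparison, here is a minimal sketch of the same example with the dependency injected instead (Class2 is the class from the snippet above):
// Class1 no longer constructs Class2 itself; whoever composes the application
// decides how Class2 is built, so a new Class2 parameter only changes one place.
public class Class1
{
    private readonly Class2 _class2;

    public Class1(Class2 class2)
    {
        _class2 = class2;
    }
}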
Also, DI can be useful for creating one instance and using that instance across the whole application. Does that save some memory by not creating multiple instances, or save time constructing them?
Another question: should we use abstractions at the very early design stage, where we create classes without code?
Further, do we use interfaces to make the developer aware that they need to implement a certain set of functions? But why?
Do I predict the need for an interface by creating UML diagrams and seeing whether there are different classes with similar functionality that could share an interface?
"Can we not just create a superclass and override its methods?"
Can somebody explain when to use a superclass with overridden methods rather than an interface, and provide an example implementation?
Also, when should we pass an interface as a dependency, and when should we pass a class? One advantage I can think of for interfaces is polymorphism, where we can assign any implementing type to an interface-typed variable and then call members through the interface type. Example:
Interface1 instance = new Class1();
Is this possible?
Bottom line: we always want to make our classes loosely coupled, meaning we decouple classes to keep the code maintainable. Loosely coupled classes give us late binding, extensibility, maintainability and easy testing (see reference 1). We use interfaces to make classes loosely coupled, but before answering how, we need to understand why we use interfaces and how they differ from abstract classes. Interfaces are mainly used as a contract: when we create multiple classes that share the same behaviour but with different implementations, we use an interface. It is a piece of infrastructure that tells developers which methods to implement; an interface only includes function and field signatures, with no implementation.
The way we use DI to achieve loosely coupled classes is by injecting dependencies. Suppose the following classes implement an interface called Database:
public interface Database
{
    void Save();
}

class SqlServer : Database
{
    public void Save()
    {
        Console.WriteLine("Saving to SQL Server...");
    }
}

class Oracle : Database
{
    public void Save()
    {
        Console.WriteLine("Saving to Oracle...");
    }
}
Then we can easily inject dependencies as follows:
class Library
{
    private readonly Database _database;

    public Library(Database database)
    {
        _database = database;
    }

    public void SaveBook()
    {
        _database.Save();
    }
}
Using the above approach we are injecting the dependency, which means the classes are no longer tightly coupled: if anything changes in the SqlServer implementation, the Library class does not have to change. To achieve fully decoupled classes, use a DI container (refer to reference 1).
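As an illustration only (the container used in reference 1 may be different), wiring the classes above together with the Microsoft.Extensions.DependencyInjection container could look roughly like this:
using Microsoft.Extensions.DependencyInjection;

class Program
{
    static void Main()
    {
        // The container decides which Database implementation Library receives;
        // swapping SqlServer for Oracle is a one-line change here.
        var services = new ServiceCollection();
        services.AddTransient<Database, SqlServer>();
        services.AddTransient<Library>();

        using (var provider = services.BuildServiceProvider())
        {
            var library = provider.GetRequiredService<Library>();
            library.SaveBook();
        }
    }
}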
The difference between abstract classes and interfaces is that we use an interface to define a contract, whereas we use an abstract class when we want partial implementation: in an abstract class you can provide implementations for some methods while leaving others abstract.
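A small sketch of that partial-implementation idea, reusing the database example above (the Connect method is invented for illustration):
using System;

// The abstract class ships shared behaviour; subclasses fill in the rest.
public abstract class DatabaseBase
{
    // Implemented once, inherited by every subclass.
    public void Connect()
    {
        Console.WriteLine("Connecting...");
    }

    // No body here: each subclass must provide its own Save.
    public abstract void Save();
}

class SqlServerDatabase : DatabaseBase
{
    public override void Save()
    {
        Console.WriteLine("Saving to SQL Server...");
    }
}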
You may create UML class diagrams to represent the relationships between classes without needing to worry about the coding side yet.
As I am replying to my own question: I think it is good to create the classes and their relationships first (I will call this the class structure) and write the code later, in case a UML class diagram is not going to be used. I guess this falls under the concept of abstraction, where we do not worry about the details yet, so we can get a picture of how the application is structured without using UML.
Hope this makes sense.
References:
1. Dependency Injection in .Net - https://findnerd.com/account/#url=/list/view/Dependency-Injection-in--Net/24098/
I am fairly new to Dependency Injection, and I wrote a great little app that worked exactly like Mark Seemann told me it would and the world was great. I even added some extra complexity to it just to see if I could handle that using DI. And I could, happy days.
Then I took it to a real-world application and spent a long time scratching my head. Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects, and that I should instead let the IoC container do this for me.
However, say that I have a repository and I want it to be able to return me a list of things, thusly:
public interface IThingRepository
{
    IEnumerable<IThing> GetThings();
}
Surely at least one implementation of this interface will have to instantiate some Things? And it doesn't seem so bad allowing ThingRepository to new up some Things, as they are related anyway.
I could pass a POCO around instead, but at some point I'm going to have to convert the POCO into a business object, which would require me to new something up.
This situation seems to occur every time I need a number of things which is not knowable in the Composition Root (i.e. we only find out this information later, for example when querying the database).
Does anyone know what the best practice is in these kinds of situations?
In addition to Steven's answer, I think it is OK for a specific factory to new up the specific matching implementation that it was created for.
Update
Also, check this answer, specifically the comments, which say something about new-ing up instances.
Example:
public interface IContext {
    T GetById<T>(int id);
}

public interface IContextFactory {
    IContext Create();
}

public class EntityContext : DbContext, IContext {
    public T GetById<T>(int id) {
        var entity = ...; // Retrieve from db
        return entity;
    }
}

public class EntityContextFactory : IContextFactory {
    public IContext Create() {
        // I think this is ok, since the factory was specifically created
        // to return the matching implementation of IContext.
        return new EntityContext();
    }
}
Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects
That's not what Mark Seemann tells you, or what he means. You must make a clear separation between services (controlled by your composition root) on one side and primitives, entities, DTOs, view models and messages on the other side. Services are injectables and all other types are newables. You should only prevent using new on service types. It would be silly to prevent newing up strings, for instance.
Since in your example the service is a repository, it seems reasonable to assume that the repository returns domain objects. Domain objects are newables and there's no reason not to new them manually.
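To tie that back to the question, a repository newing up its own domain objects might look roughly like this. IThingRepository is the interface from the question; IThing's Name member and the Thing class are assumptions made just so the sketch compiles:
using System.Collections.Generic;

public interface IThing
{
    string Name { get; }
}

public class Thing : IThing
{
    public Thing(string name) { Name = name; }
    public string Name { get; private set; }
}

public class ThingRepository : IThingRepository
{
    // The repository itself is a service and gets injected where needed,
    // but the domain objects it returns are plain newables.
    public IEnumerable<IThing> GetThings()
    {
        // Imagine these values came back from a database query.
        var rows = new[] { "first", "second" };
        foreach (var row in rows)
        {
            yield return new Thing(row);   // newing up a domain object is fine
        }
    }
}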
Thanks for the answers everybody, they led me to the following conclusions.
Mark makes a distinction between stable and unstable dependencies in the book I am reading ("Dependency Injection in .NET"). Stable dependencies (e.g. strings) can be created at will. Unstable dependencies should be moved behind a seam / interface.
A dependency is anything that is in a different assembly from the one that we are writing.
An unstable dependency is any of the following:
It requires a run time environment to be set up such as a database, web server, maybe even the file system (otherwise it won't be extensible or testable, and it means we couldn't do late binding if we wanted to)
It doesn't exist yet (otherwise we can't do parallel development)
It requires something that isn't installed on all machines (otherwise it can cause test difficulties)
It contains non deterministic behaviour (otherwise impossible to test well)
So this is all well and good.
However, I often hide things behind seams within the same assembly. I find this extremely helpful for testing. For example, if I am doing a complex calculation it is impossible to test the entire calculation well in one go. If I split the calculation up into lots of smaller classes and hide these behind seams, then I can easily inject any arbitrary intermediate results into a calculating class.
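A hypothetical sketch of such an in-assembly seam (all names are invented): the tax step of a larger calculation sits behind an interface, so a test can inject a canned intermediate result.
public interface ITaxCalculator
{
    decimal CalculateTax(decimal netTotal);
}

public class InvoiceCalculator
{
    private readonly ITaxCalculator _taxCalculator;

    public InvoiceCalculator(ITaxCalculator taxCalculator)
    {
        _taxCalculator = taxCalculator;
    }

    public decimal CalculateGrossTotal(decimal netTotal)
    {
        // In a test, _taxCalculator can be a stub returning a fixed amount,
        // so the gross-total logic is verified in isolation.
        return netTotal + _taxCalculator.CalculateTax(netTotal);
    }
}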
So, having had a good old think about it, these are my conclusions:
It is always OK to create a stable dependency
You should never create unstable dependencies directly
It can be useful to use seams within an assembly, particularly to break up big classes and make them more easily testable.
And in answer to my original question, it is OK to instantiate a concrete object from a concrete factory.
I am new to Dependency Injection, using C#, so please forgive me for my lame question. I wanted to post this question anyway before investing time and buying expensive books on this subject.
After going through a few online documents, it appears that using dependency containers along with configuration files, one can switch from one type of implementation to another. However, this could easily be done with an if/else statement and some config settings.
What is the advantage of using such a cumbersome implementation just to change from one class to another? I find the abstract factory and factory patterns much more useful. Maybe I am wrong.
I had a real-world case where I needed dependency injection:
Two assemblies: one responsible for computing a price when you do a search (let's call it Search), and a second responsible for computing a booking price with all the options (let's call it Booking).
The Booking assembly already references Search (because it needs to know the initial price in order to compute the full price).
But here is the requirement: we need the price from Search to include all mandatory options (yes, in the tourism industry you have mandatory options) like "full room cleaning".
So I couldn't add a reference to Booking in Search (because of the circular reference), so I decided to use dependency injection.
My Search assembly defines an interface:
public interface IAddMandatoryOptionService {
    void ChangeResultsWithMandatoryOptions(SearchResult[] results);
}
And then my Booking Assembly could implement this interface.
public class AddMandatoryOptionService : IAddMandatoryOptionService {
    public void ChangeResultsWithMandatoryOptions(SearchResult[] results) {
        ...
    }
}
My SearchService class would now look like this:
public class SearchService {
    private readonly IAddMandatoryOptionService OptionService;

    public SearchService(IAddMandatoryOptionService optionService) {
        this.OptionService = optionService;
    }

    public SearchResult[] Search(Filter filter) {
        ...
        this.OptionService.ChangeResultsWithMandatoryOptions(results);
        ...
        return results;
    }
}
So my SearchService has no dependency on the AddMandatoryOptionService class (or the Booking assembly), but it uses its functionality. The right IAddMandatoryOptionService implementation is injected when I create my service (in Application_Start or with a DI framework).
The advantages are:
It resolves the circular reference problem.
If I want to unit test my SearchService, I just have to mock/fake IAddMandatoryOptionService.
Here the need for injection was more technical than logical, but I think this kind of real-world scenario could help you get the point.
In short, dependency injection is used to loosely couple classes. By using an if/else statement you introduce a dependency between classes: when adding a new implementation you need to add another else branch.
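To illustrate the difference, here is a hedged sketch (INotifier and its implementations are invented for the example): with if/else the selection code must change for every new implementation, whereas with injection only the composition root changes.
using System;

public interface INotifier { void Notify(string message); }
public class EmailNotifier : INotifier { public void Notify(string message) { /* send email */ } }
public class SmsNotifier : INotifier { public void Notify(string message) { /* send SMS */ } }

public static class NotifierFactory
{
    // With if/else, adding a SlackNotifier means editing this method again,
    // so the selection logic is coupled to every implementation.
    public static INotifier Create(string kind)
    {
        if (kind == "email") return new EmailNotifier();
        if (kind == "sms") return new SmsNotifier();
        throw new ArgumentException("Unknown notifier: " + kind);
    }
}

// With injection, the consumer never changes; the composition root (or a
// container configured there) decides which INotifier to pass in.
public class OrderService
{
    private readonly INotifier _notifier;

    public OrderService(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void PlaceOrder()
    {
        _notifier.Notify("Order placed.");
    }
}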
You've probably read http://en.m.wikipedia.org/wiki/Dependency_injection since they have a pretty good motivation section.
Perhaps complete your question with different code examples.
Something that has been bugging me since I read an answer on another stackoverflow question (the precise one eludes me now) where a user stated something like "If you're calling the Service Locator, you're doing it wrong."
It was someone with a high reputation (in the hundred thousands, I think) so I tend to think this person might know what they're talking about. I've been using DI for my projects since I first started learning about it and how well it relates to Unit Testing and what not. It's something I'm fairly comfortable with now and I think I know what I'm doing.
However, there are a lot of places where I've been using the Service Locator to resolve dependencies in my project. One prime example comes from my ModelBinder implementations.
Example of a typical model binder.
public class FileModelBinder : IModelBinder {
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext) {
        ValueProviderResult value = bindingContext.ValueProvider.GetValue("id");
        IDataContext db = Services.Current.GetService<IDataContext>();
        return db.Files.SingleOrDefault(i => i.Id == value.AttemptedValue);
    }
}
not a real implementation - just a quick example
Since the ModelBinder implementation requires a new instance when a Binder is first requested, it's impossible to use Dependency Injection on the constructor for this particular implementation.
It's this way in a lot of my classes. Another example is that of a Cache Expiration process that runs a method whenever a cache object expires in my website. I run a bunch of database calls and what not. There too I'm using a Service Locator to get the required dependency.
Another issue I had recently (that I posted a question on here about) was that all my controllers required an instance of IDataContext which I used DI for - but one action method required a different instance of IDataContext. Luckily Ninject came to the rescue with a named dependency. However, this felt like a kludge and not a real solution.
I thought I, at least, understood the concept of Separation of Concerns reasonably well but there seems to be something fundamentally wrong with how I understand Dependency Injection and the Service Locator Pattern - and I don't know what that is.
The way I currently understand it - and this could be wrong as well - is that, at least in MVC, the ControllerFactory looks for a Constructor for a Controller and calls the Service Locator itself to get the required dependencies and then passes them in. However, I can understand that not all classes and what not have a Factory to create them. So it seems to me that some Service Locator pattern is acceptable...but...
When is it not acceptable?
What sort of pattern should I be on the look out for when I should rethink how I'm using the Service Locator Pattern?
Is my ModelBinder implementation wrong? If so, what do I need to learn to fix it?
In another question along the lines of this one, user Mark Seemann recommended an Abstract Factory. How does this relate?
I guess that's it - I can't really think of any other question to help my understanding but any extra information is greatly appreciated.
I understand that DI might not be the answer to everything and I might be going overboard in how I implement it, however, it seems to work the way I expect it to with Unit Testing and what not.
I'm not looking for code to fix my example implementation - I'm looking to learn, looking for an explanation to fix my flawed understanding.
I wish stackoverflow.com had the ability to save draft questions. I also hope whoever answers this question gets the appropriate amount of reputation for answering this question as I think I'm asking for a lot. Thanks, in advance.
Consider the following:
public class MyClass
{
    IMyInterface _myInterface;
    IMyOtherInterface _myOtherInterface;

    public MyClass(IMyInterface myInterface, IMyOtherInterface myOtherInterface)
    {
        // Foo
        _myInterface = myInterface;
        _myOtherInterface = myOtherInterface;
    }
}
With this design I am able to express the dependency requirements for my type. The type itself isn't responsible for knowing how to instantiate any of the dependencies; they are given to it (injected) by whatever resolving mechanism is used [typically an IoC container]. Whereas:
public class MyClass
{
    IMyInterface _myInterface;
    IMyOtherInterface _myOtherInterface;

    public MyClass()
    {
        // Bar
        _myInterface = ServiceLocator.Resolve<IMyInterface>();
        _myOtherInterface = ServiceLocator.Resolve<IMyOtherInterface>();
    }
}
Our class is now dependent on creating the specific instances, but via delegation to a service locator. In this sense, Service Location can be considered an anti-pattern, because you're not exposing dependencies, and you are allowing problems which could be caught at compile time to bubble up into runtime. (A good read is here.) You're hiding complexities.
The choice between one or the other really depends on what you're building on top of and the services it provides. Typically, if you are building an application from scratch, I would choose DI all the time. It improves maintainability, promotes modularity and makes testing types a whole lot easier. But, taking ASP.NET MVC3 as an example, you could easily implement SL as it's baked into the design.
You can always go for a composite design where you use IoC/DI with SL, much like using the Common Service Locator. Your component parts could be wired up through DI, but exposed through SL. You could even throw composition into the mix and use something like the Managed Extensibility Framework (which itself supports DI, but can also be wired to other IoC containers or service locators). It's a big design choice to make; generally my recommendation would be for IoC/DI where possible.
I wouldn't say your specific design is wrong. In this instance, your code is not responsible for creating an instance of the model binder itself; that's up to the framework, so you have no control over that, but your use of the service locator could probably easily be changed to access an IoC container. But the act of calling Resolve on the IoC container... would you not consider that service location?
With an abstract factory pattern, the factory is specialised at creating specific types. You don't register types for resolution; you essentially register an abstract factory, and that builds any types that you may require. A service locator, by contrast, is designed to locate services and return those instances. They are similar from a convention point of view, but very different in behaviour.
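To make the contrast concrete, here is a hedged sketch (IDataContext is the interface from the question; IDataContextFactory and ReportGenerator are invented): the abstract factory is specialised for one family of objects and appears as an explicit constructor dependency, whereas a service locator would be called from inside the method to resolve anything by key.
// Stand-in for the data-access interface from the question.
public interface IDataContext { /* ... */ }

// Abstract factory: specialised at creating one kind of thing, and visible
// in the constructor signature as an explicit dependency.
public interface IDataContextFactory
{
    IDataContext Create();
}

public class ReportGenerator
{
    private readonly IDataContextFactory _contextFactory;

    public ReportGenerator(IDataContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public void Generate()
    {
        // A fresh context per operation, without the class calling a locator
        // or knowing which concrete context it actually gets.
        IDataContext db = _contextFactory.Create();
        // ... query db and build the report ...
    }
}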
Google Guice provides some great dependency injection features.
I came across the @Nullable feature recently, which allows you to mark constructor arguments as optional (permitting null), since Guice does not permit null by default:
e.g.
public Person(String firstName, String lastName, @Nullable Phone phone) {
    this.firstName = checkNotNull(firstName, "firstName");
    this.lastName = checkNotNull(lastName, "lastName");
    this.phone = phone;
}
https://github.com/google/guice/wiki/UseNullable
What are the other useful features of Guice (particularly the less obvious ones) that people use?
None of 'em are intended to be hidden, but these are my favorite 'bonus features' in Guice:
Guice can inject a TypeLiteral<T>, effectively defeating erasure.
TypeLiteral can do generic type resolution: it can tell you that iterator() on a List<String> returns an Iterator<String>.
Types is a factory for implementations of Java's generic type interfaces.
Grapher visualizes injectors. If your custom provider implements HasDependencies, it can augment this graph.
Modules.override() is incredibly handy in a pinch.
Short syntax for defining parameterized keys: new Key<List<String>>() {}.
Binder.skipSources() lets you write extensions whose error messages track line numbers properly.
The SPI. Elements.getElements() breaks a module into atoms and Elements.getModule() puts them back together.
If you implement equals() and hashCode() in a Module, you can install that module multiple times without problem.
I like how totally open the Scope interface is: basically, it's just a transformation from Provider to Provider. (Okay, from Key and Provider to Provider)
Want some things to be basically Singleton, but re-read from the database every half hour? It's easy to make a scope for that. Want to run some requests in the background, and have a scope that means "all background requests started from the same HTTP request?" It's relatively easy to write that Scope too.
Want to scope some Key on your server during tests so that it uses a separate instance for each test that you're running from a client? (With the test passing the test id in a Cookie or extra HTTP parameter) That's harder to do, but it's perfectly possible and so someone's already written that for you.
Yes, excessive abuse of Scope will cause Jesse to start hunting around for the stakes and garlic cloves, but its amazing flexibility can be really useful.
One great feature of Guice is how easy it makes implementing method interceptors in any Module, using:
public void bindInterceptor(
    Matcher<? super Class<?>> classMatcher,
    Matcher<? super Method> methodMatcher,
    MethodInterceptor... interceptors);
Now, any method matching methodMatcher within a class matching classMatcher in that Module's scope is intercepted by interceptors.
For example:
bindInterceptor(
    Matchers.any(),
    Matchers.annotatedWith(Retryable.class),
    new RetryableInterceptor());
Now, we can simply annotate any method with @Retryable and our RetryableInterceptor can retry it if it fails.