Constructor Injection: How many dependencies is too many? [closed] - dependency-injection

I've been using manual constructor-injection DI for a little while now. One thing that I notice is that my constructors are starting to get rather long.
I have a class that depends on a bunch of little objects - anywhere between 6 and 10 sometimes. As I continue to break my application into smaller chunks, I could see this number increasing over time. Is this a common problem?
Obviously this is going to depend a great deal on the project. However, the basic question is this:
When do you start to get uncomfortable with the number of dependencies that a class has? What are some strategies that you use to reduce these dependencies?

I would not worry about it.
Instead, I would worry about the class being too complex.
A class with many dependencies that uses them all but has no loops or if statements is fine. In some code I was working on recently there were around 14 dependencies in a class. However, there was only one path through the code and no logical way to group the dependencies into better classes.
A class with a small number of dependencies that contains many branch statements or complex loop conditions should be simplified.

This may be a sign that the class with the 6-10 dependencies itself needs to be refactored.

I would think no more than three or four. If you are getting more than that, I would start thinking about how well you are abstracting your concerns. A single repository object, for example, should fulfill all of your data retrieval needs within the class in question.

Runcible,
Here is a link to the Castle Windsor project. It is an Inversion of Control container. Containers like this take over the factory role: you register your components once, and the container resolves each constructor's dependencies and injects them for you.
http://www.castleproject.org/container/index.html
I have heard good things about Windsor. Spring also makes an IoC container, and there are others.
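To make that concrete, here is a minimal registration sketch. The interface and class names are hypothetical, and the fluent registration API shown is Castle Windsor's, which may vary slightly between versions:

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public interface IOrderRepository { }
    public class SqlOrderRepository : IOrderRepository { }

    public class OrderController
    {
        private readonly IOrderRepository _orders;
        public OrderController(IOrderRepository orders) { _orders = orders; }
    }

    public static class Bootstrapper
    {
        public static IWindsorContainer Configure()
        {
            var container = new WindsorContainer();

            // Register each component once; the container then resolves
            // constructor dependencies automatically when a component is requested.
            container.Register(
                Component.For<IOrderRepository>().ImplementedBy<SqlOrderRepository>(),
                Component.For<OrderController>());

            return container;
        }
    }

    // Usage:
    //   var controller = Bootstrapper.Configure().Resolve<OrderController>();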

A class with 6-10 dependencies is a code smell. It is an indication that the class is probably violating the Single Responsibility Principle.
What are some strategies that you use to reduce these dependencies?
Mark Seemann addresses exactly that task in his post Refactoring to Aggregate Services, and in more depth in his book Dependency Injection in .NET. The fact that your class has so many dependencies indicates that it has more than one responsibility. Often there is an implicit domain concept waiting to be made explicit by identifying it and turning it into its own service. Generally speaking, most classes should never need more than 4-5 dependencies.
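As a rough sketch of that refactoring (all names here are hypothetical, following the general shape of the Aggregate Services idea rather than Seemann's exact example): suppose an order service grew to take many fine-grained collaborators, and three of them always work together whenever an order is accepted. Those three can be pulled behind a new, explicitly named service:

    public class Order { }
    public interface IOrderRepository { }
    public interface IInventoryChecker { }
    public interface IPaymentGateway { }
    public interface IInvoiceGenerator { }
    public interface INotificationSender { }
    public interface IAuditLog { }

    // Before: the constructor keeps growing.
    public class OrderService
    {
        public OrderService(
            IOrderRepository orders,
            IInventoryChecker inventory,
            IPaymentGateway payments,
            IInvoiceGenerator invoices,
            INotificationSender notifications,
            IAuditLog audit)
        {
            // ...
        }
    }

    // After: the implicit "order fulfillment" concept is made explicit.
    public interface IOrderFulfillment
    {
        void Fulfill(Order order);
    }

    public class OrderFulfillment : IOrderFulfillment
    {
        private readonly IInventoryChecker _inventory;
        private readonly IPaymentGateway _payments;
        private readonly IInvoiceGenerator _invoices;

        public OrderFulfillment(IInventoryChecker inventory,
                                IPaymentGateway payments,
                                IInvoiceGenerator invoices)
        {
            _inventory = inventory;
            _payments = payments;
            _invoices = invoices;
        }

        public void Fulfill(Order order) { /* coordinate the three collaborators */ }
    }

    public class RefactoredOrderService
    {
        public RefactoredOrderService(
            IOrderRepository orders,
            IOrderFulfillment fulfillment,   // replaces three separate dependencies
            INotificationSender notifications,
            IAuditLog audit)
        {
            // ...
        }
    }

The point is not fewer classes overall; it is that the constructor now names a concept instead of listing its ingredients.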

You may also want to see if any of the parameters to your constructor should be combined into a single class as well (assuming that the parameters make sense as a class).
It might also be that you want to look at using the ServiceLocator pattern for some of your dependencies. This is particularly true if you're having to pass the dependencies down a long chain of constructors.
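A minimal sketch of the "combine parameters into a class" idea, with hypothetical names: three related values that always travel together get promoted to their own small class, which shrinks the constructor and gives the concept a name:

    public interface IReportRepository { }
    public interface IClock { }

    // Before
    public class ReportGenerator
    {
        public ReportGenerator(string smtpHost, int smtpPort, string senderAddress,
                               IReportRepository reports, IClock clock)
        {
            // ...
        }
    }

    // After: the three mail-related values become one explicit concept.
    public class SmtpSettings
    {
        public string Host { get; private set; }
        public int Port { get; private set; }
        public string SenderAddress { get; private set; }

        public SmtpSettings(string host, int port, string senderAddress)
        {
            Host = host;
            Port = port;
            SenderAddress = senderAddress;
        }
    }

    public class TidierReportGenerator
    {
        public TidierReportGenerator(SmtpSettings smtp, IReportRepository reports, IClock clock)
        {
            // ...
        }
    }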


Which layer should DBContext, Repository, and UnitOfWork be in? [closed]

I want to use Layered Architecture and EF, Repository and UoW Pattern in the project.
Which layer should DBContext, Repository, and UnitOfWork be in?
DAL or BLL?
I would put your DbContext implementation in your DAL (Data Access Layer). You will probably get different opinions on this, but I would not implement the repository or unit of work patterns. Technically, the DbContext is the unit of work and the IDbSet is a repository. By implementing your own, you are adding an abstraction on top of an abstraction.
More on this here and here.
DAL is an acronym for Data Access Layer. DbContext, repositories and Unit of Work are all about working with data, so you should definitely place them in the DAL.
"Should" is probably not the correct word here, as there are many views on this subject.
If you want to implement these patterns by the book, I would check out this link from the ASP.NET guys:
https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
But I actually have started to layer it like this:
Controller / Logic <- Where business logic and boundary objects are created and transformed.
Repository <- Where logic related to persistence and to transforming entities and query objects lives
Store <- Where the actual implementations of storage mechanisms reside. This is abstracted away behind an interface.
This way both the business logic and the repository logic are testable, decoupled, and free to use whatever persistence mechanism they like - or none at all - without the rest of the application knowing anything about it.
This is of course true of other patterns as well; this is just my take on it.
The DbContext should never cross beyond the boundary of the DAL; if you want to put your repositories or units of work there, you are free to, just do not let them leak their details or dependencies upwards. In my opinion the DbContext should be scoped as narrowly as possible, to keep it as clean as possible - you never know where that context has been... please wear protection! Jokes aside, if you have an async, multithreaded, multi-node application and you pass DbContexts around everywhere, you will run into general concurrency and data-race issues.
What I like to do is start with an in-memory store that I inject into my controller. As soon as that store starts serving multiple entities and the persistence logic gets more and more complex, I refactor it into a store with a repository on top. Once all my tests pass and it works the way I want, I start to create database or file-system based implementations of that store, as in the sketch below.
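A minimal sketch of that progression, with hypothetical names: the controller only knows about an ICustomerStore, the first implementation is an in-memory dictionary, and a database-backed implementation can be swapped in later without touching the controller:

    using System.Collections.Generic;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerStore
    {
        Customer Find(int id);
        void Add(Customer customer);
    }

    // First implementation: a dictionary is plenty until persistence logic grows.
    public class InMemoryCustomerStore : ICustomerStore
    {
        private readonly Dictionary<int, Customer> _customers = new Dictionary<int, Customer>();

        public Customer Find(int id)
        {
            Customer customer;
            return _customers.TryGetValue(id, out customer) ? customer : null;
        }

        public void Add(Customer customer)
        {
            _customers[customer.Id] = customer;
        }
    }

    // The controller only ever sees the interface, so a database- or
    // file-based store can replace the in-memory one later.
    public class CustomerController
    {
        private readonly ICustomerStore _store;
        public CustomerController(ICustomerStore store) { _store = store; }

        public Customer Show(int id)
        {
            return _store.Find(id);
        }
    }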
Again my opinions here, because this is a pretty general question, which has few "true" answers, just a lot of opinions.
Most opinions on this are valid, they just have different strengths and weaknesses, and the important part is to figure out which strengths you need, and how you will work with the weaknesses.
Your repositories should hold references to DbSet<T> objects, and once you add, update or remove entities through one or more repositories, you should invoke SaveChanges from the unit of work. Therefore you should place the DbContext inside the Unit of Work implementation.
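A minimal sketch of that arrangement (hypothetical names, EF 6 style): the repository works against a DbSet<T> and never saves, while the unit of work owns the DbContext and exposes a single commit:

    using System;
    using System.Data.Entity;   // EF 6

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    // The repository only talks to a DbSet<T>; it never calls SaveChanges itself.
    public class ProductRepository
    {
        private readonly DbSet<Product> _products;
        public ProductRepository(ShopContext context) { _products = context.Set<Product>(); }

        public Product Find(int id) { return _products.Find(id); }
        public void Add(Product product) { _products.Add(product); }
    }

    // The unit of work owns the DbContext; one Commit saves the work done
    // through any number of repositories as a single change set.
    public class UnitOfWork : IDisposable
    {
        private readonly ShopContext _context = new ShopContext();

        public ProductRepository Products
        {
            get { return new ProductRepository(_context); }
        }

        public int Commit() { return _context.SaveChanges(); }
        public void Dispose() { _context.Dispose(); }
    }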

Which data layer / handling architecture or pattern to choose for a non-enterprise web application? (MVC / EF) [closed]

I need some help in making a design choice for my application. It’s a fairly straightforward web application, definitely not enterprise class or enterprise-anything.
The architecture is standard MVC 5 / EF 6 / C# ASP.NET. The pages talk to a back-end SQL Server database, all the tables have corresponding entity objects generated from VS 2013 using the EF designer, and I don't see that changing anytime in the near future. Creating super-abstract "what if my database changes" separations is therefore possibly pointless. I am a one-man operation, so we're not talking huge teams.
What I want is a clean way to do CRUD and query operations on my database, using DbContext and LINQ operations – but I’m not good with database related code design. Here are my approaches
1. Static class with methods - Should I create a static class (my DAL) that holds my data context and provides functions that controllers can call directly?
e.g. MyStaticDBLib.GetCustomerById(id)
but this poses problems when we try to update records from disconnected instances (i.e. I create an object from a JSON response and need to 'update' my table). The good thing is I can centralize my operations in a Lib or DAL file. This is also quickly getting complicated and messy, because I can't create methods for every scenario, so I end up with bits of LINQ code in my controllers and bits handled by these Lib methods.
2. Class with context, held in a singleton, and called from controller
MyContext _cx = MyStaticDBLib.GetMyContext(“sessionKey”);
var xx = cx.MyTable.Find(id) ; //and other LINQ operations
This feels a bit messy as my data query code is in my controllers now but at least I have clean context for each session. The other thinking here is LINQ-to-SQL already abstracts the data layer to some extent as long as the entities remain the same (the actual store can change), so why not just do this?
3. Use a generic repository and unit of work pattern - now we're getting fancy. I've read a bit about this pattern, and there's so much conflicting advice, including some strongly suggesting that EF6 already builds the repository into its context and that this is therefore overkill. It does feel like overkill, but I need someone here to confirm that given my context.
4. Something else? Some other clean way of handling basic database/CRUD
Right now I have the library-type approach (1. above) and it's getting increasingly messy. I've read many articles and I'm struggling because there are so many different approaches, but I hope the context I've given can elicit a few responses as to what approach may suit me. I need to keep it simple, and I'm a one-man operation for the near future.
Absolutely not #1. The context is not thread safe and you certainly wouldn't want it as a static var in a static class. You're just asking for your application to explode.
Option 2 is workable as long as you ensure that your singleton is thread-safe. In other words, it'd be a singleton per-thread, not for the entire application. Otherwise, the same problems with #1 apply.
Option 3 is typical but short-sighted. The repository/unit of work patterns are pretty much replaced by having an ORM. Wrapping Entity Framework in another layer like this only removes many of the benefits of working with Entity Framework while simultaneously increasing the friction involved in developing your application. In other words, it's a lose-lose and completely unnecessary.
So, I'll go with #4. If the app is simple enough, just use your context directly. Employ a DI container to inject your context into the controller and make it request-scoped (new context per request). If the application gets more complicated or you just really, really don't care for having a dependency on Entity Framework, then apply a service pattern, where you expose endpoints for specific datasets your application needs. Inject your context into the service class(es) and then inject your service(s) into your controllers. Hint: your service endpoints should return fully-formed data that has been completely queried from the database (i.e. return lists and similar enumerables, not queryables).
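A minimal sketch of the second half of that suggestion (hypothetical names, EF 6 / MVC 5 style): a small service class takes the request-scoped context and returns fully materialized lists, and the controller takes the service. The DI registration itself depends on which container you pick, so it is only hinted at in comments:

    using System.Collections.Generic;
    using System.Data.Entity;   // EF 6
    using System.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // The service takes the request-scoped context and returns fully
    // materialized results, so no IQueryable leaks into the controller.
    public class CustomerService
    {
        private readonly ShopContext _context;
        public CustomerService(ShopContext context) { _context = context; }

        public List<Customer> GetActiveCustomers()
        {
            return _context.Customers
                           .Where(c => c.IsActive)
                           .OrderBy(c => c.Name)
                           .ToList();
        }
    }

    // In the real app this would derive from Controller; the DI container
    // injects the service, and the context lives for exactly one request.
    public class CustomersController
    {
        private readonly CustomerService _customers;
        public CustomersController(CustomerService customers) { _customers = customers; }
    }

    // Container registration (pseudocode; exact calls depend on the container):
    //   register ShopContext with per-request lifetime
    //   register CustomerService and the controllers with per-request lifetime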
While Chris's answer is a valid approach, another option is to use a very simple concrete repository/service façade. This is where you put all your data access code behind an interface layer, like IUserRepository.GetUsers(), and the implementation of that interface holds all of your Entity Framework code.
The value here is separation of concerns, added testability (although EF6+ now allows mocking directly, so that's less of an issue) and, more importantly, should you decide someday to change your database code, it's all in one place - all without a huge amount of overhead.
It's also a breeze to inject via dependency injection.
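A minimal sketch of that façade (hypothetical names, assuming EF 6): the interface is all the rest of the application sees, and the EF code lives in one concrete class:

    using System.Collections.Generic;
    using System.Data.Entity;   // EF 6
    using System.Linq;

    public class User
    {
        public int Id { get; set; }
        public string Email { get; set; }
    }

    public class MembershipContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

    // The rest of the application only ever sees this interface.
    public interface IUserRepository
    {
        List<User> GetUsers();
        User GetById(int id);
    }

    // All Entity Framework code is confined to this one class, so a future
    // change of data access technology touches only this file.
    public class EfUserRepository : IUserRepository
    {
        private readonly MembershipContext _context;
        public EfUserRepository(MembershipContext context) { _context = context; }

        public List<User> GetUsers()
        {
            return _context.Users.OrderBy(u => u.Email).ToList();
        }

        public User GetById(int id)
        {
            return _context.Users.Find(id);
        }
    }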

How to write good base classes for iOS projects? [closed]

I've been developing apps for iOS for some time and find that there are many repetitive tasks. So I want to write base classes that upcoming projects will subclass, so that they take less time and make it easier to track code across projects. My main concerns are:
Write a good base model class that supports several storage strategies (Core Data, archiving, ...). This model class also has JSON-to-property mapping, like Mantle, so that the model on the device and on the server stay the same
Write a good base networking class (mostly with AFNetworking)
Write a good base view controller class. I see some repetitive tasks: keyboard avoidance with a scroll view, logging, crash tracking, loading views from nibs, ...
Find and use some other good categories for UIView, UINib, Autolayout, ...
These are just my concerns. It may seem a vague topic, and I'm not asking how to use libraries or how to make reusable components.
I just want to ask about others' experience making these kinds of base classes and where I can learn more.
You are not the only one with this problem; I've run into the same thing on many projects. The best solution is open-source libraries: the good ones are usually updated often and keep up with Apple's SDK releases. I will explain what I use to keep boilerplate code to a minimum.
Base model - Since I only use Network and Core Data models, I use MagicalRecord for Core Data and JSONModel for network based models (that map to API responses).
Networking classes - these are built around AFNetworking and the previously mentioned JSONModel; I have not found a need for anything else. I can easily extend them with categories.
There are many libraries for keeping UITextFields clear of the keyboard in a UIScrollView, but mostly I just use custom code; if I need a library, I use TPKeyboardAvoiding. For crash tracking I use Crashlytics or Flurry; they provide their own SDKs, so I do not need much code. And I do not use nibs anymore.
There are many useful categories around on the web. I created my own repository as a CocoaPod, which keeps all useful categories in a single pod. I keep the repository up to date and add new categories and small classes when I need them. The downside is that you usually do not need all of them, so sometimes more code is loaded than necessary, but so far I have not noticed any performance problems. If you want, you can take a look at how it is organized on GitHub.
Do not forget about project initialization; I've been working on my own custom Xcode project templates to solve that problem.

Building a testable MVC3 & EF 4.1 app [closed]

First, I apologize for the open-ended nature of this question. However, I've been paralyzed by this for months, and in spite of constant searching, still cannot get past it.
I have been working on an MVC/EF app for a while. I'm trying to get an understanding on how to design and build a testable MVC3 application backed by Entity Framework (4.1). You can see a few questions I've asked on the subject here, here, and here.
I'm trying not to overcomplicate it, but I want it to be a sound, loosely coupled design that can grow. The way I understand it, the following are pretty much the bare-minimum required components:
MVC app
This is very thin. As little logic as possible goes here. My views have as little conditional logic as possible, my view models are never more than POCOs, and my controllers simply handle the mapping between the view models and domain models, and calling out to services.
Service layer + interfaces (separate assemblies)
This is where all of my business logic goes. The goal of this is to be able to slap any thin client (forms app, mobile app, web service) on top of this to expose the guts of my application. Interfaces for the service layer sit in another assembly.
Core utilities/cross-cutting + interfaces (separate assemblies)
This is stuff I build that is not specific to my application, but is not part of the framework or any third party plugin I'm using. Again, interfaces to these components sit in their own assembly.
Repository (EF context)
This is the interface between my domain models and my database. My service layer uses this to retrieve/modify my database via the domain models.
Domain models (EF POCOs)
The EF4-generated POCOs. Some of these may be extended for convenience with nested or computed properties (such as Order.Total = Order.Details.Sum(d => d.Price)).
IoC container
This is what is used for injecting my concrete/fake dependencies (services/utilities) into the MVC app & services. Constructor injection is used exclusively throughout.
Here is where I'm struggling:
1) When integration testing is appropriate vs. unit testing. For example, will some assemblies require a mix of both, or is integration testing mainly for the MVC app and unit testing for my services & utilities?
2) Do I bother writing tests against the repository/domain model code? Of course in the case of POCOs, this is not applicable. But what about when I extend my POCOs w/ computed properties?
3) The proper pattern to use for repositories. I know this is very subjective, as every time I see this discussed, it seems everyone has a different approach. Therefore it makes it hard to figure out which way to go. For example, do I roll my own repositories, or just use EF (DbContext) directly?
4) When I write tests for my services, do I mock my repositories, or do I use SQLite to build a mock database and test against that? (See debates here and here).
5) Is this an all-or-nothing affair, as in, if I do any testing at all, I should test everything? Or, is it a matter of any testing is better than no testing? If the latter, where are the more important areas to hit first (I'm thinking service layer)?
6) Are there any good books, articles, or sample apps that would help answer most of these questions for me?
I think that's enough for now. If this ends up being too open ended, let me know and I will gladly close. But again, I've already spent months trying to figure this out on my own with no luck.
This is a really complex question. Each of your points is large enough to be a separate question, so I will write only a short summary:
Integration testing and unit testing don't replace each other. You always need both if you want a well-tested application. A unit test is for testing logic in isolation (usually with the help of mocks, stubs, fakes, etc.), whereas an integration test is for testing that your components work correctly together (= no mocks, stubs or fakes). When to use an integration test and when to use a unit test really depends on the code you are testing and on the development approach you are following (for example TDD).
If your POCOs contain any logic, you should write unit tests for them (see the sketch below). Logic in your repositories is usually heavily dependent on the database, so mocking the context and testing them without a database is usually useless; cover them with integration tests instead.
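For the computed-property case from the question, a unit test needs no database at all; a plain in-memory object is enough. The names are hypothetical and the assertions are NUnit-style, but any test framework works the same way:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public class OrderDetail
    {
        public decimal Price { get; set; }
    }

    public class Order
    {
        public Order() { Details = new List<OrderDetail>(); }

        public List<OrderDetail> Details { get; private set; }

        // The computed property from the question.
        public decimal Total
        {
            get { return Details.Sum(d => d.Price); }
        }
    }

    [TestFixture]
    public class OrderTests
    {
        [Test]
        public void Total_sums_the_detail_prices()
        {
            var order = new Order();
            order.Details.Add(new OrderDetail { Price = 10m });
            order.Details.Add(new OrderDetail { Price = 2.5m });

            Assert.AreEqual(12.5m, order.Total);
        }
    }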
It really depends on what you expect from repositories. If a repository is only a dumb DbContext / DbSet wrapper, its value is zero, and it will most probably not make your code unit testable, as described in some of the referenced debates. If it wraps queries (no LINQ-to-Entities in the upper layer) and exposes access to aggregate roots, then the repository serves its real purpose: separating data access and exposing a mockable interface.
This depends entirely on the previous point. If you expose IQueryable, or methods accepting Expression<Func<...>> that are passed to IQueryable internally, you cannot correctly mock the repository (well, you can, but you still need to pair each unit test with an integration test covering the same logic); LINQ-to-Entities is a "side effect" / leaky abstraction. If you completely wrap the queries inside the repository and use your own declarative query language (the specification pattern), you can mock them - see the sketch below.
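As a rough sketch of the difference (hypothetical names): the first interface leaks IQueryable and is effectively untestable without a database, while the second wraps the query behind an explicit method, which a unit test can stub with a canned list:

    using System.Collections.Generic;
    using System.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public bool IsActive { get; set; }
    }

    // Leaky: callers compose arbitrary LINQ that only LINQ-to-Entities
    // can really answer, so mocking this proves very little.
    public interface ILeakyCustomerRepository
    {
        IQueryable<Customer> Customers { get; }
    }

    // Wrapped: the query lives behind a named method, which a unit
    // test can replace with a canned list.
    public interface ICustomerRepository
    {
        IList<Customer> GetActiveCustomers();
    }

    // A hand-rolled stub for unit tests; no database involved.
    public class StubCustomerRepository : ICustomerRepository
    {
        private readonly IList<Customer> _canned;
        public StubCustomerRepository(IList<Customer> canned) { _canned = canned; }
        public IList<Customer> GetActiveCustomers() { return _canned; }
    }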
Any testing is better than no testing. Many methodologies expect high test coverage; TDD even aims for 100% coverage, because the test is always written first and no logic exists without a test. It comes down to the methodology you follow and your professional judgment about whether a piece of code needs a test.
I don't think there is any "read this and you will know how to do it". This is software engineering, and software engineering is an art. There is no blueprint that works in every case (or even in most cases).

How do you make code reusable? [closed]

Any code can be reused in one way or another, at least if you modify it. Random code is not very reusable as such. When I read books on the subject, they usually say that you should explicitly make code reusable by taking other usage situations into account too. But code should not turn into an omnipotent, all-doing class either.
I would like to have reusable code that I don't have to change later. How do you make code reusable? What are the requirements for code being reusable? What are the things that reusable code should definitely have and what things are optional?
See 10 tips on writing reusable code for some help.
Keep the code DRY. DRY means "Don't Repeat Yourself".
Make a class/method do just one thing.
Write unit tests for your classes AND make it easy to test classes.
Keep the business logic or main code separate from any framework code
Try to think more abstractly and use Interfaces and Abstract classes.
Code for extension. Write code that can easily be extended in the future.
Don't write code that isn't needed.
Try to reduce coupling.
Be more Modular
Write code like your code is an External API
If you take the Test-Driven Development approach, then your code only becomes reusable as you refactor based on forthcoming scenarios.
Personally I find constantly refactoring produces cleaner code than trying to second-guess what scenarios I need to code a particular class for.
More than anything else, maintainability makes code reusable.
Reusability is rarely a worthwhile goal in itself. Rather, it is a by-product of writing code that is well structured, easily maintainable and useful.
If you set out to make reusable code, you often find yourself trying to take into account requirements for behaviour that might be required in future projects. No matter how good you become at this, you'll find that you get these future-proofing requirements wrong.
On the other hand, if you start with the bare requirements of the current project, you will find that your code can be clean and tight and elegant. When you're working on another project that needs similar functionality, you will naturally adapt your original code.
I suggest looking at the best practices for your chosen programming language / paradigm (e.g. patterns and SOLID for Java / C# types), the Lean / Agile programming literature, and (of course) the book "Code Complete". Understanding the advantages and disadvantages of these approaches will improve your coding practice no end. All your code will then become reusable - but 'by accident', rather than by design.
Also, see here: Writing Maintainable Code
You'll write various modules (parts) when writing a relatively big project. Reusable code in practice means you'll have to create libraries that other projects needing the same functionality can use.
So, you have to identify modules that can be reused. To do that:
Identify the core competence of each module. For instance, if your project has to compress files, you'll have a module that will handle file compression. Do NOT make it do more than ONE THING. One thing only.
Write a library (or class) that will handle file compression, without needing anything more than the file to be compressed, the output and the compression format. This will decouple the module from the rest of the project, enabling it to be (re)used in a different setting.
You don't have to get it perfect the first time, when you actually reuse the library you will probably find out flaws in the design (for instance, you didn't make it modular enough to be able to add new compression formats easily) and you can fix them the second time around and improve the reusability of your module. The more you reuse it (and fix the flaws), the easier it'll become to reuse.
The most important thing to consider is decoupling; if you write tightly coupled code, reusability is the first casualty.
Leave all the needed state or context outside the library, and add methods that let callers pass that state in.
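A sketch of the compression example along those lines (hypothetical names): the class does one thing, takes everything it needs as parameters, and keeps no hidden state tied to the host project:

    using System.IO;
    using System.IO.Compression;

    public enum CompressionFormat { GZip, Deflate }

    // One job only: compress a stream. All state comes in as parameters,
    // so nothing ties this class to any particular project.
    public static class StreamCompressor
    {
        public static void Compress(Stream input, Stream output, CompressionFormat format)
        {
            Stream target = format == CompressionFormat.GZip
                ? (Stream)new GZipStream(output, CompressionMode.Compress, leaveOpen: true)
                : new DeflateStream(output, CompressionMode.Compress, leaveOpen: true);

            using (target)
            {
                input.CopyTo(target);
            }
        }
    }

    // Usage from any project:
    //   using (var input = File.OpenRead("report.txt"))
    //   using (var output = File.Create("report.txt.gz"))
    //       StreamCompressor.Compress(input, output, CompressionFormat.GZip);

Adding a new format later (the flaw you might discover on second use) means extending the enum and the one switch point, not touching any caller.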
For most definitions of "reuse", reuse of code is a myth, at least in my experience. Can you tell I have some scars from this? :-)
By reuse, I don't mean taking existing source files and beating them into submission until a new component or service falls out. I mean taking a specific component or service and reusing it without alteration.
I think the first step is to get yourself into a mindset that it's going to take at least 3 iterations to create a reusable component. Why 3? Because the first time you try to reuse a component, you always discover something that it can't handle. So then you have to change it. This happens a couple of times, until finally you have a component that at least appears to be reusable.
The other approach is to do an expensive forward-looking design. But then the cost is all up-front, and the benefits (may) appear some time down the road. If your boss insists that the current project schedule always dominates, then this approach won't work.
Object-orientation allows you to refactor code into superclasses. This is perhaps the easiest, cheapest and most effective kind of reuse. Ordinary class inheritance doesn't require a lot of thinking about "other situations"; you don't have to build "omnipotent" code.
Beyond simple inheritance, reuse is something you find more than you invent. You find reuse situations when you want to reuse one of your own packages to solve a slightly different problem. When you want to reuse a package that doesn't precisely fit the new situation, you have two choices.
Copy it and fix it. You now have two nearly identical packages -- a costly mistake.
Make the original package reusable in two situations.
Just do that for reuse. Nothing more. Too much thinking about "potential" reuse and undefined "other situations" can become a waste of time.
Others have mentioned these tactics, but here they are formally. These three will get you very far:
Adhere to the Single Responsibility Principle - it ensures your class only "does one thing", which means it's more likely to be reusable in another application that includes that same thing.
Adhere to the Liskov Substitution Principle - it ensures your code "does what it's supposed to do without surprises", which means it's more likely to be reusable in another application that needs the same thing done.
Adhere to the Open/Closed Principle - it ensures your code can be made to behave differently without modifying its source, which means it's more likely to be reusable without direct modification.
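As one small illustration of the open/closed point (all names hypothetical): new discount rules are added by writing a new class, not by editing the calculator:

    using System.Collections.Generic;
    using System.Linq;

    public interface IDiscountRule
    {
        decimal Apply(decimal price);
    }

    public class SeasonalDiscount : IDiscountRule
    {
        public decimal Apply(decimal price) { return price * 0.9m; }
    }

    public class LoyaltyDiscount : IDiscountRule
    {
        public decimal Apply(decimal price) { return price - 5m; }
    }

    // Closed for modification, open for extension:
    // a new rule is a new class, and this calculator never changes.
    public class PriceCalculator
    {
        private readonly IEnumerable<IDiscountRule> _rules;
        public PriceCalculator(IEnumerable<IDiscountRule> rules) { _rules = rules; }

        public decimal Calculate(decimal basePrice)
        {
            return _rules.Aggregate(basePrice, (price, rule) => rule.Apply(price));
        }
    }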
To add to the items mentioned above, I'd say:
Make the functions you need to reuse generic
Use configuration files and make the code use properties defined in files or a database
Clearly factor your code into functions/classes that provide independent functionality and can be used in different scenarios, and define those scenarios using the config files
I would add the concept of "Class composition over class inheritance" (which is derived from other answers here).
That way the "composed" object doesn't care about the internal structure of the object it depends on - only its behavior, which leads to better encapsulation and easier maintainability (testing, less details to care about).
In languages such as C# and Java it is often crucial since there is no multiple inheritance so it helps avoiding inheritance graph hell u might have.
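A minimal sketch of composition over inheritance (hypothetical names): instead of a Report base class that bakes in how rendering works, the report is composed with a renderer it only knows by behavior:

    public interface IRenderer
    {
        string Render(string content);
    }

    public class HtmlRenderer : IRenderer
    {
        public string Render(string content) { return "<p>" + content + "</p>"; }
    }

    public class PlainTextRenderer : IRenderer
    {
        public string Render(string content) { return content; }
    }

    // Report is composed with a renderer; it depends on behavior, not structure.
    public class Report
    {
        private readonly IRenderer _renderer;
        public Report(IRenderer renderer) { _renderer = renderer; }

        public string Print(string body) { return _renderer.Render(body); }
    }

    // var report = new Report(new HtmlRenderer());
    // report.Print("quarterly numbers");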
As mentioned, modular code is more reusable than non-modular code.
One way to help towards modular code is to use encapsulation, see encapsulation theory here:
http://www.edmundkirwan.com/
Ed.
Avoid reinventing the wheel. That's it - and that by itself brings many of the benefits mentioned above. If you do need to change something, you just create another piece of code, another class, another constant, another library, etc., and it helps you and the rest of the developers working on the same application.
Comment, in detail, everything that seems like it might be confusing when you come back to the code next time. Excessively verbose comments can be slightly annoying, but they're far better than sparse comments, and can save hours of trying to figure out WTF you were doing last time.
