Debating the use of dependency injection in an iOS app

I'm working on an iOS app where we need different binaries for each customer based on their needs. A customer may want to change all the colors, icons, and texts; we can do that through a white-labeling process. The problem, though, is when they ask for different behavior, for instance removing the login screen and making login optional.
I thought we could use dependency injection and use different handlers for each customer if needed. For instance, we could have LoginHandler1 and LoginHandler2, both implementing ILoginHandler and inheriting from UIViewController.
However, dependency injection is costly: it slows down the app, because resolving is expensive compared to normal instantiation.
The other way is to define all of these behaviors in the app and enable/disable them in a plist file, e.g. "is login optional? yes/no".
Any suggestions?
Thanks

You should create the entire object graph up-front, in the composition root. Object creation, and constructor injection, should not take much time at all as long as your constructors are not doing any actual work.
That being said, there are times when creating the entire object graph at the start of the application may take longer than is acceptable. In those cases, you can use lazy-loading to defer the costly initialization until later - while still creating the objects in the composition root.
Mark Seemann describes this approach in more detail here: Compose object graphs with confidence.
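A minimal sketch of that lazy-composition idea in Objective-C (all names here are illustrative, not from the article): the proxy conforms to the same protocol as the costly service, so it can be composed up-front while deferring the expensive work.
#import <Foundation/Foundation.h>

// The protocol the rest of the app depends on.
@protocol DataStore <NSObject>
- (NSArray *)allRecords;
@end

// Stand-in for a service that is expensive to initialize.
@interface CostlyDataStore : NSObject <DataStore>
@end

@implementation CostlyDataStore
- (NSArray *)allRecords { return @[]; } // imagine heavy disk I/O behind this
@end

// Lazy virtual proxy: composed up-front in the composition root,
// but defers the costly initialization until the first real use.
@interface LazyDataStore : NSObject <DataStore>
@end

@implementation LazyDataStore {
    id<DataStore> _underlying;
}
- (NSArray *)allRecords {
    if (_underlying == nil) {
        _underlying = [[CostlyDataStore alloc] init];
    }
    return [_underlying allRecords];
}
@end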

I thought we could use dependency injection and use different handlers for each customer if needed.
You thought right. Flexibility is one of the main reasons people use DI.
However, dependency injection is costly: it slows down the app, because resolving is expensive compared to normal instantiation.
It really doesn't cost that much at all. Have you tried it yourself? Unless the object in question (i.e. the object being injected) is very expensive to instantiate, you have no real reason to stay away from DI and Inversion of Control. Also, as @Lilshieste noted above, creating the object graph up front (see AppDelegate) will probably make this even less of a problem.
A good way of doing that is described here: http://cocoapatterns.com/passing-data-between-view-controllers/ and here: http://cocoapatterns.com/ios-view-controller-transitions-mediator-pattern/
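As a hedged sketch of what constructor injection into a view controller can look like (reusing the ILoginHandler idea from the question; the controller and method names are invented):
#import <UIKit/UIKit.h>

@protocol ILoginHandler <NSObject>
- (void)presentLoginFlowFrom:(UIViewController *)presenter;
@end

// The view controller receives its handler at init time instead of reaching
// out to a global; swapping LoginHandler1 for LoginHandler2 happens only in
// the composition root, not here.
@interface WelcomeViewController : UIViewController
- (instancetype)initWithLoginHandler:(id<ILoginHandler>)handler;
@end

@implementation WelcomeViewController {
    id<ILoginHandler> _loginHandler;
}
- (instancetype)initWithLoginHandler:(id<ILoginHandler>)handler {
    self = [super initWithNibName:nil bundle:nil];
    if (self) {
        _loginHandler = handler;
    }
    return self;
}
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // A real app would track whether login is actually required here.
    [_loginHandler presentLoginFlowFrom:self];
}
@end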
The other way is to define all of these behaviors in the app and enable/disable them in a plist file, e.g. "is login optional? yes/no".
While less "elegant", this solution is a pretty useful one, especially if the project is not very big in terms of the number of classes and VCs. It is also the easiest one to implement if the app code is already laid out and introducing major design changes would require a lot of refactoring.
Always act based on the task at hand; there is rarely, if ever, a single solution to a software design problem.
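If you go the plist route, the reading side can stay tiny. A sketch, assuming a Features.plist bundled per customer (the file name and the "LoginOptional" key are made up for illustration):
#import <Foundation/Foundation.h>

// Reads customer-specific flags from a Features.plist shipped in each build.
// Returns NO if the file or key is missing.
static BOOL FeatureFlag(NSString *key) {
    NSString *path = [[NSBundle mainBundle] pathForResource:@"Features"
                                                     ofType:@"plist"];
    NSDictionary *features = [NSDictionary dictionaryWithContentsOfFile:path];
    return [features[key] boolValue];
}

// Usage, e.g. in the app delegate:
// if (FeatureFlag(@"LoginOptional")) { /* skip mandatory login */ }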

Related

If a dependency is only used in a single method, should I inject it or use service location?

I am a bit new to dependency injection and came across this question while working.
Let's say I have a class 'Employee' with one method, 'Promote', that gets called conditionally, and only in the rarest of scenarios.
The 'Promote' method uses an object of type 'ValueAddition'. Is it best practice to have this object injected via the constructor and held in a class-level field, or should I resolve the dependency in the method itself?
What is the recommended best practice? Any pointers on the lifetime of a resolved dependency would also be helpful.
In general you should always look to inject dependencies rather than use service location (i.e., resolving a dependency directly from the inversion of control container). There's a great, pretty well-known blog article called "Service Locator is an Anti-Pattern" that can help clarify the reasons.
You may run into other issues as you expand that seem to point you to using service location. In general, you really should design around those issues instead of falling back to service location.
"I only need the object in one method." Consider moving the functionality of that method to a different object. For example, maybe the Promote() method should be on an IEmployeePromoter that you resolve where needed and that holds the special object. Then you'd call Promote(Employee) and pass in the employee, rather than changing the dependencies of Employee. Sometimes your inversion of control container will have a way to generate a factory (like Func<T> or Lazy<T>) to help you defer resolution until you actually need it.
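Sketched in Objective-C to match the rest of this page (the question itself reads as .NET), that refactoring might look like this, with all names invented:
#import <Foundation/Foundation.h>

@class Employee;

// The rarely-needed behavior lives behind its own abstraction...
@protocol EmployeePromoter <NSObject>
- (void)promote:(Employee *)employee;
@end

// ...so Employee never carries a ValueAddition dependency. The promoter is
// resolved only where promotion actually happens, and the employee arrives
// as a method argument instead of as a constructor dependency.
@interface ValueAdditionPromoter : NSObject <EmployeePromoter>
@end

@implementation ValueAdditionPromoter
- (void)promote:(Employee *)employee {
    // apply the ValueAddition logic here, injected into this class if needed
}
@end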
"I have way too many dependencies." Usually this means your object is doing too much. The single responsibility principle can help you refactor your objects down to a more manageable size. Smaller objects generally need fewer dependencies because they have far fewer concerns to manage.
There's a good amount of documentation, and there are books, on the pattern of dependency injection. It might be good to widen the scope of your search to patterns and practices in general, not tied to a particular framework (or even language); that can help inform best practices in your own code.

Data Accessor object singleton or some other pattern? (Objective-C)

It seems to satisfy the three requirements here: On Design Patterns: When to use the Singleton?
I need it to exist only once.
I need to access it from all over the source base.
It handles concurrent access, i.e. it locks for writes but can handle concurrent reads.
Hi all,
I've been reading a lot of no doubt intelligent, educated, and wise gems of advice that singletons are 'evil', anti-patterns, or just plain bad news.
The argument is usually that logging makes sense, but not much else.
I'm just interested to know whether what is essentially a persistent data store context makes sense as a singleton, i.e. reading/writing from disk to memory and referencing an object graph.
And if not, how do people normally tackle this? I don't currently have any problem with it: I know it's created only once, it's fast, and the accessor logic is in one place, meaning it takes me one line of code to do anything data-model related.
Which leaves the only argument that it's bad for testing, in that it's a hard-coded production implementation of the data. But couldn't I just write a method swizzle, through a category or in the test code, to create the test version of the singleton?
And the final argument from DI advocates is that it's a hard-coded implementation rather than simply an interface to something. I do get that, but I don't really have a major drive to use a DI framework, given that I can use protocols for the implementation and separate init methods for setting up an object's state in testing. There are only ever going to be two types of state for the singleton, or realistically one: production.
Making it (in my opinion) much easier to read and faster to develop.
Change my view SO?
Yup, for some people singletons are evil. To newer developers who have little MRC knowledge and mostly know ARC, they sound scary because of the need to mess with memory, volatile, synchronization, etc.
However, there is nothing wrong with using them; they do have their own purposes, some of which are below:
When sharing large data models like arrays and dictionaries between multiple screens (VCs), we shouldn't store them in UserDefaults (UserDefaults is permanent storage, and storing such large entries makes the app start slowly). Singletons are a better fit, since they live only in the current app context and restarting the app creates a new one.
When we need a stable DB connection accessible all over the app, without connecting and closing in every business class, we can go for one.
When we want to give the app the ability to theme itself dynamically, we can create a singleton class which holds all the color and image instances, and use that instance in the application's VCs/views, so no code duplication or theme re-processing occurs anywhere.
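Sketched below, under the question's own three requirements, is one common Objective-C shape for such a store (all names invented): dispatch_once guarantees it exists only once, and a concurrent GCD queue with barrier writes gives locked writes with concurrent reads.
#import <Foundation/Foundation.h>

@interface DataStore : NSObject
+ (instancetype)sharedStore;
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation DataStore {
    NSMutableDictionary *_storage;
    dispatch_queue_t _queue;
}

// Requirement 1 and 2: exists only once, accessible from anywhere.
+ (instancetype)sharedStore {
    static DataStore *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[DataStore alloc] init]; });
    return shared;
}

- (instancetype)init {
    self = [super init];
    if (self) {
        _storage = [NSMutableDictionary dictionary];
        _queue = dispatch_queue_create("datastore", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

// Requirement 3a: reads run concurrently on the queue...
- (id)objectForKey:(NSString *)key {
    __block id result;
    dispatch_sync(_queue, ^{ result = self->_storage[key]; });
    return result;
}

// ...3b: while a barrier write takes the queue exclusively (a writers' lock).
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(_queue, ^{ self->_storage[key] = object; });
}

@end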
You don't have to change your view, just tweak it a bit to allow some positive intent toward singletons.
Hoping this clears it up, thanks.

Why does Apple's documentation say that getting the ManagedObjectContext from the UIApplicationDelegate is bad?

Just curious why ManagedObjectContexts should be passed to UIViewControllers when they are created, rather than just grabbing them from a UIApplicationDelegate?
The docs say that this makes your applications more rigid, but I am failing to see the nuances of when to use which pattern.
Thanks!
Imagine that I ask you to do some task, like painting a room. If I just tell you "go paint a room," you'll need to ask me a lot of questions, like:
Which room?
Where's the paint?
Where are the brushes?
Should I use a dropcloth?
In short, you won't be able to complete the task without help from me. If you have to depend on me every time, you won't be a very flexible painter. One way to deal with that problem is for me to give you all the stuff you need at the outset. Instead of "go paint a room," I'll say "please paint room number 348 using this bucket of paint and this brush, and don't bother with a dropcloth." Now, you've got everything you need, and you can get right to work with no further help from me. You're a much more flexible worker because you no longer depend on me.
The same thing applies to view controllers (and objects generally); it's better to give them everything they need than to have them depend on a particular object like the app delegate. It's true not just for managed object contexts, but for any information they need to do their job.
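In code, "handing the painter everything up front" is just initializer injection (a sketch; the controller name is invented):
#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

@interface RoomListViewController : UIViewController
// Everything the controller needs is handed over at creation time,
// instead of being fetched from the app delegate later.
- (instancetype)initWithManagedObjectContext:(NSManagedObjectContext *)context;
@end

@implementation RoomListViewController {
    NSManagedObjectContext *_context;
}
- (instancetype)initWithManagedObjectContext:(NSManagedObjectContext *)context {
    self = [super initWithNibName:nil bundle:nil];
    if (self) {
        _context = context;
    }
    return self;
}
@end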
This is mainly because you want to use dependency injection with your UIViewControllers instead of just grabbing everything from UIApplication; this keeps your delegate clean instead of full of reference hacks.
This also keeps with the MVC pattern:
Model
View (only for view logic)
Controller (for coordinating between the view and the model)
I tend not to agree with this pattern.
First of all, I try to treat Core Data as an implementation detail, and as with any implementation detail it should be hidden behind a good facade. The facade is the set of interfaces I expose for my model objects. For example, say I have two model objects, Course and Student, where any course can have a number of students. I do not want the controller to take on the duty of setting up predicates and sort descriptors and jumping through all the Core Data hoops just to get a list of students for a particular class. There is a perfectly valid way to expose this in the model:
@interface Course (StudentAccess)
- (NSArray *)studentsSortedByName;
@end
Then implement the ugly stuff once and for all in the Model class, hiding all the complex details of Core Data, with no need to pass around managed object contexts. But how would I find the courses? It has to start somewhere, right? Yes, it does, but you need not expose it to the controller. Adding methods such as these is perfectly reasonable as well:
@interface Course (CourseAccess)
+ (Course *)courseByID:(NSString *)courseID;
+ (NSArray *)allCourses;
+ (NSArray *)coursesHeldByTeacher:(Teacher *)teacher;
@end
This also helps minimize dependencies between controllers, and reduces the dependencies between the model and the controllers. Assume I have a CourseViewController and a StudentViewController. If I did not hide the Core Data details behind a facade and wanted to pass around the managed object context as well, I would end up with a designated initializer like this:
- (id)initWithManagedObjectContext:(NSManagedObjectContext *)moc
                           student:(Student *)student;
Whereas with a good facade I end up with this:
- (id)initWithStudent:(Student *)student;
Minimizing dependencies behind facades, in favor of dependency injection, also makes it much easier to change the internal implementation. Passing around the managed object context encourages each controller to implement its own logic for basic stuff. Take, for example, the studentsSortedByName method. At first it might sort by first/last name; if that is later changed to a last/first name sort, you would have to go to each and every controller that sorts students and make the change. With a good facade method you make the change in one place, and all controllers automagically get the update for free.
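For illustration, the "ugly stuff" behind -studentsSortedByName might be implemented roughly like this, assuming Course and Student are NSManagedObject subclasses, Student has a to-many courses relationship, and the lastName/firstName keys exist (all assumptions, not from the answer):
#import <CoreData/CoreData.h>
#import "Course.h"

@implementation Course (StudentAccess)

// All the predicate and sort-descriptor details live here, once.
// Controllers only ever call -studentsSortedByName.
- (NSArray *)studentsSortedByName {
    NSFetchRequest *request =
        [NSFetchRequest fetchRequestWithEntityName:@"Student"];
    request.predicate =
        [NSPredicate predicateWithFormat:@"ANY courses == %@", self];
    request.sortDescriptors = @[
        [NSSortDescriptor sortDescriptorWithKey:@"lastName" ascending:YES],
        [NSSortDescriptor sortDescriptorWithKey:@"firstName" ascending:YES]
    ];
    return [self.managedObjectContext executeFetchRequest:request error:NULL];
}

@end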
The Apple Docs try to foster the most widely applicable and sustainable design patterns.
Dependency injection is preferred because it allows for the most flexible, expandable, reusable and maintainable design.
As apps grow in complexity, using a quasi-singleton like parking the context in the app delegate breaks down. In more complex apps, you may have multiple contexts tied to multiple stores. You might want the same view-controller/view pair to display data from different contexts at different times, or you may end up with multiple contexts on different threads/operations. You can't pile all those contexts up in the app delegate.
If you have a simple app with a single context, then using the quasi-singleton with the app delegate can work well. I've used it on several smaller apps in the past without immediate issue, but I did hit scalability problems on a couple of apps when they grew over time.
Which pattern to use depends on your shipping constraints and your best guesses about the evolution of the app over its entire lifecycle. If it's a small one-shot app, then the app delegate quasi-singleton will work fine. If the app is more complex, might grow more complex, or might spawn other related apps that will reuse existing components, then dependency injection is the way to go.

Is there any hard data on the value of Inversion of Control or dependency injection?

I've read a lot about IoC and DI, but I'm not really convinced that you gain a lot by using them in most situations.
If you are writing code that needs pluggable components, then yes, I see the value. But if you are not, then I question whether changing a dependency from a class to an interface is really gaining you anything, other than more typing.
In some cases, I can see how IoC and DI help with mocking, but if you're not using mocking or TDD, then what's the value? Is this a case of YAGNI?
I doubt you will have any hard data on it, so I will add some thoughts on it.
First, you don't use DI (or other SOLID principles) because it helps you do TDD. It's the other way around: you do TDD because it helps you with the design, which usually means you end up with code that follows those principles.
Discussing why to use interfaces is a different matter; see: https://stackoverflow.com/questions/667139/what-is-the-purpose-of-interfaces.
I will assume you agree that having your classes do many different things results in messy code. Thus, I am assuming you are already going for SRP.
Because you have different classes that do specific things, you need a way to relate them. If you relate them inside the classes (i.e. the constructors), you get plenty of code that uses specific versions of the classes. This means that making changes to the system will be hard.
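To make that concrete, here is a minimal sketch (invented names): the commented-out initializer hard-wires a specific class, while the second one leaves the choice to whoever composes the system.
#import <Foundation/Foundation.h>

@protocol ReportStore <NSObject>
- (void)save:(NSString *)report;
@end

@interface ReportGenerator : NSObject
- (instancetype)initWithStore:(id<ReportStore>)store;
@end

@implementation ReportGenerator {
    id<ReportStore> _store;
}

// Hard-wired version: every instance gets a SqlReportStore (hypothetical),
// and changing it means editing this class and every class written this way.
// - (instancetype)init {
//     self = [super init];
//     if (self) { _store = [[SqlReportStore alloc] init]; }
//     return self;
// }

// Injected version: the relation is decided once, in the composition root.
- (instancetype)initWithStore:(id<ReportStore>)store {
    self = [super init];
    if (self) { _store = store; }
    return self;
}
@end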
You are going to need to change the system; that's a fact of software development. You can invoke YAGNI against adding specific extra features, but not against the fact that you will need to change the system. In my case that's especially important, as I do weekly sprints.
I use a DI framework where configuration is done through code. With a really small configuration, you hook up lots of different relations, so once you take away the interface-vs-concrete-class discussion, you are actually saving typing, not the other way around. Also, when a concrete class appears in a constructor, the framework hooks it up automatically (I don't have to configure it) while building the rest of the relations. It also allows me to control some objects' lifetimes; in particular, I can configure an object to be a singleton, and it hands out the single instance every time.
Also note that just using these practices doesn't add overhead. Using them for the first few times is what causes the overhead (because of the learning process plus, in some cases, a mindset change).
Bottom line: you ain't gonna need to put all those constructor calls all over the place to go faster.
The most significant gains from DI are not necessarily due to the use of interfaces. You do not actually need to use interfaces to get the beneficial effects of dependency injection. If there's only one implementation, you can probably inject that directly, and you can use a mix of classes and interfaces.
You still get loose coupling, and in quite a few development environments you can introduce that interface with a few keystrokes if needed.
Hard data on the value of loose coupling I cannot give, but it's been a vision in textbooks for as long as I can remember. Now it's real.
DI frameworks also give you some quite amazing features when it comes to hierarchical construction of large structures. Instead of looking for the leanest DI framework around, I'd recommend you look for a full-featured one. Less isn't always more, at least when it comes to learning about new ways of programming. Then you can go for less.
Apart from testing, the loose coupling is also worth it.
I've worked on components for an embedded Java system, which had a fixed configuration of objects after startup (about 50 mostly different objects).
The first component was legacy code without dependency injection, and the subobjects were created all over the place. It happened several times that, for some modification, some code needed to talk to an object that was only available three constructors away. So what can you do but add another parameter to the constructor and pass it through, or even store it in a field to pass it on later? In the long run things became even more tangled than they already were.
The second component I developed from scratch, using dependency injection (without knowing it at the time). That is, I had one factory which constructed all objects and injected them on a need-to-know basis. Adding another dependency was easy: just add it to the factory and the object's constructor (or add a setter to avoid loops). No unrelated code needed to be touched.
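That one-factory approach might look like this sketch (invented names; the original component was Java, but the shape is the same in Objective-C):
#import <Foundation/Foundation.h>

// Each class declares its needs in its initializer; only the factory knows
// the full graph. Adding a dependency touches the factory and the one
// constructor, and nothing in between.
@interface Sensor : NSObject
@end
@implementation Sensor
@end

@interface Logger : NSObject
@end
@implementation Logger
@end

@interface Controller : NSObject
- (instancetype)initWithSensor:(Sensor *)sensor logger:(Logger *)logger;
@end
@implementation Controller {
    Sensor *_sensor;
    Logger *_logger;
}
- (instancetype)initWithSensor:(Sensor *)sensor logger:(Logger *)logger {
    self = [super init];
    if (self) { _sensor = sensor; _logger = logger; }
    return self;
}
@end

// The factory is the single place where the object graph is assembled.
@interface SystemFactory : NSObject
- (Controller *)makeController;
@end
@implementation SystemFactory
- (Controller *)makeController {
    Logger *logger = [[Logger alloc] init];
    Sensor *sensor = [[Sensor alloc] init];
    return [[Controller alloc] initWithSensor:sensor logger:logger];
}
@end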

When do you use dependency injection?

I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take a boatload of interfaces in their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels like there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; but if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses I've found valuable for an Inversion of Control pattern:
A plugin architecture. By defining the right entry points, you can specify the contract for the service that must be provided.
A workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
Per-client applications. Let's say you have various clients who pay for a set of "features" of your project. By using dependency injection you can easily provide just the core components plus the "added" components that supply only the features each client has paid for (sketched below).
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as required.
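For the per-client case, a sketch with invented names of how paid "added" components slot in behind one contract:
#import <Foundation/Foundation.h>

// The contract every feature component provides.
@protocol ReportingFeature <NSObject>
- (NSString *)generateReport;
@end

// Core component every client gets.
@interface BasicReporting : NSObject <ReportingFeature>
@end
@implementation BasicReporting
- (NSString *)generateReport { return @"basic totals"; }
@end

// "Added" component only paying clients receive.
@interface PremiumReporting : NSObject <ReportingFeature>
@end
@implementation PremiumReporting
- (NSString *)generateReport { return @"totals + trends + forecasts"; }
@end

// The composition root picks the implementation per client build/config;
// nothing else in the app changes.
static id<ReportingFeature> MakeReporting(BOOL premiumClient) {
    return premiumClient ? [[PremiumReporting alloc] init]
                         : [[BasicReporting alloc] init];
}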
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
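On the testability side of that rule of thumb, a sketch (invented names) of how an injected dependency lets a test substitute a deterministic double:
#import <Foundation/Foundation.h>

@protocol Clock <NSObject>
- (NSDate *)now;
@end

@interface Greeter : NSObject
- (instancetype)initWithClock:(id<Clock>)clock;
- (NSString *)greeting;
@end

@implementation Greeter {
    id<Clock> _clock;
}
- (instancetype)initWithClock:(id<Clock>)clock {
    self = [super init];
    if (self) { _clock = clock; }
    return self;
}
- (NSString *)greeting {
    // Behavior depends on the injected clock, not on the real wall clock.
    NSCalendar *cal = [NSCalendar currentCalendar];
    NSInteger hour = [cal component:NSCalendarUnitHour fromDate:[_clock now]];
    return hour < 12 ? @"Good morning" : @"Good afternoon";
}
@end

// In a test, a fixed clock makes the behavior deterministic:
@interface FixedClock : NSObject <Clock>
@end
@implementation FixedClock
- (NSDate *)now { return [NSDate dateWithTimeIntervalSince1970:0]; } // a fixed, known instant
@end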
"Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code."
DI should be used to isolate your code from external resources (databases, webservices, xml files, plugin architecture). The amount of time it would take to test your logic in code would almost be prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data-access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project I might provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it; in all other cases it would just make the system unnecessarily complex.
"Even with all the facts and processes in the world... every decision boils down to a judgment call" (I forget where I read that).
I think it's more of an experience / flight-time call.
Basically if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's deps.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/MicroKernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well. If the class is so simple that it doesn't need unit tests, feel free to instantiate it in the class; otherwise you probably want to inject the dependency through the constructor.
As for whether you should create an interface versus just making your methods and properties virtual: I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (e.g. a logger), or b) find the class otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor.
