I am a little confused about the benefit of protocols in the VIPER architecture.
I understand that DI (Dependency Injection) is achieved via protocols and helps avoid direct dependencies between objects - I agree.
But I am looking for a real benefit from a usage perspective, with an example - especially how protocols help in unit testing (testing the Interactor portion).
Can't we achieve the same via method callbacks using blocks?
Hope someone can help me understand this from a usage perspective with some examples.
Cheers
Using a callback, e.g. from the Interactor to the Presenter, can make it harder to test the Presenter.
When writing tests for how the Presenter processes input (sent from the Interactor), your test would have to call a method on the Presenter that causes the Presenter to make a call on the Interactor, which in turn causes the Interactor to send data back to the Presenter.
By having the Presenter implement a protocol defined by the Interactor, your test can just directly call the appropriate input method on the Presenter.
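To make that concrete, here is a minimal Swift sketch (the protocol and type names are invented for illustration): the Interactor defines an output protocol, the Presenter implements it, and a test can drive the Presenter directly without involving the Interactor at all.

protocol ListInteractorOutput: AnyObject {
    // Called by the Interactor when data has been fetched.
    func didFetch(items: [String])
}

final class ListPresenter: ListInteractorOutput {
    private(set) var viewModels: [String] = []

    func didFetch(items: [String]) {
        // Prepare display-ready data for the View.
        viewModels = items.map { $0.uppercased() }
    }
}

// In a unit test there is no need for a real (or fake) Interactor:
// let presenter = ListPresenter()
// presenter.didFetch(items: ["apples", "pears"])
// XCTAssertEqual(presenter.viewModels, ["APPLES", "PEARS"])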
As far as declaring protocols goes, I practice TDD in the style of mock roles, not objects (http://www.jmock.org/oopsla2004.pdf). The protocols help provide a better abstraction by focusing on what the object does (its role), not how it does it.
Protocols, on their own, are of little value for the unit tests. Your unit tests will provide test doubles (http://martinfowler.com/bliki/TestDouble.html) for the dependencies of the system under test. Even if you expose dependencies as concrete classes you can still create test doubles for your test.
In Objective-C, you can use a mocking library, such as OCMock (http://ocmock.org), or OCMockito (https://github.com/jonreid/OCMockito), to create stubs, spies, or mocks of the concrete class.
In Swift, you could create your test doubles by subclassing each of the concrete classes used as dependencies.
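For example (a sketch with invented names), if an Interactor depends on a concrete ProfileStore class, a Swift test can subclass it and override just the members the test needs:

class ProfileStore {
    func loadName() -> String {
        // Imagine this hits a database or the network.
        return "real name"
    }
}

final class ProfileInteractor {
    private let store: ProfileStore
    init(store: ProfileStore) { self.store = store }

    func greeting() -> String {
        return "Hello, \(store.loadName())"
    }
}

// Test double created by subclassing the concrete dependency:
final class StubProfileStore: ProfileStore {
    override func loadName() -> String { return "Alice" }
}

// let interactor = ProfileInteractor(store: StubProfileStore())
// interactor.greeting() == "Hello, Alice"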
In short, protocols are not used to ease unit testing, but to describe, at a higher level of abstraction, what the application does.
Here is an example of how having the abstract protocols were beneficial after the fact:
I created a protocol to represent the actions a user could perform on a screen, e.g. ProfileUserActions, that had actions such as changeName and changeAddress. The Presenter implemented ProfileUserActions, and the View accepted a ProfileUserActions as a dependency. When the user tapped a button on screen, the View would send the appropriate message to its userActions object.
When I wanted to add analytics, I was able to create a new, independent ProfileAnalytics class which also implemented ProfileUserActions. I inserted the analytics object between the View and the Presenter, which allowed the app to capture analytics, without having to modify either the View or the Presenter.
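A Swift sketch of that arrangement (the method signatures are invented; only the idea of ProfileUserActions comes from the text above):

protocol ProfileUserActions: AnyObject {
    func changeName(to name: String)
    func changeAddress(to address: String)
}

final class ProfilePresenter: ProfileUserActions {
    func changeName(to name: String) { /* update the model, refresh the View */ }
    func changeAddress(to address: String) { /* ... */ }
}

// Analytics slotted in between the View and the Presenter,
// without modifying either of them:
final class ProfileAnalytics: ProfileUserActions {
    private let next: ProfileUserActions
    init(wrapping next: ProfileUserActions) { self.next = next }

    func changeName(to name: String) {
        // record the analytics event here, then forward the action
        next.changeName(to: name)
    }

    func changeAddress(to address: String) {
        // record the analytics event here, then forward the action
        next.changeAddress(to: address)
    }
}

// view.userActions = ProfileAnalytics(wrapping: presenter)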
By using a protocol, it is easier for you to swap out an implementation in your VIPER structure. For example, you might have an interactor that works with a class that writes to the filesystem. You don't want to be testing the filesystem in your unit tests, so if you put the filesystem write operations in your interactor behind a protocol, you can replace them with an in-memory implementation.
As for a protocol on the interactor itself, I think it pays to be a bit more pragmatic about your choice. If it's easy to build your interactor for the test and it doesn't cause any side effects as part of testing then there's probably no need for the protocol. On the other hand, if you have to create a number of other dependencies then it might pay to have the interactor conform to a protocol so that you can more easily fake the output you get from the interactor.
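As an illustration of the filesystem case (names are invented), the interactor can depend on a small writing protocol, and the unit test can supply an in-memory implementation:

import Foundation

protocol FileWriting {
    func write(_ data: Data, to path: String) throws
}

// Production implementation that really touches the disk.
struct DiskWriter: FileWriting {
    func write(_ data: Data, to path: String) throws {
        try data.write(to: URL(fileURLWithPath: path))
    }
}

// In-memory replacement used by unit tests; nothing touches the filesystem.
final class InMemoryWriter: FileWriting {
    private(set) var files: [String: Data] = [:]
    func write(_ data: Data, to path: String) throws {
        files[path] = data
    }
}

final class ExportInteractor {
    private let writer: FileWriting
    init(writer: FileWriting) { self.writer = writer }

    func export(report: String) throws {
        try writer.write(Data(report.utf8), to: "/tmp/report.txt")
    }
}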
Related
When I use dependency injection, I create all objects at the very beginning of the program.
I end up having to carry dependencies through a lot of constructors, even though it would be easier to create them somewhere earlier.
The Factory pattern comes to mind: a factory can be passed in through the constructor so as not to violate DI.
But isn't this an anti-pattern / a violation of DI, since the factory creates a concrete implementation of the object?
What are the approaches to create objects without breaking DI?
When practicing DI, there is no need or requirement to construct all objects at startup; that's a design choice or perhaps a design constraint in your particular environment. In fact, lazy initialization with DI is the norm—not the exception.
But the consequence of lazy initialization of object graphs is that you typically have some factory-like behavior somewhere in the application.
Many application types, like web application frameworks, provide a factory abstraction that you can implement or override that allows you to create the necessary objects for the request that comes in.
If you do this, and create the object structure specific for a request, just in time when a request comes in, I'd say that in most cases there is no need for postponing the creation of other objects for that request. This means there's no need for injecting any factory classes into the classes of your application. I'd say that in most cases, Abstract Factories are a Design Smell.
There are exceptions, though, and the argument that Abstract Factories should be reviewed with suspicion doesn't mean that you don't need factory-like behavior somewhere inside your code. For instance, whenever you're doing message dispatching, where an incoming message gets dispatched to one or multiple handlers and many handler classes exist in the application, you need lazy initialization of classes and factory-like behavior. This, however, can be hidden behind abstractions that are implemented inside the Composition Root, which prevents application code from depending on an Abstract Factory.
But that said, not all application frameworks expose factory abstractions for you to implement, and as I said, there are always exceptions to the rule.
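To make the dispatching idea concrete, here is a minimal Swift sketch (all names invented): application code depends only on a small Dispatcher abstraction, while the implementation that lives in the Composition Root performs the factory-like, lazy creation of handlers.

protocol Message {}

protocol MessageHandler {
    func handle(_ message: Message)
}

// The abstraction that application code is allowed to depend on.
protocol Dispatcher {
    func dispatch(_ message: Message)
}

struct OrderPlaced: Message { let orderId: Int }

// Lives inside the Composition Root: it knows how to create handlers on demand.
final class CompositionRootDispatcher: Dispatcher {
    private let handlerFactories: [String: () -> MessageHandler]

    init(handlerFactories: [String: () -> MessageHandler]) {
        self.handlerFactories = handlerFactories
    }

    func dispatch(_ message: Message) {
        let key = String(describing: type(of: message))
        // Factory-like behavior, but hidden from the rest of the application.
        handlerFactories[key]?().handle(message)
    }
}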
I have read in VIPER blogs that moving view controller code into the Presenter makes it easier to unit test. The reason given in the blogs was that the Presenter doesn't have any UIKit-related code in it.
How does this make it easier to unit test? Can anyone please explain this in detail? Or is there any other advantage to this apart from avoiding the Massive View Controller problem?
The biggest problem in unit testing is how to mock something. You want to test a method but that method is calling 3 other methods and you don't want to test these 3 methods, therefore you want to mock them to return some fixed value.
This is pretty easy in languages like JavaScript, where you can substitute a method on any object, or in Objective-C, where you can do the same (although with a bit more difficulty).
This is not easy in a language like Swift. Therefore VIPER came up with the idea of splitting the view controller into units (e.g. Presenter, Interactor, View, Router), where every unit has its own protocol. Now, to mock one of the units, you can just implement the protocol and use it instead of the real Presenter or View.
(you can actually use some tools to generate the mocks for you dynamically in tests)
That makes unit testing much much easier.
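For example (a sketch with invented names), if the Presenter talks to its View only through a protocol, a test can substitute a hand-written mock and assert on what the Presenter asked the View to do:

protocol LoginView: AnyObject {
    func showError(_ message: String)
}

final class LoginPresenter {
    weak var view: LoginView?

    func didTapLogin(password: String) {
        if password.isEmpty {
            view?.showError("Password is required")
        }
    }
}

// Mock used only in tests; it simply records the calls it receives.
final class MockLoginView: LoginView {
    var shownErrors: [String] = []
    func showError(_ message: String) { shownErrors.append(message) }
}

// let view = MockLoginView()
// let presenter = LoginPresenter()
// presenter.view = view
// presenter.didTapLogin(password: "")
// view.shownErrors == ["Password is required"]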
However note that unit testing UI is never easy. UI usually operates in terms that are difficult to unit test and unit testing UI almost always means that you will be duplicating a lot of your app code in unit tests. UI is more commonly tested via integration tests (e.g. automatic clicking and validating what is visible on the screen).
VIPER is not a bad architecture. Separation of concerns is something that many programmers struggle with, and it's not a bad idea to have strict architectural rules. With complex screens you still won't be able to avoid big controllers, but at least you will be forced to move some code out of the controller.
Massive View Controllers are not a problem of the MVC pattern. They are a problem of bad separation of concerns, and the strict rules in VIPER help to avoid that.
I need to know about the design patterns used in iPhone development other than MVC.
Please reply with a sample explanation or an example with a code snippet.
Thanks.
Abstract Factory
The Abstract Factory pattern provides an interface for creating families of related or dependent objects without specifying their concrete classes. The client is decoupled from any of the specifics of the concrete object obtained from the factory.
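A small Swift sketch of the idea (all names invented for illustration):

protocol Button { func render() -> String }
protocol Label  { func render() -> String }

// The abstract factory creates a family of related widgets.
protocol WidgetFactory {
    func makeButton() -> Button
    func makeLabel() -> Label
}

struct DarkButton: Button { func render() -> String { "dark button" } }
struct DarkLabel:  Label  { func render() -> String { "dark label" } }

struct DarkThemeFactory: WidgetFactory {
    func makeButton() -> Button { DarkButton() }
    func makeLabel() -> Label { DarkLabel() }
}

// The client depends only on WidgetFactory, never on the concrete widgets.
func buildScreen(using factory: WidgetFactory) -> [String] {
    [factory.makeButton().render(), factory.makeLabel().render()]
}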
Adapter
The Adapter design pattern converts the interface of a class into another interface that clients expect. Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces. It decouples the client from the class of the targeted object.
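A minimal Swift sketch (invented names):

// The interface clients expect to work with.
protocol TemperatureSource {
    var celsius: Double { get }
}

// An existing class with an incompatible interface.
final class LegacyThermometer {
    func readFahrenheit() -> Double { 98.6 }
}

// The adapter converts the legacy interface into the expected one.
struct ThermometerAdapter: TemperatureSource {
    let legacy: LegacyThermometer
    var celsius: Double { (legacy.readFahrenheit() - 32) * 5 / 9 }
}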
Chain of Responsibility
The Chain of Responsibility design pattern decouples the sender of a request from its receiver by giving more than one object a chance to handle the request. The pattern chains the receiving objects together and passes the request along the chain until an object handles it. Each object in the chain either handles the request or passes it to the next object in the chain.
Command
The Command design pattern encapsulates a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. The request object binds together one or more actions on a specific receiver. The Command pattern separates an object making a request from the objects that receive and execute that request.
Composite
The Composite design pattern composes related objects into tree structures to represent part-whole hierarchies. The pattern lets clients treat individual objects and compositions of objects uniformly. The Composite pattern is part of the Model-View-Controller aggregate pattern.
Decorator
The Decorator design pattern attaches additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality. As does subclassing, adaptation of the Decorator pattern allows you to incorporate new behavior without modifying existing code. Decorators wrap an object of the class whose behavior they extend. They implement the same interface as the object they wrap and add their own behavior either before or after delegating a task to the wrapped object. The Decorator pattern expresses the design principle that classes should be open to extension but closed to modification.
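A brief Swift sketch of a decorator (invented names): the wrapper implements the same interface and adds caching around the wrapped object's work.

protocol DataFetcher {
    func fetch(_ key: String) -> String
}

struct NetworkFetcher: DataFetcher {
    func fetch(_ key: String) -> String { "value-for-\(key)" }
}

// Decorator: same interface, wraps another fetcher and adds caching behavior.
final class CachingFetcher: DataFetcher {
    private let wrapped: DataFetcher
    private var cache: [String: String] = [:]

    init(wrapping wrapped: DataFetcher) { self.wrapped = wrapped }

    func fetch(_ key: String) -> String {
        if let cached = cache[key] { return cached }
        let value = wrapped.fetch(key)   // delegate to the wrapped object
        cache[key] = value               // add behavior after delegating
        return value
    }
}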
Facade
The Facade design pattern provides a unified interface to a set of interfaces in a subsystem. The pattern defines a higher-level interface that makes the subsystem easier to use by reducing complexity and hiding the communication and dependencies between subsystems.
Iterator
The Iterator design pattern provides a way to access the elements of an aggregate object (that is, a collection) sequentially without exposing its underlying representation. The Iterator pattern transfers the responsibility for accessing and traversing the elements of a collection from the collection itself to an iterator object. The Iterator defines an interface for accessing collection elements and keeps track of the current element. Different iterators can carry out different traversal policies.
Mediator
The Mediator design pattern defines an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently. These objects can thus remain more reusable.
A "mediator object” in this pattern centralizes complex communication and control logic between objects in a system. These objects tell the mediator object when their state changes and, in turn, respond to requests from the mediator object.
Memento
The Memento pattern captures and externalizes an object’s internal state—without violating encapsulation—so that the object can be restored to this state later. The Memento pattern keeps the important state of a key object external from that object to maintain cohesion.
Observer
The Observer design pattern defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. The Observer pattern is essentially a publish-and-subscribe model in which the subject and its observers are loosely coupled. Communication can take place between the observing and observed objects without either needing to know much about the other.
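A compact Swift sketch of the publish-and-subscribe relationship (invented names); in Cocoa, NotificationCenter and key-value observing are built on this pattern.

protocol TemperatureObserver: AnyObject {
    func didChange(temperature: Double)
}

final class WeatherStation {
    private var observers: [TemperatureObserver] = []

    var temperature: Double = 0 {
        // Every registered observer is notified automatically on change.
        didSet { observers.forEach { $0.didChange(temperature: temperature) } }
    }

    func subscribe(_ observer: TemperatureObserver) { observers.append(observer) }
}

final class TemperatureDisplay: TemperatureObserver {
    func didChange(temperature: Double) {
        print("Now showing \(temperature)°")
    }
}

// let station = WeatherStation()
// station.subscribe(TemperatureDisplay())
// station.temperature = 21.5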
Proxy
The Proxy design pattern provides a surrogate, or placeholder, for another object in order to control access to that other object. You use this pattern to create a representative, or proxy, object that controls access to another object, which may be remote, expensive to create, or in need of securing. This pattern is structurally similar to the Decorator pattern but it serves a different purpose; Decorator adds behavior to an object whereas Proxy controls access to an object.
Receptionist
The Receptionist design pattern addresses the general problem of redirecting an event occurring in one execution context of an application to another execution context for handling. It is a hybrid pattern. Although it doesn’t appear in the “Gang of Four” book, it combines elements of the Command, Memento, and Proxy design patterns described in that book. It is also a variant of the Trampoline pattern (which also doesn’t appear in the book); in this pattern, an event initially is received by a trampoline object, so called because it immediately bounces, or redirects, the event to a target object for handling.
Singleton
The Singleton design pattern ensures a class only has one instance, and provides a global point of access to it. The class keeps track of its sole instance and ensures that no other instance can be created. Singleton classes are appropriate for situations where it makes sense for a single object to provide access to a global resource.
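In Swift, a minimal version looks like this (the class name is invented):

final class AnalyticsService {
    // The sole, globally accessible instance.
    static let shared = AnalyticsService()

    // A private initializer ensures no other instance can be created.
    private init() {}

    func log(_ event: String) {
        print("logged: \(event)")
    }
}

// AnalyticsService.shared.log("app_started")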
Template Method
The Template Method design pattern defines the skeleton of an algorithm in an operation, deferring some steps to subclasses. The Template Method pattern lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.
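As an illustrative Swift sketch (invented names, not part of the source cited below): the base class fixes the skeleton of the algorithm, and subclasses redefine individual steps.

class DataImporter {
    // The template method: the skeleton of the algorithm never changes.
    final func run() {
        let raw = load()
        let records = parse(raw)
        save(records)
    }

    // Steps that subclasses may redefine.
    func load() -> String { fatalError("Subclasses must override load()") }
    func parse(_ raw: String) -> [String] { raw.split(separator: ",").map(String.init) }
    func save(_ records: [String]) { print("Saved \(records.count) records") }
}

final class CSVImporter: DataImporter {
    override func load() -> String { "a,b,c" }   // only this step is redefined
}

// CSVImporter().run()   // prints "Saved 3 records"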
Source: Cocoa Design Patterns.
In real world applications, code bases become complex over time and you end up with massive view controllers, which are hard to test and maintain. The solution is to use MVVM, which is a better alternative to MVC itself.
Whether to use the MVVM design pattern in your application depends on the business logic your project needs in order to display content in the view.
In case your view doesn't need much logic to display its content, you can use MVC,
but if you have to apply business logic to display that content in the view, the best practice is to separate this logic into another layer, so MVVM is better in this case; the ViewModel in MVVM will contain this logic.
In my opinion MVVM is better than MVC at the design level for these reasons:
MVVM is compatible with your existing MVC architecture.
MVVM makes your apps more testable.
MVVM works best with a binding mechanism.
How MVVM is compatible with MVC
MVC > Model, View, Controller
MVVM > Model, View, ViewModel > Model, (ViewController), ViewModel
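A tiny Swift sketch of where that logic ends up (names invented): the ViewModel knows nothing about UIKit, so it can be unit-tested directly, and the view controller stays thin.

import Foundation

struct User {
    let firstName: String
    let lastName: String
    let lastSeen: Date
}

// No UIKit here, so this type is trivial to unit-test.
struct UserViewModel {
    let user: User

    var displayName: String { "\(user.firstName) \(user.lastName)" }

    var lastSeenText: String {
        let formatter = DateFormatter()
        formatter.dateStyle = .medium
        return "Last seen \(formatter.string(from: user.lastSeen))"
    }
}

// In the view controller: nameLabel.text = viewModel.displayName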
Or is it better to use another design pattern?
I responded to a similar question some days ago here, about mocking a Singleton. The original post is for C#/.NET as regards mocking a singleton's behaviour, but it should still apply.
As regards the singleton pattern, there isn't anything wrong with it per se - in many cases we want to centralize logic and data. However, there is a very big difference between a singleton and a static class. Building your singleton as a static class hard codes that implementation to every consumer in your application - which makes unit testing very difficult!
What you want to do is define an interface for your singleton, exposing the methods for your consumers to use. Your consumers in turn are passed a reference to an implementing class by whoever instantiates them [typically this is your application, or a container if you are familiar with Dependency Injection / Inversion of Control].
It's this framework, whoever is instantiating the consumers, that is responsible for ensuring one and only one instance is floating around. It's really not that great a leap from static class to interface reference [as demonstrated in the link above], you just lose the convenience of a globally accessible instance - I know, I know, global references are terribly seductive, but Luke turned his back on the Dark Side, so can you!
Generally speaking, best practices suggest avoiding static references and encourage programming against interfaces. Remember, it is still possible to apply the singleton pattern with these constraints. Follow these guidelines, and you should have no problem unit testing your work :)
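A Swift sketch of that arrangement (invented names): consumers depend on a small protocol, and whoever composes the application decides that only one instance exists; a test simply injects a fake.

protocol SettingsStore {
    func value(for key: String) -> String?
}

final class PersistentSettingsStore: SettingsStore {
    func value(for key: String) -> String? {
        // The real app would read a persisted value here.
        return nil
    }
}

final class ThemeManager {
    private let settings: SettingsStore
    // The consumer is handed its dependency; it never reaches for a global.
    init(settings: SettingsStore) { self.settings = settings }

    var isDarkMode: Bool { settings.value(for: "dark_mode") == "true" }
}

// The composition root keeps the single instance and hands it to every consumer:
// let sharedSettings = PersistentSettingsStore()
// let themeManager = ThemeManager(settings: sharedSettings)

// A test injects a fake instead:
struct FakeSettings: SettingsStore {
    func value(for key: String) -> String? { "true" }
}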
Hope this helps!
singleton != public static class, rather singleton == single instance
Lack of testability is one of the major downfalls of the classic Singleton model (static class method returning an instance). As far as I'm concerned, that's justification enough to re-design any code that uses Singletons to use some other design.
If you absolutely need to have a singular instance, then Dependency Injection and writing to an interface, as suggested by johnny g, is definitely the way to go.
I'm using the following pattern when I write static-based singletons that I can mock. The code is Java, but I think you will get the idea. The main problem with this approach is that you have to relax the constructor to package-protected (which sort of defeats a true singleton).
As a side note - the code applies to the ability to mock your "static" code, not necessarily simply calling it.
I generally only use Singletons for Flyweight objects or similar value objects. Looking into an IoC container (as discussed above) is probably a better way to handle a shared object than a singleton.
Consider that in Smalltalk (where a lot of these patterns originated), true and false were both effectively singletons :)
If you must use a singleton (and there are reasons to do so... but I would always try to avoid it if possible), I would recommend using an IoC container to manage it. I'm not sure whether there is one for Delphi or not. But in Java you could use Spring, and in .NET you can use Windsor/Castle. An IoC container can hold onto the singleton and can register different implementations for testing.
It's probably too big of a subject to get into here beyond this snippet.
I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). Often it is said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing, zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject customisations for each client; this could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
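A small Swift sketch of that per-client customisation (invented names): the pricing rule is injected, so deploying for a different client changes only the composition, not the checkout code.

protocol PricingStrategy {
    func finalPrice(for listPrice: Double) -> Double
}

struct StandardPricing: PricingStrategy {
    func finalPrice(for listPrice: Double) -> Double { listPrice }
}

struct NegotiatedDiscountPricing: PricingStrategy {
    let discount: Double
    func finalPrice(for listPrice: Double) -> Double { listPrice * (1 - discount) }
}

final class Checkout {
    private let pricing: PricingStrategy
    init(pricing: PricingStrategy) { self.pricing = pricing }

    func total(for listPrices: [Double]) -> Double {
        listPrices.map(pricing.finalPrice(for:)).reduce(0, +)
    }
}

// Client A deployment: Checkout(pricing: StandardPricing())
// Client B deployment: Checkout(pricing: NegotiatedDiscountPricing(discount: 0.1))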
Even if you don't change the structure of your program during development, you will find out that you need to access several subsystems from different parts of your program. With DI, each of your classes just needs to ask for services, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
Before I was using DI I didn't really understand its worth, but trying it was a real eye-opener for me: my designs are a lot more object-oriented than they were before.
By the way, with the current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier moving things around and keeping classes small and simple.
While I semi-agree with you with the DB example, one of the large things that I found helpful to use DI is to help me test the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or possibly demonstrations (I ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
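A brief Swift sketch of that layering (invented names): the domain object asks an injected data access protocol for its data, so a test or a demo can supply canned data instead of a database.

struct Order {
    let id: Int
    let amount: Double
}

protocol OrderRepository {
    func ordersForCustomer(_ customerId: Int) -> [Order]
}

// The real implementation would talk to the database.
struct DatabaseOrderRepository: OrderRepository {
    func ordersForCustomer(_ customerId: Int) -> [Order] { [] }
}

// Business domain object with its data access injected.
final class CustomerAccount {
    private let repository: OrderRepository
    private let customerId: Int

    init(customerId: Int, repository: OrderRepository) {
        self.customerId = customerId
        self.repository = repository
    }

    func totalSpent() -> Double {
        repository.ordersForCustomer(customerId).map(\.amount).reduce(0, +)
    }
}

// A test injects canned data instead of hitting the database:
struct StubOrderRepository: OrderRepository {
    func ordersForCustomer(_ customerId: Int) -> [Order] {
        [Order(id: 1, amount: 10), Order(id: 2, amount: 5)]
    }
}

// CustomerAccount(customerId: 7, repository: StubOrderRepository()).totalSpent() == 15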
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods of the objects you're injecting into it. While it may not truly "process" them, it runs the methods that have different implementations in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementation of the objects that conformed to the Interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like to do is have some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is to have some way of faking an instance of Bar passed to Foo such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. return true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake objects, Mocks and Stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out if required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file and you need only the XML file, you will have to instantiate the object and configure a lot of things if you don't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection because the object is not instantiated in a static way.