I was going through Apple's documentation for protocols and got stuck trying to understand how protocols ensure class anonymity.
Can someone provide some code to help me understand how this is implemented?
Thanks :)
You can imagine a protocol as a contract: any class conforming to the protocol promises to implement that contract, and the rest of the class is outside the scope of the contract. It doesn't matter what else the class is, whether it subclasses a certain class or implements other protocols.
So anonymity here means that at compile time the concrete class of an object is irrelevant; it only needs to fulfill the contract. Since Objective-C also supports runtime manipulation, the same applies at runtime.
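To illustrate, here is a minimal Swift sketch (the DataSource, NetworkSource and CacheSource names are invented for this example): the consumer only sees the protocol, so the concrete class stays anonymous to it.

    protocol DataSource {
        func items() -> [String]
    }

    // Two unrelated classes fulfill the same contract.
    final class NetworkSource: DataSource {
        func items() -> [String] { ["remote-1", "remote-2"] }
    }

    final class CacheSource: DataSource {
        func items() -> [String] { ["cached-1"] }
    }

    // The consumer never learns, or cares, which class it was handed.
    func printAll(from source: DataSource) {
        source.items().forEach { print($0) }
    }

    printAll(from: NetworkSource())
    printAll(from: CacheSource())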
I am rather new to object-oriented programming and I am attempting to wrap my head around protocols, delegates, and polymorphism. I recently watched a training video claiming that when you have two classes with similar method implementations, a protocol is the best way to achieve this elegantly. That makes sense. However, some additional research led me to polymorphism, and it sounds like that is also a preferred approach, where you could simply model the shared functionality in a base class and override those methods in the subclasses.
So, I have two questions. First, is my understanding of polymorphism correct? I am still rather new to all of these concepts. Second, do protocols trump polymorphism and when would you use one over the other?
Thanks!
There are cases in which protocols are the more appropriate way to go, and cases in which a base class is the better solution.
In Swift, a base class allows you to share the same implementation, thus reducing code redundancy. However, a base class does not force its subclasses to override its methods. So if all the subclasses are supposed to override some specific method, a base class falls short of enforcing it (there are no abstract classes in Swift that would let you mix implementation with requirements). There are ways to "hack" around this, e.g. by calling fatalError() in the base class implementation to force the programmer to override it (otherwise the base implementation crashes), but that only fails at runtime. So if the base class is just for you, it can be a good approach; if you are writing a library or framework and expect its users to subclass it, you have to keep these concerns in mind.
Protocols, on the other hand, are contract definitions. A protocol defines which methods have to be implemented in order to conform to it, so every conforming class is forced to implement those methods. This is usually what you want: the contract binds the conforming class to fulfil the protocol's requirements. However, making the conforming classes share code is a bit harder. Take a look at protocol extensions for this; they allow you to add "default" implementations of protocol methods.
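A small sketch of that idea (the Greeter protocol below is made up for illustration): a protocol requirement plus a default implementation supplied through a protocol extension.

    protocol Greeter {
        var name: String { get }
        func greet() -> String
    }

    extension Greeter {
        // Conforming types get this for free, but may provide their own.
        func greet() -> String { "Hello, \(name)" }
    }

    struct EnglishGreeter: Greeter {
        let name: String                               // uses the default greet()
    }

    struct FrenchGreeter: Greeter {
        let name: String
        func greet() -> String { "Bonjour, \(name)" }  // replaces the default
    }

    print(EnglishGreeter(name: "Ada").greet())   // "Hello, Ada"
    print(FrenchGreeter(name: "Ada").greet())    // "Bonjour, Ada"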
You can take a look at my blog article about protocol-oriented programming for more discussion of these trade-offs.
My question is about using the final keyword in Swift code. I know that final helps performance, because the compiler can use static dispatch instead of dynamic dispatch for final declarations. So, if I know for sure that I will not inherit from some of my classes, should I make all of them final?
There is a protective approach taught by the Stanford iOS course.
The approach is: declare all your APIs private. It increases encapsulation. Later, if you need to expose something, remove the privacy.
It's bad to do it the other way around, i.e. design something as public and then later change it to private.
Similarly here, I think making a class final and later deciding it shouldn't be final is better than making a class non-final, allowing multiple classes to subclass it, and then later attempting to revert it back to final because of some design decision.
You are fine to do so if you are 110% sure you won't attempt to subclass any of your final classes, as your project won't compile if you do.
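For reference, a quick sketch of what the compiler enforces (NetworkManager is just a made-up name here):

    final class NetworkManager {
        func request(_ path: String) -> String { "GET \(path)" }
    }

    // Uncommenting this subclass would fail to build with
    // "inheritance from a final class 'NetworkManager'":
    // class MockNetworkManager: NetworkManager {}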
The article below has some great information and should help you decide.
http://blog.human-friendly.com/the-importance-of-being-final
If your app or framework favours protocols over inheritance, then you can declare your classes as final.
If you prefer inheritance over protocols and your app or framework is covered by unit tests, then don't declare classes as final when they are used as dependencies, because a mock created by subclassing cannot override a final class.
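That said, a final class can still be kept testable by hiding it behind a protocol, along these lines (a hedged sketch; Storage, DiskStorage and the other names are invented):

    protocol Storage {
        func save(_ value: String)
    }

    final class DiskStorage: Storage {
        func save(_ value: String) { /* real disk write would go here */ }
    }

    // Test double used in unit tests; no subclassing of the final class needed.
    final class StorageSpy: Storage {
        private(set) var saved: [String] = []
        func save(_ value: String) { saved.append(value) }
    }

    final class NotesViewModel {
        private let storage: Storage
        init(storage: Storage) { self.storage = storage }
        func add(note: String) { storage.save(note) }
    }

    let spy = StorageSpy()
    NotesViewModel(storage: spy).add(note: "hello")
    assert(spy.saved == ["hello"])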
I am a little confused about the benefit of protocols in the VIPER architecture.
I understand that DI (dependency injection) is achieved via protocols and helps avoid direct dependencies between objects; I agree with that.
But I am looking for the real benefit from a usage perspective, ideally with an example, especially how protocols help with unit testing (testing the Interactor portion).
Can't we achieve the same thing via callbacks using blocks?
I hope someone can help me understand this from a usage perspective, with some example.
Cheers
Using a callback, e.g. from the Interactor to the Presenter, can make it harder to test the Presenter.
When writing tests for how the Presenter processes input (sent from the Interactor) your test would have to call some method on the Presenter that would cause the Presenter to make a call on the Interactor, which would cause the Interactor to send data to the Presenter.
By having the Presenter implement a protocol defined by the Interactor, your test can just directly call the appropriate input method on the Presenter.
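A hedged sketch of that setup (ProfileInteractorOutput and ProfilePresenter are invented names): the Interactor declares the protocol it talks to, the Presenter conforms to it, and a test can drive the Presenter directly without any Interactor.

    protocol ProfileInteractorOutput: AnyObject {
        func didFetch(profileName: String)
    }

    final class ProfilePresenter: ProfileInteractorOutput {
        private(set) var viewModelName: String?

        func didFetch(profileName: String) {
            // Transform the raw data into something the View can display.
            viewModelName = profileName.uppercased()
        }
    }

    // In a unit test, no Interactor is needed at all:
    let presenter = ProfilePresenter()
    presenter.didFetch(profileName: "Ada Lovelace")
    assert(presenter.viewModelName == "ADA LOVELACE")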
As far as declaring protocols goes, I practice TDD in the style of mock roles, not objects (http://www.jmock.org/oopsla2004.pdf). The protocols help provide a better abstraction by focusing on what the object does (its role), not how it does it.
Protocols, on their own, are of little value for the unit tests. Your unit tests will provide test doubles (http://martinfowler.com/bliki/TestDouble.html) for the dependencies of the system under test. Even if you expose dependencies as concrete classes you can still create test doubles for your test.
In Objective-C, you can use a mocking library, such as OCMock (http://ocmock.org), or OCMockito (https://github.com/jonreid/OCMockito), to create stubs, spies, or mocks of the concrete class.
In Swift, you could create your test doubles by subclassing each of the concrete classes used as dependencies.
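Roughly like this, as a sketch (UserService is a made-up concrete class): the override replaces the real work with canned behaviour.

    class UserService {
        func fetchUserName() -> String {
            // Imagine a real network call here.
            return "real user"
        }
    }

    // Subclass-based test double.
    final class StubUserService: UserService {
        override func fetchUserName() -> String { "stubbed user" }
    }

    func greeting(using service: UserService) -> String {
        "Hello, \(service.fetchUserName())"
    }

    assert(greeting(using: StubUserService()) == "Hello, stubbed user")

(Note that this only works because the class and method are not final.)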
In short, protocols are not used to ease unit testing, but to describe, at a higher level of abstraction, what the application does.
Here is an example of how having the abstract protocols was beneficial after the fact:
I created a protocol to represent the actions a user could perform on a screen, e.g. ProfileUserActions, that had actions such as changeName, and changeAddress. The Presenter implemented ProfileUserActions, and the View accepted a ProfileUserActions as a dependency. When the user tapped a button on screen, the View would send the appropriate message to its userActions object.
When I wanted to add analytics, I was able to create a new, independent ProfileAnalytics class which also implemented ProfileUserActions. I inserted the analytics object between the View and the Presenter, which allowed the app to capture analytics, without having to modify either the View or the Presenter.
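A hedged sketch of that insertion (all names are illustrative, and the protocol is trimmed to two actions):

    protocol ProfileUserActions: AnyObject {
        func changeName(to name: String)
        func changeAddress(to address: String)
    }

    final class ProfilePresenter: ProfileUserActions {
        func changeName(to name: String) { print("presenter: change name to \(name)") }
        func changeAddress(to address: String) { print("presenter: change address to \(address)") }
    }

    // Sits between the View and the Presenter; neither of them has to change.
    final class ProfileAnalytics: ProfileUserActions {
        private let wrapped: ProfileUserActions
        init(wrapping wrapped: ProfileUserActions) { self.wrapped = wrapped }

        func changeName(to name: String) {
            print("analytics: changeName tapped")
            wrapped.changeName(to: name)
        }

        func changeAddress(to address: String) {
            print("analytics: changeAddress tapped")
            wrapped.changeAddress(to: address)
        }
    }

    // The View still just holds a ProfileUserActions.
    let userActions: ProfileUserActions = ProfileAnalytics(wrapping: ProfilePresenter())
    userActions.changeName(to: "Ada")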
By using a protocol it is easier to swap out an implementation in your VIPER structure. For example, you might have an Interactor that works with a class that writes to the filesystem. You don't want to be testing the filesystem in your unit tests, so if you put the filesystem write operations used by your Interactor behind a protocol, you can replace them with an in-memory implementation, as sketched below.
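A hedged sketch of that swap (FileStore, DiskFileStore, InMemoryFileStore and ExportInteractor are invented names):

    import Foundation

    protocol FileStore {
        func write(_ data: String, to path: String)
    }

    final class DiskFileStore: FileStore {
        func write(_ data: String, to path: String) {
            // The real implementation hits the filesystem.
            try? data.write(toFile: path, atomically: true, encoding: .utf8)
        }
    }

    // In-memory replacement used by the unit tests.
    final class InMemoryFileStore: FileStore {
        private(set) var files: [String: String] = [:]
        func write(_ data: String, to path: String) { files[path] = data }
    }

    final class ExportInteractor {
        private let store: FileStore
        init(store: FileStore) { self.store = store }
        func export(report: String) { store.write(report, to: "/tmp/report.txt") }
    }

    let memoryStore = InMemoryFileStore()
    ExportInteractor(store: memoryStore).export(report: "hello")
    assert(memoryStore.files["/tmp/report.txt"] == "hello")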
As for a protocol on the interactor itself, I think it pays to be a bit more pragmatic about your choice. If it's easy to build your interactor for the test and it doesn't cause any side effects as part of testing then there's probably no need for the protocol. On the other hand, if you have to create a number of other dependencies then it might pay to have the interactor conform to a protocol so that you can more easily fake the output you get from the interactor.
Heyho,
There's been a question on my mind for some time now, which hopefully can be cleared up quickly by some of you:
I am a big fan of MVC, ASP.NET MVC in my case.
What I have noticed is the hype about interfaces. Every video, tutorial, and book seems to solve any kind of abstraction with interfaces. I have adopted these patterns, understood the why and how, and I am basically very happy with them.
But I just don't get why interfaces are used everywhere. I've almost never seen abstraction done with abstract base classes, which I don't understand. Maybe I'm missing something? I know that you can only inherit from one base class while multiple interfaces are possible. But interfaces do have disadvantages, especially when changes need to be made that break your implementations.
In my projects so far, I have only picked interfaces for completely unrelated classes.
For example, the whole repository pattern could be done with an abstract base class, still providing testability and exchangeability. Or did I miss something?
Please point me to the part where my brain lags :)
Interfaces are used in tutorials, blogs, and elsewhere because those authors are particularly influenced by a school of thought called "design for testability".
Primarily, the design-for-testability school uses interfaces everywhere because they want to be able to mock any component under test. If you use a concrete class, many mocking tools can't mock it, and that makes it difficult to test your code.
A Story
I once attended a Java user group meeting where James Gosling (Java's inventor) was the featured speaker. During the memorable Q&A session, someone asked him: "If you could do Java over again, what would you change?" "I'd leave out classes," he replied. After the laughter died down, he explained that the real problem wasn't classes per se, but rather implementation inheritance (the extends relationship). Interface inheritance (the implements relationship) is preferable. You should avoid implementation inheritance whenever possible.
While using only or mostly interfaces does have code-reuse problems (as well as eliminating nice base classes), it makes it a lot easier to do multiple-inheritance-like things. It also lets widely different implementations coexist, where you don't have to worry about the base class changing or even what it does (you do have to implement the whole thing yourself, though, so it's a trade-off).
P.S. I think the new Go language is based on interfaces rather than inheritance (looks sort of interesting).
If the language doesn't support multiple inheritance or mix-ins, abstract base classes are limited in scope compared to interfaces. E.g. in .NET, if you must inherit from some other type such as MarshalByRef, you can't use an abstract base class to implement a pattern. Interfaces do not impose this restriction.
Besides the fact you mentioned, that you can inherit from only a single base class (which is pretty inconvenient if you want to use an existing class that already inherits from some other class alongside the new framework base class), you also avoid the fragile base class problem if you use interfaces instead.
Coding against interfaces makes your design more flexible and extensible. Consider, for instance, plugin frameworks and dependency injection: without interfaces, their extensibility is pretty limited.
Read about interfaces, abstract classes, breaking changes, and MVC here: http://ayende.com/Blog/archive/2008/02/21/Re-Versioning-Issues-With-Abstract-Base-Classes-and-Interfaces.aspx.
One solution presented there (or somewhere else on Ayende's blog) is: use interfaces, but also provide abstract classes. Those who care about breaking changes can base their implementations on the abstract classes; those who need the power of interfaces are also satisfied. But make sure your methods accept interfaces, not abstract classes, as input.
Or is it better to use another design pattern?
I responded to a similar question a few days ago here, about mocking a singleton. The original post is for C#/.NET as regards mocking a singleton's behaviour, but it should still apply.
As regards the singleton pattern, there isn't anything wrong with it per se; in many cases we want to centralize logic and data. However, there is a very big difference between a singleton and a static class. Building your singleton as a static class hard-codes that implementation into every consumer in your application, which makes unit testing very difficult!
What you want to do is define an interface for your singleton, exposing the methods for your consumers to use. Your consumers in turn are passed a reference to an implementing class by whoever instantiates them [typically this is your application, or a container if you are familiar with Dependency Injection/Inversion of Control].
It's this framework, whoever is instantiating the consumers, that is responsible for ensuring one and only one instance is floating around. It's really not that great a leap from a static class to an interface reference [as demonstrated in the link above]; you just lose the convenience of a globally accessible instance. I know, I know, global references are terribly seductive, but Luke turned his back on the Dark Side, and so can you!
Generally speaking, best practice suggests avoiding static references and encourages programming against interfaces. Remember, it is still possible to apply the singleton pattern within these constraints. Follow these guidelines, and you should have no problem unit testing your work :)
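The pattern is language-agnostic; as a hedged sketch, here it is in Swift (Logger, FileLogger and CheckoutService are invented names): consumers depend on a protocol, and whoever wires up the app decides that one shared instance satisfies it.

    protocol Logger {
        func log(_ message: String)
    }

    final class FileLogger: Logger {
        // The "one and only one" instance is an app-level wiring decision,
        // not something consumers are hard-coded against.
        static let shared = FileLogger()
        private init() {}
        func log(_ message: String) { print("file: \(message)") }
    }

    final class CheckoutService {
        private let logger: Logger
        init(logger: Logger) { self.logger = logger }   // injected, not global
        func checkout() { logger.log("checkout started") }
    }

    // Production wiring uses the shared instance...
    let service = CheckoutService(logger: FileLogger.shared)
    service.checkout()
    // ...while a unit test can pass in any Logger test double instead.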
Hope this helps!
singleton != public static class, rather singleton == single instance
Lack of testability is one of the major downfalls of the classic Singleton model (static class method returning an instance). As far as I'm concerned, that's justification enough to re-design any code that uses Singletons to use some other design.
If you absolutely need to have a singular instance, then Dependency Injection and writing to an interface, as suggested by johnny g, is definitely the way to go.
I use the following pattern when I write static-based singletons that I can mock. The code is Java, but I think you will get the idea. The main problem with this approach is that you have to relax the constructor to package-protected (which sort of defeats a true singleton).
As a side note, the pattern is about being able to mock your "static" code, not necessarily simply calling it.
I generally only use Singletons for Flyweight objects or similar value objects. Looking into an IoC container (as discussed above) is probably a better way to handle a shared object than a singleton.
Consider that in Smalltalk (where a lot of these patterns originated), true and false were both effectively singletons :)
If you must use a singleton (and there are reasons to do so, but I would always try to avoid it if possible), I would recommend using an IoC container to manage it. I'm not sure whether there is one for Delphi or not, but in Java you could use Spring, and in .NET you can use Windsor/Castle. An IoC container can hold onto the singleton and can register different implementations for testing.
It's probably too big of a subject to get into here beyond this snippet.