Protocols versus polymorphism in Swift - iOS

I am rather new to object-oriented programming and I am attempting to wrap my head around protocols, delegates, and polymorphism. I recently watched a training video which suggested that when you have two classes with similar method implementations, a protocol is the most elegant solution. That makes sense. However, some additional research led me to polymorphism, which also sounds like a preferred approach: you could simply model the shared functionality in a base class and override those methods in the subclasses.
So, I have two questions. First, is my understanding of polymorphism correct? I am still rather new to all of these concepts. Second, do protocols trump polymorphism, and when would you use one over the other?
Thanks!

There are cases in which protocols are the more appropriate way to go, and cases in which a base class is the better solution.
In Swift, a base class allows you to share the same implementation, thus reducing code redundancy. However, a base class cannot force its subclasses to override its methods. So if all the subclasses are supposed to override some specific method, a base class falls short of enforcing it (there are no abstract classes in Swift that would let you mix implementation with requirements). There are ways to "hack" around this, e.g., by putting fatalError() in the base class implementation to force the programmer to override it (otherwise the base implementation would crash) - but that is a runtime error, not a compile-time one. So if the base class is just for you, it can be a good approach, but if you are implementing a library/framework and you expect users of the library to subclass it, then you have to weigh these concerns.
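For illustration, a minimal sketch of that fatalError() workaround (the class names here are made up):

```swift
// A base class cannot *require* an override; fatalError() only
// turns a forgotten override into a runtime crash.
class DataSource {
    func fetchItems() -> [String] {
        fatalError("Subclasses must override fetchItems()")
    }
}

class NetworkDataSource: DataSource {
    override func fetchItems() -> [String] {
        return ["item from network"]
    }
}

// DataSource().fetchItems() compiles fine but crashes at runtime.
```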
Protocols, on the other hand, are contract definitions. A protocol defines which methods have to be implemented in order to conform to it, so each conforming class is forced by the compiler to implement those methods. This is usually something you want - you want to bind the implementing class by contract to fulfil the requirements of the protocol. However, making the conforming classes share code is a bit harder. Take a look at protocol extensions for this: they allow you to add "default" implementations of protocol methods.
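A small sketch of a protocol extension supplying a default implementation (names are hypothetical):

```swift
protocol Describable {
    var name: String { get }
    func describe() -> String
}

extension Describable {
    // Default implementation shared by all conforming types.
    func describe() -> String {
        return "This is \(name)"
    }
}

struct Widget: Describable {
    let name = "Widget"
    // describe() comes for free; the compiler still enforces `name`.
}
```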
You can take a look at my blog article about protocol-oriented programming for more discussion of the trade-offs.

Related

Swift. Make all classes final?

My question is about the final keyword in Swift. I know that final helps the compiler produce faster code because it can replace dynamic dispatch with static dispatch. So, if I know for certain that I will not inherit from some of my classes, should I make all of them final?
There was a protective approach taught by the Stanford iOS course.
The approach was: define all your APIs as private. That increases encapsulation. Later, if you need to expose something, remove the privacy.
This is because it's bad to do it the other way around, i.e., design something public and then later change it to private.
Similarly here, I think making a class final and later deciding it shouldn't be final is better than making a class non-final, allowing multiple classes to subclass it, and then later attempting to revert it back to final because of some design decision.
You are fine to do so if you are 110% sure you won't attempt to subclass any of your final classes, as your project won't compile if you do.
The article below has some great information and should help you decide.
http://blog.human-friendly.com/the-importance-of-being-final
If your app or framework uses protocols over inheritance, then you can define your class types as final.
If you prefer inheritance over protocols and your app or framework is covered by unit tests, then don't define your class types as final when they are used as dependency objects, because they will not be able to be mocked.
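To illustrate with hypothetical types: a final class rejects the subclassing that subclass-based mocking relies on, while a non-final class allows it:

```swift
final class APIClient {
    func fetch() -> String { return "live data" }
}

// error: inheritance from a final class 'APIClient'
// class MockAPIClient: APIClient { }

// Without `final`, a test can subclass and override:
class OpenAPIClient {
    func fetch() -> String { return "live data" }
}

class MockOpenAPIClient: OpenAPIClient {
    override func fetch() -> String { return "stubbed data" }
}
```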

Why do delegates exist in iOS?

I understand that delegates are essentially objects that another object can pass messages to, and that they are used on behalf of other classes. So for example, a UITableViewDelegate has methods which can be used to detect particular events in a UITableView. This is very useful, and indeed I have used delegates a lot in past iOS projects, so this is more of a curiosity:
Why do the methods in a delegate class not just exist in the class that the delegate is being delegated by?
Surely it would be more convenient to have those methods in the actual class, such as a UITableView?
Perhaps it is architecturally more convenient, but at first glance it seems counterintuitive.
As a general rule, composition is more powerful than inheritance. Inheritance creates many subtle problems, the most common of which is the diamond problem, but there are many other problems.
Delegation is just a specific formulation of the Strategy pattern, which allows us to extend an object via composition rather than inheritance.
As a concrete example of the issue, and how the diamond problem creeps in when you use inheritance, consider this:
You have a very common way you want to provide cells. For example, you'd like a Core Data fetch request, or a network request that generates cells. So you would build a superclass that encapsulated all this logic. We'll call the class that handles that FetchRequestDataProviding.
Separately, you have a visual behavior you use a lot. For example, you want a particular kind of animation for your views, so you wrap that up into a class FadeInTableView.
Now we have a problem because we want both. So we need multiple inheritance. And multiple inheritance is Pandora's box of ambiguities.
But I eliminate all of that if I make FetchRequestDataProviding a separate object that behaves as a delegate. I actually could make things even more powerful by breaking out FadeInAnimating as a delegate/strategy (though UIView doesn't have that power today).
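A rough sketch of that composition, reusing the answer's hypothetical names (the bodies are placeholders):

```swift
import UIKit

// The data-providing strategy lives in its own object...
protocol CellDataProviding: AnyObject {
    func numberOfItems() -> Int
    func title(at index: Int) -> String
}

final class FetchRequestDataProviding: CellDataProviding {
    func numberOfItems() -> Int { return 10 }
    func title(at index: Int) -> String { return "Row \(index)" }
}

// ...so the view can compose it with any visual behavior,
// no multiple inheritance required.
class FadeInTableView: UITableView {
    weak var dataProvider: CellDataProviding?
}
```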
In ObjC, "composition is more powerful than inheritance" commonly shows itself in a fairly shallow inheritance tree and lots of delegates. Swift pushes this further with protocols and structs that have no inheritance at all. None of this means that inheritance is bad; it can have a lot of value (though languages like Go avoid it entirely - and interestingly, Go still has to face the diamond problem due to embedding). But when in doubt, composition is the more powerful tool.

iOS: How do protocols ensure class anonymity?

I was going through Apple's documentation for protocols and got stuck trying to understand how protocols ensure class anonymity.
Can someone provide some code to show how this is implemented?
Thanks :)
You can imagine a protocol as a contract: any class conforming to the protocol promises to implement this contract - the rest of the class is out of scope of the contract. It doesn't matter what else the class is, whether it subclasses a certain class or conforms to other protocols.
So anonymity here means that at compile time the concrete class of an object is irrelevant; it only needs to fulfill the contract. As Objective-C also supports runtime manipulation, this applies at runtime as well.
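A small Swift sketch of this (all names are made up) - the function below only knows the contract, so the concrete class behind the argument stays anonymous to it:

```swift
protocol Reportable {
    func report() -> String
}

// This function only sees the protocol; the concrete type
// of `source` is irrelevant (anonymous) to it.
func printReport(from source: Reportable) {
    print(source.report())
}

struct Sensor: Reportable {
    func report() -> String { return "temperature: 21 C" }
}

printReport(from: Sensor())  // works without naming the concrete type
```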

Why does any kind of abstraction use interfaces instead of abstract classes?

Heyho,
There's been a question on my mind for some time now, which hopefully can be cleared up quickly by some of you:
I am a big fan of MVC, ASP.NET MVC in my case.
What I have noticed is the hype about interfaces. Every video, tutorial, and book seems to solve any kind of abstraction with interfaces. I have adopted these patterns, understood why and how, and I am basically very happy with it.
But I just don't get why interfaces are used everywhere. I've almost never seen abstraction done with abstract base classes, which I don't understand. Maybe I'm missing something? I know that you can only inherit from one base class while implementing multiple interfaces is possible. But interfaces have disadvantages too, especially when the interface needs to change, which breaks all of your implementations.
In my projects so far, I have only picked interfaces for completely unrelated classes.
For example, the whole repository pattern could be done with an abstract base class while still providing testability and exchangeability - or did I miss something?
Please point me to the part where my brain lags :)
Interfaces are used in tutorials, blogs, and elsewhere because those authors are particularly influenced by a school of thought called "design for testability".
Primarily, the design-for-testability school of thought uses interfaces everywhere because its adherents want to be able to mock any component under test. If you use a concrete class, a lot of mocking tools can't mock it, which makes it difficult to test your code.
A Story
I once attended a Java user group meeting where James Gosling (Java's inventor) was the featured speaker. During the memorable Q&A session, someone asked him: "If you could do Java over again, what would you change?" "I'd leave out classes," he replied. After the laughter died down, he explained that the real problem wasn't classes per se, but rather implementation inheritance (the extends relationship). Interface inheritance (the implements relationship) is preferable. You should avoid implementation inheritance whenever possible.
While using only or mostly interfaces does create code-reuse problems (as well as eliminating nice base classes), it makes it a lot easier to do multiple-inheritance-like things, and to have widely different implementations that still work, where you don't have to worry about the base class changing or even what it does (you do have to implement the whole thing, though, so it's a trade-off).
P.S. I think the new Go language is based on interfaces rather than inheritance (looks sort of interesting).
If the language doesn't support multiple inheritance or mix-ins, abstract base classes are limited in scope compared to interfaces. E.g., in .NET, if you must inherit from some other type such as MarshalByRef, you can't use an abstract base class to implement a pattern. Interfaces do not impose this restriction.
Besides the fact you mentioned - that you can inherit from only a single base class (which is pretty inconvenient if you want a class that already inherits from something else to adopt the new framework's base class) - you also avoid the fragile base class problem by using interfaces instead.
Coding against interfaces makes your design more flexible and extensible - for instance, plugin frameworks and dependency injection. Without interfaces, their extensibility is pretty much limited.
Read about interfaces, abstract classes, breaking changes, and MVC here: http://ayende.com/Blog/archive/2008/02/21/Re-Versioning-Issues-With-Abstract-Base-Classes-and-Interfaces.aspx.
One solution presented there (or somewhere else on Ayende's blog) is: do use interfaces, but also provide abstract classes. Those who care about breaking changes can base their implementations on the abstract classes; those who need the power of interfaces are also satisfied. But do make sure your methods accept interfaces, not abstract classes, as input.
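A sketch of that combination - the original discussion is .NET, but the shape is the same in Swift (all names here are hypothetical): expose an interface, ship a base class for those who want insulation from breaking changes, and have consumers accept only the interface:

```swift
// The contract consumers code against.
protocol Repository {
    func save(_ value: String)
    func loadAll() -> [String]
}

// An optional base class: implementors *may* inherit from it
// to shield themselves from future additions to the protocol.
class BaseRepository: Repository {
    private var storage: [String] = []
    func save(_ value: String) { storage.append(value) }
    func loadAll() -> [String] { return storage }
}

// Methods accept the interface, never the base class.
func backup(using repository: Repository) {
    for item in repository.loadAll() { print(item) }
}
```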

How can I test a Singleton class with DUnit?

Or is it better to use another design pattern?
I responded to a similar question a few days ago here: mocking a Singleton. The original post concerns mocking a singleton's behaviour in C#.NET, but it should still apply.
As regards the singleton pattern, there isn't anything wrong with it per se - in many cases we want to centralize logic and data. However, there is a very big difference between a singleton and a static class: building your singleton as a static class hard-codes that implementation into every consumer in your application, which makes unit testing very difficult!
What you want to do is define an interface for your singleton, exposing the methods for your consumers to use. Your consumers are in turn passed a reference to an implementing class by whoever instantiates them [typically this is your application, or a container if you are familiar with Dependency Injection/Inversion of Control].
It's this framework, whoever is instantiating the consumers, that is responsible for ensuring one and only one instance is floating around. It's really not that great a leap from a static class to an interface reference [as demonstrated in the link above]; you just lose the convenience of a globally accessible instance. I know, I know, global references are terribly seductive, but Luke turned his back on the Dark Side, and so can you!
Generally speaking, best practice suggests avoiding static references and encourages programming against interfaces. Remember, it is still possible to apply the singleton pattern within these constraints. Follow these guidelines and you should have no problem unit testing your work :)
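To make the shape concrete, here is a rough Swift sketch of that idea (the types are invented, and the question's context is Delphi, but the pattern is language-agnostic):

```swift
// Consumers depend on a protocol, not on a concrete singleton.
protocol Logger {
    func log(_ message: String)
}

final class FileLogger: Logger {
    // The application keeps the single shared instance...
    static let shared = FileLogger()
    private init() {}
    func log(_ message: String) { print("file: \(message)") }
}

final class OrderService {
    private let logger: Logger
    // ...but consumers receive it by injection, so tests can
    // substitute any other Logger conformance as a mock.
    init(logger: Logger) { self.logger = logger }
    func placeOrder() { logger.log("order placed") }
}

// Production: OrderService(logger: FileLogger.shared)
// Test:       OrderService(logger: someMockLogger)
```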
Hope this helps!
singleton != public static class, rather singleton == single instance
Lack of testability is one of the major downfalls of the classic Singleton model (static class method returning an instance). As far as I'm concerned, that's justification enough to re-design any code that uses Singletons to use some other design.
If you absolutely need to have a singular instance, then Dependency Injection and writing to an interface, as suggested by johnny g, is definitely the way to go.
I'm using the following pattern when I write static-based singletons that I can mock. The code is Java, but I think you will get the idea. The main problem with this approach is that you have to relax the constructor to package-protected (which sort of defeats a true singleton).
As a side note, the point of the pattern is the ability to mock your "static" code, not necessarily simply calling it.
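The Java snippet from the original post isn't reproduced here; as a rough stand-in, the same idea rendered in Swift might look like this (names are hypothetical):

```swift
class Config {
    // `shared` is a var and init() is internal rather than private:
    // this relaxes the "true" singleton, but lets test code in the
    // same module replace the shared instance.
    static var shared = Config()
    init() {}
    func value(for key: String) -> String { return "real:" + key }
}

class StubConfig: Config {
    override func value(for key: String) -> String { return "stub:" + key }
}

// In a test's setUp:
// Config.shared = StubConfig()
```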
I generally only use Singletons for Flyweight objects or similar value objects. Looking into an IoC container (as discussed above) is probably a better way to handle a shared object than a singleton.
Consider that in Smalltalk (where a lot of these patterns originated), true and false were both effectively singletons :)
If you must use a singleton (and there are reasons to do so, though I would always try to avoid it if possible), I would recommend using an IoC container to manage it. I'm not sure if there is one for Delphi or not, but in Java you could use Spring, and in .NET you can use Windsor/Castle. An IoC container can hold onto the singleton and can register different implementations for testing.
It's probably too big of a subject to get into here beyond this snippet.
