Why is subclassing undesirable? [closed] - ios

Closed 9 years ago.
Why does the UIApplication class need a delegate to have "things" done on its behalf? Why can't this class do "things" directly without the need for an intermediate object such as a delegate? What is the purpose of that principle?
In the same way, I ask myself: why isn't it possible to access the keyboard directly in order to dismiss it? Why do you have to go through the view controller acting as a delegate and have the control resign first responder?

Delegation is a core design pattern. It allows for the separation of responsibilities between parts of your program. The idea is that the part of your program that, for example, draws to the screen probably shouldn't be talking to your database. There are several reasons for this:
Performance: if the same object that draws to the screen also accesses your data store, you're going to run into performance problems. Period. Full stop.
Code maintenance: it's easier to conceptualize code that is properly modularized. (For me, anyway.)
Flexibility: if you subclass in your code, that's great - until you start running into monolithic classes that have all sorts of undesired behavior. You'll reach the point where you have to overload behaviors to turn things off, and your property namespace can become polluted. Try categories, delegation, and blocks as alternatives. As King Solomon says in Ecclesiastes, paraphrased: there's a time and place for everything under the sun.
To make it easier to write, read, iterate upon, and maintain your program, it is strongly advised that you follow certain practices. You're welcome to subclass many classes, and Apple won't reject your app for poor code, provided that it runs as advertised. That said, if you don't abide by specific tried-and-true practices, you're digging your own grave. Subclassing isn't inherently bad, but categories, protocols, and blocks are so fascinating that I'd prefer them anyway.
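For illustration, here is a minimal sketch of the delegate pattern in Objective-C. The names (FileDownloader, FileDownloaderDelegate) are invented for this example and are not part of any Apple framework; the point is simply that the downloader does its work and hands the result to whoever is listening, without knowing anything about that object.

```objc
// Hypothetical FileDownloader/FileDownloaderDelegate, not an Apple API.
#import <Foundation/Foundation.h>

@class FileDownloader;

// The protocol declares what the downloader wants done on its behalf,
// without knowing anything about the object that does it.
@protocol FileDownloaderDelegate <NSObject>
- (void)downloader:(FileDownloader *)downloader didFinishWithData:(NSData *)data;
@optional
- (void)downloader:(FileDownloader *)downloader didFailWithError:(NSError *)error;
@end

@interface FileDownloader : NSObject
// weak, so the downloader and its delegate don't retain each other
@property (nonatomic, weak) id<FileDownloaderDelegate> delegate;
- (void)start;
@end

@implementation FileDownloader
- (void)start {
    // ... perform the download, then hand the result to whoever is listening.
    NSData *data = [NSData data]; // placeholder result
    [self.delegate downloader:self didFinishWithData:data];
}
@end
```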

Why is subclassing undesirable?
Subclassing isn't undesirable. It's just that it's not always the right solution, and it's often better to use composition instead of inheritance. Lots has been written on this on Stack Overflow already, to the point that your title puts this question in danger of being closed as a duplicate. Rather than trying to repeat, I'll just point to one of the more popular questions covering this topic: Prefer composition over inheritance?
Why does the UIApplication class need a delegate to have "things" done on its behalf? Why can't this class do "things" directly without the need for an intermediate object such as a delegate? What is the purpose of that principle?
It provides a much looser coupling that allows Apple to change or replace the application object without breaking existing code.

Delegates are very useful, but they also have their place and time. It is also very possible to get by without using delegates. There are certain instances, though, where delegates are important. For instance, when you use the UITextView delegate, it is important for the controlling class to know when the text changes so it can handle it. And, because different classes may want to handle this event in different ways (limit characters, capitalize letters, etc.), it's easiest to use a delegate.
You may be saying: yes, but why not just make a subclass with the desired behavior? Well, this is possible, but there are a few reasons why you should not proceed this way. First, you may override a behavior that happens before the delegate is called. Second, you might accidentally override a method that was necessary, which can be dangerous. Third, it is a lot more work and more code, so it will be harder to debug. Imagine you have a form with 100 text fields and they all have different behaviours: you would not want to create 100 subclasses... it's much easier to do a switch case and delegate appropriately.
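To make the switch-and-delegate idea concrete, here is a rough sketch of one controller acting as the delegate for many text fields via the standard UITextFieldDelegate callback. The tag values and the 20-character limit are made up for the example, and the fields are assumed to have their delegate and tags set elsewhere (e.g. in the storyboard).

```objc
#import <UIKit/UIKit.h>

@interface FormViewController : UIViewController <UITextFieldDelegate>
@end

@implementation FormViewController

// Each field's delegate is assumed to be this controller, and each field is
// assumed to carry a distinguishing tag (the values here are hypothetical).
- (BOOL)textField:(UITextField *)textField
    shouldChangeCharactersInRange:(NSRange)range
                replacementString:(NSString *)string {
    NSString *proposed = [textField.text stringByReplacingCharactersInRange:range
                                                                 withString:string];
    switch (textField.tag) {
        case 1: // e.g. a username field: cap the length at 20 characters
            return proposed.length <= 20;
        case 2: // e.g. a code field: force uppercase by setting the text ourselves
            textField.text = [proposed uppercaseString];
            return NO;
        default:
            return YES;
    }
}

@end
```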

Related

AppDelegate property or Singleton Object?

This is more of a design best practice question:
When you're designing the structure of, let's say, a location-based app, the location manager is obviously an important instance, and other objects should have easy access to it.
Should you have it as a property of the AppDelegate, or as a singleton on its own?
Under what scenario would you prefer one over the other?
I understand both would work, but I want to make sure I'm doing things the right way, not just hacking everything together.
Your inputs are much appreciated!
Neither.
Pass in a location manager object via a custom init method or property wherever you need it.
This conforms with the S, O and D of the SOLID principles (single responsibility, open/closed, dependency inversion).
Testing with mocks also becomes much easier.
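A minimal sketch of what that constructor injection might look like; MapViewController is a made-up name, and the only real API used is CLLocationManager itself.

```objc
#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

@interface MapViewController : UIViewController
@property (nonatomic, strong, readonly) CLLocationManager *locationManager;
- (instancetype)initWithLocationManager:(CLLocationManager *)locationManager;
@end

@implementation MapViewController

// The controller is handed its dependency instead of reaching for a global.
- (instancetype)initWithLocationManager:(CLLocationManager *)locationManager {
    self = [super initWithNibName:nil bundle:nil];
    if (self) {
        _locationManager = locationManager;
    }
    return self;
}

@end
```

In a unit test you could hand it a CLLocationManager subclass (or some other stand-in you control) instead of the real thing.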
The worst choice IMO is to store things in the app delegate. See What describes the Application Delegate best? How does it fit into the whole concept? for much more on that. In short, the app delegate is the Application Delegate. It is not "the Application Dumping Ground for Globalish Things."
Singletons are a long-established approach in ObjC, via the shared... pattern. After decades of popularity, and extensive use within the core Cocoa frameworks (NSUserDefaults, NSNotificationCenter, NSApplication, NSFontManager, NSWorkspace, UIDevice, etc. etc. etc.), they have in recent years fallen into some disregard in favor of other techniques, particularly "dependency injection," which is just to say "assigning to a property."
After years of using singletons in ObjC, I am coming around to the DI way of thinking. The improvements in testability and the improved clarity of dependencies are quite nice. That said, sometimes DI can lead to awkward transitions, particularly when dealing with storyboards.
So, I would say:
When practical, just construct objects and assign them to properties on the objects that need them.
If you have a lot to pass around, consider collecting them into a single "configuration" object and pass that around (but this hurts modularity somewhat).
If DI creates chaos (particularly if it leads to a lot of passing objects to things just so they can pass the object on to something else), or if it forces a lot of storyboard segue code that you could otherwise avoid, singletons are a well-established and well-respected pattern in Cocoa and are not wrong. They are a Cocoa Core Competency.
If you ever find yourself calling (MyAppDelegate *)[[UIApplication sharedApplication] delegate], then you're doing something wrong.
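For reference, the well-established shared... pattern mentioned above usually boils down to a few lines. The LocationService name here is invented, but dispatch_once is the standard way to make the one-time setup thread-safe.

```objc
#import <Foundation/Foundation.h>

@interface LocationService : NSObject
+ (instancetype)sharedService;
@end

@implementation LocationService

+ (instancetype)sharedService {
    static LocationService *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Runs exactly once, even if several threads race to get the instance.
        shared = [[self alloc] init];
    });
    return shared;
}

@end
```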
Make another singleton for location management. Single responsibility is the first principle of SOLID.
Think about whether you really need it in the AppDelegate.
For a location manager, for example, you don't. You're better off keeping the location manager, with all its related methods, in a separate class, both to respect the single responsibility principle, as #vladimir said, and to be able to reuse it later.
The AppDelegate is responsible for handling what happens when the app launches and/or goes to the background, initializing Core Data, registering for push notifications, setting up other third-party libraries like Parse, and so on.
Adding other things to the AppDelegate will make it grow larger over time and become very hard to maintain.
When should you add things to the AppDelegate that don't belong there? I think only when the app is small, you know it won't scale up, and you're required to favor time over clean code.
Check the AppDelegate's responsibilities, and this answer by Matt.
You can create multiple instances of CLLocationManager and use them wherever you need them.
If you create one instance and try to share it, then you'll have trouble trying to forward the delegate methods around or re-implement them as notifications, leading to a big mess.
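A rough sketch of that per-screen approach, where each controller owns its own CLLocationManager and acts as its delegate (the class name is invented; authorization handling is omitted):

```objc
#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

@interface NearbyPlacesViewController : UIViewController <CLLocationManagerDelegate>
@property (nonatomic, strong) CLLocationManager *locationManager;
@end

@implementation NearbyPlacesViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // This screen owns its own manager; no forwarding of delegate calls needed.
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
    [self.locationManager startUpdatingLocation];
}

- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray *)locations {
    // React to location updates for this screen only.
}

@end
```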

Couple of questions about asp.net MVC view model patterns [closed]

Closed 11 years ago.
I'm new to MVC.
I have read this short bit detailing three ways of dealing with the view model in MVC:
http://geekswithblogs.net/michelotti/archive/2009/10/25/asp.net-mvc-view-model-patterns.aspx
The gist of it seems to me that:
Method 1, pull an object out of the database and use it as your view model. Quick and simple, but if you want data from multiple tables you're totally screwed (I can't think of a way around it without Method 2).
Method 2, create a class that has references to multiple objects, and use this as your view model. This way you can access everything you need. The article says that when views become complicated it breaks down due to impedance mismatch between domain objects and view model objects... I don't understand what this means. Googling "impedance mismatch" returned a lot of stuff, the gist of it being that you're representing database stuff using objects and things don't map across cleanly, but you'd presumably have this problem even with Method 1. Not sure what I'm missing. It also seems to me that creating a class for each view just to get the data you want isn't ideal from a maintenance point of view, though I'm not sure you have a choice.
Method 3, I'm still getting my head around it, but I don't quite understand why their checkbox example wouldn't work in Method 2 if you added a bool addAdditional to your class that wasn't connected to the domain model.
Method 3 seems to say: rather than return the domain objects directly, just pull out the properties you specifically need. I think that's nicer, but it's going to be harder to maintain since you'll need some big constructors that do this.x = domain.x, this.y = domain.y, etc.
I don't understand the builder, specifically why the interface is used, but will keep working on it.
Edit: I just realised this isn't really a question, my question is, is my thinking correct?
The problem I've run into with #2 is that I have to do one of these two things:
Include every single field on every single object on the form -- those that aren't going to be displayed need to be included but hidden.
Only include the specific fields I need but use AutoMapper or something similar to map these fields back onto the actual objects.
So with #2 I see a mismatch between what I want to do and what I'm required to do. If we move on to #3, this mismatch is removed (from what I understand; I've only briefly glanced at it). It also fixes the problem that, with method #2, a determined hacker could submit values such as ID fields that, unless I was very careful, could be written to my data store. In other words, it is possible to update anything on any of the objects unless one is very careful.
With method #3 you can use AutoMapper or similar to do the dirty work of mapping the custom object to the data store objects without worrying about the security issues/impedance exposed by method #2 (see comment for more details on security issues with #2).
You're probably right about impedance mismatch being present in both Methods 1 and 2 - it shows up anywhere you're going between objects in code and DB objects with relational mapping. Jeff Atwood writes about it, and cites this article, which is a fantastic discussion of everything dealing with Object-Relational mapping, or "The Vietnam of Computer Science". What you'll pretty much end up doing is weighing the pros and cons of all these approaches and choose the one that sounds like it fits your needs the best, then later realizing you chose the wrong one. Or perhaps you're luckier than I am and can get it right the first go 'round. Either way, it's a hairy problem.

Singletons: Pros, Cons, Design Concerns [closed]

Closed 11 years ago.
I admit it. I am using singletons. I know what you may all say, and frankly, seeing all these answers on the Internet describing the bad aspects of singletons and advising against them really made me question my programming practices.
I have already read some posts in StackOverflow regarding Singletons, but I post this question not only to ask about them, but to see some insights about the way I use them in my programs.
I feel I must really clarify some things here and ask for directions.
So let's consider some cases where I use singletons a lot:
To create accessors to global variables, like my root view controller, specific and always existing view controllers, application state, my global managed object context... stuff like that
To create utility classes whose job is to handle data application-wide. For example, I create a Singleton that will operate my caching database, which relies on Core Data. Since I need to create caches and other stuff to be put in the database in different views, it somehow felt better to create a class that will handle database ins/outs (being careful about thread safety).
Handling network sessions. Actually, I use it to keep a connection alive and send something like a PING to a server every XX seconds.
I think that about sums it up. I would really like opinions from other developers on the matter.
Do you think that there are better solutions for these problems above?
Do you think that there are always better alternatives to singletons and that they should be avoided?
Is it better in terms of multithreading to forget about singletons?
Any recommendations and thoughts would be useful, and most welcome.
Singletons are certainly not always evil, but as you mention you have to be careful about thread safety (on that topic, check out this blog post on singleton initialization).
One reason why singletons are often decried as evil is that they can make it harder to "scale" parts of a program if they rely too much on a singleton and its behavior. You mention database access; a server or desktop application might start out with a singleton implementation to handle all database needs and then later try to use multiple connections to speed up independent requests, etc. Breaking up such code can be very hard.
Even in an iOS application using CoreData it can be useful to not rely on a global ManagedObjectContext on your application delegate or some root view controller.
If you pass each view controller a reference to a ManagedObjectContext you can gain some flexibility. Most of the time you'll just pass the same context from one view controller to the next, but if you ever decide to in the future, you can for example create a new ManagedObjectContext for an editing view where you can use the undo features of the context but only merge the changes back into the "root" context if the user decides to save them or easily discard them otherwise. Or maybe you want to do some background processing on a whole set of objects. If everything operates on the same context, you'll end up with synchronization issues.
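As a rough sketch of that editing-context idea (the class and property names are invented; the APIs shown are standard Core Data):

```objc
#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

@interface EditViewController : UIViewController
@property (nonatomic, strong) NSManagedObjectContext *mainContext;    // injected
@property (nonatomic, strong) NSManagedObjectContext *editingContext; // scratchpad
@end

@implementation EditViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // A separate context against the same store, with its own undo manager.
    self.editingContext = [[NSManagedObjectContext alloc]
                           initWithConcurrencyType:NSMainQueueConcurrencyType];
    self.editingContext.persistentStoreCoordinator = self.mainContext.persistentStoreCoordinator;
    self.editingContext.undoManager = [[NSUndoManager alloc] init];
}

- (void)save {
    NSError *error = nil;
    if ([self.editingContext save:&error]) {
        // Elsewhere, the main context observes NSManagedObjectContextDidSaveNotification
        // and calls mergeChangesFromContextDidSaveNotification: to pick up the edits.
    }
}

- (void)cancel {
    // Throw the scratchpad away; the main context never sees the abandoned edits.
    self.editingContext = nil;
}

@end
```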
To create accessors to global variables, like my root view controller, specific and always existing view controllers, application state, my global managed object context... stuff like that
Singletons are globals. Wrapping a global with another global doesn't really help anything.
To create utility classes whose job is to handle data application-wide. For example, I create a Singleton that will operate my caching database, which relies on Core Data. Since I need to create caches and other stuff to be put in the database in different views, it somehow felt better to create a class that will handle database ins/outs (being careful about thread safety).
Sure, but this probably doesn't need to be a singleton. Indeed, part of the design of Core Data is that it is valid to have multiple MOCs that talk to the same store.
Handling network sessions. Actually, I use it to keep alive a connection and sending something like a PINg to a server each XX seconds.
This should be a singleton if the server might enforce some n-connections-per-user/IP-address limit. Otherwise, it probably does not need to be a singleton.
As I mention in my aforelinked blog post, the solution I prefer is to have objects owned by the objects that need to work with them. For example, your MOC and connection objects would each be owned by any object(s) that need to work with them. This improves memory and resource management (singleton objects never die) and makes the overall design of the application that much more straightforward.

When to violate YAGNI? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The YAGNI "principle" states that you shouldn't focus on providing functionality before you need it, as "you ain't gonna need it" anyway.
I usually tend to put common sense above any rule, no matter what, but there are times when I feel it is useful to over-design or future-proof something if you have good reasons, even if it's possible you'll never use it.
The actual case I have in my hands right now is more or less like this:
I've got an application that has to run over a simple proprietary communication protocol (OSI level 4). This protocol has a desirable set of characteristics (such as following the NORM specification) which provide robustness to the application but which are not strictly required (UDP multicast could perform acceptably).
There's also the fact that the application will probably (but not surely) be used by other clients in the future which will not have access to the proprietary solution and will therefore need another one. I know for a fact the probability of another client for the application is high.
So, what's your thinking? Should I just design for the proprietary protocol and leave the refactoring, interface extraction and so on for when I really need it, or should I design now with the (not so distant) future in mind?
Note: Just to be clear, I'm interested in hearing all kind of opinions to the general question (when to violate YAGNI) but I'd really like some advice or thoughts on my current dilemma :)
The reason YAGNI applies to code is that the cost of change is low. With good, well refactored code adding a feature later is normally cheap. This is different from say construction.
In the case of protocols, making changes later is usually not cheap. Old versions break, it can lead to communication failures, and you get an N^2 testing matrix because you have to test every version against every other version. Compare this with a single codebase, where new versions only have to work with themselves.
So in your case, for the protocol design, I wouldn't recommend YAGNI.
IMHO
I'd say go YAGNI first. Get it working without the NORM specification using 'the simplest thing that would work'.
Next, compare whether the cost of making the 'design changes' in the future is significantly greater than making the change now. Is your current solution reversible? If you can easily make the change tomorrow or after a couple of months, don't do it now. If you don't need to make an irreversible design decision now, delay it till the last responsible moment (so that you have more information to make a better decision).
To close: if you know with a considerable degree of certainty that something is on the horizon and adding it later is going to be a pain, don't be an ostrich... design for it.
e.g. I know that diagnostic logs will be needed before the product ships. Adding logging code after a month would be much more effort than adding it today as I write each function... so this would be a case where I'd override YAGNI even though I don't need logs right now.
See also: T. & M. Poppendieck's Lean books are better at explaining the dilemma of bullet #2 above.
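Relating to the diagnostic-logging example above, one low-cost way to bake logging in from day one is a tiny macro used as each function is written. DLog is a common community idiom rather than an Apple API; this is just one possible form of it.

```objc
// Compiles to NSLog with function and line information in debug builds,
// and to nothing in release builds.
#ifdef DEBUG
#define DLog(fmt, ...) NSLog((@"%s [Line %d] " fmt), __PRETTY_FUNCTION__, __LINE__, ##__VA_ARGS__)
#else
#define DLog(...)
#endif
```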
Structuring your program well (abstraction, etc) isn't something that YAGNI applies to. You always want to structure your code well.
Just to clarify, I think your current predicament is due to over-application of YAGNI. Structuring your code in such a way that you have a reusable library for using this protocol is just good programming practice. YAGNI does not apply.
I think that YAGNI could be inappropriate when you want to learn something :) YAGNI is good for the professionals, but not for students. When you want to learn you'll always need it.
I think it's pretty simple and obvious:
Violate YAGNI when you know that, in full certainty, You Are Going To Need It
I wouldn't worry. The fact that you are aware of "YAGNI" means you are already thinking pragmatically.
I'd say, regardless of anything posted here, you are statistically more likely to come up with better code than someone who isn't analysing their practices in the same way.
I agree with Gishu and Nick.
Designing part of a protocol later often leads to thoughts like "damn, I should have done this that way, now I have to use this ugly workaround"
But it also depends on who will interface with this protocol.
If you control both ends, and they will change versions at the same time, you can always refactor the protocol later as you would a normal code interface.
As for implementing the extra protocol features later: I've found that implementing a protocol helps a lot in validating its design, so you may at least want to write a simple out-of-production code sample to test it if you need the design to be official.
There are some cases where it makes sense to go against the YAGNI intuition.
Here are a few:
Following programming conventions. Especially base class and interface contracts. For example, if a base class you inherit from provides a GetHashCode and an Equals method, overriding Equals but not GetHashCode breaks platform-documented rules that developers are supposed to follow when they override Equals. This convention should be followed even if you find that GetHashCode would not actually be called. Not overriding GetHashCode is a bug even if there is no current way to provoke it (other than a contrived test). A future version of the platform might introduce calls to GetHashCode. Or another programmer who has read the documentation (in this example, the platform documentation for the base class you are inheriting from) might rightfully expect that your code adheres to it without examining your code. Another way of thinking about this is that all code and applicable documentation must be consistent, even with documentation written by others, such as that provided by the platform vendor. (A Cocoa analogue, isEqual: and hash, is sketched after this list.)
Supporting customization. Particularly by external developers who will not be modifying your source code. You must figure out and implement suitable extension points in your code so that these developers can implement all kinds of addon functionality that never crossed your mind. Unfortunately, it is par for the course that you will add some extensibility features that few if any external developers ultimately use. (If it is possible to discuss the extensibility requirements with all of the external developers ahead of time or use frequent development/release cycles, great, but this is not feasible for all projects.)
Assertions, debug checks, failsafes, etc. Such code is not actually needed for your application to work correctly, but it will help make sure that your code works properly now and in the future when revisions are made.
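To ground the Equals/GetHashCode point in Cocoa terms: the same documented contract exists for isEqual: and hash on NSObject, and you should override both even if nothing puts your object in a set or dictionary today. The Money class below is invented for illustration (nil handling is omitted for brevity).

```objc
#import <Foundation/Foundation.h>

@interface Money : NSObject
@property (nonatomic, copy) NSString *currency;
@property (nonatomic, assign) NSInteger amount;
@end

@implementation Money

- (BOOL)isEqual:(id)object {
    if (self == object) return YES;
    if (![object isKindOfClass:[Money class]]) return NO;
    Money *other = object;
    return self.amount == other.amount &&
           [self.currency isEqualToString:other.currency];
}

// Required by the NSObject contract: equal objects must return equal hashes,
// otherwise NSSet and NSDictionary lookups silently misbehave.
- (NSUInteger)hash {
    return [self.currency hash] ^ (NSUInteger)self.amount;
}

@end
```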

How do you make code reusable? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
Any code can be reused in one way or another, at least if you modify it. Random code is not very reusable as such. When I read books on the subject, they usually say that you should explicitly make the code reusable by taking other situations of code usage into account. But certain code should not be an omnipotent, all-doing class either.
I would like to have reusable code that I don't have to change later. How do you make code reusable? What are the requirements for code being reusable? What are the things that reusable code should definitely have and what things are optional?
See 10 tips on writing reusable code for some help.
Keep the code DRY. DRY means "Don't Repeat Yourself".
Make a class/method do just one thing.
Write unit tests for your classes AND make it easy to test classes.
Keep the business logic or main code away from any framework code.
Try to think more abstractly and use Interfaces and Abstract classes.
Code for extension. Write code that can easily be extended in the future.
Don't write code that isn't needed.
Try to reduce coupling.
Be more Modular
Write code like your code is an External API
If you take the Test-Driven Development approach, then your code only becomes reusable as you refactor based on forthcoming scenarios.
Personally I find that constantly refactoring produces cleaner code than trying to second-guess what scenarios I need to code a particular class for.
More than anything else, maintainability makes code reusable.
Reusability is rarely a worthwhile goal in itself. Rather, it is a by-product of writing code that is well structured, easily maintainable and useful.
If you set out to make reusable code, you often find yourself trying to take into account requirements for behaviour that might be required in future projects. No matter how good you become at this, you'll find that you get these future-proofing requirements wrong.
On the other hand, if you start with the bare requirements of the current project, you will find that your code can be clean and tight and elegant. When you're working on another project that needs similar functionality, you will naturally adapt your original code.
I suggest looking at the best practices for your chosen programming language / paradigm (e.g. Patterns and SOLID for Java / C# types), the Lean / Agile programming literature, and (of course) the book "Code Complete". Understanding the advantages and disadvantages of these approaches will improve your coding practice no end. All your code will then become reusable - but 'by accident', rather than by design.
Also, see here: Writing Maintainable Code
You'll write various modules (parts) when writing a relatively big project. Reusable code in practice means you'll have to create libraries that other projects needing that same functionality can use.
So you have to identify modules that can be reused. To do that:
Identify the core competence of each module. For instance, if your project has to compress files, you'll have a module that will handle file compression. Do NOT make it do more than ONE THING. One thing only.
Write a library (or class) that will handle file compression, needing nothing more than the file to be compressed, the output, and the compression format (see the sketch at the end of this answer). This decouples the module from the rest of the project, enabling it to be (re)used in a different setting.
You don't have to get it perfect the first time, when you actually reuse the library you will probably find out flaws in the design (for instance, you didn't make it modular enough to be able to add new compression formats easily) and you can fix them the second time around and improve the reusability of your module. The more you reuse it (and fix the flaws), the easier it'll become to reuse.
The most important thing to consider is decoupling; if you write tightly coupled code, reusability is the first casualty.
Leave any needed state or context outside the library, and add methods that let callers pass that state in.
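A minimal sketch of the compression module described above. All names (FileCompressor, CompressionFormat) are invented, and the actual compression call is omitted; the point is the interface, which needs nothing from the surrounding project.

```objc
#import <Foundation/Foundation.h>

typedef NS_ENUM(NSInteger, CompressionFormat) {
    CompressionFormatZip,
    CompressionFormatGzip
};

@interface FileCompressor : NSObject
// Everything the module needs is passed in; nothing about the host project leaks in.
- (BOOL)compressFileAtPath:(NSString *)inputPath
                    toPath:(NSString *)outputPath
                    format:(CompressionFormat)format
                     error:(NSError **)error;
@end

@implementation FileCompressor

- (BOOL)compressFileAtPath:(NSString *)inputPath
                    toPath:(NSString *)outputPath
                    format:(CompressionFormat)format
                     error:(NSError **)error {
    // The real compression call (e.g. via a zip library) would go here;
    // it is omitted because the point of the sketch is the decoupled interface.
    return NO;
}

@end
```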
For most definitions of "reuse", reuse of code is a myth, at least in my experience. Can you tell I have some scars from this? :-)
By reuse, I don't mean taking existing source files and beating them into submission until a new component or service falls out. I mean taking a specific component or service and reusing it without alteration.
I think the first step is to get yourself into a mindset that it's going to take at least 3 iterations to create a reusable component. Why 3? Because the first time you try to reuse a component, you always discover something that it can't handle. So then you have to change it. This happens a couple of times, until finally you have a component that at least appears to be reusable.
The other approach is to do an expensive forward-looking design. But then the cost is all up-front, and the benefits (may) appear some time down the road. If your boss insists that the current project schedule always dominates, then this approach won't work.
Object-orientation allows you to refactor code into superclasses. This is perhaps the easiest, cheapest and most effective kind of reuse. Ordinary class inheritance doesn't require a lot of thinking about "other situations"; you don't have to build "omnipotent" code.
Beyond simple inheritance, reuse is something you find more than you invent. You find reuse situations when you want to reuse one of your own packages to solve a slightly different problem. When you want to reuse a package that doesn't precisely fit the new situation, you have two choices.
Copy it and fix it. You now have two nearly similar packages -- a costly mistake.
Make the original package reusable in two situations.
Just do that for reuse. Nothing more. Too much thinking about "potential" reuse and undefined "other situations" can become a waste of time.
Others have mentioned these tactics, but here they are formally. These three will get you very far:
Adhere to the Single Responsibility Principle - it ensures your class only "does one thing", which means it's more likely it will be reusable for another application which includes that same thing.
Adhere to the Liskov Substitution Principle - it ensures your code "does what it's supposed to without surprises", which means it's more likely it will be reusable for another application that needs the same thing done.
Adhere to the Open/Closed Principle - it ensures your code can be made to behave differently without modifying its source, which means it's more likely to be reusable without direct modification.
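As a small Objective-C illustration of the Open/Closed idea: new report formats are added by writing new classes that conform to a protocol, without modifying the exporter that uses them. All names are invented, and rows are assumed to be plain strings for brevity.

```objc
#import <Foundation/Foundation.h>

@protocol ReportFormatter <NSObject>
- (NSData *)dataForRows:(NSArray *)rows;
@end

@interface CSVFormatter : NSObject <ReportFormatter>
@end

@implementation CSVFormatter
- (NSData *)dataForRows:(NSArray *)rows {
    // rows are assumed to be NSString instances for brevity
    NSString *csv = [rows componentsJoinedByString:@"\n"];
    return [csv dataUsingEncoding:NSUTF8StringEncoding];
}
@end

@interface ReportExporter : NSObject
// Closed for modification, open for extension: hand it any conforming formatter.
- (NSData *)exportRows:(NSArray *)rows withFormatter:(id<ReportFormatter>)formatter;
@end

@implementation ReportExporter
- (NSData *)exportRows:(NSArray *)rows withFormatter:(id<ReportFormatter>)formatter {
    return [formatter dataForRows:rows];
}
@end
```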
To add to the above mentioned items, I'd say:
Make the functions you need to reuse generic.
Use configuration files and make the code use the properties defined in the files/db.
Clearly factor your code into functions/classes that provide independent functionality and can be used in different scenarios, and define those scenarios using the config files.
I would add the concept of "Class composition over class inheritance" (which is derived from other answers here).
That way the "composed" object doesn't care about the internal structure of the object it depends on - only its behavior, which leads to better encapsulation and easier maintainability (testing, fewer details to care about).
In languages such as C# and Java it is often crucial, since there is no multiple inheritance, so it helps you avoid the inheritance-graph hell you might otherwise end up with.
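A small Objective-C sketch of composition over inheritance: the Stack below reuses NSMutableArray's behaviour by owning one rather than subclassing it, so callers depend only on the stack's behaviour and never see the array's internals. The Stack class is invented for this example.

```objc
#import <Foundation/Foundation.h>

@interface Stack : NSObject
- (void)push:(id)object;
- (id)pop;
@end

@implementation Stack {
    NSMutableArray *_storage; // the composed object; its internals stay hidden
}

- (instancetype)init {
    self = [super init];
    if (self) { _storage = [NSMutableArray array]; }
    return self;
}

- (void)push:(id)object {
    [_storage addObject:object];
}

- (id)pop {
    id last = [_storage lastObject];
    if (last) { [_storage removeLastObject]; }
    return last;
}

@end
```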
As mentioned, modular code is more reusable than non-modular code.
One way to help towards modular code is to use encapsulation, see encapsulation theory here:
http://www.edmundkirwan.com/
Ed.
Avoid reinventing the wheel. That's it. And that by itself has many benefits mentioned above. If you do need to change something, then you just create another piece of code, another class, another constant, library, etc... it helps you and the rest of the developers working in the same application.
Comment, in detail, everything that seems like it might be confusing when you come back to the code next time. Excessively verbose comments can be slightly annoying, but they're far better than sparse comments, and can save hours of trying to figure out WTF you were doing last time.
