In Java, it is very common to have a class holding a list of utility functions:
public class Utils {
    private Utils() {
    }

    public static void doSomething() {
        System.out.println("Utils");
    }
}
If I were in Swift, should I use a class or a struct to achieve a similar thing? Or does it really not matter?
class
class Utils {
    private init() {
    }

    static func doSomething() {
        print("class Utils")
    }
}
struct
struct Utils {
    private init() {
    }

    static func doSomething() {
        print("struct Utils")
    }
}
I think a conversation about this has to start with an understanding of dependency injection: what it is, and what problem it solves.
Dependency Injection
Programming is all about the assembly of small components into ever-more-abstract assemblies that do cool things. That's fine, but large assemblies are hard to test, because they're very complex. Ideally, we want to test the small components, and the way they fit together, rather than testing entire assemblies.
To that end, unit and integration tests are incredibly useful. However, every global function call (including direct calls to static functions, which are really just global functions in a nice little namespace) is a liability. It's a fixed junction with no seam that can be cut apart by a unit test. For example, if you have a view controller that directly calls a sort method, you have no way to test your view controller in isolation from the sort method. There are a few consequences of this:
Your unit tests take longer, because they test dependencies multiple times (e.g. the sort method is tested by every piece of code that uses it). This disincentivizes running them regularly, which is a big deal.
Your unit tests become worse at isolating issues. Broke the sort method? Now half your tests are failing (everything that transitively depends on the sort method). Hunting down the issue is harder than if only a single test-case failed.
Dynamic dispatch introduces seams. Seams are points of configurability in the code, where one implementation can be taken out and another one put in. For example, you might want to have a MockDataStore, a BetaDataStore, and a ProdDataStore, which is picked depending on the environment. If all 3 of these types conform to a common protocol, then dependent code can be written against that protocol, which allows the different implementations to be swapped in as necessary.
To this end, for code that you want to be able to isolate out, you never want to use global functions (like foo()), or direct calls to static functions (which are effectively just global functions in a namespace), like FooUtils.foo(). If you want to replace foo() with foo2(), or FooUtils.foo() with BarUtils.foo(), you can't.
Dependency injection is the practice of "injecting" dependencies (choosing them based on a configuration) rather than hard-coding them. Rather than hard-coding a dependency on FooUtils.foo(), you make a Fooable interface, which requires a function foo. In the dependent code (the type that will call foo), you store an instance member of type Fooable. When you need to call foo, call self.myFoo.foo(). This way, you will be calling whatever implementation of Fooable was provided ("injected") to the self instance at the time of its construction. It could be a MockFoo, a NoOpFoo, a ProdFoo; it doesn't care. All it knows is that its myFoo member has a foo function, and that it can be called to take care of all of its foo needs.
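To make this concrete, here is a minimal sketch of the Fooable idea in Swift (the Dependent type and its doWork method are hypothetical names, invented for illustration):

protocol Fooable {
    func foo()
}

struct ProdFoo: Fooable {
    func foo() { print("prod foo") }
}

struct MockFoo: Fooable {
    func foo() { /* record the call for test assertions */ }
}

// Hypothetical dependent type, invented for illustration.
final class Dependent {
    private let myFoo: Fooable

    // The dependency is "injected" at construction time.
    init(myFoo: Fooable) {
        self.myFoo = myFoo
    }

    func doWork() {
        myFoo.foo() // whichever implementation was injected
    }
}

Dependent(myFoo: ProdFoo()).doWork() // production wiring
Dependent(myFoo: MockFoo()).doWork() // test wiring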
The same thing could also be achieved with a base-class/sub-class relationship, which for these intents and purposes acts just like a protocol/conforming-type relationship.
The tools of the trade
As you noticed, Swift gives you much more flexibility than Java. When writing a function, you have the option to use:
A global function
An instance function (of a struct, class, or enum)
A static function (of a struct, class, or enum)
A class function (of a class)
There is a time and a place where each is appropriate. Java shoves options 2 and 3 down your throat (mainly option 2), whereas Swift lets you lean on your own judgement more often. I'll talk about each case, when you might want to use it, and when you might not.
1) Global functions
These can be useful for one-off utility functions, where there isn't much benefit to "grouping" them in a particular way.
Pros:
Short names due to unqualified access (can access foo, rather than FooUtils.foo)
Short to write
Cons:
Pollutes the global namespace, and makes auto-completion less useful.
Not grouped in a way that aids discoverability
Cannot be dependency injected
Any accessed state must be global state, which is almost always a recipe for disaster
2) Instance functions
Pros:
Group related operations under a common namespace
Have access to localized state (the members of self), which is almost always preferable to global state.
Can be dependency injected
Can be overridden by subclasses
Cons:
Longer to write than global functions
Sometimes instances don't make sense, e.g. having to make an empty MathUtils object just to use its pow instance method, which doesn't actually use any instance data (MathUtils().pow(2, 2))
3) Static functions
Pros:
Group related operations under a common namespace
Can be dependency injected in Swift (where protocols can support requirements for static functions, subscripts, and properties)
Cons:
Longer to write than global functions
It's hard to extend these to be stateful in the future. Once a function is written as static, it requires an API-breaking change to turn it into an instance function, which is necessary if the need for instance state ever arises.
4) Class functions
For classes, static func is like final class func. That is all Java supports, but in Swift you can also have non-final class functions. The only difference is that they support overriding (by a subclass). All other pros/cons are shared with static functions.
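A minimal sketch of the difference (Animal and Dog are hypothetical names, invented for illustration):

class Animal {
    class func describe() -> String { return "some animal" } // overridable
    static func kingdom() -> String { return "Animalia" }    // effectively `final class func`
}

class Dog: Animal {
    override class func describe() -> String { return "a dog" }
    // override static func kingdom() ... // error: cannot override a static method
}

print(Dog.describe()) // prints "a dog"
print(Dog.kingdom())  // prints "Animalia"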
Which one should I use?
It depends.
If the piece you're programming is one that you would like to isolate for testing, then global functions aren't a candidate. You have to use either protocol- or inheritance-based dependency injection. Static functions could be appropriate if the code does not need any instance state (and is never expected to need it), whereas an instance function should be used when instance state is needed. If you're not sure, opt for an instance function, because as mentioned earlier, transitioning a function from static to instance is an API-breaking change, and one you want to avoid if possible.
If the new piece is really simple, perhaps it could be a global function. E.g. print, min, abs, isKnownUniquelyReferenced, etc. But only if there's no grouping that makes sense. There are exceptions to look out for:
If your code repeats a common prefix, naming pattern, etc., that's a strong indication that a logical grouping exists, which could be better expressed as unification under a common namespace. For example:
func formatDecimal(_: Decimal) -> String { ... }
func formatCurrency(_: Price) -> String { ... }
func formatDate(_: Date) -> String { ... }
func formatDistance(_: Measurement<Distance>) -> String { ... }
Could be better expressed if these functions were grouped under a common umbrella. In this case, we don't need instance state, so we don't need instance methods. Additionally, it would never make sense to create an instance of FormatUtils (since it has no state, and nothing that could use that state), so outlawing the creation of instances is probably a good idea. An empty enum does just that.
enum FormatUtils {
    static func formatDecimal(_: Decimal) -> String { ... }
    static func formatCurrency(_: Price) -> String { ... }
    static func formatDate(_: Date) -> String { ... }
    static func formatDistance(_: Measurement<Distance>) -> String { ... }
}
Not only does this logical grouping "make sense", it also has the added benefit of bringing you one step closer to supporting dependency injection for this type. All you would need is to extract the interface into a new FormatterUtils protocol, rename this type to ProdFormatterUtils, and change dependent code to rely on the protocol rather than the concrete type.
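A sketch of what that extraction might look like (the method bodies and the ReceiptPrinter client are placeholders, assumed for illustration):

import Foundation

protocol FormatterUtils {
    func formatDecimal(_ value: Decimal) -> String
    func formatDate(_ date: Date) -> String
}

struct ProdFormatterUtils: FormatterUtils {
    func formatDecimal(_ value: Decimal) -> String { return "\(value)" } // placeholder body
    func formatDate(_ date: Date) -> String { return "\(date)" }         // placeholder body
}

// Dependent code relies on the protocol, not the concrete type,
// so tests can inject a MockFormatterUtils instead.
final class ReceiptPrinter {
    private let formatter: FormatterUtils

    init(formatter: FormatterUtils) {
        self.formatter = formatter
    }

    func line(total: Decimal, on date: Date) -> String {
        return formatter.formatDecimal(total) + " on " + formatter.formatDate(date)
    }
}

Note that the requirements become instance members once you inject an instance; as mentioned above, Swift protocols can also express static requirements if you prefer to keep the functions static.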
If you find that you have code like in point 1, but also find yourself repeating the same parameter in each function, that's a very strong indication that you have a type abstraction just waiting to be discovered. Consider this example:
func enableLED(pin: Int) { ... }
func disableLED(pin: Int) { ... }
func checkLEDStatus(pin: Int) -> Bool { ... }
Not only can we apply the refactoring from point 1 above, but we can also notice that pin: Int is a repeated parameter, which could be better expressed as an instance of a type. Compare:
class LED { // or struct/enum, depending on the situation.
    let pin: Int

    init?(pin: Int) {
        // pinNumberIsValid is assumed to be defined elsewhere.
        guard pinNumberIsValid(pin) else { return nil }
        self.pin = pin
    }

    func enable() { ... }
    func disable() { ... }
    func status() -> Bool { ... }
}
Compared to the refactoring from point 1, this changes the call site from
LEDUtils.enableLED(pin: 1)
LEDUtils.disableLED(pin: 1)
to
guard let redLED = LED(pin: 1) else { fatalError("Invalid LED pin!") }
redLED.enable()
redLED.disable()
Not only is this nicer, but now we have a way to clearly distinguish functions that expect any old integer, and those which expect an LED pin's number, by using Int vs LED. We also give a central place for all LED-related operations, and a central point at which we can validate that a pin number is indeed valid. You know that if you have an instance of LED provided to you, the pin is valid. You don't need to check it for yourself, because you can rely on it already having been checked (otherwise this LED instance wouldn't exist).
In Swift, I have historically used extensions to extend closed types and provide handy, logic-less functionality, like animations, math extensions etc. However, since extensions are hard dependencies sprinkled all over your code-base, I always think three times before implementing something as an extension.
Lately, though, I have seen that Apple suggests using extensions to an even greater extent, e.g. implementing protocols as separate extensions.
That is, if you have a class A that implements protocol B, you end up with this design:
class A {
    // Initializers, stored properties etc.
}

extension A: B {
    // Protocol implementation
}
As I entered that rabbit-hole, I started seeing more extension-based code, like:
fileprivate extension A {
    // Private, calculated properties
}

fileprivate extension A {
    // Private functions
}
One part of me likes the building-blocks you get when you implement protocols in separate extensions. It makes the separate parts of the class really distinct. However, as soon as you inherit this class, you will have to change this design, since extension functions cannot be overridden.
I think the second approach is...interesting. One great thing about it is that you do not have to annotate each private property and function as private, since you can specify that for the entire extension.
However, this design also splits up stored and non-stored properties, and public and private functions, making the "logic" of the class harder to follow (write smaller classes, I know). That, together with the subclassing issues, makes me halt a bit on the porch of extension wonderland.
Would love to hear how the Swift community of the world looks at extensions. What do you think? Is there a silver bullet?
This is only my opinion, of course, so take what I write lightly.
I'm currently using the extension-approach in my projects for a few reasons:
The code is much cleaner: my classes are never over 150 lines, and the separation through extensions makes my code more readable and separated by responsibilities
This is usually what a class looks like:
final class A {
    // Here the public and private stored properties
}

extension A {
    // Here the public methods and public non-stored properties
}

fileprivate extension A {
    // Here my private methods
}
The extensions can be more than one, of course, it depends on what your class does. This is simply useful to organize your code and read it from the Xcode top bar
It reminds me that Swift is a protocol-oriented programming language, not an OOP language. There is nothing you can't do with protocols and protocol extensions. And I prefer to use protocols for adding a security layer to my classes/structs. For example, I usually write my models this way:
protocol User {
    var uid: String { get }
    var name: String { get }
}

final class UserModel: User {
    var uid: String
    var name: String

    init(uid: String, name: String) {
        self.uid = uid
        self.name = name
    }
}
In this way you can still edit your uid and name values inside the UserModel class, but you can't from outside, since you'll only be handling the User protocol type.
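For example, a quick usage sketch of the model above:

let model = UserModel(uid: "1", name: "Ada")
let user: User = model

print(user.name)     // reading through the protocol is fine
// user.name = "Eve" // error: 'name' is a get-only property in 'User'
model.name = "Eve"   // still mutable where the concrete type is known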
I use a similar approach, which can be described in one sentence:
Sort a type's responsibilities into extensions
These are examples for aspects I'm putting into individual extensions:
A type's main interface, as seen from a client.
Protocol conformances (i.e. a delegate protocol, often private).
Serialization (for example everything NSCoding related).
Parts of a type that live on a background thread, like network callbacks.
Sometimes, when the complexity of a single aspect rises, I even split a type's implementation over more than one file.
Here are some details that describe how I sort implementation related code:
The focus is on functional membership.
Keep public and private implementations close, but separated.
Don't split between var and func.
Keep all aspects of a functionality's implementation together: nested types, initializers, protocol conformances, etc.
Advantage
The main reason to separate aspects of a type is to make it easier to read and understand.
When reading foreign (or my own old) code, understanding the big picture is often the most difficult part of diving in. Giving a developer an idea of the context of some method helps a lot.
There's another benefit: Access control makes it easier not to call something inadvertently. A method that is only supposed to be called from a background thread can be declared private in the "background" extension. Now it simply can't be called from elsewhere.
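A minimal sketch of that idea, assuming a hypothetical SyncService whose background aspect lives in a fileprivate extension:

import Foundation

final class SyncService {
    func refresh() {
        DispatchQueue.global().async { [weak self] in
            self?.handleNetworkCallback()
        }
    }
}

// MARK: - Background-thread aspect
fileprivate extension SyncService {
    // Visible only inside this file, so it can't be called
    // inadvertently from code that belongs on the main thread.
    func handleNetworkCallback() {
        // parse the response, then hop back to the main queue for UI work
    }
}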
Current Restrictions
Swift 3 imposes certain restrictions on this style. There are a couple of things that can only live in the main type's implementation:
stored properties
overriding func/var
overridable func/var
required (designated) initializers
These restrictions (at least the first three) come from the necessity to know the object's data layout (and witness table, for pure Swift) in advance. Extensions can potentially be loaded late at runtime (via frameworks, plugins, dlopen, ...), and changing the type's layout after instances have been created would break their ABI.
A modest proposal for the Swift team :)
All code from one module is guaranteed to be available at the same time. The restrictions that prevent fully separating functional aspects could be circumvented if the Swift compiler allowed "composing" types within a single module. By composing types I mean that the compiler would collect all declarations that define a type's layout from all files within a module. As with other aspects of the language, it would find cross-file dependencies automatically.
This would allow one to really write "aspect-oriented" extensions. Not having to declare stored properties or overrides in the main declaration would enable better access control and separation of concerns.
I hate it. It adds extra complexity and muddies the use of extensions, making it unclear what people are using the extensions for.
If you're using an extension for protocol conformance, OK, I can see that, but why not just comment your code? How is this better? I don't see that.
One of the definitions of Ninject on the internet is:
"Somewhere in the middle of your application, you're creating a class inside another class. That means you're creating a dependency. Dependency Injection is about passing in those dependencies, usually through the constructor, instead of embedding them."
What I want to learn is: wherever we see the creation of a class inside another class, should we use Ninject, or should we only use it in the parts of the program where we want/need loose coupling for design purposes, because maybe we would like to use different approaches in the future?
Sorry if this is a silly question.
It's a perfectly valid question, and there are no absolute right or wrong answers. Ninject and other IoC frameworks are designed to de-couple dependencies.
So the moment you do this:
public class MyClass1
{
    public MyClass1()
    {
        MyClass2 mc2 = new MyClass2();
    }
}
You can categorically say that MyClass1 has a dependency on MyClass2.
For me, my rule is this: do I need to unit test MyClass1, or is it likely I'll need to unit test MyClass1?
If I don't need to unit test it, then I don't find much value in decoupling the two classes.
However, if I do need to unit test MyClass1, then injecting in MyClass2 gives you much better control over your unit tests (and allows you to test MyClass1 in isolation).
You do need to evaluate each case separately though. In the above example, if I need to unit test MyClass1, and MyClass2 is just a basic string formatting class, then I probably wouldn't decouple it. However, if MyClass2 was an email sending class then I would de-couple it. I don't want my unit tests actually sending emails, so I would feed in a fake for my tests instead.
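To make the email case concrete, here is a sketch in Swift (to stay consistent with the earlier examples in this thread; all names are hypothetical):

protocol EmailSender {
    func send(_ message: String, to address: String)
}

struct SMTPSender: EmailSender {
    func send(_ message: String, to address: String) {
        // real SMTP work would happen here
    }
}

// Test double: records calls instead of sending anything.
final class FakeSender: EmailSender {
    private(set) var sent: [(message: String, address: String)] = []

    func send(_ message: String, to address: String) {
        sent.append((message: message, address: address))
    }
}

final class SignupFlow {
    private let mailer: EmailSender

    init(mailer: EmailSender) {
        self.mailer = mailer
    }

    func register(email: String) {
        mailer.send("Welcome!", to: email)
    }
}

// In a test, no real email ever leaves the machine:
let fake = FakeSender()
SignupFlow(mailer: fake).register(email: "someone@example.com")
assert(fake.sent.count == 1)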
So I don't believe there are any solid rules, but hopefully the above gives you a better idea of when you might decouple, and when you might not decouple.
I'm trying to understand dependency injections (DI), and once again I failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon) and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of lines of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's quite a good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the declared type doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you move the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data about which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. You could also put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time (!), that is, dynamically, without recompiling!
To give you a useful example: think of software that does not log normally, but when your customer calls and asks for help because he has a problem, all you send him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
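Sketched in Swift for consistency with the rest of this thread (ICanLog is the name used in this answer; the two logger types are placeholders), the locator step might look like:

protocol ICanLog {
    func log(_ message: String)
}

struct ConsoleLogger: ICanLog {
    func log(_ message: String) { print(message) }
}

struct TcpLogger: ICanLog {
    func log(_ message: String) { /* send the message over TCP */ }
}

enum ServiceLocator {
    // The single central place that decides which implementation is used.
    static func makeLogger() -> ICanLog {
        return ConsoleLogger() // swap for TcpLogger() to change every client at once
    }
}

// Client code asks the locator instead of calling an initializer itself:
let logger = ServiceLocator.makeLogger()
logger.log("hello")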
All in all this reduces the dependencies in your code, but now all of your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of the various classes asking the service locator for an implementation of a specific interface, you - once again - invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it no longer knows where this logger comes from.
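The same constructor-injection shape, sketched in Swift (Database and ICanLog are this answer's names; ConsoleLogger is a placeholder implementation):

protocol ICanLog {
    func log(_ message: String)
}

struct ConsoleLogger: ICanLog {
    func log(_ message: String) { print(message) }
}

final class Database {
    private let logger: ICanLog

    // The logger arrives from outside; Database never constructs one itself.
    init(logger: ICanLog) {
        self.logger = logger
    }

    func save(_ record: String) {
        logger.log("saving \(record)")
        // ... actual persistence work ...
    }
}

// Whoever assembles the object graph decides what gets injected:
let db = Database(logger: ConsoleLogger())
db.save("user #42")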
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: dependency injection is one of two ways to remove dependencies in your code. It is very useful for configuration changes after compile time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and/or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances you need of a specific interface: a DI framework always injects exactly one instance per parameter, but you can call a service locator inside a loop, of course); hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where properties, rather than constructor parameters, are used for defining and resolving dependencies. Think of property injection as optional dependencies, and of constructor injection as mandatory dependencies. But a discussion of this is beyond the scope of this question.
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside, and take control of their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The reason that you might "need more than one logger" is never actually met, as far as my experience goes, but the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds you an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now in the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution would be to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}
public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other one allows you to fake the time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you have applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing, but there are other ways to achieve it. For example, an interface and a class are unnecessary here: in C#, functions can be passed around as variables, so instead of an interface you could use a Func<DateTime>. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
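In Swift, the closure-based variant (the equivalent of Func<DateTime>) could look like the following sketch, which reuses the answer's OfferWeasel name and drops the Formdata detail for brevity:

import Foundation

struct Offer {
    var lastUpdated: Date?
}

final class OfferWeasel {
    private let now: () -> Date

    // Defaults to the real clock; tests pass a canned value instead.
    init(now: @escaping () -> Date = { Date() }) {
        self.now = now
    }

    func create() -> Offer {
        var offer = Offer()
        offer.lastUpdated = now()
        return offer
    }
}

// In a test:
let fixed = Date(timeIntervalSince1970: 0)
let weasel = OfferWeasel(now: { fixed })
assert(weasel.create().lastUpdated == fixed)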
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all of its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class, because it's not decoupled properly. For every new processor you support, you'll need to add a new if/switch case to every method; this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you could achieve something very neat and maintainable.
interface PaymentProcessor {
    void authorize();
}

class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException("Unknown processor: " + type);
        }
    }
}
** The code won't compile, I know :)
The main reason to use DI is that you want to put the responsibility for the knowledge of the implementation where the knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time. Many times you see code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hard-coded, hence the front end knows which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the knowledge of the interface.
In between DbI and DI sits the pattern of a service locator, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation for each interface at a time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding. They should go together. The use of D(b)I as a technical solution without organizational administration of service contracts is not very beneficial from my point of view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time and not at a time. :-)
Quite frankly, I believe people use these Dependency Injection libraries/frameworks because they just know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like
var logger = new Logger() //sane, simple code
And the appropriate logger will be instantiated because the JVM (or whatever other runtime or .so loader you have) would fetch it from the class configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class, in load time, and skip the mocking framework madness.
I've always wondered about the topic of public, protected and private properties. My memory can easily recall times when I had to hack somebody's code, and having the hacked-upon class variables declared as private was always upsetting.
Also, there were (more) times I've written a class myself and never recognized any potential gain from privatizing a property. I should note here that using public vars is not a habit of mine: I adhere to the principles of OOP by utilizing getters and setters.
So, what's the whole point in these restrictions?
The use of private and public is called Encapsulation. It is the simple insight that a software package (class or module) needs an inside and an outside.
The outside (public) is your contract with the rest of the world. You should try to keep it simple, coherent, obvious, foolproof and, very important, stable.
If you are interested in good software design the rule simply is: make all data private, and make methods only public when they need to be.
The principle behind hiding the data is that the sum of all fields in a class defines the object's state. For a well-written class, each object should be responsible for keeping a valid state. If part of the state is public, the class can never give such guarantees.
A small example: suppose we have:
class MyDate
{
public:
    int y, m, d;

    void AdvanceDays(int n) { ... } // complicated month/year overflow
    // other utility methods
};
You cannot prevent a user of the class from ignoring AdvanceDays() and simply doing:
date.d = date.d + 1; // next day
But if you make y, m, d private and test all your MyDate methods, you can guarantee that there will only be valid dates in the system.
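Sketched in Swift, to stay consistent with the earlier examples in this thread (the overflow logic is only a placeholder), the encapsulated version might look like:

struct MyDate {
    // Private state: only MyDate's own methods can mutate it,
    // so every instance stays a valid date.
    private var y: Int
    private var m: Int
    private var d: Int

    init?(year: Int, month: Int, day: Int) {
        guard (1...12).contains(month), (1...31).contains(day) else { return nil }
        y = year
        m = month
        d = day
    }

    mutating func advanceDays(_ n: Int) {
        // the complicated month/year overflow lives here, in one place
        d += n // placeholder for the real overflow logic
    }
}

var date = MyDate(year: 2024, month: 2, day: 28)!
// date.d += 1 // error: 'd' is inaccessible due to 'private' protection level
date.advanceDays(1)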
The whole point is to use private and protected to prevent exposing internal details of your class, so that other classes only have access to the public "interfaces" provided by your class. This can be worthwhile if done properly.
I agree that private can be a real pain, especially if you are extending classes from a library. A while back I had to extend various classes from the Piccolo.NET framework, and it was refreshing that they had declared everything I needed as protected instead of private, so I was able to extend everything I needed without having to copy their code and/or modify the library. An important take-away lesson from that: if you are writing code for a library or other "re-usable" component, you really should think twice before declaring anything private.
The keyword private shouldn't be used to privatize a property that you want to expose, but to protect the internal code of your class. I find it very helpful, because it helps you separate the portions of your code that must be hidden from those that can be accessible to everyone.
One example that comes to my mind is when you need to do some sort of adjustment or checking before setting/getting the value of a private member. Therefore you'd create a public setter/getter with some logic (check if something is null or any other calculations) instead of accessing the private variable directly and always having to handle that logic in your code. It helps with code contracts and what is expected.
Another example is helper functions. You might break down some of your bigger logic into smaller functions, but that doesn't mean you want everyone to see and use these helper functions; you only want them to access your main API functions.
In other words, you want to hide some of the internals in your code from the interface.
See some videos on APIs, such as this Google talk.
Having recently had the extreme luxury of being able to design and implement an object system from scratch, I took the policy of forcing all variables to be (equivalent to) protected. My goal was to encourage users to always treat the variables as part of the implementation and not the specification. OTOH, I also left in hooks to allow code to break this restriction as there remain reasons to not follow it (e.g., the object serialization engine cannot follow the rules).
Note that my classes did not need to enforce security; the language had other mechanisms for that.
In my opinion, the most important reason for using private members is hiding implementation, so that it can be changed in the future without changing descendants.
Some languages - Smalltalk, for instance - don't have visibility modifiers at all.
In Smalltalk's case, all instance variables are always private and all methods are always public. A developer indicates that a method is "private" - something that might change, or a helper method that doesn't make much sense on its own - by putting the method in the "private" protocol.
Users of a class can then see that they should think twice about sending a message marked private to that class, but still have the freedom to make use of the method.
(Note: "properties" in Smalltalk are simply getter and setter methods.)
I personally rarely make use of protected members. I usually favor composition, the decorator pattern or the strategy pattern. There are very few cases in which I trust a subclass(ing programmer) to handle protected variables correctly. Sometimes I have protected methods to explicitly offer an interface specifically for subclasses, but these cases are actually rare.
Most of the time I have an abstract base class with only public pure virtuals (talking C++ now), and implementing classes implement these. Sometimes they add some special initialization methods or other specific features, but the rest is private.
First of all, 'properties' can refer to different things in different languages. For example, in Java you would mean instance variables, whilst C# draws a distinction between fields and properties.
I'm going to assume you mean instance variables since you mention getters/setters.
The reason as others have mentioned is Encapsulation. And what does Encapsulation buy us?
Flexibility
When things have to change (and they usually do), we are much less likely to break the build if we have properly encapsulated our properties.
For example, we may decide to make a change like:

int getFoo()
{
    return foo;
}

to:

int getFoo()
{
    return bar + baz;
}
If we had not encapsulated 'foo' to begin with, we'd have much more code to change (more than this one line).
Another reason to encapsulate a property, is to provide a way of bullet-proofing our code:
void setFoo(int val)
{
    if (val < 0)
        throw MyException(); // or silently ignore
    foo = val;
}
This is also handy as we can set a breakpoint in the mutator, so that we can break whenever something tries to modify our data.
If our property was public, then we could not do any of this!