In order to write testable C# code, I use DI heavily.
However, lately I've been playing around with IronPython and found that, since you can mock any method, class, or function you like, the need for DI seems to be gone.
Is this the case for dynamic languages such as Python?
Instead of:
class Person:
    def __init__(self, address):
        self.address = address  # Address injected from outside
        ...
You can have:
class Person:
    def __init__(self):
        self.address = Address()  # Address initialised in here
        ...
So it seems that, for dynamic languages, manual DI is simply not needed.
Any advice on this?
Dependency injection is also about how you wire things together, which has nothing to do with the mockability of depended-on objects. There's a difference between a Foo instance that needs a Bar connection of some kind instantiating that connection directly, and one that completely ignores how it gets the connection as long as it has it.
If you use dependency injection, you also gain better testability. But the converse isn't true: easier testability from being able to overwrite anything doesn't bring the other advantages of dependency injection. There are many component/DI frameworks for Python available for exactly these reasons.
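To illustrate, here is a minimal C# sketch of that difference (Foo and Bar are just the placeholder names from above; all the types are hypothetical):

public interface IBarConnection { }

public class BarConnection : IBarConnection
{
    public BarConnection(string connectionString) { /* open the connection */ }
}

// Hard-wired: the class decides for itself how its connection is created.
public class HardWiredFoo
{
    private readonly IBarConnection _connection = new BarConnection("server=prod;...");
}

// Injected: the class neither knows nor cares where the connection comes from,
// so the wiring (production vs. test) is decided by whoever constructs it.
public class InjectedFoo
{
    private readonly IBarConnection _connection;

    public InjectedFoo(IBarConnection connection)
    {
        _connection = connection;
    }
}

Both versions are equally mockable in a dynamic language; only the second lets you change the wiring without touching Foo.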
I strongly disagree with your statement that Dependency Injection is not needed in dynamically typed languages. The reasons for why DI is useful and necessary are completely independent of the typing discipline of the language.
The main difference is that DI in dynamically typed languages is easy and painless: you don't need a heavyweight framework and a gazillion lines of XML configuration.
In Ruby, for example, there are only two DI frameworks. Both were written by a Java programmer. Neither of the two frameworks is used by a single project. Not even by the author of those frameworks.
However, DI is used all over the place in Ruby.
Jamis Buck, the author of both of those frameworks, gave a talk called Recovering from Enterprise at RubyConf 2008 about how and why he wrote them and why that was a bad idea; it's well worth watching. There's also an accompanying blog post if you'd rather read. (Just substitute "Python" every time he says "Ruby" and everything will be just as valid.)
I'll try again. My last answer missed the question by a mile and zoomed way off topic.
Using pseudo-code, dependency injection says out with:
class Person
  def chat()
    someOperation("X", "Y", "Z")
  end
end
...
Person.new().chat()
and in with:
class Person
  def initialize(a, b, c)
    @a = a
    @b = b
    @c = c
  end

  def chat()
    someOperation(@a, @b, @c)
  end
end
...
Person.new("X", "Y", "Z").chat()
... and generally in with putting the object and the call into different files, for SCM purposes.
Whether "X", "Y" or "Z" are mockable (...if they were instead objects...(!)...(!)...) have nothing at all to do with whether DI is good. Really. :-)
DI is just easier in Python or Ruby, like a lot of other tasks, because there's more of a scripting approach, as Jörg says; and of course there's less of a culture of pushing constants and adapters into models and globals.
In practical terms, for me DI is the first step towards separating application parameters, API constants, and factories into separate files, to help make your revision-tracking report look less spaghetti-like ("Were those extra check-ins on the AppController to change the configuration..? Or to update the code...?") and more informative and easier to read.
My recommendation: Keep using DI... :-)
I think you're presenting a question that seems to be about best practice but is actually about run-time performance.
Get rid of dependency injection? How can a software release manager sleep at night?
The run-time tests to decide which function to perform must surely slow the program down a tad or two.
# my generic function entry point - IronPython
if func == "a":
    ...
elif func == "b":
    ...
elif func == "c":
    ...
You can use standard Python with classes... or you can assign function pointers to function-pointer members. Just what kind of a beast is it? I know, I know. Python, I think, is difficult to define, but I like it. And I like and think highly of dependency injection, though I wouldn't have thought to give the practice such a lengthy name.
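If the if-chain bothers you, the usual alternative is a lookup table of delegates (function pointers). A rough C# sketch of the idea, with hypothetical handlers:

using System;
using System.Collections.Generic;

class Dispatcher
{
    // Map function names to handlers once, instead of testing each name on every call.
    private static readonly Dictionary<string, Action> Handlers =
        new Dictionary<string, Action>
        {
            { "a", () => Console.WriteLine("handling a") },
            { "b", () => Console.WriteLine("handling b") },
            { "c", () => Console.WriteLine("handling c") }
        };

    static void Main()
    {
        Handlers["b"]();  // a dictionary lookup replaces the chain of if-tests
    }
}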
Related
I'm making a method inside a Ruby on Rails app called "print" that takes any string and converts it into a PNG. I've been told it's not good to add methods to base Ruby classes like String, Array, or Hash, so "some string to print".print is probably not something I should do.
I was thinking about making a subclass of String called Print (class Print < String) and storing it in my lib/assets folder. So it would look like: Print.new("some string to print"). So my question is, am I on the right track by 1) creating a sub-class from String and 2) storing it in lib/assets?
Any guidance would be greatly appreciated!
Answers to your question will necessarily be subjective, because there will always be many answers to "where should I put functionality?", according to preference, principle, habit, customs, etc. I'll list a few and describe them, maybe add some of my personal opinions, but you'll ultimately have to choose and accept the consequences.
Note: I'll repeatedly refer to the common degenerate case of "losing namespacing scope", i.e. being "as bad as having global methods".
Monkeypatch/Extend String
Convenient and very "OO-message-passing" in style, at the cost of globally affecting every String in your application. That cost can be large, because doing so breaks an implicit boundary between Ruby core and your application, and it scatters a component of "your application" into an external place. The functionality will have global scope, and at worst will unintentionally interact with things it shouldn't.
Worthy mention: Ruby has a Refinements feature that allows you to do "scoped monkeypatching".
Worthy mention 2: Ruby also lets you include modules into existing classes, like String.class_eval { include MyCustomization }, which is slightly better because it's easier to tell a customization has been made and where it was introduced: "foo".method(:custom_method).owner will reveal it. Normal monkeypatching makes it look as if the method was defined on String itself.
Utils Module
Commonly done in all programming languages, a Util module is simply a single namespace where class/static methods are dumped. This is always an option for avoiding global pollution, but if Util ends up getting used everywhere anyway and is filled to the brim with unrelated methods, the value of namespacing is lost. Having a method in a Util module tends to signify that not enough thought was put into organizing code; without maintenance, at its worst, it's not much better than having global methods.
Private Method
Suppose you only need it in one class -- then it's easy to just put it into one private method. What if you need it in many classes? Should you make it a private method in a base class? If the functionality is inherent to the class, something associated with the class's identity, then Yes. Used correctly, the fact that this message exists is made invisible to components outside of that class.
However, this has the same downfall as the Rails Helper module when used incorrectly. If the next added feature requires that functionality, you'll be tempted to add the new feature to the class in order to have access to it. In this way the class's scope grows over time, eventually becoming near-global in your application.
Helper Module
Many Rails devs would suggest to put almost all of these utility methods inside rails Helper modules. Helper modules are kind of in between Utils Module and Private Method options. Helpers are included and have access to private members like Private Methods, and they suggest independence like Utils Modules (but do not guarantee it). Because of these properties, they tend to end up appearing everywhere, losing namespacing, and they end up accessing each other's private members, losing independence. This means it's more powerful, but can easily become much worse than either free-standing class/static methods or private methods.
Create a Class
If all the cases above degenerate into a "global scope", what if we forcibly create a new, smaller scope by way of a new class? The new class's purpose will be only to take data in and transform it on request on the way out. This is the common wisdom of "creating many, small classes", as small classes will have smaller scopes and will be easier to handle.
Unfortunately, taking this strategy too far will result in having too many tiny components, each of which do almost nothing useful by themselves. You avoid the ball of mud, but you end up with a chunky soup where every tiny thing is connected to every other tiny thing. It's just as complicated as having global methods all interconnected with each other, and you're not much better off.
Meta-Option: Refactor
Given the options above all have the same degenerate case, you may think there's no hope and everything will always eventually become horribly global -- Not True! It's important to understand they all degenerate in different ways.
Perhaps functionality 1, 2, 3, 4... 20 as Util methods are a complete mess, but they work cohesively as functionality A.1 ~ A.20 within the single class A. Perhaps class B is a complete mess and works better broken apart into one Util method and two private methods in class C.
Your lofty goal as an engineer will be to organize your application in a configuration that avoids all these degenerate cases for every bit of functionality in the system, making the system as a whole only as complex as necessary.
My advice
I don't have full context of your domain, and you probably won't be able to communicate that easily in a SO question anyways, so I can't be certain what'll work best for you.
However, I'll point out that it's generally easier to combine things than it is to break them apart. I generally advise starting with class/static methods. Put it in Util and move it to a better namespace later (Printer?). Perhaps in the future you'll discover that many of these individual methods frequently operate on the same inputs, passing the same data back and forth between them -- that may be a good candidate for a class, as sketched below. This is often easier than starting off with a class, or inheriting another class, and trying to break functionality apart later.
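A sketch of that progression (shown in C# rather than Ruby; Util and Printer are the names from the advice above, everything else is hypothetical):

// Step 1: a plain static helper is the cheapest home for new functionality.
public static class Util
{
    public static byte[] RenderPng(string text)
    {
        // ... render the string to PNG bytes ...
        return new byte[0];
    }
}

// Step 2: once several methods keep passing the same inputs back and forth,
// promote them into a small class that owns that data.
public class Printer
{
    private readonly string _text;

    public Printer(string text) { _text = text; }

    public byte[] RenderPng()
    {
        // ... render _text to PNG bytes ...
        return new byte[0];
    }
}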
For instance I have this bit of code
public class ProductService
{
    private IProductDataSource _dataSource = DependencyManager.Get<IProductDataSource>();

    public Product Get(int id)
    {
        return _dataSource.Select(id);
    }
}
I have two different data sources:
an XML file, which contains the information in only one language;
a SQL database, which contains the information in many languages.
So I created two implementations of IProductDataSource, one for each kind of data source.
But how do I send the required language to the SQL data source? A few options:
1. I add a "language" parameter to the IProductDataSource.Select method, even though I won't use it in the XML implementation.
2. Inside the SQL implementation, I get the language from some global state.
3. I add the language to the constructor of my SQL implementation, but then I can't use my DependencyManager and have to handle the dependency injection myself.
Maybe my first solution is not good.
The third option is the way to go. Inject the language configuration into your SQL implementation. Also, get rid of your DependencyManager (a Service Locator) and use constructor injection instead.
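A minimal sketch of what that can look like (SqlProductDataSource and the wiring shown are hypothetical):

public class Product { }

public interface IProductDataSource { Product Select(int id); }

public class SqlProductDataSource : IProductDataSource
{
    private readonly string _language;

    // The language is fixed configuration for this instance,
    // supplied by whoever composes the object graph.
    public SqlProductDataSource(string language)
    {
        _language = language;
    }

    public Product Select(int id)
    {
        // ... query using _language ...
        return null;
    }
}

public class ProductService
{
    private readonly IProductDataSource _dataSource;

    // Constructor injection: no DependencyManager lookup inside the class.
    public ProductService(IProductDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public Product Get(int id)
    {
        return _dataSource.Select(id);
    }
}

// Composition root:
// var service = new ProductService(new SqlProductDataSource("en"));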
If your application needs to work with multiple languages in a single instance, I think option 1 is a sensible approach: if the underlying data does not provide translations for a requested language, then return null. But there is another solution in this scenario. I'm assuming that what you have is a list of products and language translations for each product. Can you refactor your model so that you do not need to specify or ascertain the language until you reference language-specific text? The point being that a product is a product regardless of the language you choose to describe it, i.e. one product instance per product, only the product id on the DataSource.Select(..) method, and some other abstraction mechanism to deal with accessing the correct text translation.
If, however, each instance of your application is only concerned with one language set, I second Mr Gloor.
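One hypothetical shape for that refactoring: the product itself stays language-neutral, and a translation is only resolved at the moment language-specific text is accessed:

using System.Collections.Generic;

public class Product
{
    public int Id { get; private set; }

    // language code -> translated description, loaded along with the product
    private readonly IDictionary<string, string> _descriptions;

    public Product(int id, IDictionary<string, string> descriptions)
    {
        Id = id;
        _descriptions = descriptions;
    }

    // The language only matters here, when text is actually needed;
    // returns null when no translation exists, as suggested above.
    public string DescriptionIn(string language)
    {
        string text;
        return _descriptions.TryGetValue(language, out text) ? text : null;
    }
}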
First of all, I need to point out that you are NOT injecting any dependencies in your example - you are depending on a service locator (DependencyManager) to get them for you. Dependency injection, simply put, is when your classes are unaware of who provides the dependencies, e.g. via a constructor, a setter, or a method. As was already mentioned in the other answers, Service Locator is an anti-pattern and should be avoided. The reasons are described in this great article.
Another thing is that the settings you are mentioning, such as language or currency, seem to be localization related and would probably be better dealt with using the built-in mechanisms of your language of choice (e.g. resource files, etc).
Now, having said that, depending on how the rest of your code is structured you have several options to solve this while still using Service locator:
You could have SqlDataSource depend on some ILanguageProvider which pulls the current language from somewhere. However, with more settings like these (or if it is difficult to get current language in an isolated way) this can get messy very fast.
You could depend on IProductDataSourceFactory instead (or, if you are using C#, Func<IProductDataSource>), which would return the concrete implementation with the correct settings -- see the sketch after this list. Again, you need to be able to get the current language in an isolated way in order to use this.
You could go with option 1 in your question. This would be a leaky abstraction but would be the simplest to implement.
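For example, option 2 with Func<IProductDataSource> might look like this sketch (it assumes some isolated way to read the current language, and reuses the hypothetical SqlProductDataSource from the earlier answer):

using System;

public class ProductService
{
    private readonly Func<IProductDataSource> _dataSourceFactory;

    public ProductService(Func<IProductDataSource> dataSourceFactory)
    {
        _dataSourceFactory = dataSourceFactory;
    }

    public Product Get(int id)
    {
        // A data source is produced per call, carrying whatever
        // the current language happens to be at that moment.
        return _dataSourceFactory().Select(id);
    }
}

// Composition root (hypothetical culture lookup):
// var service = new ProductService(() =>
//     new SqlProductDataSource(CultureInfo.CurrentUICulture.TwoLetterISOLanguageName));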
However, if you decide to get rid of the service locator and start using some DI container, the best solution would be option 3 (as was already stated), configuring the container to provide the correct value. Some good ideas on how to do this in an elegant way can be found in the answer to this question.
I'm running Pharo and I'm just in a use case that sort of screams for Dependency Injection à la Guice. Is there something similar for Smalltalk?
I understand that you can sort of do it all by foot, by just passing in your dependencies explicitly. But that feels awkward and verbose to me.
There is a Smalltalk dialect with a strong emphasis on dependency injection. It extends the language such that not only method names but also class names use a dynamic lookup. The lookup of class names is most similar to that of methods, except that it bubbles up through a series of nested classes rather than along an inheritance chain. Thus you can change the injected classes by changing the nesting environment.
To learn more about the dialect, follow this link.
With Guice, it looks like you define your classes to take certain interfaces as constructor parameters. Then you tell Guice "this interface maps to that class implementing said interface".
That sort of thing is completely unnecessary in Smalltalk, because Smalltalk classes only care about protocols.
If we translated the example into Smalltalk, we could pass any object we liked into the RealBillingService's constructor, as long as that object responded to #logChargeResult: and #logConnectException:, i.e., as long as that object implemented the protocol required of a TransactionLog.
Here's a link to a similar answer to the above.
I am not really an expert but I found this article on google: http://codebetter.com/blogs/jeremy.miller/archive/2006/05/05/144172.aspx
I hope this will lead you in the right direction.
:)
I'm refactoring a class and adding a new dependency to it. The class is currently taking its existing dependencies in the constructor. So for consistency, I add the parameter to the constructor.
Of course, there are a few subclasses plus even more for unit tests, so now I am playing the game of going around altering all the constructors to match, and it's taking ages.
It makes me think that using properties with setters is a better way of getting dependencies. I don't think injected dependencies should be part of the interface to constructing an instance of a class. You add a dependency and now all your users (subclasses and anyone instantiating you directly) suddenly know about it. That feels like a break of encapsulation.
This doesn't seem to be the pattern with the existing code here, so I am looking to find out what the general consensus is, pros and cons of constructors versus properties. Is using property setters better?
Well, it depends :-).
If the class cannot do its job without the dependency, then add it to the constructor. The class needs the new dependency, so you want your change to break things. Also, creating a class that is not fully initialized ("two-step construction") is an anti-pattern (IMHO).
If the class can work without the dependency, a setter is fine.
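A small C# sketch of that rule of thumb (all names hypothetical):

using System;

public interface IDataSource { string Fetch(); }
public interface ILogger { void Log(string message); }

public class ReportGenerator
{
    private readonly IDataSource _dataSource; // required: the class can't work without it
    private ILogger _logger;                  // optional: the class works fine without it

    public ReportGenerator(IDataSource dataSource)
    {
        if (dataSource == null) throw new ArgumentNullException("dataSource");
        _dataSource = dataSource; // fail fast instead of crashing later
    }

    // Setter injection for the optional dependency.
    public ILogger Logger
    {
        set { _logger = value; }
    }

    public string Run()
    {
        string data = _dataSource.Fetch();
        if (_logger != null) _logger.Log("report generated"); // optional => check at use
        return data;
    }
}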
The users of a class are supposed to know about the dependencies of a given class. If I had a class that, for example, connected to a database, and didn't provide a means to inject a persistence layer dependency, a user would never know that a connection to the database would have to be available. However, if I alter the constructor I let the users know that there is a dependency on the persistence layer.
Also, to prevent yourself from having to alter every use of the old constructor, simply apply constructor chaining as a temporary bridge between the old and new constructor.
public class ClassExample
{
    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo)
        : this(dependencyOne, dependencyTwo, new DependencyThreeConcreteImpl())
    { }

    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo, IDependencyThree dependencyThree)
    {
        // Set the properties here.
    }
}
One of the points of dependency injection is to reveal what dependencies the class has. If the class has too many dependencies, then it may be time for some refactoring to take place: Does every method of the class use all the dependencies? If not, then that's a good starting point to see where the class could be split up.
Of course, putting dependencies in the constructor means that you can validate them all at once. If you assign them to read-only fields, then you have some guarantees about your object's dependencies right from construction time.
It is a real pain adding new dependencies, but at least this way the compiler keeps complaining until it's correct. Which is a good thing, I think.
If you have a large number of optional dependencies (which is already a smell), then setter injection is probably the way to go. Constructor injection reveals your dependencies better, though.
The general preferred approach is to use constructor injection as much as possible.
Constructor injection states exactly what the required dependencies are for the object to function properly -- nothing is more annoying than newing up an object and having it crash when you call a method on it because some dependency is not set. The object returned by a constructor should be in a working state.
Try to have only one constructor, it keeps the design simple and avoids ambiguity (if not for humans, for the DI container).
You can use property injection when you have what Mark Seemann calls a local default in his book "Dependency Injection in .NET": the dependency is optional because you can provide a fine working implementation but want to allow the caller to specify a different one if needed.
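A sketch of that local-default idea (hypothetical names):

using System;

public interface ICurrencyConverter
{
    decimal Convert(decimal amount, string currency);
}

// A fine working implementation that suits most callers.
public class IdentityConverter : ICurrencyConverter
{
    public decimal Convert(decimal amount, string currency) { return amount; }
}

public class PriceCalculator
{
    // Local default: the dependency is optional to supply,
    // but the class is never left without a working implementation.
    private ICurrencyConverter _converter = new IdentityConverter();

    public ICurrencyConverter Converter
    {
        get { return _converter; }
        set
        {
            if (value == null) throw new ArgumentNullException("value");
            _converter = value; // guarded so the dependency can never become null
        }
    }
}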
(Former answer below)
I think that constructor injection is better if the injection is mandatory. If this adds too many constructors, consider using factories instead of constructors.
Setter injection is nice if the injection is optional, or if you want to change it halfway through. I generally don't like setters, but it's a matter of taste.
It's largely a matter of personal taste.
Personally I tend to prefer the setter injection, because I believe it gives you more flexibility in the way that you can substitute implementations at runtime.
Furthermore, constructors with a lot of arguments are not clean in my opinion, and the arguments provided in a constructor should be limited to non-optional arguments.
As long as the class's interface (API) is clear about what it needs to perform its task, you're good.
I prefer constructor injection because it helps "enforce" a class's dependency requirements. If it's in the c'tor, a consumer has to set the objects to get the app to compile. If you use setter injection they may not know they have a problem until run time - and depending on the object, it might be late in run time.
I still use setter injection from time to time when the injected object maybe needs a bunch of work itself, like initialization.
I personally prefer the Extract and Override "pattern" over injecting dependencies in the constructor, largely for the reason outlined in your question. You can set the properties as virtual and then override the implementation in a derived testable class.
I prefer constructor injection, because it seems the most logical. It's like saying my class requires these dependencies to do its job. If it's an optional dependency, then properties seem reasonable.
I also use property injection for setting things that the container does not have a reference to, such as an ASP.NET View on a presenter created using the container.
I don't think it breaks encapsulation. The inner workings should remain internal, and the dependencies deal with a different concern.
One option that might be worth considering is composing complex multiple dependencies out of simple single dependencies; that is, defining extra classes for compound dependencies. This makes things a little easier with respect to constructor injection -- fewer parameters per call -- while still maintaining the must-supply-all-dependencies-to-instantiate property.
Of course it makes most sense if there's some kind of logical grouping of dependencies, so the compound is more than an arbitrary aggregate, and it makes most sense if there are multiple dependents for a single compound dependency - but the parameter block "pattern" has been around for a long time, and most of those that I've seen have been pretty arbitrary.
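A quick sketch of such a compound dependency (a hypothetical payment-related grouping):

public interface IPaymentGateway { }
public interface IFraudChecker { }
public interface IReceiptSender { }

// One logically cohesive compound dependency instead of three separate parameters.
public class PaymentContext
{
    public IPaymentGateway Gateway { get; private set; }
    public IFraudChecker FraudChecker { get; private set; }
    public IReceiptSender ReceiptSender { get; private set; }

    public PaymentContext(IPaymentGateway gateway,
                          IFraudChecker fraudChecker,
                          IReceiptSender receiptSender)
    {
        Gateway = gateway;
        FraudChecker = fraudChecker;
        ReceiptSender = receiptSender;
    }
}

public class CheckoutService
{
    private readonly PaymentContext _payment;

    // One parameter instead of three, while still requiring
    // all dependencies at instantiation time.
    public CheckoutService(PaymentContext payment)
    {
        _payment = payment;
    }
}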
Personally, though, I'm more a fan of using methods/property-setters to specify dependencies, options etc. The call names help describe what is going on. It's a good idea to provide example this-is-how-to-set-it-up snippets, though, and make sure the dependent class does enough error checks. You might want to use a finite state model for the setup.
I recently ran into a situation where I had multiple dependencies in a class, but only one of the dependencies was necessarily going to change in each implementation. Since the data access and error logging dependencies would likely only be changed for testing purposes, I added optional parameters for those dependencies and provided default implementations of those dependencies in my constructor code. In this way, the class maintains its default behavior unless overridden by the consumer of the class.
Using optional parameters can only be accomplished in frameworks that support them, such as .NET 4 (for both C# and VB.NET, though VB.NET has always had them). Of course, you can accomplish similar functionality by simply using a property that can be reassigned by the consumer of your class, but you don't get the advantage of immutability provided by assigning a constructor parameter to a private read-only field.
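In C# 4 terms, the approach might look like this sketch (names hypothetical; the defaults are applied inside the constructor because optional-parameter defaults must be compile-time constants):

public interface IDataAccess { }
public interface IErrorLogger { }
public class SqlDataAccess : IDataAccess { }
public class FileErrorLogger : IErrorLogger { }

public class OrderService
{
    private readonly IDataAccess _dataAccess;
    private readonly IErrorLogger _logger;

    // Production code omits the arguments and gets the default behavior;
    // tests pass fakes for just the dependencies they care about.
    public OrderService(IDataAccess dataAccess = null, IErrorLogger logger = null)
    {
        _dataAccess = dataAccess ?? new SqlDataAccess();
        _logger = logger ?? new FileErrorLogger();
    }
}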
All of this being said, if you are introducing a new dependency that must be provided by every consumer, you're going to have to refactor your constructor and all code that consumes your class. My suggestions above really only apply if you have the luxury of being able to provide a default implementation for all of your current code but still want the ability to override it when necessary.
Constructor injection does explicitly reveal the dependencies, making code more readable and less prone to unhandled run-time errors if arguments are checked in the constructor, but it really does come down to personal opinion; the more you use DI, the more you'll tend to sway back and forth one way or the other depending on the project.

I personally have issues with code smells like constructors with a long list of arguments, and I feel that the consumer of an object should know the dependencies in order to use the object anyway, so this makes a case for using property injection. I don't like the implicit nature of property injection, but I find it more elegant, resulting in cleaner-looking code. On the other hand, constructor injection does offer a higher degree of encapsulation, and in my experience I try to avoid default constructors, as they can have an ill effect on the integrity of the encapsulated data if one is not careful.
Choose injection by constructor or by property wisely based on your specific scenario. And don't feel that you have to use DI just because it seems necessary and it will prevent bad design and code smells. Sometimes it's not worth the effort to use a pattern if the effort and complexity outweighs the benefit. Keep it simple.
This is an old post, but if it's needed in the future, maybe this is of use:
https://github.com/omegamit6zeichen/prinject
I had a similar idea and came up with this framework. It is probably far from complete, but it is an idea of a framework focusing on property injection
It depends on how you want to implement.
I prefer constructor injection wherever I feel the values that go into the implementation don't change often. E.g., if the company strategy is to go with an Oracle server, I will configure my datasource values for a bean acquiring connections via constructor injection.
Otherwise, if my app is a product and there's a chance it could connect to any DB the customer has, I would implement such DB configuration and multi-brand support through setter injection. I have just taken an example, but there are better ways of implementing the scenarios mentioned above.
When to use Constructor injection?
When we want to make sure that the Object is created with all of its dependencies and to ensure that required dependencies are not null.
When to use Setter injection?
When we are working with optional dependencies that can be assigned reasonable default values within the class. Otherwise, not-null checks must be performed everywhere the code uses the dependency.
Additionally, setter methods make objects of that class open to reconfiguration or re-injection at a later time.
Sources:
Spring documentation ,
Java Revisited
I've been a bad programmer because I keep copying and pasting. For example, every time I connect to a database and retrieve a recordset, I copy the previous code and edit it; I copy the code that sets up the DataGridView and edit it. I am aware of the phrase "code reuse", but I have not actually used it. How can I utilize code reuse so that I don't have to copy and paste the database code and the DataGridView code?
The essence of code reuse is to take a common operation and parameterize it so it can accept a variety of inputs.
Take the humble printf, for example. Imagine if you did not have printf, and only had write, or something similar:
// convert theInt to a string and write it out
char c[24];
itoa(theInt, c, 10);  // itoa() is non-standard, one reason this is buggy
puts(c);
Now, this sucks to have to write every time, and is actually kind of buggy. So some smart programmer decided he was tired of this and wrote a better function that, in one fell swoop, prints stuff to stdout:
printf("%d", theInt);
You don't need to get as fancy as printf with its variadic arguments and format string. Even just a simple routine such as:
void print_int(int theInt)
{
char c[24];
itoa(theInt, c, 10);
puts(c);
}
would do the trick nicely. That way, if you want to change print_int to always print to stderr, you could update it to:
void print_int(int theInt)
{
fprintf(stderr, "%d", theInt);
}
and all your integers would now magically be printed to standard error.
You could even then bundle that function and others you write up into a library, which is just a collection of code you can load in to your program.
Following the practice of code reuse is why you even have a database to connect to: someone created some code to store records on disk, reworked it until it was usable by others, and decided to call it a database.
Libraries do not magically appear. They are created by programmers to make their lives easier and to allow them to work faster.
Put the code into a routine and call the routine whenever you want that code to be executed.
Check out Martin Fowler's book on refactoring, or some of the numerous refactoring related internet resources (also on stackoverflow), to find out how you could improve code that has smells of duplication.
First, create a library of reusable functions; it can be linked into different applications. This saves a lot of time and encourages reuse.
Also make sure the library is unit tested and documented, so it is easy to find the right class/function/variable/constant.
A good rule of thumb is: if you use the same piece of code three times, and it's obviously possible to generalize it, then make it a procedure/function/library.
However, as I am getting older, and also more experienced as a professional developer, I am more inclined to see code reuse as not always the best idea, for two reasons:
It's difficult to anticipate future needs, so it's very hard to define APIs that you will really use next time. It can cost you twice as much time: first you make it more general, and then the second time you end up rewriting it anyway. It seems to me that Java projects in particular are prone to this of late; they always seem to be rewritten in the framework du jour, just to be "easier to integrate" or whatever in the future.
In a larger organization (I am a member of one), if you have to rely on some external team (either in-house or a third party), you can have a problem. Your future then depends on their funding and their resources, so it can be a big burden to use foreign code or libraries. Similarly, if you share a piece of code with some other team, they may then expect you to maintain it.
Note however, these are more like business reasons, so in open source, it's almost invariably a good thing to be reusable.
To get code reuse, you need to become a master of...
Giving things names that capture their essence. This is really really important
Making sure that it only does one thing. This really comes back to the first point: if you can't name it by its essence, it's often doing too much.
Locating the thing somewhere logical. Again this comes back to being able to name things well and capturing its essence...
Grouping it with things that build on a central concept. Same as above, but said differently :-)
The first thing to note is that by using copy-and-paste, you are reusing code - albeit not in the most efficient way.
You have recognised a situation where you have come up with a solution previously.
There are two main scopes that you need to be aware of when thinking about code reuse. Firstly, code reuse within a project and, secondly, code reuse between projects.
The fact that you have a piece of code that you can copy and paste within a project should be a cue that the piece of code that you're looking at is useful elsewhere. That is the time to make it into a function, and make it available within the project.
Ideally you should replace all occurrences of that code with your new function, so that it (a) reduces redundant code and (b) ensures that any bugs in that chunk of code need to be fixed in only one function instead of many.
The second scope, code reuse across projects, requires some more organisation to get the maximum benefit. This issue has been addressed in a couple of other SO questions eg. here and here.
A good start is to organise code that is likely to be reused across projects into source files that are as self-contained as possible. Minimise the amount of supporting, project-specific code that is required, as this will make it easier to reuse entire files in a new project. This means minimising the use of project-specific data types, project-specific global variables, etc.
This may mean creating utility files that contain functions that you know are going to be useful in your environment. eg. Common database functions if you often develop projects that depend on databases.
I think the best way to address your problem is to create a separate assembly for your important functions. That way you can create extension methods or modify the helper assembly itself. Think of this function:
ExportToExcel(List data, string filename)
This method can be used by your future Excel-export functions, so why not store it in your own helper assembly? That way you just add a reference to the assembly.
Depending on the size of the project can change the answer.
For a smaller project I would recommend setting up a DatabaseHelper class that does all your DB access. It would just be a wrapper around opening/closing connections and execution of the DB code. Then at a higher level you can just write the DBCommands that will be executed.
A similar technique could be used for a larger project, but it would need some additional work: interfaces need to be added, DI, as well as abstracting out what you need to know about the database.
You might also try looking into ORM, DAAB, or over to the Patterns and Practices Group
As far as how to prevent the ole C&P: as you write your code, you need to review it periodically; if you have similar blocks of code that vary only by a parameter or two, they are always good candidates for refactoring into their own method.
Now for my pseudo code example:
Function GetCustomer(ID) As Customer
    Dim CMD As New DBCmd("SQL or stored proc")
    CMD.Parameters.Add("CustID", DBType, Length).Value = ID
    Dim DHelper As New DataHelper
    Dim DR = DHelper.GetDataReader(CMD)
    Dim RtnCust As New Customer(DR)
    Return RtnCust
End Function
Class DataHelper
    Public Function GetDataTable(cmd) As DataTable
        ' Write the DB access code here:
        ' get the connection string,
        ' open the connection,
        ' do the DB operation,
        ' close the connection.
    End Function
    Public Function GetDataReader(cmd) As DataReader
    Public Function GetDataSet(cmd) As DataSet
    ' ... and so on ...
End Class
For the example you give, the appropriate solution is to write a function that takes as parameters whatever it is that you edit whenever you paste the block, then call that function with the appropriate data as parameters.
Try and get into the habit of using other people's functions and libraries.
You'll usually find that your particular problem has a well-tested, elegant solution.
Even if the solutions you find aren't a perfect fit, you'll probably gain a lot of insight into the problem by seeing how other people have tackled it.
I'd do this at two levels. First, within a class or namespace, put the piece of code that is reused in that scope into a separate method and make sure it is being called.
Second is something similar to the case that you are describing. That is a good candidate to be put in a library or a helper/utility class that can be reused more broadly.
It is important to evaluate everything you write from the perspective of whether it can be made available to others for reuse. This should be a fundamental approach to programming that most of us don't realize.
Note that anything that is to be reused needs to be documented in more detail: its name should be distinct, and all the parameters, return values, and any constraints/limitations/prerequisites should be clearly documented (in code or help files).
It depends somewhat on what programming language you're using. In most languages you can
Write a function, parameterized to allow variations (see the sketch after this list)
Write a function object, with members to hold the varying data
Develop a hierarchy of (function object?) classes that implement even more complicated variations
In C++ you could also develop templates to generate the various functions or classes at compile time
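The first two options might look like this in C# (a hypothetical tax example):

using System;

// Option 1: a plain function, parameterized to allow variations.
static class Tax
{
    public static decimal Apply(decimal amount, decimal rate)
    {
        return amount * (1 + rate);
    }
}

// Option 2: a function object, with a member holding the varying data.
class TaxApplier
{
    private readonly decimal _rate;

    public TaxApplier(decimal rate) { _rate = rate; }

    public decimal Apply(decimal amount) { return amount * (1 + _rate); }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Tax.Apply(100m, 0.2m)); // 120.0
        var vat = new TaxApplier(0.2m);
        Console.WriteLine(vat.Apply(100m));       // 120.0
    }
}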
Easy: whenever you catch yourself copy-pasting code, immediately extract it into a new function (i.e., don't wait until you've already copy-pasted it several times).