The YAGNI "principle" states that you shouldn't focus on providing functionality before you need it, as "you ain't gonna need it" anyway.
I usually tend to use common sense over any rule, no matter what, but there are times when I feel it is useful to over-design or future-proof something if you have good reasons, even if it's possible you'll never use it.
The actual case I have in my hands right now is more or less like this:
I've got an application that has to run over a simple proprietary communication protocol (OSI layer 4). This protocol has a desirable set of characteristics (such as following the NORM specification) which provide robustness to the application but which are not strictly required (UDP multicast would perform acceptably).
There's also the fact that the application will probably (but not certainly) be used by other clients in the future, clients which will not have access to the proprietary solution and will therefore need another one. I know for a fact that the probability of another client for the application is high.
So, what's your thinking? Should I just design for the proprietary protocol and leave the refactoring, interface extraction and so on for when I really need it, or should I design now with the (not-so-far) future in mind?
Note: Just to be clear, I'm interested in hearing all kinds of opinions on the general question (when to violate YAGNI), but I'd really like some advice or thoughts on my current dilemma :)
The reason YAGNI applies to code is that the cost of change is low. With good, well-refactored code, adding a feature later is normally cheap. This is different from, say, construction.
In the case of protocols, making changes later is usually not cheap. Old versions break, communication failures appear, and you end up with an N^2 testing matrix because every version has to be tested against every other version. Compare this with a single codebase, where new versions only have to work with themselves.
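One cheap piece of future-proofing that usually pays off here is putting a version field in the wire format from day one, so old peers can at least detect and reject frames they don't understand instead of misparsing them. A minimal, hypothetical Java sketch (the framing is invented purely for illustration):

```java
import java.nio.ByteBuffer;

final class Frame {
    static final byte PROTOCOL_VERSION = 1;

    // Encode as: [version:1 byte][length:4 bytes][payload:length bytes]
    static ByteBuffer encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + payload.length);
        buf.put(PROTOCOL_VERSION).putInt(payload.length).put(payload);
        buf.flip();
        return buf;
    }

    static byte[] decode(ByteBuffer buf) {
        byte version = buf.get();
        if (version != PROTOCOL_VERSION) {
            // An old peer fails loudly instead of silently misreading newer frames.
            throw new IllegalArgumentException("unsupported protocol version: " + version);
        }
        byte[] payload = new byte[buf.getInt()];
        buf.get(payload);
        return payload;
    }
}
```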
So in your case, for the protocol design, I wouldn't recommend YAGNI.
IMHO
I'd say go YAGNI first. Get it working without the NORM specification using 'the simplest thing that would work'.
Next, compare whether the cost of making the 'design changes' in the future would be significantly greater than making the change now. Is your current solution reversible? If you can easily make the change tomorrow or after a couple of months, don't do it now. If you don't have to make an irreversible design decision right now, delay it until the last responsible moment (so that you have more information to make a better decision).
To close: if you know with a considerable degree of certainty that something is on the horizon and adding it later is going to be a pain, don't be an ostrich; design for it.
E.g. I know that diagnostic logs will be needed before the product ships. Adding logging code after a month would take much more effort than adding it today as I write each function... so this is a case where I'd override YAGNI even though I don't need logs right now.
See also: T. & M. Poppendieck's Lean books do a better job of explaining the dilemma in the second point above.
Structuring your program well (abstraction, etc) isn't something that YAGNI applies to. You always want to structure your code well.
Just to clarify, I think your current predicament is due to over-application of YAGNI. Structuring your code in such a way that you have a reusable library for using this protocol is just good programming practice. YAGNI does not apply.
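For instance, a thin transport interface is usually enough: the proprietary protocol becomes one implementation behind it, and a future UDP-multicast client becomes another. A hedged Java sketch, with all names hypothetical:

```java
import java.io.IOException;

// The application codes against this small seam, not against the
// proprietary stack directly.
interface Transport {
    void send(byte[] message) throws IOException;
    byte[] receive() throws IOException;
    void close() throws IOException;
}

// Today's implementation wraps the proprietary NORM-style protocol.
final class ProprietaryTransport implements Transport {
    @Override public void send(byte[] message) { /* call into proprietary stack */ }
    @Override public byte[] receive() { return new byte[0]; /* placeholder */ }
    @Override public void close() { /* release proprietary resources */ }
}
```

Extracting this interface later is also possible, but it is cheap enough to do now that it rarely counts as a YAGNI violation.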
I think that YAGNI could be inappropriate when you want to learn something :) YAGNI is good for professionals, but not for students. When you want to learn, you'll always need it.
I think it's pretty simple and obvious:
Violate YAGNI when you know that, in full certainty, You Are Going To Need It
I wouldn't worry. The fact that you're aware of "YAGNI" means you are already thinking pragmatically.
I'd say, regardless of anything posted here, you are statistically more likely to come up with better code than someone who isn't analysing their practices in the same way.
I agree with Gishu and Nick.
Designing part of a protocol later often leads to thoughts like "damn, I should have done this that way, now I have to use this ugly workaround"
But it also depends on who will interface with this protocol.
If you control both ends, and they will change versions at the same time, you can always refactor the protocol later as you would a normal code interface.
As for implementing the extra protocol features later: I've found that implementing a protocol helps a lot in validating its design, so you may at least want to write a simple out-of-production code sample to test it if you need the design to be official.
There are some cases where it makes sense to go against the YAGNI intuition.
Here are a few:
Following programming conventions. Especially base class and interface contracts. For example, if a base class you inherit provides a GetHashCode and an Equals method, overriding Equals but not GetHashCode breaks the platform-documented rules developers are supposed to follow when they override Equals. This convention should be followed even if you find that GetHashCode would not actually be called. Not overriding GetHashCode is a bug even if there is no current way to provoke it (other than a contrived test): a future version of the platform might introduce calls to GetHashCode, or another programmer who has read the documentation (in this example, the platform documentation for the base class you are inheriting) might rightfully expect your code to adhere to it without examining your code. Another way of thinking about this is that all code and applicable documentation must be consistent, even with documentation written by others, such as that provided by the platform vendor.
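The same contract exists between equals and hashCode in Java; here is a minimal sketch of honoring it even when nothing hashes the object today:

```java
import java.util.Objects;

final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Required by the contract whenever equals is overridden,
        // even if no current code path ever calls it.
        return Objects.hash(x, y);
    }
}
```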
Supporting customization. Particularly by external developers who will not be modifying your source code. You must figure out and implement suitable extension points in your code so that these developers can implement all kinds of add-on functionality that never crossed your mind. Unfortunately, it is par for the course that you will add some extensibility features that few, if any, external developers ultimately use. (If it is possible to discuss the extensibility requirements with all of the external developers ahead of time, or to use frequent development/release cycles, great, but this is not feasible for all projects.)
Assertions, debug checks, failsafes, etc. Such code is not actually needed for your application to work correctly, but it will help make sure that your code works properly now and in the future when revisions are made.
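As a small illustration (the class is hypothetical), such checks document and enforce invariants without being part of the feature itself:

```java
final class Account {
    private long balanceCents;

    void withdraw(long amountCents) {
        // Defensive checks: not needed for well-behaved callers, but they
        // turn silent corruption into an immediate, diagnosable failure.
        if (amountCents <= 0)
            throw new IllegalArgumentException("amount must be positive");
        if (amountCents > balanceCents)
            throw new IllegalStateException("insufficient funds");
        balanceCents -= amountCents;
        assert balanceCents >= 0 : "balance invariant violated"; // failsafe
    }
}
```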
I have joined a Rails project as a contractor. The project has been going for more than a year. The code was written by about 10 different developers, most of them contractors as well, and they have different coding styles. Some of them came from Java. The code has horrible scores with metric_fu. Many functions are very long (100-300 lines). Some functions have an insane amount of logical branches, loops, and recursion. Each request generates a ton of SQL queries. Performance is very bad. There is a lot of obsolete code that is never used but never got cleaned up. The core architecture is plain wrong or over-engineered. Code coverage is only about 25%. Views and partials are chaotic and terrible to read and understand.
The manager is trying to satisfy the CEO by continuously adding new features; however, new features are increasingly hard to implement correctly without breaking something else. He knows the code is bad but doesn't want to put too much effort into fixing it, as refactoring would take too long.
As a contractor/developer, what is a good way to handle this situation and convince the manager or CEO to set aside some time for refactoring?
In my limited experience:
It's impossible to convince a manager that it's necessary to set aside time to refactor. You can make him aware of it, and reinforce the point every time that you run into an issue because of bad code. Then just move on. Hopefully your boss will figure it out.
It's quite common to get in on a running project and think "this is total junk". Give it some time. You might begin to see a pattern in the madness.
I've been in similar situation. There are basically only two options:
You get some relaxed time and you may be granted time to refactor something
Due to the bad code further development of some component comes to a stall. You can't proceed to add anything because every little change causes everything else to stop functioning. In this emergency case you will get a "go" with refactoring.
I have just posted my horror story as an answer to another question:
https://stackoverflow.com/questions/1333077/dirty-coding-tricks-to-deliver-project-on-time/1333095#1333095
I have worked on a project where dirty tricks were the main driving principle of the development. Needless to say, after some time these tricks started to conflict with each other. In one analytics component, we had to implement another very dirty trick: to hide away the calculated values which, due to the conflicting tricks, were not calculated properly. Afterward, the second-level tricks started to conflict, and we had to create tricks to deal with those. Ever since, even the mention of this component makes me feel horror that I may have to work on it again.
It's exactly the second situation where refactoring is the only way out.
In general, many managers without a technical background (and, actually, those who used to be bad programmers as well) neither care about nor understand the value of quality code and good architecture. You can't make them listen until something interrupts their plans: a wave of "non-implementable" features, increasing and recurring bugs, customer requests that cannot be satisfied, and so on. Only then may understanding of the code problems come for the first time. Usually, it's too late by then.
Refactoring code that sucks is part of coding, so you don't need to get anybody's approval unless your manager is watching your code and/or hours VERY closely. The time I save refactoring today is time that I don't have to bill tomorrow doing mad tricks to get normal code to work (so it evens out in the end).
Busting up methods into smaller methods and deleting methods that are not used is part of your job. Reducing DB calls, in code that you call, is also necessary so that your code doesn't suck. Again, not really refactoring, just normal coding.
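The project in question is Rails, but the move is language-agnostic. A tiny Java illustration of busting one long method into named pieces (class and method names are hypothetical):

```java
final class InvoiceService {
    // Before: one long method did all of this inline.
    double totalDue(double subtotal, double discount, double taxRate) {
        return addTax(applyDiscount(subtotal, discount), taxRate);
    }

    // Each extracted step now has a name and can be tested alone.
    private double applyDiscount(double subtotal, double discount) {
        return subtotal - discount;
    }

    private double addTax(double amount, double taxRate) {
        return amount * (1 + taxRate);
    }
}
```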
Convincing your manager depends on other factors, including (but not limited to) their willingness to be convinced, and your ability to convince.
Anyway, what is massive refactoring in RoR? Even if the "core architecture is just plain wrong," it can usually be straightened out a bit at a time. Make sure you break the work into chunks / use branches so you don't break anything while you're busy fixing.
If this is impossible, then you come back to the social question of how to convince your manager. That's a simple question of figuring out what his/her buttons are, and pushing them without getting fired or arrested. Shaming, withholding food, giving prizes, being a friend, anonymous kidnapping threats where you step in and save the day... It's pretty simple, really: creativity is the key!
Everyone is missing a point here:
Refactoring is part of the software development life cycle.
This is true not only for a RoR project, but for any software development project.
If you can somehow convince your PM why it is important to refactor the existing code base before adding any new feature, you're done. You should tell your PM clearly that any further addition of new features without refactoring will take more time than it should, and that even once a feature is somehow added, bug-fixing sessions will take even more time, since the code is so bloated and unmaintainable.
I really don't understand why people forget the principle of optimising later. Optimising later also includes refactoring later, IMHO.
One more thing: when making design decisions, you should spell out the consequences, good or bad, to your PM very clearly.
You can create a separate branch (I assume you are using git) for refactoring and keep adding new features on another branch if your PM insists on adding new features alongside the refactoring.
A tricky one. I recently worked in such a company... they were always pushing for new things; they knew the code was bad, but no matter how hard I pushed it - I even got external consultants in to verify my findings - they saw it as a waste of time.
Eventually they saw the light... it only took multiple server crashes and, at one point, almost a full 8 days of no website to convince them.
Even then they insisted it 'must' be the hosting service.
The key is to try to quantify how long their site will last before it crunches, and to get some external verification to back you up - 'they' always trust outsiders, who know nothing about your app! Also try, if you can, to give a plan that at worst involves gradual replacement, and an estimate of how long it would take that way. Also give a plan for how long a complete rewrite might take if 1 or 2 bodies were working on it - but be realistic, too, or it will bite you in the bum! If you go that route (which is what we did), you can still do some work on the existing site as long as you incorporate it into the new one.
I would suggest that you focus on things they can see for themselves; that is, they will surely notice that the application is slow in some functionality, so pick one such spot and say something like "I can reduce the waiting time here; can I take some time to improve this specific thing?" (put more eloquently, but you get the point :P).
Also consider that the 10 developers before you did not refactor the code base. This may mean it is a monstrous task, likely to make the situation worse, and if something goes wrong after the refactoring, it will be your fault that the program no longer works properly.
Just a thought, but worth considering.
I'd take one small chunk of the application and refactor and optimize it till it shines (and I'd do it in my personal time in order not to annoy my manager). Then you'll be able to show your manager/CEO the good results of refactoring and SQL optimizations.
If there is a real need to refactor, the code will speak for itself. Minor refactoring can continue during development. If you can't convince the manager, then you should probably rethink whether it's necessary at all.
However, if it is absolutely necessary, then metrics of development activity, together with the expected benefits, should convince the manager.
I think one option would be to highlight to the manager how refactoring the code base now will save time (i.e. money) in the long run. If the project is expected to run long term, making the changes now will clearly save you and other developers time in the future.
It's best to use an example of a feature you've worked on, estimating how long it would have taken if you had had cleaner code to work from in the first place. Good luck!
I am in the same position right now, but with an agreement with the manager that when a new feature has to be implemented in an existing module, we refactor that module too (if it needs refactoring). We are now struggling with code created 4-5 years ago, and I have definitely found that refactoring someone else's code is neither trivial nor amusing, but very, very helpful for future reuse.
If you're writing something by yourself, whether to practice, solve a personal problem, or just for entertainment, is it ok, once in a while, to have a public field? Maybe?
Let me give you an analogy.
I come from a part of the world where English is not the primary language. But it’s necessary for all things in life.
During one of those usual days during my pre-teen years I said something very funny in English. Then my Dad said, “Son, think in English. Then you’ll get fluent”
I think it applies perfectly to this situation.
Think, try, and question best practices in your playground. You will soon realize what's best for what. Why are properties needed in the first place? Why should this be public? Why should I not call a virtual member from the constructor? Let me try using the "new" modifier for a method call. What happens when I write 10 nested levels of if-then-else and try reading it again after 10 days? Why the heck should I use a factory pattern for a simple project? Et cetera.
And then you’ll realize without shooting at your foot, why design patterns are patterns...
I think it's reasonable if you're consciously throwing the code away afterwards. In particular, if you're experimenting with something completely different, taking shortcuts makes sense. Just don't let it lead to habits which cross over into "real" code.
Violating general principles is always "ok"! It is not an error to violate a principle; it is a trade-off. The cost of not writing clean code will be higher the longer your software survives. My take on this is: if in doubt, make it clean!
Of course it's OK. It's your code, you can do whatever you want with it. Personally, I try to stick to good practice also in my private code, just to make it a natural habit so I don't have to think about it.
The short answer is yes, if you believe that you're gaining a lot by making things public instead of private with accessors you are welcome to do so. Consistency, I think, is the biggest thing to keep in mind. For instance, don't make some variables straight public, and some not. Do the same across the board if you break with best practices. It comes back to a trade-off. Almost no-one follows many of the IEEE specs for how Software Engineering should be executed and documented because the overhead is far too great, and it can get unmanageable. The same is true for personal, light-weight programming. It's okay to do something quick and dirty, just do not get used to it.
Public members are acceptable in the Data Transfer Object design pattern:
Typically, the members in the Transfer Object are defined as public, thus eliminating the need for get and set methods.
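A minimal Java sketch of such a Transfer Object (the class and fields are invented for illustration); since it is pure data crossing a boundary, the public fields are the whole point:

```java
// Carries data between layers; no behavior, no invariants to protect.
public class CustomerDTO {
    public String name;
    public String email;
    public int loyaltyPoints;
}
```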
One of the key advantages of OOP is scaling and maintainability. By encapsulating code, one can hide the implementation. This means other programmers don't have to know the implementation and can't change your object's internal state. If your language doesn't support properties, you end up with a lot of code that obfuscates and bloats your project. If the code doesn't need to be worked on by multiple programmers, you aren't producing a reusable component, and YOU are the maintenance programmer, then code in whatever manner allows you to get things done.
Does a maid need to make his/her own bed in the morning in order to practice properly making a bed?
Side note: it also depends on the language:
In Scala, according to the Uniform Access Principle, clients read and write field values as if they are publicly accessible, even though in some case they are actually calling methods. The maintainer of the class has the freedom to change the implementation without forcing users to make code changes.
Scala keeps field and method names in the same namespace.
Many languages, like Java, keep field and method names in separate namespaces.
However, these languages can’t support the uniform access principle as a result, unless they build in ad hoc support in their grammars or compilers.
So the real question is:
What service are you exposing (here, by having a public field)?
If the service (get/set a given type value) makes sense for your API, then the "shortcut" is legitimate.
As long as you encapsulate that field eventually, it is ok, because you made the shortcut for the "right" reason (API and service exposure), versus the "wrong" reason (quick ad-hoc access).
A good unit test (thinking like the user of your API) can help you check if that field should be accessed directly or if it is only useful for internal development of other classes within your program.
Here's my take on it:
I'd advise avoiding public fields. They have a nasty habit of biting you later on because you can't control them. (The word you're looking for here is volatility.) Further, if you decide to change their internal implementation, you have to touch a lot more code.
Then again, that's what refactoring tools are for. If you have a decent refactoring tool, that's not nearly so difficult.
There is no silver bullet. I can't repeat this enough. If you have work to do, and you need to get it done in a hurry, writing one line of code instead of eight (as is the case in Visual Basic) is certainly faster.
Rules were meant to be broken. If a rule doesn't apply in your case, don't use it. Design patterns, coding guidelines, laws and best practices should not be treated as a straitjacket that requires you to needlessly complicate your code to the point where it is enormously complex and difficult to understand and maintain. Don't let someone force you into a practice just because it's popular or "standard" when it doesn't fit your requirements.
Again, this is a subjective opinion, and your mileage may vary.
G'day,
While having a think about this question here, about overdesigning for possible future changes, I started wondering:
What reasons against overdesigning can you provide to people who insist on blowing out designs because "they might want to use it somewhere else at some stage in the future"?
Similarly, what do you do when people take the requirements and then come back with a bloated design with lots of extra "bells and whistles" that you didn't ask for?
I can understand extending a design when you know it makes sense for requirements or possible uses that exist either right now or in the near future. And I'm not advocating just blithely accepting a list of requirements and implementing that explicitly without providing any feedback on what you think may be missing.
I am talking about what to do when people insist on adding, or having, extraneous functionality so that "we might use it somewhere else at some stage in the future?"
Plenty of good reasons on Wikipedia.
http://en.wikipedia.org/wiki/You_Ain%27t_Gonna_Need_It
The time spent is taken from adding, testing or improving necessary functionality.
The new features must be debugged, documented, and supported.
Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
It leads to code bloat; the software becomes larger and more complicated.
Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
See also: http://en.wikipedia.org/wiki/KISS_principle
Especially on an embedded device, size is money (larger Flash part, say, which costs more and has a longer programming time at manufacture; or more peripheral components).
Even on a Windows application, the larger your application and the more features it has, the more it costs to develop; wait until you know what's needed and what's not and you'll waste far less money developing stuff that turns out not to be what was needed at all.
Also, any additional functionality brings with it the potential for bugs.
It's good to think properly about the requirements before developing something, but over-designing is often just borrowing trouble.
Them: "We might use it somewhere else at some stage in the future."
Me: "Yes, we might. And we might not. And we can't know, now, in what ways we might want to. And if we do want it at some stage in the future - that's the time when we'll know the ways in which we want it. That's when we can write it with confidence. On the other hand, if we write it today, but never need it, we've wasted resources to develop something we didn't need. And we've added to our code bloat, so it's harder to find the pieces of our code base that are in use, because we've got all this (presently) unnecessary code crowding out the useful stuff."
In our team we just say "YAGNI". It reminds people why. There is MORE than enough stuff on the web about YAGNI if you think you need to collate it to provide a report.
Actually, having someone say "YAGNI" to you on our team cuts a little because it's like saying "C'mon; you know better than that", which always hurts a little. :)
It's a balance. As a rule, I over-design only where it's cheap to do so. For instance, I wouldn't write a function that takes 2 parameters and adds them; instead I'd write a function that takes n parameters and adds them. However, I wouldn't write a function that takes n parameters and adds them using assembly.
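In Java, for example, that kind of generalization is nearly free; a quick sketch:

```java
final class MathUtil {
    // Taking n parameters instead of exactly two costs one varargs keyword.
    static int sum(int... values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }
}
```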
You say
I can understand extending a design when you know it makes sense for requirements or possible uses that exist either right now or in the near future.
and I guess that sometimes this line is blurry: something that "makes sense" to you may be over-design to someone else.
Overdesigning (solving a problem in a way that is more generic than needed) a specific piece of architecture is acceptable only if you can afford it.
If you accept extraneous functionality (which is, generally speaking, different from overdesign), you need - again - to accept the costs that come with it (time ==> money). If you can't afford those extra costs, then you have your answer :)
There's a big difference in providing for future functionality and adding future functionality. A good design should have the "hooks" or whatever to provide for new features or modifications.
There are two ways you could handle this situation. The first applies if they are contractors providing the software to you: you can simply refuse to pay for the extra functionality and impose strict deadlines for your required functionality. If they miss the deadline, you impose financial penalties for every day they are late.
The other applies if they actually work for you. In that case, you can get rid of them or downgrade them in their performance reviews.
Any code can be reused in one way or another, at least if you modify it. Random code is not very reusable as such. When I read books, they usually say that you should explicitly make code reusable by taking into account other situations of code usage too. But certain code should not be an omnipotent, all-doing class either.
I would like to have reusable code that I don't have to change later. How do you make code reusable? What are the requirements for code being reusable? What are the things that reusable code should definitely have and what things are optional?
See 10 tips on writing reusable code for some help; a small sketch illustrating several of these tips follows the list.
Keep the code DRY. Dry means "Don't Repeat Yourself".
Make a class/method do just one thing.
Write unit tests for your classes AND make it easy to test classes.
Keep the business logic or main code separate from any framework code.
Try to think more abstractly and use Interfaces and Abstract classes.
Code for extension. Write code that can easily be extended in the future.
Don't write code that isn't needed.
Try to reduce coupling.
Be more Modular
Write code like your code is an External API
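A small hypothetical Java sketch pulling several of these tips together: one responsibility per class, coded against an interface, and trivially unit-testable because the dependency is injected:

```java
// The policy is an abstraction (tip: interfaces / abstract types), so
// callers and tests can supply their own implementations.
interface DiscountPolicy {
    double discountFor(double orderTotal);
}

// Does exactly one thing (tip: a class/method does just one thing),
// and depends only on the interface (tip: reduce coupling).
final class PriceCalculator {
    private final DiscountPolicy policy;

    PriceCalculator(DiscountPolicy policy) {
        this.policy = policy;
    }

    double finalPrice(double orderTotal) {
        return orderTotal - policy.discountFor(orderTotal);
    }
}
```

In a test, DiscountPolicy can be a one-line lambda; in production it might read rules from a database. PriceCalculator never knows the difference, which is what makes it easy to test and to reuse.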
If you take the Test-Driven Development approach, then your code only becomes reusable as you refactor based on forthcoming scenarios.
Personally I find constantly refactoring produces cleaner code than trying to second-guess what scenarios I need to code a particular class for.
More than anything else, maintainability makes code reusable.
Reusability is rarely a worthwhile goal in itself. Rather, it is a by-product of writing code that is well structured, easily maintainable and useful.
If you set out to make reusable code, you often find yourself trying to take into account requirements for behaviour that might be required in future projects. No matter how good you become at this, you'll find that you get these future-proofing requirements wrong.
On the other hand, if you start with the bare requirements of the current project, you will find that your code can be clean and tight and elegant. When you're working on another project that needs similar functionality, you will naturally adapt your original code.
I suggest looking at the best practices for your chosen programming language / paradigm (e.g. Patterns and SOLID for Java / C# types), the Lean / Agile programming literature, and (of course) the book "Code Complete". Understanding the advantages and disadvantages of these approaches will improve your coding practice no end. All your code will then become reusable - but 'by accident', rather than by design.
Also, see here: Writing Maintainable Code
You'll write various modules (parts) when writing a relatively big project. Reusable code in practice means you'll create libraries that other projects needing the same functionality can use.
So, you have to identify modules that can be reused. To do that:
Identify the core competence of each module. For instance, if your project has to compress files, you'll have a module that will handle file compression. Do NOT make it do more than ONE THING. One thing only.
Write a library (or class) that will handle file compression, without needing anything more than the file to be compressed, the output and the compression format. This will decouple the module from the rest of the project, enabling it to be (re)used in a different setting.
You don't have to get it perfect the first time, when you actually reuse the library you will probably find out flaws in the design (for instance, you didn't make it modular enough to be able to add new compression formats easily) and you can fix them the second time around and improve the reusability of your module. The more you reuse it (and fix the flaws), the easier it'll become to reuse.
The most important thing to consider is decoupling, if you write tightly coupled code reusability is the first casualty.
Leave all the needed state or context outside the library. Add methods to specify the state to the library.
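A hedged Java sketch of that compression module (the names and GZIP-only scope are illustrative): it needs nothing from the host project beyond an input, an output, and a format, and it holds no state of its own:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

enum CompressionFormat { GZIP /* more formats can be added over time */ }

interface Compressor {
    void compress(InputStream in, OutputStream out) throws IOException;
}

final class Compressors {
    // All context arrives through parameters; the library stores nothing,
    // so it can be dropped into any project unchanged.
    static Compressor forFormat(CompressionFormat format) {
        switch (format) {
            case GZIP:
                return (in, out) -> {
                    try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                        in.transferTo(gz); // Java 9+
                    }
                };
            default:
                throw new UnsupportedOperationException("format: " + format);
        }
    }
}
```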
For most definitions of "reuse", reuse of code is a myth, at least in my experience. Can you tell I have some scars from this? :-)
By reuse, I don't mean taking existing source files and beating them into submission until a new component or service falls out. I mean taking a specific component or service and reusing it without alteration.
I think the first step is to get yourself into a mindset that it's going to take at least 3 iterations to create a reusable component. Why 3? Because the first time you try to reuse a component, you always discover something that it can't handle. So then you have to change it. This happens a couple of times, until finally you have a component that at least appears to be reusable.
The other approach is to do an expensive forward-looking design. But then the cost is all up-front, and the benefits (may) appear some time down the road. If your boss insists that the current project schedule always dominates, then this approach won't work.
Object-orientation allows you to refactor code into superclasses. This is perhaps the easiest, cheapest and most effective kind of reuse. Ordinary class inheritance doesn't require a lot of thinking about "other situations"; you don't have to build "omnipotent" code.
Beyond simple inheritance, reuse is something you find more than you invent. You find reuse situations when you want to reuse one of your own packages to solve a slightly different problem. When you want to reuse a package that doesn't precisely fit the new situation, you have two choices.
Copy it and fix it. You now have two nearly similar packages -- a costly mistake.
Make the original package reusable in two situations.
Just do that for reuse. Nothing more. Too much thinking about "potential" reuse and undefined "other situations" can become a waste of time.
Others have mentioned these tactics, but here they are formally. These three will get you very far (a small sketch follows the list):
Adhere to the Single Responsibility Principle - it ensures your class only "does one thing", which means it's more likely it will be reusable for another application which includes that same thing.
Adhere to the Liskov Substitution Principle - it ensures your code "does what it's supposed to without surprises", which means it's more likely it will be reusable for another application that needs the same thing done.
Adhere to the Open/Closed Principle - it ensures your code can be made to behave differently without modifying its source, which means it's more likely to be reusable without direct modification.
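A tiny sketch of the Open/Closed point in Java (using Java 16+ records for brevity; all names hypothetical): adding a new shape never requires editing the calculator.

```java
import java.util.List;

interface Shape {
    double area();
}

record Circle(double radius) implements Shape {
    public double area() { return Math.PI * radius * radius; }
}

record Rect(double w, double h) implements Shape {
    public double area() { return w * h; }
}

final class AreaCalculator {
    // Closed for modification: new Shape implementations plug in unchanged.
    static double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```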
To add to the above mentioned items, I'd say:
Make the functions you need to reuse generic.
Use configuration files and make the code use the properties defined in files/db (see the sketch after this list).
Clearly factor your code into functions/classes that provide independent functionality and can be used in different scenarios, and define those scenarios using the config files.
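A minimal sketch of the configuration-file tip using the standard java.util.Properties API (the file name and keys are hypothetical):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

final class AppConfig {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("app.properties")) {
            props.load(in);
        }
        // Behavior now comes from the file, not from recompiled code.
        int retries = Integer.parseInt(props.getProperty("network.retries", "3"));
        System.out.println("configured retries = " + retries);
    }
}
```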
I would add the concept of "Class composition over class inheritance" (which is derived from other answers here).
That way the "composed" object doesn't care about the internal structure of the object it depends on - only its behavior, which leads to better encapsulation and easier maintainability (testing, less details to care about).
In languages such as C# and Java it is often crucial, since there is no multiple inheritance, so it helps you avoid the inheritance-graph hell you might otherwise have.
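A short Java sketch of the idea (names hypothetical): the service is composed from a Logger rather than inheriting one, so it depends only on the Logger's behavior, not its internal structure.

```java
interface Logger {
    void log(String message);
}

final class ConsoleLogger implements Logger {
    public void log(String message) { System.out.println(message); }
}

final class ReportService {
    private final Logger logger; // composed, not inherited

    ReportService(Logger logger) { this.logger = logger; }

    void generate() {
        // ...build the report...
        logger.log("report generated");
    }
}
```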
As mentioned, modular code is more reusable than non-modular code.
One way to help towards modular code is to use encapsulation, see encapsulation theory here:
http://www.edmundkirwan.com/
Ed.
Avoid reinventing the wheel. That's it. And that by itself has many of the benefits mentioned above. If you do need to change something, then you just create another piece of code, another class, another constant, library, etc... It helps you and the rest of the developers working on the same application.
Comment, in detail, everything that seems like it might be confusing when you come back to the code next time. Excessively verbose comments can be slightly annoying, but they're far better than sparse comments, and can save hours of trying to figure out WTF you were doing last time.
Are there any methods/systems that you have in place to incentivize your development team members to write "good" code and add comments to their code? I recognize that "good" is a subjective term and it relates to an earlier question about measuring the maintainability of code as one measurement of good code.
This is tough as incentive pay is considered harmful. My best suggestion would be to pick several goals that all have to be met simultaneously, rather than one that can be exploited.
While most people respond that code reviews are a good way to ensure high quality code, and rightfully so, they don't seem to me to be a direct incentive to getting there. However, coming up with a positive incentive for good code is difficult because the concept of good code has large areas that fall in the realm of opinion and almost any system can be gamed.
At all the jobs I have had, the good developers were intrinsically motivated to write good code. Chicken and egg, feedback, catch 22, call it what you will, the best way to get good code is to hire motivated developers. Creating an environment where good developers want to work is probably the best incentive I can think of. I'm not sure which is harder, creating the environment or finding the developers. Neither is easy, but both are worth it in the long term.
I have found that one part of creating an environment where good developers want to work includes ensuring situations where developers talk about code. I don't know a skilled programmer that doesn't appreciate a good critique of his code. This helps the people that like to be the best get better. As a smaller sub-part of this endeavor, and thus an indirect incentive to create good code, I think code reviews work wonderfully. And yes, your code quality should gain some direct benefit as well.
Another technique co-workers and I have used to communicate good coding habits is a group code review. It was less formal and allowed people to show off new techniques, tools, and features. Critiques were made, kudos were given publicly, and most developers didn't seem to mind speaking in front of a small developer group where they knew everyone. If management cannot see the benefit in this, spring for sammiches and call it a brown bag. Devs will like free food too.
We also made an effort to get people to go to code events. Granted, depending on how familiar you all are with the topic, you might not learn too much, but it keeps people thinking about code for a while and gets people talking in an even more relaxed environment. Most devs will also show up if you offer to pick up a round or two of drinks afterwards.
Wait a second, I noticed another theme. Free food! Seriously though, the point is to create an environment where people that already write good code and those that are eager to learn want to work.
Code reviews, done well, can make a huge difference. No one wants to be the guy presenting code that causes everyone's eyes to bleed.
Unfortunately, reviews don't always scale well either up (too many cooks and so on) or down (we're way too busy coding to review code). Thankfully, there are some tips on Stack Overflow.
I think formal code reviews fill this purpose. I'm a little more careful not to commit crappy looking code knowing that at least two other developers on my team are going to review it.
Make criteria public and do not connect incentives with any sort of automation. Publicize examples of what you are looking for. Be nice and encourage people to publicize their own bad examples (and how they corrected them).
Part of the culture of the team is what "good code" is; it's subjective to many people, but a functioning team should have a clear answer that everyone on the team agrees upon. Anyone who doesn't agree will bring the team down.
I don't think money is a good idea. The reason is that it is an extrinsic motivator. People will begin to follow the rules because there is a financial incentive to do so, and this doesn't always work. Studies have shown that as people age, financial incentives become less of a motivator. That being said, the quality of work in this situation will only rise to the level you set for receiving the reward. It's a short-term win, nothing more.
The real way to incent people to do the right thing is to convince them their work will become more rewarding. They'll be better at what they do and how efficient they are. The only real way to incentivize people is to get them to want to do it.
This is advice aimed at you, not your boss.
Always remind yourself of the fact that if you go the extra mile and write code as good as you can now, it'll pay off later when you don't have to refactor your stuff for a week.
I think the best incentive for writing good code is by writing good code together. The more people write code in the same areas of the project, the more likely it will be that code conventions, exception handling, commenting, indenting and general thought process will be closer to each other.
Not all code is going to be uniform, but upkeep usually gets easier when people have coded a lot of work together since you can pick up on styles and come up with best practice as a team.
You get rid of the ones that don't write good code.
I'm completely serious.
I agree with Bill The Lizard. But I wanted to add onto what Bill had to say...
Something that can be done (assuming resources are available) is to get some of the other developers (maybe 1 who knows something about your work, 1 who knows your work intimately, and maybe 1 who knows very little about it) together and you walk them through your code. You can use a projector and sit them down in a room and you can drive through all of your changes. This way, you have a mixed crowd that can provide input, ask questions, and above all make you a better developer.
There is no need to have only negative feedback; however, it will happen at times. It is important to take negative as constructive, and perhaps try to couch your feedback in a constructive way when giving feedback.
The idea here is that, if you have comment blocks for your functions, or a comment block that explains some tricky math operations, or a simple commented line that explains why you are required to change the date format depending on the language selected...then you will not be required to instruct the group line by line what your code is doing. This is a way to annotate changes you have made and it allows for the other developers to keep thinking about the fuzzy logic you had in your previous function because they can read your comments and see what you did else-where.
This is all coming from a real life experience and we continue to use this approach at my job.
Hope this helps, good question!
Hm.
Maybe the development team should do code reviews of each other's code. That could motivate them to write better, commented code.
Code quality may be like pornography - as the famous quote from Justice Potter Stewart goes, "I know it when I see it."
So one way is to ask others about the code quality. Some ways of doing that are...
Code reviews by their peers (and reviews of others code by them), with ease of comprehension being one of the criteria in the review checklist (personally, I don't think that necessarily means comments; sometimes code can be perfectly clear without them)
Request that issues caused by code quality are raised at retrospectives (you do hold retrospectives, right?)
Track how often changes to their code work the first time, or whether they take several attempts.
Ask for peer reviews at annual (or whatever) review time, and include a question about how easy it is to work with the reviewee's code.
Be very careful with incentivizing: "What gets measured gets done". If you reward lines of code, you get bloated code. If you reward commenting, you get unnecessary comments. If you reward lack of bugs found in the code, your developers will do their own QA work which should be done by lower-paid QA specialists. Instead of incentivizing parts of the process, give bonuses for the whole team's success, or the whole company's.
IMO, a good code review process is the best way to ensure high code quality. Pair programming can work too, depending on the team, as a way of spreading good practices.
The last person who broke the build or shipped code that caused a technical support call has to make the tea until somebody else does it next. The trouble is this person probably won't give the tea the attention it requires to make a real good cuppa.
I usually don't offer my team monetary awards, since they don't do much and we really can't afford them, but I usually sit down with each team member and go over the code with them individually, pointing out what works ("good" code) and what does not ("bad" code). This seems to work very well, since I don't get nearly as much junk code as I did before we started this process.