Get number of lines of code for a subsystem in Endevor - COBOL

Is it possible to count the number of lines of code for COBOL programs in a certain subsystem in Endevor?
Someone told me that Endevor can generate reports with this information but I can't find any information about this.
I'm aware that the metric (number of lines) isn't necessarily relevant, but I was asked the question and would like to answer it, even though I don't really see the point of this exact metric. In parallel with answering the question I'm trying to turn the focus to a more meaningful metric; any advice and recommendations here are much appreciated.
My thoughts so far: number of verbs, and so on...
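If Endevor's own reports don't pan out, one fallback (assuming the subsystem's source members can be exported to plain files) would be a small script along these lines. This is only a rough sketch: the directory name and the verb list are placeholders, and it assumes fixed-format COBOL where column 7 is the indicator area.

    # Rough sketch: count lines of code in exported COBOL members.
    # Assumes fixed-format source (columns 1-6 sequence, column 7 indicator)
    # and that members have been exported to plain files; adjust as needed.
    import os

    VERBS = {"MOVE", "PERFORM", "IF", "CALL", "COMPUTE", "EVALUATE", "READ", "WRITE"}

    def count_member(path):
        total = comments = blank = code = verbs = 0
        with open(path, "r", errors="replace") as f:
            for line in f:
                total += 1
                text = line.rstrip("\n")
                if not text.strip():
                    blank += 1
                elif len(text) >= 7 and text[6] in "*/":   # comment indicator in column 7
                    comments += 1
                else:
                    code += 1
                    # crude verb count over area A/B (columns 8-72)
                    verbs += sum(1 for w in text[7:72].split() if w.rstrip(".") in VERBS)
        return total, comments, blank, code, verbs

    for name in sorted(os.listdir("exported_subsystem")):   # hypothetical export directory
        print(name, count_member(os.path.join("exported_subsystem", name)))

A verb count like the last figure is often a more meaningful size measure for COBOL than raw lines, since it ignores comments and data division boilerplate.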

Related

How to find the abnormal IDs from so many IDs

We run an affiliate program. Users who sign up can gain points when they successfully recruit other users. However, spammers are abusing this program, and automatically signing up large numbers of accounts. We want to prevent this from happening by closing down clearly machine-generated accounts. My idea for this is to write a program to identify machine-generated account names, or at least select a subset for manual inspection.
So far, we have found that there are two types of abnormal ids:
The first one is that there are some IDs that look very similar to others, such as:
wss12345
wss12346
wss12347
test1
test2
...
The second one is that there are some IDs that look randomly generated, without any pattern, such as:
MiDjiSxxiDekiE
NiMjKhJixLy
DAFDAB7643
...
For the first one, I use the Levenshtein (edit) distance. This method can find some of the IDs, as illustrated in type 1. (I have done this, and it performs well.)
For the second one, I can calculate a probability for the IDs, like this:
id = "DAFDAB7643"
p(id) = p(D)*p(A|D)*p(F|A)*p(D|F)*...*p(3|4)
So I can use the probability to filter out the abnormal ids. (Just an idea; I haven't tried it out.)
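To make the idea concrete, here is a minimal sketch of that character-bigram scoring, assuming it is trained on a corpus of known-good IDs; the variable names, training data and the add-one smoothing constant are my own choices for illustration, not anything prescribed.

    # Minimal sketch of the character-bigram probability idea, assuming a
    # corpus of known-good IDs to train on; add-one smoothing is arbitrary.
    import math
    from collections import defaultdict

    def train(good_ids):
        counts = defaultdict(lambda: defaultdict(int))
        for uid in good_ids:
            padded = "^" + uid.lower()          # "^" marks the start of the ID
            for prev, cur in zip(padded, padded[1:]):
                counts[prev][cur] += 1
        return counts

    def log_prob(uid, counts, alphabet_size=37):
        padded = "^" + uid.lower()
        score = 0.0
        for prev, cur in zip(padded, padded[1:]):
            total = sum(counts[prev].values())
            # add-one smoothing so unseen bigrams don't zero out the product
            score += math.log((counts[prev][cur] + 1) / (total + alphabet_size))
        return score / max(len(uid), 1)         # normalise by length

    counts = train(["john_smith", "mary77", "dave.b"])   # hypothetical training data
    for uid in ["paul92", "MiDjiSxxiDekiE"]:
        print(uid, round(log_prob(uid, counts), 2))      # lower = more "random-looking"

Working with the length-normalised log-probability and flagging IDs below a threshold avoids numerical underflow and keeps long IDs from being penalised just for being long.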
Can anyone give me other suggestions about this topic? How else could I approach this problem? Can you see flaws or omissions in my attempts?
1. Assuming that these new accounts refer back to the recruiter's ID, I'd look at the rate and/or sheer number of new accounts associated with a given recruiter.
2. Some analysis of IP addresses or similar may also indicate whether multiple users are coming from the same computer.
3. I'd use a dictionary of words, and in effect do the reverse of detecting poor passwords: human user names should contain dictionary words or personal names, lack punctuation, not include repeated characters, be mostly lower case, etc.
4. Sort of going back to 1. above: if a recruiter has an anomalously tight cluster of IDs, using the features you've already identified, that would be a good flag. I think this is, essentially, #larsmans's comment directly under the question.
I'd be curious to know if re-purposing password checking algorithms (item 3) provides any benefit.
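As a rough illustration of item 3, here is a small, hypothetical heuristic check; the word list, individual rules, weights and threshold are all invented for the example and would need tuning against your real data.

    # Hypothetical heuristic score for "machine-looking" user names.
    # The word list, weights and threshold are illustrative only.
    import re

    COMMON_WORDS = {"test", "john", "mary", "dave", "smith", "cool", "gamer"}

    def looks_machine_generated(name):
        n = name.lower()
        score = 0
        if not any(w in n for w in COMMON_WORDS):
            score += 1                                          # no dictionary word
        if re.search(r"\d{4,}", n):
            score += 1                                          # long digit run
        if sum(1 for i, c in enumerate(name) if i > 0 and c.isupper()) >= 3:
            score += 1                                          # scattered mid-name capitals
        if sum(c in "aeiou" for c in n) < len(n) * 0.2:
            score += 1                                          # barely pronounceable
        return score >= 2                                       # arbitrary threshold

    for name in ["wss12345", "MiDjiSxxiDekiE", "john_smith"]:
        print(name, looks_machine_generated(name))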
You're not telling us what sort of site you are running, so this is a bit on the speculative side; but consider Stack Overflow as a prime example of successfully promoting good behavior through the use of a user reputation system, and weeding out many kinds of unwanted behaviors.
A quick, hackish fix might be to progressively deduct from the score as the number of dormant recruit accounts grows, but a more rewarding and compelling fix is to award higher reputation scores for actually contributing to the site's content. However, this depends on the type of site you have; a stock market tips site, say, obviously works quite differently from a technical discussion forum.

practical problems of transforming data in data warehouse

I need to explain the practical problems that might be encountered when transforming their transactional (and other) data from their diverse sources into the Data Warehouse. To my knowledge, this is about cleansing and scrubbing data. If anyone knows about any practical problems, please help me. Thanks for your help.
That's a broad topic, but I'll offer a few good starting points.
For starters, think about history. If a transaction updates some data point, do you need to apply that retroactively, or do you need to remember what the value was at any given point in time? For example, suppose you have a monthly report of customers by city, and one of your customers moves. How should the DW reflect that?
Think about data acceptance. Is every input row a good input? For example, if you're dealing with web data, there are crawlers and spammers that you might not want to count the same as you count user traffic.
Think about data synchronization. Do all your inputs use the same keys? Do you know how to translate between them? Does Team A mean the same thing by "cust_id" as Team B does? A project glossary is very helpful here.
Think about localization. Are your inputs all in the same time zone? Do they all use the same calendar system? Do you need to handle Unicode?
Think about reporting. Are the data you're capturing able to answer the questions people will ask of the DW? If not, how can you capture data that can?
Think about presentation. Should you be showing customers the same data you're using for internal reporting? Does finance need to see a different slice of the data than marketing?
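As a toy illustration of the synchronization and localization points above, here is a hypothetical transform step that maps source-specific customer keys onto a shared surrogate key and normalises timestamps to UTC; the key map, field names and time zones are invented for the example.

    # Hypothetical transform step: reconcile keys and time zones across sources.
    # The key map, field names and time zones are invented for the example.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    KEY_MAP = {                      # source system -> {source key: warehouse key}
        "team_a": {"A-1001": 1, "A-1002": 2},
        "team_b": {"CUST01": 1, "CUST07": 3},
    }

    def transform(row, source, source_tz):
        wh_key = KEY_MAP[source].get(row["cust_id"])
        if wh_key is None:
            return None                               # unmatched key -> send to a reject file
        local = datetime.fromisoformat(row["tx_time"]).replace(tzinfo=ZoneInfo(source_tz))
        return {
            "customer_key": wh_key,
            "tx_time_utc": local.astimezone(ZoneInfo("UTC")).isoformat(),
            "amount": row["amount"],
        }

    print(transform({"cust_id": "CUST07", "tx_time": "2023-03-01T09:30:00", "amount": 42.5},
                    "team_b", "Europe/Stockholm"))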
This really only scratches the surface of the issues that come up on a major DW project. I would refer you to Ralph Kimball's assorted books on Data Warehousing for a more in depth discussion of problems and solutions. Hope this helps you get started.
You give the answer in your question.
According to my knowledge this is about cleansing and scrubbing data.
And you are correct. Cleansing data means that you have a company-wide list of clean element attributes, and a mapping that changes the unclean elements into clean elements.
Processing the data against the clean element attributes is a piece of cake compared to creating the company-wide list of clean element attributes.
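For what the mechanical side of that mapping can look like, here is a minimal, hypothetical sketch; the attribute list and the corrections are made up, and the hard part, as noted below, is getting people to agree on them.

    # Hypothetical cleansing step: map unclean source values onto the agreed,
    # company-wide list of clean element attributes. The lists are invented.
    CLEAN_COUNTRIES = {"SE", "NO", "DK", "FI"}

    COUNTRY_FIXES = {                 # unclean value -> clean attribute
        "sweden": "SE", "swe": "SE", "sverige": "SE",
        "norway": "NO", "nor": "NO",
    }

    def clean_country(raw):
        value = raw.strip().upper()
        if value in CLEAN_COUNTRIES:
            return value
        fixed = COUNTRY_FIXES.get(raw.strip().lower())
        return fixed if fixed else "UNKNOWN"          # or route to a manual-review queue

    for raw in ["  Sweden ", "SE", "Sverige", "Atlantis"]:
        print(repr(raw), "->", clean_country(raw))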
You have to get people from different departments to agree on what data to warehouse, and to agree on what each element means. This is a difficult sociological problem. It's not a terribly hard technical problem.
Good luck getting your company-wide list of clean element attributes.

Software development metrics and reporting [closed]

I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation.
As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):
For a team's own internal use when combined with other measurements.
For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if team A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40).
For senior management when presented as an aggregated statistic across a number of teams or a whole department.
But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code.
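To make the two levels of aggregation concrete, here is a rough sketch of the sort of roll-up I have in mind: team-by-team trends for the teams and their mentors, and only the departmental figure for senior management. The team names and numbers are invented.

    # Hypothetical roll-up: team-level coverage trends vs. a single aggregate
    # figure for senior management. Teams and numbers are made up.
    coverage = {                       # team -> (last month %, this month %)
        "team_a": (80, 75),
        "team_b": (40, 50),
    }

    for team, (prev, cur) in coverage.items():
        trend = cur - prev
        flag = "investigate" if trend < 0 else "ok"
        print(f"{team}: {prev} -> {cur} ({trend:+d})  [{flag}]")   # for the team / mentors

    dept_average = sum(cur for _, cur in coverage.values()) / len(coverage)
    print(f"department coverage: {dept_average:.1f}%")             # for senior management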
I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation?
I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations?
EDIT
We are definitely wanting to use metrics as a tool for organisational improvement not as a tool for individual performance measurement.
A tale from personal experience. Apologies for the length.
A few years ago our development group tried setting "proper" measurable objectives for individuals and team leaders. The experiment lasted for just one year, because hard metrics didn't really work very well for individual objectives (see my question on the subject for some links and further discussion).
Note that I was a team leader, and involved in planning it all with my technical boss and the other team leaders, so the objectives weren't something dictated from on high by clueless upper management -- at the time we really wanted them to work. It is also worth noting that the bonus structure inadvertently encouraged competition between developers. Here are my observations on the things we tried.
Customer-visible issues
In our case, we counted outages on the service we provided to customers. In a shrink-wrapped product it might be the number of bugs reported by customers.
Advantages: This was the only real measure that was visible to upper management. It was also the most objective, being measured outside the development group.
Disadvantages: There weren't that many outages -- just around one per developer for the whole year -- which meant that failing or exceeding the objective was a matter of "pinning blame" for the few outages that did occur in each team. This led to bad feeling and loss of morale.
Amount of work completed
Advantages: This was the only positive measure. Everything else was "we notice when bad things happen," which was demoralising. Its inclusion was also necessary because, without it, a developer who did nothing all year would exceed all the other objectives, which clearly wouldn't be in the interests of the company. Measuring the amount of work completed checked the natural optimism of developers when estimating task size, which was useful.
Disadvantages: The measure of "work completed" was based on estimates provided by the developers themselves (usually a good thing), but making it part of their objectives encouraged gaming of the system to inflate estimates. We had no other viable measure of work completed: I think the only possible valuable way of measuring productivity is "impact on the company bottom line," but most developers are so far removed from direct sales that this is rarely practical at an individual level.
Defects found in new production code
We measured defects introduced into new production code during the year, as it was felt that bugs from previous years should not count against any individual in this year's objectives. Defects spotted by internal quality teams were included in the count even if they didn't impact customers.
Advantages: Surprisingly few. The time lag between the introduction of a defect and its discovery meant that there was really no immediate feedback mechanism to improve code quality. Macro trends at a team level were more useful.
Disadvantages: There was a heavy focus on the negative, since this objective was only invoked when a defect was found and we needed someone to blame for it. Developers were reluctant to record defects they found themselves, and a simple count meant that minor bugs were as bad as severe problems. Since the number of defects per individual was still quite low, the number of minor and severe defects didn't even out as it might with a larger sample. Old defects were not included, so the group's reputation for code quality (based on all bugs found) did not always match the measurable introduced-this-year count.
Timeliness of project delivery
We measured timeliness as the percentage of work delivered to internal QA teams by the stated deadline.
Advantages: Unlike counting defects, this was a measure that was under immediate, direct control of the developers, as they effectively decided when the work was complete. The presence of the objective focused the mind on completing tasks. This helped the team commit to realistic amounts of work, and improved the perception by internal customers of the development group's ability to deliver on promises.
Disadvantages: As the only objective directly under the developers' control, it was maximised at the expense of code quality: on the day of a deadline, given the choice between saying a task is complete or doing further testing to improve confidence in its quality, the developer would choose to mark it complete and hope any resulting bugs never come to the surface.
Complaints from internal customers
To gauge how well developers communicated with internal customers during development and subsequent support of their software, we decided that the number of complaints received about each individual would be recorded. The complaints would be validated by the manager, to avoid any possible vindictiveness.
Advantages: Really nothing I can recall. Measured at a sufficiently large group level it becomes a more useful "customer satisfaction" score.
Disadvantages: Not only highly negative, but also a subjective measure. As with other objectives, the numbers for each individual were around the zero mark, which meant that a single comment about someone could mean the difference between "infinitely exceeded" and "did not meet".
General comments
Bureaucracy: While our task management tools held much of the data for these metrics, there was still quite a lot of manual effort involved to collate it all. The time spent obtaining all the numbers was not enjoyable, generally focused on negative aspects of our work and may not even have been reclaimed by increased productivity.
Morale: For the measures where individuals were blamed for problems, not only did those with "bad" scores feel demotivated, but so did those with "good" scores, as they didn't like the loss in team morale and sometimes felt they were ranked higher not because they were better but because they were luckier.
Summary
So what did we learn from the episode? In later years we tried to re-use some of the ideas but in a "softer" way, where there was less emphasis on individual blame and more on team improvement.
It is impossible to define objectives for individual developers that are objectively measurable, add value to the company and cannot be gamed, so don't bother to try.
Customer issues and defects can be counted at a wider team level, if the location of the defect is unequivocally the responsibility of that team -- that is, you don't ever have to play the "blame game".
Once you measure defects only at the level of responsibility for a code module, you can (and should) measure old bugs as well as new ones, since it is in that group's interest to eliminate all defects.
Measuring defect counts at a group level increases the sample size per group, and so anomalies between minor and severe defects are smoothed out and a simple "number of bugs" measure can mean something, such as to see if you are improving month-on-month.
Include something that upper management care about, because keeping them happy is your primary purpose as a development group. In our case it was customer-visible outages, so even if the measure is sometimes arbitrary or seemingly unfair, if it's what the bosses are measuring then you need to take notice too.
Upper management don't need to see metrics they don't have in their own objectives. That way you avoid tempting them to blame individuals for errors.
Measuring timeliness of project delivery did change developer behaviour and put a focus on completing tasks. It improved estimation and allowed the group to make realistic promises. If it were easy to collect the timeliness information then I would consider using it again at a team level to measure improvement over time.
All of this doesn't help when you are required to set measurable objectives for individual developers, but hopefully the ideas will be more useful for team improvement.
The key thing about metrics is knowing what you are using them for. Are you using them as a tool for improvement, a tool for reward, a tool for punishment, etc. It sounds like you're planning to use them as a tool for improvement.
The number one principle when setting metrics is to keep the information relevant so that the person receiving it can use it to make a decision. Most likely a senior manager cannot dictate at the micro level whether you need more tests, less complexity, etc. But a team leader can do that.
Therefore, I don't believe a measure of code coverage is going to be useful to management beyond the individual team. At the macro level, the organisation is probably interested in:
Cost of delivery
Timeliness of delivery
Scope of delivery & external quality
Internal quality won't be high on their list of things to cover off. It's a development team's mission to make it clear that internal quality (maintainability, test coverage, self-documenting code, etc) is a key factor in achieving the other three.
Therefore the metrics you target at more senior managers should cover off those three, such as:
Overall Velocity (note that comparing velocity between teams is often artificial)
Expected vs Actual scope delivered to agreed timelines
Number of production defects (possibly per capita)
And measure things like code coverage, code complexity, cut 'n' paste score (code repetition using flay or similar), method length, etc at a team level where the recipients of the information can really make a difference.
A metric is a way of answering a question about a project, team or company. Before you start looking for the answers, you need to decide what questions you want to ask.
Typical questions include:
what is the quality of our code?
is the quality improving or degrading over time?
how productive is the team? Is it improving or degrading?
how effective is our testing?
...and so on.
Each question will require a different set of metrics to answer. Collecting metrics without knowing what questions you want answered is at best a waste of time and at worst counterproductive.
You also need to be aware that there is an 'uncertainty principle' at work - unless you are very careful the act of collecting metrics will change people's behaviour, often in unexpected and sometimes detrimental ways. This is especially so if people believe they are being evaluated on the metrics, or worse still have the metrics tied to some reward or punishment scheme.
I recommend reading Gerald Weinberg's Quality Software Management Vol 2: First Order Measurement. He goes into a lot of detail on software metrics, but says the most important are often what he calls "Zero Order Measurement" - asking people their opinion on how a project is going. All four volumes in the series are expensive and hard to get hold of, but well worth it.
Software writing
What must be optimised?
CPU(s) use, memory(s) use, memory cache(s) use, user time use, code size at run-time, data size at run-time, graphics performance, file access performance, network access performance, bandwidth use, code conciseness and readability, electricity use, (count of) distinct API calls used, (count of) distinct methods and algorithms used, maybe more.
How much must it be optimised?
It must be optimised the minimum reasonable amount (except in areas where surpassing acceptance test criteria is desirable) required to pass acceptance tests, facilitate maintenance, facilitate audit and meet user requirements.
("... for legal/illegal input test data and legal/illegal test events in all test states at all required test data volumes and test request volumes for all current and future test integration scenarios.")
Why the minimum reasonable amount?
Because optimised code is harder to write and so costs more.
What leadership is required?
Coding standards, basic structure, acceptance criteria and guidance on levels of optimisation required.
How can success of software writing be measured?
Cost
Time
Acceptance test passes
Extent to which the acceptance tests that it is desirable to surpass are surpassed
User approval
Ease of maintenance
Ease of audit
Degree of absence of over-optimisation
What cost/time should be ignored in assessing aggregate performance of programmers?
Wasted cost/time incurred because of requirements (inc architecture) changes
Extra cost/time incurred because of deficiencies in platforms/tools
But this cost/time should be included in assessing aggregate performance of teams (inc architects, managers).
How can success of architects be measured?
Other measures plus:
Instances of "avoiding early" being affected by deficiencies in platforms/tools
Degree of absence of changes in architecture
As I said in What is the fascination with code metrics?, metrics include:
different populations, meaning the scope of interest is not the same for a developer as for a manager
trends, meaning any metric in itself is meaningless without its associated trend, which is what you need in order to decide whether to act upon it or to ignore it.
We are using a tool able to provide:
lots of micro-level metrics (interesting for developers), with trends.
lots of rules with multi-level (UI, Data, Code) static analysis capabilities
lots of aggregation rules (meaning that vast number of metrics is condensed into several domains of interest, suitable for higher-level audiences)
The result is an analysis which can be drilled down, from high-level aggregation domains (security, architecture, practices, documentation, ...) all the way down to individual lines of code.
The current feedback is:
project managers can get defensive very quickly when some rules are not respected and their global score drops significantly. Each study has to be re-tailored to respect each project's quirks.
The benefit is the definition of a contract where exceptions are acknowledged but rules to be respected are defined.
higher levels (IT department, stakeholders) use the global scores just as one element of their evaluation of the progress made.
They will actually look more closely at other elements based on delivery cycles: how often are we able to iterate and put an application into production? how many errors did we have to solve before that release (in terms of merges, or of a pre-production environment not correctly set up)? what immediate feedback is generated by a new release of an application?
So:
which metrics are useful to which stakeholders, and at what level of aggregation
At high level:
the (static analysis) metrics are actually the result of low-level metric aggregations, and organized by domains.
Other metrics (more "operational-oriented", based on the release cycle of the application, and not just on the static analysis of the code) are taken into account
The actual ROI is achieved through other actions (like six-sigma studies)
At lower level:
the static analysis is enough (but it has to encompass multi-tier applications, sometimes with multi-language development)
the actions are driven by the trends and their importance
the study has to be approved/supported by all levels of the hierarchy to be accepted/acted upon (in particular, the budget for the ensuing refactoring has to be validated)
If you have some Lean background/knowledge, then I would suggest the system that Mary Poppendieck recommends (that I've already mentioned in this previous answer). This system is based on three holistic measurements that must be taken as a package:
Cycle time
From product concept to first release or
From feature request to feature deployment or
From bug detection to resolution
Business Case Realization (without this, everything else is irrelevant)
P&L or
ROI or
Goal of investment
Customer Satisfaction
e.g. Net Promoter Score
The aggregation level is product/project level and I believe that these metrics are helpful for everybody (developers should never forget that they don't write code for fun, they write code to create value and should always keep that in mind).
Teams may (and actually do) use technical metrics to measure conformance to quality standards, which is integrated into the Definition of Done (as "no increase of the technical debt"). But high quality is not an end in itself, it's just a means to achieve short cycle time (to be a fast company), which is the real target (together with Business Case Realization and Customer Satisfaction).
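If it helps to see how little machinery the cycle-time measurement needs, here is a hypothetical sketch computing it from request and deployment dates; the records and field names are invented, and in practice they would come from an issue tracker or deployment log.

    # Hypothetical cycle-time report: days from feature request to deployment.
    # The records and field names are invented for illustration.
    from datetime import date
    from statistics import median

    features = [
        {"id": "F-101", "requested": date(2023, 1, 5),  "deployed": date(2023, 1, 20)},
        {"id": "F-102", "requested": date(2023, 1, 9),  "deployed": date(2023, 2, 14)},
        {"id": "F-103", "requested": date(2023, 2, 1),  "deployed": date(2023, 2, 10)},
    ]

    cycle_times = [(f["deployed"] - f["requested"]).days for f in features]
    print("per feature:", dict(zip((f["id"] for f in features), cycle_times)))
    print("median cycle time:", median(cycle_times), "days")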
This is a bit of a side note to the main question, but I had a very similar experience to Paul Stephenson's answer above. One thing I would add to that is about the collection of data and the visibility of metrics.
In our case, the development director was meant to collate a bunch of data from various disparate systems and distribute individual metric results once a month. This often didn't happen, as it was a time consuming job and he was a busy man.
The results of this were:
Unhappy developers, as performance bonuses were based on metrics and people didn't know how they were getting on.
Time-consuming multiple entry of data into various different systems.
If you are going down this route, you need to be sure that all metric data can be collated automatically and is easily visible to those it affects.
One of the interesting approaches that's currently getting some hype is Kanban. It's fairly Agile. What's particularly interesting is that it permits a metric of "work done" to be applied. I haven't used/encountered this in actual practice yet, but I'd like to work towards getting a kanban-ish flow going at my job.
Interestingly, I just finished reading Peopleware, and the authors strongly discourage individual metrics being made visible to superiors (even direct managers), but argue that aggregate metrics should be very visible.
As far as code specific metrics I think it's good for a team to know the state of the code at the current time, and to know the trends affecting the code as it matures and grows.
The question is obviously not focussed on .NET, but I think the .NET product NDepend has done a lot of work to define and document common metrics that are useful.
The documentation section on metrics is educational reading, even if you're not doing .NET.
Software metrics have been with us for a long time, and as best I can tell nothing to date has emerged, individually or in aggregate, that is capable of guiding projects during development. The nut of the problem is that we want to use objective measures, and these can only measure what has happened, not what is happening or about to happen. By the time we have measured, analyzed and interpreted some series of metrics, we are reacting to things that have already gone wrong, or very occasionally, gone right.
I don't want to underplay the importance of learning from objective metrics, but I do want to point out that this is a reactive, not a proactive, response.
Developing a "confidence index" may be a better way of monitoring whether a project is on track or headed for trouble. Try developing a voting system where a reasonable number of representatives from each project area of interest are asked to anonymously vote their confidence from time to time. Confidence is voted in two areas: 1) things are on track; 2) things will continue to be on track or get back on track.
These are purely subjective measurements from people closest to the "action".
Feed the results into a Kanban-type chart where the columns represent the voting areas and you should have a pretty good idea where to focus your attention. Use question 1 to evaluate whether management reacted to the previous voting cycle appropriately. Use question 2 to identify where management should focus next.
This idea is based on each of us having a comfort level within our own area of responsibility. Our confidence level is a product of experience, knowledge within our domain of expertise, the number and severity of problems we are facing, the amount of time we have to accomplish our tasks, the quality of the information we are working with and a whole bunch of other factors.
MBWA (Management By Walking Around) is often touted as one of the most effective tools we have; this is a variation of it.
This technique is not much use at the level of individual teams because it only reflects the general mood of the team. Kind of like using someone's watch to tell them the time. However, at higher levels of management it should be quite informative.
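A small, hypothetical sketch of what tallying such anonymous votes per project area might look like; the areas, votes and the 1-5 scale are invented for illustration.

    # Hypothetical tally of anonymous confidence votes per project area.
    # Areas, votes and the 1-5 scale are invented for illustration.
    from statistics import mean

    votes = {                      # area -> list of (on_track, will_stay_on_track), 1-5 scale
        "backend":  [(4, 4), (5, 4), (3, 3)],
        "frontend": [(2, 3), (3, 2), (2, 2)],
        "ops":      [(4, 5), (4, 4)],
    }

    print(f"{'area':<10} {'on track':>9} {'outlook':>8}")
    for area, area_votes in sorted(votes.items()):
        on_track = mean(v[0] for v in area_votes)
        outlook = mean(v[1] for v in area_votes)
        print(f"{area:<10} {on_track:>9.1f} {outlook:>8.1f}")   # low scores show where to focus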

Dealing with illogical managers [closed]

At a place I used to work, the typical response to any problem was to blame the hardware or the users for not using the system perfectly. Prior to that job I had already adopted the philosophy that it's my fault until I can prove otherwise (and so far it's been correct at least 99 times out of 100).
One of the last "unsolvable" problems when I was there was an abundance of database timeouts. After months of research, I still only had theories but couldn't prove any of them. One of my developers adamantly suggested replacing the network (every router, switch, access point) but couldn't provide any evidence that the network was the cause; it was, however, "obviously the cause" according to my manager (no development/IT experience), so he took over the problem. One caveat and Fog Creek plug: he couldn't account for the fact that the error reporting via FogBugz worked perfectly, against the same SQL Server as the rest of the data.
A couple of timeout-free months later, my manager boasted that he had fixed the timeouts ("Look, no timeouts!"). I had to hold back from grabbing a rock and saying "Look, no tigers!", but I did ask how he knew they would have occurred, to which I got no response. The timeouts did return (and in greater numbers) a couple of months later.
I'm pretty content with how I handled the situation but I'm curious how the SO crowd would have responded to letting a superior/colleague implement a solution you know (or are very sure) is wrong and will likely waste thousands of dollars?
Let them, but at the same time continue searching for the real cause.
A couple thousand dollars is money well spent if it keeps me from going against that kind of thinking (which is futile).
Well, if the problem is upper management, then I would do as you have done - lodge your complaint, then follow instructions. If it turns out they were right (it happens every now and then) then you look like a good employee despite your misgivings. If it turns out you were right, then they might be more willing to listen to you given that you allowed them their turn.
This is, of course, optimistic.
In the case of a colleague, take the problem up a level and consult your superiors for advice on how to approach the subject. Be fair to both your perspective and that of your colleague, then follow the advice you're given.
Sometimes it's best to leave a manager be. If you think about his pressures and responsibilities, he had to be seen to be doing something rather than "nothing". After enough time, "investigating" looks like nothing is happening to the outside parties that need the timeouts to stop.
By taking an action he creates an opportunity to keep researching. The trick is to find a way to put your solutions in his context. Here is something we can do now, and here's what we can continue to do. For example, "We can replace the networking gear as a precautionary step, and then look at the version control logs to rule out that possibility."
This gives him something proactive so he can look productive up his chain while achieving the solution you want which will ultimately be successful.
In the long term you should look to work for someone who trusts your technical decisions implicitly, whom you can talk candidly with, and who will help you help him navigate the politics in a way where you both know what's going on. If your manager isn't that person, move.
Is this a big problem? It's not your job to save your company dollars, other than that you would like your company to remain solvent so you get paid.
If its just one manager, he will be weeded out sooner or later, if your entire company culture is like this, maybe it would be time to move on.
In the mean time, see if you can see this from your manager's perspective.
I'd consider your manager's intent to be a good thing. It's the people that don't want to bother that I find more difficult. It's just best to find a way to use that energy to be helpful.
One common problem for lots of people (occasionally myself) is that they flail around when trying to diagnose a problem. If you're wildly guessing at it, then, particularly with modern computers, you have only the slimmest possibility of being right. Approaching this type of problem with that type of attitude generally means that you'll never fix it.
The best way of handling complex debugging is by divide and conquer. In this case, first think up a test, then implement it. Did that test act as expected? Depending on where you are with your tests, you are getting closer or farther away from the problem. The key is that ALL of the tests must result in some concrete (objective) behavior. If the results are ambiguous, then the test is useless.
If you're getting a disconnect in part of the system, but some other part is not, then you have a huge amount of valuable information (it also shows it's not the network). What's the difference between the parts? Just start descending until you get somewhere ...
Getting back to your manager. Whenever I encounter that type of personality problem, I try to redirect the energy into something more useful. The desire is there, it just needs some help in getting shaped. If you can convince your manager to make sure the tests are concrete, then if they do enough of them, they'll produce enough information to correctly guess the bug. Sure, a more consistent approach might be faster, but why turn down some free assistance. I generally feel that there is some useful role for anyone on any project, it's all about making it possible to harness their efforts ....
Paul.

How do you incentivize good code? [closed]

Are there any methods/systems that you have in place to incentivize your development team members to write "good" code and add comments to their code? I recognize that "good" is a subjective term and it relates to an earlier question about measuring the maintainability of code as one measurement of good code.
This is tough as incentive pay is considered harmful. My best suggestion would be to pick several goals that all have to be met simultaneously, rather than one that can be exploited.
While most people respond that code reviews are a good way to ensure high quality code, and rightfully so, they don't seem to me to be a direct incentive to getting there. However, coming up with a positive incentive for good code is difficult because the concept of good code has large areas that fall in the realm of opinion and almost any system can be gamed.
At all the jobs I have had, the good developers were intrinsically motivated to write good code. Chicken and egg, feedback, catch 22, call it what you will, the best way to get good code is to hire motivated developers. Creating an environment where good developers want to work is probably the best incentive I can think of. I'm not sure which is harder, creating the environment or finding the developers. Neither is easy, but both are worth it in the long term.
I have found that one part of creating an environment where good developers want to work includes ensuring situations where developers talk about code. I don't know a skilled programmer that doesn't appreciate a good critique of his code. This helps the people that like to be the best get better. As a smaller sub-part of this endeavor, and thus an indirect incentive to create good code, I think code reviews work wonderfully. And yes, your code quality should gain some direct benefit as well.
Another technique co-workers and I have used to communicate good coding habits is a group code review. It was less formal and allowed people to show off new techniques, tools and features. Critiques were made, kudos were given publicly, and most developers didn't seem to mind speaking in front of a small developer group where they knew everyone. If management cannot see the benefit in this, spring for sammiches and call it a brown bag. Devs will like free food too.
We also made an effort to get people to go to code events. Granted, depending on how familiar you all are with the topic, you might not learn too much, but it keeps people thinking about code for a while and gets people talking in an even more relaxed environment. Most devs will also show up if you offer to pick up a round or two of drinks afterwards.
Wait a second, I noticed another theme. Free food! Seriously though, the point is to create an environment where people that already write good code and those that are eager to learn want to work.
Code reviews, done well, can make a huge difference. No one wants to be the guy presenting code that causes everyone's eyes to bleed.
Unfortunately, reviews don't always scale well either up (too many cooks and so on) or down (we're way too busy coding to review code). Thankfully, there are some tips on Stack Overflow.
I think formal code reviews fill this purpose. I'm a little more careful not to commit crappy looking code knowing that at least two other developers on my team are going to review it.
Make criteria public and do not connect incentives with any sort of automation. Publicize examples of what you are looking for. Be nice and encourage people to publicize their own bad examples (and how they corrected them).
Part of the culture of the team is what "good code" is; it's subjective to many people, but a functioning team should have a clear answer that everyone on the team agrees upon. Anyone who doesn't agree will bring the team down.
I don't think money is a good idea. The reason is that it is an extrinsic motivator. People will begin to follow the rules because there is a financial incentive to do so, and this doesn't always work. Studies have shown that as people age, financial incentives are less of a motivator. That being said, the quality of work in this situation will only be equal to the level you set to receive the reward. It's a short-term win, nothing more.
The real way to incent people to do the right thing is to convince them their work will become more rewarding. They'll be better at what they do and how efficient they are. The only real way to incentivize people is to get them to want to do it.
This is advice aimed at you, not your boss.
Always remind yourself of the fact that if you go that extra mile and write the best code you can now, that'll pay off later when you don't have to refactor your stuff for a week.
I think the best incentive for writing good code is by writing good code together. The more people write code in the same areas of the project, the more likely it will be that code conventions, exception handling, commenting, indenting and general thought process will be closer to each other.
Not all code is going to be uniform, but upkeep usually gets easier when people have coded a lot of work together since you can pick up on styles and come up with best practice as a team.
You get rid of the ones that don't write good code.
I'm completely serious.
I agree with Bill The Lizard. But I wanted to add onto what Bill had to say...
Something that can be done (assuming resources are available) is to get some of the other developers (maybe 1 who knows something about your work, 1 who knows your work intimately, and maybe 1 who knows very little about it) together and you walk them through your code. You can use a projector and sit them down in a room and you can drive through all of your changes. This way, you have a mixed crowd that can provide input, ask questions, and above all make you a better developer.
There is no need for only negative feedback; however, it will happen at times. It is important to take negative feedback as constructive, and perhaps to couch your own feedback in a constructive way when giving it.
The idea here is that, if you have comment blocks for your functions, or a comment block that explains some tricky math operations, or a simple commented line that explains why you are required to change the date format depending on the language selected, then you will not be required to instruct the group line by line what your code is doing. This is a way to annotate the changes you have made, and it allows the other developers to keep thinking about the fuzzy logic you had in your previous function because they can read your comments and see what you did elsewhere.
This is all coming from a real life experience and we continue to use this approach at my job.
Hope this helps, good question!
Hm.
Maybe the development team should do code reviews of each other's code. That could motivate them to write better, commented code.
Code quality may be like pornography - as the famous quote from Justice Potter Stewart goes, "I know it when I see it".
So one way is to ask others about the code quality. Some ways of doing that are...
Code reviews by their peers (and reviews of others' code by them), with ease of comprehension being one of the criteria in the review checklist (personally, I don't think that necessarily means comments; sometimes code can be perfectly clear without them)
Request that issues caused by code quality are raised at retrospectives (you do hold retrospectives, right?)
Track how often changes to their code work the first time, or whether they take several attempts.
Ask for peer reviews at the annual (or whatever) review time, and include a question about how easy it is to work with the reviewee's code as one of the questions.
Be very careful with incentivizing: "What gets measured gets done". If you reward lines of code, you get bloated code. If you reward commenting, you get unnecessary comments. If you reward lack of bugs found in the code, your developers will do their own QA work which should be done by lower-paid QA specialists. Instead of incentivizing parts of the process, give bonuses for the whole team's success, or the whole company's.
IMO, a good code review process is the best way to ensure high code quality. Pair programming can work too, depending on the team, as a way of spreading good practices.
The last person who broke the build or shipped code that caused a technical support call has to make the tea until somebody else does it next. The trouble is this person probably won't give the tea the attention it requires to make a real good cuppa.
I usually don't offer my team monetary awards, since they don't do much and we really can't afford them, but I usually sit down with each team member and go over the code with them individually, pointing out what works ("good" code) and what does not ("bad" code). This seems to work very well, since I don't get nearly as much junk code as I did before we started this process.

Resources