Tuesday, January 29, 2008

Measuring Programmer Productivity

In my role as CIO at CareGroup and Harvard Medical School, I oversee nearly 100 software developers. Many organizations purchase software or outsource development to avoid managing developers, but we Build AND Buy and will continue to do so. I was recently asked to share the metrics we use to evaluate our developers. Here is our framework:

1. How many applications does the developer maintain? Applications vary in size and complexity, so that must also be documented. This helps us understand how many staff are needed as our application portfolio expands.

2. How many new applications can a developer create annually? Some programmers develop version 1.0 of new applications, while others maintain existing applications or modules. This helps us understand our development skill set and our ability to take on new development projects.

3. What is the lifecycle stage of each application assigned to a developer? (new, mature, or needing replacement within two years). This is a reasonable way to measure work completed and forecast upcoming work.

4. How much bug-free code is developed per year? This is an imperfect measure, since it captures quantity rather than quality, but it is useful as a proxy for coding productivity.

5. Are the application stakeholders satisfied with the quantity and quality of application features developed?

We piloted these measures at Harvard Medical School by:

1. Inventorying all the major web applications by developer, including the technologies used.

2. Categorizing this inventory into applications which are new and those which are in maintenance. We also rated each application for intensity of support (High, Medium, Low) based on how frequently code changes have historically been required to maintain it.

3. Rating each application's lifecycle stage.

4. Documenting the number of application releases by each developer over the past year.

5. Creating a survey to measure user satisfaction with each application.

In the end, we published a detailed scorecard for each developer and a summary for all developers. This data alone doesn't tell the whole story, but it does help us plan and better manage our development staff.
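
For readers who want to see how the pieces fit together, here is a minimal sketch, in Python, of what a per-developer scorecard record and its roll-up into a department summary might look like. The field names, types, and sample values are hypothetical illustrations of the metrics above, not our actual schema or tooling.

from dataclasses import dataclass

# Hypothetical scorecard record; field names and values are illustrative only.
@dataclass
class DeveloperScorecard:
    developer: str
    applications_maintained: int        # metric 1: applications currently supported
    new_applications_this_year: int     # metric 2: version 1.0 applications delivered
    lifecycle_stages: dict              # metric 3: app name -> "new" | "mature" | "replace"
    releases_this_year: int             # pilot step 4: releases over the past year
    stakeholder_satisfaction: float     # metric 5: average survey score on a 1-5 scale

def summarize(scorecards):
    """Roll individual scorecards up into a department-level summary (assumes a non-empty list)."""
    return {
        "developers": len(scorecards),
        "applications_maintained": sum(s.applications_maintained for s in scorecards),
        "new_applications": sum(s.new_applications_this_year for s in scorecards),
        "releases": sum(s.releases_this_year for s in scorecards),
        "avg_satisfaction": sum(s.stakeholder_satisfaction for s in scorecards) / len(scorecards),
    }

# Example with made-up numbers:
example = DeveloperScorecard(
    developer="Developer A",
    applications_maintained=6,
    new_applications_this_year=1,
    lifecycle_stages={"App One": "mature", "App Two": "new"},
    releases_this_year=12,
    stakeholder_satisfaction=4.2,
)
print(summarize([example]))

A real scorecard would also carry the support-intensity rating (High, Medium, Low) and the technologies captured during the inventory.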

3 comments:

  1. Hi John:

    Nice article, and it's really great how you share this kind of information with others.

    I was wondering if there was a profile of the skill sets needed across the applications. Do you have a common (or close to common) development platform and database for everything or is it very diverse like a lot of healthcare organizations? Some environments seem to require more time or effort to deal with than others.

    Or, is this a function of the application for your model?

    -Lyle

  2. John,
    MIS research studies over the years have not shown the value of "lines of code produced" as a programmer productivity metric. Clearly, lines of code produced do not account for the bulk of time spent in programming activities, which consists of analysis and testing. Writing the actual lines of code is trivial. Thinking about how best to code a solution and then testing that solution is what consumes the great majority of programmer time. And the more elegant the solution, the fewer lines of code result, completely confounding the simplistic metric.

    A more rational way to measure programmer productivity is with functional goals, for example, completion of each functional module of a given project, as well as an objective assessment of how well the functions fulfill the objectives laid out in the project plan. Augment this with some measurement of the efficient use of resources (debugging time, machine cycles, etc.) and then you have something approaching an honest measurement of productivity. You could throw in a bonus for the cleverness of the solution, too.

    Programmers are often intensely frustrated by arbitrary and uninformed performance assessments of the kind that are inherent in the "lines of code" metric. It's the kind of thing that leads them to seek greener and better measured pastures where they are treated as the creative professionals they are, rather than as code monkeys.

    MarianC, Ph.D. MIS

  3. Interesting developer scorecard. I am assuming the developers deliver functionality in very small increments because of the number of releases on page 2 of the scorecard. If this is correct, do you find it better in your environment to release many small functional gains or to batch the functions into larger releases? This is something our organization consistently debates. On the one hand, small releases push incremental gains to the users much more quickly and allow small but focused improvements; however, testing and consistently integrating those gains into processes tend to be a challenge. Larger releases allow more user participation in testing for defined periods of time and generally more focused training. What is your take?
