Achieving 10x

Crossed paths with an old but still relevant conversation thread on Slashdot asking “What practices impede developers’ productivity?” The conversation is in response to an excellent post by Steve McConnell addressing productivity variations among software developers and teams and the origin of “10x” – that is, the observation noted in the wild of “10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.”

The Slashdot conversation has two main themes. The first focuses fundamentally on communication: “good” meetings, “bad” meetings, the time of day meetings are held, status reports by email (good), status reports by email (bad), interruptions for status reports, perceptions of productivity among non-technical coworkers and managers, unclear development goals, unclear development assignments, unclear deliverables, too much documentation, too little documentation, and poor requirements.

A second theme in the conversation reflects what system dynamics calls “shifting the burden”: individuals or departments that do not need to shoulder the financial burden of holding repetitively unproductive meetings involving developers, arrogant developers who believe they are beholden to no one, the failure to run high-quality meetings, coding fast and leaving thorough testing for QA, reliance on tools to track and enhance productivity (and then blaming them when they fail), and, again, poor requirements.

These are all legitimate problems. And considered as a whole, they defy strategic interventions to resolve. The better resolutions are more tactical in nature and rely on the quality of leadership experience in the management ranks. How good are they at 1) assessing the various levels of skill among their developers and 2) combining those skills to achieve a particular outcome? There is a strong tendency, particularly among managers with little or no development experience, to consider developers as a single complete package. That is, every developer should be able to write new code, maintain existing code (theirs and others), debug any code, test, and document. And as a consequence, developers should be interchangeable.

This is rarely the case. I can recall an instance where a developer, I’ll call him Dan, was transferred into a group for which I was the technical lead. The principal product for this group had reached maturity and as a consequence was beginning to become the dumping ground for developers who were not performing well on projects requiring new code solutions. Dan was one of these. He could barely write new code that ran consistently and reliably on his own development box. But what I discovered is that he had a tenacity and technical acuity for debugging existing code.

Dan excelled at this and thrived when this became the sole area of his involvement in the project. His confidence and respect among his peers grew as he developed a reputation for being able to ferret out particularly nasty bugs. Then management moved him back into code development where he began to slide backward. I don’t know what happened to him after that.

Most developers I’ve known have had the experience of working with a 10x developer, someone with a level of technical expertise and productivity that is undeniable, a complete package. I certainly have. I’ve also had the pleasure of managing several. Yet how many 10x specialists have gone underutilized because management was unable to correctly assess their skills and assign them tasks that match their skills?

Most of the communication issues and shifting the burden behaviors identified in the Slashdot conversation are symptomatic of management’s unrealistic expectations of relative skill levels among developers and their inability to assess and leverage the skills that exist within their teams.


Image by alan9187 from Pixabay

Cook’s Theory of Performance Evaluation

The ideas presented here evolved from a post titled “Evaluate people at their best or their worst?” on John Cook’s blog. In order to make this post a little tighter, I’ll refer to John’s ideas as “Cook’s Theory of Performance Evaluation” and describe it as follows.

John identifies three ways a person’s performance can be evaluated:

  1. How good are they at their worst?
  2. How good are they on average?
  3. How good are they at their best?

Cook observes that schools evaluate performance using the first two benchmarks and markets use the third benchmark. To illustrate, consider the following assignment grades for a student in a hypothetical course:

| Assignment | Score |
|------------|-------|
| 1          | 100   |
| 2          | 90    |
| 3          | 92    |
| 4          | 87    |
| 5          | 100   |
| 6          | 45    |
| 7          | 90    |
| 8          | 100   |
| 9          | 95    |
| 10         | 100   |

Average: 89.9
Result: B+

Graphically, this looks like:

The student’s worst performance pulls the grade average down and results in a B+ for the course. Performance evaluation in markets, however, is only interested in how well you do, that is, your best. Consider the following sales volumes for a fictional author for each of ten books:

| Book | Copies Sold |
|------|-------------|
| 1    | 1,000       |
| 2    | 2,500       |
| 3    | 900         |
| 4    | 1,100       |
| 5    | 3,400       |
| 6    | 1,000,000   |
| 7    | 42,000      |
| 8    | 6,500       |
| 9    | 2,750       |
| 10   | 3,100       |

Result: Bestseller!

Graphically, this looks like:

Number 6 must have been some story. But as they say, you can’t argue with success and this author will forever be known as a bestseller. Subsequent flops won’t change that.
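The contrast between the two scoring schemes can be made concrete with a few lines of code. This is an illustrative sketch only: the `evaluate` helper is my own name for computing Cook’s three benchmarks (worst, average, best) over the grade and sales figures shown above.

```python
# Cook's three ways to evaluate performance, applied to the
# figures from the two tables above (illustrative only).
scores = [100, 90, 92, 87, 100, 45, 90, 100, 95, 100]
sales = [1_000, 2_500, 900, 1_100, 3_400,
         1_000_000, 42_000, 6_500, 2_750, 3_100]

def evaluate(values):
    """Return the worst, average, and best of a list of numbers."""
    return {
        "worst": min(values),
        "average": sum(values) / len(values),
        "best": max(values),
    }

# Schools grade on 'worst' and 'average': one bad week drags a B+.
print(evaluate(scores))   # average comes out to 89.9
# Markets grade on 'best': one hit makes a bestseller for life.
print(evaluate(sales))    # best comes out to 1,000,000
```

The same data, run through different benchmarks, yields verdicts as different as “B+ student” and “bestselling author.”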

So there you have Cook’s Theory of Performance Evaluation. The consequences of this theory when played out in real life are noted by Cook:

We all want others to see the best in us. There are ethical and economic reasons to look for the best in others. But years of education can incline us to look for the worst in others and in ourselves.

Another point that can be made is that in school, everyone starts out with a perfect score that for most students erodes as the class progresses. In markets, everyone starts out with essentially a zero score that for most entrepreneurs improves over time, commensurate with the individual’s effort. Money, of course, is another way to keep score in market-based performance evaluations.

If education has conditioned us to look for the worst in others and ourselves, it has also conditioned us to become demoralized when encountering even the slightest failure that diminishes our chances at succeeding. Once lost, the perfect score can never be regained, so we settle for less. The greater the failure, the less we must settle for.

Moreover, we are conditioned that we can never exceed the highest possible achievement as defined by academia. The best we can do is match it. Most come up short. This conditioning is difficult to shake and in my own experience took several years after obtaining my undergraduate degrees. Nothing like 100+ job rejection letters to cause one to reevaluate the nature and size of the door opened by a couple of college degrees.

There are other ways to evaluate an individual’s performance.

  1. How good are they compared to others (past and present)?
  2. How good are they compared to themselves in the past?
  3. How good are they compared to their personal criteria and expectations?
  4. How good are they compared to the criteria and expectations of others?

The answer to each of these questions can be radically different depending on the referential index of the questioner. “How good am I when compared to others?” is significantly different from “How good is he/she when compared to others?”

The answers to each of these questions can also be significantly influenced by various biases and prejudices. Confirmation bias, hindsight bias, self-serving bias, the Dunning–Kruger effect, the misinformation effect, self-handicapping, self-fulfilling prophecies, introspection illusion, groupthink, the affect heuristic – numerous ways an individual can skew the evaluation of their own performance.

When the performance evaluation comes from a third party, for example a university professor evaluating a student’s performance, there is a different combination of biases in play which can have an independent impact on the performance score. The fundamental attribution error, confirmation bias, the illusion of transparency, credentialism or argument from authority – more ways the individual’s eventual performance score can be unconsciously influenced. The combination of unconscious incompetence and the Dunning–Kruger effect can have a particularly adverse effect on the student who asks questions that expose a professor’s incompetence.

Here again, the level playing field appears to be with market-based performance evaluations. An individual’s ability to understand and mitigate biases and prejudices affecting their success will have a direct impact on their performance in the market. Students, however, have less influence over these drivers when they are manifest in individuals working from a position of authority.


Image by StockSnap from Pixabay

Good Intentions, Bad Results

In The Logic of Failure, Dietrich Dörner makes the following observation:

In our political environment, it would seem, we are surrounded on all sides with good intentions. But the nurturing of good intentions is an utterly undemanding mental exercise, while drafting plans to realize those worthy goals is another matter. Moreover, it is far from clear whether “good intentions plus stupidity” or “evil intentions plus intelligence” have wrought more harm in the world. People with good intentions usually have few qualms about pursuing their goals. As a result, incompetence that would otherwise have remained harmless often becomes dangerous, especially as incompetent people with good intentions rarely suffer the qualms of conscience that sometimes inhibit the doings of competent people with bad intentions. The conviction that our intentions are unquestionably good may sanctify the most questionable means. (emphasis added, Kindle location 133)

That sounds about right. To this I would add that incompetent people with good intentions rarely suffer the consequences of imposing their good intentions on others.

The distinguishing feature of a competent individual with good intentions and an incompetent individual with good intentions is the ability to predict and understand the consequences of their actions. Not just the immediate consequences, but the long term consequences as well. The really competent individuals with good intentions will also have a grasp of the systemic effects of acting on their intentions. People with a systemic view of the situation are deliberate in their actions and less likely to act or react emotionally to circumstances. Doesn’t mean they will always get it right, but when they fall short they are also more likely to learn from the experience in useful ways.


Photo by Michael Dziedzic on Unsplash

Expert 2.0

A common characteristic among exceptionally creative and innovative people is that they read outside their central field of expertise. Many of the solutions they find have their origins in the answers other people have found to problems in unrelated fields. Breakthrough ideas can happen when you adopt practices that are common in other fields. This is a foundational heuristic of open source software development. Raymond (1999) observes:

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Or, less formally, “Given enough eyeballs, all bugs are shallow.” I dub this “Linus’s Law.” (p. 41)

So named for Linus Torvalds, best known as the creator of the Linux kernel. In the case of the Linux operating system, no one developer can have absolute expert knowledge of every line of code and how it interacts (or not) with every other line of code. But collectively a large pool of contributing developers can have absolute expert knowledge of the system. The odds are good that one of these contributors has the expertise to identify an issue in cases where all the other contributors may not understand that particular part of the system well enough to recognize it as the source of the agony.

This idea easily scales to include knowledge domains beyond software development. That is, solutions are found by people working outside the problem space, or by people working within the problem space who possess expertise and interests outside it.

Imagine, around 1440, a gentleman from Verona named Luigi D’vino who makes fine wines for a living. And imagine a gentleman from Munich named Hans Münze who punches out coins for a living. Then imagine a guy who is familiar with the agricultural screw presses used by winemakers, has experience with blacksmithing, and knowledge of coin related metallurgy. Imagine this third gentleman figures out a way to combine these elements to invent “movable type.” This last guy actually existed in the form of Johannes Gutenberg.

 

Assuming D’vino and Münze were each experts in their problem space, they very likely found incremental innovations to their respective crafts. But Gutenberg’s interests ranged farther and as a consequence he was able to envision an innovation that was truly revolutionary.

But if you, specifically, wish to make these types of connections and innovations, there has to be a there there for the “magic” to happen. Quality “thinking outside the box” doesn’t happen without a lot of prior preparation. You will need to know something about what’s outside the box. And note, there aren’t any limitations on what this “what” may be. The only requirement is that it has to be outside the current problem space. Even so, any such knowledge doesn’t guarantee that it will be useful. It only enhances the possibility for innovative thinking.

There is more that can be done to tune and develop innovative thinking skills. What Bock suggests touches on several fundamental principles to transfer of learning, the “magic” of innovative thinking, as defined by Haskell (2001, pp. 45-46).

  • Learners need to acquire a large primary knowledge base or high level of expertise in the area in which transfer is required.
  • Some level of knowledge base in subjects outside the primary area is necessary for significant transfer.
  • An understanding of the history of the transfer area(s) is vital.

To summarize, in your field of interest you must be an expert in both technique and history (lest your “innovation” turn out to be just another re-invented wheel), and you must have a sufficiently deep knowledge base in the associated area of interest from which elements will be derived that contribute to the innovation.

References

Haskell, R. E. (2001). Transfer of learning: Cognition, instruction, and reasoning. San Diego, CA: Academic Press.

Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. Sebastopol, CA: O’Reilly & Associates, Inc.

 

Image by István Kis from Pixabay