Openness, Grapevines, and Strangleholds

If you truly value openness on your Agile teams, you must untangle them from the grapevine.

Openness is one of the core scrum values. As stated on Scrum.org:

“The scrum team and its stakeholders agree to be open about all the work and the challenges with performing the work.”

This is a very broad statement, encompassing not only openness around work products and processes, but also each individual’s responsibility for ensuring that any challenges related to overall team performance are identified, acknowledged, and resolved. In my experience, issues with openness related to work products or the processes that impact them are relatively straightforward to recognize and resolve. If a key tool, for example, is misconfigured or ill-suited to what the team needs to accomplish, then the need to focus on issues with the tool should be obvious. If there is an information hoarder on the team preventing the free flow of information, this will reveal itself within a few sprints, after a string of unknown dependencies or misaligned deliverables has had a negative impact on the team’s performance. Similarly, if a team member is struggling with a particular story card and for whatever reason lacks the initiative to ask for help, this will reveal itself in short order.

Satisfying the need for openness around individual and team performance, however, is a much more difficult behavior to measure. Everyone – and by “everyone” I mean everyone – is by nature very sensitive to being called out as having come up short in any way. Maybe it’s a surprise to them. Maybe it isn’t. But it’s always a hot button. As much as we’d like to avoid treading across this terrain, it’s precisely this hypersensitivity that points to where we need to go to make the most effective changes that impact team performance.

At the top of my list of things to constantly scan for at the team level are the degrees of separation (space and time) between a problem and the people who are part of the problem. Variously referred to as “the grapevine,” back-channeling, or triangulation, this distance can be one of the most corrosive forces on a team’s trust and its ability to collaborate effectively. From his research over the past 30 years, Joseph Grenny [1] has observed “that you can largely predict the health of an organization by measuring the average lag time between identifying and discussing problems.” I’ve found this to be true. Triangulation and back-channeling add significantly to the lag time.

To illustrate the problem and a possible solution: I was a newly hired scrum master responsible for two teams, about 15 people in total. At the end of my first week I was approached by one of the other scrum masters in the company. “Greg,” they said in a whisper, “You’ve triggered someone’s PTSD by using a bad word.” [2]

Not an easy thing to learn, having been on the job for less than a week. Doubly so because I couldn’t for the life of me think of what I could have said that would have “triggered” a PTSD response. The only people I knew who had been diagnosed with PTSD were several Vietnam veterans and a cop – men who had been through violent and life-or-death circumstances. This set me back on my heels, but I did manage to ask the scrum master to please ask this individual to reach out to me so I could speak with them one-to-one and apologize. At the very least, I suggested they contact HR, as a PTSD response triggered by a word is a sign that someone needs help beyond what any one of us can provide. My colleague’s response was “I’ll pass that on to the person who told me about this.”

“Hold up a minute. Your knowledge of this issue is secondhand?” I asked.

Indeed it was. Someone told someone who told the scrum master who then told me. Knowing this, I retracted my request for the scrum master to pass along my message. The problem here was the grapevine, and a different tack was needed. I coached the scrum master to 1) never bring something like this to me again, 2) inform the person who told them this tale that nothing like this would be passed along to me in the future, and 3) coach that person to do the same with the person who told them. The person for whom this was an issue should either come to me directly or go to my manager. I then coached my manager and my product owners that if anyone were to approach them with a complaint like this, they should listen carefully, acknowledge that they heard the complaint, and encourage the person to speak directly with me.

This should be the strategy for anyone with complaints that do not rise to the level of needing HR intervention. The goal of this approach is to develop behaviors around personal complaints such that everyone on the team knows they have a third person to talk to, and that the issue isn’t going to be resolved unless they talk directly to the person with whom they have an issue. It’s a good strategy for cutting the grapevines and short-circuiting triangulation (or, in my case, quadrangulation). To seal the strategy, I gave a blanket apology to each of my teams the following Monday and let them know what I had requested of my manager and product owners.

The objective was to establish a practice of resolving issues like this at the team level. It’s highly unlikely (and in my case 100% certain) that anyone new to a job would have prior knowledge of sensitive words and purposely use language that’s upsetting to their new co-workers. The presupposition of malice, or an assumption that a new hire should know such things, suggested a number of systemic issues with the teams, something later revealed to be accurate. It wouldn’t be a stretch to say that in this organization the grapevine had supplanted instant messaging and email as the primary communication channel. With the cooperation of my manager and product owners, several sizable branches of the grapevine had been cut away. Indeed, there was a marked increase in the teams’ attention during daily scrums, and the retrospectives became more animated and productive in the weeks that followed.

Each situation is unique, but the intervention pattern is more broadly applicable: Reduce the number of node hops and associated lag time between the people directly involved with any issues around openness. This in and of itself may not resolve the issues. It didn’t in the example described above. But it does significantly reduce the barriers to applying subsequent techniques for working through the issues to a successful resolution. Removing the grapevine changes the conversation.
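Grenny’s lag-time observation suggests a rough way to think about the cost of each relay. Here is a toy sketch in Python – all names and the days-per-hop figure are invented for illustration – that simply makes the arithmetic of the intervention visible: fewer hops, less lag.

    # Toy model of grapevine lag (hypothetical numbers): each relay between
    # the person with the issue and the person who can act on it adds delay.

    def lag_in_days(chain, days_per_hop=3):
        """Estimate lag as (number of relays) x (average delay per relay)."""
        hops = len(chain) - 1  # edges between people in the chain
        return hops * days_per_hop

    # The quadrangulation from the story above: three relays before it reached me.
    grapevine = ["aggrieved developer", "confidant", "other scrum master", "me"]
    direct = ["aggrieved developer", "me"]

    print(lag_in_days(grapevine))  # 9 days of lag (3 relays x 3 days), plus distortion
    print(lag_in_days(direct))     # 3 days of lag; one hop is the intervention's goal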

References

[1] Grenny, J. (2016, August 19). How to Make Feedback Feel Normal. Harvard Business Review. Retrieved from https://hbr.org/2016/08/how-to-make-feedback-feel-normal

[2] The “bad” word was “refinement.” The team had been using the word “grooming” to refer to backlog refinement and I had suggested we use the more generally accepted word. Apparently, a previous scrum master for the team had been, shall we say, overly zealous in pressing this same recommendation, to a degree that it became a traumatic experience for someone on one of the teams. I later learned this event was grossly exaggerated. The developer had a well-known reputation for claiming psychological trauma to cow others into backing down, and would laugh and brag about using this club. The strategy described in this article proved effective at preventing this type of behavior.



False Barriers to Implementing Scrum

When my experience with scrum began to transition from developer to scrum master, and on to mentor and coach, my early frustrations could have been summed up in the phrase, “Why can’t people just follow a simple framework?” The passage of time and considerable experience have greatly informed my understanding of what may inhibit or prevent intelligent and capable people from picking up and applying a straightforward framework like scrum.

At the top of this list of insights has to be the tendency of practitioners to place elaborate decorations around their understanding of scrum. In doing so, they make scrum practices less accessible. The framework itself can make this a challenge. Early on, while serving in the role of mentor, I would introduce scrum with an almost clinical textbook approach: define the terms, describe the process, and show the obligatory recursive workflow diagrams. In short order, I’d be treading water (barely) in circular debates on topics like the differences between epics and stories. I wrote about this phenomenon in a previous post as it relates to story points. So how can we avoid being captured by Parkinson’s law of triviality and other cognitive traps?

Words Matter

I discovered that the word “epic” brought forth fatigue-inducing memories of Homer’s Iliad and Odyssey, the Epic of Gilgamesh, and Shakespeare. Instant block. Solution out of reach. It was like putting a priceless, gold-plated, antique picture frame around the picture postcard of a jackalope your cousin sent on his way through Wyoming. Supertanker loads of precious time were wasted in endless debates about whether or not something was an epic or a story. So, no more talk of epics. I started calling them “story categories.” Or “chapters.” Or “story bundles.” Whatever it took to get teams onto the idea that “epics” are just one of the dimensions of a story map or product backlog that helps the product owner and agile delivery team keep a sense of overall project scope. Story writing progress accelerated and teams were doing a decent job of creating “epics” without knowing they had done so. Fine-tuning their understanding and use of formal scrum epics came later and with much greater ease.

“Sprint” is another unfortunate word in formal scrum. With few exceptions, the people who have been on my numerous scrum teams haven’t sprinted anywhere in decades. Sprinting is something one watches televised from some faraway place every four years. Maybe. Given scrum’s fundamental tenets and principles, who’s to say a team can’t find a word for the concept of a “sprint” that makes sense to them? The salient rule, it would seem, is that whatever word they choose, the team fully understands that “it” is a time-boxed commitment for completing a defined set of work tasks. And if “tide,” “phase,” or “iteration” gets the team successfully through a project using scrum, then who am I to wear the badge of “Language Police?”

A good coach meets the novice at their level and then builds their expertise over time, structured in a way that matches and challenges the learner’s capacity to learn. I recall from my early Aikido practice the marked difference between instructors who stressed using the correct Japanese name for a technique over those that focused more on learning the physical techniques and described them in a language I could understand. Once I’d learned the physical patterns the verbal names came much more easily.

Full disclosure: this is not as easy when there are multiple scrum teams in the same organization that eventually rotate team members. Similarly, integrating new hires with scrum experience is much easier when the language is shared. But to start, if the block to familiarization with the scrum process revolves around semantic debates, it makes sense to adapt the words so that the team can adopt the process, then evolve the words to match more closely those reflected in the scrum framework.

Philosophy, System, Mindset, or Process

A similar fate awaited team members who had latched onto the idea that scrum, or agile in general, is a philosophy. I watched something similar happen in the late 1980s when the tools and techniques of total quality management evolved into monolithic world views and corporate religions. More recently, I’ve attended meet-ups where conversations about “What is Agile?” include describing the scrum master as “therapist” or “spiritual guide.” Yikes! That’s some pretty significant mission creep.

I’m certain fields like philosophy and psychotherapy could benefit from many of the principles and practices found in agile. But it would be a significant category error to place agile at the same level as those fields of study. If you think tasking an agile novice with writing an “epic” is daunting, try telling them they will need to study and fully understand the “philosophy of agile” before they become good agile practitioners.

The issue is that framing agile as a philosophy puts the practice of agile essentially out of reach for the new practitioner or business leader thinking about adopting it. The furthest up this scale I’m willing to push agile is that it is a mindset: an adaptive way of thinking about how work gets done. From this frame I can leverage a wide variety of common, real-life experiences that will help those new to agile understand how it can help them succeed in their work life.

Out in the wild, it’s best to work with the system as much as possible if you want meaningful work to actually get done.

Parkinson’s Law of Triviality and Story Sizing

From Wikipedia: Parkinson’s law of triviality

Law of triviality is C. Northcote Parkinson’s 1957 argument that people within an organization commonly or typically give disproportionate weight to trivial issues. Parkinson provides the example of a fictional committee whose job was to approve the plans for a nuclear power plant spending the majority of its time on discussions about relatively minor but easy-to-grasp issues, such as what materials to use for the staff bike shed, while neglecting the proposed design of the plant itself, which is far more important and a far more difficult and complex task.

I see this phenomenon in play during team story sizing exercises in the following scenarios.

  1. In the context of the story being sized, the relative expertise of each of the team members is close to equal in terms of experience and depth of knowledge. The assumption is that if everyone on the team is equally qualified to estimate the effort and complexity of a particular story then the estimation process should move along quickly. With a skilled team, this does, indeed, occur. If it is a newly formed team or if the team is new to agile principles and practices, Parkinson’s Law of Triviality can come into play as the effort quickly gets lost in the weeds.
  2. In the context of the story being sized, the relative expertise of the team members is not near parity, and yet each individual team member has a great deal of expertise in their respective functional area. What I’ve observed happening is that the team members least qualified to evaluate the particular story feel the need to assert their expertise and express an opinion. I recall an instance where a software developer estimated it would take 8 hours of coding work to place a “Print This” button on a particular screen. The credentialed learning strategist (who asked for the print button and has no coding experience) seemed incredulous that such an effort would require so much time. A lengthy and unproductive argument ensued.

To prevent this I focus my coaching efforts primarily on the product owner, as they will be interacting with the team on this effort during product backlog refinement sessions more frequently than I will. They need to watch for:

  1. Strong emotional response by team members when a size or time estimate is proposed.
  2. Conversations that drop further and further into design details.
  3. Conversations that begin to explore multiple “what if” scenarios.

The point isn’t to prevent each of these behaviors from occurring. Rather, manage them. If there is a strong emotional response, quickly get to the “why” behind that response. Does the team member have a legitimate objection, or does their response lack foundation?

Every team meeting is an opportunity to clarify the bigger picture for the team, so a little bit of conversation around design and risks is a good thing. It’s important to time-box those conversations and agree to take any deeper discussion off-line from the backlog refinement session.

When coaching the team, I focus primarily on the skills needed to effectively size an effort. Within this context I can also address the issue of relative expertise and how to leverage and value the opinions expressed by team members who may not entirely understand the skills needed to complete a particular story.

(John Cook cites another interesting example of Parkinson’s Law of Triviality (a.k.a. the bike shed principle) from Michael Bierut’s book “How to” involving the design of several logos.)



Waste

What comes to mind when you think of the word “waste?” I’d wager a ten spot it wasn’t something pleasant. Rather, something to be pushed to the curb, rinsed down the drain, or thrown into a hole in the ground and buried. Even the sterile waste from technology projects has a high “ick” factor. If Josef Oehmen and Eric Rebentisch of MIT’s Lean Advancement Initiative put the amount of time, money, and resource waste in corporate product development at 77%, how can there be anything good about waste?

Now think of something you value quite highly – a piece of fine jewelry, mastery of a sport or musical instrument, or your home. Have you considered how many resources may have been “wasted” to bring those highly valued things into existence? Shiny diamonds get that way through cutting and throwing away pieces of the original, mastering a sport or a musical instrument involves years filled with bad moves or cringe-worthy sounds, and a significant amount of material was used and discarded while building your home.

When organizations consider implementing one of Agile’s many formalized methodologies or frameworks, the idea of eliminating waste is a prominent promise that helps close the sale. Cutting out waste means saving money and saving money means increasing profits. Unfortunately, this promise is frequently delivered to the agile teams as: “All waste is bad. Get it right the first time.” This message doesn’t support exploration and discovery. It doesn’t allow teams the space they need to find innovative solutions in what Stuart Kauffman called the “adjacent possible.” And it certainly doesn’t encourage the iterative development of numerous minimum viable products that build upon each other and lead the way toward the delivery of quality products.

The message I work to reinforce is: “Expect to throw stuff away, especially early on.” By itself, this isn’t enough to overcome the many negative connotations around waste. What is needed is a fundamental re-framing of the activities that have resulted in something one expects to throw away. A couple of questions are worth asking. What value is anticipated from the activity? What are the potential positive effects of having engaged in an effort at risk of ending as waste? Pursuing a goal of zero wasted effort is a fool’s errand. What we want to reduce is any effort that doesn’t add value. If 40 hours were spent exploring a technical option “only” to find out that it wasn’t viable in the long term, that throw-away 40-hour effort may just have saved 400 hours of developer time spent trying to make it work. And had the less-than-optimal long-term option gone to market, the expense of supporting the early wrong decision could make or break the product, perhaps even the business.

Skilled agile practitioners have a strategy for monitoring the value of any project-related efforts:

  • Establish definitions of activities that create value. By identifying the intent behind the effort and acknowledging the value, the team is better positioned for focusing on the goals and objectives of the activity. Discovery, exploration, risk assessment, even fun can be worthwhile justifications if it is clear they add value to the overall effort and end goal success.
  • Identify efforts that are wasteful, but nonetheless necessary, and work to minimize the effects of these efforts. Transitioning from a design sprint to actual production work often results in a lull in activity as the design team works to communicate to the production team what needs to be done. A similar lull occurs when production work is handed off to deployment, support, and maintenance teams.
  • Identify clear signs of waste. Gold-plating (over-engineering), lack of a product vision or road map that causes the agile team to “make it up as they go along” and react to the customer’s reaction, infrequent opportunities for feedback (both inter-team and with the client), or time-tracker cards that attract dozens of hours of nondescript time.

From a lean product development perspective, Oehmen and Rebentisch describe eight types of waste. I’ve included my additions and comments in parentheses.

  • Overproduction of information
    • Two different groups creating the same deliverable
    • Delivering information too early
  • Over-processing of information
    • Over-engineering of components and systems (Often referred to as gold-plating.)
    • Working on different IT systems, converting data back and forth (The use of one-off tools rather than leveraging the capabilities within the organization’s suite of tools.)
  • Miscommunication of information
    • Large and long meetings, excessive email distribution lists
    • Unnecessary hand-offs instead of continuous responsibility
    • (Unacknowledged project dependencies, such as the effect of re-architecting system components, on client projects.)
  • Stockpiling of information
    • Saving information due to frequent interruptions
    • Creating large information repositories due to large batch sizes
    • (Withholding opportunities to review work and solicit feedback.)
  • Generating defective information
    • Making errors in component and architecture design
    • Delivering obsolete information to down-stream tasks (Insufficient feedback opportunities, delays in communication due to competing project responsibilities.)
  • Correcting information
    • Optimization iterations (Rework)
    • Reworking deliverables due to changing targets (Design ambiguity, client decision instability)
  • Waiting of people
    • Waiting for long lead time activities to finish
    • Waiting due to unrealistic schedules
  • Unnecessary movement of people
    • Obtaining information by walking up and down the hallway
    • Traveling to meetings


References

Kauffman, S.A. (2003). The Adjacent Possible, A Talk with Stuart A. Kauffman. Retrieved from https://www.edge.org/conversation/stuart_a_kauffman-the-adjacent-possible

Oehmen, J., Rebentisch, E. (2010). Waste in Lean Product Development. Lean Advancement Initiative. Retrieved from http://hdl.handle.net/1721.1/79838

How to Know You Have a Well-Defined Minimum Viable Product

Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some semblance to the final product, yet is barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!

We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features to the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.

So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy:

  1. State the good faith assumptions about what the customer wants and needs.
  2. Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
  3. Build an MVP that tests the assumptions.
  4. Evaluate the results.

If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
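To make the four steps concrete, here is a minimal sketch of how a team might record an MVP experiment so that the assumption and the test are stated before anything is built. The data structure and example values are illustrative, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class MvpExperiment:
        """One pass through build-measure-learn, with the assumption and
        the test stated up front (steps 1 and 2) before anything is built."""
        assumption: str  # step 1: good-faith statement of customer want/need
        test: str        # step 2: how the MVP's impact will be measured
        results: list = field(default_factory=list)  # observations from step 3

        def evaluate(self) -> str:
            # Step 4: an unstated assumption or a vague test means wasted effort.
            if not self.assumption or not self.test:
                return "invalid: state the assumption and the test before building"
            return "learn: compare results against the test, then iterate"

    # Illustrative values only.
    experiment = MvpExperiment(
        assumption="Customers want to navigate reports from a single screen",
        test="3 of 5 pilot users complete a report task in the mock-up unaided",
    )
    # Step 3: build the cheap mock-up and put it in front of the customer.
    experiment.results.append("4 of 5 users completed the task")
    print(experiment.evaluate())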

The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.

MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps develop the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.

“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup

So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (In the following, “stakeholders” can mean company executives or external customers.)

  • Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
  • What expectations have been set for the stakeholders?
  • Is the distinction clear between what the stakeholders want vs what they need?
  • Is the distinction clear between high and low value? Is the design cart before the value horse?
  • What are the top two features or functions the stakeholders will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
  • Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
  • Are the identified features or functions leveraging code, content, or UI/UX reuse?

Recognizing an MVP – Less is More

Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.

An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all counts. Well, on most counts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.

Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough…for Now’”.)

An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.

MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes of the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.


The Value of “Good Enough…for Now”

Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell [1] spent setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.

Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.

The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define quality. It is a definition that is frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.

Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high-quality niche is there, the business might succeed. If not, the business loses out to lower-priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).

Figure 1. Quality Mismatch I

Certainly, there is a case that can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, however, this may not be the best place to start.

On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).

Figure 2. Quality Mismatch II

The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.

Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.

Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This will maximize internal resources (usually time and money) and deliver a product or service that satisfies the customer’s idea of “quality.”

Case in point: a few months back, I was assembling several bar clamps and needed a set of cutting tools used to put threads on the ends of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five-star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high-end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.

By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. When sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low-quality tools that are “good enough” to get the job done.

Addendum

I’ve been giving some more thought to the idea of “good enough” as one of the criteria for defining minimum viable/valuable products. What’s different is that I’ve started to use the phrase “good enough for now.” Reason being, the phrase “good enough” seems to imply an end state. “Good enough” is an outcome. If it is early in a project, people generally have a problem with that. They have some version of an end state that is a significant mismatch with the “good enough” product today. The idea of settling for “good enough” at this point makes it difficult for them to know when to stop work on an interim phase and collect feedback.

“Good enough for now” implies there is more work to be done and the product isn’t in some sort of finished state that they’ll have to settle for. “Good enough for now” is a transitory state in the process. I’m finding that I can more easily gain agreement that a story is finished and get people to move forward to the next “good enough for now” by including the time qualifier.

References

[1] Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2.

Parkinson’s Law of Perfection

C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:

Work expands so as to fill the time available for its completion.

But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” The value of re-reading classics is that what was missed on a prior read becomes apparent given the accumulation of a little more experience and the current context. On a re-read this past week, I discovered this:

It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.

Several years back my focus for the better part of a year was on mapping out software design processes for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, finding a way to shepherd non-technical creative types such as instructional designers (particularly old school designers) can be likened to herding a flock of canaries – all over the place in three dimensions.

What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and common practices, for example, consisted of five bullet points. It was important to build just enough fence to keep everyone in the same area while limiting free-range behaviors to specific places. These guidelines were far from perfect, but they allowed for the dynamic vitality suggested by Parkinson. If the design standards and common practices document ever grew past something that could fit on one page, it would suggest the company was moving toward over-specialization and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.


Best Practices or Common Practices

I’ve been using the phrase “best practices” less and less when working to establish good agile practices. In fact, I’ve stopped using it altogether. The primary reason is that it implies there is a set of practices that apply to all circumstances. And in the case of “industry best practices,” the criteria are externally established – these are the best practices, and all others have been fully vetted and found wanting. I have found that to be untrue. I’ve also found that people have a hard time letting go of things that are classified as “best.” When your practices are the “best,” there’s little incentive to change, even when the evidence strongly suggests there are better alternatives. Moreover, peer pressure works against the introduction of innovative practices. Deviating from a “best” practice risks harsh judgment, retribution, and the dreaded “unprofessional” label.

If an organization is exploring a new area of business or bringing in-house a set of expertise that was previously outsourced, adopting “best” practices may be the smart way to go until some measure of stability has been established. But to keep the initial set of practices and change only as the external definition of “best” changes ends up disempowering the organization’s employees. It sends the message, “You aren’t smart enough to figure this out and improve on your own.” When denied the opportunity to excel and improve, employees who need that quality in their work will move on. Over time, the organization is left with just the sort of people who indeed are not inclined to improve – the type of individuals who need well-defined job responsibilities and actively resist change of any sort. The friction builds until change and adaptation grind along at glacial speeds or stop altogether.

The inertia endemic to “best” practices often goes unnoticed. When one group reaches a level of success by implementing a particular practice, it is touted as one of the keys to its success. And so other groups or organizations adopt the practice. Since everyone wants success, these practices are faithfully implemented according to tradition and change little even as the world around them changes dramatically. Classic cargo cult thinking.

In his Harvard Business Review article “Which Best Practice Is Ruining Your Business?”, Freek Vermeulen observes that “when managers don’t see [a] practice as the root cause of their eroding competitive position, the practice persists — and may even spread further to other organizations in the same line of business.” Consequently, business leaders “never connect the problems of today with [a] practice launched years ago.”

Common practices, on the other hand, suggest there is room for improvement. They are common because a collection of people have accepted them as generally valuable, not because they are presumed universally true or anointed as “best.” They are derived internally, not imposed externally. As a result, letting go of a “common” practice for a better practice is easier and carries less stigma. With enough adoption throughout the organization, the better practice often becomes the common practice. When we use practices that build upon the collected wisdom of an organization’s experiences, we are more likely to take ownership of the process and adapt in ways that naturally lead to improvement.

There are long term benefits to framing prevailing practices as “common.” It reverses the “you are not smart enough” message and encourages practitioners to take more control and ownership in the quality of their practices. Cal Newport argues that “[g]iving people more control over what they do and how they do it increases their happiness, engagement, and sense of fulfillment.” This message is at the heart of Dan Pink’s book, “Drive,” in which he makes the case that more control leads to better grades, better sports performance, better productivity, and more happiness. Pink cites research from Cornell that followed over three hundred small businesses. Half of the businesses deliberately gave substantial control and autonomy to their employees. Over time, these businesses grew at four times the rate of their counterparts.

When you are considering the adoption or pursuit of any best practice, ask yourself, “best” according to whom? It may help avoid some unintended consequences down the line where someone else’s “best” practice yields the worst results for you, your team, or your organization.


References

Newport, C. (2012). So Good They Can’t Ignore You. New York, NY: Grand Central Publishing.

Pink, D.H. (2009). Drive: The Surprising Truth About What Motivates Us. New York, NY: Riverhead

Vermeulen, F. (2012). Which Best Practice Is Ruining Your Business? Harvard Business Review. Retrieved from https://hbr.org/2012/12/which-best-practice-is-ruining

Transparency, Source Code Quality, and Metrics


In “Hello World: Being Human in the Age of Algorithms,” Hannah Fry relates this story:

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut. Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 per cent, leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.

Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help.

[The ACLU] began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared. Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless. Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.

My first thoughts were, “How bad a spreadsheet hack do you gotta be to have your work be declared unconstitutional? And just how many hacks does it take to build an unconstitutional spreadsheet?”

To be fair, math is hard. Government is complex. And I’m comfortable with the assumption that everyone who had a hand in building this spreadsheet had good intentions. Venturing a guess, the breakdown happened at the manager/politician/lawyer level.

It is probable that the complexity of the task quickly overtook the abilities of the spreadsheet author(s) and the capabilities of the tool. Eventually, no single person understood how the whole thing worked. Consequently, making a change in one place affected how the spreadsheet worked in n other places and no one was capable of regression testing the beast. But the manager/politician/lawyer types knew what to do: Hide behind the “trade secret” smoke.
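For contrast, here is a hedged sketch of what testable calculation logic looks like when it lives in code rather than in opaque spreadsheet cells. The formula and figures are entirely invented; the point is that a change that alters existing results fails a regression test loudly instead of silently shifting someone’s benefits.

    # Hypothetical sketch: a benefits calculation as a testable function
    # instead of an opaque spreadsheet cell. The formula is invented.

    def monthly_assistance(assessed_need: float, income: float) -> float:
        """Toy formula: need offset by a fraction of income, floored at zero."""
        if assessed_need < 0 or income < 0:
            raise ValueError("inputs must be non-negative")
        return max(assessed_need - 0.3 * income, 0.0)

    # Regression tests (runnable with pytest) pin the behavior down.
    def test_no_income_gets_full_need():
        assert monthly_assistance(1000, 0) == 1000

    def test_income_offset_is_applied():
        assert monthly_assistance(1000, 1000) == 700

    def test_benefit_is_never_negative():
        assert monthly_assistance(100, 10000) == 0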

There are many lessons from this story. Plenty of points of failure. What I’m interested in writing about is the importance of transparency and how a good set of performance metrics can help in maintaining transparency.

The externally facing opacity in this story is readily apparent. What we don’t (and probably never will) see is the lack of transparency prevalent internally to the Idaho Department of Health and Welfare and whomever designed and built the spreadsheet tool. I’d bet a round of drinks that neither has heard of Agile much less employed its principles and practices. These by themselves – when actually practiced long term – go a long way toward establishing a culture of transparency. This is the key. Long term practice. A period of time is needed to change behaviors, mindsets, attitudes, beliefs, and when necessary, personnel. Even over the long term, implementing an Agile methodology isn’t improvisational theater. A strategy and a way to measure progress is needed.

Which gets me to metrics.

Selecting metrics and tuning them over time is critical to measuring team performance and developing improvement plans. Metrics that inform meaningful actions are the goal. Leave the vanity metrics that verify what managers want to hear or already “know” to the competition.

I’ve encountered my share of overly complex ways to measure the performance of individuals and teams. Often the metrics taken from machine-like task work (for example, assembly line work) are applied to creative or intellectual/knowledge tasks. This type of re-purposing results in, for example, counting lines of code or the number of source code check-ins as an indicator of software developer productivity. It never ends well.

When working to define a set of metrics to track an individual or team’s performance, it is more effective to begin by asking several questions (a sketch applying a few of them follows the list).

  • What problems are you trying to solve?
  • What questions will your chosen metrics answer?
  • What questions will your chosen metrics not answer?
  • How, specifically, will you know you can trust your metrics? How will you know when they are right and how will you know when they are wrong?
  • How well do your metrics complement each other? That is, by combining them do you end up with a much better picture of individual or team performance than you do by considering individual metrics?
  • Do your metrics support any planned actions for improvement? Are you collecting actionable metrics or vanity metrics?
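As one illustration of putting those questions to work, the sketch below combines a few complementary sprint measures and ties each one to a concrete follow-up action – the test that separates actionable metrics from vanity metrics. The metric choices, data, and thresholds are invented examples, not a recommended standard.

    from statistics import mean

    # Illustrative sprint data (invented numbers).
    velocity = [21, 18, 24, 20]        # completed story points per sprint
    cycle_days = [3.5, 4.0, 6.5, 7.0]  # average days from started to done
    escaped_defects = [1, 0, 4, 5]     # bugs found after the sprint ended

    def trend(series):
        """Crude direction check: mean of the later half minus the earlier half."""
        half = len(series) // 2
        return mean(series[half:]) - mean(series[:half])

    # Each metric should answer a question and suggest an action;
    # a number with no follow-up action is a vanity metric.
    if trend(cycle_days) > 1:
        print("Cycle time rising: look for oversized stories or hidden waits.")
    if trend(escaped_defects) > 1:
        print("Escaped defects rising: revisit the definition of done.")

    # Report velocity in whole points; fractional points imply an
    # accuracy the estimates never had.
    print(f"Velocity (whole points): {round(mean(velocity))}")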

Finally, it is important to understand the limits of performance metrics. Displaying velocity charts that have fractions of story points implies an accuracy that simply isn’t there. Significantly adjusting project timelines based on the first three sprints’ worth of velocity data can have adverse secondary effects on the project.

There is no perfect set of metrics, no divine set of measures that match an impossible standard of perfect objectivity and fairness. The best possible set of metrics is one that supports useful decisions rather than simply instructs managers where to apply the stick. They should help show the way to performance improvement rather than simply report results.

I work to have 3-5 metrics, depending on the individual, the team, and the project. Fewer than 3 and the picture starts to look rather flat. More than 5 and the task of performance monitoring can become overly complicated and cumbersome. Keep it lean and manageable. That way, it’s easier to tell when things aren’t working and your metrics are much less likely to violate your team’s constitutional rights.