Transparency, Source Code Quality, and Metrics


In “Hello World: Being Human in the Age of Algorithms,” Hannah Fry relates this story:

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut. Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 percent, leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.

Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help.

[The ACLU] began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared. Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless. Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.

My first thoughts were, “How bad a spreadsheet hack do you gotta be to have your work declared unconstitutional? And just how many hacks does it take to build an unconstitutional spreadsheet?”

To be fair, math is hard. Government is complex. And I’m comfortable with the assumption that everyone who had a hand in building this spreadsheet had good intentions. If I had to venture a guess, the breakdown happened at the manager/politician/lawyer level.

It is probable that the complexity of the task quickly overtook the abilities of the spreadsheet author(s) and the capabilities of the tool. Eventually, no single person understood how the whole thing worked. Consequently, making a change in one place affected how the spreadsheet worked in n other places, and no one was capable of regression testing the beast. But the manager/politician/lawyer types knew what to do: hide behind the “trade secret” smokescreen.

There are many lessons from this story. Plenty of points of failure. What I’m interested in writing about is the importance of transparency and how a good set of performance metrics can help in maintaining transparency.

The externally facing opacity in this story is readily apparent. What we don’t (and probably never will) see is the lack of transparency internal to the Idaho Department of Health and Welfare and whoever designed and built the spreadsheet tool. I’d bet a round of drinks that neither has heard of Agile, much less employed its principles and practices. These by themselves – when actually practiced long term – go a long way toward establishing a culture of transparency. This is the key: long term practice. A period of time is needed to change behaviors, mindsets, attitudes, beliefs, and, when necessary, personnel. Even over the long term, implementing an Agile methodology isn’t improvisational theater. A strategy and a way to measure progress are needed.

Which gets me to metrics.

Selecting metrics and tuning them over time is critical to measuring team performance and developing improvement plans. Metrics that inform meaningful actions are the goal. Leave the vanity metrics that verify what managers want to hear or already “know” to the competition.

I’ve encountered my share of overly complex ways to measure the performance of individuals and teams. Often the metrics taken from machine-like task work (for example, assembly line work) are applied to creative or intellectual/knowledge tasks. This type of re-purposing results in, for example, counting lines of code or the number of source code check-ins as an indicator of software developer productivity. It never ends well.

When working to define a set of metrics to track an individual’s or team’s performance, it is more effective to begin by asking several questions (a sketch of how the answers might be captured follows the list).

  • What problems are you trying to solve?
  • What questions will your chosen metrics answer?
  • What questions will your chosen metrics not answer?
  • How, specifically, will you know you can trust your metrics? How will you know when they are right and how will you know when they are wrong?
  • How well do your metrics complement each other? That is, by combining them do you end up with a much better picture of individual or team performance than you do by considering individual metrics?
  • Do your metrics support any planned actions for improvement? Are you collecting actionable metrics or vanity metrics?
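
To make these questions operational, here is a minimal sketch in Python. The metric names, fields, and example actions are illustrative assumptions of mine, not a prescription; the point is that every metric must declare the question it answers, its blind spot, and the action it supports:

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """One performance metric, forced to justify itself."""
        name: str
        question_answered: str  # the question this metric answers
        blind_spot: str         # the question it will not answer
        supported_action: str   # the improvement action it informs ("" = none)

    def vanity_metrics(metrics: list[Metric]) -> list[Metric]:
        """A metric that supports no planned action is a vanity metric."""
        return [m for m in metrics if not m.supported_action.strip()]

    team_metrics = [
        Metric("cycle time",
               "How long does a story take from start to done?",
               "Says nothing about the value of what was delivered",
               "Swarm on stories that exceed twice the median"),
        Metric("lines of code",
               "How much code was written?",
               "Volume is not productivity",
               ""),  # no action behind it -> vanity metric
    ]

    for m in vanity_metrics(team_metrics):
        print(f"Drop or rework: {m.name}")

Running this flags “lines of code” – exactly the kind of re-purposed assembly-line measure mentioned above.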

Finally, it is important to understand the limits of performance metrics. Displaying velocity charts that show fractions of story points implies an accuracy that simply isn’t there. Significantly adjusting project timelines based on the first three sprints’ worth of velocity data can have adverse secondary effects on the project.

There is no perfect set of metrics, no divine set of measures that match an impossible standard of perfect objectivity and fairness. The best possible set of metrics is one that supports useful decisions rather than simply instructs managers where to apply the stick. They should help show the way to performance improvement rather than simply report results.

I work to have 3-5 metrics, depending on the individual, the team, and the project. Fewer than 3 and the picture starts to look rather flat. More than 5 and the task of performance monitoring can become overly complicated and cumbersome. Keep it lean and manageable. That way, it’s easier to tell when things aren’t working and your metrics are much less likely to violate your team’s constitutional rights.

Teams, Tribes, and Community – 0.1.0

Several months ago, I made a bold decision: take command of the helm for a brilliant tribe of diverse creative thinkers dedicated to helping each other succeed. This is the first of an ongoing series of posts – maybe once or twice a month – describing this evolving effort.

For an extrovert, this might not have been a bold decision. But in my case, you should know I designed the card that card-carrying introverts carry. So this decision involved a more thorough application of my already robust decision-making process. On a professional level, this may be the most significant challenge I’ve taken on to date. Will my years of experience with forming and guiding teams help this tribe further their success? Will I be able to find the gravitational force that holds us together and the spark that keeps us inspired? These are open questions. They are also questions that occupy much of my thinking.

We are not dedicated to achieving a single goal or moving in a unified direction. We each have our areas of expertise and independent business goals. We are much more a tribe than a team. As such, I believe we will be guided more by tribal dynamics and models than team rules and policies. The path is not clear, but this much I know…

  • There is no leader of this tribe. Not in the sense of a single person who’s responsible for setting the direction and making all the decisions that impact the organization. There is no “Chief” or “Czar” of anything. I’ll fill the role of Launch Commander and Flight Commander in order to get us organized and moving forward. However, I have been clear from the start about my intention to structure our tribe on principles of self-organization.
  • The emphasis is on simple and accessible technology and easy ways to organize meetings based on Agile principles and practices – lean coffee, for example, has served us well for our initial meetings. What has emerged since then are more involved and interactive meeting formats, such as client role-plays and accountability exercises. Keeping things simple and remaining mindful of barriers to participation is vital. Too many tools with too many logins risk creating a Tower of Babel. For now, the weekly video call is the center point around which we all meet. This in itself is enough of a challenge given the global participation. Other than this, email is the acknowledged primary channel for asynchronous communication.
  • We are not accepting new members. Whether, or for how long, this remains the case is undecided. We have discussed various ways of introducing new members, but have decided to decide on this issue later. The circumstances that brought each of us together created a unique bond of trust and familiarity with each other’s business interests, and that makes the introduction of new members a risk to maintaining these relationships. At the moment, we are tipped slightly toward being on the large side, and everyone acknowledges that if we grow much bigger the meetings may become unmanageable and the interactions less valuable. Since trust is foundational, none of the details related to who we are and what we discuss will be revealed in this space. My writing will be limited to the general case of what I discover from having participated in and helped guide our tribe. It is my hope this may help others with forming and guiding their own teams and tribes.

Whatever the outcome, it’s been more fun thus far than I’ve had in a loooooong time.


Image by Youssef Jheir from Pixabay

Agile and Changing Requirements or Design

I hear this (or some version) more frequently in recent years than in past:

Agile is all about changing requirements at any time during a project, even at the very end.

I attribute the increased frequency to the increased popularity of Agile methods and practices.

That the “Responding to change over following a plan” Agile Manifesto value is cherry-picked so frequently is probably due to a couple of factors:

  • It’s human nature for a person to resist being cornered into doing something they don’t want to do. So this value gets them out of performing a task.
  • The person doesn’t understand the problem or doesn’t have a solution. So this value buys them time to figure out how to solve the problem. Once they do have a solution, well, it’s time to change the design or the requirements to fit the solution. This reason isn’t necessarily bad unless it’s the de facto solution strategy.

The intent behind the “Responding to change” value, and the way successful Agile is practiced, does not allow for constant and unending change. If this were otherwise nothing would ever be completed and certainly nothing would ever be released to the market.

I’m not going to rehash the importance of the preposition in the value statement. Any need to explain the relativity implied by its use has become a useful signal for me to spend my energies elsewhere. But for those who are not challenged by the grammar, I’d like to say a few things about how to know when change is appropriate and when it’s important to follow a plan.

The key is recognizing and tracking decision points. With traditional project management, decisions are built into the project plan. Every possible bit of work is defined and laid out on a Gantt chart, like the steel rails of a train track. Deviation from this path would be actively discouraged, if it were considered at all.

With an Agile process, decision points that consider possible changes in direction are built in – daily scrums, sprint planning, backlog refinement, reviews and demonstrations at the end of sprints and releases, retrospectives, acceptance criteria, definitions of done, continuous integration – all of these reflect deliberate opportunities to evaluate progress and determine whether any changes need to be made. They are all activities that represent decisions or agreements to lock in work definitions for short periods of time.

For example, at sprint planning, a decision is made to complete a block of work in a specified period of time – often two weeks. After that, the work is reviewed and decisions are made as to whether or not that work satisfies the sprint goal and, by extension, the product vision. At this point, the product definition is specifically opened up for feedback from the stakeholders and any proposed changes are discussed. Except under unique circumstances, changes are not introduced mid-sprint and the teams stick to the plan.

Undoing decisions or agreements only happens if there is supporting information, such as technical infeasibility or a significant market shift. Undoing decisions and agreements doesn’t happen just because “Agile is all about changing requirements.” Agile supports changing requirements when there is good reason to do so, irrespective of the original plan. With traditional project management, it’s all about following the plan and change at any point is resisted.

This is the difference. With traditional project management, decisions are built into the project plan. With Agile, they are adapted in.


Photo by Chris Lawton on Unsplash

Crafting a Product Vision

In his book, “Crossing the Chasm,” Geoffrey Moore offers a template of sorts for crafting a product vision:

For (target customer)

Who (statement of the need or opportunity)

The (product name) is a (product category)

That (key benefit, problem-solving capability, or compelling reason to buy)

Unlike (primary competitive alternative, internal or external)

Our product/solution (statement of primary differentiation or key feature set)

To help wire this in, the following guided exercise can be helpful. Consider the following product vision statement for a fictitious software program, Checkwriter 1.0:

For the bill-paying member of the family who also uses a home PC

Who is tired of filling out the same old checks month after month

Checkwriter is a home finance program for the PC

That automatically creates and tracks all your check-writing.

Unlike Managing Your Money, a financial analysis package,

Our product/solution is optimized specifically for home bill-paying.
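
As an aside before the exercise: the template is mechanical enough to capture in a few lines of code. This is a minimal sketch (Python; the function and argument names are my own), filling Moore’s six slots with the Checkwriter example above:

    def vision_statement(target, need, name, category,
                         benefit, alternative, differentiator):
        """Render Geoffrey Moore's product vision template."""
        return (f"For {target}\n"
                f"Who {need}\n"
                f"{name} is a {category}\n"
                f"That {benefit}\n"
                f"Unlike {alternative},\n"
                f"Our product/solution {differentiator}")

    print(vision_statement(
        target="the bill-paying member of the family who also uses a home PC",
        need="is tired of filling out the same old checks month after month",
        name="Checkwriter",
        category="home finance program for the PC",
        benefit="automatically creates and tracks all your check-writing",
        alternative="Managing Your Money, a financial analysis package",
        differentiator="is optimized specifically for home bill-paying"))

The value is not in the code, of course – it is in being forced to fill every slot before the project starts.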

Ask the team to raise their hand when an item on the following list of potential features does not fit the product/solution vision, and to keep it up unless they hear a later item that they feel does fit. By doing this, the team is being asked, “At what point does the feature list begin to move outside the boundaries suggested by the product vision?” Most hands should go up around item #4 or #5. All hands should be up by #9. A facilitated discussion of the transition between “fits vision” and “doesn’t fit vision” is often quite effective after this brief exercise.

  1. Logon to bank checking account
  2. Synchronize checking data
  3. Generate reconciliation reports
  4. Send and receive email
  5. Create and manage personal budget
  6. Manage customer contacts
  7. Display tutorial videos
  8. Edit videos
  9. Display the local weather forecast for the next 5 days

It should be clear that one or more of the later items on the list do not belong in Checkwriter 1.0. This is how product visions work. They provide a filter through which potential features can be run during the life of the project to determine whether they are inside or outside the project’s scope of work. As powerful as this is, the product vision will only catch the larger features that threaten the project work scope. To catch the finer-grained threats of scope creep, a product road map needs to be defined by the product owner.


Photo by S Migaj on Unsplash

Systems Thinking, Project Management, and Agile – Part 8: Design Changes and Scope

[For this series, it will help to have read “System Dynamics and Causal Loop Diagrams 101.”]

For the conclusion of this series, I’ll look at two more levers management can pull when working to keep their teams in good health: design changes and scope changes.

Changes in design can either be tightly or loosely coupled to changes in scope. In general, you can’t change one without changing the other. This is how I think of design and scope. Others think of them differently.

Few people intentionally change the scope of a project. Design changes, however, are usually intentional and frequent. They are also usually small relative to the overall project design so their effect on scope and progress can go unnoticed.

Nonetheless, small design changes are additive. Accumulate enough of them and it becomes apparent that scope has been affected. Few people recognize what has happened until it’s too late. A successive string of “little UI tweaks,” a “simple” addition to handle another file format that turned out to be not-so-simple to implement, a feature request slipped in by a senior executive to please a super important client – changes like this incrementally and adversely impact the delivery team’s performance.

Scope changes primarily impact the amount of Work to Do (Figure 1). Of course, Scope changes impact other parts of the system, too. The extent depends on the size of the Scope change and how management responds to the change in Scope. Do they push out the Deadline? Do they Hire Talent?

Figure 1

The effect of Design Changes on the system is more immediate and significant. Progress slows down while the system works to understand and respond to the Design Changes. As with Scope, the effect will depend on the extent of the Design Changes introduced into the system. The amount of Work to Do will increase. The development team will need to switch focus to study the changes (Task Switching). If other teams are dependent on completion of prior work or are waiting for the new changes, Overlap and Concurrence will increase. To incorporate the changes mid-project, Technical Debt will likely be incurred in order to keep the project on schedule. And if the design impacts work already completed or in progress, there will be an increase in the amount of Rework to Do for the areas impacted by the Design Changes.
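
To make these dynamics concrete, here is a toy stock-and-flow run in Python. This is not the causal loop model from the series – the numbers are made-up assumptions – but it shows the structure described above: a single mid-project design change adds scope, generates rework on completed work, and slows throughput while the team task-switches:

    # Toy stock-and-flow model: all numbers are illustrative assumptions.
    work_to_do = 100.0      # stock: story points remaining
    base_rate = 5.0         # flow: points completed per week
    done = 0.0
    weeks = 0

    while work_to_do > 0.0:
        weeks += 1
        rate = base_rate
        if weeks == 8:                 # a design change lands mid-project
            work_to_do += 15.0         # new scope introduced by the change
            work_to_do += 0.2 * done   # rework on affected completed work
        if 8 <= weeks <= 10:           # task switching while absorbing it
            rate *= 0.6
        completed = min(rate, work_to_do)
        work_to_do -= completed
        done += completed

    print(f"Finished in {weeks} weeks; the no-change run takes "
          f"{int(100.0 / base_rate)} weeks.")

Even this crude model finishes roughly six weeks late from a single change; a string of “little UI tweaks” compounds the same way.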

Perhaps the most important secondary consequence of uncontrolled design changes is the effect on morale. Development teams love a good challenge and solving problems. But this only has a positive effect on morale if the goal posts don’t change much. If the end is perpetually just over the next hill, morale begins to suffer. This hit to morale usually happens much quicker than most managers realize.

It is better to push off non-critical design changes to a future release. This very act often serves as a clear demonstration to development teams that management is actively working to control scope and can have a positive effect on the team’s morale, even if they are under a heavy workload.


Photo by Ariel Pilotto on Unsplash

Minimum Viable Product – It’s What You Don’t See

Take a moment or two to gaze at the image below. What do you see?

[Image: a grid of thin black lines from which illusory white dots and diagonal white lines emerge]

Do you see white dots embedded within the grid, connected by diagonal white lines? If you do, try to ignore them. Chances are, your brain won’t let you, even though the white circles and diagonal lines don’t exist. Their “thereness” is created by the thin black lines. Because of the carefully drawn, simple repetitive pattern of black lines, your brain fills in the void and enhances the image with white dots and diagonal white lines. You cannot not do this. This cognitive process is important to be aware of if you are a product owner, because both your agile delivery team members and clients will run this program without fail.

Think of the black lines as the minimum viable product definition for one of your sprints. When shown to your team or your client, they will naturally fill the void for what’s next or what’s missing. Maybe as a statement, most likely as a question. But what if the product owner defined the minimum viable product further and presented, metaphorically, something like this:

[Image: the same grid with the white space filled in, leaving nothing for the eye to complete]

With the white space removed from the original image, there are fewer possibilities for your team and the client to explore. We’ve reduced their response to our proposed solution to a “yes” or “no,” and in doing so have started down the path of near endless cycles of the product owner guessing what the client wants and the agile delivery team guessing what the product owner wants. Both the client and the team will grow increasingly frustrated at the lack of progress. Played out too long, the client is likely to doubt our skills and competency at finding a solution.

On the other hand, by strategically limiting the information presented in the minimum viable product (or effort, if you like) we invite the client and the agile delivery team to explore the white space. This will make them co-creators of the solution and more fully invested in its success. Since they co-created the solution, they are much more likely to view the solution as brilliant, perfect, and the shiniest of shiny objects.

I can’t remember where I heard or read this, but in the first image the idea is that the black lines are you talking and the white spaces are you listening.

Responding to change over following a plan

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

Agile Manifesto Principle #2

Following from the Agile Manifesto value that is the title of this post, Principle #2 may be the most misinterpreted and misunderstood principle among the set of twelve. Teams frequently behave as if this principle were prefaced with the word “always.”

Constantly shifting requirements leads to a frustrating and unsatisfying environment in which to work. It feeds burn-out and erodes morale. The satisfaction of a job well done depends on the opportunity to actually finish the job, no matter how small. Consider the effects on a finish carpenter who has just spent several days installing and trimming a full set of kitchen cabinets when the homeowner declares they want to change the kitchen design such that all those new cabinets will need to be ripped out and work begun on a new design. Or a film editor who has just worked 21 days straight to pare down an hour’s worth of video to fit into 7 minutes only to learn the scene has to be re-shot from scratch in order to match a change in the story line.

Of course, the second principle does not state we should “always welcome changing requirements.” Nor does anyone I know claim that it does. But that doesn’t stop people from behaving as if it did. The rationale offered for agreeing to change requests from the stakeholders may be “We’re an agile shop and agile welcomes changing requirements” when, in fact, the change was agreed to because the product owner didn’t challenge the value of the change or make clear the consequences to the stakeholders. Or the original design was, and remains, needlessly ambiguous. Or the stakeholders have changed without renegotiating the contract or working agreements. Or any number of reasons that are conveniently masked with “welcoming changing requirements.” At some point, welcoming changing requirements is about as attractive as welcoming a rabid dog into the house. This won’t end well.

So, what kind of change is the Agile Manifesto referring to? There are several key scenarios that embody the need for flexibility around requirements.

  • The change that results from periods of deliberate design, such as during design sprints.
  • The change that is driven by the lessons learned from exploration and prototyping. If it is understood that the work being “completed” is for the purposes of testing a hypothesis and the expectation is that the work will most likely be thrown away, there can still be a great deal of satisfaction derived from the effort, as the actual deliverable wasn’t working software but the lessons from the experiment (usually in the form of a wireframe or prototype).

So what is it that locks out the option for additional change? It’s a simple event, really. A decision is made.

Each of these scenarios where adapting to lessons and discovery is essential nonetheless ends in a decision, a leverage point from which progress can be made toward a final deliverable. Each of these decisions can itself form the basis of a series of experiments which, depending on the eventual outcome, may change. Often, a single decision point may look good, but when several decisions are evaluated together they may suggest a new direction and therefore impact the requirements. If the cumulative insight from a series of decisions results in the need to change direction, that shift is usually more substantial – on the scale of a project plan pivot rather than a simple response to a single change in a single requirement. The need to pivot cannot reliably be revealed if the underlying decisions do not coalesce into some sort of stable understanding of the emerging design.

Changing requirements cannot go on indefinitely or a final product will never be delivered. Accepting change for the sake of change is what gets teams into trouble.

Much like the forces of evolution, there will always be some external force that seeks to change the project requirements so that the delivered product can be stronger, faster, better, taller, smarter, etc. This must be countered by clear definitions of “minimum viable” and “good enough for now” relative to what the customer is expecting.

In addition, product owners would serve their teams well by vigorously challenging any proposed changes to the requirements.

  • What is the source of the change?
  • Is it random change or triggered by some agent that does not announce its arrival ahead of time?
  • Was the change in requirements a surprise? If so, why was it a surprise?
  • Will this (or something like it) happen again? With what frequency? At what probability?

Photo by juan pablo rodriguez on Unsplash

How to Know You Have a Well Defined Minimum Viable Product

Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some resemblance to the final product but is barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!

We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features to the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.

So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy:

  1. State the good faith assumptions about what the customer wants and needs.
  2. Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
  3. Build an MVP that tests the assumptions.
  4. Evaluate the results.

If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
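
As a minimal sketch (Python; the class shape and field names are mine, purely illustrative), the four-step strategy can be captured as an experiment record that is not well formed until the assumption and its test are stated:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MvpExperiment:
        """One turn of the build-measure-learn loop."""
        assumption: str            # 1. good-faith assumption about the customer
        test: str                  # 2. how the MVP will measure that assumption
        artifact: str              # 3. the cheap thing we build
        result: Optional[str] = None   # 4. what we learned, filled in afterward

        def is_well_formed(self) -> bool:
            # An unstated assumption or a vague test means wasted effort.
            return bool(self.assumption.strip()) and bool(self.test.strip())

    experiment = MvpExperiment(
        assumption="Home users will pay bills from a PC rather than by mail",
        test="3 of 5 pilot users complete a simulated bill run unaided",
        artifact="Clickable screen mock-up with placeholder text")

    assert experiment.is_well_formed()

An experiment that fails is_well_formed() is a signal to stop and restate the assumption before building anything.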

The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.

MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps develop the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.

“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup

So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (Use of the following term “stakeholders” can mean company executives or external customers.)

  • Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
  • What expectations have been set for the stakeholders?
  • Is the distinction clear between what the stakeholders want vs what they need?
  • Is the distinction clear between high and low value? Is the design cart before the value horse?
  • What are the top two features or functions the stakeholders will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
  • Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
  • Are the identified features or functions leveraging code, content, or UI/UX reuse?

Recognizing an MVP – Less is More

Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.

An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all counts. Well, on most counts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.

Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough…For Now’“)

An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.

MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes of the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.

Photo by Sarah Dorweiler on Unsplash

The Value of “Good Enough…for Now”

Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell¹ spent on setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.

Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.

The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define it. It is a definition frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.

Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high quality niche is there, the business might succeed. If not, the business loses out to lower priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).

Figure 1. Quality Mismatch I

Certainly, a case can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, however, this may not be the best place to start.

On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).

Figure 2. Quality Mismatch II

The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.

Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.

Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This will maximize internal resources (usually time and money) and deliver a product or service that satisfies the customer’s idea of “quality.”

Case in point: Several months back, I was assembling several bar clamps and needed a set of cutting tools used to put the thread on the end of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether that be a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.

By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. For sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low quality tools that are “good enough” to get the job done.

Addendum

I’ve been giving some more thought to the idea of “good enough” as one of the criteria for defining minimum viable/valuable products. What’s different is that I’ve started to use the phrase “good enough for now.” Reason being, the phrase “good enough” seems to imply an end state. “Good enough” is an outcome. If it is early in a project, people generally have a problem with that. They have some version of an end state that is a significant mismatch with the “good enough” product today. The idea of settling for “good enough” at this point makes it difficult for them to know when to stop work on an interim phase and collect feedback.

“Good enough for now” implies there is more work to be done and the product isn’t in some sort of finished state that they’ll have to settle for. “Good enough for now” is a transitory state in the process. I’m finding that I can more easily gain agreement that a story is finished and get people to move forward to the next “good enough for now” by including the time qualifier.

References

¹ Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2. (This article cross-posted at LinkedIn.)