The scrum framework is forever tied to the language of sports in general and rugby in particular. We organize our project work around goals, sprints, points, and daily scrums. An unfortunate consequence of organizing projects around a sports metaphor is that the language of gaming ends up driving behavior. For example, people have a natural inclination to associate the idea of story points with a measure of success rather than an indicator of the effort required to complete the story. The more points you have, the more successful you are. This is reflected in an actual quote from a retrospective on things a team did well:
We completed the highest number of points in this sprint than in any other sprint so far.
This was a team that lost sight of the fact they were the only team on the field. They were certain to be the winning team. They were also destined to be the losing team. They were focused on story point acceleration rather than a constant, predictable velocity.
More and more I’m finding less and less value in using story points as an indicator for level of effort estimation. If Atlassian made it easy to change the label on JIRA’s story point field, I’d change it to “Fuzzy Bunnies” just to drive this idea home. You don’t want more and more fuzzy bunnies, you want no more than the number you can commit to taking care of in a certain span of time typically referred to as a “sprint.” A team that decides to take on the care and feeding of 50 fuzzy bunnies over the next two weeks but has demonstrated – sprint after sprint – they can only keep 25 alive is going to lose a lot of fuzzy bunnies over the course of the project.
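To make the arithmetic concrete, here’s a minimal sketch, assuming nothing more than a rolling average of recently completed work (the function name and the three-sprint window are my inventions for illustration):

```python
# Sketch: cap the next sprint's commitment at demonstrated capacity.
# The three-sprint rolling window is an arbitrary, illustrative choice.

def demonstrated_capacity(completed_per_sprint, window=3):
    """Average number of fuzzy bunnies actually kept alive in recent sprints."""
    recent = completed_per_sprint[-window:]
    return sum(recent) / len(recent)

completed = [24, 26, 25]  # bunnies kept alive, sprint after sprint
planned = 50              # bunnies the team wants to take on next sprint

capacity = demonstrated_capacity(completed)
at_risk = max(0, planned - capacity)
print(f"Demonstrated capacity: {capacity:.0f}; bunnies at risk: {at_risk:.0f}")
```

Committing beyond demonstrated capacity doesn’t change the capacity; it just changes how many bunnies you lose.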
It is difficult for people new to scrum or Agile to grasp the purpose behind an abstract idea like story points. Consequently, they are unskilled in how to use them as a measure of performance and improvement. Developing this skill can take considerable time and effort. The care and feeding of fuzzy bunnies, however, they get, particularly on teams that include non-technical domains of expertise, such as content development or learning strategy.
A note here for scrum masters. Unless you want to exchange your scrum master stripes for a saddle and spurs, be wary of your team turning story pointing into an animal farm. Sizing story cards to match the exact size and temperament of all manner of animals would be just as cumbersome as the sporting method of story points. So, watch where you throw your rope, Agile cowboys and cowgirls.
Levels of effort for a set of tasks can be estimated efficiently and reliably by a group of individuals well qualified to complete those tasks using a collaborative estimation process like planning poker. Such teams have a good measure of skill overlap. In the context of the problem set, each team member is a generalist in the sense that it’s possible for any one of them to work on a variety of cross-functional tasks during a sprint. Differences in preferred coding language among team members, for example, are less of an issue when everyone understands advanced coding practices and the underlying architecture for the solution.
With a set of complementary technical skills, it is easier to agree on work estimates. There are other benefits that flow from well-matched teams. A stable sprint velocity emerges much sooner. There is greater cross-functional participation. And re-balancing the workload when “disruptors” occur – like vacations, illness, uncommon feature requests, etc. – is easier to coordinate.
Once the set of tasks starts to include items that fall outside the expertise of the group and the group begins to include cross-functional team members, a process like planning poker becomes increasingly less reliable. The issue is the mismatch between relative scales of expertise. A content editor is likely to have very little insight into the effort required to modify a production database schema. Their estimation may be little more than a guess based on what they think it “should” be. Similarly for a coder faced with estimating the effort needed to translate 5,000 words of text from English to Latvian. Unless, of course, you have an English-speaking coder on your team who also happens to be fluent in Latvian.
These distinctions are easy to spot in project work. When knowledge and solution domains have a great deal of overlap, generalization allows for a lot of high quality collaboration. However, when an Agile team is formed to solve problems that do not have a purely technical solution, specialization rather than generalization has a greater influence on overall success. The risk is that, with very little overlap in specialized team expertise, the result can be either shallow solutions or wasteful speculation – waste that isn’t discovered until much later. Moreover, re-balancing the team becomes problematic and most often results in delays and missed commitments due to the limited ability for cross-functional participation among teammates.
The challenge for teams where knowledge and solution domains have minimal overlap is to manage the specialized expertise domains in a way that is optimally useful. That is, reliable, predictable, and actionable. Success becomes increasingly dependent on how good an organization is at estimating levels of effort when the team is composed of specialists.
One approach I experimented with was to add a second dimension to the estimation: a weight factor tied to the estimator’s level of expertise relative to the nature of the card being considered. The idea is that with a weighted expertise factor calibrated to the problem and solution contexts, a more reliable velocity emerges over time. In practice, it was difficult to implement. Teams spent valuable time challenging what the weighted factor should be, and less experienced team members felt their opinion had been, quite literally, discounted.
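For the curious, here’s a minimal sketch of the mechanics I had in mind; the 0-to-1 expertise weights and the weighted-average formula are illustrations of the idea, not a prescribed method:

```python
# Sketch: each estimator's vote counts in proportion to their expertise
# relative to the card's domain (0.0 = no relevant expertise, 1.0 = expert).
# The weights and the formula are illustrative only.

def weighted_estimate(votes):
    """votes: list of (estimate, expertise_weight) pairs for one story card."""
    total_weight = sum(weight for _, weight in votes)
    return sum(estimate * weight for estimate, weight in votes) / total_weight

# A database-schema card: the senior developer's vote dominates while the
# content editor's educated guess barely registers.
votes = [(8, 0.9), (5, 0.6), (13, 0.1)]
print(f"Weighted estimate: {weighted_estimate(votes):.1f}")  # 7.2
```

The discounting is right there in the formula, which is exactly what the less experienced team members objected to.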
The approach I’ve had the most success with on teams with diverse expertise is to have story cards sized by the individual assigned to complete the work. This still happens in a collaborative refinement or planning session so that other team members can contribute information that is often outside the perspective of the work assignee. Dependencies, past experience with similar work on other projects, missing acceptance criteria, or a refinement to the story card’s minimum viable product (MVP) definition are all examples of the kind of information team members have contributed. This invariably results in an adjustment to the overall level of effort estimate on the story card. It also has made details about the story card more explicit to the team in a way that a conversation focused on story point values doesn’t seem to achieve. The conversation shifts from “What are the points?” to “What’s the work needed to complete this story card?”
I’ve also observed that by focusing ownership of the estimate on the work assignee, accountability and transparency tend to increase. Potential blockers are surfaced sooner and team members communicate issues and dependencies more freely with each other. Of course, this isn’t always the case and in a future post I’ll explore aspects of team composition and dynamics that facilitate or prevent quality collaboration.
In a recent conversation with colleagues we were debating the merits of using story point velocity as a metric for team performance and, more specifically, how it relates to determining a team’s predictability. That is to say, how reliable the team is at completing the work they have promised to complete. At one point, the question of what is a story point came up and we hit on the idea of story points not being “points” at all. Rather, they are more like currency. This solved a number of issues for us.
First, it interrupts the all too common assumption that story points (and by extension, velocities) can be compared between teams. Experienced scrum practitioners know this isn’t true and that nothing good can come from normalizing story points and sprint velocities between teams. And yet this is something non-Agile-savvy management types are wont to do. Thinking of a story’s effort in terms of currency carries with it the implicit assumption that one team’s “dollars” are not another team’s “rubles” or another team’s “euros.” At the very least, an exchange evaluation would need to occur. Nonetheless, dollars, rubles, and euros convey an agreement of value, a store of value that serves as a reliable predictor of exchange. X number of story points will deliver Y value from the product backlog.
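A toy example, with invented numbers, shows why the naive comparison fails: the only meaningful “exchange rate” between two teams’ points is the backlog value each team actually delivers per point, and a raw velocity comparison ignores it entirely.

```python
# Toy example (numbers invented): two teams deliver identical backlog value
# but price it in different story point "currencies."
team_a = {"velocity": 25, "items_delivered": 10}
team_b = {"velocity": 50, "items_delivered": 10}

# Naive comparison: Team B looks twice as productive.
print(team_b["velocity"] / team_a["velocity"])  # 2.0

# "Exchange rate" view: value delivered per point tells the real story.
print(team_a["items_delivered"] / team_a["velocity"])  # 0.4 items per point
print(team_b["items_delivered"] / team_b["velocity"])  # 0.2 items per point
```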
The second thing the currency frame accomplished was to clarify the consequences of populating the product backlog with a lot of busy work or non-value-adding tasks. Such work debases the story currency: the measure of the level of effort becomes inflated and the ability of the story currency to function as a store of value is diminished.
There are a host of other interesting economics-derived thought experiments that can be played out with this frame around story effort. What’s the effect of supply and demand on available story currency (points)? What’s the state of the currency supply (resource availability)? Is there such a thing as counterfeit story currency? If so, what does that look like? How might this mesh with the idea of technical or dark debt?
Try this out at your next backlog refinement session (or whenever it is you plan to size story efforts): Ask the team what you would have to pay them in order to complete the work. Choose whatever measure you wish – dollars, chickens, cookies – and use that as a basis for determining the effort needed to complete the story. You might also include in the conversation the consequences to the team – using the same measures – if they do not deliver on their promise.
“Keep your eye on the ball!” I was always coached when learning how to play baseball. Seemed like reasonable advice while standing at the plate, facing down the pitcher for the opposing team. Certainly wouldn’t want to be daydreaming or casting my gaze to the horizon. It didn’t seem to help, though. I excelled at striking out.
Later…much later…I came across Wayne Gretzky’s quote: “Skate to where the puck is going, not where it has been.” I wonder whether, had I learned to figure out where the baseball was going to be and made sure my bat was there to meet it, I might have spent more time on base. Keeping my eye on the ball didn’t tell me much about when to start my swing.
No regrets. I still love the game (as it was, not as it currently is).
I think of this Gretzky quote when I watch product owners struggle with organizing their backlog. (I also think how tragic it is that the business world has beaten this quote into an intolerable pulpy platitude.)
Ask a product owner what their team is working on today and they should be able to give a succinct answer. Ask them what their team is going to be working on in three months and watch the clock. The longer they can go on about what their team will be working on, the healthier their backlog is likely to be. Struggling product owners scramble to keep their teams busy sprint-to-sprint. Good product owners can see where their teams are going to be in several months. Great product owners see to the end of the game.
In Parkinson’s Law of Triviality and Story Sizing, I touched on the issue of relative expertise among team members during collaborative efforts to size story cards. I’d like to expand on that idea by considering several types of team compositions.
Team 1 is a tight-knit band of four software developers, represented in Figure 1.
Team 1 represents a near-ideal team composition for a typical software related project. However, the real world isn’t so generous in its allocation of near-ideal, let alone ideal, teams. A typical team for a software related project is more likely to resemble Team 2, as represented in Figure 2.
As Agile practices become more ubiquitous in the business world, team composition begins to resemble Team 3, as shown in Figure 3.
The mix now includes non-technical people – content developers and editors, strategists, and designers. Even assuming an equal level of experience in their respective domains, the company, and the business environment, there is very little overlap. Arriving at a consensus during a story sizing exercise now becomes a significant challenge. But again, the real world isn’t even so kind as this. We are increasingly more likely to encounter teams that resemble Team 4 as shown in Figure 4.
As before, the relative circle of expertise among team members can vary quite a bit. When a team resembles the composition of Team 4, the software developers (HTML5/CSS and C#) will have trouble understanding what the Learning Strategist is asking for while the Learning Strategist may not understand why what he wants the software developers to deliver isn’t possible.
When I’ve attempted to facilitate story sizing sessions with teams that resemble Team 4, the sessions either become quite contentious (and therefore time consuming) or the team members who lack the expertise to understand a particular card simply accept the opinion of the stronger voices. Neither of these situations is desirable.
To counteract these possibilities, I’ve found it much more effective to have the card assignee determine the card size (points and time estimate) and work to have the other team members ask questions about the work described on the card such that the assignee and the team better understand the context in which the card is positioned. The team members that lack domain expertise, it turns out, are in a good position to help craft good acceptance criteria by asking questions like these:
Who will consume the work product that results from the card? (dependencies)
What cards need to be completed before a particular card can be worked on? (dependencies)
Is everything known about what a particular card needs before it can be completed? (dependencies, discovery, exploration)
At the end of a brief conversation where the entire team is working to evaluate the card for anything other than level of effort (time) and complexity (points), it is not uncommon for the assignee to reconsider their sizing, break the card into multiple cards, or determine the card shouldn’t be included in the sprint backlog. In short, it ends up being a much more productive conversation if teammates aren’t haggling over point distinctions or passively accepting what more experienced teammates are advocating. The benefit to the product owner is that they now have additional information that will undoubtedly influence the product backlog prioritization.
Three years ago, I published the following article:
It appears mindfulness is…well…on a lot of people’s minds lately. I’ve seen this wave come and go twice before. This go around, however, will be propelled and amplified by the Internet. Will it come and go faster? Will there be a lasting and deeper revelation around mindfulness? I predict the former.
Mindfulness is simple and it’s hard. As the saying goes, mindfulness is not what you think. It was difficult when I first began practicing Rinzai Zen meditation and Aikido many years ago. It’s even more difficult in today’s instant information, instant gratification, and short attention span culture. The uninitiated are ill equipped for the journey.
With this latest mindfulness resurgence expect an amplified parasite wave of meditation teachers and mindfulness coaches. A Japanese Zen Master (Roshi, or “teacher”) I studied with years ago called them “popcorn roshis” – they pop up everywhere and have little substance. No surprise that this wave includes a plethora of mindfulness “popcorn apps.”
Spoiler alert: There are no apps for mindfulness. Attempting to develop mindfulness by using an app on a device that is arguably the single greatest disruptor of mindfulness is much like taking a pill to counteract the side effects of another pill in your quest for health. At a certain point, the pills are the problem. They’ve become the barrier to health.
The “mindfulness” apps that can be found look to be no different than thousands of other non-mindfulness apps offering timers, journaling, topical text, and progress tracking. What they all have in common is that they place your mindfulness practice in the same space as all the other mindfulness killing apps competing for your attention – email, phone, texts, social media, meeting reminders, battery low alarms, and all the other widgets that beep, ring, and buzz.
The way to practice mindfulness is through the deliberate subtraction of distractions, not the addition of another collection of e-pills. The “killer app” for mindfulness is to kill the app. The act of powering off your smart phone for 30 minutes a day is in itself a powerful practice toward mindfulness. No timer needed. No reminder required. Let it be a random act. Be free! At least for 30 minutes or so.
Mental states like mindfulness, focus, and awareness are choices and don’t arise out of some serendipitous environmental convergence of whatever. They are uniquely human states. Relying on a device or machine to develop mindfulness is decidedly antithetical to the very state of mindfulness. Choosing to develop such mental states requires high quality mentors (I’ve had many) and deliberate practice – a practice that involves subtracting the things from your daily life that work against them.
“For if a person shifts their caution to their own reasoned choices and the acts of those choices, they will at the same time gain the will to avoid, but if they shift their caution away from their own reasoned choices to things not under their control, seeking to avoid what is controlled by others, they will then be agitated, fearful, and unstable.” – Epictetus, Discourses, 2.1.12
Looking at the past three Internet years I’d have to say the prospects for the latest mindfulness wave amounting to anything substantial are bleak. There probably aren’t enough words to describe how far off the rails this fad has gone. But there is a number! 2020.
Bonus: There’s a study! “Minding your own business? Mindfulness decreases prosocial behavior for those with independent self-construals.” (Preprint) There was a concept in this study that was new to me: “self-construal.” According to the APA dictionary:
n. any specific belief about the self. The term is used particularly in connection with the distinction between independent self-construals and interdependent self-construals.
Well, that definition sorta has itself as the definition. So…
a view of the self (self-construal) that emphasizes one’s separateness and unique traits and accomplishments and that downplays one’s embeddedness in a network of social relationships. Compare interdependent self-construal.
a view of the self (self-construal) that emphasizes one’s embeddedness in a network of social relationships and that downplays one’s separateness and unique traits or accomplishments. Compare independent self-construal.
Clear on terms, on to the abstract:
Mindfulness appears to promote individual well-being, but its interpersonal effects are less clear. Two studies in adult populations tested whether the effects of mindfulness on prosocial behavior differ by self-construals. In Study 1 (N = 366), a brief mindfulness induction, compared to a meditation control, led to decreased prosocial behavior among people with relatively independent self-construals, but had the opposite effect among those with relatively interdependent self-construals. In Study 2 (N = 325), a mindfulness induction led to decreased prosocial behavior among those primed with independence, but had the opposite effect among those primed with interdependence. The effects of mindfulness on prosocial behavior appear to depend on individuals’ broader social goals. This may have implications for the increasing popularity of mindfulness training around the world.
TL;DR: Mindfulness “training” makes selfish people more selfish and narcissistic people more narcissistic. On the other hand, it makes altruistic people more altruistic and compassionate people more compassionate. So there’s that.
I think it’s fair to say the last several years in particular have revealed an awe-inspiring explosion in selfishness and narcissism, evidenced by the extreme polarization manifest in identity politics and all its dubious offspring. The thought of all these people pulling on a mindfulness mask like some fashion accessory is less than comforting.
Three years on, it looks to be certain that “mindfulness” has been co-opted and applied as a temporary bandage in a world lacking resilience, flexibility, and genuine tolerance. Another mindfulness wave has failed to crash onto the shores of civilization with a cleansing thunder; instead it has mindlessly trickled up to the edge and done little more than rearrange the garbage floating at the shoreline.
I really wanted it to succeed. Maybe next time.
Poulin, M., Ministero, L., Gabriel, S., Morrison, C., & Naidu, E. (2021, April 9). Minding your own business? Mindfulness decreases prosocial behavior for those with independent self-construals. https://doi.org/10.31234/osf.io/xhyua
Conceptually, the idea of a minimum viable product (MVP) is easy to grasp. Early in a project, it’s a deliverable that bears some semblance to the final product but is barely able to stand on its own without lots of hand-holding and explanation for the customer’s benefit. In short, it’s terrible, buggy, and unstable. By design, MVPs lack features that may eventually prove to be essential to the final product. And we deliberately show the MVP to the customer!
We do this because the MVP is the engine that turns the build-measure-learn feedback loop. The key here is the “learn” phase. The essential features to the final product are often unclear or even unknown early in a project. Furthermore, they are largely undefinable or unknowable without multiple iterations through the build-measure-learn feedback cycle with the customer early in the process.
So early MVPs aren’t very good. They’re also not very expensive. This, too, is by design because an MVP’s very raison d’être is to test the assumptions we make early on in a project. They are low budget experiments that follow from a simple strategy:
State the good faith assumptions about what the customer wants and needs.
Describe the tests the MVP will satisfy that are capable of measuring the MVP’s impact on the stated assumptions.
Build an MVP that tests the assumptions.
Evaluate the results.
If the assumptions are not stated and the tests are vague, the MVP will fail to achieve its purpose and will likely result in wasted effort.
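One lightweight way to keep the assumptions and tests explicit is to record them alongside the MVP itself. A sketch, with invented field names, assuming nothing more than a plain record per experiment:

```python
# Sketch: every MVP carries its stated assumption and a concrete,
# measurable test. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class MvpExperiment:
    assumption: str   # good-faith belief about what the customer needs
    test: str         # measurable criterion the MVP must satisfy
    result: str = ""  # filled in after the build-measure-learn pass

experiment = MvpExperiment(
    assumption="Customers want to browse reports by date range",
    test="3 of 5 pilot users complete a date-range lookup unaided",
)
experiment.result = "2 of 5 succeeded; most looked for a search box first"
```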
The “product” in “minimum viable product” can be almost anything: a partial or early design flow, a wireframe, a collection of simulated email exchanges, the outline to a user guide, a static screen mock-up, a shell of screen panels with placeholder text that can nonetheless be navigated – anything that can be placed in front of a customer for feedback qualifies as an MVP. In other words, a sprint can contain multiple MVPs depending on the functional groups involved with the sprint and the maturity of the project. As the project progresses, the individual functional group MVPs will begin to integrate and converge on larger and more refined MVPs, each gaining in stability and quality.
MVPs are not an end unto themselves. They are tangible evidence of the development process in action. The practice of iteratively developing MVPs helps build the skill of rapid evaluation and learning among product owners and agile delivery team members. A buggy, unstable, ugly, bloated, or poorly worded MVP is only a problem if it’s put forward as the final product. The driving goal behind iterative MVPs is not perfection; rather, it is to support the process of learning what needs to be developed for the optimal solution that solves the customer’s problems.
“Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.” – Eric Ries, The Lean Startup
So how might product owners and Agile teams begin to get a handle on defining an MVP? There are several questions the product owner and team can ask of themselves, in light of the product backlog, that may help guide their focus and decisions. (In what follows, “stakeholders” can mean company executives or external customers.)
Identify the likely set of stakeholders who will be attending the sprint review. What will these stakeholders need to see so that they can offer valuable feedback? What does the team need to show in order to spark the most valuable feedback from the stakeholders?
What expectations have been set for the stakeholders?
Is the distinction clear between what the stakeholders want vs what they need?
Is the distinction clear between high and low value? Is the design cart before the value horse?
What are the top two features or functions the stakeholders will be expecting to see? What value – to the stakeholders – will these features or functions deliver?
Will the identified features or functions provide long term value or do they risk generating significant rework down the road?
Are the identified features or functions leveraging code, content, or UI/UX reuse?
Recognizing an MVP – Less is More
Since an MVP can be almost anything, it is perhaps easier to begin any conversation about MVPs by touching on the elements missing from an MVP.
An MVP is not a quality product. Using any generally accepted definition of “quality” in the marketplace, an MVP will fail on all accounts. Well, on most accounts. The key is to consider relative quality. At the beginning of a sprint, the standards of quality for an MVP are framed by the sprint goals and objectives. If it meets those goals, the team has successfully created a quality MVP. If measured against the external marketplace or the quality expectations of the customer, the MVP will almost assuredly fail inspection.
Your MVPs will probably be ugly, especially at first. They will be missing features. They will be unstable. Build them anyway. Put them in front of the customer for feedback. Learn. And move on to the next MVP. Progressively, they will begin to converge on the final product that is of high quality in the eyes of the customer. MVPs are the stepping stones that get you across the development stream and to the other side where all is sunny, beautiful, and stable. (For more information on avoiding the trap of presupposing what a customer means by quality and value, see “The Value of ‘Good Enough…For Now’”)
An MVP is not permanent. Agile teams should expect to throw away several, maybe even many, MVPs on their way to the final product. If they aren’t, then it is probable they are not learning what they need to about what the customer actually wants. In this respect, waste can be a good, even important thing. The driving purpose of the MVP is to rapidly develop the team’s understanding of what the customer needs, the problems they are expecting to have solved, and the level of quality necessary to satisfy each of these goals.
MVPs are not the truth. They are experiments meant to get the team to the truth. By virtue of their low-quality, low-cost nature, MVPs quickly shake out the attributes to the solution the customer cares about and wants. The solid empirical foundation they provide is orders of magnitude more valuable to the Agile team than any amount of speculative strategy planning or theoretical posturing.
Any company interested in being successful, whether offering a product or service, promises quality to its customers. Those that don’t deliver, die away. Those that do, survive. Those that deliver quality consistently, thrive. Seems like easy math. But then, 1 + 1 = 2 seems like easy math until you struggle through the 350+ pages Whitehead and Russell¹ spent on setting up the proof for this very equation. Add the subjective filters for evaluating “quality” and one is left with a measure that can be a challenge to define in any practical way.
Math aside, when it comes to quality, everyone “knows it when they see it,” usually in counterpoint to a decidedly non-quality experience with a product or service. The nature of quality is indeed chameleonic – durability, materials, style, engineering, timeliness, customer service, utility, aesthetics – the list of measures is nearly endless. Reading customer reviews can reveal a surprising array of criteria used to evaluate the quality for a single product.
The view from within the company, however, is even less clear. Businesses often believe they know quality when they see it. Yet that belief is often predicated on how the organization defines quality, not how their customers define quality. It is a definition that is frequently biased in ways that accentuate what the organization values, not necessarily what the customer values.
Organization leaders may define quality too high, such that their product or service can’t be priced competitively or delivered to the market in a timely manner. If the high quality niche is there, the business might succeed. If not, the business loses out to lower priced competitors that deliver products sooner and satisfy the customer’s criteria for quality (see Figure 1).
Certainly, there is a case that can be made for providing the highest quality possible and developing the business around that niche. For startups and new product development, however, this may not be the best place to start.
On the other end of the spectrum, businesses that fall short of customer expectations for quality suffer incremental, or in some cases catastrophic, reputation erosion. Repairing or rebuilding a reputation for quality in a competitive market is difficult, maybe even impossible (see Figure 2).
The process for defining quality on the company side of the equation, while difficult, is more or less deliberate. Not so on the customer side. Customers often don’t know what they mean by “quality” until they have an experience that fails to meet their unstated, or even unknown, expectations. Quality savvy companies, therefore, invest in understanding what their customers mean by “quality” and plan accordingly. Less guess work, more effort toward actual understanding.
Furthermore, looking to what the competition is doing may not be the best strategy. They may be guessing as well. It may very well be that the successful quality strategy isn’t down the path of adding more bells and whistles that market research and focus groups suggest customers want. Rather, it may be that improvements in existing features and services are more desirable.
Focus on being clear about whether or not potential customers value the offered solution and how they define value. When following an Agile approach to product development, leveraging minimum viable product definitions can help bring clarity to the effort. With customer-centric benchmarks for quality in hand, companies are better served by first defining quality in terms of “good enough” in the eyes of their customers and then setting the internal goal a little higher. This will maximize internal resources (usually time and money) and deliver a product or service that satisfies the customer’s idea of “quality.”
Case in point: Several months back, I was assembling several bar clamps and needed a set of cutting tools used to put the thread on the end of metal pipes – a somewhat exotic tool for a woodworker’s shop. Shopping around, I could easily drop $300 for a five star “professional” set or $35 for a set that was rated to be somewhat mediocre. I’ve gone high end on many of the tools in my shop, but in this case the $35 set was the best solution for my needs. Most of the negative reviews revolved around issues with durability after repeated use. My need was extremely limited and the “valuable and good enough” threshold was crossed at $35. The tool set performed perfectly and more than paid for itself when compared with the alternatives, whether that be a more expensive tool or my time to find a local shop to thread the pipes for me. This would not have been the case for a pipefitter or someone working in a machine shop.
By understanding where the “good enough and valuable” line is, project and organization leaders are in a better position to evaluate the benefits of incremental improvements to core products and services that don’t break the bank or burn out the people tasked with delivering the goods. Of course, determining what is “good enough” depends on the end goal. When sending a rover to Mars, “good enough” had better be as near to perfection as possible. Threading a dozen pipes for bar clamps used in a wood shop can be completed quite successfully with low quality tools that are “good enough” to get the job done.
I’ve been giving some more thought to the idea of “good enough” as one of the criteria for defining minimum viable/valuable products. What’s different is that I’ve started to use the phrase “good enough for now.” Reason being, the phrase “good enough” seems to imply an end state. “Good enough” is an outcome. If it is early in a project, people generally have a problem with that. They have some version of an end state that is a significant mismatch with the “good enough” product today. The idea of settling for “good enough” at this point makes it difficult for them to know when to stop work on an interim phase and collect feedback.
“Good enough for now” implies there is more work to be done and the product isn’t in some sort of finished state that they’ll have to settle for. “Good enough for now” is a transitory state in the process. I’m finding that I can more easily gain agreement that a story is finished and get people to move forward to the next “good enough for now” by including the time qualifier.
¹ Volume 1 of Principia Mathematica by Alfred North Whitehead and Bertrand Russell (Cambridge University Press, page 379). The proof was actually not completed until Volume 2. (This article cross-posted at LinkedIn.)