Using Origami to Explain What I Do

This is interesting: Getting Crafty: Why Coders Should Try Quilting and Origami

I’ve never done any quilting (I’ve a sister who’s excellent at that), but I’ve done origami since forever. In fact, origami was a way to explain to other people what I did for a living. I’d start with a 6” x 6” piece of paper that was white on one side and black on the other (digital!). Then I’d fold a boat, or a frog, or a crane. That’s what software developers work with – they begin with ones and zeros, bits and bytes. From there, they can build some amazing things.

Not sure the explanation always worked, but at least it was entertaining and instilled a small measure of appreciation for what I and other software developers did for a living. Certainly better than describing the challenges of buffer overflow issues or SQL injection countermeasures. That approach gets one uninvited from parties.


Image by Anthony Jarrin from Pixabay

Farming as a Metaphor for Workplace Culture

Michael Wade has an interesting post considering how non-agricultural workplaces can resemble farms.

Workplace cultures are in large part a reflection of the underlying metaphor driving the organization, whether by design or by chance. When much younger, I used and advocated the “business is war” metaphor. I have been much more successful (and much less stressed) organizing around a farming metaphor. In truth, there can be times of battle on the farm that, as in the war metaphor, require the immediate and drastic mobilization of resources: the barn is on fire, the locusts are coming, a tornado approaches. Life on the farm is more than endless summer days spent blissfully feeding magic ponies and dancing under rainbows. One must be prepared to “take up arms” and employ non-farm tools and tactics in order to deal with any short-term crisis that may occur.


Photo by Lucy Chian on Unsplash

Plans, Strategies, and Uncertainty

“It is difficult to make predictions, especially about the future,” is an insight attributed to about two dozen sources. Actually, any one of us could have said this, for it’s certain we’ve all had an intuitive feel for the truth revealed by these words. This is undoubtedly why we work so hard to tease out the details of what the future may hold by making plans, laying out strategies, and running scenarios based on our version of the best data available today. Each of these tools is employed in an effort to reduce uncertainty about the future. But which tool to use? Therein lies the rub. The answer, rather unhelpfully, is “It depends.”

  • How complex is the problem space?
  • How well is the problem space understood?
  • What is the availability of resources (time, money, people, materials, etc.)?
  • What is the skill level and experience depth of those tasked with developing a plan or a strategy?

Stated simply, creating a plan and sticking to it is ideal for simple, well-understood, small-scale problem spaces where one or more resources are limited. Plans also work if the individual or team tasked with finding a way through the problem space is inexperienced or lacks skills required by the problem space. As complexity and uncertainty increase, the way forward benefits from a more flexible approach. This is where it’s helpful to have a strategy, something that is more than a single course of action. Rather, a strategy is a collection of possible paths, each with its own set of plans ready to be implemented if the need arises. Working a strategy requires a higher order of skills. It requires systems thinking that has been tested and vetted for competence rather than just a shallow claim of being a “systems thinker.”
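
To make the plan/strategy distinction concrete, here is a minimal Python sketch. The class and field names are my own invention, purely for illustration: a plan is a single committed course of action, while a strategy holds several candidate paths, each with its plan ready, and re-selects among them as conditions change.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    """A single, committed course of action."""
    name: str
    steps: list[str]

@dataclass
class Strategy:
    """A collection of possible paths, each with its own plan ready to go."""
    goal: str
    paths: dict[str, Plan] = field(default_factory=dict)

    def choose(self, conditions: str) -> Plan:
        # Re-select the active path as new information arrives; the plans
        # themselves stay stable, but which one is active can change.
        return self.paths.get(conditions, self.paths["default"])

strategy = Strategy(goal="launch product")
strategy.paths["default"] = Plan("organic growth", ["ship MVP", "iterate on feedback"])
strategy.paths["competitor enters"] = Plan("differentiate", ["accelerate roadmap", "reprice"])

print(strategy.choose("competitor enters").name)  # -> differentiate
```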


Image by Maddy Mazur from Pixabay

Transparency, Source Code Quality, and Metrics


In “Hello World: Being Human in the Age of Algorithms,” Hannah Fry relates this story:

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut. Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 percent, leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.

Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help.

[The ACLU] began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared. Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless. Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.

My first thoughts were, “How bad a spreadsheet hack do you gotta be to have your work be declared unconstitutional? And just how many hacks does it take to build an unconstitutional spreadsheet?”

To be fair, math is hard. Government is complex. And I’m comfortable with the assumption that everyone who had a hand in building this spreadsheet had good intentions. Venturing a guess, the breakdown happened at the manager/politician/lawyer level.

It is probable that the complexity of the task quickly overtook the abilities of the spreadsheet author(s) and the capabilities of the tool. Eventually, no single person understood how the whole thing worked. Consequently, making a change in one place affected how the spreadsheet worked in n other places and no one was capable of regression testing the beast. But the manager/politician/lawyer types knew what to do: Hide behind the “trade secret” smoke.

There are many lessons from this story. Plenty of points of failure. What I’m interested in writing about is the importance of transparency and how a good set of performance metrics can help in maintaining transparency.

The externally facing opacity in this story is readily apparent. What we don’t (and probably never will) see is the lack of transparency internal to the Idaho Department of Health and Welfare and whoever designed and built the spreadsheet tool. I’d bet a round of drinks that neither has heard of Agile, much less employed its principles and practices. These by themselves – when actually practiced long term – go a long way toward establishing a culture of transparency. This is the key. Long-term practice. A period of time is needed to change behaviors, mindsets, attitudes, beliefs, and, when necessary, personnel. Even over the long term, implementing an Agile methodology isn’t improvisational theater. A strategy and a way to measure progress are needed.

Which gets me to metrics.

Selecting metrics and tuning them over time is critical to measuring team performance and developing improvement plans. Metrics that inform meaningful actions are the goal. Leave the vanity metrics that verify what managers want to hear or already “know” to the competition.

I’ve encountered my share of overly complex ways to measure the performance of individuals and teams. Often the metrics taken from machine-like task work (for example, assembly line work) are applied to creative or intellectual/knowledge tasks. This type of re-purposing results in, for example, counting lines of code or the number of source code check-ins as an indicator of software developer productivity. It never ends well.

When working to define a set of metrics to track an individual’s or a team’s performance, it is more effective to begin by asking several questions (a sketch of how the answers might be captured follows the list).

  • What problems are you trying to solve?
  • What questions will your chosen metrics answer?
  • What questions will your chosen metrics not answer?
  • How, specifically, will you know you can trust your metrics? How will you know when they are right and how will you know when they are wrong?
  • How well do your metrics complement each other? That is, by combining them do you end up with a much better picture of individual or team performance than you do by considering individual metrics?
  • Do your metrics support any planned actions for improvement? Are you collecting actionable metrics or vanity metrics?
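
One lightweight way to put this checklist to work is to refuse to register a metric until every question has an answer. A minimal Python sketch, assuming a homegrown Metric record (the class and field names here are hypothetical, not from any tool):

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    problem_addressed: str      # the problem this metric helps solve
    answers: list[str]          # questions the metric can answer
    does_not_answer: list[str]  # known blind spots
    trust_check: str            # how we verify the numbers are right
    planned_action: str         # the improvement action it informs

    def is_actionable(self) -> bool:
        # A vanity metric typically has no trust check and no planned action.
        return bool(self.trust_check and self.planned_action)

cycle_time = Metric(
    name="cycle time",
    problem_addressed="stories lingering in progress",
    answers=["How long does work take once started?"],
    does_not_answer=["Why is it slow?", "Is the work valuable?"],
    trust_check="spot-check ten tickets per sprint against board history",
    planned_action="split stories that exceed the 85th percentile",
)
assert cycle_time.is_actionable()
```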

Finally, it is important to understand the limits of performance metrics. Displaying velocity charts that have fractions of story points implies an accuracy that simply isn’t there. Significantly adjusting project timelines based on the first three sprints worth of velocity data can have adverse secondary effects on the project.

There is no perfect set of metrics, no divine set of measures that match an impossible standard of perfect objectivity and fairness. The best possible set of metrics is one that supports useful decisions rather than simply instructs managers where to apply the stick. They should help show the way to performance improvement rather than simply report results.

I work to have 3-5 metrics, depending on the individual, the team, and the project. Fewer than 3 and the picture starts to look rather flat. More than 5 and the task of performance monitoring can become overly complicated and cumbersome. Keep it lean and manageable. That way, it’s easier to tell when things aren’t working, and your metrics are much less likely to violate your team’s constitutional rights.

How does Agile help with long-term planning?

I’m often involved in discussions about Agile that question its efficacy in one way or another. This is, in my view, a very good thing and I highly encourage this line of inquiry. It’s important to challenge the assumptions behind Agile so as to counteract any complacency or expectation that it is a panacea for project management ills. Even so, with apologies to Winston Churchill, Agile is the worst form of project management…except for all the others.

Challenges like this also serve to instill a strong understanding of what an Agile mindset is, how it’s distinct from Agile frameworks, tools, and practices, and where it can best be applied. I would be the first to admit that there are projects for which a traditional waterfall approach is best. (For example, maintenance projects for nuclear power reactors. From experience, I can say traditional waterfall project management is clearly the superior approach in this context.)

A frequent challenge is the idea that with Agile it is difficult to do any long-term planning.

Consider the notion of vanity vs. actionable metrics. In many respects, large or long-term plans represent a vanity metric: the more detail added to a plan, the more people tend to believe and behave as if the plan is an accurate reflection of what will actually happen. “Surprised” doesn’t adequately describe the reaction when reality informs managers and leaders of the hard truth. I worked on a multi-million-dollar project many years ago for a Fortune 500 company that ended up being canceled. Years of very hard work by hundreds of people down the drain because projected revenues, based on a software product design over seven years old, were never going to materialize. Customers no longer wanted or needed what the product was offering. Our “solution” no longer had a problem to solve.

Agile – particularly more recent thinking around the values and principles in the Manifesto – acknowledges the cognitive biases in play with long-term plans and attempts to put practices in place that compensate for the risks they introduce into project management. One such bias is reflected in the planning fallacy – the further out the planning window extends into the future, the less accurate the plan. An iterative approach to solving problems (some of which just happen to use software) challenges development teams on up through managers and company leaders to reassess their direction and make much smaller course corrections to accommodate what’s being learned. As you can well imagine, we may have worked out how to do this in the highly controlled and somewhat predictable domain of software development, however, the critical areas for growth and Agile applicability are at the management and leadership levels of the business.

Another important aspect of the Agile mindset is reflected in the Cone of Uncertainty. It is a deliberate, intentional recognition of the role of uncertainty in project management. Yes, the goal is to squeeze out as much uncertainty (and therefore risk) as possible, but there are limits. With a traditional project management plan, it may look like everything has been accounted for, but the rest of the world isn’t obligated to follow the plan laid out by a team or a company. In essence, an Agile mindset says, “Lift your gaze up off of the plan (the map) and look around for better, newer, more accurate information (the territory). Then, update the plan and adjust course accordingly.” In Agile-speak, this is what is behind phrases like “delivery dates emerge.”
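
To give the cone a numeric feel, here is a small Python sketch using the range multipliers commonly cited from Boehm and McConnell. The exact figures are illustrative; the point is how the honest range narrows as decisions get made.

```python
# Commonly cited Cone of Uncertainty multipliers (low, high) per milestone.
CONE = {
    "initial concept":             (0.25, 4.00),
    "approved product definition": (0.50, 2.00),
    "requirements complete":       (0.67, 1.50),
    "UI design complete":          (0.80, 1.25),
    "detailed design complete":    (0.90, 1.10),
}

def estimate_range(nominal_weeks: float, milestone: str) -> tuple[float, float]:
    """Return the honest (low, high) estimate range at a given milestone."""
    low, high = CONE[milestone]
    return nominal_weeks * low, nominal_weeks * high

# A 20-week nominal estimate at inception is really "5 to 80 weeks";
# reporting it as "20 weeks" hides that spread.
print(estimate_range(20, "initial concept"))        # (5.0, 80.0)
print(estimate_range(20, "requirements complete"))  # (13.4, 30.0)
```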

Final thought: You’ll probably hear me say many times that nothing in the Agile Manifesto can be taken in isolation. It’s a working system, and some parts of it are more relevant than others depending on the project and the timing. So consider what I’ve presented here in concert with the Agile practices of developing good product visions and sprint goals. Product vision and sprint goals keep the project moving in the desired direction without holding it on an iron-rails track that cannot be changed without a great deal of effort, if at all.

So, to answer the question in the post title, Agile helps with long-term planning by first recognizing the risks inherent in such plans and then implementing process changes that mitigate or eliminate those risks. Unpacking that sentence would consist of listing all the risks inherent in long-term planning and the mechanics behind, and reasons why, Scrum, XP, SAFe, LeSS, etc., etc., etc. have been developed.


Image by Lorri Lang from Pixabay

The Limits of Planning Poker

As an exercise, planning poker can be quite useful in instances where no prior method or process existed for estimating levels of effort. Problems arise when organizations don’t modify the process to suit the project, the composition of the team, or the organization.

The most common team composition for these types of sizing efforts has involved technical areas – developers and UX designers – with less influence from strategists, instructional designers, quality assurance, and content developers. With a high degree of functional overlap, consensus on an estimated level of effort is easier to achieve.

As the estimating team begins to include more functional groups, the overlap decreases. This tends to increase the frequency of back-and-forth between functional groups pressing for a different size (usually larger) based on their domain of expertise. This is good for group awareness of overall project scope; however, it can extend the time needed for consensus, as individuals may feel the need to press for a larger size so as not to paint themselves into a commitment corner.

Additionally, when a more diverse set of functional groups is included in the estimation exercise, it becomes important to capture the size votes from the individual functional domains while driving the overall exercise based on the group consensus. Doing so means the organization can collect a more granular set of data useful for future sizing estimates by more accurately matching and comparing, for example, the technical vs. support material vs. media development efforts between projects. This may also minimize the desire by some participants to press for estimates padded against doubt and uncertainty, knowing that their domain’s view will be captured somewhere.
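
A sketch of what capturing per-domain votes alongside the consensus might look like, in Python (the structure and domain names are hypothetical, not taken from any planning-poker tool):

```python
from statistics import median

# One planning-poker round: the consensus drives the official estimate,
# but each functional domain's vote is kept for later analysis.
votes = {
    "development":       95,
    "ux":                70,
    "content":           40,
    "quality_assurance": 60,
}

record = {
    "item": "Course module 7",
    "consensus": 95,  # whatever the group actually agreed on after discussion
    "by_domain": votes,
    "spread": max(votes.values()) - min(votes.values()),
    "median_vote": median(votes.values()),
}

# A large spread flags items where domains see very different work --
# exactly where future estimates benefit from the per-domain breakdown.
print(record["spread"], record["median_vote"])  # 55 65.0
```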

Finally, when communicating estimates to clients, or after the project has moved into active development, product owners and project managers can better unpack why a particular estimate ended up the size it did. While the overall project (or a component of the project) may have been given a score of 95 on a scale of 100, for example, a manager can look back on the vote and see that the development effort dominated the vote whereas content editors may have voted a size estimate of 40. This might also influence how managers negotiate timelines for (internal and external) resource requirements.


Photo by Aditya Chinchure on Unsplash

Collaboration vs Clobber-ation – Redux

A reader took me to task for “not being a team player” in my example of walking away from an opportunity to co-develop a training program with a difficult Agile coach. It was easy to set this criticism aside as the person offering it was in no position to be familiar with the context or full story. Nonetheless, the comment gave me pause to consider more deeply the rationale behind my decision. What experiential factors did I leverage when coming to this seemingly abrupt decision?

I can think of five context characteristics to consider when attempting to collaborate in an environment charged with conflict.

  1. Is the disagreement over the details of the work to be done? My peer and I didn’t have agreement on whether or not it was important or useful to include information on basic story sizing as part of the story splitting presentation. I wanted to include this information; my peer did not.
  2. Is there a disagreement over how the work is to be done? I wanted to preface the story splitting section with a story sizing section whereas my peer was intent on eviscerating the story sizing section to such an extent as to make it meaningless.
  3. Is there any type of struggle around status or who “should” be in charge? My peer demonstrated unambiguous behavior that she was “The Coach” for the company and that anything that may be presented to employees should be an expression of her authorship. When she instructed me to send my deck of slides to her for “revision” and I refused, she visibly bristled. By this point, I wasn’t about to release my copyrighted material into her possession.
  4. Are there corporate politics that promote – intentionally or unintentionally – silos and turf protection? My client’s organization could be held forth as a textbook example of Conway’s Law. The product reflected an uncounted number of incomplete efforts and failed attempts at unifying the underlying architecture. The Agile Coach’s behavior was just one more example of someone in the organization working to put their stamp of value on the ever-growing edifice of corporate blobness.
  5. Is there a conflict of personalities or communication styles? Again, this was true in this case. I wanted to co-create whereas my peer wanted to commandeer and direct. I wanted to present, she wanted to interrupt.

No work environment is free of these characteristics, and it may be they are all present in some degree or another. I expect these characteristics to be in place no matter where I work. However, in this case, it was clear to me we were not in alignment on any of these characteristics, and each of them was present at a very high level. Sorting this out wasn’t worth my time at just about any price. Certainly not at the price I was being paid. Walking away wasn’t going to burn any bridges as no bridges had been built.


Image by Dirk81 from Pixabay

Accountability as a Corporate Value

My experience, and observation with clients, is that accountability doesn’t work particularly well as a corporate value. The principal reason is that it is an attribute of accusation. If I were to sit you down and open our conversation with “I need to talk to you about something you’re accountable for,” would your internal response be positive or negative? Similarly, if you were to observe a person of higher status on the corporate ladder clearly engaged in a behavior that was contrary to the interests of the business, but not illegal, how likely are you to confront them directly and hold them accountable for the transgression? In many cases, that’s likely to be a career-limiting move.

There is a reason no one gives awards for accountability. Human nature is such that most people don’t want to be held accountable. It carries the implication of shouldering the blame for something when it goes wrong. Credit is what we get when things go right. People do, however, want others to be held accountable. It’s a badge worn by scapegoats and fall guys. Consequently, accountability as a corporate value tends to elicit blame behavior and, in several extreme cases I’ve observed, outright vindictiveness. The feet of others are held to the accountability fire with impunity in the name of upholding the enshrined corporate value.

Another limitation to accountability as a corporate value is that it implies a finality to prior events and a reckoning of behaviors that somehow need to balance. What’s done is done. Time now to visit the bottom line, determine winners and losers, good and bad. Human performance within any business isn’t so easily measured. And this is certainly no way to inspire improvement.

So overall, then, a corporate value of accountability is a negative value, like the Sword of Damocles, something to make sure never hangs over your own head.

Yet, in virtually every case, I can recognize the positive intention behind accountability as a corporate value. What I think most organizations are going after is more in line with the ability to recognize when something could have been done better. To that end, a value of “response ability” would serve better: the complete package of being able to recognize a failure, learn from the experience, and respond in a way that builds toward success. On the occasions I’ve observed individuals behaving in this manner repeatedly and consistently, the idea of “accountability” is near meaningless. The inevitable successes have as their foundation all the previous failures. That’s how the math of superior human performance is calculated.


Image by Chris Pastrick from Pixabay

The Changeability Decision Matrix

“Responding to change over following a plan” – The Agile Manifesto

That’s one of the four values of the Agile Manifesto. It’s also one of the values commonly plucked from the context of the three other values and twelve principles. Once isolated, it’s exaggerated and inflated into some form of “We can’t define scope before we start work! There’s too much discovery work to be done first! We don’t know what we don’t know! Scope (and requirements) are emergent!” That bends the intent of the Manifesto and disregards the context from which a single value has been extracted.

I don’t believe Agile practices ever meant for software development to be a free-for-all, a never ending saga of finding and implementing better and better ways to code something before a product can be released. Projects run like this never see the light of day, let alone a shelf to languish on waiting for a long since departed market opportunity.

What isn’t in the Agile Manifesto, but is implicit in the Agile methodologies I’ve worked with, is the notion of decision points. These are the points beyond which change, to a small or large degree, is not allowed. At least not for a while. Decision points bring stability to the development process, from which Agile teams can move forward with a stable set of assumptions. If subsequent discoveries inform the team that they need to revisit a decision, then they must do so. The key element is that the work subsequent to the decision is what generates the need to revisit the decision. It isn’t done arbitrarily, on a hunch, or with minimal information.

There are numerous decision points that exist within Scrum and SAFe, for example. Stories are decisions. “We need to create this thing.” Acceptance criteria, definitions of ready and done, sprint duration, feature and epic definitions, milestones, minimum viable/valuable products are also examples of decisions. Some of these can be quite changeable. Stories, for example, can be refined many times prior to and during sprint planning. The description, acceptance criteria, definition of done, and effort estimation can change many times before a story is committed to a sprint. And there’s the decision point. When the team agrees that a story can be brought into a sprint and they commit to completing it before the sprint is over, they have made a decision and the story shouldn’t change on its way to being completed by the team. (As noted previously, the work on the story may reveal a need to change something about the story – maybe even indicate that work on the story should stop – but that should be an edge case and not part of common practice.)
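
The decision point can be expressed as a simple state rule: a story is freely editable during refinement, then locked once committed to a sprint. A minimal, hypothetical Python sketch (not from any Agile tool):

```python
class Story:
    def __init__(self, description: str, estimate: int):
        self.description = description
        self.estimate = estimate
        self.committed = False  # the decision point

    def refine(self, description: str | None = None, estimate: int | None = None):
        if self.committed:
            # Post-commitment change should be the edge case, driven by what
            # the work itself reveals -- not routine editing.
            raise RuntimeError("Story is committed to a sprint; reopen the decision deliberately.")
        if description is not None:
            self.description = description
        if estimate is not None:
            self.estimate = estimate

    def commit_to_sprint(self):
        self.committed = True

story = Story("As a user, I can reset my password", estimate=3)
story.refine(estimate=5)    # fine during refinement
story.commit_to_sprint()    # the decision point
# story.refine(estimate=8)  # would now raise: the decision has been made
```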

To help teams understand these distinctions, I’ve developed a 2×2 matrix called the Changeability Decision Matrix. Its purpose is to help teams evaluate the effects of changing work in the queue. The horizontal axis goes from “Small Impact” to “Big Impact.” The vertical axis goes from “Few Changes” to “Many Changes.”

The two questions the team needs to ask when thinking about changing a decision they’ve made (acceptance criteria, story description, MVP, etc.) are as follows (a sketch of the matrix in code follows the list):

  • Will this change have a small or big impact? They may consider any number of variables: cost, time, productivity, effort, etc.
  • Will this change require a few or many changes (lines of code, documentation updates, other components that consume the code, budgets, release dates, etc.)?
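
Here is a minimal sketch of the matrix as code. The quadrant readings are my own gloss, purely for illustration; real teams would make these calls in conversation rather than in software:

```python
def changeability_quadrant(impact: str, changes: str) -> str:
    """Place a proposed change on the 2x2 Changeability Decision Matrix.

    impact:  "small" or "big"  -- cost, time, productivity, effort, ...
    changes: "few" or "many"   -- code, docs, consumers, budgets, dates, ...
    """
    quadrants = {
        ("small", "few"):  "A small decision is being revisited; proceed.",
        ("small", "many"): "Proceed, but budget time for the ripple of edits.",
        ("big", "few"):    "Pause: concentrated impact deserves a deliberate look.",
        ("big", "many"):   "A large decision is being undone; escalate the conversation.",
    }
    return quadrants[(impact, changes)]

print(changeability_quadrant("big", "many"))
```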

Where the proposed change resides on the grid may depend on where the team is on the project timeline. Consider the epic, feature, and story hierarchy: early in the project – during the design phase, for example – there may be little more than features in the backlog. As placeholders for ideas, they may be quite volatile as new marketing information enters the conversation or obvious technical issues become apparent. So changing an epic or a feature may have a relatively small impact on the project and involve few changes. Most probably there won’t be any code involved at this point.

As the project progresses and backlog refinement continues, epics and features will be broken up into large stories. More detail is added to the backlog, and more time and money have been invested in the design, so the epics and features are less changeable. If any changes are needed, it is probable that the impact of those changes and the number of things that need to change will be greater than they would have been during the design phase.

Eventually, as the project moves into high gear, the backlog will become populated with more and more small stories that can be easily estimated and planned into sprints and increments.

For the duration of the project, it’s likely most of the stories in the backlog can and should be responsive to multiple changes…right up to the point the decision is made to drop the story into a sprint.

The Changeability Decision Matrix is an easy way to evaluate whether the decision an Agile team is pondering undoing is small or large, by forcing the conversation around the consequences of making the change. If either of these two axes is not a good fit for your organization or for what you consider important, change them to something that makes more sense for your project.

Here is a representation of these phases on a hypothetical project timeline:


Photo by Linus Nylund on Unsplash