Using Origami to Explain What I Do

This is interesting: “Getting Crafty: Why Coders Should Try Quilting and Origami”

I’ve never done any quilting (I’ve a sister who’s excellent at that), but I’ve done origami since forever. In fact, origami was a way to explain to other people what I did for a living. I’d start with a 6” x 6” piece of paper that was white on one side and black on the other (digital!). Then I’d fold a boat, or a frog, or a crane. That’s what software developers work with – they begin with ones and zeros, bits and bytes. From there, they can build some amazing things.

Not sure the explanation always worked, but at least it was entertaining and instilled a small measure of appreciation for what I and other software developers did for a living. Certainly better than describing the challenges of buffer overflow issues or SQL injection countermeasures. That approach gets one uninvited from parties.


Image by Anthony Jarrin from Pixabay

Farming as a Metaphor for Workplace Culture

Michael Wade has an interesting post considering how non-agricultural workplaces can resemble farms.

Workplace cultures are in large part a reflection of the underlying metaphor driving the organization, whether by design or chance. When much younger, I used and advocated the “business is war” metaphor. I have been much more successful (and much less stressed) organizing around a farming metaphor. In truth, there can be times of battle on the farm that, as in the war metaphor, require the immediate and drastic mobilization of resources: the barn is on fire, the locusts are coming, a tornado approaches. Life on the farm is more than endless summer days spent blissfully feeding magic ponies and dancing under rainbows. One must be prepared to “take up arms” and employ non-farm tools and tactics in order to deal with any short-term crisis that may occur.


Photo by Lucy Chian on Unsplash

Plans, Strategies, and Uncertainty

“It is difficult to make predictions, especially about the future,” is an insight attributed to about two dozen sources. Actually, any one of us could have said this, for it’s certain we’ve all had an intuitive feel for the truth revealed by these words. This is undoubtedly why we work so hard to tease out the details of what the future may hold by making plans, laying out strategies, and running scenarios based on our version of the best data available today. Each of these tools is employed in an effort to reduce uncertainty about the future. But which tool to use? Therein lies the rub. The answer, rather unhelpfully, is “It depends.”

  • How complex is the problem space?
  • How well is the problem space understood?
  • What is the availability of resources (time, money, people, materials, etc.)?
  • What is the skill level and experience depth of those tasked with developing a plan or a strategy?

Stated simply, creating a plan and sticking to it is ideal for simple, well-understood, small-scale problem spaces where one or more resources are limited. Plans also work when the individual or team tasked with finding a way through the problem space is inexperienced or lacks skills the problem space requires. As complexity and uncertainty increase, the way forward benefits from a more flexible approach. This is where it’s helpful to have a strategy, something that is more than a single course of action. Rather, a strategy is a collection of possible paths, each with its own set of plans ready to be implemented if the need arises. Working a strategy requires a higher order of skills. It requires systems thinking that has been tested and vetted for competence rather than just a shallow claim of being a “systems thinker.”
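To make the plan/strategy distinction concrete, here’s a minimal sketch (in Python, with names of my own invention) of a strategy as a collection of contingent plans, each ready to be put into motion when its trigger condition is met:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Plan:
    """A single, concrete course of action."""
    name: str
    steps: List[str]

@dataclass
class ContingentPlan:
    """A plan plus the condition under which it should be activated."""
    trigger: Callable[[Dict], bool]  # evaluated against current conditions
    plan: Plan

class Strategy:
    """More than a single course of action: a collection of plans,
    each ready to be implemented if the need arises."""
    def __init__(self, options: List[ContingentPlan]):
        self.options = options

    def select(self, conditions: Dict) -> List[Plan]:
        # Return every plan whose trigger matches today's conditions.
        return [o.plan for o in self.options if o.trigger(conditions)]

# Hypothetical usage: choose a path as conditions change.
strategy = Strategy([
    ContingentPlan(lambda c: c.get("demand") == "high",
                   Plan("scale-up", ["hire", "add capacity"])),
    ContingentPlan(lambda c: c.get("demand") == "low",
                   Plan("consolidate", ["cut scope", "retrench"])),
])
print([p.name for p in strategy.select({"demand": "high"})])  # -> ['scale-up']
```

The point of the sketch: a plan is one branch, a strategy is the whole tree plus the rules for choosing a branch when reality announces itself.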


Image by Maddy Mazur from Pixabay

Transparency, Source Code Quality, and Metrics


In “Hello World: Being Human in the Age of Algorithms,” Hannah Fry relates this story:

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut. Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 percent, leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.

Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help.

[The ACLU] began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared. Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless. Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.

My first thoughts were, “How bad a spreadsheet hack do you gotta be to have your work declared unconstitutional? And just how many hacks does it take to build an unconstitutional spreadsheet?”

To be fair, math is hard. Government is complex. And I’m comfortable with the assumption that everyone who had a hand in building this spreadsheet had good intentions. Venturing a guess, the breakdown happened at the manager/politician/lawyer level.

It is probable that the complexity of the task quickly overtook the abilities of the spreadsheet author(s) and the capabilities of the tool. Eventually, no single person understood how the whole thing worked. Consequently, making a change in one place affected how the spreadsheet worked in n other places and no one was capable of regression testing the beast. But the manager/politician/lawyer types knew what to do: Hide behind the “trade secret” smoke.
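We’ll likely never see what Idaho’s formulas actually looked like, but the failure mode is familiar: logic buried in spreadsheet cells can’t be regression tested. A minimal sketch, with an entirely hypothetical benefit formula, of what pulling that logic out into testable code might look like:

```python
# Hypothetical: the kind of calculation that, buried in a spreadsheet
# cell, can't be regression tested -- but as a function, can be.
def benefit_amount(base: float, need_score: float, cap: float) -> float:
    """Scale a base benefit by an assessed need score, up to a cap."""
    if not 0.0 <= need_score <= 1.0:
        raise ValueError("need_score must be between 0 and 1")
    return min(base * need_score, cap)

def test_benefit_amount():
    assert benefit_amount(1000.0, 0.5, 800.0) == 500.0
    assert benefit_amount(1000.0, 1.0, 800.0) == 800.0  # cap applies
    try:
        benefit_amount(1000.0, 1.5, 800.0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # garbage inputs are rejected, not silently computed

test_benefit_amount()
print("all regression checks pass")
```

Nothing exotic here, and that’s the point: once the formula is a named, testable unit, a change in one place can be checked against every known case instead of rippling invisibly through n other cells.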

There are many lessons from this story. Plenty of points of failure. What I’m interested in writing about is the importance of transparency and how a good set of performance metrics can help in maintaining transparency.

The externally facing opacity in this story is readily apparent. What we don’t (and probably never will) see is the lack of transparency internal to the Idaho Department of Health and Welfare and to whoever designed and built the spreadsheet tool. I’d bet a round of drinks that neither has heard of Agile, much less employed its principles and practices. These by themselves – when actually practiced long term – go a long way toward establishing a culture of transparency. This is the key: long-term practice. A period of time is needed to change behaviors, mindsets, attitudes, beliefs, and, when necessary, personnel. Even over the long term, implementing an Agile methodology isn’t improvisational theater. A strategy and a way to measure progress are needed.

Which gets me to metrics.

Selecting metrics and tuning them over time is critical to measuring team performance and developing improvement plans. Metrics that inform meaningful actions are the goal. Leave the vanity metrics that verify what managers want to hear or already “know” to the competition.

I’ve encountered my share of overly complex ways to measure the performance of individuals and teams. Often the metrics taken from machine-like task work (for example, assembly line work) are applied to creative or intellectual/knowledge tasks. This type of re-purposing results in, for example, counting lines of code or the number of source code check-ins as an indicator of software developer productivity. It never ends well.

When working to define a set of metrics to track an individual’s or team’s performance, it is more effective to begin by asking several questions (a sketch of how the answers might be recorded follows the list).

  • What problems are you trying to solve?
  • What questions will your chosen metrics answer?
  • What questions will your chosen metrics not answer?
  • How, specifically, will you know you can trust your metrics? How will you know when they are right and how will you know when they are wrong?
  • How well do your metrics complement each other? That is, by combining them do you end up with a much better picture of individual or team performance than you do by considering individual metrics?
  • Do your metrics support any planned actions for improvement? Are you collecting actionable metrics or vanity metrics?
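One lightweight way to hold candidate metrics up against these questions is to record, for each one, the question it answers and the action it would inform. A minimal sketch (the structure and names are mine, not any standard):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Metric:
    name: str
    question_answered: str           # what this metric tells you
    action_when_bad: Optional[str]   # what you'll do if it trends the wrong way

def vanity_metrics(metrics: List[Metric]) -> List[str]:
    """A metric that informs no action is a vanity metric by definition."""
    return [m.name for m in metrics if not m.action_when_bad]

candidates = [
    Metric("escaped defects",
           "Is quality improving release over release?",
           "add regression tests to the leakiest component"),
    Metric("lines of code",
           "How much code was typed?",
           None),  # no action follows from this number
]
print(vanity_metrics(candidates))  # -> ['lines of code']
```

The exercise is the value, not the code: if the `action_when_bad` column is empty, you’ve found a vanity metric before it found its way onto a dashboard.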

Finally, it is important to understand the limits of performance metrics. Displaying velocity charts that have fractions of story points implies an accuracy that simply isn’t there. Significantly adjusting project timelines based on the first three sprints’ worth of velocity data can have adverse secondary effects on the project.
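To see why three sprints of velocity data make a shaky basis for re-planning, consider a back-of-the-envelope spread calculation (the numbers are illustrative, not a forecast):

```python
import statistics

def velocity_range(velocities):
    """Report mean velocity plus a rough +/- one-standard-deviation band."""
    mean = statistics.mean(velocities)
    sd = statistics.stdev(velocities)  # sample standard deviation
    return round(mean, 1), round(mean - sd, 1), round(mean + sd, 1)

# Three sprints: the band is enormous relative to the mean.
print(velocity_range([18, 31, 22]))
# Ten sprints: the picture begins to stabilize.
print(velocity_range([18, 31, 22, 25, 24, 27, 23, 26, 24, 25]))
```

With three data points the plausible band swings by a quarter of the mean or more; re-baselining a year-long timeline on that is reading tea leaves to one decimal place.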

There is no perfect set of metrics, no divine set of measures that match an impossible standard of perfect objectivity and fairness. The best possible set of metrics is one that supports useful decisions rather than simply instructs managers where to apply the stick. They should help show the way to performance improvement rather than simply report results.

I work to have 3-5 metrics, depending on the individual, the team, and the project. Fewer than 3 and the picture starts to look rather flat. More than 5 and the task of performance monitoring can become overly complicated and cumbersome. Keep it lean and manageable. That way, it’s easier to tell when things aren’t working and your metrics are much less likely to violate your team’s constitutional rights.

How does Agile help with long-term planning?

I’m often involved in discussions about Agile that question its efficacy in one way or another. This is, in my view, a very good thing and I highly encourage this line of inquiry. It’s important to challenge the assumptions behind Agile so as to counteract any complacency or expectation that it is a panacea for project management ills. Even so, with apologies to Winston Churchill, Agile is the worst form of project management…except for all the others.

Challenges like this also serve to instill a strong understanding of what an Agile mindset is, how it’s distinct from Agile frameworks, tools, and practices, and where it can best be applied. I would be the first to admit that there are projects for which a traditional waterfall approach is best. (For example, maintenance projects for nuclear power reactors. From experience, I can say traditional waterfall project management is clearly the superior approach in this context.)

A frequent challenge is the idea that, with Agile, it is difficult to do any long-term planning.

Consider the notion of vanity vs actionable metrics. In many respects, large or long-term plans represent a vanity leading metric. The more detail added to a plan, the more people tend to believe and behave as if the plan is an accurate reflection of what will actually happen. “Surprised” doesn’t adequately describe the reaction when reality informs managers and leaders of the hard truth. I worked on a multi-million-dollar project many years ago for a Fortune 500 company that ended up being canceled. Years of very hard work by hundreds of people went down the drain because projected revenues, based on a software product design over seven years old, were never going to materialize. Customers no longer wanted or needed what the product was offering. Our “solution” no longer had a problem to solve.

Agile – particularly more recent thinking around the values and principles in the Manifesto – acknowledges the cognitive biases in play with long-term plans and attempts to put practices in place that compensate for the risks they introduce into project management. One such bias is reflected in the planning fallacy – the further out the planning window extends into the future, the less accurate the plan. An iterative approach to solving problems (some of which just happen to use software) challenges development teams on up through managers and company leaders to reassess their direction and make much smaller course corrections to accommodate what’s being learned. As you can well imagine, we may have worked out how to do this in the highly controlled and somewhat predictable domain of software development; however, the critical areas for growth and Agile applicability are at the management and leadership levels of the business.

Another important aspect of the Agile mindset is reflected in the Cone of Uncertainty. It is a deliberate, intentional recognition of the role of uncertainty in project management. Yes, the goal is to squeeze out as much uncertainty (and therefore risk) as possible, but there are limits. With a traditional project management plan, it may look like everything has been accounted for, but the rest of the world isn’t obligated to follow the plan laid out by a team or a company. In essence, an Agile mindset says, “Lift your gaze up off of the plan (the map) and look around for better, newer, more accurate information (the territory). Then, update the plan and adjust course accordingly.” In Agile-speak, this is what is behind phrases like “delivery dates emerge.”
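For the unfamiliar, the Cone of Uncertainty is usually illustrated with estimate multipliers along these lines (approximated from Boehm’s work as popularized by Steve McConnell; treat the numbers as illustrative, not gospel):

```python
# Approximate cone-of-uncertainty multipliers (after Boehm/McConnell).
CONE = [
    ("initial concept",             0.25, 4.0),
    ("approved product definition", 0.5,  2.0),
    ("requirements complete",       0.67, 1.5),
    ("UI design complete",          0.8,  1.25),
    ("detailed design complete",    0.9,  1.1),
]

def estimate_range(point_estimate: float) -> None:
    """Show the plausible low/high range of one estimate at each phase."""
    for phase, lo, hi in CONE:
        print(f"{phase:>28}: {point_estimate * lo:6.1f} .. {point_estimate * hi:6.1f}")

estimate_range(100.0)  # e.g., a "100 person-day" estimate at each stage
```

At the initial-concept stage, a “100 person-day” estimate plausibly means anything from 25 to 400. The cone narrows only as real knowledge is earned, which is exactly the case for iterating rather than carving year-two dates in stone on day one.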

Final thought: You’ll probably hear me say many times that nothing in the Agile Manifesto can be taken in isolation. It’s a working system, and some parts of it are more relevant than others depending on the project and the timing. So consider what I’ve presented here in concert with the Agile practices of developing good product visions and sprint goals. Product visions and sprint goals keep the project moving in the desired direction without holding it on iron rails that cannot be changed without a great deal of effort, if at all.

So, to answer the question in the post title, Agile helps with long-term planning by first recognizing the risks inherent in such plans and then implementing process changes that mitigate or eliminate those risks. Unpacking that sentence would consist of listing all the risks inherent in long-term planning along with the mechanics behind, and reasons why, Scrum, XP, SAFe, LeSS, etc., etc., etc. have been developed.


Image by Lorri Lang from Pixabay

The Limits of Planning Poker

As an exercise, planning poker can be quite useful in instances where no prior method or process existed for estimating levels of effort. Problems arise when organizations don’t modify the process to suit the project, the composition of the team, or the organization.

The most common team composition for these types of sizing efforts has involved technical areas – developers and UX designers – with less influence from strategists, instructional designers, quality assurance, and content developers. With a high degree of functional overlap, consensus on an estimated level of effort is easier to achieve.

As the estimating team begins to include more functional groups, the overlap decreases. This tends to increase the frequency of back-and-forth between functional groups pressing for a different size (usually larger) based on their domain of expertise. This is good for group awareness of overall project scope; however, it can extend the time needed for consensus, as individuals may feel the need to press for a larger size so as not to paint themselves into a commitment corner.

Additionally, when a more diverse set of functional groups is included in the estimation exercise, it becomes important to capture the size votes from the individual functional domains while driving the overall exercise based on the group consensus. Doing so means the organization can collect a more granular set of data useful for future sizing estimates by more accurately matching and comparing, for example, the technical vs. support material vs. media development efforts between projects. It may also reduce the urge some participants feel to pad estimates against doubt and uncertainty, since they know their domain’s view will be captured somewhere.

Finally, when communicating estimates to clients, or after the project has moved into active development, product owners and project managers can better unpack why an estimate came out the size it did. While the overall project (or a component of the project) may have been given a score of 95 on a scale of 100, for example, a manager can look back on the vote and see that the development effort dominated the result whereas content editors may have voted a size of 40. This might also influence how managers negotiate timelines for (internal and external) resource requirements.
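A minimal sketch (a hypothetical structure, not a tool I’m endorsing) of capturing per-discipline votes alongside the consensus, so a breakdown like the 95-overall/40-content example above stays recoverable:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SizingRecord:
    item: str
    consensus: int                        # the size the exercise proceeds on
    domain_votes: Dict[str, int] = field(default_factory=dict)

    def dominant_domain(self) -> str:
        """Which functional group drove the size upward?"""
        return max(self.domain_votes, key=self.domain_votes.get)

record = SizingRecord(
    item="Course module 7",
    consensus=95,
    domain_votes={"development": 95, "content": 40, "media": 60},
)
print(record.dominant_domain())  # -> 'development'
```

The consensus number still drives the sprint; the per-domain votes are simply kept as data for future comparisons and timeline negotiations.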


Photo by Aditya Chinchure on Unsplash

Collaboration vs Clobber-ation – Redux

A reader took me to task for “not being a team player” in my example of walking away from an opportunity to co-develop a training program with a difficult Agile coach. It was easy to set this criticism aside as the person offering it was in no position to be familiar with the context or full story. Nonetheless, the comment gave me pause to consider more deeply the rationale behind my decision. What experiential factors did I leverage when coming to this seemingly abrupt decision?

I can think of five context characteristics to consider when attempting to collaborate in an environment charged with conflict.

  1. Is the disagreement over the details of the work to be done? My peer and I didn’t agree on whether it was important or useful to include information on basic story sizing as part of the story splitting presentation. I wanted to include this information; my peer did not.
  2. Is there a disagreement over how the work is to be done? I wanted to preface the story splitting section with a story sizing section whereas my peer was intent on eviscerating the story sizing section to such an extent as to make it meaningless.
  3. Is there any type of struggle around status or who “should” be in charge? My peer demonstrated unambiguous behavior that she was “The Coach” for the company and that anything that may be presented to employees should be an expression of her authorship. When she instructed me to send my deck of slides to her for “revision” and I refused, she visibly bristled. By this point, I wasn’t about to release my copyrighted material into her possession.
  4. Are there corporate politics that promote – intentionally or unintentionally – silos and turf protection? My client’s organization could be held forth as a textbook example of Conway’s Law. The product reflected an uncounted number of incomplete efforts and failed attempts at unifying the underlying architecture. The Agile Coach’s behavior was just one more example of someone in the organization working to put their stamp of value on the ever-growing edifice of corporate blobness.
  5. Is there a conflict of personalities or communication styles? Again, this was true in this case. I wanted to co-create whereas my peer wanted to commandeer and direct. I wanted to present, she wanted to interrupt.

No work environment is free of these characteristics, and it may be they are all present in some degree or another. I expect these characteristics to be in place no matter where I work. However, in this case, it was clear to me we were not in alignment on any of these characteristics, and each of them was present at very high levels. Sorting this out wasn’t worth my time at just about any price. Certainly not at the price I was being paid. Walking away wasn’t going to burn any bridges, as no bridges had been built.


Image by Dirk81 from Pixabay

Accountability as a Corporate Value

My experience, and observation with clients, is that accountability doesn’t work particularly well as a corporate value. The principal reason is that it is an attribute of accusation. If I were to sit you down and open our conversation with “I need to talk to you about something you’re accountable for,” would your internal response be positive or negative? Similarly, if you were to observe a person of higher status on the corporate ladder clearly engaged in a behavior that was contrary to the interests of the business, but not illegal, how likely are you to confront them directly and hold them accountable for the transgression? In many cases, that’s likely to be a career-limiting move.

There is a reason no one gives awards for accountability. Human nature is such that most people don’t want to be held accountable. It carries the implication of shouldering the blame for something when it goes wrong. Credit is what we get when things go right. People do, however, want others to be held accountable. It’s a badge worn by scapegoats and fall guys. Consequently, accountability as a corporate value tends to elicit blame behavior and, in several extreme cases I’ve observed, outright vindictiveness. The feet of others are held to the accountability fire with impunity in the name of upholding the enshrined corporate value.

Another limitation of accountability as a corporate value is that it implies a finality to prior events and a reckoning of behaviors that somehow need to balance. What’s done is done. Time now to visit the bottom line, determine winners and losers, good and bad. Human performance within any business isn’t so easily measured. And this is certainly no way to inspire improvement.

So overall, then, a corporate value of accountability is a negative value, like the Sword of Damocles, something to make sure never hangs over your own head.

Yet, in virtually every case, I can recognize the positive intention behind accountability as a corporate value. What I think most organizations are going after is more in line with the ability to recognize when something could have been done better. To that end, a value of “response ability” would serve better: the complete package of being able to recognize a failure, learn from the experience, and respond in a way that builds toward success. On the occasions I’ve observed individuals behaving in this manner repeatedly and consistently, the idea of “accountability” is near meaningless. The inevitable successes have as their foundation all the previous failures. That’s how the math of superior human performance is calculated.


Image by Chris Pastrick from Pixabay