Av. Frederick W. Taylor 42
1000 Gotham City
Dear IT Manager,
Before you hire your next Agile coach to either kickstart or breathe some life into your Agile change initiative, take a step back and think about it. You might be surprised to hear this from me, but maybe that budget could be better spent elsewhere.
I'm not saying this out of some suicidal desire to kill the very market where I earn my living. Quite the opposite. I say this because I desperately want to improve it, and make sure it's both a market that motivates and challenges me, as well as one whose existence is based on actually improving organizations.
So before you hire your (next) Agile coach, think about the journey you are embarking on. Agile is not a quick fix for your delivery problems. These problems are a symptom of a much larger dysfunctionality in your organization. Any (real) Agile coach you hire will only be as effective as the breadth of the change initiative. If this initiative is coming solely from the IT department, and it has no support from your other delivery partners such as product management, sales, customer service, operations, or the project management office, then chances are the initiative will yield poor or limited results (when compared to its real potential).
So if you are going to hire an Agile coach, you should be ready to support them when they inevitably start to reach out to these other departments. This support should be strong yet honest, since there will likely be some resistance to the change, especially on the political side of things. Cross-departmental collaboration means ignoring the siloed hierarchy that got so many people their fancy job titles in the first place.
Also, the very fact that you are considering introducing Agile in your organization is most likely because you have experienced the pains caused by an organization driven by predictive planning approaches. Embarking on an Agile change initiative means going in the opposite direction of predictive planning in almost every sense. Here is where the resistance from the organization will really show its teeth, especially when Agile starts shining a bright light on all the waste clogging the delivery process.
Any Agile change initiative will eventually try to change the culture of the organization. It must. Unless it succeeds in doing this, it will ultimately fail. And changing organizational culture is by far the toughest thing to do in the business world. So if you want to hire an Agile coach, you must be open to change and eager to drive it internally. You should also be ready for some tough discussions.
You'll have to embrace failure (as long as it happens quickly) because it's the best time to learn and a necessary by-product of exploration. Because ultimately, it is about delivering value, by allowing your knowledge workers the freedom to focus on collaboratively identifying, prioritizing and solving your organization's toughest challenges.
Now, if what I described above sounds too ambitious, too frightening or just plain too difficult, then I think you should reconsider your Agile plans. You're not going to find the quick fixes you're looking for. Quick fixes are a specialty of the predictive planning guys, so you're better off spending your money on them.
Why am I telling you this?
Because if we're honest from the start about what an Agile change initiative entails, then I won't need to hear about yet another Agile coach stuck trying to help a company that desperately wants to put an Agile face on its waterfall heart. Trying to jam the square peg into the round hole. These cases are later recounted as "Agile failures", which is a disservice to the coaching market and an insult to the word "failure". Failure would be a valuable learning opportunity. But in order to fail, you first need to actually try to achieve something.
If, on the other hand, you think all this sounds like a liberating experience of discovery and challenging work, if you can see the real and wonderful benefits that result from it, then you're ready to drive this important change ahead. And in this case, indeed yes, please find yourself an experienced Agile coach to support you in... actually, forget about that. Just contact me directly instead. You sound exactly like the kind of person I would love to work with.
Let's get to work?
Relative estimation and story points are among the topics I most often see people struggling to grasp, whether in training sessions or at client sites. The main issue seems to be the belief that eventually, Story Points (SPs) need to be translated into Man Days (MDs) if you want to be able to do things like capacity planning, estimation and portfolio management. Because of this, some people have a hard time understanding the real reasons for using relative estimation. Even worse, their focus on the abstract concept of MDs prevents them from seeing the bigger picture and what really matters - value.
When I discuss this topic with clients, I always try to highlight that there are two distinct issues at play here:
- how relative estimation can be plugged into any existing MDs driven process
- how the focus on MDs clouds managers from seeing the real issue
Using relative estimation in an MD-driven organization
The MD currency is a constraint that Agile coaches typically cannot avoid when working with clients. Often, the organization's entire planning & budgeting structure is based around MDs, and changing that structure is not in the scope of the Agile change initiative (for now). Rather, managers want to know how they can use relative estimation within this MD structure.
After going through this exercise with most clients I've worked with, I've found the easiest way to explain how to do this is by writing 3 simple equations on the whiteboard:

1. estimation x factor = effort
2. MD estimate x overhead factor = MDs (waterfall)
3. SP estimate x velocity factor = MDs (Agile)
I explain that the first line (estimation x factor = effort) is the formula that everybody uses to obtain MDs, regardless of methodology or discipline.
The second line is how that formula works in the Waterfall world: a team estimates the requirements in MDs and the manager applies some conversion factor to account for the team's non-productive time and overhead, finally obtaining an MD figure they can use for budgeting and planning.
The third line is how you can obtain your MDs when using relative estimation. In that equation, the velocity factor = MDs per Sprint / Velocity.
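In code, the third equation amounts to a one-line conversion. Here is a minimal sketch in Python (the function names and example figures are mine, purely illustrative):

```python
def velocity_factor(md_per_sprint: float, velocity: float) -> float:
    """MDs per Story Point: what one SP has historically cost in Man Days."""
    return md_per_sprint / velocity

def md_estimate(story_points: float, md_per_sprint: float, velocity: float) -> float:
    """Convert a relative (SP) estimate into the organization's MD currency."""
    return story_points * velocity_factor(md_per_sprint, velocity)

# Illustrative numbers: a team of 5 running 2-week sprints has roughly
# 50 MDs per sprint. With a velocity of 25 SPs, each SP "costs" 2 MDs,
# so a 40 SP epic converts to 80 MDs.
print(md_estimate(40, md_per_sprint=50, velocity=25))  # -> 80.0
```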
Fine, no issues up to here: it's very straightforward and managers get the equation. But this is where the questions start. In a recent case, a team manager was challenging this model, and his criticism focused on two basic points:
- The "velocity factor" (MDs per SP) is essentially an exchange rate for MDs. So if the team does not have a consistent velocity (which was the case for his Scrum teams), this exchange rate fluctuates too much, meaning your estimates will likely be incorrect.
- Story Points are too abstract, and since the two formulas are essentially the same thing, why not just estimate in MDs?
Both criticisms he raised helped me understand the root cause of the disconnect.
I explained that on issue #1 (fluctuating velocity), he was absolutely correct. If your team's velocity fluctuates a lot, your MD estimation using the conversion formula will indeed likely be incorrect. Never forget that Scrum does not solve your problems; it just makes them painfully visible. And that is exactly what was happening here. You still have to do the hard work of fixing them.
He should be talking to his team about the reasons their velocity is fluctuating so much and listening attentively to their feedback. Very likely these issues have already been raised in their retrospectives (which was the case here). The manager's focus should be on removing these impediments and helping the team deliver more consistently, instead of trying to somehow magically improve their ability to estimate.
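His point about the fluctuating "exchange rate" can be made concrete with a short sketch: instead of producing a single MD figure, convert the SP estimate using each recent sprint's velocity and look at the spread (the numbers are invented for illustration):

```python
def md_range(story_points, md_per_sprint, recent_velocities):
    """Convert SPs to MDs using each recent velocity; return (best, worst) case.
    A wide range signals an unstable velocity factor, not a bad SP estimate."""
    estimates = [story_points * md_per_sprint / v for v in recent_velocities]
    return min(estimates), max(estimates)

# Stable team: velocity between 24 and 26 SPs -> a narrow MD range
print(md_range(40, 50, [24, 25, 26]))

# Fluctuating team: velocity between 15 and 35 SPs -> the "exchange rate"
# is all over the place, and so is any MD estimate derived from it
print(md_range(40, 50, [15, 25, 35]))
```

No formula makes the second team's estimates reliable; only removing the causes of the fluctuation does.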
And this was the perfect lead-in to address the second issue - "story points are too abstract, why not just use MDs?" I went back to the whiteboard and drew two red circles.
Yes, the formulas were very similar (they are both equally simplistic), but the difference was in where the focus was placed. In the waterfall version of the formula, there is no consideration for productivity. Its focus is on getting the estimate correct, since the overhead factor is easy to calculate and doesn't fluctuate much.
On the other hand, the Agile version of the formula flips that focus away from the estimate. Relative estimation is not difficult and can be learned in 1 hour. Besides trying out different relative estimation techniques (team estimation, planning poker, ...), there isn't much to improve there. Rather, it is the velocity factor that we focus on. That is a measure of our productivity, and if that factor is changing wildly or trending in the wrong direction, Agilists want to find out why. We're trying to unearth the problems that are keeping us from a sustainable, productive pace.
(note: velocity is an imperfect measure of productivity. Using it as a barometer for the Team’s delivery capacity and as a data set for retrospectives is very helpful, but using it as a performance goal for a Team misses the point completely. As with any imperfect measurement, velocity can be gamed, so don’t lose time trying to turn it into a performance metric for the Team.)
If your team's velocity is too unpredictable for you to give better estimates, then no formula in the world is going to change that fact. You need to get your hands dirty and find out why that's happening. Oftentimes, it will be related to the fights managers are trying to avoid for political reasons (dependencies on other teams, inconsistent test data at the corporate level, bad product management, bad technical practices, ...).
Focusing on MDs is missing the point
The reason MDs are so prevalent in the minds of managers is that they are the accepted currency of IT organizations. In fact, they are so commonplace that managers often forget they are not the end goal, but rather an abstraction layer used to represent the hard-to-measure-yet-always-mentioned Business Value.
Side note: while in the corporate world, business value will usually mean profit, it (value) does not have to equal money. It varies depending on the purpose of your organization (customer happiness, lives saved, quality, ...).
Instead of trying to think about how to measure Business Value, managers feel comfortable in the MD abstraction layer. This leads to misguided success metrics for projects, such as "deviation from initial estimate". These only perpetuate the focus on getting that damn MD estimation correct and cement the erroneous belief that eventually, everything must be translated into MDs.
I say this debate is misguided because I've seen many managers lose sight of what they should be focusing on. I've heard many IT managers tell me that their number one priority was making sure that they delivered projects on budget (MDs). Not improving the throughput of their teams, not reducing technical debt or time-to-market. No, their priority was nailing the estimates.
Essentially, they're saying "I don't care if we're delivering cr*p, I just want us to deliver cr*p in a predictable manner".
This is a problem. And until managers are willing to value being productive over being predictable, they are essentially an impediment to the improvement of their teams. Because it's only natural that their teams will sense the focus on predictability over value creation, and they will make it their priority too.
The mindset shift that must happen is the realization that the focus should be on delivering value. And to achieve this, even SPs are not sufficient, they only measure the amount of work a Team is able to deliver. Organizations looking to improve their ability to deliver value, must first figure out how to measure it.
In fact, one can easily imagine that it is precisely the inability to measure value that makes managers, instead, focus on the cost side of the equation. It’s hard to say how valuable a story (or even a project) is, but calculating how much a project deviated from its original estimates involves little more than 6th grade math.
The #NoEstimates movement has been making a lot of noise about this recently. Vasco Duarte wrote a good, short overview of it, for those interested.
I don't disagree (my favourite double negative) with anything they say, except that I don't like the name (everybody estimates, even in the scenario they are proposing) and also I don't think they are describing any breakthrough, but rather an advanced state for organizations applying Lean Thinking.
I prefer to think of estimations as Waste. Better estimations are not what will make an organization successful or help you deliver your new, super-cool product to the market. But even though it doesn't add any real value, it's inherent to software development. Minimizing this waste is a gradual and never-ending journey.
What's in a word?
Agile talks about the importance of transparency and words can be a very powerful means of conveying transparency. The flip side of that is also true: words can very easily be used to confuse the listener and obscure the Truth.
In Gonzo journalism, for example, hyperbolic metaphors and fantastical narratives that bear only vague connections to reality are used to create an image in the reader's mind that cuts through all the BS and shows the naked truth. This is why Hunter Thompson's "Fear and Loathing on the Campaign Trail '72" was once described by George McGovern's campaign manager as the "most accurate and least factual account of the election".
(that remains one of my favorite all-time phrases…)
Imagine if you didn't know much about Richard Nixon and wanted to know what kind of president he was. If you listened to Bill Clinton, you would hear this, "He (Nixon) understood the threat of Communism, but he also had the wisdom to know when it was time to reach out to the Soviet Union and to China." Clinton is not lying when he says this, but if you know anything about Nixon, you know this is far from being an accurate description of the man.
If, on the other hand, you had listened to Hunter Thompson, you would have heard this, "He was a swine of a man and a jabbering dupe of a president. Nixon was so crooked that he needed servants to help him screw his pants on every morning. Even his funeral was illegal. (...) His body should have been burned in a trash bin."
Which one gives you a better understanding of Richard Nixon's presidency?
Interestingly, though, almost everything Thompson said was not true. Nixon did not screw on his pants in the morning, his funeral was not illegal and by most accounts, he was not a dupe.
Words matter and we should take advantage of their power unless we want to suffer under their ambivalence.
Like Thompson, we want to be more accurate than factual when placing labels. We want to be a little bit gonzo.
So why this rant on the importance of words to convey meaning and transparency? Because I think some of the words we use a lot in our industry have lost all meaning and are desperately calling for a re-branding. The one I want to tackle in this post is a ubiquitous presence in the IT industry - professional recruitment companies (a.k.a. “staffing companies” or “placement professionals”).
Currently, the word we use to refer to them is "vendor", and sometimes even as "IT Consultancies" (argh...). It's a crowded field, but it includes companies such as Harvey Nash, Trasys and Unisys.
Large organizations like to work with them because they perceive there is real value in the service they are offering. After all, who wouldn’t want the support of a professional recruitment company when looking to hire someone?
But when you really think about it, what do these recruitment companies really know about Agile? My experience is not much. I have seen some of them asking ridiculous things such as candidates with a “minimum of 12 years experience in Scrum”.
(I replied to that request by recommending they try to hire Ken Schwaber or Jeff Sutherland)
So their actual value add is trivial at best. It's just a question of matching requests with profiles, no more than a basic dating website when you think about it. But of course the recruitment companies have no interest in setting up the dating website since it would destroy their business case - the information of which job postings are currently available is their only competitive advantage.
The true value of a professional recruiter would be to find and place the ideal candidate. If, however, they want to do this for Agile coaches and consultants, they first have to understand two things:
1) what their client's real objectives with Agile are (and how Agile can help achieve them)
2) what skills and characteristics are required to help the client achieve those objectives
Their lack of knowledge about Agile, organizational change or consulting means they are not able to do either one of the above. Also, their profitability comes from placing people, not helping their clients achieve their goals.
A free market for job postings would be in the interest of both the searching company and the interested freelancer. One has access to more candidates, the other has access to more opportunities. Also, by not having a middle-man, daily rates would be lower (even though revenue for the freelancer would actually be higher).
So the presence of the recruitment company creates inefficiency in the market by adding an extra layer. And that extra layer is permanent, as their commission on the freelancers' daily rate remains constant. The result is that clients pay more than they should for an independent Agile coach, and more often than not hire the wrong one because they don't understand what the ideal candidate(s) for their situation look like. After all, if they had good knowledge of Agile internally, they wouldn't be using a recruiting company to hire Agile coaches.
And what have Ohno and Lean taught us about improvement? We should always look at the process and identify the non-value-adding steps.
So what gives? If removing an inefficiency is to the benefit of both buyer and seller, why is it not done yet?
Well, it's partly because of fear and intimidation and partly because there aren't many other options out there (more on that later, but we desperately need a stronger Agile consulting market). The recruitment companies have cornered the market with hardline contracts that border on being illegal, and force people to give up part of their wage ad infinitum to an entity who adds no value to the process.
Where have we seen this type of relationship? In the world of prostitution. Pimps add no real value for the end customer. They don’t perform any of the actual work, and by going through a pimp the customer gets no additional guarantee of the quality of the service (after all, the pimp is not the one performing it). The only thing that is clear is that the pimp is involved in an abusive, borderline-proprietary relationship with a pseudo-independent worker who would clearly benefit from being set free.
I’m not the first one to use this term when referring to recruitment companies. I’m not that smart. But it is such a powerful and applicable description for their actual role, that I want to do my part in disseminating it. If we want to change the world of work, we will not be able to do it by working via pimps.
So what's holding us back?
Clearly, these pimps remain in business because the clients don't have (or don't perceive) other options. That's a huge gap in the market that we need to fill.
We need more Agile consultancies, but more on that in another post.
What's in a word?
Well, if we at least start referring to these recruitment professionals as "pimps", this will already help in increasing the transparency of the situation. And (hopefully) this will start to make people uncomfortable.
It's time to break free from our pimp overlords!
One of the typical challenges I’ve faced when companies used to waterfall start adopting Agile is the concern of managers accustomed to waterfall reporting: they suddenly feel they have no control over their project portfolio.
Indeed, I quickly learned that introducing Agile to upper management can be a semantic minefield. For example, attempts to redefine words such as “estimates” or introduce concepts such as “self-managing teams” were met with blank expressions from managers used to fixed price contracts and bold (yet empty) promises of delivery dates.
In a previous assignment, we introduced Scrum to a team responsible for 16 integrated applications, handling sales, delivery and billing. Projects would typically impact many of the 16 applications and, often times, other applications outside our domain. This meant multiple dependencies, which of course increased the risk of a delayed or buggy release.
One of the tools that proved helpful in these cases was a team board. Since this application team was divided into 3 sub-teams, we were using the Scrum of Scrums approach from Sensei Quesada to have an overview of the sprint progress for all 3 sub-teams. And while it was extremely helpful for the people who knew the contents of the sprint to have a complete overview of the progress (in this case, typically a project manager or a business analyst), we noticed that management didn’t see much value in it, since they didn’t know their projects at the story level. Instead, they were used to looking at % complete progress bars and tracking milestone delays and remediation actions.
So we did a couple of iterations on that board and came up with a Team Board v2.0.
First of all, we defined a clear purpose for the board: give an overview of ongoing and upcoming work at the project level, as well as information about the team’s recent sprints.
- Product backlog
- Velocity chart
- Project burndown chart(s)
- Escalations or external action(s) from latest retrospective (if applicable)
- Release planning
- Project backlog overview
On the right side of the board, there is information concerning work, results and actions from the team. Work committed for the current sprint can be found on the product backlog, as well as the team’s upcoming work for the next 2-3 sprints.
We found that the abstraction level of the board gave a clearer picture of the current state of progress of the team, as well as its planned actions. It made it easier for stakeholders who were not familiar with the details of the work we were doing to understand the status and risks.
On the left side of the board is the Project Backlog. Each row represents one project, identified on a “project card.” The project card contains some general information, including:
- Total budget
- Planned release
- Project manager and business analyst
- Applications impacted
The 5 columns after the project cards represent the basic stages our projects go through (keep in mind that despite working as a Scrum team, we were very much working within the constraints of a waterfall organization):
- Contract (analysis and contracting/budget)
- Development
- E2E testing
- UAT (user acceptance testing)
- Warranty (maintenance for 30 days after release)
For each project, every column holds one of three states – Not Started, In Progress or Done.
We also used small post-its as meta tags to highlight exceptions or issues in a project (sadly, the typical examples were “blocked” or "waiting"…).
The order of the projects on the board follows the planned release for each project – next planned release on top. This means new projects come in at the bottom and will climb their way up the board as their release approaches. So you would typically see the projects on top being in either the “UAT” or “Warranty” stage while the projects at the bottom would be in the “Contracting” stage. In this sense, the project board should follow the healthy diagonal line typical of Scrum Taskboards:
Since we had to work within the constraints of corporate releases, the typical situation was actually that groups of projects were committed to specific corporate releases. From the board's perspective, this meant that stages such as E2E testing and UAT would start on the same date for multiple projects, meaning that the "healthy" diagonal was more of a staircase:
When this expected flow is super-imposed onto the board, it quickly becomes evident which projects are "on track" and which projects have issues. For example, the project on the fourth row from the top is clearly an issue since it is blocked in "development" while projects further down the board are already in E2E testing.
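This "out of order" check is simple enough to express in code. As a hypothetical sketch (the project names are invented, and I'm simplifying each project down to its furthest in-progress stage, whereas on the real board each column carried its own state), a project is lagging if any project below it on the board has already reached a later stage:

```python
# Stage columns in board order, left to right.
STAGES = ["Contract", "Development", "E2E testing", "UAT", "Warranty"]

def flag_laggards(board):
    """board: list of (project, stage), ordered top to bottom, i.e. by planned
    release with the next release first. A project is flagged if any project
    BELOW it (releasing later) has already reached a further stage."""
    flagged = []
    for i, (name, stage) in enumerate(board):
        my_stage = STAGES.index(stage)
        if any(STAGES.index(s) > my_stage for _, s in board[i + 1:]):
            flagged.append(name)
    return flagged

board = [
    ("Project A", "UAT"),
    ("Project B", "UAT"),
    ("Project C", "E2E testing"),
    ("Project D", "Development"),  # stuck while E, below it, is already in E2E
    ("Project E", "E2E testing"),
    ("Project F", "Contract"),
]
print(flag_laggards(board))  # -> ['Project D']
```

The whole point of the board, of course, is that managers spotted this visually in seconds, with no code required.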
We quickly realized that visualizing the progress ladder was important, so we started coloring the project cards by release. So all the projects for the October corporate release would be colored blue, all the ones for the November corporate release would be colored green, etc.
(yes, it would be easier to just print them in color already, but for budgetary reasons at that time, nobody in the building was allowed to print in color anymore...).
The impact of the board was that line managers and domain managers started to come by and take a look at the team board. It provided them with an overview of their project portfolio and the overall progress and issues.
Not surprisingly, they also started asking productive questions, such as: "I see project X is blocked in E2E testing, what's going on? Can I do something to help unblock it?"