
Author: Robert Galen

Release-Level Retrospectives: Stepping up Your Game

From 2009 to 2012 I worked as the Director of Software Development and Lead Agile Coach at a company called iContact. If you’ve followed my writing, I’ve used my experiences there as a backdrop for many of my lessons learned. This is another one of those learnings, and if I may say so, I think it’s one of the most powerful.

Very early on we adopted a Release Train model where we had a sprint tempo of 2-week sprints and quarterly releases. We would “commit to” a release schedule about a year in advance and share that with our customers. I modeled it after Dean Leffingwell’s early writings about Release Trains, which was significantly before he introduced SAFe.

In fact, I’ve been using Scrum of Scrums and Release Trains in my coaching and for personal use since about 2005 or so.

Release Retrospective

One of the things we stumbled onto early on was holding a release-level retrospective at the end of the train. Yes, we would still do retros for each sprint. But the release retrospectives would focus on “connecting the dots” from the first to the last sprint as we built our stories and features.

But we also started doing a release retrospective as a way of sharing release feedback across the entire organization. The retrospective went beyond release specifics and became a way to gather feedback on all aspects of our agile delivery approaches from virtually the entire company.

Related Article: Agile Chartering – Beginning With The End in Mind

We found that these retrospectives were instrumental in the success of each of our releases. They also helped us in transforming our entire organization to more agile thinking and behavior.

To illustrate the latter point, I’ll share a story about our VP of Sales.

Kevin Fitzgerald Story

Kevin was our Vice President of Sales at iContact. He and I had worked together at a previous company too, so we were fairly comfortable with each other. Kevin is a great leader when it comes to building sales teams and he did just that at iContact.

As part of his role, Kevin was very driven to build cross-functional relationships between sales and the rest of the organization. One of the ways he did that was to support the “transparency” of our Agile / Scrum methods. He encouraged his various teams to attend and engage in our sprint reviews. He also engaged with our Scrum of Scrums. But one of the more important ways he provided great feedback and leadership was encouraging his teams to attend our release retrospectives.

I was always pleasantly surprised by the number of sales folks who attended the retrospectives and with their level of engagement and feedback.

But the real reason I’m sharing this story is about something that Kevin shared with me on many occasions. Early on he pulled me aside after a release retrospective and gave our teams some high praise. He said –

Bob, I wish my teams were as transparent and mature as yours. Not only do your teams work hard, but they also make everything they do completely transparent. That includes their successes, efforts, and even their failures. But the thing that impresses me most is their ability to deal with any and all constructive feedback.

I listen to the feedback your teams typically get in the review and some of it is quite harsh. But they maturely take the good and the bad and they take action on it. It’s an incredible example of how they are all about continuous improvement and results.

To be honest, I often challenge my teams to be “like yours”. Keep up the great work my friend.

And the real point to this story is the example that our release retrospectives made for the entire organization.

Over time, iContact began to transform as an Agile organization. One of the critical practices that I attribute that transformation to was our whole-company release retrospectives.

Another Perspective

Mark Riedeman was a Software Manager who facilitated our release retrospectives for probably two years at iContact. Mark was a great facilitator, but he was also very personable and comfortable in front of large crowds.

Over time, he and I brainstormed our strategies and adjustments. One of the key trends we saw was that the company-wide level of engagement in the process steadily increased.

Here are some of Mark’s recollections in his own words:

Mark Riedeman’s perspective

Here are my recollections from running the Release Retrospectives. I think they’re one of the most underutilized tools in Agile development. They can really be difficult to manage, delicate to facilitate, and cumbersome to organize. But they are completely worth it.

As you recall, I tried to run the first one like a big Sprint Retrospective, but with more people. That didn’t work out so well. In true agile fashion, I tried to improve every time and ended up learning a lot of sometimes-awkward lessons. I started with the basics, big Post-it pads for Good, Bad, and Try, tons of Post-it Notes and pens around the room, and a company-wide invite. Needless to say, it wasn’t the easiest thing to facilitate. Here’s how I ended up running them and reasons for some of the choices made:

At first, I put Post-it Notes and pens around the room and had people write things down. I read them all and decided where they went so we could all vote on them to come up with a prioritized list by the end of the hour. That didn’t scale so well, and we didn’t finish anywhere close to on time.

As the attendance grew, I had to figure out ways to get more of the input done faster so that we would have time to talk about ways to improve…the real goal of the meeting. I think the iterations on that went something like this:

  1. People don’t want to read their own written criticism out loud, so I collected all the cards, read them, and put them in a rough grouping on the boards and then did a final, pre-vote grouping. It took forever.

  2. Instead of my running around the room, I then had other people collect the cards for me so I could read them, put them on the boards, etc.

  3. Then, instead of my grouping them on the board, I’d just suggest a group theme to put them with, and then volunteers would organize them on the boards around the room.

  4. When that just wouldn’t scale anymore, we had to go electronic too, and I solicited anonymous feedback via a Google Form, categorized and organized it into groups ahead of time, and sent it all out the morning of the retrospective. That way, everyone could see what was already said and what topics I might have to bring up in front of the whole company. (That boosted attendance.) We still did about 10-15 minutes of written sticky additions as well.

Here are some of the other lessons learned along the way:

  • Facilitating this meeting is not for the faint of heart or the thin-skinned. You will inevitably have to manage discussions about internal company divisions and you become the spokesperson for the status of the company’s problems.

  • Because of that, I always thought it was important that the same person lead all of the meetings. No one wants a committee to hear their complaints. They want a person who can be held accountable, and the continuity seemed to really help.

  • In the meeting invite, I would send out all of the notes from the previous release highlighting the big items from that meeting. I would also highlight the progress that was made from the last retrospective, or lack thereof.

  • At the beginning of every meeting, I would start by reviewing the last Retrospective. In time, people start to notice the recurrence of the same kinds of problems. That’s the best value of the retrospectives. Not much can stay swept under the rug forever, and it forces people to start having difficult discussions about topics that really should be discussed.

  • After every meeting, I collected every Post-it Note and typed them up in a Release Retrospective wiki page and email. Even if some notes were almost word-for-word duplicates, I kept them all – no one wants to think you editorialized their feedback out, and they know what they wrote.

  • Hand out Sharpies instead of pens with Post-its. You get more big-picture feedback when people have to use fewer words.
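
Mark’s pre-grouping of the Google Form feedback is the kind of thing you can approximate with a small script before the meeting. Here’s a minimal sketch, assuming the responses are exported to a CSV with a single “feedback” column; the theme names, keywords, and file name are my own hypothetical illustrations, not the categories Mark actually used.

```python
# Minimal sketch: bucket anonymous release-retrospective feedback into themes
# before the meeting. Theme names, keywords, and the CSV layout are hypothetical.
import csv
from collections import defaultdict

THEMES = {
    "Release & Deployment": ["release", "deploy", "rollback"],
    "Cross-team Communication": ["communication", "handoff", "silo"],
    "Testing & Quality": ["test", "bug", "regression"],
}

def group_feedback(rows):
    """Place each item under the first theme whose keyword it mentions."""
    grouped = defaultdict(list)
    for row in rows:
        text = row["feedback"].strip()
        theme = next(
            (name for name, keys in THEMES.items()
             if any(k in text.lower() for k in keys)),
            "Unsorted (discuss live)",
        )
        grouped[theme].append(text)
    return grouped

if __name__ == "__main__":
    with open("release_retro_feedback.csv", newline="") as f:  # hypothetical export
        grouped = group_feedback(list(csv.DictReader(f)))
    for theme, items in grouped.items():
        print(f"\n{theme} ({len(items)} items)")
        for item in items:
            print(f"  - {item}")
```

Sending a pre-grouped summary out the morning of the retrospective, as Mark did, means the hour itself can be spent on improvement conversations rather than on sorting cards.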

Running Really Big Retrospectives

In a nutshell, release retrospectives are simply BIG retrospectives. They encompass a longer duration of time and a broader audience.

In our case at iContact, we invited the entire company. There is some artistry to running big retrospectives, and Open Space techniques can often help with some of the dynamics, as can electronic tooling support pre- and post-retrospective.

Here is a link to other perspectives on running larger retrospectives – http://joakimsunden.com/2013/01/running-big-retrospectives-at-spotify/

And I’ll leave it to you to find your favorite Open Space guides to mine for hints about running effective retrospectives.

Wrapping Up

I would strongly encourage you to adopt retrospectives at various tempos across your agile team dynamics – sprint-ly, release-ly, quarterly, or whatever makes the most sense.

You’ll get various levels of feedback depending on whom you invite and how you frame the scope of the retrospective. But beyond the raw data, you’ll be exposing your firm to a fundamental agile practice, which just might influence your organization…by example.

Stay agile my friends,

Bob.

A few related articles:

  • A nice article on the Scrum Alliance website about release retrospectives.

  • A 5-part series posted on the VersionOne blog. Part 1 can be found here.

Forecasting: Is it EVIL in Agile Portfolios?

I’m often quite wordy in my blogs. I’ll pose an initial question in the title, throw out a thousand words or so, and then present a conclusion. I’m not going to do that here. Instead, I’ll be much more succinct.

IS FORECASTING EVIL IN AGILE PORTFOLIOS?

YES!

The context for this conclusion and subsequent discussion is three-fold:

1. Forecasting when you are just building your Agile teams OR are in the early stages of an Agile transformation;

2. And, when you’ve been doing Agile for a while, and you’ve become overconfident in your capacity awareness;

3. And forecasting in this sense is anyone determining how large something is or how long it will take while NOT fully engaging the team in the estimation, planning, and forecasting.

Let me be clear, in my experience this IS the way traditional projects have been forecast.

Usually, a small group of product folks will get together with technical managers, a project manager, architects, and perhaps a technical team lead or two. Often QA and other functional team representatives are left out of the mix, with the thinking being that the developers can estimate “for” them. The product folks present an “ask” to this small team and they estimate the LOE (Level-of-Effort) associated with the functionality. If it’s a Waterfall project, then who (# of Dev, QA, etc.), how long (days, weeks, months, transitions or workflow, etc.), and output (lines of code or components, tests, documents, etc.) are the usual units of estimation.

In Agile contexts, the same things occur. However, the estimation units usually change to be specific numbers of small Agile teams, velocity or capacity, and number of iterations.

To be even more clear, these folks are not destined to do the work. But they set an expectation for the work and then go back to whatever their day job is. At some point the “ask”, if the cost and ROI are deemed worthwhile, is handed off to a set of teams to deliver. Often, the view is that the “hard work” has already been done, and there is simply “execution” left for the teams.

MY KEY PROBLEM

My key problem with all of this is that someone else is estimating for, and let’s be real here – committing for, the teams. I know, I know, there are many excuses – I mean reasons for it. Here are some I often hear:

1. The team is working on something important right now. We simply don’t have the time to interrupt them to estimate another project. And it would COST too much to do that as well. What we’ll do is select a small set of high-skill team members to do the pre-work before we get the entire team engaged.

2. This project is WAY too BIG to get everyone together. We do Enterprise-level projects around here. We have large numbers of teams AND many of them are distributed around the world. There’s just no way that we can get EVERYONE together.

3. This project involves new technologies and new approaches. The team doesn’t have a clue about this. But Bob, the architect, does. We’ll get Bob and his team to “iron out” the hard bits in advance of the team’s execution. Bob can even run some “Lunch & Learns” as a way of passing off technical knowledge in the beginning.

4. We’re trying to prioritize our entire PORTFOLIO right now and we need high-level LOE estimates to do that. So there’s no way we can ask the entire team to estimate 10-15 major initiatives. Of course, we’ll give each team the opportunity to pull together a more realistic estimate before we start execution.

Now there is validity to all of these points. I truly understand the balance between getting things done (current project work) and planning and forecasting (determining future capacity). But if our goal is to forecast accurately and best understand our projects, then I still feel the best way to do that is to engage as much of the team targeted for the work as early as possible, so that we get the most realistic estimates.

Here are five reasons that engaging your teams is the best way to forecast. The list isn’t all-inclusive; it’s simply intended to show the “why” behind my recommendation that forecasting is evil IF you don’t engage your teams in the effort.

5 REASONS TO ENGAGE YOUR AGILE TEAMS IN FORECASTING

Velocity is a fragile thing

First of all, anyone who is using velocity as a measure of his or her Agile team’s output or performance must realize that it’s a fragile thing indeed. Beyond not wanting to compare it across teams, there are simply so many factors that can influence velocity. For example, illness, attrition, interruptions, multi-tasking, skill, team maturity, and co-location are just a sample of the factors.

I’m not saying not to use it. I consider it quite a valuable metric. It’s just that you need to consider it within the context of each team and not over- or under-react to sudden changes in velocity. If we include the teams in the planning mix, they’ll take a more reasoned approach to estimating their velocity AND the possible variations they might experience due to “external forces”.

TEAM COHESION MATTERS

I’ve found that teams take quite a long time to come together and to become truly cohesive as a team. Once formed, they are incredibly capable. But if you start picking team members away for special projects or higher-priority interruptions, then you undermine the capability of the entire team.

Often in our traditional forecasting we ignore the real world of multi-tasking, project interruptions, focus factors, customer support, and such. If we include our teams in the planning mix, they naturally include these factors.

Not that long ago someone asked me what the ramp-up factor was for a new Agile team. In other words, how many sprints would it take for a new team to become fully functional? They were looking for a magic number to plug into some spreadsheets as they did their forecasting.

My answer to them – there is no magic number for this AND you might want to ask your teams. While I felt it was the correct answer, I don’t think they appreciated it.

PEOPLE ARE NOT “RESOURCES”; THEY’RE NOT FUNGIBLE

If you’re dealing with chairs or desktop computers, then I could see someone else forecasting the needs over the next few years as being reasonable. These are fungible resources and of course they can be forecast.

But are people (skills, collaboration, morale, ability to attract/hire, etc.) so easily handled? My broad experience tells me – no.

So if we include our teams in the planning mix, we’ll get a sense of the capabilities of the team that we’ve formed. We’ll find out if they have the equipment, tools, and training to do the job. We’ll find out how long it will take the team we’ve formed to do the job we’ve placed before them. They might even include real-world events like vacations and family events in the plans.

UNDER COMMIT AND OVER DELIVER

It’s incredibly easy for someone not doing the work to commit to an outcome. Often they’re quite optimistic about the LOE – maintaining a sunny-day view of everything and not truly considering risks. I think it’s human nature. But there’s a reason that your building contractor not only estimates building your home but builds it as well. Can you imagine if I was doing the estimation for your contractor?

Sure, many of them come in “over schedule”, but I guarantee a terrible result if I’m doing the estimation for them.

If we include our teams in the high-level planning mix, we’ll get much more realistic plans that include risk and contingency. The other thing that always impacts our plans is cross-team and external dependencies. Again, teams can more realistically and broadly consider and plan for these.

When I do release planning (PSI – Potentially Shippable Increment planning for you SAFe folks) with Agile teams I’m coaching I often talk about discovering what I call “glue stories or glue work”. This is work that isn’t typically associated with a functional Feature or User Story, but it needs to be completed in order for the PSI or Release to be usable or considered complete. My experience is that ~40% of the work for a project/release/PSI usually surrounds this non-obvious or hidden work.

And only your teams can truly surface these risks, dependencies, and hidden work in your plans. Ever wonder why most projects are over schedule? I think this is one of the major reasons for it.

SPEAK TRUTH ABOUT UNKNOWNS AND AMBIGUITY

In other words, be willing to say – “I don’t know”.

One of the things I’ve always found interesting when getting estimates from an “advisory team” as opposed to the eventual team themselves is that I rarely hear those magical words – I don’t know.

If you think it’s hard for a team member to admit this, it’s even harder for these more seasoned folks to admit their lack of specific knowledge and practical experience. But if you want to understand your projects accurately, then having this honest dialogue about unknowns and ambiguity as early as possible is exactly what you want.

And in my experience, teams doing their own planning will have a tendency to speak truth much more often than folks who have seniority or perceptions to worry about.

WRAPPING UP

Now that I think about it, do I have trouble with forecasting? Actually, no!

It’s the perceptions around it that are the problem, words like: commitment, fixed date and scope, customer expectations, and promises made for the teams. If you truly do forecasting as a high-level triangulation mechanism, but don’t believe/use those LOE estimates until the teams “make them their own”, then I’m perfectly fine with forecasting.

Scaled Agile Framework (SAFe) conceptually handles this quite nicely. It allows for ongoing portfolio and project-level planning. But there is NO COMMITMENT to a PSI or Release Train until the team gets the chance to break down the features into user stories and pull together their PSI plan (a response) to all of the higher level planning. Then they commit to the PSI and begin execution.

If the leaders and stakeholders planned for the team releasing 10 Features in this PSI, but the team only committed to 5, then those leaders and stakeholders should have made soft enough commitments so they can go back and reset them to 5. And they should also use that data-point of “5 feature velocity or capacity” to readjust their portfolio and project level forecasted plans.

Now that sounds easy in words, but in my experience the devil is in the details of readjusting those initial “commitments”. And it’s not for the team to do. It’s for the leaders, stakeholders, architects, managers, product owners, and project managers to do. They are the ones “disconnected from reality”.

Stay Agile my friends,

Bob.

Agile Journey Index – A “Balanced” Guide for Continuous Improvement

If you had asked me about five years ago about agile team and organizational assessments, you might have gotten your head bitten off. You see, I used to be violently opposed to formally assessing agile teams in any way.

The roots of it probably related to aggregating team velocity. If you’re wondering, that’s not such a good thing to do either. I was worried about teams comparing themselves to each other and creating unhealthy or dysfunctional behaviors. I also worried about what THEY (leadership, managers, Project Managers, HR folks, etc.) would do with the information.

Now I’ve always felt that having maturity data around, in some form, was helpful to seasoned agile coaches. It just gets hairy when you start using it for organizational and x-team metrics. And it’s the inherent “metrics dysfunction” that is always lurking in the shadows that is the real concern.

Agile Assessments

But as time has passed, I’ve warmed up to assessment tools – or at least the notion of performing assessments. I think the key contributor to my growth in this area is Agile Bill Krebs. Bill is the creator of the Agile Journey Index, which is the assessment tool I use in my own coaching. Bill has shown me that some assessment tools can be used in a healthy way within agile transformations. That it’s not just about the tool, but it’s how you approach using it, communicating the results, and inspiring continuous improvement.

But before I get into the AJI, there are others on the market and the list is growing…

Agile Assessment Tools

I recently noticed that Sally Elatta and Agile Transformations have released their agile assessment tool – AgilityHealth. It reminded me that this space is getting a bit crowded as well (and probably will continue to expand) as folks figure out how to monetize assessments and their supporting tools.

Already in this space are:

  • Evidence Based Management – Schwaber;
  • Comparative Agility – Cohn;
  • Forrester;
  • AgileRBI – DavisBase;
  • Scaled Agile Framework assessments – Leffingwell; and
  • Agile Adoption Index – Sidky.

So just like agile certifications and scaling methods, the field of agile assessments is apparently alive and well…and growing. That growth is what has inspired me to share the AJI, in the hope of introducing you to what I think is a less well known, but congruent and effective, approach to agile assessments.

Agile Journey Index

The Agile Journey Index (AJI) looks at agile practices in three broad-brush categories:

  1. Plan
  2. Do
  3. Check

Within each category, there are specific tactics, techniques, or practices that are evaluated. Let’s expand each category:

  1. Plan
    1. User Stories
    2. Product Backlog
    3. Estimation
    4. Release Plan
    5. Iteration Plan
    6. Big Picture
    7. Governance
  2. Do
    1. Stand-up Meeting
    2. Task Board
    3. Burndown
    4. Code Review
    5. Unit Test
    6. QA automation
    7. Quality Engineering
    8. Continuous Integration
    9. Done
  3. Check
    1. Demo
    2. Retrospective
    3. Kaizen

The rating scale is from 1 (new) to 9 (experienced) for all 19 of the reviewed practices or activities. Getting a score of 10 is a special case that results from a peer / coach reviewing your “findings”.

The AJI provides evaluation examples for each practice, so the reviewer or assessor can ascertain the level of performance. But remember, and Bill makes this quite clear, these are subjective and context-based measures or evaluations.
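
To make the mechanics concrete, here is a minimal sketch of how a team’s practice ratings might be recorded and summarized per category. The category and practice names come from the list above and the 1-9 scale from the AJI; the data layout, sample scores, and summary math are my own illustration, not part of Bill’s tool.

```python
# Minimal sketch: record a team's AJI-style practice ratings (1 = new .. 9 = experienced)
# and summarize them per Plan / Do / Check category. Sample scores are hypothetical.
AJI_PRACTICES = {
    "Plan": ["User Stories", "Product Backlog", "Estimation", "Release Plan",
             "Iteration Plan", "Big Picture", "Governance"],
    "Do": ["Stand-up Meeting", "Task Board", "Burndown", "Code Review", "Unit Test",
           "QA Automation", "Quality Engineering", "Continuous Integration", "Done"],
    "Check": ["Demo", "Retrospective", "Kaizen"],
}

def summarize(scores):
    """Print a per-category average so the team can see where to focus next."""
    for category, practices in AJI_PRACTICES.items():
        rated = [scores[p] for p in practices if p in scores]
        if rated:
            avg = sum(rated) / len(rated)
            print(f"{category:>5}: {avg:.1f} average across {len(rated)} rated practices")

# A hypothetical quarterly snapshot for one team:
summarize({"User Stories": 5, "Product Backlog": 6, "Estimation": 4,
           "Stand-up Meeting": 7, "Unit Test": 3, "Continuous Integration": 2,
           "Demo": 6, "Retrospective": 6, "Kaizen": 2})
```

The value is in a team seeing its own trend from one assessment to the next, not in comparing the averages across teams.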

Gamification

One of the primary reasons I like the AJI is that Bill has combined it with gamification. That is, assessments lead to teams earning “Kaji Badges” that indicate their level of performance, which inspires good-natured and healthy competition across teams.

He works very hard to emphasize the positive interaction he wants the tool to produce, and the badges help reinforce that. Reporting or x-team consolidation is also represented by badge levels of performance, which augment the “raw numbers”.

Assessment Dynamics

An AJI assessment is a periodic activity – often done quarterly or semi-annually. The assessment is mostly based on interviewing team members and having them tell stories and share examples along the boundaries of the 19 core practices.

The other side of the assessment is observation. So it’s a combination of the two. When I do an assessment, I like to get a broader-brush view than simply gathering data from the team I’m reviewing. Often I’ll go to leadership, other teams, and even customers to get an “external” perspective. Then I’ll contrast that against what the team is saying. The third component is my own direct observation. So to sum that up:

  1. Team interviewing
  2. Expert (coach-level) observation
  3. Outside interviewing (x-team, leadership, and customer perceptions)

The assessment is an aggregation of all of the feedback into the AJI framework. In fact, the framework provides a nice model for all of the discussion and aggregation. I can’t imagine doing it without some sort of baseline tool to use.

The Point

From my perspective, and this is why I like the name Agile Journey Index, the entire point of the assessment is not to grade or compare cross-team performance. Remember, we are not grading teams for performance evaluation, monetary compensation or any other carrot & stick motivations. It’s to provide each team an assessment of their maturity and practices in a holistic and balanced fashion so that they can plot and plan their own personal improvement. Their agile journey if you will.

It’s mostly FOR the team.

Outsourced and Distributed Teams

Another useful part of assessments is integrating outsourced or distributed teams. I work for Velocity Partners, which is a leading nearshore outsource partner in Latin America. Nearly all of our client teams are “split” across the client and our remote teams or distributed in some way. Often the client teams and our teams have very different experience and approaches toward agile delivery.

One way to vet these differences and to integrate the two teams is to assess each of our capabilities relative to each other. Then we can discuss merging the approaches in a way that realizes the teams’ potential. We have something we call an Agile Alignment Workshop, which I usually facilitate as a coach, that works to align our clients and us.

And I’m not talking about adopting the very same approach or trying to create shared experience, which is often impossible. But if you simply expose the differences and discuss them, the teams can creatively adopt a shared style of operation and delivery. Assessment tooling gives us the platform and the language for these conversations, leading toward practical alignment.

The other valuable part is for our internal use. Our clients often engage us because we’re expert in agile approaches. One reason for that is our experience. But the other reason is the pressure that being a nearshore vendor places on us. You see, successfully “being Agile” is hard with a co-located team, but even more challenging in a distributed context. So we have to truly commit to the agile principles in order to deliver for our clients.

So assessing our capabilities can also serve to “show off” our strengths to our clients, and very often they look to us for guidance and leadership on the best approaches to agile delivery and how to execute them effectively.

Scrum Product Ownership – Assessment

Even more to the point, I like the AJI so much that I “extended” it conceptually to include a deeper dive assessment into the maturity of your product ownership practices. I included this in my second edition of Scrum Product Ownership, in Appendix D.

Again, I don’t intend for this to be used as an outside-in organizational assessment. Instead, I am hoping that Product Owners and organizations will use it as a maturation benchmark and a means of focusing their own private learnings.

Wrapping Up

What’s also nice about the Agile Journey Index is that Bill isn’t looking to attach certifications or generate immense revenue from it. He’s essentially “giving it away” to the agile community via Creative Commons licensing. Yes, he does ask that you recognize him as the source and that you implement the tool thoughtfully in the way it’s intended. And he does offer consulting and a bit of training around the tool. But that being said, he isn’t marketing it the way, for example, the AgilityHealth, Forrester, EBM, or AgileRBI products have been.

And another difference is that Bill has used the tool at IBM and AllScripts directly in his personal coaching. I’ve been using it for 2-3 years in my own coaching, and many others have been leveraging it as well. So the point is that it is grounded in the “real world” and it’s been proven to help create an environment for evolving high-performing agile teams.

I hope this article has opened your eyes a bit to agile assessments and inspired you to take a look at the AJI and my product ownership extensions.

Stay agile my friends,
Bob.

Don’t forget to leave your comments below.

References
W. Krebs, P. Morgan, R. Ashton. The Agile Journey Index, 2012, http://www.agiledimensions.com
Kaji Badges™ is a trademark of Agile Dimensions LLC and © 2012 Agile Dimensions LLC. If your organization would like to issue formally certified badges, contact Bill Krebs for the facilitation course overview.

Read my Lips – There are NO Magic Numbers in Agile!

When I was a new software manager, which if I’m honest was more than a few years ago, one of the things I went looking for was magic numbers.

Let me share a few with you:

  • What is the most effective ratio of Developers to Testers?
  • If I was planning a software project, how long did it usually take to create the requirements? vs. writing the code? vs. doing the testing?
  • How many architects does it typically take for a larger-scale software project?
  • When I hire a new engineer, how long should it take for them to ramp up?
    • The answer is 3 months…Trust Me!
  • When someone resigns, how long should it take (on average) to replace them?
  • For every tester, for every developer, and for every team lead, what was the right “focus factor” to plan their weekly work efforts?
    • I can’t help myself…the right number is 70% or sometimes folks like to say 5.5 hours of work time per day…Trust Me!
  • If I invested in automation, how many testers could I re-target per 1,000 automated test cases?
  • How many projects could I split (share) across my 100-person development organization?
  • How many resources (mostly I mean people) does it take to tackle a small project? A medium project? A large project?
  • How many large projects could I run through my organization in parallel?
  • I’m ramping up a new company with some new “Agile” projects. How fast can I hire folks? How fast can I pull agile teams together? How fast can they be fully operational?
  • How many bugs, on average, will I find for every 1,000 lines of code? And what is the average time to fix those bugs?
  • What is the right amount of the backlog to reserve for technical debt?
  • How about the % of coverage for: Unit tests? Automation? Regression?
    • I can’t help myself again…it’s 80% for unit and 100% for everything else…Trust Me!
  • What’s the right balance between test automation and manual test cases?
  • 2 weeks is the perfect sprint length, right?
  • We don’t have the money to hire a Scrum Master per team. Is it a full-time job? It can’t be. How many teams can a Scrum Master effectively handle? Please tell me it’s 5!
  • I have 1000 user stories on our Product Backlog…is that too many? Or too few? What is the right number?
    • The answer is – 2 releases worth of PBI’s or ~250 stories…Trust Me!
  • Scrum says that teams should be 7 +/- 2 in size. That means I can’t have a 10-person Scrum team—right?

Well, that may have been more than a “few”. Did this list help you to understand what I mean by magic numbers?

Why the Focus?

Somewhere back in history, software project managers and leaders allowed our “engineering heads” to take over in our planning. We discovered that we could pull spreadsheets together to plan, monitor, and predict almost any aspect of software development.

Microsoft Project and Excel, then, are the primary culprits in this endeavor. We felt, and it felt good I might add, that we could take all of the complexity and variability and thinking behind the creation of software and model it arithmetically.

How cool is that?

We could also remove context-driven thinking from the equation – especially since the numbers hold true no matter the context. We can simply plug them in…and away we go. I often think of their use as being the easy way out or a silver bullet. While certainly easy, I’ve found that there are no silver bullets and we were nearly always wrong. How about you?

Harsh Reality

But the reality is software projects (people, complexity, learning, creativity) cannot be modeled, estimated, or precisely measured.

Every team and project is different and you should stay away from using any magic numbers as replacements for your own thinking, observations, and ongoing learning about what works in your teams’ contexts.

Trying to find a hiring-time factor and a new-engineer ramp-up factor for your project cost versus impact forecasts is a fool’s errand. Sure, you can easily come up with some magic numbers and put the model together. But my experience is that it NEVER models the reality of your situation.

And the worst part is we usually convince ourselves that it does – constantly staring at the numbers and tweaking our models to “manage” the project.

Are They ALL Bad?

No, of course not. There’s usually some hard-fought wisdom embedded within the numbers. And for certain contexts they effectively serve to establish an entry-level thinking model for our planning.

For example, I like using developer to tester ratios as a “health check” for agile teams I’m coaching. If I see a 6:0 or 10:1 ratio, then I’d probably consider that suspect and look to see if the poor tester is overloaded. So, in this case, the ratios aren’t fixed per team but are balance indicators.

I might add though that without the ratio awareness, I could also look at the teams’ velocity and output quality and determine the imbalance just as easily.
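
As a quick illustration of the ratio-as-health-check idea (a balance indicator, not a fixed target), here’s a tiny sketch; the team names and the alert threshold are hypothetical.

```python
# Minimal sketch: flag developer-to-tester balances that look worth a closer look.
# Team names and the 4:1 threshold are hypothetical, not a recommended standard.
def ratio_flag(devs, testers, threshold=4.0):
    if testers == 0:
        return f"suspect ({devs}:0) - no testers at all"
    ratio = devs / testers
    return f"suspect ({devs}:{testers})" if ratio > threshold else f"ok ({devs}:{testers})"

for team, (devs, testers) in {"Team A": (6, 0), "Team B": (10, 1), "Team C": (4, 2)}.items():
    print(team, "->", ratio_flag(devs, testers))
```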

What to do Instead?

Look at your context. Ask your teams. And create your numbers from your history and your own experience…in your contexts.

But Bob, we’re starting a new set of teams – so I have no context. Yet, I need to pull a model and a plan together. My boss, the CTO, wants it by tomorrow.

My advice then would be to wait. Tell your managers and executives that you don’t know yet and that you’ll have to collect some real-time execution data from your teams. I know this won’t be that popular, but it IS the truth.

For example, my colleague Mary Thorn and I go round and round about developer-to-tester ratios in agile team contexts. Mary has a reference ratio of 3:1 if you’re doing more manual testing and 2:1 if you also want to develop automation in parallel with your functional testing. I always push back on Mary about even SHARING these numbers with people in classes and at conferences. I’m always afraid that they’ll blindly take her advice as gospel and rush back to their organizations to implement them. Many do.

But Mary is right that these ratios can be useful IF we view them and act on them in the right way.

There are other magic numbers that can be more harmful. For example, if your boss asks you to pull together a project-level plan and date commitment for a set of 10 agile teams, then you’ll be challenged to guess at each team’s ongoing velocity over time. If they are new teams, you’ll want to explore the ramp-up. And in order to provide a holistic date / scope commitment, you’ll want to aggregate their velocities into a single number. In this case I think you’re on very dangerous ground – more dangerous if these are new teams.

Why?

Because you’re not just guessing about the numbers, you’re also making some sort of business commitment based on them. It would be much better to gather some execution-based data across the teams and then do your forecasting based on real numbers versus magic numbers.
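
As a simple illustration of the difference, here’s a minimal sketch that forecasts the same backlog two ways: once with a single guessed “magic” velocity per team, and once from a few sprints of observed velocity with its variance. The team names, velocities, and backlog size are hypothetical.

```python
# Minimal sketch: contrast a forecast built from observed sprint velocities with one
# built from a single guessed "magic number". All sample data below is hypothetical.
from statistics import mean, pstdev

observed = {  # a few sprints of real velocity per team
    "Team A": [21, 18, 24, 20],
    "Team B": [13, 15, 11, 16],
    "Team C": [30, 26, 33, 29],
}

backlog_points = 600

# Magic-number approach: assume every team does 25 points, every sprint.
magic_total = 25 * len(observed)
print(f"Magic-number plan: {backlog_points / magic_total:.1f} sprints")

# Data-driven approach: use each team's own average, and show a pessimistic
# range using one standard deviation of the observed variance.
avg_total = sum(mean(v) for v in observed.values())
low_total = sum(mean(v) - pstdev(v) for v in observed.values())
print(f"Observed plan:     {backlog_points / avg_total:.1f} sprints "
      f"(worse case ~{backlog_points / low_total:.1f})")
```

Even this toy version shows how a one-number guess hides the range you actually need to communicate.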

Wrapping Up

I actually misspoke in the title. We should get away from magic numbers in all of our software methods and approaches, not just in agile. They’re simply misleading at best and create dysfunctional organizations at worst.

Let me wrap up with one final story. For many years, I’ve coached new Scrum teams towards effectively starting up. I have a “recipe” for sprint planning for new teams.

  1. Instead of allowing the organization to assign a per-day magic number for team members, I ask each individual team member to offer their capacity for sprint planning. I want them to consider their vacation time, other projects, meetings, bug fixes, interrupts, etc. and give me an estimate of the time they have FOR this team’s sprint.
  2. I also ask team members to plan only in ½-day increments. That means defining their capacity in ½-day increments and tasking out the sprint work in ½-day increments.

I want to start them off avoiding hours and avoiding magic numbers. I want them thinking of their personal work capacity and committing to their work.

I usually get pushback from the leadership team with this approach. They tell me that testers should plan at 6.5 hours per day, developers at 7 hours per day, and team leads at 5.5 hours per day. They usually have names for these numbers – focus factor being a common one. While the numbers may be generally accurate or based on good intent, I always ask that each team member be allowed to consider their own context for the next sprint and to commit their own capacity.
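
Here’s a minimal sketch of the difference between the two approaches for a single 2-week sprint; the names, half-day commitments, and role-based focus factors (echoing the hours-per-day numbers above) are illustrative only.

```python
# Minimal sketch: sprint capacity built from each member's own half-day commitment
# versus a blanket role-based "focus factor". All names and numbers are hypothetical.
SPRINT_DAYS = 10  # a 2-week sprint

# Each person commits their own capacity in half-day increments, after considering
# vacation, support duty, meetings, other projects, etc.
team_halfdays = {"Ana": 14, "Raj": 18, "Mei": 9, "Tomas": 16}
team_capacity_days = sum(team_halfdays.values()) / 2
print(f"Team-committed capacity: {team_capacity_days:.1f} person-days")

# The "magic number" alternative: everyone gets a role-based focus factor.
focus_hours = {"developer": 7.0, "tester": 6.5, "lead": 5.5}
roles = {"Ana": "developer", "Raj": "developer", "Mei": "tester", "Tomas": "lead"}
magic_days = sum(focus_hours[r] for r in roles.values()) * SPRINT_DAYS / 8
print(f"Focus-factor capacity:   {magic_days:.1f} person-days")
```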

What I always find is much more real-world variance than the magic numbers would have accommodated. But beyond that, it gives the team a sense of empowerment and trust to plan and commit for themselves. It also turns out that they’re pretty good at it. Now how agile is that?

Stay agile my friends,
Bob.

Don’t forget to leave your comments below.

Agile Teams – The Weakest Link: Can You Hear Me Now?

In my last post, we explored a situation where a Product Owner had a long-term challenge with their performance that was weighing their team down.

But as I finished that article, I realized that there might be something else going on that I wanted to explore here.

In that situation, the team’s coach assured me that conversations and escalations had happened between herself, the team, and the Product Owner. She even said she’d escalated things to the PO’s boss. She made it sound like there had been a huge amount of clear feedback over the course of two full years.

Given this, they seemed to be at an insurmountable obstacle—a poorly performing Product Owner and nobody willing to do anything to improve the situation. In other words, they were stuck.

But that’s not it…

But that’s not the topic of this post. As I thought about what I was hearing, it began to dawn on me that the coach might not be communicating as clearly as she thought. In the last post, I put the burden of improvement squarely on the shoulders of the Product Owner. But now I want to put some of it on the coach herself.

Perhaps I’ll illustrate the point with a story –

For years I’ve taught leadership workshops, both for traditional and agile-centric leadership. Most often these are general technical-leadership classes that are not agile-centric. You find that hard to believe, don’t you? I frequently ask the room full of managers and leaders how strong they feel their feedback skills are. Usually I get quite a bit of bravado, and the majority are emphatic that they’re doing an outstanding job of providing feedback to their teams.

I hear things like:

  • Providing solid feedback is my #1 priority.
  • Bob, it’s my JOB to give feedback, I take it seriously and I do it well.
  • I’m a straight-shooter, everyone knows where they stand with me.
  • Bob, let me give YOU some feedback…

So I’ve clearly touched a nerve. For the moment, let’s assume everyone IS doing a great job.

But in the meantime, I’ve surveyed their teams. I’ve also looked at past performance reviews the leaders have written and contrasted them against the verbal discussions I’ve had with the leaders. The point is, I look for evidence of their feedback and how congruent and clear it has been.

What do you think I typically find?

Quite often that performance feedback isn’t being given at all. For example, people they identify as marginal performers have often been given outstanding reviews and positive feedback. In some cases, even promoted. When I ask team members about feedback, the majority implies that they have “no idea” where their performance stands from their managers’ perspective.

I usually communicate this as an 80:20 relationship, with the leaders thinking they’re 80% effective and the reality is closer to 20% effectiveness. You might ask what causes the disconnect?

I have no definitive answer, but I think the following come into play:

  1. Giving congruent and constructive feedback is HARD, so we often avoid it.
  2. Often we think we’ve clearly communicated something, but we don’t confirm receipt.
  3. Sometimes we like to tell people what they want to hear vs. what they need to hear, avoiding any conflict.
  4. While we might provide the feedback, we don’t follow up to check on acceptance and progress/improvement.
  5. It takes effort and time.

I have the feeling that some of this might be coming into play in my weakest link article.

Saying is only the first step

I have a general rule on feedback. Just giving it is only half the job, probably the easier half. You then need to strive to ensure your feedback was accurately received AND that it is driving some sort of action or result. If not, then the burden is on you to ensure you provide additional feedback.

The point is, you are not only responsible for giving the feedback, but also responsible for the results. I know, I know, that’s not fair. But from my perspective, just saying words doesn’t drive action. It doesn’t verify that they ‘got’ your message, and it doesn’t ensure that there is a healthy and productive outcome. All of which I suspect you want as part of your giving the feedback.

Dirty little secret

And I shared this with the coach in this sad tale. Leaders don’t always pay attention to what you say. Really, Bob? Yes, really!

Most senior leaders get feedback all of the time, from everyone. Most of it is venting, complaining, or whining about something. So, they often filter the feedback. If they get it and they see events unfolding that support the feedback, then they might take action on it.

But if they don’t see events that support the feedback, often they just assume you were venting and not really expecting them to do anything about it.

That’s why I recommended that the coach stop hiding the issues and simply expose them. Then the reality on the ground would connect to the feedback that they were giving the Product Owner’s manager.

Wrapping up

I think the entire point of this post is providing improved, more courageous, and congruent feedback communication within your teams. Make sure you’re doing it, and doing it so that it’s received in the way you envision it to be.

Simply talking and walking away isn’t always effective.

There’s a wonderful book entitled Crucial Conversations. If you found this theme intriguing, I would highly recommend reading it.

Stay agile my friends,
Bob.

Don’t forget to leave your comments below.