Author: Robert Galen

Agile Teams – The Weakest Link

I was talking to a fellow coach the other day and she was venting a bit about one of her teams and their Product Owner.

Bob, she said, I have an outstanding agile team. We’ve been working within our product organization for nearly two years. In that time, we’ve delivered an application upgrade that everyone has viewed as simply fantastic. Now we’re on to building a critical piece of new system functionality for them—so we’ve earned everyone’s confidence in our abilities.

We work hard, we work well together, we deliver high-quality working code, and we have fun doing it.

Ok, I asked. That sounds like a fantastic situation. To be honest, I’m a bit jealous.

Well, she said, don’t be. I’m incredibly frustrated with our Product Owner. He:

  • always seems to be ill-prepared with stories;
  • hardly ever emphasizes or attends Backlog Refinement meetings;
  • brings new or under-prepared stories directly to Sprint Planning;
  • in general, is always undecided and under-prepared.

We’ve gotten into the habit of working around him and still delivering sufficient customer value at each sprint review…but it’s incredibly challenging. The team has to “bend over backwards” to develop a workable view of our backlog.

And he constantly changes his mind, which is the cherry on top – increasing our rework and driving the entire team crazy.

Wow. I guess everything isn’t so great. Then I asked, have you made him aware of the behavior and your (and the team’s) concerns?

Of course, she said. This comes up in virtually every retrospective. He listens, agrees, and then promises to improve. There might be a slight change for a sprint or two, but he always falls back into the same old patterns.

I’ve even tried to escalate it (carefully) to his supervisor. And even she agrees that it’s a problem, but again, nothing seems to change.

I’m at a loss. What do you think I should do?

Taking a step back

On the surface, this is sort of an intractable situation. It reminds me that every team, agile teams included, is only as good as its “weakest link”. Sure, teams can collaborate and work around internal challenges such as this. And in some cases that’s the right thing to do. But when the behavior is affecting the entire team’s morale and performance, and when there is no consistent improvement, then I say something needs to be done.

What do you think? And what are the options here?

  1. It seems clear to me that the Product Owner’s manager is dropping the ball. There is a performance issue and (apparently) they’re ignoring it. One obvious and immediate action would be for his manager to begin taking it seriously and start actively coaching the PO, with a possible performance improvement plan as an outcome.
  2. Could the team “vote the Product Owner off the island”? I’ve seen this happen in extreme cases with team members. It’s a very messy situation and somewhat of a last resort. However, if the team really is being negatively affected, then it might be the most congruent thing to do.
  3. Under the banner of “good team play”, should they just persevere along and keep dealing with it? That’s what this team has been doing. Sure, they’re talking about it. But it’s been happening for nearly TWO YEARS. What makes anyone think it will improve without a major intervention?

Overreact or Underreact

I normally see two reactions to similar situations in my agile coaching travels. Either the team grows impatient too quickly and overreacts, moving to remove someone from the team, often without ever having a direct conversation with the person. It’s just easier to complain behind their back and then look to eject them.

Or

Like the team in this example, they tolerate the underperforming behavior in perpetuity, working around it within the team. Sure, they bring it up occasionally, but the team essentially accepts it.

Is there somewhere in “the middle”?

I sure hope so!

You might be asking yourself, what counsel did I provide to my coaching colleague? Well, I’ll share it with you. But I’m still thinking about the situation and whether my advice was sound.

I told her that the team was “working around” an issue and that they weren’t transparent enough about the effects of the underperformance. You see, they were working overtime to cover for the Product Owner, and from a results perspective, the team was perceived to be high performing.

I basically said to stop doing that.

My advice was to allow the Product Owner to fail more often and to make those failures more transparent.

For example, the Product Owner was essentially bringing empty stories to the team for execution. The team would then meet with stakeholders and analysts across the organization to fill in the blanks. This would take tremendous time that wasn’t theirs to offer. In the end, they got very good at writing stories FOR the Product Owner.

I told her to stop that, to put Readiness Criteria in place for stories entering the team’s sprints, and to not accept “under-cooked” stories. If the Product Backlog became empty as a result, then so be it. Raise that as an impediment and wait until the Product Owner did their job and worked with the team to deliver a proper backlog.
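As an illustration, Readiness Criteria like those described above can be expressed as a simple checklist applied before a story enters the sprint. This is a minimal sketch; the specific criteria below are my own examples, not a canonical Definition of Ready:

```python
# Illustrative story-readiness checklist. These criteria are
# example assumptions, not a canonical Definition of Ready.
READINESS_CRITERIA = [
    "has a clear user-facing description",
    "has acceptance criteria",
    "was discussed in backlog refinement",
    "is estimated by the team",
    "fits within a single sprint",
]

def is_ready(story):
    """A story enters the sprint only when every criterion is met."""
    return all(story.get(criterion, False) for criterion in READINESS_CRITERIA)

# An "under-cooked" story fails the check and stays in refinement.
story = {"has a clear user-facing description": True}
print(is_ready(story))  # False
```

The point isn’t the code, of course, but that the gate is explicit and binary: either the story meets the agreed criteria or it waits for the Product Owner.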

I was careful not to imply that the team should sabotage or withhold the help a healthy team would offer the Product Owner. But I was firm that they needed to stop doing the PO’s job for them AND allow the dysfunction, underperformance, and honest results to become transparent to all.

I suggested that IF they did this well, then the organization would reach out to the entire team to discover “what was wrong” and to help them “fix it”.

Wrapping Up

Now at this point, we broke the conversation off and went on our way. But I could tell that my colleague was very uncomfortable with my advice. To be honest, so was I.

But I keep thinking that it was a congruent suggestion for handling a “weak link” that is affecting team health AND not improving. That if teams have the option of helping / masking the problem vs. making it transparent, they might want to reflect on whether their masking IS part of the problem.

That’s what I intended here, but I do agree that it’s a fine line to walk. My biggest fear is that teams read this and start throwing struggling team members under the bus. That’s certainly not my intention.

I wonder what you think of my story and my advice? And if you’ve had a similar situation, what have you done?

Stay agile my friends,
Bob.

Don’t forget to leave your comments below.

2 Dozen Weird Agile Metrics Ideas

In a recent recording of our Meta-Cast, Josh and I went through a couple of questions from the audience. One of the questions surrounded what to measure for individual developers. To be honest, I was taken aback by the question.

You see, I’ve been preaching for years that when you move to agile metrics, you want to do four things:

(One) Move from individual team member metrics to a more holistic, whole-team view.

(Two) Move from measuring functional teams (for example, test team progress), again towards a holistic, executing agile team view.

(Three) That you’ll want to collect fewer metrics and focus them in four distinct areas of interest. They are:

  • Predictability
  • Value
  • Quality
  • Team Health

(And Four) That you want the metrics to be more output-based or results-based than input-based. For example, instead of being interested in “planned velocity”, be more interested in “resulting velocity” from your teams.

An addendum here is that trending is much more important than any specific data point. Then recently I ran into the following thoughts around agile metrics.

Outcome Metrics

Gabrielle Benefield wrote about outcome as being what truly matters in this blog post. She makes the case for “outcome” metrics that focus somewhat differently than traditional views do. Here’s an excerpt:

Throughput tells us how fast we go, output tells us how much we delivered. But why are we doing any of this work? What are the problems we are trying to solve? What are the business results we desire?

By measuring throughputs and outputs, we are incentivising people to deliver more of them. More creates more waste. While some people may believe if they throw enough features out it may increase the probability of hitting their target, it feels eerily similar to the belief that if you put enough monkeys in a room typing for long enough they will produce the complete works of Shakespeare. This is not only misguided, but is wasteful for our products and our planet. Yes, you heard it right. Think of the amount of resources being used for all of those wasted features, let alone all the power they end up consuming as they linger around on servers for years. When the glaciers melt away, you know who to blame.

The one thing very few organisations appear to be measuring are Outcomes. Outcomes are the value we create. They range from wanting to increase revenue for a company, to improving the usability and learnability of a product.
In spite of all of this, we continue to measure what is easy rather than what is important. I think the reason is that it is not as simple as measuring throughput or output. You have to think, understand and test relentlessly.

Type of Metric

Based on her article and my own thinking, I’ll try to categorize each of the metrics as either:

Input – something defined or measured at the “beginning” of the pipeline. A typical measure might be planned values, for example, planned test cases.

Output – something that results from the pipeline being executed. Velocity is a good example of an output metric.

Outcome – and to Gabrielle’s point, these are metrics focusing on experimentation and customer testing. For example, measuring actual customer satisfaction with a new release’s feature set by leveraging a survey.

Metrics Brainstorming List

Most of these are wild and crazy ideas. However, I thought it might be useful to share some of them around what to measure in agile teams.

What might be even more interesting is what I haven’t listed. For example, individual metrics (test cases run per tester per day) or functional metrics (test case run coverage per day against plan).

  1. Features removed from the Product Backlog—over releases, quarterly, or perhaps as a percentage? Features de-scoped or simplified in the Backlog; same as above. These two are trying to show if we’re as willing to subtract as we are to add. (Value, Output)
  2. Stop the line events for a release or across an organization; could include Continuous Integration / Continuous Deployment stops and/or other “process” stops. Clearly a “quality centric” sort of look into the organization. (Quality, Outcome)
  3. Root Cause discovery sessions conducted per team – per Release; could view types of corrective actions and correlate patterns across teams? Again a quality-centric metric—are we truly focusing on continuous improvement? We could also consider the sheer number of issues and their trending over time. (Quality, Outcome)
  4. Number of Retrospective Items resolved per team. I’ve often wanted a way to look at the retrospective and still maintain the integrity/confidentiality of it for the team. I wonder if this would work? (Quality, Output)
  5. Number of stories delivered with zero bugs – per Sprint, per Release? And by zero bugs, I’m implying newly introduced bugs. Is the team holding to their agile quality values? (Quality, Output)
  6. Number of User Stories that were reworked based on PO / Customer review; perhaps even maintain a “Cost of Rework” factor. There is a range here where this is very healthy. But it can also become repetitive and unhealthy. How would we determine that? (Value, Output)
  7. The percentage of technical debt addressed; targets > 20%. You’d need to be clear about what fell into the “technical debt” bucket. (Value or Quality, Outcome)
  8. Trending of velocity per team, perhaps a rolling average in story points. What’s interesting here is release-level predictability per team in points. Avoid aggregating results across teams (organization-level) and beware of the effect of team turbulence. NO individual team member velocity measurement! (Predictability, Output)
  9. Time-stamp types of work as they move through your teams—capturing throughput per story size. Then you’ll have a range and average/mean for story throughput. You can use this to forecast your release-level commitments. Avoid aggregating results across teams (organization-level) and beware of the effect of team turbulence. (Predictability, Output)
  10. Delivery predictability per sized user story; average variance across teams and is the trending improving? This is the non-specific variant of the above—simply looking at raw story delivery variance. (Predictability, Output)
  11. The percentage of test automation (including UI level, component / middle tier, and Unit level) coverage. Or instead of coverage, you could show the ratio of planned vs. running automation. Trending here would be the most interesting bit. (Quality, Output)
  12. The percentage of each sprint spent on automation investments. The percentage of each sprint spent on Continuous Integration and Continuous Deployment investments. Again, trending over time would be the most interesting. (Quality, Output)
  13. Team agile health survey data; monitor trending & improvements. Could also include a team “Happiness Factor”. (Team Health, Output)
  14. Training budget per agile team member. Simple. Perhaps look at year-over-year trending. (Team Health, Output)
  15. Agile Maturity Survey – there are a wide variety of these sorts of tools developed. I lean towards the Agile Journey Index developed by Bill Krebs. The AJI strikes a very nice balance between measuring but not being too heavy handed in reacting to team-based maturity and evolution. (Team Health, Output & Outcomes)
  16. Customer usage of delivered features (actual usage). Somehow instrument your application or product so that usage can be collected and analyzed. For example, Google Analytics for web pages. Establish Product-driven Business Case values and then actual values. The delta might be interesting. (Value, Outcome)
  17. Customer surveys to identify value delivered (actual, not wishful). Perhaps use some sort of Net Promoter Score? (Value, Outcome)
  18. Dedicated Scrum Masters vs. Multi-tasking Scrum Masters (Dual or more roles). Create some sort of ratio that represents your investment in the core roles for agile transformation. Could extend this to coaches and Product Owners as well. (Team Health, Output)
  19. Number of failures due to teams experimenting, taking risks, stretching, etc. I might be looking for something greater than zero. Could also make this cross-team or organizational. (Value or Team Health, Output)
  20. Measure organizational commitment levels to date-driven targets/expectations. This is an indicator of the level of planning within your organization and who is “signing up” for the credibility and feasibility of the plan. For any delivered release, score:
    1. 1 – senior leadership only
    2. 5 – plus directors & managers
    3. 10 – plus team leads
    4. 15 – plus the whole team
    5. 25 – integrated across ALL contributing teams
    6. 50 – EVERYONE (contributing to the release) thumbs up
    (Predictability, Output)
  21. New test cases added per Release or per Quarter. Retired test cases removed per Release or per Quarter. Both focus on the effort to keep our testing relevant in real time. (Quality, Outcome)
  22. Stakeholder & Senior Leadership attendance at Team Sprint Reviews. Could measure distinct feedback from this group, rework driven by feedback, and pure attendance. Point is, how often are the “right people” in the room for the review? And were they engaged? (Value, Outcome)
  23. Team Churn – measuring internal and external changes (impacts) to a team sprint over sprint. At iContact we had a formula for this that created an index of sorts, though I simply can’t recall it. But it’s a powerful metric because it essentially measures “waste” that is slowing the team down. OR you could correlate it with velocity. (Predictability, Output)
  24. Backlog grooming and look-ahead is an incredibly important team activity. There are also milestones where you want to have groomed the work for the next release – within the current release. Track backlog grooming frequency and the pace of story analysis through it. I often use an egg timer to reinforce “crisp” grooming of stories at a 5-8 minute pace per story. (Value, Output)
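To make item 8 above a bit more concrete, here’s a minimal sketch of trending velocity as a rolling average per team. The function name, window size, and sample numbers are my own illustrative choices:

```python
def rolling_velocity(velocities, window=3):
    """Rolling average of a team's sprint velocities (story points).

    Emits one value per sprint once `window` sprints of history
    exist; the trend matters more than any single data point.
    """
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(velocities[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(velocities))
    ]

# One team's last six sprints, in story points.
team_velocity = [21, 25, 19, 24, 26, 23]
print(rolling_velocity(team_velocity))
```

Note that this stays at the whole-team level, consistent with the caution above against measuring individual velocity.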

Wrapping Up

Remember the title of the article. These are potentially weird, outrageous, or downright silly metrics ideas. I’m not sure if any of them deserve serious consideration in your agile projects, teams, and work.

Now I wouldn’t have written them down and shared them if I didn’t think they had “potential”. Still, I’d like to hear back from you on their viability as high quality, agile-centric measures.

  • Which ones are viable or do you like?
  • Which ones aren’t?
  • Do you have any “pet” metrics that you’d like to add to my list? Please do so.
  • What do you think about the notion of: input, output, and outcome metrics? Does it matter?
  • And if you had to “boil it down” to 5 simple metrics to measure High-Performance agile teams, what would you look at?

I’m looking for any and all feedback, so please give it to me.

Stay agile my friends,
Bob.


Playing it Safe

I’m wondering if you think this post will be about the Scaled Agile Framework or SAFe? Well, it’s not. Before there was SAFe, there was good old-fashioned “safety” from an agile team perspective. And that’s where I want to go in this piece. So just a warning that no scaling will be discussed.

Within Retrospectives

I often advise teams and organizations that are contemplating “going Agile” to consider safety as a factor when running their retrospectives. I share the “Galen-rule” around not inviting or having “managers” in the teams’ retrospective.

This usually gets the attention of the managers in the room and we have a lively debate around why I’m recommending this. They normally get quite defensive and start explaining how effective they’ve been in driving results from the retrospectives. How the team produces plans and action logs from each retrospective and how they hold each other accountable to those results.

I try to explain that it’s not about the manager or their skill or their style—specifically that they’re not doing anything wrong. Instead I explain it’s about creating a safe environment for the team. In his seminal work on Retrospectives, Norm Kerth spoke about creating the conditions for an effective retrospective. One of the key conditions was the notion of safety.

  • Does the team feel safe in expressing themselves?
  • Do they feel that there will be no ramifications to exposing team and personal failures or challenges?
  • And do they feel there will be a sense of confidentiality within the team?

Safety is the area that management attendance will often negatively influence and why I ask managers (often including myself) not to attend team retrospectives. I want each team to feel safe in exploring their ideas for team accountability and continuous improvement.

Who is the “Team”?

In the context of retrospectives, it’s literally anyone doing the work. In the case of Scrum, it includes the development team, the Scrum Master and the Product Owner. I usually include part-time folks as well, for example a Database Architect who helped the team with some particularly challenging back-end work.

How to determine if a team feels safe?

Well, one way is to simply ask them. Pose the question of whether your attendance would influence the level of candor and discussion in the team’s retrospective. If the answer is yes, then excuse yourself.

But often even this question is difficult for the team to answer forthrightly.

Another thing you can do is poll the team on its level of safety. The planning poker technique of voting can be helpful here. You can assign the following values if the manager (Bob) is in the retrospective:

  1. Team feels totally unsafe with Bob, if Bob comes…cancel the retrospective.
  2. Team feels very unsafe with Bob around. Perhaps he can come in at the end for a “synopsis”.
  3. Team feels moderately safe with Bob around. However, it WILL influence the level of discussion!
  4. Team feels quite safe with Bob around. However, they’d like him to quietly listen and not participate directly.
  5. Team feels totally safe with Bob around. In fact, they’d rather have him attend.

Then see how the overall team “stacks up”. A variation on this approach is to have team members vote anonymously based on the above scale. Just have them fill out their views on 3×5 cards and see where things stand.
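A tally of such an anonymous vote can be sketched in a few lines. The summary fields and the “excuse yourself below 3” threshold are my own illustrative assumptions, built on the 1–5 scale above:

```python
from statistics import mean

def safety_poll(votes):
    """Summarize anonymous 1-5 safety votes from a team.

    A single low vote matters more than a good average: if anyone
    voted 1 or 2, the manager should excuse themselves.
    """
    if not votes or not all(1 <= v <= 5 for v in votes):
        raise ValueError("each vote must be on the 1-5 scale")
    return {
        "average": mean(votes),
        "lowest": min(votes),
        "manager_should_attend": min(votes) >= 3,  # illustrative threshold
    }

# Five anonymous 3x5 cards from the team.
print(safety_poll([4, 5, 3, 4, 2]))
```

Keying the decision off the lowest vote, not the average, reflects the point of the exercise: one person feeling unsafe is enough to change the retrospective.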

The key point here is to assess the level of impact that “outsiders” will have on the teams’ retrospective and to try and create as safe an environment as possible.

A New Level of Safety?

Moving beyond retrospective safety, I just happened upon a blog post by Joshua Kerievsky where he introduces the notion of cultural safety within agile or technical contexts. Here’s the introduction to the post:

Want to know what decades in the software field has taught me?

Protecting people is the most important thing we can do, because it frees people to take risks and unlocks their potential.

I call this Anzeneering, a new word derived from anzen (meaning safety in Japanese) and engineering.

Every day, our time, money, information, reputation, relationships and health are vulnerable.

Anzeneers protect people by establishing anzen in everything from relationships to workspaces, codebases to processes, products to services.

Anzeneers consider everyone in the software ecosystem, whether they use, make, market, buy, sell or fund software.

Anzeneers approach failure as an opportunity to introduce more anzen into their culture, practices, and tools.

By making anzen their single driving value, anzeneers actively discover hazards, establish clear anzen priorities and make effective anzen decisions.

At first glance, I didn’t understand the point that Joshua was making, or better put, I didn’t think it was that important. But as is my way, I thought on it for a few days and I started to connect the dots for myself. Based on my overall agile experience and organizational observations, I also think safety, as a focus, can be a cultural change-agent. Let me explore some of my connections to safety.

Company Cultures

I’ve run into quite a few clients over the past few years, and their reactions are consistent when I discuss certain aspects of agility.

For example, I often talk about failure. Failure in estimating or failure in understanding some forms of technical risk is simply part of life in software projects. Failure is part of what we do and how we learn, at least from my point of view.

But I normally get shocked reactions when I even mention the F-word.

A Story

I often talk about how I want teams to take risks. I’ll share one of my favorite stories on the matter. When I was the head agile coach at iContact I remember having a chat with our Scrum Masters. We had completed approximately 100-120 sprints without a “failure” and I asked them to try and influence their teams to fail more often.

I’ll never forget the reaction of Maureen, one of our more experienced Scrum Masters. She said:

Maureen: Bob, let me get this right. We’ve had an extraordinary run of solid sprints across our teams.
Bob: I know.
Maureen: But that’s not “good enough”. You want us to push the teams a bit more and try to fail?
Bob: Yes, I do.
Bob: Although, I don’t know if “push” is the right word, perhaps more influence them to try new things…to stretch…to take more risks.
Maureen: Clearly we’re doing well. What are you concerned about?
Bob: I know we are. And I’m proud of our journey. But I’m concerned about complacency. I’m concerned that the teams may not trust us enough to truly take risk. I simply want you to encourage that.
Maureen: Ok, we can try…

Now that I reflect on that conversation and moment, I realize that I was trying to increase the safety in our culture. I was encouraging failure. But not simply encouraging the failure. I was also testing our reaction to it, to see if we would walk our talk as a leadership team. Had we created a culture and environment where agile principles flourished and where the teams felt safe?

It turned out that the Scrum Masters were effective AND that our culture was safe. But the exercise was good for all of us and we learned and grew as a leadership team and organization. Now beyond failure, what are some aspects of safety in agile contexts?

Safety Checking

From a cultural perspective some of the questions that come to mind include:

  • Is it safe to fail?
  • Is it safe to say I don’t know?
  • Is it safe to explore, to learn, to try new ideas?
  • Is it safe to pushback on a management idea?
  • Is it safe to refactor code?
  • Is it safe not to work overtime? To truly strive for a work-life balance?
  • Is it safe to take a day off right before a release? Or to go to the doctor?
  • Is it safe to implement a story correctly regardless of the time it takes?
  • Is it safe to pair?
  • Is it safe to challenge the value of a User Story from your Product Owner?
  • Is it safe to say no?
  • Is it safe to ask someone to tell you the WHY behind the project you’re working on?
  • Is it safe to explore that WHY and to challenge its inherent assumptions?
  • Is it safe to work on technical debt?
  • Is it safe to ask for help?
  • Is it safe to “swarm” around the team’s work?

Is it safe, truly safe, to do all of these within the context of your “real world” role?

And remember, safety isn’t just an upward, towards “management”, factor. I like to think of it as a 360-degree attribute that is viewed from four perspectives:

  1. Is it safe within your team?
  2. Is it safe within your management?
  3. Is it safe within your organization?
  4. Is it safe within your culture?

So these assessment-like questions would have to be viewed through these various lenses to get a true feel for your level of overall safety.

Wrapping Up

I think Joshua and the Industrial Logic folks are onto something with their focus on Anzeneering. However, I would seriously reconsider re-branding the name if I were them.

In the last section I tried to pull together a short list of safety-checking questions. I know it’s probably incomplete and can be improved. I’m also thinking of pulling together a survey, much as I did on the topic of failure a few years ago, to capture broader feedback.

So, could you please help me with developing more questions for safety checks? Just send them to me in email, [email protected], or comment on this post. Either way, I’m intrigued by what Joshua and his team are doing and I’d like to develop a tool for checking, because I don’t think we can improve our safety unless we know specifically where it’s lacking.

Software is a very dangerous business, so stay “safe” out there. And stay agile my friends,
Bob.



Roles and Responsibilities – Do we need such things for agile teams?

If you’ve followed my blogging at all, you know that I’ve worked for several companies in the last 6-8 years that have colored my thinking as an agile coach. Sure, I’ve coached a wide variety of other organizations, but there’s nothing like being an employee of a company and assuming the role of technical leader and agile coach to get your attention each day.

One of those companies was iContact (now Vocus), which develops an email marketing SaaS platform. This story comes from my time spent there working with some wonderful development teams.

Conflict and Confusion

We had a fairly well established Scrum instance at iContact. And the teams had generally received entry-level Scrum training at one time or another, so everyone was relatively on the same page. One of the things I noticed early on though was quite a bit of what I’ll call – role confusion.

It often happened with the Product Owners. For example, they would harangue the teams about their estimates being too high and the sprints not producing enough work. Often, the User Stories they were writing included design details and internal construction guidance along with the functional requirements.

This was causing the teams angst, because they felt the Product Owners were stepping into their area of responsibility. And, they were.

But the problem didn’t solely rest with the Product Owners. In fact, it was much more pervasive from my perspective. Literally everyone seemed to be struggling to figure out his or her ‘place’ within our agile transformation. And it was leading to a great deal of confusion and some unhealthy conflict. It was also undermining our capacity and results—in that we were wasting time posturing instead of driving forward as teams.
The Root Cause

After observing this for a short while, it struck me that perhaps our initial Scrum training spent insufficient time or focus on clearly defining and establishing the core values, principles, and roles within agile Scrum teams. You could clearly see it in the behaviors and conflicts that were occurring.

I felt it might also have been too individually focused, with too little emphasis at the organizational and team levels. When I spoke to several of our Scrum Masters, this was exactly the case. They had sort of “jumped into” sprinting without establishing some basic team-based agreements and guidelines.

And by the way, this didn’t come as a surprise. I think many teams jump in too soon towards execution without establishing a baseline of understandings and agreements for how they’ll “behave”.
The Way Forward

I pulled together a presentation for the teams that tried to explain the expectations of what I thought were good agile practices, tactics, and mindset. Not only did I explain the roles of the Scrum Master and Product Owner, but I also explained the role of the cross-functional team and of managers/leaders within our agile context.

So I tried to explore all aspects of the organizational dynamics.

I also tried to make the point that the roles are situational in nature and quite nuanced. For example, I’ve always felt that leadership forms across the entire agile team. I believe the Scrum Master and Product Owner play an important role as leaders. But I also want the entire team to take on situational leadership roles as each sprint and release unfolds.

Beyond these roles and responsibilities, I also explored some of the following –

Principles & Values

Looking at core agile values – I went back to reaffirm the Agile Manifesto and the principles behind it. Too often we look at these as something ethereal. Yes, they’re nice, but do they really apply in the “real world”?

Well, the answer is – yes, they do. So we explored these as an organization and I/we tried to make them more real and more relevant within our organizational context. And as a leadership team, we also took the time to “go on record” as supporting the principles. And that included supporting them when the going got tough in our projects.
Constraints or “Guardrails”

One of the things that often amazes me when coaching some agile teams is how frequently they interpret agile methods as a “free for all” to do whatever the heck they want. I sometimes liken it to The Inmates Running the Asylum.

But then I think about the “best” agile teams I’ve encountered in my career. The most mature, the ones with the greatest quality and efficiency, the ones who ultimately delighted their clients. And it occurs to me that they didn’t do it by doing whatever they wanted or whatever was easiest.

They did it by being:

  • Very principled, and consistently holding to those principles;
  • Very rigorous in applying their technical practices;
  • Putting the team before the individual and swarming around their work;
  • Having very specific definitions of done and always adhering to them;
  • And putting the customer first, truly looking to deliver what the customer needed versus what they asked for.

In short, I think these mature and effective agile teams were self-directed BUT incredibly disciplined at the same time.

I would argue that a finely grained definition of done goes a long way toward establishing team norms and behaviors. I would encourage you to explore the multiple layers and nuance around this.
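As a purely illustrative sketch of what "multi-layered" can mean in practice, a team might capture its definition of done as a simple checklist structure per layer, so that gaps are visible at a glance. The layer names and criteria below are hypothetical examples, not a prescription from this article:

```python
# Hypothetical sketch: a multi-layered Definition of Done captured as data.
# Layers and criteria are illustrative only; every team defines its own.
DEFINITION_OF_DONE = {
    "story": [
        "acceptance criteria demonstrated to the Product Owner",
        "unit tests written and passing",
        "code peer-reviewed",
    ],
    "sprint": [
        "regression suite green",
        "sprint review demo prepared",
    ],
    "release": [
        "performance targets verified",
        "release notes written",
    ],
}


def unmet_criteria(layer: str, completed: set) -> list:
    """Return the criteria in a layer the team has not yet checked off."""
    return [c for c in DEFINITION_OF_DONE[layer] if c not in completed]


# Usage: which story-level criteria still remain?
remaining = unmet_criteria("story", {"unit tests written and passing"})
```

Even a lightweight structure like this makes the team's norms explicit and reviewable, which is the real point of a layered definition of done.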

And, By The Way, You’re Never Done

So, we went through this roles-and-responsibilities level-setting exercise at iContact, and at the end I thought: great, now we can put this behind us. But I was wrong.

After about 3-4 months, we noticed some of the same patterns emerging. Then it struck me; the organization is growing and evolving. We’re getting new folks in, some are leaving, teams are re-forming and roles are changing, and even our leadership team is evolving.

Point being—it’s a moving target.

We then realized that we had to “re-do” our work iteratively, about every 4 months or so, to re-initialize our understanding across our teams. It was sort of funny. I would “dust off” the same set of PowerPoint slides, updating them as appropriate. Then we would go through the sessions again for the entire organization.

To some it was redundant. To others it was brand new. But no session was ever exactly the same, and each one explored new ground in some way. So be prepared for refreshes.

Wrapping Up

I guess the title is misleading, in that I no longer believe it’s a question: do we need clarity around roles and responsibilities in agile teams?

I hope the answer is a clear and firm – Yes.

As a Scrum Master and Coach, I think you owe your teams this clarity. Yet even the best of us often forget how important “starting up well” is. One of our more senior Scrum Masters at iContact reminded me of this when she joined our team. Her name was Maureen Green.

Maureen came in and found that her team hadn’t established team norms and agreements, that they hadn’t made our organizational definition of done their own, and that they were waffling on many of the principles we aspired to hold firm to. Instead of just charging forward, she “paused” her team and established these sorts of agreements and clarity first.

Not only that, she served as a role model for our other Scrum Masters to tighten things up in this important area.

If you’re struggling with your teams’ execution and results, I would encourage you to consider “starting over” and redefining your agreements and principles as a way to accelerate forward.

Stay agile my friends,
Bob.

Don’t forget to leave your comments below.

Bandwagons – The Good and the Bad?

I remember years ago, Microsoft was considered the benchmark of all things leading edge when it came to software development. They seemed to be the “poster child” for how to build software organizations and software products.

For example, they had a multi-tiered strategy for a code freeze model and everyone seemed to be copying it. And today their Software Developer in Test (SDET) model is also incredibly popular. There were many books written about their strategies and approaches, and everyone seemed to want to “be like Microsoft”.

What’s interesting to me is the perspective that time gives you. If you’ve been in technology long enough, you see recurring themes unfold. The benchmark company that everyone jumps on the bandwagon to copy today becomes a passing thought in ten years. Microsoft clearly illustrates this curve: first the “darling” of what to do, then sliding into status quo or even an anti-pattern. Sure, Microsoft is still a viable company and sometimes a role model, but it’s no longer seen as the one to emulate.

Google is going through a similar transition. At one point, their 20% time was the talk of every agile team. And their product innovation and creativity seemed to be boundless. Sure, they are still way cool and successful. But now, they’ve rescinded the 20% notion and are slowly evolving away from being the “sexy new kid” to copy.

Some Examples for Today

Today there’s a “new crop” of companies that are becoming the new darlings. Much of the hoopla is associated with Agile and Lean Startup sorts of strategies.

What are some of the examples today?

And I could probably find many more examples, but I trust you get the idea.

But beyond the hype, youthful enthusiasm, and excitement, are these companies true role models for every context? Meaning: if you simply copy their ideas, will it make your company successful?

Bandwagon Cycle Time

I’m beginning to think that companies go through a bandwagon phase when they’re really successful in their space. They’re written up in articles and are invited to share their “secrets” on news shows. Or their founders write a popular book and go on the review circuit. Everyone starts talking about their tactics and approaches, as if something magical has been discovered.

Then, unfortunately, everyone, it seems, begins to copy their tactics: blindly, without reservation, and without thinking. But I’m also convinced that it doesn’t last. Two critical things happen:

  1. Some new company with innovative, sexy new ideas surges onto the scene and replaces the older, more staid companies on the bandwagon; and
  2. The ideas start failing often enough in other contexts that word “gets out” that the approaches aren’t the silver bullets everyone thought they were.

But things do move on and the approaches do change.

Now, does this mean the companies fail? Hardly. In fact (and I’ll use IDEO as an example) they survive and thrive. The ideas have great merit, but they rarely work as well as they did in the original incubator.

Context Matters

The real point is not around ignoring these examples. Nor is it around hoping that they quickly fail or get replaced on the bandwagon.

No, the real point or question is: should we relentlessly copy them?

There’s a school of thinking in the software testing community called context-driven testing. In this school, the idea is that there are NO best practices. In fact, practitioners often get quite rude with folks who share their tactics under the banner of a “best practice”.

They eschew things like document templates and checklists that organizations dutifully fill in without actually thinking about organizational, project, customer, and team context.

Their point is that there are no best practices, only good practices applied in a specific context. I will add the notion of “by THINKING teams” to the end of that clause.

And that is my point regarding folks who take the same blind-copying approach with these new-idea companies.

There are no silver bullet approaches or ideas coming out of today’s marvelous new crop of innovative companies. Instead, there are only ideas that may work when applied in specific contexts by thinking teams.

Wrapping up

Let me be clear. I believe some wonderful thinking, new strategies, and approaches are coming out of the companies I mention in this article. And from many more that I didn’t. We are continuously evolving in the technology space, not only in direct technology, but also in the structures we leverage to build them. And the agile and lean approaches are truly exciting. Indeed, we’re bringing the people back into the mix.

But I do worry that jumping on the bandwagon can be incredibly dangerous, in that we follow shiny, bright objects without thinking. We do it because we consider them a silver-bullet solution for every context. And that’s the rub as far as I’m concerned: they aren’t.

Every new idea has to be applied and adjusted for the context you’re looking to use it in. And, I would argue, there are contexts where an approach is inappropriate or might do more harm than good. It’s in these contexts that jumping on the bandwagon is most dangerous. Too bad there isn’t a warning sign on the bandwagons.

Stay agile and consider your contexts my friends,
Bob.

Don’t forget to leave your comments below.