I try to explain that it’s not about the manager or their skill or their style—specifically that they’re not doing anything wrong. Instead I explain it’s about creating a safe environment for the team. In his seminal work on Retrospectives, Norm Kerth spoke about creating the conditions for an effective retrospective. One of the key conditions was the notion of safety.
- Does the team feel safe in expressing themselves?
- Do they feel that there will be no ramifications to exposing team and personal failures or challenges?
- And do they feel there will be a sense of confidentiality within the team?
Safety is the area that management attendance most often undermines, and it's why I ask managers (often including myself) not to attend team retrospectives. I want each team to feel safe in exploring their ideas for team accountability and continuous improvement.
Who is the “Team”?
In the context of retrospectives, it’s literally anyone doing the work. In the case of Scrum, it includes the development team, the Scrum Master and the Product Owner. I usually include part-time folks as well, for example a Database Architect who helped the team with some particularly challenging back-end work.
How do you determine whether a team feels safe?
Well, one way is to simply ask them. Ask whether your attendance would influence the level of candor and discussion in the team's retrospective. If the answer is yes, then excuse yourself.
But often even this question is difficult for the team to answer forthrightly.
Another thing you can do is poll the team on its level of safety. The planning poker technique of voting can be helpful here. If the manager (Bob) wants to attend the retrospective, you can assign the following values:
- 1: Team feels totally unsafe with Bob. If Bob comes…cancel the retrospective.
- 2: Team feels very unsafe with Bob around. Perhaps he can come in at the end for a "synopsis".
- 3: Team feels moderately safe with Bob around. However, it WILL influence the level of discussion!
- 4: Team feels quite safe with Bob around. However, they'd like him to quietly listen and not participate directly.
- 5: Team feels totally safe with Bob around. In fact, they'd rather have him attend.
Then see how the overall team “stacks up”. A variation on this approach is to have team members vote anonymously based on the above scale. Just have them fill out their views on 3x5 cards and see where things stand.
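Purely as an illustration, the anonymous 3x5-card tally could be sketched in a few lines of Python, assuming the five responses are mapped to scores from 1 ("totally unsafe") to 5 ("totally safe"); the function name here is my own invention, not part of any formal technique:

```python
from collections import Counter

def tally_safety_poll(votes):
    """Summarize anonymous safety votes on an assumed 1 (unsafe) to 5 (safe) scale."""
    counts = Counter(votes)
    average = sum(votes) / len(votes)
    # The lowest vote matters: even one person feeling unsafe changes the dynamic.
    lowest = min(votes)
    return {"counts": dict(counts), "average": average, "lowest": lowest}

result = tally_safety_poll([4, 5, 3, 4, 2])
print(result["average"])  # 3.6
print(result["lowest"])   # 2 -> at least one person feels fairly unsafe
```

Note that an average alone can hide a problem; a single low vote is a signal worth acting on even when the mean looks healthy.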
The key point here is to assess the level of impact that "outsiders" will have on the team's retrospective and to try to create as safe an environment as possible.
A New Level of Safety?
Moving beyond retrospective safety, I just happened upon a blog post by Joshua Kerievsky where he introduces the notion of cultural safety within agile or technical contexts. Here’s the introduction to the post:
Want to know what decades in the software field has taught me?
Protecting people is the most important thing we can do, because it frees people to take risks and unlocks their potential.
I call this Anzeneering, a new word derived from anzen (meaning safety in Japanese) and engineering.
Every day, our time, money, information, reputation, relationships and health are vulnerable.
Anzeneers protect people by establishing anzen in everything from relationships to workspaces, codebases to processes, products to services.
Anzeneers consider everyone in the software ecosystem, whether they use, make, market, buy, sell or fund software.
Anzeneers approach failure as an opportunity to introduce more anzen into their culture, practices, and tools.
By making anzen their single driving value, anzeneers actively discover hazards, establish clear anzen priorities and make effective anzen decisions.
At first glance, I didn’t understand the point Joshua was making, or, better put, I didn’t think it was that important. But as is my way, I thought on it for a few days and started to connect the dots for myself.
Based on my overall agile experience and organizational observations, I also think safety, as a focus, can be a cultural change-agent. Let me explore some of my connections to safety.
I’ve run into quite a few clients over the past few years, and their reactions are consistent when I discuss certain aspects of agility.
For example, I often talk about failure. Failure in estimating, or failure in understanding some forms of technical risk, is simply part of life in software projects. At least from my point of view, failure is part of what we do and part of how we learn.
But I normally get shocked reactions when I even mention the F-word.
I often talk about how I want teams to take risks. I’ll share one of my favorite stories on the matter. When I was the head agile coach at iContact I remember having a chat with our Scrum Masters. We had completed approximately 100-120 sprints without a “failure” and I asked them to try and influence their teams to fail more often.
I’ll never forget the reaction of Maureen, one of our more experienced Scrum Masters. She said:
Maureen: Bob, let me get this right. We’ve had an extraordinary run of solid sprints across our teams.
Bob: I know.
Maureen: But that’s not “good enough”. You want us to push the teams a bit more and try to fail?
Bob: Yes, I do.
Bob: Although, I don’t know if “push” is the right word, perhaps more influence them to try new things…to stretch…to take more risks.
Maureen: Clearly we’re doing well. What are you concerned about?
Bob: I know we are. And I’m proud of our journey. But I’m concerned about complacency. I’m concerned that the teams may not trust us enough to truly take risk. I simply want you to encourage that.
Maureen: Ok, we can try…
Now that I reflect on that conversation and moment, I realize that I was trying to increase the safety in our culture. I was encouraging failure. But I wasn’t simply encouraging failure; I was also testing our reaction to it, to see if we would walk our talk as a leadership team. Had we created a culture and environment where agile principles flourished and where the teams felt safe?
It turned out that the Scrum Masters were effective AND that our culture was safe. But the exercise was good for all of us and we learned and grew as a leadership team and organization. Now beyond failure, what are some aspects of safety in agile contexts?
From a cultural perspective some of the questions that come to mind include:
- Is it safe to fail?
- Is it safe to say I don’t know?
- Is it safe to explore, to learn, to try new ideas?
- Is it safe to pushback on a management idea?
- Is it safe to refactor code?
- Is it safe not to work overtime? To truly strive for a work-life balance?
- Is it safe to take a day off right before a release? Or to go to the doctor?
- Is it safe to implement a story correctly regardless of the time it takes?
- Is it safe to pair?
- Is it safe to challenge the value of a User Story from your Product Owner?
- Is it safe to say no?
- Is it safe to ask someone to tell you the WHY behind the project you’re working on?
- Is it safe to explore that WHY and to challenge its inherent assumptions?
- Is it safe to work on technical debt?
- Is it safe to ask for help?
- Is it safe to “swarm” around the team’s work?
Is it safe, truly safe, to do all of these within the context of your “real world” role?
And remember, safety isn’t just an upward factor, toward “management”. I like to think of it as a 360-degree attribute viewed from four perspectives:
- Is it safe within your team?
- Is it safe within your management?
- Is it safe within your organization?
- Is it safe within your culture?
So these assessment-like questions would have to be viewed through these various lenses to get a true feel for your level of overall safety.
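To make that concrete, a safety check could score each question across the four lenses and surface the weakest spots. This is only a rough sketch of my own devising (the scale, threshold, and function names are all assumptions, not an existing tool):

```python
# Score each safety question 1 (unsafe) to 5 (safe) across the four lenses,
# then flag the (question, lens) pairs at or below a threshold, weakest first.
LENSES = ["team", "management", "organization", "culture"]

def weakest_spots(scores, threshold=3):
    """scores: {question: {lens: 1..5}}; returns flagged (question, lens, score)
    entries sorted so the lowest scores come first."""
    flagged = [
        (question, lens, value)
        for question, per_lens in scores.items()
        for lens, value in per_lens.items()
        if value <= threshold
    ]
    return sorted(flagged, key=lambda item: item[2])

scores = {
    "Is it safe to fail?": {"team": 4, "management": 2, "organization": 3, "culture": 3},
    "Is it safe to say no?": {"team": 5, "management": 3, "organization": 4, "culture": 4},
}
for question, lens, value in weakest_spots(scores):
    print(f"{lens}: {question} -> {value}")
```

The point of the structure is simply that the same question can score very differently through different lenses, and improvement effort should go where the score is lowest.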
I think Joshua and the Industrial Logic folks are onto something with their focus on Anzeneering. However, if I were them, I would seriously consider re-branding the name.
In the last section I tried to pull together a short list of safety-checking questions. I know it’s probably incomplete and can be improved. I’m also thinking of pulling together a survey, much as I did on the topic of failure a few years ago, to capture broader feedback.
So, could you please help me with developing more questions for safety checks? Just send them to me in email, firstname.lastname@example.org, or comment on this post. Either way, I’m intrigued by what Joshua and his team are doing and I’d like to develop a tool for checking, because I don’t think we can improve our safety unless we know specifically where it’s lacking.
Software is a very dangerous business, so stay “safe” out there. And stay agile, my friends,