Implementing a new methodology is a painful exercise. Lots change, lots break, and there is plenty of so-called “collateral damage”. I have tried implementing new methodologies, including XP and Scrum, many times. I have also witnessed many attempts by other people, and been involved while others drove the initiative. Every time, it has led the organisation into a disruptive, stressful state. The most common position taken by the implementors is:
> Of course it is disruptive. That’s part of the change process. We all knew that from the moment we started. How else do you think it’s going to get better?
In the past, I’ve been guilty of the same. The end result is that I am left unsatisfied and unfulfilled, and so is the organisation. Yes, it may eventually get better. But I got sick of taking this high road; it took only two such situations, a long time ago, to realise that I was messing up royally.
In my quest to do things better, I drew inspiration from test driven development and from dealing with old, messy legacy code. These are disciplines rooted in software development, and the analogy to changing legacy code is very, very apt.
- Rolling out a methodology is at the implementation level. So, was there a design for the implementation in the first place? Implementation without design always ends up in a mess.
- Even if we abstracted the design from one implementation, does the design support all implementations? “Similar” does not equate to “same”.
- The existing methodology has a set of protocols by which the organisation functions, while the new methodology introduces a new set of protocols. Simply dumping the new protocols on top is the equivalent of rip and replace – the grand rewrite with no care for migration. Is this the only way?
So, taking inspiration from code, here is something that you can try when attempting a new rollout.
Understand the existing implementation. Use a test-based approach to concretely discover the existing protocols within the organisation. This may be as simple as playing out what-if scenarios that test the communication pathways. Keep your eyes wide open for seams or boundaries. These seams are candidates for incision points: places to introduce a new protocol facing in towards the area under change, while honouring the old protocol facing out towards the side that should not be changed (yet).
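To make the analogy concrete, here is a toy sketch of a what-if scenario as an executable test. The pathway names and the graph model are entirely made up for illustration; the point is that asking “if this team raises an issue, who hears about it?” exposes the narrow crossings that are your seam candidates.

```python
# Toy model of communication pathways, purely illustrative.
# Each key passes information on to the parties in its list.
pathways = {
    "dev-team": ["team-lead"],
    "team-lead": ["project-office"],
    "project-office": ["steering-board"],
}

def reaches(graph, source, target, seen=None):
    """What-if scenario: if `source` raises something, does it reach `target`?"""
    seen = seen or set()
    if source == target:
        return True
    seen.add(source)
    return any(reaches(graph, nxt, target, seen)
               for nxt in graph.get(source, []) if nxt not in seen)

assert reaches(pathways, "dev-team", "steering-board")
# Everything from the dev team crosses "project-office" on its way up, so
# that single hop is a seam candidate: a wedge there can change what the
# board sees without changing the dev team's protocol at all.
assert not reaches(pathways, "steering-board", "dev-team")  # one-way pathway
```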
Design, design, design. Once you understand the existing protocols and how they behave under certain scenarios, switch to design mode. Look again at the dependency graph within the organisation for a particular protocol. What is affected when a certain event occurs? Then look at your candidate seams and incision points and design your wedge. It may be a transformer that completely modifies information as it crosses the seam. Maybe it’s a buffer with finite capacity that slows things down and trickle-feeds info to the other side. What about a filter that removes some info? Or something that just decorates existing info with a wee bit more that is tolerable on the other side?
This design is mostly about designing feedback loops. As such, you need to consider the temporal and synchronous aspects of feedback also. What is the expected latency when I stick in this new wedge? Will it take one week or one day or one hour when something crosses this boundary? Do we send off some info and wait for a response on the same pathway, or do we get informed of which other pathway to regularly check for the response? Perhaps someone gets nudged when the response arrives.
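The four wedge shapes above can be sketched as tiny adapters. This is only an illustration of the pattern; the field names and report shapes are invented, not part of any real rollout.

```python
from collections import deque

def transformer(report: dict) -> dict:
    """Transformer wedge: completely reshape info as it crosses the seam.
    The field names here are made up for illustration."""
    return {"summary": f"{report['done']}/{report['planned']} items done"}

class Buffer:
    """Buffer wedge: finite capacity that trickle-feeds the other side."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = deque()

    def push(self, item) -> bool:
        if len(self.items) >= self.capacity:
            return False  # back-pressure: the seam deliberately slows things down
        self.items.append(item)
        return True

    def trickle(self):
        """Release one item at a time to the old-protocol side."""
        return self.items.popleft() if self.items else None

def filter_wedge(reports, keep=lambda r: r["planned"] > 0):
    """Filter wedge: drop info the other side should not see."""
    return [r for r in reports if keep(r)]

def decorator_wedge(report: dict) -> dict:
    """Decorator wedge: add just a tolerable bit more to existing info."""
    return {**report, "source": "new-process"}
```

Notice that the buffer answers the latency questions directly: its capacity and trickle rate are exactly the “one week or one day or one hour” decision, made explicit.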
Implement it test first. While it may seem like a lot of upfront work is necessary to get the ball rolling, it can be done in tiny steps. You don’t need to fall into the analysis hole when looking for seams, nor do you need to get stuck looking for the perfect design. It is better to remain concrete for long periods of time than to speculate at possibilities. Doing a little bit at a time, with some small tests, helps you keep both feet on the ground. For example, suppose you want to switch from the old protocol of reporting progress as budgeted vs actual to burning down story points. Some people still need the budgeted vs actual report, and it is inefficient to maintain both (not to mention not DRY at all). We need a way of transforming from burndown to budgeted vs actual. Think about candidate tests that will shift you towards that goal. Maybe it’s “I should be able to take existing budgeted tasks in a time frame and map them to tasks on the burndown”. Perhaps it is “I should be able to introduce new time frames on the budgeted vs actuals that synchronise with new sprint boundaries”.
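If you took that candidate test literally, it might look like the sketch below. The `hours_per_point` conversion factor and the report shapes are assumptions invented for this example; the point is that the transformation from the new protocol back to the old one is small enough to write and test in one sitting.

```python
def burndown_to_budget_vs_actual(sprint, hours_per_point=8):
    """Map one sprint's burndown data to a budgeted-vs-actual row.
    `hours_per_point` is an assumed conversion factor, not a standard."""
    return {
        "time_frame": sprint["name"],
        "budgeted_hours": sprint["committed_points"] * hours_per_point,
        "actual_hours": sprint["completed_points"] * hours_per_point,
    }

# A candidate test, in the spirit of "take budgeted tasks in a time frame
# and map them to the burndown":
sprint = {"name": "Sprint 12", "committed_points": 30, "completed_points": 25}
row = burndown_to_budget_vs_actual(sprint)
assert row == {"time_frame": "Sprint 12",
               "budgeted_hours": 240,
               "actual_hours": 200}
```

The old-protocol consumers keep getting their budgeted-vs-actual rows, while the team reports only story points; the wedge pays the maintenance cost of the duplicate report.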
These are just some things that I think and try out. It comes from me being sick and tired of subjecting people to stressful implementations and being party to messed up implementations too. It’s just too easy to blame someone or something else for our own ineptitude. I don’t take the high road anymore. I take the middle road. It doesn’t travel straight, and has branches with dead ends too, but it leaves me a lot more content.