Update on Grokking Functional Programming

Writing this book has been quite a journey.  It is a personal experiment in user experience design in a non-interactive medium, and that has proven to be quite a challenge.  The constraints are significant: page size, typography, and then cognitive overload and the leaps from concept to concept.  Yet, at the core, I am still aiming for a pairing experience.  I want to create the experience of someone working with me at the same computer, with the same sheets of paper.

This is what has emerged as a workflow.

  • sketch out a plan for the chapter;
  • focus on the main takeaway points;
  • identify the prerequisite knowledge needed for the chapter; and
  • fill in some examples and exercises, pretty much faux-code.

The gap that surfaced this week with my editors is that we swirl around the main takeaway points a lot.  The fundamental question that I now ask is “What is the one super power that we receive with this chapter?”.  That’s a hard question to answer, and it forces me to ask very abstract and pointed questions about the main concept of the chapter.  Often, the answer is philosophical, which forces me to reduce it to something practical.

Sometimes, answering the super power question has left me admitting “Well, that’s pretty unremarkable!”.  Initially, I was disappointed when that happened.  Now, I use that as feedback to dig further.  It forces me to seek my own deeper understanding.  The bottom line is that I am still hindered by my own ignorance.  It takes effort to break through ignorance barriers, and it is not about pounding on the same door all the time.  It is about circling around and attempting to find tiny cracks to chip into.

In addition, what I have now realised is that UX can only be solved after the philosophical and abstract are reduced to the concrete.  Having tight UI constraints is actually a blessing for UX.  I now appreciate the limitations, since they simply remove options not worth their bandwidth consumption.

Where have I taken people on this journey so far?  On the right is what I’ve covered so far.  Next up are algebraic data types and then more in-depth coverage of higher order functions.  Very surprising for me is that this journey of constraints, user experience challenges and super power questions has led me to shift the chapter on higher order functions from chapter 2 to chapter 7.  That was unexpected.

Keep an eye out for tweets on discounts on the book.  The latest promo from Manning is for 30 October 2014.  Using discount code dotd103014au at http://manning.com/khan will give you 50% off the list price.


Why Maths Matters

A few months ago I was working on a piece of code that polls a service for a set of data and then, for each item in the set, sends a request to a second service.  This was just one part of a data processing pipeline.  My first naive solution was to take a guess at the polling interval and the buffer size of the sets and hope for the best.  Then I made these two parameters configurable.  Now someone else is responsible for the guesses.

The actual problem is that I was irresponsible.  The situation called for precision in my design and I was being casual.  I focused on the plumbing (i.e. HTTP requests) and not the underlying questions that needed answers.  For example,

  • How big a buffer do I need for the data set?
  • How frequently must I poll for new data?
  • How frequently can I push data out? 
  • How can I reduce the push rate if the receiver cannot accept data faster than I can give it?

One very precise model comes ready-made and is known as the leaky bucket algorithm.  The leaky bucket is applied very often in low-level networking, but it is just as applicable in my problem space.  Now, before you roll your eyes and fake a yawn at the maths behind this, hear me out.

The moment we go into higher order control systems, we need higher order maths to build precise mathematical models of these systems. That is not the maths that I’m suggesting we chase.  Instead, with a little bit of creativity and some strategic design constraints, we may be able to reduce many classes of systems to first and second order systems. In other words, introduce constants until you have one or two variables that you can tweak.

This is exactly the situation that I had.  Sure, there were more than two buffers to be managed, but I was in a position to fix the size of some buffers or the flow rates.  Then, the buffers and flow rates that were critical to me could be modelled with straightforward linear equations.
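To make that concrete, here is a minimal leaky bucket sketch.  The names and numbers are my own illustration, not the actual pipeline code: data arrives in bursts, the bucket drains at a fixed rate, and anything that would overflow is rejected.

```python
class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # buffer size (fixed as a constant)
        self.leak_rate = leak_rate  # items drained per tick (fixed as a constant)
        self.level = 0              # the one variable left to watch

    def tick(self):
        # drain at the fixed outflow rate
        self.level = max(0, self.level - self.leak_rate)

    def offer(self, n):
        # accept as much of the burst as fits; return the rejected excess
        accepted = min(n, self.capacity - self.level)
        self.level += accepted
        return n - accepted
```

With capacity and leak rate fixed as constants, the level is the single variable left, and the system settles into a steady state whenever the average inflow rate stays at or below the leak rate.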

Had I done this, then I would have been in a position to have built a system that could have adjusted itself to a steady state, without frequent reconfiguration.  The fact that I was casual about my design led directly to a case of the pipeline being in a non-deterministic state.  This problem was highlighted early when the users started asking for different kinds of “status reports” for data flowing through the pipeline.  Of course, being casual about it, I treated it as a feature request and implemented a few reports.

This is when maths and science make a difference in software development.  Unsurprisingly, mathematical models are generally deterministic, self-contained (i.e. highly cohesive), and at the right level of abstraction for the problem.  All of those characteristics lead to highly testable models that you can build in wee little steps, test first.

That’s why it matters to have a maths background.  And if you came into software development through some other route, then do some lay study of algorithms, control systems and simple higher order maths.  It will serve you well forever.  It will certainly give you a design advantage when you need it most.  Right now, I’ll take any advantage, because design is just so darn difficult.

Reflections of BDD Stories

There was an interesting discussion in the AgileSA LinkedIn group around the use of BDD stories, and whether or not they should contain technical references.  I found myself saying that I don’t mind having, for example, a login story.  To help Kevin Trethewey get over the shock and horror of this, I reflected on how my use of BDD stories has changed over time.

I remember Dan North explaining BDD stories to me when it was just a thought in his own mind.  That was around 2005 or 2006 and I remember being so inspired by the simplicity of the BDD grammar. So, seven or eight years later, let me share how my use has changed.  And, Kevin, I hope you enjoy your own journey too. It’s a lot of fun.

On CRUD.  I agree that CRUD is bad, blah, blah, blah.  But there are times when CRUD can be a valid and reasonable design choice.  I don’t discount it, but it is not my first choice, and it is very rare for me.  Oh, and sometimes I use it just because I don’t know anything else about the domain yet.  Once I discover more, I notice that those CRUD things quite naturally fade away.

Who is the best judge of a story?  The customer is unlikely to be the best person to articulate these stories, or to judge their quality.  I have to guide them and extract that.  I now ask the following questions.

  • Who are they?
  • What do they need?
  • What do they think they need?
  • What do they really want?

What does the story describe?  Of the above questions, the last is the most powerful for me.  It balances my perspective.  It stimulates creativity and moves me from the problem space to the solution space.  The story then exists in the solution space; i.e. it now reflects a design intention, not a requirement statement in the problem space.

BDD stories are great conversation artifacts.  It’s like a book on a coffee table: it stimulates conversation.  It is of the same value as using a metaphor.  In conversation with the customer, the story is mostly about things in the problem space.  In other words, it is an analysis and clarifying tool.  I have found that direct, literal and very early use of this analysis statement as an executable specification can result in brittle tests.

On the use of technical references.  When I’m working in the design space and writing design stories, I don’t mind if there is a reference to a technical implementation such as a login screen.  At some point, I have to get concrete.  I like getting concrete very early and abstracting from that.  It’s just how my mind works.  So, if there is an alternative authentication mechanism (say, LDAP or ActiveDirectory), then it is just another concrete implementation.  If the authentication choice is an exclusive one, then the abstraction over proprietary authentication and ActiveDirectory authentication doesn’t offer any benefit.  So, I’ll just go for one of them, and the story on the task board will make reference to the technical aspects directly.  It’s a great reminder of a design choice that is explicit to everyone.

Most stories start out as bad stories.  My early stories in an unfamiliar domain are awful.  Like code, a story should exhibit single responsibility.  That takes a lot of domain insight and discipline.  Unfortunately, refactoring stories towards single responsibility is not trivial.  It’s not as simple as extract class/method/variable.  The result is that my story based test suite is in constant turmoil for longer than it is calm with a few small ripples.  For this reason, I use the story grammar as a conversation piece, but not as a code artifact.

BDD Stories on the backlog. To avoid confusion about when the story is in the problem space or solution space, I don’t use BDD stories on the backlog. I prefer the XP style of a short phrase as a reminder of something to discuss.

On the use of outside-in style testing.  I like outside-in to analyse the problem space, but I often find it equally valuable to evolve the design from the assertions inside-out.  I oscillate between the two perspectives rapidly and quite often.  I think I’m searching for that harmony in perspective.  I then make it a choice to use the BDD story as an executable test for the outside-in perspective.  Often, though, I find it unnecessary because I already have tests that reflect that perspective; they just don’t use the BDD grammar.  Yet, the BDD grammar was a starting point.  I am just not fixated on the BDD grammar being the ending point.

On the BDD grammar.  From a language perspective, the BDD template is a general grammar that can be used to express any domain, just as we can use a general purpose language to solve any problem.  Yet, we have learned that domain specific languages can be more expressive of a particular domain.  Equivalently, I keep my mind wide open, looking for a story grammar that is domain specific.  For example, in a time sensitive domain such as investment portfolios, I might extract a grammar that expresses time as a first class citizen.  There won’t be a “When” clause.  I might have something like “At time t(0), the portfolio capital amount is…, at t(n) the portfolio has …, at t(n+1) the surrender value should be …”.
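As a sketch of what such a time-first grammar could look like as a plain test, with “At t(n)” steps instead of Given/When/Then.  The portfolio rules and numbers here are invented purely for illustration:

```python
class Portfolio:
    def __init__(self, capital):
        self.capital = capital

    def grow(self, rate):
        # capital grows by a simple rate between time steps
        self.capital *= (1 + rate)

    def surrender_value(self, penalty=0.1):
        # surrendering forfeits a flat penalty on the current capital
        return self.capital * (1 - penalty)

def test_surrender_value_over_time():
    # At t(0), the portfolio capital amount is 1000
    portfolio = Portfolio(1000)
    # At t(1), the portfolio has grown by 10%
    portfolio.grow(0.10)
    # At t(2), the surrender value should be the capital less the penalty
    assert abs(portfolio.surrender_value() - 990) < 1e-6
```

The point is only that the grammar follows the shape of the domain – time-indexed statements – rather than forcing every scenario into a When clause.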

Remember, these are just reflections and observations about myself.  Please don’t treat them as gospel.  You have your own journey that takes different pathways.  Just enjoy those excursions.

Split stories as a design activity

“A story should be small enough to fit into a sprint, otherwise chop it up until it does” — this is advice that is more or less given to Scrum teams during planning or backlog grooming.  The problem is that this is not easy to do.  My friends at Growing Agile describe a few ways to achieve this (see their blog post Breaking Down User Stories).  These techniques are not wrong in any particular way, and they will certainly result in smaller stories.  However, they are what I call “mechanised” techniques.  When I’ve been mechanical about splitting stories, I’ve always ended up with weak fracture points in the problem space.  So, I prefer to look in the solution space for boundaries that promote or retain the conceptual integrity of the software.

Below are just three techniques that are quite commonly used by Scrum teams.  I steer away from them at all costs.

  • CRUD.  I find that thinking in terms of these database operations removes a lot of the richness in the domain.  Instead, I think about the life cycle of things.  For example, there is no CRUD for an invoice.  Instead, a customer buys something, which means that a sales person issues an invoice.  The customer pays the invoice.  Perhaps a debtors clerk requests payment for an overdue invoice.  These are all different things that can be done with an invoice at different times in its life.  Note also that “creation” is a very special case in any life cycle, and to bring something into existence that maintains integrity is quite an effort.
  • Dependent Stories.  I try to break all dependencies between stories.  I’ve found that looking to create “stand-alone” stories results in some very deep and powerful analysis of the domain.  Inadvertently, you will crack open a crucial part of the domain.  Often the concept which holds several stories together in sequence turns out to be orthogonal to the original stories.  For example, there is a workflow for invoices (issue, authorise, pay, remind, resend, etc) that can result in several dependent stories.  Alternatively, we can model the invoice state (and operations allowed for each state) independent of the sequence(s) of states.  Now we can build software that deals with specific sequences, independently of the operations for each state.  This separation can lead to such powerful discussions with the domain expert.
  • Job Functions.  I’ve never found job functions to yield useful modules.  Extending the invoice example above, a job function breakdown could be sales (create, authorise), debtors (record payment, payment reminders), customer service (credit notes) and marketing (cross sales campaigns).  Each of those job functions works with invoices in some way, but the conceptual integrity and cohesion are stronger around the invoice and its states.  Designing good modules is by far the hardest part of any software design effort.  Get it wrong and it will hurt.  More often than not, it is just too costly to try to create new cohesive modules from code that is already laid down along job functions (or any other weak boundary criteria).
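As a sketch of the invoice example, the states and the operations allowed in each state can be modelled independently of any particular sequence.  The states and operations below are illustrative, not a complete invoice model:

```python
# Operations permitted in each state, independent of any workflow ordering.
ALLOWED = {
    "draft":     {"issue"},
    "issued":    {"record_payment", "remind", "resend", "cancel"},
    "paid":      {"refund"},
    "refunded":  set(),
    "cancelled": set(),
}

# Where each (state, operation) pair leads.
TRANSITIONS = {
    ("draft", "issue"): "issued",
    ("issued", "record_payment"): "paid",
    ("issued", "cancel"): "cancelled",
    ("issued", "remind"): "issued",   # reminders don't change state
    ("issued", "resend"): "issued",
    ("paid", "refund"): "refunded",
}

def apply_operation(state, operation):
    # guard the life cycle: reject operations invalid for the current state
    if operation not in ALLOWED[state]:
        raise ValueError(f"{operation!r} is not allowed while {state!r}")
    return TRANSITIONS[(state, operation)]
```

Different workflows then become different sequences of operations composed over this one model, which is exactly the separation described above.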

There are significant consequences to splitting stories in the solution space.

  • The product owner just needs a simple phrase or sentence that describes the problem space, but remains part of the feedback loop for the solution space stories.
  • Backlog grooming becomes an exercise in understanding the problem space. 
  • Sprint planning blitzes (one day or less) are not sufficient.
  • To be effective, sprint planning becomes continuous; i.e. design is continuous
  • Each story can (potentially) be released on completion
  • Sprint boundaries (time boxes) become less important moments in time
  • … and you will tend towards continuous flow.

Live with it for a while

Before I rush off and refactor my code, I like to live with my code for a while. The refactoring I do in the TDD cycle is to get rid of trivial duplication and, perhaps, some better naming. I deliberately delay extracting methods and classes and pushing up or down. For me, those are quite important design choices, and I want to make those decisions only when I have a good understanding of my problem.

What do I mean by “live with it for a while”?  Literally, I leave it alone and move along to something in its vicinity.  I choose something close enough that it will need to interact with or modify the “living-with” code.  This new use case, scenario or question is to further my understanding of the problem, and I choose it deliberately.  If it turns out to be tangential, I don’t sweat it.  I just pick something else that will move me closer.  The fact that my first choice was poor is always valuable insight into my lack of understanding of the problem.

Aside: The simplicity of my solution is directly related to my depth of understanding.  The deeper I understand the problem, the simpler I can potentially get.  Shallow understanding leads to more complex solutions.  This takes time, and “living with it” gives me the freedom to play with the problem from several angles.

I don’t mean “ignore it after a while”.  Ignoring it is like noticing a crack on your lounge wall: after 2 weeks of doing nothing about it, you don’t see it anymore.  So, living with my code is not giving myself permission to be sloppy.  It’s a deliberate choice to look for a better, simpler solution as I increase my knowledge.  Once I have a bit more knowledge, I can start making more adventurous design decisions.  I think it’s almost impossible to find a simple solution if you don’t have deep domain knowledge.

It means that I refactor a little at a time, frequently.  Even if I know my code is not clean, I’d rather have a pulse for my code, and let it beat weakly.  All I’m doing is constantly looking for little things that will make that pulse beat a bit stronger.  I now realise that I refactor a little at a time, but almost continuously.  I’m not searching for clean code.  I’m searching for markers in the domain that confirm or refute my understanding.  The clean code will come with time.

Living with my design for a while has saved me lots of time, especially when I’m not confident of my knowledge of the problem.  Give it a try and let me know whether it works for you too.

Cape Town’s latest agile development course

I got my first break as a software developer about 20 or so years ago.  It was the first time I heard that a table can have keys.  That start put me on a career path that I never anticipated, but one that is thoroughly enjoyable.  And now I’ve finally got a chance to give back in a way that shows my appreciation for what was offered to me all those years ago.  For those of you who know my history (or just peeked at my LinkedIn profile), you know that I’m talking about KRS.

So, I’m quite thrilled to be a contributor and collaborator for their Advanced Agile Developer course.  The inaugural course happens on 19 November.  That’s not much time, so book a spot quickly.  I don’t want to give too much away, but I can tell you that it’s going to be fun, intense, and inspiring – and seriously code centric.  There will be times when you will feel like you just don’t know how to code anymore, and then feel like you can conquer the world.

I think this is a course with a difference.  We want to bring together developers that are already competent at writing code and want to become proficient at being agile developers.  We made a big decision to go deep, and not skim the surface of lots of topics.  The result is a course that is very code centric, working at quite an intensity that passionate developers will find inspiring.

Working with Lorraine Steyn and team KRS has the same warmth, openness, and security as 20 years ago.  This is something that I know they will bring to this course in a way that I can only hope to, someday, emulate.

Accurate estimation, really?

I ended up in a twitter conversation last weekend about estimation and velocity.  It started when Annu Augustine asked about what qualities to look for in a lead developer, other than technical skills.  One of the qualities I put forward was accurate estimations. This went around a bit and ended up, not surprisingly for me, at velocity.  There are a couple of points that I need to offer my opinion on:

Accurate estimation is not an oxymoron.  Let’s just get over some technicalities first: “accurate” is not the same as “exact”.  Estimation in software is never exact, but the magnitude of the accuracy is very significant.  If I say that something will take approximately 2 weeks to achieve, then I need to qualify “approximately”.  What is the magnitude hidden behind “approximately”?  Is it 10% accurate, 50% accurate, 80% accurate?  Let’s say it is 50% accurate.  This means that I can finish this work in, at best, 1 week or, at worst, 3 weeks.  That’s a big swing.  As a good developer, it has to be better than 50%.  If it’s 50% or less, then it is an indicator that you are lacking understanding (i.e. knowledge) in a key part of the problem and/or solution domain.  Then you have to estimate what it will take to overcome that ignorance, before committing to the original request.
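The arithmetic above is trivial but worth making explicit.  This small sketch (my own formulation, not an established formula) turns an estimate qualified by an accuracy percentage into a best-case/worst-case range:

```python
def estimate_range(estimate, accuracy):
    # 50% accurate means the actual effort may swing 50% either way
    swing = estimate * (1 - accuracy)
    return (estimate - swing, estimate + swing)

# "approximately 2 weeks" at 50% accuracy: at best 1 week, at worst 3 weeks
best, worst = estimate_range(2, 0.5)
```

At 80% accuracy the same 2-week estimate narrows to roughly 1.6 to 2.4 weeks, which is why the accuracy qualifier matters as much as the number itself.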

An estimate is not a commitment.  If you mix these two concepts together, then you will most likely be in trouble sooner rather than later.  The estimate is based on something that you understand and that you can foresee with good accuracy.  In other words, an estimate is a prediction of the future.  The commitment is a promise that you make on part of the estimate in order to make that prediction come true.  If I predict that a problem can be solved in 3 days, then I may make a promise for some part of it by the end of the first day.  This distinction may surprise some who have been using the poker game, or some other estimation technique, in Scrum.  Scrum teams use estimates in planning to make commitments for the entire sprint, then track and calibrate velocity via burndowns, which leads me to the next point.

Team velocity or relative estimation does not make for better estimation.  A team’s velocity, based on tracking historical trends, is only a measure of the energy the team has expended to date.  This expended energy (the area under the curve) is used to predict the potential energy of the team.  That’s all, nothing more nor less.  I will stick my neck out and go so far as to say that relative estimation in points is not at all about estimating, but a way to accurately predict the potential energy of a team.  I’ll go even further: relative sizing does not work, because software development is about crunching new knowledge, and every new piece of work should be crunching more knowledge.  If it’s the same thing over again, then the design is flawed, or at least at the wrong level of abstraction.  Jumping levels of abstraction and re-shaping designs takes quite a lot of knowledge crunching, and is not relative to anything else but your existing knowledge.  So, relative estimation does not make for better estimations, and velocity just tells you the potential energy of the team.

Where does this leave me?  I’ve given up on relative sizing and estimation.  Knowing my own velocity has not added any value to my ability to estimate because every problem is unique in some way.  I estimate based on the knowledge that I have at that point in time, and force myself to be aware of my accuracy.  All of this, and more, has made me appreciate the value of one piece flow a lot more than time boxed flow.

Stay in bed or come to SGZA

I will be hosting a 3 hour session at the South African Scrum Gathering titled “Live your principles or stay in bed”.  You can read the abstract here.  In my opinion, there is far too little focus on software development itself in Scrum.  So, this is unashamedly a developer session.  I will present various snippets of code, and we will “live our principles” to transform the code into something that is less messy.
I often hear developers, and managers too, saying “It’s so much easier without, so why bother?”.  Well, design is hard.  Applying principles for life is harder.  But if you are a professional developer and have a conscience about your design, your code, and your product, then “an easy life without principles” is not an option.

If you are planning to come along, bring your laptop with your development environment.  I will likely have code samples in Java, C#, Ruby, Javascript, and even, yup, Basic (well, maybe).  All the samples should be very readable, and you could easily translate them to something equivalent in your language.  Better still, bring along some of your own code that you want to share.

In reality, this is stuff that Scrum does not teach you, but that you need to know to avoid Scrum burnout.  Looking back, I should have done something like this sooner.

What’s the point in Scrum?

Scrum people like to use points for estimating and measuring velocity.  I won’t go into detail about how points work and how to play those poker estimation games.  Just search around and you will find a ton of stuff.  So, back to this points stuff.  I have a divided relationship with the humble point.  I like it when a team switches to using points for the first time, because it gives them a chance to think a little bit deeper about what they want to do.  I don’t like it when we start inventing rules around points (and you can lump guidelines and best practices into the rules pot too).  When the rules appear, the thinking disappears.
In every team trying Scrum, there is bound to be a rule about points.  I dare you to put up a hand and say you have none.  These rules are things like “We can’t take anything over 13 points into a sprint”, “Our epics are 100 points”, “The login screen is our baseline of 3 points”, “Anything over 40 points must be broken down”.  So, I double dare you 🙂

Sprint backlog shape with high shared understanding

I have a different view of the humble point.  A point may seem like a one dimensional thing, but it has some facets built into it.  One facet is the “amount of effort to build something”.  Another facet is the “amount of ignorance”, and this has an inverse – the “amount of shared knowledge”.  Sometimes I find it useful to make a judgement based on what I don’t know as opposed to what I do know.  Regardless of whether I choose to view the cup as half full or half empty, I cannot estimate the effort to build something based upon what I don’t know.  So, effort tends to track the amount of knowledge, not ignorance.  As knowledge increases, my ignorance decreases, and each point starts representing more and more of pure effort.

However, if I am in a state of complete ignorance, then it is completely impossible for me to make any judgement on effort to build.  I’d be simply speculating.  What I can do, though, is create a time box to explore the unknown so that I can start moving out of my state of ignorance.  This is also an estimate and I am not making an excuse for non-delivery either.  I need to understand some things and also show my understanding in some code.  Yes, the code that I produce may not have a visible user interface or some other convenient demo-friendly stuff, but I need to carefully plan my sprint review to express my understanding.

It’s all about gaining a SHARED understanding.  This understanding is a body of knowledge that I have learned and that I need to confirm with others.  This act of confirmation can happen in several ways.  I can have a conversation and explain what I understand, I can draw a blocks and lines picture, or show a spreadsheet, and so on.  Regardless of the method of communication, I still use the opportunity of discovery to express my understanding in code as tests.  Another powerful way of expressing my understanding is to write out a story and a few scenarios.  Using BDD style grammar can be a great way of concisely expressing some things that can be easily shared.  Yes, you heard me correctly – as a developer, I write the stories and scenarios.  When I am given a story and scenario by someone and asked to estimate, then I am attempting to estimate based on another person’s expression of their understanding and my assumed understanding.
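As one possible shape for expressing understanding in code as tests, here is a sketch where the BDD grammar survives only as comments.  The overdue rule and the function are invented for the example:

```python
from datetime import date, timedelta

def is_overdue(issued_on, terms_days, today):
    # an invoice is overdue once the payment terms have fully lapsed
    return today > issued_on + timedelta(days=terms_days)

def test_invoice_is_overdue_after_payment_terms_lapse():
    # Given an invoice issued on 1 March with 30-day payment terms
    issued = date(2014, 3, 1)
    # When 31 days have passed
    today = issued + timedelta(days=31)
    # Then the invoice is overdue
    assert is_overdue(issued, 30, today)
```

The test is the confirmed fact; the Given/When/Then comments are just the conversation artifact carried along with it.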

In a recent discussion with Jimmy Nilsson, he said that he preferred to call scenarios “examples”.  That really resonated with me.  I also do a lot of discovery by example, and then gradually introduce more and more into the examples, as I get more and more confident of my knowledge.

How do I know how much I don’t know?  That’s a tough question.  What I do comes straight out of my TDD habits.  I create a list of questions – my test list.  For some questions, I will know the answer easily, some not at all, and some are debatable.  The more that I can answer, the better I can estimate effort.  I can then turn the questions that I can answer into statements of fact.  The more facts I have, the less ignorant I am.
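That question list can even serve as a crude gauge.  The questions below echo the earlier pipeline example, and the scoring is my own invention, not an established practice:

```python
def ignorance(questions):
    # questions maps each question to True once it can be answered as fact
    answered = sum(1 for known in questions.values() if known)
    return 1 - answered / len(questions)

test_list = {
    "How big must the buffer be?": True,     # now a statement of fact
    "How frequently must we poll?": True,    # now a statement of fact
    "Can the receiver throttle us?": False,  # still ignorant
}
```

With two of three questions answered, a third of the ignorance remains; as the ratio drops, the points start representing pure effort.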

Recently, I worked with a team that wanted to get TDD going, and the most significant change that I introduced was in backlog grooming and sprint planning.  During these two ceremonies, we (as a team) threw questions madly at a requirement, regardless of whether we knew the answer or not.  We then worked through the questions (as a team) to establish how much we could answer.  The trend that emerged was that the original estimates were either half of the new estimate or double it.  When they were halved, it was generally because we were able to negotiate some of the unknowns (the ignorant areas) to a future sprint with the product owner.  In some cases, the product owner was equally ignorant, and was reacting to the “business wants the feature” pressure.  When they were doubled, it was because so much more was discovered than originally assumed.  At the end of the session, we always asked the meta-question “If we answer all these questions sufficiently, will we be done?”.  I call this style of working “test first backlog grooming” or “test first sprint planning”.

Often I discover more things I don’t know. Annoyingly, this happens in the middle of a sprint, but if it did not happen in that phase of work, then perhaps I was not digging deep enough.  When this happens, I just keep on adding them to my list of questions.  These new questions are raised at any time with others on the team, the customer or with whoever can help me understand a bit more.  Sometimes, it’s put on the table for negotiation to be dealt with at another time.  Nevertheless, standups still seem to be a good time to put new questions on the table, for discussion later.

There are several ripple effects of thinking about points in this manner – this notion of ignorance and shared knowledge gauges.

The first is about the possible shape of your sprint backlog. If you have deep understanding, then it is likely that you will be able to decompose complex problems into simple solutions that take less effort.  The effect is that low point stories appear in greater number in a sprint.

If you are highly ignorant, then the estimation points reflect that and there are more medium to high point stories in the sprint.

The second is about what you value in a story. You will find less value in the ontology of epics, themes and stories.  It is no longer about size of effort but degree of understanding or ignorance.  Instead, the shape of the product backlog is something that is constantly shifting from high uncertainty (big point numbers) to high certainty (low point numbers).  That’s what test first backlog grooming gives you.

The third is about continuous flow that is the nature of discovery.  When you work steadily at reducing your degree of ignorance, then you are steadily answering questions through answers expressed in code, and steadily discovering new questions that need answering.  This process of discovery is one of taking an example based on what you know in this moment and modeling it.  Then expanding that example with one or two more additional twists, and modeling that, and so it goes.

It also touches product ownership and software development. When you work in this way, explicit estimation of effort becomes less significant.  Moments that have been earmarked as important points in the life of the product become more significant.  Call them milestones.  These milestones are strategically and tactically defined, and become a dominant part of product ownership.  Software development becomes the act of having long running conversations with the customer.  Those milestones give context for the content of those conversations.  Ultimately, those conversations are expressed as a set of organised thoughts in code.  If your code is not organised well, then perhaps you don’t understand the problem, or the solution, or both.

This is a long story for a short message. A high priority is to resolve the tension that exists in an estimation when knowledge/ignorance fights against effort.  When you release that tension through shared understanding, you can deal with the tension that exists in the act of creating those significant milestones.  In my opinion, that’s the real wicked problem.

Rolling out a methodology is about design

Implementing a new methodology is a painful exercise.  Lots change, lots break, and there is lots of so-called “collateral damage”.  I have tried implementing new methodologies, including XP and Scrum, many times.  I have also witnessed a lot of attempts by other people, and been involved while others drove the initiative.  Every time, it has led the organisation into a disruptive, stressful state.  The most common position taken by the implementors is:

Of course it is disruptive.  That’s part of the change-process.  We all knew that from the moment we started. How else do you think it’s going to get better?

In the past, I’ve been guilty of the same.  The end result is that I am left unsatisfied and unfulfilled, and so is the organisation. Yes, it may eventually get better.  Eventually I got sick of taking this high road.  In truth, it only took two such situations, a long time ago, to realise that I was messing up royally.

In my quest to do things better, I drew inspiration from test driven development and from dealing with old, messy legacy code.  Rolling out a methodology, test driven development and changing legacy code are three very distinct things, yet the parallels, all rooted in software development, are very, very apt.

  1. Rolling out a methodology happens at the implementation level.  So, was there a design for the implementation in the first place?  Implementation without design always ends up in a mess.
  2. Even if we abstracted the design from one implementation, does the design support all implementations?  “Similar” does not equate to “same”.
  3. The existing methodology has a set of protocols by which the organisation functions, while the new methodology introduces a new set of protocols.  Just dumping in the new protocols is the equivalent of rip and replace – the grand rewrite without care for migration.  Is this the only way?

So, taking inspiration from code, here is something that you can try when attempting a new rollout.

Understand the existing implementation. Use a test-based approach to concretely discover the existing protocols within the organisation.  This may be as simple as playing out what-if scenarios that test the communication pathways.  Keep your eyes wide open for seams or boundaries.  These seams are candidate incision points for introducing a new protocol facing in towards the area under change, while honoring the old protocol facing out towards the other side that should not be changed (yet).

Design, design, design. Once you understand the existing protocols and how they behave under certain scenarios, switch to design mode.  Look again at the dependency graph within the organisation for a particular protocol.  What is affected when a certain event occurs?  Then look at your candidate seams and incision points and design your wedge.  It may be a transformer that completely modifies information as it crosses over the seam.  Maybe it’s a buffer with a finite capacity that slows things down and trickle-feeds info to the other side.  What about a filter that removes some info?  How about something that just decorates existing info with a wee bit more that is tolerable on the other side?
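Since the wedge shapes are borrowed from code, they can be sketched as code.  The following is illustrative only: organisational “info” is modeled as plain dicts crossing a seam, and every name and value (hours per point, sprint number) is an invented assumption.

```python
from collections import deque

def transformer(info):
    """Transformer: completely reshape info as it crosses the seam,
    e.g. old hours-based reporting becomes points burned."""
    return {"points_burned": info["hours_spent"] / 8}

def make_buffer(capacity):
    """Buffer: finite capacity that slows the flow and trickle-feeds
    info to the other side, one item at a time."""
    queue = deque(maxlen=capacity)
    def push(info):
        queue.append(info)
    def trickle():
        return queue.popleft() if queue else None
    return push, trickle

def filter_wedge(info):
    """Filter: drop info the other side should not see (yet)."""
    return None if info.get("internal") else info

def decorator(info):
    """Decorator: pass info through with just a tolerable bit extra."""
    return {**info, "sprint": 7}
```

Each wedge honours the old protocol on one face and speaks the new protocol on the other, which is exactly what lets the untouched side of the organisation keep functioning.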

This design is mostly about designing feedback loops.  As such, you also need to consider the temporal and synchronisation aspects of feedback. What is the expected latency when I stick in this new wedge?  Will it take one week, one day or one hour for something to cross this boundary?  Do we send off some info and wait for a response on the same pathway, or do we get told which other pathway to check regularly for the response?  Perhaps someone gets nudged when the response arrives.

Implement it test first. While it may seem like a lot of upfront work is needed to get the ball rolling, it can be done in tiny steps.  You don’t need to fall into the analysis hole when looking for seams.  Nor do you need to get stuck looking for the perfect design.  It is better to remain concrete for long periods of time than to speculate at possibilities.  Doing a little bit at a time, with some small tests, helps you keep both feet on the ground.  For example, say you want to switch from the old protocol of reporting progress as budgeted vs actual to burning down story points.  Some people still need the budgeted vs actual report, and it is inefficient to maintain both (not to mention not DRY at all).  We need a way of transforming from burn down to budgeted vs actual.  Think about candidate tests that will shift you towards that goal.  Maybe it’s “I should be able to take existing budgeted tasks in a time frame and map them to tasks on the burndown”.  Perhaps it is “I should be able to introduce new time frames on the budgeted vs actuals that synchronise with new sprint boundaries”.
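The first of those candidate tests could look something like this sketch.  The mapping function, its field names and the 8-hours-per-point conversion are all assumptions invented for illustration, not a prescription.

```python
def burndown_to_budget_vs_actual(burndown, hours_per_point=8):
    """Hypothetical wedge: derive the legacy budgeted-vs-actual report
    from burndown data, so nobody has to maintain both by hand."""
    return [
        {"task": s["story"],
         "budgeted_hours": s["points"] * hours_per_point,
         "actual_hours": s["burned"] * hours_per_point}
        for s in burndown
    ]

def test_budgeted_tasks_map_to_burndown_tasks():
    # "I should be able to take existing budgeted tasks in a time frame
    # and map them to tasks on the burndown."
    burndown = [{"story": "login page", "points": 3, "burned": 2}]
    report = burndown_to_budget_vs_actual(burndown)
    assert report[0]["task"] == "login page"
    assert report[0]["budgeted_hours"] == 24
    assert report[0]["actual_hours"] == 16
```

One small test like this keeps the old report alive while the new protocol takes root, and writing the next test tells you which part of the wedge to grow next.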

These are just some things that I think about and try out.  They come from me being sick and tired of subjecting people to stressful implementations, and being party to messed-up implementations too.  It’s just too easy to blame someone or something else for our own ineptitude.  I don’t take the high road anymore.  I take the middle road.  It doesn’t travel straight, and it has branches with dead ends too, but it leaves me a lot more content.