Why am I poking SUGSA?

I’ve been taking a poke or few at SUGSA on Twitter in the last few weeks around the 2013 South African Scrum Gathering.  It’s time to put it into perspective.  My proposal to present a session at the conference had been accepted, and I have since declined that acceptance.

This is not news to the SUGSA committee.  I have communicated my decision and my reasons to them, and I have also had one-on-one discussions with a few committee members.  I am satisfied that the committee has received and digested my reasons.  Individuals on the committee have assured me that my feedback is in line with their values and that they aim to address it in the future.

My poking is just me pressuring SUGSA to recognize a few issues which I believe are important to our community, and to force the committee to address us on their position.  Tell all of us, not just me, of your position and we shall listen.  That is all, no more, no less.

These are the issues that I raised with SUGSA.

  • African conferences need to be representative of our community.  One way is to nominate an African keynote speaker.  It will take a courageous decision, but it can still happen.  Agile Africa responded, albeit symbolically and very late, and it was fantastic.
  • We must embrace our diversity and find ways of breaking the polarised community that exists in Cape Town.  I think the people from Cape Town that attended Agile Africa in Johannesburg were surprised by the diversity of speakers and audience at that conference.
  • Be transparent with the review process.  The committee needs to maintain its independence, and having committee members occupying several speaking slots sends the wrong message.  I’ve seen this in other communities where the committee became a soapbox for individuals.  If I were on the committee, I would withdraw my proposal on principle.  I know of one committee member who did this.  Thank you for taking a courageous step.
  • I also questioned the expertise within the committee, especially for technical submissions.  That is my perception and there is every chance that I might be wrong.  I welcome the correction. 

This is not a boycott.   I will still attend as a paying conference attendee.  I appreciate the
learning I receive from this community and look forward to more of the
same.

I believe my issues have been heard and my actions, seemingly harsh to some, were necessary.  I still want SUGSA to take a position publicly, and I leave it to them to figure out when and how.  The SUGSA committee has invited me to discuss this further and I look forward to that.

Thanks go out to Austin Fagan, Sam Laing and Karen Greaves who gave me perspective to resolve a few internal conflicts, and shape my thoughts on why I am doing this, and what I expect out of it.  Had they not given input, this could have been unnecessarily ugly.

Split stories as a design activity

“A story should be big enough to fit into a sprint, otherwise chop it up until it does” — this is the advice typically given to Scrum teams during planning or backlog grooming. The problem is that this is not easy to do.  My friends at Growing Agile describe a few ways to achieve this (see their blog post Breaking Down User Stories). These techniques are not wrong in any particular way, and they will certainly result in smaller stories.  However, these are what I call “mechanized” techniques. When I’ve been mechanical about splitting stories, I’ve always ended up with weak fracture points in the problem space.  So, I prefer to look in the solution space for boundaries that promote or retain the conceptual integrity of the software.

Below are just three techniques that are quite commonly used by Scrum teams.  I steer away from them at all costs.

  • CRUD.  I find that thinking in terms of these database operations removes a lot of the richness in the domain.  Instead, I think about the life cycle of things.  For example, there is no CRUD for an invoice.  Instead, a customer buys something, which means that a sales person issues an invoice.  The customer pays the invoice.  Perhaps a debtors clerk requests payment for an overdue invoice.  These are all different things that can be done with an invoice at different times in its life.  Note also that “creation” is a very special case in any life cycle, and to bring something into existence that maintains integrity is quite an effort.
  • Dependent Stories.  I try to break all dependencies between stories.  I’ve found that looking to create “stand-alone” stories results in some very deep and powerful analysis of the domain.  Inadvertently, you will crack open a crucial part of the domain.  Often the concept which holds several stories together in sequence turns out to be orthogonal to the original stories.  For example, there is a workflow for invoices (issue, authorise, pay, remind, resend, etc) that can result in several dependent stories.  Alternatively, we can model the invoice state (and operations allowed for each state) independent of the sequence(s) of states.  Now we can build software that deals with specific sequences, independently of the operations for each state.  This separation can lead to such powerful discussions with the domain expert.
  • Job Functions.  I’ve never found job functions to yield useful modules.  Extending the invoice example above, a job-function breakdown could be sales (create, authorise), debtors (record payment, payment reminders), customer service (credit notes), and marketing (cross-sales campaigns).  Each of those job functions works with invoices in some way, but conceptual integrity and cohesion are stronger around the invoice and its states.  Designing good modules is by far the hardest part of any software design effort.  Get it wrong and it will hurt.  More often than not, it is just too costly to try to create new cohesive modules from code that is already laid down along job functions (or any other weak boundary criteria).
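To make the invoice example above concrete, here is a minimal sketch of modelling invoice states and the operations allowed in each state, independent of any particular workflow sequence.  All class, state and method names are illustrative assumptions of mine, not taken from any real codebase.

```python
# A minimal sketch: invoice states and the operations legal in each
# state, kept separate from any workflow sequence built on top of them.

class InvoiceStateError(Exception):
    pass

class Invoice:
    # Which operations are legal in which state.
    ALLOWED = {
        "draft": {"issue"},
        "issued": {"authorise"},
        "authorised": {"record_payment", "send_reminder"},
        "paid": set(),
    }

    def __init__(self):
        self.state = "draft"

    def _transition(self, operation, next_state):
        if operation not in self.ALLOWED[self.state]:
            raise InvoiceStateError(
                f"cannot {operation} an invoice in state {self.state!r}")
        self.state = next_state

    def issue(self):
        self._transition("issue", "issued")

    def authorise(self):
        self._transition("authorise", "authorised")

    def record_payment(self):
        self._transition("record_payment", "paid")

    def send_reminder(self):
        # A reminder does not change state; it is simply only valid
        # while the invoice is awaiting payment.
        self._transition("send_reminder", self.state)
```

A workflow story (issue, then authorise, then pay, with reminders in between) now becomes a separate concern layered over these state rules, which is exactly the orthogonality that makes the dependent stories splittable.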

There are significant consequences to splitting stories in the solution space.

  • The product owner just needs a simple phrase or sentence that describes the problem space, but remains part of the feedback loop for the solution space stories.
  • Backlog grooming becomes an exercise in understanding the problem space. 
  • Sprint planning blitzes (one day or less) are not sufficient.
  • To be effective, sprint planning becomes continuous; i.e. design is continuous
  • Each story can (potentially) be released on completion
  • Sprint boundaries (time boxes) become less important moments in time
  • … and you will tend towards continuous flow.

Accurate estimation, really?

I ended up in a twitter conversation last weekend about estimation and velocity.  It started when Annu Augustine asked about what qualities to look for in a lead developer, other than technical skills.  One of the qualities I put forward was accurate estimations. This went around a bit and ended up, not surprisingly for me, at velocity.  There are a couple of points that I need to offer my opinion on:

Accurate estimation is not an oxymoron.  Let’s just get over some technicalities first: “accurate” is not the same as “exact”.  Estimation in software is never exact, but the magnitude of accuracy is very significant.  If I say that something will take approximately 2 weeks to achieve, then I need to qualify “approximately”.  What is the magnitude hidden behind “approximately”?  Is it 10% accurate, 50% accurate, 80% accurate?  Let’s say it is 50% accurate.  This means that, at best, I can finish this work in 1 week or, at worst, in 3 weeks.  That’s a big swing.  As a good developer, you have to do better than 50%.  If it’s 50% or less, then it is an indicator that you are lacking understanding (i.e. knowledge) in a key part of the problem and/or solution domain.  Then you have to estimate what it will take to overcome that ignorance, before committing to the original request.
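The arithmetic above can be made explicit.  This is a throwaway sketch (the function name and the reading of the percentages are my own, not a standard formula): a 2-week estimate that is 50% accurate spans 1 to 3 weeks, while 80% accuracy tightens it to roughly 1.6 to 2.4 weeks.

```python
# A sketch of the accuracy arithmetic: an estimate qualified by an
# accuracy fraction yields a best-case/worst-case range, where the
# swing is the portion of the estimate you are uncertain about.

def estimate_range(estimate, accuracy):
    """accuracy is a fraction: 0.5 for '50% accurate', 0.8 for '80%'."""
    swing = estimate * (1.0 - accuracy)
    return (estimate - swing, estimate + swing)
```

Under this reading, higher accuracy means a smaller swing, which lines up with the point that an accuracy of 50% or less signals a gap in your knowledge.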

An estimate is not a commitment.  If you mix these two concepts together, then you most likely will be in trouble sooner rather than later.  The estimate is based on something that you understand and that you can foresee with good accuracy.  In other words, an estimate is a prediction of the future.  The commitment is a promise that you make on part of the estimate in order to make that prediction come true.  If I predict that a problem can be solved in 3 days, then I may make a promise for some part of it by the end of the first day.  This distinction may surprise some who have been using the poker game, or other estimation techniques, in Scrum.  Scrum teams use estimates in planning to make commitments for the entire sprint, then track and calibrate velocity via burndowns, which leads me to the next point.

Team velocity or relative estimation does not make for better estimation.  A team’s velocity, based on tracking historical trends, is only a measure of the energy the team has expended to date.  This expended energy (the area under the curve) is used to predict the potential energy of the team.  That’s all, nothing more, nothing less.  I will stick my neck out and go so far as to say that relative estimation in points is not at all about estimating, but a way to accurately predict the potential energy of a team.  I’ll go even further: relative sizing does not work because software development is about crunching new knowledge, and every new piece of work should be crunching more knowledge.  If it’s the same thing over again, then the design is flawed, or at least at the wrong level of abstraction.  Jumping levels of abstraction and re-shaping designs takes quite a lot of knowledge crunching, and is not relative to anything else but your existing knowledge.  So, relative estimation does not make for better estimations, and velocity just tells you the potential energy of the team.

Where does this leave me?  I’ve given up on relative sizing and estimation.  Knowing my own velocity has not added any value to my ability to estimate because every problem is unique in some way.  I estimate based on the knowledge that I have at that point in time, and force myself to be aware of my accuracy.  All of this, and more, has made me appreciate the value of one piece flow a lot more than time boxed flow.

Bigger stories with few people spanning two sprints

I came across this tweet by Karen Greaves.

Scrum Master fail… Improvement action: Bigger stories where only few members can participate should be scheduled to run over 2 sprints.

After a quick twitter conversation, Karen explained.

It removes the ability to measure progress via working software at the end of the sprint

My response was

Working software is not the only way to measure progress in a sprint. And what if it works? I think it can.

It may well not be the ScrumMaster who failed when it was decided to have a bulkier story span two sprints.  I appreciate that it is a whole lot better when a story fits in one sprint, and that should be our default objective.  However, it’s often not so trivial.  Given that I don’t have a lot of context, I will be assuming a lot.

In this case, it is more likely a failure of the team in that they (a) accepted a story to develop over 2 sprints, or (b) were unable to do enough analysis to consider what could be done in one sprint.  It can also be a failure of the product owner for not entering into a meaningful conversation on the details of the story, and then, again, a failure of the team for not engaging the PO significantly to take the problem forward.

If I encounter a story that spans two sprints, and that happens more often than you might think (it is often discovered mid-sprint), then I’m not interested in working software in that sprint but seek clarity of understanding instead.  The outcome at the end of the immediate sprint is an unambiguous story (or maybe several stories) which is a statement of the problem domain or, even better, the solution domain.  That is what I mean by working software not being the only measure of progress in a sprint.  It is more important to “measure” increased understanding in each sprint, and good statements in the solution domain are at the heart of knowledge crunching.

The most neglected aspect of working software is that it is a measure of understanding of the solution domain.  In my experience, many teams are great at expressing the problem domain in software: their code reflects the analysis of the problem, but not an understanding of the solution.  The end result is a weak design.  Over many sprints that require work in the vicinity of the weak design, there will be a degradation in velocity, because the code is just baking in the problem statement, not a well crafted solution to the problem.

Next, let me deal with the case where just a few team members participate rather than the entire team.  I think that is a completely feasible approach.  I’ve done it many times to great effect and with great efficiency, because the conversation is a lot more direct and contained.  It is later that the distilled knowledge is shared with the team.  This is largely crunching in the problem domain with some rough models in the solution domain.  When I’ve included the entire team in the analysis, the effect has been one of dilution and inefficiency – too many people over a longer period.

Lastly, I asked “What if it works?”.  While this may seem a brash or provocative question, it is meant literally.  What if it really does work?  Then what we’ve achieved is an odd case of delivery over two sprints instead of one.  If it happens often enough, then we need to adapt accordingly: increase sprint duration, have people dedicated to analysis (oops, bite me ‘cos I’m creating a silo), and maybe more if we just think a bit about it.

So, in my opinion, it is absolutely OK to attempt the proposed solution because it is an admission of ignorance, but if the team does not understand the actual situation they’re in, then it is the failure of the team, not the ScrumMaster.  I also think that we should understand the rules of the game a lot more deeply.  Like I’ve said in the past:

It’s not the rules that matter, it’s what we do with the rules that counts.

To each their own, and life goes on.

Stay in bed or come to SGZA

I will be hosting a 3 hour session at the South African Scrum Gathering titled “Live your principles or stay in bed”.  You can read the abstract here.  In my opinion, there is far too little focus on software development itself in Scrum.  So, this is unashamedly a developer session.  I will present various snippets of code, and we will “live our principles” to transform the code into something that is less messy.
I often hear developers, and managers too, saying “It’s so much easier without, so why bother?”.  Well, design is hard.  Applying principles for life is harder.  But if you are a professional developer and have a conscience about your design, your code, and your product, then “an easy life without principles” is not an option.

If you are planning to come along, bring your laptop with your development environment.  I will likely have code samples in Java, C#, Ruby, Javascript, and even, yup, Basic (well, maybe).  All the samples should be very readable, and you could easily translate them into something equivalent in your language.  Better still, bring along some of your own code that you want to share.

In reality, this is stuff that Scrum does not teach you, but that you need to know to avoid Scrum burnout.  Looking back, I should have done something like this sooner.

What’s the point in Scrum?

Scrum people like to use points for estimating and measuring velocity.  I won’t go into detail about how points work and how to play those poker estimation games.  Just search around and you will find a ton of stuff.  So, back to this points stuff.  I have a divided relationship with the humble point.  I like it when a team switches to using points for the first time, because it gives them a chance to think a little bit deeper about what they want to do.  I don’t like it when we start inventing rules around points (and you can lump guidelines and best practices into the rules pot too).  When the rules appear, the thinking disappears.
In every team trying Scrum, there is bound to be a rule about points.  I dare you to put up a hand and say you have none.  These rules are things like “We can’t take anything over 13 points into a sprint”, “Our epics are 100 points”, “The login screen is our baseline of 3 points”, “Anything over 40 points must be broken down”.  So, I double dare you 🙂

Sprint backlog shape with high shared understanding

I have a different view of the humble point.  A point may seem like a one-dimensional thing, but it has some facets built into it.  One facet is the “amount of effort to build something”.  Another facet is the “amount of ignorance”, and this has an inverse: the “amount of shared knowledge”.  Sometimes I find it useful to make a judgement based on what I don’t know as opposed to what I do know.  Regardless of whether I choose to view the cup as half full or half empty, I cannot estimate the effort to build something based upon what I don’t know.  So, effort tends to track the amount of knowledge, not ignorance.  As knowledge increases, my ignorance decreases and each point starts representing more and more pure effort.

However, if I am in a state of complete ignorance, then it is completely impossible for me to make any judgement on effort to build.  I’d be simply speculating.  What I can do, though, is create a time box to explore the unknown so that I can start moving out of my state of ignorance.  This is also an estimate and I am not making an excuse for non-delivery either.  I need to understand some things and also show my understanding in some code.  Yes, the code that I produce may not have a visible user interface or some other convenient demo-friendly stuff, but I need to carefully plan my sprint review to express my understanding.

It’s all about gaining a SHARED understanding.  This understanding is a body of knowledge that I have learned and need to confirm with others.  This act of confirmation can happen in several ways.  I can have a conversation and explain what I understand, I can draw a blocks-and-lines picture, or show a spreadsheet, and so on.  Regardless of the method of communication, I still use the opportunity of discovery to express my understanding in code as tests.  Another powerful way of expressing my understanding is to write out a story and a few scenarios.  Using BDD-style grammar can be a great way of concisely expressing some things that can be easily shared.  Yes, you heard me correctly: as a developer, I write the stories and scenarios.  When I am given a story and scenario by someone else and asked to estimate, I am attempting to estimate based on another person’s expression of their understanding and my assumed understanding.

In a recent discussion with Jimmy Nilsson, he said that he preferred to call scenarios “examples”.  That really resonated with me.  I also do a lot of discovery by example, and then gradually introduce more and more into the examples as I get more and more confident of my knowledge.

How do I know how much I don’t know?  That’s a tough question.  What I do comes straight out of my TDD habits.  I create a list of questions – my test list.  For some questions, I will know the answer easily, some not at all, and some are debatable.  The more that I can answer, the better I can estimate effort.  I can then turn the questions that I can answer into statements of fact.  The more facts I have, the less ignorant I am.
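One way to picture this habit, with entirely made-up names and questions: keep the test list as questions flagged answered or open, and let the fraction of open questions gauge the remaining ignorance.

```python
# A sketch of the 'test list as questions' habit: each question is
# either answered (a statement of fact) or still open (ignorance).

def ignorance(questions):
    """Fraction of questions still unanswered, from 0.0 (all facts) to 1.0."""
    open_count = sum(1 for _, answered in questions if not answered)
    return open_count / len(questions)

# Illustrative test list: (question, answered?)
test_list = [
    ("Can an invoice be paid in parts?", True),
    ("What happens to reminders after part-payment?", False),
    ("Who may authorise a credit note?", False),
    ("Is an issued invoice immutable?", True),
]
```

With two of the four questions still open, `ignorance(test_list)` is 0.5 – half the list is still speculation, so any effort estimate carries that much uncertainty.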

Recently, I worked with a team that wanted to get TDD going, and the most significant change that I introduced was in backlog grooming and sprint planning.  During these two ceremonies, we (as a team) threw questions madly at a requirement, regardless of whether we knew the answer or not.  We then worked through the questions (as a team) to establish how much we could answer.  The trend that emerged was that the new estimates were either half or double the original estimates.  When they were halved, it was generally because we were able to negotiate some of the unknowns (the ignorant areas) into a future sprint with the product owner.  In some cases, the product owner was equally ignorant, and was reacting to the “business wants the feature” pressure.  When they were doubled, it was because so much more was discovered than originally assumed.  At the end of the session, we always asked the meta-question “If we answer all these questions sufficiently, will we be done?”.  I call this style of working “test first backlog grooming” or “test first sprint planning”.

Often I discover more things I don’t know. Annoyingly, this happens in the middle of a sprint, but if it did not happen in that phase of work, then perhaps I was not digging deep enough.  When this happens, I just keep on adding them to my list of questions.  These new questions are raised at any time with others on the team, the customer or with whoever can help me understand a bit more.  Sometimes, it’s put on the table for negotiation to be dealt with at another time.  Nevertheless, standups still seem to be a good time to put new questions on the table, for discussion later.

There are several ripple effects of thinking about points in this manner – as gauges of ignorance and shared knowledge.

The first is about the possible shape of your sprint backlog.  If you have deep understanding, then it is likely that you will be able to decompose complex problems into simple solutions that take less effort.  The effect is that low-point stories are in greater number in a sprint.

If you are highly ignorant, then the estimation points reflect that and there are more medium to high point stories in the sprint.

The second is about what you value in a story. You will find less value in the ontology of epics, themes and stories.  It is no longer about size of effort but degree of understanding or ignorance.  Instead, the shape of the product backlog is something that is constantly shifting from high uncertainty (big point numbers) to high certainty (low point numbers).  That’s what test first backlog grooming gives you.

The third is about continuous flow that is the nature of discovery.  When you work steadily at reducing your degree of ignorance, then you are steadily answering questions through answers expressed in code, and steadily discovering new questions that need answering.  This process of discovery is one of taking an example based on what you know in this moment and modeling it.  Then expanding that example with one or two more additional twists, and modeling that, and so it goes.

It also touches product ownership and software development.  When you work in this way, explicit estimation of effort becomes less significant.  Moments that have been earmarked as important points in the life of the product become more significant.  Call them milestones.  These milestones are strategically and tactically defined, and become a dominant part of product ownership.  Software development becomes the act of having long-running conversations with the customer.  Those milestones give context for the content of those conversations.  Ultimately, those conversations are expressed as a set of organised thoughts in code.  If your code is not organised well, then perhaps you don’t understand the problem, or the solution, or both.

This is a long story for a short message.  The high priority is to resolve the tension that exists within an estimate: knowledge/ignorance fighting against effort.  When you release that tension through shared understanding, then you can deal with the tension that exists in the act of creating those significant milestones.  In my opinion, that’s the real wicked problem.

Product Ownership Webinar

On 12 May 2011 I will be joining Kent Beck and Henrik Kniberg in a free webinar hosted by SD Times to take a deeper look at product ownership as described by the Scrum methodology.  I think we all have a lot of questions, especially Kent, but I will also put forward some things that I have tried and some opinions of what I think should be tried. As usual, I welcome critical comment.
For a long time I have been wary of the way product ownership is “taught” in CSPO courses, and the way it is implemented in Scrum teams. I think the fundamental tension of product ownership is not being addressed.  So, at the heart of my talk, I want to explore the tension that a product owner needs to resolve and, maybe, some ways of resolving that tension.

Regardless of whether we offer workable solutions, I think the webinar will raise questions that are well worth discussing in larger groups.

You can’t let Scrum die

In my last post I said we should let Scrum die.  But we can’t let Scrum die.  It doesn’t behave like that.  It will only die of its own accord if we die first, and then it dies because it has no reason to exist.  So you have to kill it.  Here’s why (again?).
Software development is about people and the way people work alone and together.  People create code in software development.  Without that code, these people don’t exist; they have no purpose.  Code is the creation of the people, and people live off this code.  When the code is good, then life is good.  When the code is poisonous, then people start dying slowly.  When the smell of death is in the air, they look for help.  Some stare into the mirror called Scrum. They see themselves and the way they behave.  It’s an ugly sight.  They realise that they should behave better.  After all, software is about the way people work alone and together.

Regularly looking into the Scrum mirror, they improve their behaviour over time, and everyone is happier than the moment before.  That’s a nice view.  Just look in the mirror and it looks good.  Very rarely do they also look through the window into the fields of code that feed them.  The poison is still coursing through their veins.  They will die, eventually … killed by the host that they created, the very thing that was supposed to nourish them.  The only way to survive is to deal with the fields of code.  Get rid of the toxins.  There are two fundamental ways(*) to get rid of toxins: (a) eliminate duplication, and (b) make the code as you wish it to be.

If they just stare into the mirror and hardly ever look out the window, they will just exist on the plateau of complacency.  In order to avoid that state of being, they need to focus on the fields of code.  The urge to look in the mirror is strong, and as useful as it was, it becomes a very unbalanced state of existence.

So, look in the mirror, but look through the window too.  Create fields of code without toxins so that you provide nourishment for the next person.  That is ubuntu coding.

Actually, the only mirror you need is the person working next to you.

(*) Think deeply about these two fundamental things and try it out.  Everything else will fall into place from this. For example, the act of eliminating duplication forces you to consider where to locate a single piece of code, how it should be used and where it can be used, etc.  That is design and architecture.  With duplication, you don’t need to consider any of those things.  That’s toxic.

Let Scrum die

I live in Cape Town, South Africa.  Apart from the great beaches, a mountain in the middle of the city, good food, and good wine, there is a feeling of enthusiasm for agile software development in this town.  It’s been around for a while but really started getting all hot and sweaty with the Scrum wave.  I estimate that it’s been at least 2 to 3 years since some teams in Cape Town adopted Scrum.  Of course, there are teams adopting Scrum in this community every year.  That’s all good, but I’m afraid it’s shaping up to be just like Table Mountain.

Regardless of the hyper-performing tagline of Scrum, each team settles down to something with which everyone is comfortable.  The big change that has happened is that of changed behaviour.  Scrum does that – it alters behaviours.  When everyone plays by the rules (i.e. they behave consistently) then you don’t have chaos.  It’s better than better – it’s just nice!  It is very comfortable.  But I see signs of chaos not far away again.  This is what is happening and it is almost without exception here in Cape Town.  Some are off the table top already.

Let me make the Scrumvangelists feel better for a brief moment.  Scrum won’t kill you  – directly, but your adoption of Scrum can kill you if you ignore one thing – your code base.  It is a code base out of control that leads to certain death, and Scrum won’t be the saviour this time.  Bringing your code base under control is not easy.  It is all about architecture, design and changing your style of development to live the principles that end up characterising good code.  I don’t need to tell you what you need to do.  It’s been told so many times – TDD, refactoring, continuous delivery, single code base, etc.  At the code face it’s SOLID and DRY and lots more.

The plateau of complacency is an interesting place.  We may think we are collaborating, but in reality we have just found a way to work together without causing chaos.  I call it co-operation.  It’s just keeping chaos under control so that we can make the sprint, again and again and again.  A sure sign of being on the plateau is when we can’t get rid of our Scrum master.  When we work the Scrum master out of the system, the team will need to take on more on its shoulders.

A major limiting factor in getting off the plateau will be the people on the development team.  Hyper-performing teams have talented developers(*) who are able to design and express that design in code without breaking their principles.  A team that is unable to bring a code base under control will compensate by leaning on a Scrum master for support.

In the journey of dramatic improvements to bring your code base under control, there are a few things that you should take notice of.

  • An architecture will emerge that supports the design of the resident parts.  Things fit together beneficially.
  • The code base will get smaller and the team will shrink to about 2 or 3 people.
  • Each developer will take on greater responsibility and will find it difficult to break core principles.  The act of living those principles will result in values that don’t need to be listed on a poster on the wall.
  • The Scrum master will become redundant.
  • The product owner will do more by working directly with the developers.

Then you won’t need Scrum, because the code base is under control, developers represent the interest of their customers, and the bottleneck is now at the customer.

Am I being idealistic?  No, it’s about pragmatic decisions and the pursuit of freedom.  It’s hippie Scrum.

(*) By talented I mean developers who have the ability to communicate, share, solve problems simply and elegantly, and can sniff out bad design and architecture.  Talented developers are not code monkeys that can churn out code.  Their single differentiating trait is the ability to design well and express that design in code.