Upcoming talks

I’ve got a busy few weeks of preparation ahead for a few talks that I will be giving.  In September I will be speaking at the South African Scrum Gathering.  The Johannesburg talk is on product ownership and combines the content that I presented earlier in the SD Times webinars.  The other I hope to keep quite code-centric, aimed squarely at developers and architects.
Then in November, I’ve been kindly invited to speak at Oredev in Malmo, Sweden.  I will be talking on the Java and Architecture tracks.  Being in Sweden, I’ll get a chance for some face time with my Scandinavian colleagues, and lots of offshore geek friends.  If you are a South African looking for a decent developer conference, then consider Oredev.  It has a good vibe, some very good content, and is generally good value for money.

You did what with your ESB!?

I’ve seen many enterprise service bus (ESB) implementations that are, well, quite extraordinary.  Sadly, they are extraordinary for more wrong reasons than right.  We had a SOA frenzy not too long ago, and many developers got caught up in that feeding frenzy.  Hey, when you’re a shark off the south coast of KwaZulu-Natal in winter, everything looks like a sardine.  In fact, if you were a tiny sole, just wandering around at the bottom of a big sea, chances are you also wanted some of them sardines.
That was a long time ago, but now we get to see the extraordinary things that developers built with their ESBs.  Here are my top five.

#5 Look, we can synchronize our databases with our ESB

You have two application databases and you want the changes in one to be propagated to the other.  The ESB seemed like a perfect way to spray data around.  Hey, just give me an endpoint and I’ll pump some data through it.  Well, syncing data is a replication problem with its own challenges: change detection, conflict resolution, retry or abort strategies, and bulk materialization on both ends when things get horribly out of sync.
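
To make that concrete, here is a minimal sketch (all names invented) of what “just pump it through an endpoint” glosses over: change detection, a conflict policy, and a retry-or-abort decision all still have to live somewhere, ESB or not.

```java
// Hypothetical sketch: the pieces a database-to-database sync needs besides a transport.
public class CustomerSync {

    // Change detection: only rows modified since the last sync are candidates.
    // A last-modified timestamp (or version column) is one common way to spot them.
    static class CustomerRow {
        long id;
        long lastModifiedMillis;
        String email;
    }

    // Conflict resolution: both sides changed the same row since the last sync.
    // "Last writer wins" is the simplest policy; real systems often need per-field
    // merges, a manual review queue, or an abort-and-retry instead.
    CustomerRow resolve(CustomerRow local, CustomerRow remote) {
        return local.lastModifiedMillis >= remote.lastModifiedMillis ? local : remote;
    }

    // Retry or abort: a failed push cannot simply be dropped, or the two databases
    // drift apart until a bulk re-materialization is the only way to recover.
    void push(CustomerRow row, int attemptsLeft) {
        try {
            sendToOtherDatabase(row);            // hypothetical transport call
        } catch (RuntimeException e) {
            if (attemptsLeft > 0) {
                push(row, attemptsLeft - 1);
            } else {
                parkForManualRepair(row);        // hypothetical dead-letter step
            }
        }
    }

    void sendToOtherDatabase(CustomerRow row) { /* transport omitted */ }

    void parkForManualRepair(CustomerRow row) { /* dead-letter handling omitted */ }
}
```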

#4 You can call my stored proc from the other side of my ESB

Somewhere in your app you had something that called a stored procedure.  The ESB seemed like a really easy way to wrap that SProc with a service and hand out a new endpoint.  Cool, now everyone can call your stored proc.  Well, perhaps the original thing that was calling your stored proc was the wrong architecture decision in the first place.  If it was the right decision, suddenly opening it up to multiple callers that you have no control over means you have to be certain that your SProc is re-entrant, can handle the concurrency, and a whole lot more.
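
To show the shape of the thing, here is a hedged sketch with a hypothetical procedure and invented names: the service wrapper itself is trivial, which is exactly the trap, because it answers none of the questions that matter once callers you don’t control start arriving.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch: a thin "service" that simply forwards to a stored procedure.
public class BalanceService {

    private final Connection connection;

    public BalanceService(Connection connection) {
        this.connection = connection;
    }

    // The shiny new endpoint the ESB hands out to everyone.
    public void adjustBalance(long accountId, double amount) throws SQLException {
        try (CallableStatement call =
                 connection.prepareCall("{call update_account_balance(?, ?)}")) {
            call.setLong(1, accountId);
            call.setDouble(2, amount);
            call.execute();
            // Nothing here says whether update_account_balance is re-entrant, what
            // happens when two callers hit the same account at once, or who retries
            // on a deadlock. Wrapping the proc made none of those decisions for us.
        }
    }
}
```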

#3 My ESB now manages my transactions

Enough said.

#2 My authentication runs as a centralized service on my ESB

Login seems like an easy enough “hello world” service to get up and running.  Hey, since our ESB has all of these apps hanging off it, we can get our Login Service to do single sign on (SSO).  We just need to call the login service of each app, front them with our single sign on service, and then this service will dish out authentication tokens.  Oh, we might as well put in a centralised authorisation service too.  Well, firstly, login is not a service.  You can take that thought further from here.

#1 I replaced my call stack with my ESB

To be fair, most developers don’t know that they did this.  Let me explain.  You had some app that called a method that called a few more methods and so on.  We all know that this builds up a call stack that gets popped on the return path.  When you take those classes that have those methods and dump them as services on your ESB, the call stack has not changed, but now there is an ESB in between.  Well, you can’t just shift classes wrapped as services onto your ESB.  You actually have to get service oriented in your architecture.  Only then will that call stack change.
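
Here is a tiny illustrative sketch, with made-up names.  The nested calls below build an ordinary call stack; re-hosting each class as a synchronous “service” on the bus leaves that stack exactly as it was, only with an ESB wedged between the frames.

```java
// Hypothetical sketch: an ordinary in-process call chain.
public class CallStackExample {

    static class TaxCalculator {
        double taxFor(double amount) { return amount * 0.15; }
    }

    static class PricingService {
        private final TaxCalculator tax = new TaxCalculator();
        double priceWithTax(double base) {
            return base + tax.taxFor(base);     // frame 3
        }
    }

    static class OrderService {
        private final PricingService pricing = new PricingService();
        double totalFor(double base) {
            return pricing.priceWithTax(base);  // frame 2
        }
    }

    public static void main(String[] args) {
        // frame 1 -> frame 2 -> frame 3, popped on the return path.
        // Dumping OrderService, PricingService and TaxCalculator onto the bus as
        // three synchronous services does not change this shape at all.
        System.out.println(new OrderService().totalFor(100.0));
    }
}
```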

It’s not surprising that we see this now.  It takes a long time for an architectural style to be understood.  Unfortunately, there are some costly mistakes along the way.  We also can’t blame the ESB, but, beware, the sardine run happens every year.

What’s the point in Scrum?

Scrum people like to use points for estimating and measuring velocity.  I won’t go into detail about how points work and how to play those poker estimation games.  Just search around and you will find a ton of stuff.  So, back to this points stuff.  I have a divided relationship with the humble point.  I like it when a team switches to using points for the first time, because it gives them a chance to think a little bit deeper about what they want to do.  I don’t like it when we start inventing rules around points (and you can lump guidelines and best practices into the rules pot too).  When the rules appear, the thinking disappears.
In every team trying Scrum, there is bound to be a rule about points.  I dare you to put up a hand and say you have none.  These rules are things like “We can’t take anything over 13 points into a sprint”, “Our epics are 100 points”, “The login screen is our baseline of 3 points”, “Anything over 40 points must be broken down”.  So, I double dare you :-)

Sprint backlog shape with high shared understanding

I have a different view of the humble point.  A point may seem like a one dimensional thing, but it has some facets built into it.  One facet is the “amount of effort to build something”.  Another facet is the “amount of ignorance”, and this has an inverse – the “amount of shared knowledge”.  Sometimes I find it useful to make a judgement based on what I don’t know as opposed to what I do know.  Regardless of whether I choose to view the cup as half full or half empty, I cannot estimate the effort to build something based upon what I don’t know.  So, effort tends to track the amount of knowledge, not ignorance.  As knowledge increases, my ignorance decreases and each point starts representing more and more pure effort.

However, if I am in a state of complete ignorance, then it is completely impossible for me to make any judgement on effort to build.  I’d be simply speculating.  What I can do, though, is create a time box to explore the unknown so that I can start moving out of my state of ignorance.  This is also an estimate and I am not making an excuse for non-delivery either.  I need to understand some things and also show my understanding in some code.  Yes, the code that I produce may not have a visible user interface or some other convenient demo-friendly stuff, but I need to carefully plan my sprint review to express my understanding.

It’s all about gaining a SHARED understanding.  This understanding is a body of knowledge that I have learned and that I need to confirm with others.  This act of confirmation can happen in several ways.  I can have a conversation and explain what I understand, I can draw a blocks-and-lines picture, show a spreadsheet, and so on.  Regardless of the method of communication, I still use the opportunity of discovery to express my understanding in code as tests.  Another powerful way of expressing my understanding is to write out a story and a few scenarios.  Using BDD style grammar can be a great way of concisely expressing things that can be easily shared.  Yes, you heard me correctly – as a developer, I write the stories and scenarios.  When I am given a story and scenario by someone and asked to estimate, then I am attempting to estimate based on another person’s expression of their understanding and my assumed understanding.
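
As an illustration (the story, names and classes below are all made up), this is what I mean by writing the story and scenario myself and then expressing that understanding as a test.

```java
//   Story: As a returning customer I want my saved delivery address used at
//          checkout so that I don't have to retype it every time.
//
//   Scenario: Returning customer with one saved address
//     Given a customer with a saved delivery address
//     When they reach the checkout
//     Then that address is pre-selected

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CheckoutAddressTest {

    // Hypothetical domain classes, just enough to make the scenario concrete.
    static class Customer {
        final String savedAddress;
        Customer(String savedAddress) { this.savedAddress = savedAddress; }
    }

    static class Checkout {
        String preselectedAddressFor(Customer customer) { return customer.savedAddress; }
    }

    @Test
    public void returningCustomerWithOneSavedAddress() {
        // Given a customer with a saved delivery address
        Customer customer = new Customer("12 Kloof Street, Cape Town");

        // When they reach the checkout
        String preselected = new Checkout().preselectedAddressFor(customer);

        // Then that address is pre-selected
        assertEquals("12 Kloof Street, Cape Town", preselected);
    }
}
```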

In a recent discussion with Jimmy Nilsson, he said that he preferred to call scenarios “examples”.  That really resonated with me.  I also do a lot of discovery by example, and then gradually introduce more and more into the examples as I get more and more confident of my knowledge.

How do I know how much I don’t know?  That’s a tough question.  What I do comes straight out of my TDD habits.  I create a list of questions – my test list.  For some questions, I will know the answer easily, some not at all, and some are debatable.  The more that I can answer, the better I can estimate effort.  I can then turn the questions that I can answer into statements of fact.  The more facts I have, the less ignorant I am.
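
One way I keep that list honest (a hypothetical example below) is to write the questions down as test names.  Questions I can answer become real assertions, which are my statements of fact; questions I can’t answer yet stay ignored, and their count is a rough gauge of my remaining ignorance.

```java
import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical test list: each question from grooming becomes a test name.
public class InvoiceDiscountQuestions {

    @Test
    public void doesAnOrderOfMoreThanTenItemsGetTheBulkDiscount() {
        // Answered: turned from a question into a statement of fact.
        assertTrue(new Order(11).qualifiesForBulkDiscount());
    }

    @Ignore("Still ignorant: waiting on the product owner")
    @Test
    public void whatHappensToTheDiscountWhenItemsAreReturned() {
    }

    @Ignore("Debatable: the team disagrees, needs a conversation")
    @Test
    public void canADiscountedOrderAlsoUseAPromotionCode() {
    }

    // Hypothetical domain class, just enough to make the answered question run.
    static class Order {
        private final int items;
        Order(int items) { this.items = items; }
        boolean qualifiesForBulkDiscount() { return items > 10; }
    }
}
```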

Recently, I worked with a team that wanted to get TDD going, and the most significant change that I introduced was in backlog grooming and sprint planning.  During these two ceremonies, we (as a team) threw questions madly at a requirement, regardless of whether we knew the answer or not.  We then worked through the questions (as a team) to establish how much we could answer.  The trend that emerged was that the original estimates were either half or double the new estimates.  When they were halved, it was generally because we were able to negotiate some of the unknowns (the ignorant areas) into a future sprint with the product owner.  In some cases, the product owner was equally ignorant, and was reacting to the “business wants the feature” pressure.  When they were doubled, it was because so much more was discovered than originally assumed.  At the end of the session, we always asked the meta-question “If we answer all these questions sufficiently, will we be done?”.  I call this style of working “test first backlog grooming” or “test first sprint planning”.

Often I discover more things I don’t know.  Annoyingly, this happens in the middle of a sprint, but if it did not happen in that phase of work, then perhaps I was not digging deep enough.  When this happens, I just keep adding to my list of questions.  These new questions are raised at any time with others on the team, the customer, or whoever can help me understand a bit more.  Sometimes a question is put on the table for negotiation, to be dealt with at another time.  Nevertheless, standups still seem to be a good time to put new questions on the table for discussion later.

There are several ripple effects of thinking about points in this manner – as gauges of ignorance and shared knowledge.

The first is about the possible shape of your sprint backlog.  If you have deep understanding, then it is likely that you will be able to decompose complex problems into simple solutions that take less effort.  The effect is that low point stories are in greater number in a sprint.

If you are highly ignorant, then the estimation points reflect that and there are more medium to high point stories in the sprint.

The second is about what you value in a story. You will find less value in the ontology of epics, themes and stories.  It is no longer about size of effort but degree of understanding or ignorance.  Instead, the shape of the product backlog is something that is constantly shifting from high uncertainty (big point numbers) to high certainty (low point numbers).  That’s what test first backlog grooming gives you.

The third is about the continuous flow that is the nature of discovery.  When you work steadily at reducing your degree of ignorance, you are steadily answering questions through answers expressed in code, and steadily discovering new questions that need answering.  This process of discovery is one of taking an example based on what you know in this moment and modeling it, then expanding that example with one or two additional twists and modeling that, and so it goes.

It also touches product ownership and software development.  When you work in this way, explicit estimation of effort becomes less significant.  Moments that have been earmarked as important points in the life of the product become more significant.  Call them milestones.  These milestones are strategically and tactically defined, and become a dominant part of product ownership.  Software development becomes the act of having long running conversations with the customer.  Those milestones give context for the content of those conversations.  Ultimately, those conversations are expressed as a set of organised thoughts in code.  If your code is not organised well, then perhaps you don’t understand the problem, or the solution, or both.

This is a long story for a short message.  The high priority is to resolve the tension that exists in an estimate, in the form of knowledge/ignorance fighting against effort.  When you release that tension through shared understanding, then you can deal with the tension that exists in the act of creating those significant milestones.  In my opinion, that’s the real wicked problem.

Product Ownership Webinar

On 12 May 2011 I will be joining Kent Beck and Henrik Kniberg in a free webinar hosted by SD Times to take a deeper look at product ownership as described by the Scrum methodology.  I think we all have a lot of questions, especially Kent, but I will also put forward some things that I have tried and some opinions of what I think should be tried. As usual, I welcome critical comment.
For a long time I have been wary of the way product ownership is “taught” in CSPO courses, and the way it is implemented in Scrum teams. I think the fundamental tension of product ownership is not being addressed.  So, at the heart of my talk, I want to explore the tension that a product owner needs to resolve and, maybe, some ways of resolving that tension.

Regardless of whether we offer workable solutions, I think the webinar will raise questions that are well worth discussing in larger groups.

A DVCS does not reduce problems in code

Let’s get this out of the way quickly.
Using any VCS requires you to do merges. The more frequently you commit, the less merge pain you have. With a distributed VCS like git or mercurial, you also have to fetch from a remote repository, merge and push. Merging does not disappear, so deal with it.

Why am I stating the obvious? Because I have seen a couple of cases now where people have switched from a centralised VCS (cvs, svn style) to a distributed one and believed that it would help them reduce problems in code.  The thing that helps you reduce problems is tests.  Running your tests under a CI server just creates a tighter, automated feedback loop.  Of course, you have to update/fetch, merge, and commit/push for your CI to be useful.

Hmmm, did you see what flashed by?  Your CI server does not give you a tighter feedback loop; it is frequent commits that give you a really tight feedback loop.  The CI server will sit idle for as long as you don’t commit and push.

Rolling out a methodology is about design

Implementing a new methodology is a painful exercise.  Lots change, lots break, and there is lots of so-called “collateral damage”.  I have tried implementing new methodologies, including XP and Scrum, many times.  I have also witnessed a lot of attempts by other people, and been involved while others drove the initiative.  Every time, it has led the organisation into a disruptive, stressful state.  The most common position taken by the implementors is:

Of course it is disruptive.  That’s part of the change process.  We all knew that from the moment we started. How else do you think it’s going to get better?

In the past, I’ve been guilty of the same.  The end result is that I am left unsatisfied and unfulfilled, and so is the organisation. Yes, it may eventually get better.  Eventually I got sick of taking this high road.  Well, it only took two such situations a long time ago to realise that I was messing up royally.

In my quest to do things better, I drew inspiration from test driven development and from dealing with old, messy legacy code.  Three very distinct things stood out, all rooted in software development, and the parallel with changing legacy code is very, very apt.

  1. Rolling out a methodology is at the implementation level.  So, was there a design for the implementation in the first place?  Implementation without design always ends up in a mess.
  2. Even if we abstracted the design from one implementation, does the design support all implementations?  “Similar” does not equate to “same”.
  3. The existing methodology has a set of protocols by which the organisation functions, while the new methodology introduces a new set of protocols.  Just dumping the new protocols on top is the equivalent of rip and replace – the grand rewrite without care for migration.  Is this the only way?

So, taking inspiration from code, here is something that you can try when attempting a new rollout.

Understand the existing implementation. Use a test based approach to concretely discover the existing protocols within the organisation.  This may be as simple as playing out what-if scenarios that test the communication pathways.  Keep your eyes wide open for seams or boundaries.  These seams are candidate incision points for introducing a new protocol facing in towards the area under change, while honoring the old protocol facing out towards the other side that should not be changed (yet).

Design, design, design. Once you understand the existing protocols and how they behave under certain scenarios, you switch to design mode.  Look again at the dependency graph within the organisation for a particular protocol.  What is affected when a certain event occurs?  Then look at your candidate seams and incision points and design your wedge.  It may be a transformer that completely modifies information as it crosses over the seam.  Maybe it’s a buffer with a finite capacity that slows things down and trickle feeds info to the other side.  What about a filter that removes some info?  How about something that just decorates existing info with a wee bit more that is tolerable on the other side?

This design is mostly about designing feedback loops.  As such, you need to consider the temporal and synchronous aspects of feedback also. What is the expected latency when I stick in this new wedge?  Will it take one week or one day or one hour when something crosses this boundary?  Do we send off some info and wait for a response on the same pathway, or do we get informed of which other pathway to regularly check for the response?  Perhaps someone gets nudged when the response arrives.

Implement it test first. While it may seem like a lot of upfront work is necessary to get the ball rolling, it can be done in tiny steps.  You don’t need to fall into the analysis hole when looking for seams.  Nor do you need to get stuck looking for the perfect design.  It is better to remain concrete for long periods of time than to speculate about possibilities.  Doing a little bit at a time with some small tests helps you keep both feet on the ground.  For example, say you want to switch from the old protocol of reporting progress with budgeted vs actual to burning down story points.  Some people still need the budget vs actual report, and it is inefficient to maintain both (not to mention not DRY at all).  We need a way of transforming from burn down to budget vs actual.  Think about candidate tests that will shift you towards that goal.  Maybe it’s “I should be able to take existing budgeted tasks in a time frame and map them to tasks on the burndown”.  Perhaps it is “I should be able to introduce new time frames on the budgeted vs actuals that synchronise with new sprint boundaries”.
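
As a sketch of that first candidate test (all the names below are invented), the wedge is a transformer that keeps the old budgeted vs actual report alive while the team reports progress on a burndown.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical test-first wedge: map an old budgeted task onto the new burndown.
public class BurndownTransformerTest {

    @Test
    public void existingBudgetedTaskMapsOntoTheBurndown() {
        // Old protocol: a task budgeted at 16 hours in the "May week 2" time frame.
        BudgetedTask budgeted = new BudgetedTask("Build login screen", 16, "May week 2");

        // The wedge: transform it into a task on the burndown of the matching sprint.
        BurndownTask onBurndown = BurndownTransformer.toBurndown(budgeted, "Sprint 7");

        // The new protocol still carries what the old report needs.
        assertEquals("Build login screen", onBurndown.name);
        assertEquals(16, onBurndown.remainingHours);
        assertEquals("Sprint 7", onBurndown.sprint);
    }

    // Hypothetical types, just enough structure to drive out the transformer.
    static class BudgetedTask {
        final String name; final int budgetedHours; final String timeFrame;
        BudgetedTask(String name, int budgetedHours, String timeFrame) {
            this.name = name; this.budgetedHours = budgetedHours; this.timeFrame = timeFrame;
        }
    }

    static class BurndownTask {
        final String name; final int remainingHours; final String sprint;
        BurndownTask(String name, int remainingHours, String sprint) {
            this.name = name; this.remainingHours = remainingHours; this.sprint = sprint;
        }
    }

    static class BurndownTransformer {
        static BurndownTask toBurndown(BudgetedTask task, String sprint) {
            return new BurndownTask(task.name, task.budgetedHours, sprint);
        }
    }
}
```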

These are just some things that I think and try out.  It comes from me being sick and tired of subjecting people to stressful implementations and being party to messed up implementations too.  It’s just too easy to blame someone or something else for our own ineptitude.  I don’t take the high road anymore.  I take the middle road.  It doesn’t travel straight, and has branches with dead ends too, but it leaves me a lot more content.

You can’t let Scrum die

In my last post I said we should let Scrum die.  We can’t let Scrum die.  It doesn’t behave like that.  It will only die of its own accord if we die first, and then it dies because it has no reason to exist.  So you’ve got to kill it.  Here’s why (again?).
Software development is about people and the way people work alone and together.  People create code in software development.  Without that code, these people don’t exist; they have no purpose.  Code is the creation of the people, and people live off this code.  When the code is good, then life is good.  When the code is poisonous, then people start dying slowly.  When the smell of death is in the air, they look for help.  Some stare into the mirror called Scrum. They see themselves and the way they behave.  It’s an ugly sight.  They realise that they should behave better.  After all, software is about the way people work alone and together.

Regularly looking into the Scrum mirror, they improve their behaviour over time, and everyone is happier than the moment before.  That’s a nice view.  Just look in the mirror and it looks good.  Very rarely do they also look through the window into the fields of code that feed them.  The poison is still coursing through their veins.  They will die, eventually … killed by the host that they created, the one that was supposed to nourish them.  The only way to survive is to deal with the fields of code.  Get rid of the toxins.  There are two fundamental ways(*) to get rid of toxins: (a) eliminate duplication, and (b) make the code as you wish it to be.

If they just stare into the mirror and hardly ever look out the window, they will just exist on the plateau of complacency.  To avoid that state of being, they need to focus on the fields of code.  The urge to look in the mirror is strong, and as useful as it once was, looking only at the mirror becomes a very unbalanced state of existence.

So, look in the mirror, but look through the window too.  Create fields of code without toxins so that you provide nourishment for the next person.  That is ubuntu coding.

Actually, the only mirror you need is the person working next to you.

(*) Think deeply about these two fundamental things and try them out.  Everything else will fall into place from them. For example, the act of eliminating duplication forces you to consider where to locate a single piece of code, how it should be used, where it can be used, and so on.  That is design and architecture.  With duplication, you don’t need to consider any of those things.  That’s toxic.
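
A tiny illustration of this footnote, with made-up names: the moment you remove a duplicated VAT rule, you are forced to decide where the single rule lives and who may call it, and that decision is design.

```java
// Hypothetical sketch: eliminating one duplicated rule forces a design decision.
public class VatExample {

    // Before: the 15% VAT rule copied into both the invoice and the quote code.
    //   double invoiceTotal = net * 1.15;   // somewhere in InvoiceScreen
    //   double quoteTotal   = net * 1.15;   // somewhere in QuoteScreen

    // After: one home for the rule. Deciding that this class exists, what it is
    // called, and which modules may depend on it is design and architecture.
    static final class Vat {
        private static final double RATE = 0.15;
        static double gross(double net) { return net * (1 + RATE); }
    }

    public static void main(String[] args) {
        System.out.println(Vat.gross(100.0));   // both screens now call the one rule
    }
}
```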

Let Scrum die

I live in Cape Town, South Africa.  Apart from the great beaches, a mountain in the middle of the city, good food, and good wine, there is a feeling of enthusiasm for agile software development in this town.  It’s been around for a while but really started getting all hot and sweaty with the Scrum wave.  I estimate that it’s been at least 2 to 3 years since some teams in Cape Town adopted Scrum.  Of course, there are teams adopting Scrum in this community every year.  That’s all good, but I’m afraid it’s shaping up to be just like Table Mountain.

Regardless of the hyper-performing tagline of Scrum, each team settles down to something with which everyone is comfortable.  The big change that has happened is one of behaviour.  Scrum does that – it alters behaviours.  When everyone plays by the rules (i.e. they behave consistently) then you don’t have chaos.  It’s better than better – it’s just nice!  It is very comfortable.  But I see signs of chaos not far away again.  This is what is happening, almost without exception, here in Cape Town.  Some are off the table top already.

Let me make the Scrumvangelists feel better for a brief moment.  Scrum won’t kill you, not directly, but your adoption of Scrum can kill you if you ignore one thing: your code base.  It is a code base out of control that leads to certain death, and Scrum won’t be the saviour this time.  Bringing your code base under control is not easy.  It is all about architecture, design and changing your style of development to live the principles that end up characterising good code.  I don’t need to tell you what you need to do.  It’s been told so many times – TDD, refactoring, continuous delivery, single code base, etc.  At the code face it’s SOLID and DRY and lots more.

The plateau of complacency is an interesting place.  We may think we are collaborating but in reality we have just found a way to work together without causing chaos.  I call it co-operation.  It’s just keeping chaos under control so that we can make the sprint, again and again and again.  A sure sign of being on the plateau is when we can’t get rid of our Scrum master.  When we work the Scrum master out of the system, the team will need to take more onto its own shoulders.

A major limiting factor in getting off the plateau will be the people on the development team.  Hyper-performing teams have talented developers(*) who are able to design and express that design in code without breaking their principles.  A team that is unable to bring a code base under control will compensate by leaning on a Scrum master for support.

In the journey of dramatic improvements to bring your code base under control, there are a few things that you should take notice of.

  • An architecture will emerge that supports the design of the resident parts.  Things fit together beneficially.
  • The code base will get smaller and the team will shrink to about 2 or 3 people.
  • Each developer will take on greater responsibility and will find it difficult to break core principles.  The act of living those principles will result in values that don’t need to be listed on a poster on the wall.
  • The scrum master will become redundant.
  • The product owner will do more by working directly with the developers.

Then you won’t need Scrum, because the code base is under control, developers represent the interests of their customers, and the bottleneck is now at the customer.

Am I being idealistic?  No, it’s about pragmatic decisions and the pursuit of freedom.  It’s hippie Scrum.

(*) By talented I mean developers who have the ability to communicate, share, solve problems simply and elegantly, and can sniff out bad design and architecture.  Talented developers are not code monkeys that can churn out code.  Their single differentiating trait is the ability to design well and express that design in code.

The politics of software delivery

Software is all about delivering something useful to a customer.  That’s it – nothing else.  Politics is about acquisition of power.  Nothing else matters.  Now mix the two together.  How often have you heard a developer say something like “It’s not my problem, it’s just politics”?   That poor developer doesn’t stand a chance.  Imagine trying to deliver software while there is a raging power battle going on.  I don’t think software delivery stands any chance of success in that battle.  In fact, software delivery just becomes a tool for the politicians.
When someone is plotting for power, nothing else matters, least of all software delivery.  I’ve been there and done that.  It’s just messy, soul destroying stuff.  These days, I look for the power battle and try to focus on the software by raising the delivery stakes higher than the power battle.  If I can’t do that, then the software was never the focus in the first place, and I recommend pulling the plug.  Regardless, that’s my cue to leave.  Not because I am a coward, lacking courage, but for the simple fact that those power grabs are completely meaningless, except to the power-hungry.

As long as there is a political game being played, you simply won’t deliver software on time, on budget and keep customers happy.  BTW you can just forget about collaboration too.  That space will always be filled with contempt.

Let me put it another way: Any attempt at being agile in a political environment will always lead to failure.  While you are trying to learn, others are trying to gain power.  It doesn’t work!

Hello World with KRS

Khanyisa Real Systems very kindly asked me to contribute an article to their December newsletter.  I happily obliged.  They’ve just started this initiative and I like it already.  Not because they invited me to write, but because the content is original, including the ‘toons.  In a world that is retweet mad, that says a lot to me – someone out there cares enough to create and gather original content and share it.  Read it for yourself, and if you like you can subscribe on their website.
And I’m dead serious about my message in that article!