Your ESB is going to kill you

Recently I wrote about the fruitless plight of a schizophrenic service.  Now, I think that some of that schizophrenia exists in the ESB too (or is it rubbing off onto the ESB?).  I’ve always felt that the ESB was just another pattern that showed how to isolate things and deal with routing and transformations.  The most common implementation was a messaging gadget with some pluggable framework of sorts for the transformations, and some configurable framework for routing.
With such isolation of parts, it was convenient not to worry about what happened elsewhere when something was thrown to this gadget for processing.  And we started wondering about scalability and decided that asynchronous was the way to go … disconnected, stateless, etc.  All good, well-intentioned and useful things.
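The pattern version of an ESB really is that small.  Here is a toy sketch (all the names and the routing scheme are my own invention, not any product's API) of a bus with pluggable transformers and configurable, content-based routes:

```python
# A minimal sketch of the ESB-as-pattern idea: a message passes through a
# pluggable chain of transformers, then goes to whichever route matches.
# Everything here is illustrative, not a vendor API.

class MiniBus:
    def __init__(self):
        self.transformers = []   # pluggable transformation framework
        self.routes = []         # configurable routing: (predicate, handler)

    def add_transformer(self, fn):
        self.transformers.append(fn)

    def add_route(self, predicate, handler):
        self.routes.append((predicate, handler))

    def send(self, message):
        # apply every registered transformation in order
        for fn in self.transformers:
            message = fn(message)
        # content-based routing: first matching predicate wins
        for predicate, handler in self.routes:
            if predicate(message):
                return handler(message)
        raise LookupError("no route for message")

bus = MiniBus()
bus.add_transformer(lambda m: {**m, "amount": float(m["amount"])})
bus.add_route(lambda m: m["amount"] > 1000, lambda m: "large-order-queue")
bus.add_route(lambda m: True, lambda m: "default-queue")

print(bus.send({"type": "order", "amount": "2500"}))   # large-order-queue
```

The point of the sketch is how little there is to it: isolation, transformation and routing, and nothing else.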

Then the pattern became a product.  And on top of this product we had more products like business process orchestrators or workflow managers.  And below this product we had applications and databases and ftp locations and all sorts of things that catered for every imaginable protocol.  And around all of this we had some enterprise-ish sort of management thing to keep an eye on everything that was happening inside this very busy product.

Then, services moved from applications to the ESB product.  After all, it’s a service and that’s an enterprise service bus, right?  And when the services were moved over to their new home, all the dependencies had to come along too.  And then we started arguing about getting granularity right in the ESB.  I used to just think that the ESB had a proxy of sorts to the service that still lived in the application.  Maybe I got it all wrong.

Now this ESB is starting to feel like an Application Server with a messaging gadget, workflow gadget, transformers, routers and protocol handlers.  And some ESBs have a web server too, since they have browser-based management consoles.

Some people also like the idea of a rules engine for their complex domain rules and embedded one in their applications.  Hold on, those content-based routers in the ESB also used a rules engine.  Ok, let’s move our rules over to the ESB too.  Cool, my ESB is also a rules engine.

Now I see people writing the most hellish XML that is meant to do everything: configure routing, define transformations, execute code, persist messages, fire off sets of rules and more.  It reminds me so much of those weird and wonderful stored procedures and cascading triggers that we used to write.  The other day I got a laugh out of a friend when I told him that ESBs are now DB servers and everyone writes sprocs in XML.

And we tried to do everything in the database server – rules, custom types, defaults, constraints, sprocs, triggers, batch jobs … even jumping into a shell and executing something else.  It did not work out very well then.

If I were an ESB, I’d be very confused.  I started life as a pattern with a reasonable implementation using messaging, transformation and routing.  Now, all of this.  In fact, I’d be more stressed than confused.

Then again, maybe the ESB is not confused; maybe it is the people that use the ESB who are confused.  In fact, if I were one of those people, I’d be stressed too.

Fast Track to Domain Driven Design

I finally got out of neutral and pulled together the first public offering of our Domain-Driven Design courses in Cape Town, South Africa.  Normally we give these courses on-site to people on the same development team, but I thought it might be fun and inspiring to open it up to everyone for a change.  Now I’m all excited again and really looking forward to a diverse mixture of people.  Hopefully I will see some old faces and lots of new people.
The one thing I can tell you is that the course is a very immersive experience.  I really hate lecturing, but I enjoy probing conversations, and that’s how I give the course.  I don’t have answers to the practical work, and concerns are addressed as we go along.  As a result, the day takes unexpected turns and routes.  But in the end I get you to the right destination.  Come along; you will leave exhausted, but inspired!

Take the Fast Track to Domain Driven Design

about the course / course contents / should you attend? / register for the course

factor10 has expanded its services in South Africa to include our advanced and
expert level courses aimed at the software professional.  On September 8-9, 2009,
we will be offering a fast track to DDD for Architects at the BMW Pavilion in
Cape Town.


Who should attend?

This course is for software professionals who want to take the right steps towards
advanced and expert levels in their careers.  Register for this course if you want to …

  • learn more than just another syntax and set of tools
  • write software for large, long-lived systems
  • increase the quality and maintainability of your design
  • design high quality models and use code to represent those models effectively
  • develop applications with good APIs
  • add a design edge to your skill set


Why should you learn DDD?

More and more developers and architects realise that learning every detail of a new
API just isn’t the way to deliver the best business value.  It’s such a tough balancing
act: focusing on solving the business problem while building working software
with your frameworks.

One way of taking a big leap in the right direction is to learn and apply domain-driven
design.  It is definitely not abstract and fluffy; it also deals a lot with the code.  DDD
leads us to focus on understanding and to communicate that understanding very well:
in language, in design and in code.  You will shift your focus away from designing for a
technology, and you will learn to design for the business domain, to the core of the
problems and solutions.  Those are the most interesting parts and what your users
and customers really care about.


Domain Specific Reference Architectures

Many big vendors have invested a lot in blueprint or reference architectures.  I came across another in recent months.  I witnessed a vendor team moving from client to client, implementing this reference architecture as part of their SOA solution.
What were they actually doing?  They were mapping the client’s domain to the reference architecture domain and thereby identifying reference architecture services that supported the client’s needs.  This most probably works for some people.  But I feel uncomfortable with it because …

  • It means translating from one domain to another and back again.  It’s like having one massive bounded context around the reference architecture, with a gigantic set of adaptors and transformers.
  • There is a very real possibility of semantic impedance on the boundary of the two domains.
  • There are likely to be two domain vocabularies, or one large polluted vocabulary full of synonyms.
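The first two bullets can be sketched in a few lines.  This is a hypothetical illustration (the insurance-flavoured terms are invented) of the translation layer that sits on the boundary between the client's vocabulary and the reference architecture's:

```python
# A toy adaptor between two domain vocabularies. Every call into the
# reference architecture translates the client's terms to the vendor's,
# and translates the answer back. All terms here are made up.

CLIENT_TO_REF = {"policyholder": "customer", "cover": "product"}
REF_TO_CLIENT = {value: key for key, value in CLIENT_TO_REF.items()}

def to_reference(term: str) -> str:
    # semantic impedance lives here: "policyholder" and "customer" are
    # close, but not true synonyms, and the difference leaks at the boundary
    return CLIENT_TO_REF.get(term, term)

def to_client(term: str) -> str:
    return REF_TO_CLIENT.get(term, term)

print(to_reference("policyholder"))             # customer
print(to_client(to_reference("policyholder")))  # policyholder
```

Even in this tiny form, you can see the round trip: every concept crosses the boundary twice, and any term the mapping misses silently passes through untranslated.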

There are other reasons, but these few are just old problems and habits coming back again: things we have accepted as dangerous and that limit our success in creating good software.

So, are reference architectures bad?  Yes and no.  Maybe you should consider adopting the reference architecture’s domain vocabulary as a first step.  A reference architecture with a rich metamodel is likely to be more valuable than one without.

And the moment you start thinking at a meta level, then you’re moving into a higher level of abstraction.  In this higher level, you will have a greater opportunity to describe your intentions agnostic of the reference architecture and the vendor’s technology stack.

The way I see it, services are defined at a meta level.  They describe your intentions and are independent of a reference architecture.  However, if you choose a reference architecture up front, then describe your intentions in the vocabulary of that reference architecture.

Does this make sense?  Because I’m just hypothesising here.

Cloud Computing Zen

I’ve been thinking about Cloud Computing over the past few weeks.  And the business of Chunk Cloud Computing makes me feel more comfortable than the various “definitions” of cloud computing that are out there.
It amazes me that we, as an industry, find it so hard to converge on common ideas.  So, I’ve tried to formulate my own understanding of cloud computing.

I think it is …

  • a very scalable hardware platform that you share but don’t own
  • an infrastructure service that you use but never maintain
  • a computation environment that scales when you scale
  • data storage that is distributed but consistent
  • about writing applications that wire up highly cohesive, loosely coupled chunks
  • about freedom to choose and change but with greater responsibility and consequences

The last point was enlightening for me.  For sure, some vendor will pitch the “Lower your future running costs” line.  But I think Future Running Costs = Zero.  You never plan for the future but only for what you need right now.

Wow!  That is so Zenful.  Cloud Computing is about living in the moment, all the time.

Upcoming Master Class

I will be presenting a half-day master class for the JCSE on 27 May 2009 in Johannesburg.  It’s titled Credit Crunch Metrics and is aimed at geek managers.  But it’s all about your code.  It will be an interesting 4 hours.  We will read a lot of code, look at a lot of pictures of designs, and examine the workflow of teams.  All of this with the explicit intention of determining the cost of writing and maintaining your software.

Why Credit Crunch Metrics? Because we, as an industry, are contributing to the global financial slump.  We need to look critically at how we produce and maintain software by examining “environmental” signs that are commonly ignored.  The cost of development lies in your code, your design and your workflow.  I will show you how to look at these signs and learn from them.  And then, just maybe, we can change our industry for the better – one code base at a time.  Otherwise, we might as well substitute “software developer” for “lawyer” in those old lame jokes.

Why am I targeting management in corporate teams? Because managers have access to the boardroom, and they should be fighting the battle for their developers.  So, get your “manager” to sign up.  Better yet, come along with your “boss” and then take the battle to the boardroom together.

The JCSE and I are both hoping for 15 or more people to attend.  This is not a hard rule, but anything fewer than 10 will most likely force us to cancel the event.  That would be a sad reflection on the priorities of people in our local community.

Services are Intentions

I was talking SOA – again! I was arguing that modeling services in UML, BPEL, or any other fancy acronym immediately constrains you to a specific implementation.  For example, UML means that you are thinking OO already; BPEL means that you are thinking business processes already.  But are those (and others) the best ways to model or represent a service?
In SOA, I have a suspicion (as yet untested!) that a service is closer to an intention than anything else I can think of, because it describes the latent value of the business that is invariably lost by SOA implementations and product stacks.  Now that leaves us with a problem: how do you describe intentions consistently across any domain?  I don’t know how to do this, because to describe intentions in a domain, you need to understand the vocabulary of the domain.  Only once we can represent vocabularies can we create a metamodel for these business intentions.

So how do we model intentions in a single domain, since I cannot use UML (implementation!), XPDL (implementation!), BPEL (implementation!), etc.?  Since the domain is constrained by its vocabulary, we need to create a language that uses this vocabulary.  And that, my dear folks, is nothing but a DSL.  If we, therefore, model intentions (the services) with a DSL, then we are in a position to translate or transform that intention into any implementation we like.  Realistically, we will likely need additional metadata surrounding the intention described in the DSL to satisfy UML, XPDL, BPEL, WSDL, RESTful APIs, etc.
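As a toy illustration of the idea (the vocabulary, the class and the target formats are all invented for this sketch, not a real toolchain), an intention captured in a tiny internal DSL can be transformed into more than one implementation:

```python
# A sketch of "intention first, implementation later": the Intention object
# holds only domain vocabulary (a verb and a subject), and each to_* method
# is one possible transformation target. All names are illustrative.

class Intention:
    def __init__(self, verb: str, subject: str):
        self.verb = verb
        self.subject = subject

    # one intention, many possible implementations
    def to_rest(self) -> str:
        return f"POST /{self.subject}s/{self.verb}"

    def to_bpel_ish(self) -> str:
        return f"<invoke operation='{self.verb}{self.subject.title()}'/>"

settle = Intention("settle", "claim")
print(settle.to_rest())        # POST /claims/settle
print(settle.to_bpel_ish())    # <invoke operation='settleClaim'/>
```

The intention itself never mentions HTTP or BPEL; those are projections produced after the fact, which is exactly the freedom a DSL buys you.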

When we think of the business as what they intend to do or achieve, then we are actually working at a higher level of abstraction – at a meta-level.  That is hard to do, but if you do it reasonably well, then you have more freedom when it comes to implementation.

SOA is so screwed up at the moment, and most of us are climbing into or out of rabbit holes, because the business intentions are being ignored or forgotten far too early, or thought about far too late.  Perhaps the most effective SOA implementations will be realised with a suite of DSLs, and the only toolset you really need is a language workbench and some very skilled language-oriented programmers.

SOA is a wicked problem

I had a really interesting discussion this morning with two bright people about their SOA journey.  They are responsible in many ways for moving their rather massive company to SOA.  So we chatted about all sorts of things and argued and disagreed and converged and disagreed again and got confused and converged again and diverged again … and then we realised 2 hours had gone by and we all had other things to do.
Driving back to my office, I was bothered by the fact that we were struggling with how to implement this thing called SOA.  I also have a feeling that there are more SOA failures than successes in the world.  And all of these contribute to experiential knowledge for the greater good of our geek community, blah, blah, blah.

But I did not have a clear cut solution, neither did they, neither does their vendor partner (for sure!), nor their management, nor anybody!

So, I am now convinced that a SOA implementation (not the theory!) is a wicked problem.  The term comes from the social sciences and describes a problem that is extremely difficult, even impossible, to solve because of contradictory or ever-changing factors, or incomplete requirements.  Some characteristics of wicked problems include:

  • every solution is a degree of goodness (or badness) but there is no distinctly right or wrong solution
  • every solution is a one-shot solution because you don’t have room for trial and error
  • consequently, every solution attempt counts immensely … positively or negatively
  • there is no uber-test of a solution, so the proof is in the execution (my TDD blood froze about now 🙂 )
  • each problem is a symptom of another problem, indefinitely, i.e. there is no stopping rule
  • stakeholders all hold different understandings of the domain
  • there is no definitive formulation of the problem; the problem is understood only after a solution has been crafted.

And last, but not least … maybe the most telling is

  • Those who are held accountable for the consequences of the solution have no right to be wrong!  Ouch 🙂

I think that some architectural things feel like wicked problems.  Read the work of Jeff Conklin and Robert Horn, two guys who have spent a lot of time researching tools for solving social messes.

Discovering Language and Context

Last night I attended the 43rd Cape Town SPIN meeting, which turned out to be a fun, interactive exercise with John Gloor.  John introduced a system of analysis focused on modeling a domain – but not from an object-oriented paradigm.  It was more about “things” and “influences”.  I am probably doing a bad job of using the right terms, but let’s just try something out.

  1. As a group define (frame?) your problem domain (We chose “How to be a successful team”)
  2. Then individually…
    • write down as many thoughts about the domain as possible – short snippets of 3-8 words each.  The recommendation was to aim for about 70 thoughts.
    • Color code (or group) these based on some notion of similarity.
    • Give each group a name or label on a post-it
  3. Then as a group …
    • the first person sticks their post-its on the wall
    • each other person, in turn, then sticks their post-its up and aligns them with whatever else is already on the board (or creates a new spot altogether)
    • Optimize the emerging clusters and give them names
  4. Finally, put directed lines between the named clusters using the guide of “A influences B” and give the line a label as well.

We never had the time to complete the final step.  But the really interesting angle for me was that a language for the domain was emerging.  It was not perfect, but it was a nice start.  Secondly, some of the clusters felt a lot like strategic contexts.  Sure, it was a conceptual decomposition of sorts, but it may well be a nice starting point for discovering bounded contexts.

And those influence lines felt like dependencies and interactions between contexts.  The use of the word “influence” is a really nice alternative to the traditionally naive terms like “uses”, “has”, “is like”.  It naturally focuses on behavioural interactions.

So, this simple exercise may be a nice technique for discovering language and contexts within a domain.  And it proves to me, yet again, that language is most critical.  This is not just about maintaining a lifeless glossary of terms – the energy surrounding the vocabulary and terms needs to be depicted and felt as well.  And if we combine all of this with an agile mindset, we can adjust this “language model” with each iteration and gain deeper domain understanding continuously.  Hmmm, this notion of a “language model” is intriguing!

.NET Rocks! Podcast

Yesterday I had a telephonic chat with Richard Campbell and Carl Franklin from .NET Rocks! I tried to talk about modularity, but it kind of veered off into design in general and how an agile runtime is important to being truly agile.  A lot revolved around getting the domain understanding right before diving into object-oriented design.  We touched on SOA, SaaS, UML and tools.  That’s a heck of a mess for less than an hour!
You can listen to the podcast here.  I think I just rambled on a lot about anything and everything and it felt like a wayward discussion to me at the time.  Have a listen and tell me what you think.  I really would like to improve myself for these kinds of events.

So, thanks a lot to the kind folk at .NET Rocks! for having me on their awesome show.  And for persisting in trying to get hold of me at the hotel in Lech, Austria.  I am deeply privileged and humbled.  I hope I helped someone with my ramblings.

Oh, and many thanks to Jimmy Nilsson for all his help (again!).

Øredev Presentations

My presentations from Øredev are finally available.  After working through almost all the export options in Keynote, I settled on QuickTime as the distro format.  The “flying code” in the aspects presentation worked out best with QuickTime.  Note that it’s not a continuous playback; you have to click through each frame.