Many big vendors have invested heavily in blueprint or reference architectures, and I came across another one in recent months. I watched a vendor team move from client to client, implementing this reference architecture as part of their SOA solution.
What were they actually doing? They were mapping the client’s domain to the reference architecture’s domain, thereby identifying reference architecture services that supported the client’s needs. This probably works for some people, but I feel uncomfortable with it because…
- It means translating from one domain to another and back again. It’s like having one massive bounded context around the reference architecture with a gigantic set of adaptors and transformers (see the sketch after this list).
- There is a very real possibility of semantic impedance at the boundary between the two domains.
- There are likely to be two domain vocabularies, or one large polluted vocabulary riddled with synonyms.
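To make the translation cost concrete, here is a minimal sketch of what that adaptor layer looks like. The names here are hypothetical: `Policyholder` stands in for a client domain concept and `PartyRole` for its reference architecture counterpart.

```java
// Hypothetical client domain concept (assumed for illustration).
record Policyholder(String name, String policyNumber) {}

// Hypothetical reference architecture concept the vendor maps onto.
record PartyRole(String partyName, String roleIdentifier) {}

// Every interaction with a reference architecture service crosses this
// boundary twice: once on the way in, once on the way out.
final class PartyRoleAdaptor {
    PartyRole toReference(Policyholder p) {
        return new PartyRole(p.name(), p.policyNumber());
    }

    Policyholder toClient(PartyRole r) {
        return new Policyholder(r.partyName(), r.roleIdentifier());
    }
}
```

Multiply that pair of methods by every concept in the client’s domain and you get the gigantic set of adaptors and transformers I mean.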
There are other reasons, but these few are just old problems and habits coming back again: things we have already accepted as dangerous, things that limit our success in creating good software.
So, are reference architectures bad? Yes and no. Maybe you should consider adopting the reference architecture’s domain vocabulary as a first step. A reference architecture with a rich metamodel is likely to be more valuable than one without.
And the moment you start thinking at a meta level, you’re moving into a higher level of abstraction. At this higher level, you have a greater opportunity to describe your intentions agnostic of the reference architecture and the vendor’s technology stack.
The way I see it, services are defined at a meta level: they describe your intentions and are independent of any reference architecture. However, if you choose a reference architecture up front, then describe your intentions in the vocabulary of that reference architecture.
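A minimal sketch of what I mean, with the caveat that `RegisterPolicyholder` and `PartyManagementService` are hypothetical names: the intent lives in an interface expressed in the business’s own vocabulary, while the binding to a particular reference architecture is a separate, replaceable implementation.

```java
// The intent, expressed at the meta level in the business's own vocabulary.
interface RegisterPolicyholder {
    void register(String name, String policyNumber);
}

// A hypothetical reference architecture service; substitute the vendor's
// actual vocabulary here.
interface PartyManagementService {
    void registerPartyRole(String partyName, String roleIdentifier);
}

// One possible binding of the intent to that reference architecture.
// Swapping vendors means replacing this class, not the intent.
final class ReferenceArchitectureBinding implements RegisterPolicyholder {
    private final PartyManagementService parties;

    ReferenceArchitectureBinding(PartyManagementService parties) {
        this.parties = parties;
    }

    @Override
    public void register(String name, String policyNumber) {
        parties.registerPartyRole(name, policyNumber);
    }
}
```

The intent stays stable; only the binding changes if the reference architecture does.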
Does this make sense? Because I’m just hypothesising here.