Oops . . . I have built a Distributed Monolith

Recently I have been deep diving into other people’s code to help establish the pain points they have been experiencing on a daily basis. Often when we start out, the team identifies their application as a microservice architecture. Then I start to dig into the code and, although there are multiple services that run on their own, there are quite a few “smells” that make the application actually a Distributed Monolith.

What is a Distributed Monolith?

Firstly, why is your team adopting a microservice architecture? The only real reason to adopt it is that you want decoupled services that you can build and release independently of one another. Microservices do have their disadvantages over a monolith:

  • They are slower. To talk to another domain, you have to speak to another service over the wire, which requires network I/O and serialization/deserialization.
  • They are more complex. Because of the performance cost and the communication between microservices, we have to introduce additional middleware such as caches and message brokers.


A Distributed Monolith looks like a microservice architecture on the surface, but in practice it is built and released as a single unit. When we look at the behaviour of such a team, they will often release all the services together. Asked if they can release the services individually, they say they can; however, that normally involves building and publishing all the artefacts and then selecting the individual service in the release pipeline. This is normally done as a hotfix and not as part of their regular SDLC.

What are the causes of a Distributed Monolith?

Common Framework

A lot of teams are building common frameworks for their microservices. I don’t mean this in the sense of Spring or libraries that you can install and configure; these are mandatory implementations: classes that you must use, must inherit from, and that abstract a lot of the functionality away from the developer. The developer ends up being a config maintainer instead of writing code. Because these frameworks are so restrictive and hide so much of the functionality, they create an immediate dependency. The result is that the development team either continuously upgrades all their services and releases them together, because they don’t know the impact of the new framework version, or is too scared to ever upgrade the framework at all.
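To make the coupling concrete, here is a minimal sketch in Kotlin of the kind of mandatory base class I mean. All the names (PlatformService, OrderService and the config keys) are invented for illustration; this is not anyone’s actual framework:

```kotlin
// Hypothetical "common framework" base class that every service is forced to extend.
// All names here are invented; the point is the mandatory inheritance.
abstract class PlatformService(private val config: Map<String, String>) {

    // The framework owns the lifecycle; the team only supplies configuration values.
    fun start() {
        connectToBroker(config["broker.url"] ?: error("broker.url missing"))
        registerWithDiscovery(config["service.name"] ?: error("service.name missing"))
        onStart()
    }

    // The only hook the developer gets to implement.
    protected abstract fun onStart()

    private fun connectToBroker(url: String) { /* hidden from the developer */ }
    private fun registerWithDiscovery(name: String) { /* hidden from the developer */ }
}

// A "service" ends up being little more than configuration wrapped around the framework.
class OrderService : PlatformService(
    mapOf(
        "broker.url" to "amqp://broker:5672",
        "service.name" to "orders"
    )
) {
    override fun onStart() = println("orders started")
}
```

Because every service inherits from the same class, a breaking change to that class means rebuilding, retesting and releasing every service at the same time, which is exactly the monolithic behaviour the team was trying to escape.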

Common Build Script

Having a common, parameterised build script means that the 20 services are built in a consistent way. I have seen this done by teams using Maven and Gradle, and I believe it is a way of keeping consistent dependencies, a consistent build and only having to maintain it in one place. However, this often results in building all the services together and producing multiple artifacts, and once you start down that road you might as well release them all together. There seems to be a fear that if you need to upgrade a library you have to do it in multiple places (and I know a lot of people got burnt by the log4j issue), but how often do you actually need to upgrade that dependency? Do you need to do it for all of your services? If you are unsure, then you could be suffering from the Common Framework problem, or you are upgrading without looking at what is in the upgrade.
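As a hedged illustration of how this plays out in practice, here is what a single parameterised build can look like in a Gradle (Kotlin DSL) monorepo; the project names are invented:

```kotlin
// settings.gradle.kts in a hypothetical monorepo: one parameterised build owns every service.
// "Building the application" here means building (and usually releasing) all of the artifacts.
rootProject.name = "payments-platform"

include(
    "orders-service",
    "payments-service",
    "customers-service",
    // ...and the remaining services, all inheriting the parent's plugins and dependency versions
)
```

By contrast, giving each service its own repository and its own small build file means a dependency bump only touches the services that actually need it.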

Specialised Services

Some people design their applications without using DDD and end up creating specialised services. These services may write to a database or a cache, or they might provide some unique feature that a library should be used for instead. For example, I was speaking to an architect who had problems with date consistency, and someone had suggested a Date service. How slow would it be if, every time you wanted some date functionality, you had to make a network call to another service?
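To make the comparison concrete, here is a hedged Kotlin sketch of the two approaches. DateServiceClient and its endpoint are invented; the library version uses only the standard java.time API:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// The "Date service" approach: a network hop, serialization and another deployable,
// just to get a consistently formatted date. The client and endpoint are hypothetical.
class DateServiceClient(private val baseUrl: String) {
    private val client = HttpClient.newHttpClient()

    fun formattedToday(): String {
        val request = HttpRequest.newBuilder(URI.create("$baseUrl/dates/today")).GET().build()
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
    }
}

// The library approach: the same behaviour as an in-process call,
// shareable as a small, versioned library.
object Dates {
    private val ISO_DATE = DateTimeFormatter.ISO_LOCAL_DATE
    fun formattedToday(): String = LocalDate.now().format(ISO_DATE)
}
```

The consistency problem is solved either way; only one of the two adds latency, a failure mode and an extra service to release.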

When you have specialised services, you end up coupling them together and releasing them together. Your Bounded Context is split across services, so a change to it becomes a change to multiple services.

Why are people creating Distributed Monoliths and what is the Solution?

Time to Market

We are under a lot of pressure to get the initial release out the door, and some of these causes provide a quick way of doing that. There are many success stories where a company has saved or made a lot of money for the business through the introduction of an application. However, after the initial release the release cycle can start to deteriorate under the overhead of upgrading dependencies, testing all the services and then orchestrating a big release.

To reduce this, a team should adopt CI/CD and automate as much as they can, with the view that they could release into production every day. Most people start with the CI part and forget the CD part. Sometimes that CD part has a post-deployment task, or the process is manual, as in the case of a database where people are still running scripts by hand. Factor this into your development and address it up front; otherwise you will find yourself slipping when it comes to deadlines, re-writing code to get it to work and ultimately putting it back on the backlog, where it is abandoned.
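As one example of closing that gap, database changes can be applied by the deployment itself rather than by hand. Here is a minimal sketch, assuming Flyway and an invented connection; treat the details as illustrative rather than a prescribed setup:

```kotlin
import org.flywaydb.core.Flyway

// A minimal sketch of running database migrations as part of deployment,
// assuming Flyway; the JDBC URL, user and database name are invented for illustration.
fun main() {
    val flyway = Flyway.configure()
        .dataSource("jdbc:postgresql://db:5432/orders", "orders_app", System.getenv("DB_PASSWORD"))
        .locations("classpath:db/migration") // versioned SQL scripts live alongside the service's code
        .load()

    // Applies any pending migrations, so nobody has to run scripts by hand on release day.
    flyway.migrate()
}
```

The same thinking applies to any other post-deployment task: if it has to happen on every release, it belongs in the pipeline rather than in a runbook.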

Release Limitations

One of the reasons some people opt not to release their services individually and more often is Change Control and the paperwork that goes with it. If you need to keep raising ServiceNow tickets for each release, this puts people off and they will often just schedule all their services to be released in one go.

Releasing small, minor changes often reduces risk. You can easily isolate the change that might have caused a negative impact and either roll back or release a hotfix with minimal effort. Engaging with Support and Change Control, changing the norms and looking at automating this part can remove this blocker.

Learnt Behaviour

We have had it hammered into us for the last 20 years that we should share and reuse code. Microservices encourage a share-nothing approach (although I am more in line with Mark Richards’ view). Conceptually, with this approach you could even have your microservices written in different languages.

I would suggest breaking it down, and that’s why I have encouraged teams to move from a MonoRepo to a MultiRepo approach. This encourages the developer to create a repo and a pipeline for each project, allowing them to create, build and release a microservice in isolation.

Black Duck

When I mentioned this in a talk, I saw a lot of nodding heads. If I have 20 services, all with their own dependency file, and Black Duck starts screaming, then the pain of having to upgrade dependencies and build and release everything again is soul-destroying.

I was lucky enough to attend the AMS Summit London and see Snyk, which is a great replacement. It’s a developer-led tool, and it has the ability to do the upgrades for a project and create pull requests for you, all in one place. A brilliant time saver, and it doesn’t mean you have to compromise on having a multi-repo approach.

Can you think of any other issues that stop people from doing pure microservices? If so, please let us know.