Over the last couple of months I have had engagements with teams in a similar setup. Their release pain is that they can't release their individual services independently, yet unlike my post "Oopps . . . . I have built a Distributed Monolith", they have created nicely separated services.
When I looked further into it, the problem was actually in how they operated.
These teams identified as working in Agile Sprints, but in reality they were operating a mini waterfall, with Analysis, Design and Testing separated into phases owned by the BA, the Developers and QA respectively, each throwing the story "over the wall" to the next.
After talking with these teams, it was clear that the problem starts at inception with their stories and runs all the way through to the release.
Because each part of the team works in a silo, they aren't working together to solve the common problem, and after speaking to each of them, they all seemed demotivated and unhappy with how they operated.
So what are the symptoms of this? Let’s look at the different stages…
Analysis
1. Large User Stories
Sometimes I think "user story" is just another name for a requirements document, stored in JIRA. The problem with large user stories is that they don't translate well for the developer. Generally, large user stories result in:
- Changes that impact multiple parts of the application, meaning that multiple services need to change and the whole system needs to be tested as one.
- No acceptance criteria. How does the developer know when it is complete? For example, this new field that needed to be added actually needs to appear on 4 screens, and in this report, and needs to be filterable, and ……
2. Incomplete User Stories
Something has just come up in a meeting with the business and there is a new User Story that is High Priority! The BA has had a conversation with the Dev team in chat and has created an empty User Story as a placeholder.
They will come back later with the requirements, but the dev team are told they need to start working on it ASAP. The developer starts working on it, but without the requirements they are making code changes that will probably need to be changed again and will not match the user story when it is completed. The tests will later be based on the user story, so there is a high probability that the work won't pass and will need to be modified, not making it out the door anyway.
Testing
1. Large Test Pack
Firstly, we should commend having testing at all. There are still far too many applications going out with little testing, or only manual testing. QA teams, though, are building large test packs in isolation with no knowledge of the build and release cycle. All they have to go on are the large user stories and, normally, a UI to test against. These test packs grow and grow and often become the only indication of whether the system is good to go to production or not. What tends to be the problem further down the road is:
- Small changes are stopped from going out the door until the whole system has been tested. Often parts of the system are tested that don't need to be (why do we have to keep testing the static services if we haven't made any changes?).
- Testing can take hours in some cases as the pack grows. This means you end up bundling multiple changes into a single test run, because you can't keep running it for each feature. So when something goes wrong, which change caused it?
2. Staggered Test Cycle Outside the Sprint
Because of the expensive test cycle, the testing part is taken out of the sprint. The developer says they are done and that the story is complete. It is thrown over the wall for the tester to take care of.
A developer works on feature X in Sprint 1. In Sprint 2 it is tested while the developer starts work on feature Y. When something goes wrong in the test pack, the developer has to drop what they are doing and fix feature X. This generally stops feature Y from being completed, so it moves into Sprint 3. The effect compounds until nothing new is being worked on in a Sprint and a clean break needs to be made.
The Solution
Story Breakdown
The BA, the Developer and QA need to break the story down into manageable pieces. Working together they can create Features that can be delivered at any time and released into production. Releasing a feature into Production doesn't complete the story, but small, incremental releases reduce risk: there is less to go wrong, it is easier to roll back if something does go wrong, and the process becomes a lot simpler and more repeatable. The Agile process of Backlog Grooming/Refinement also improves collaboration between the different areas, so each understands more about what the others do and what is required.
Take a story where a user wants a new field to be captured from the UI and to appear in the Risk Reports. The team breaks the story down into 4 Features:
1. Capture the field in the service
2. Capture the field in the UI
3. Use the field in the Risk Service
4. Use the field in the Reporting Service
Each feature is worked on, tested and released into Production independently. While the field may not appear in the reports until the 2nd or 3rd sprint, the data is being captured early, and it's easy to see how releasing the features independently keeps each release simpler, with less to go wrong.
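To see why Feature 1 can ship on its own, consider adding the field as optional, so nothing downstream has to change yet. A minimal sketch in TypeScript, assuming JSON payloads between the services (the Trade shape and the desk field are hypothetical):

// Hypothetical Trade payload in the capture service.
// The new field is optional, so existing callers and downstream
// services that don't know about it yet are unaffected.
interface Trade {
  legalEntity: string;
  counterparty: string;
  book: string;
  desk?: string; // Feature 1: captured now, used by Risk/Reporting later
}

// A downstream service deserializing the payload simply carries the
// extra property along (or ignores it) - nothing breaks either way.
const payload = '{"legalEntity":"Party1","counterparty":"Party2","book":"Gas","desk":"EU Gas"}';
const trade: Trade = JSON.parse(payload);
console.log(trade.desk ?? 'desk not captured yet');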
Agree on what is a Definition of a Complete Story
An agreement between the three needs to be made as to what state a story has to be in before it's accepted into the sprint. A Backlog item cannot be a blank cheque, and often Acceptance Criteria need to be added. If something turns out to have been missed, it needs to be added as a new item to the Backlog. No one gets things 100% right, so new backlog items should be expected, but they shouldn't derail the progress of the current Backlog item.
Using a common syntax is also useful when creating Acceptance Criteria. In a previous life my team created developer-led testing using the Gherkin syntax; to make that work in the development cycle, we taught the BAs to write their stories using the same syntax, so there was no ambiguity about what was required.
Given I have a valid Trade with the following details
| Field | Value |
| LegalEntity | Party1 |
| Counterparty | Party2 |
| Book | Gas |
When I save the Trade
Then I have a success message
And the trade is displayed with the following details
| Field | Value |
| LegalEntity | Party1 |
| Counterparty | Party2 |
| Book | Gas |
| Status | Draft |
You may not like the Gherkin syntax, but the important thing is to come up with a common way of communicating requirements to the developer and the tester that leaves little open to interpretation.
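The payoff of a common syntax is that the same wording can be made executable. Below is a rough sketch of step definitions for the scenario above, using cucumber-js (@cucumber/cucumber) in TypeScript; saveTrade is a hypothetical stand-in for a call into the service under test:

import assert from 'node:assert';
import { Given, When, Then, DataTable } from '@cucumber/cucumber';

// Hypothetical stand-in for the real call into the Trade service.
async function saveTrade(trade: Record<string, string>) {
  return { success: true, trade: { ...trade, Status: 'Draft' } };
}

Given('I have a valid Trade with the following details', function (this: any, table: DataTable) {
  // rows() skips the | Field | Value | header row
  this.trade = Object.fromEntries(table.rows());
});

When('I save the Trade', async function (this: any) {
  this.response = await saveTrade(this.trade);
});

Then('I have a success message', function (this: any) {
  assert.ok(this.response.success);
});

Then('the trade is displayed with the following details', function (this: any, table: DataTable) {
  for (const [field, value] of table.rows()) {
    assert.strictEqual(String(this.response.trade[field]), value);
  }
});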
Don’t be scared to release incremental changes
Loose coupling of code lends itself to this. You should be able to release small, incremental changes without impacting anything. Where I have 2 services speaking to each other, adding a new field should not impact the downstream service; it should just ignore that field. JSON is more forgiving than XML these days. In the case of a breaking change, however, there are a few things you can do (a short sketch of the last two follows the list):
- Version. It's not as easy as it looks in practice, but it is still a good solution once you pick up momentum with it. Adding the version to your URL (e.g. /api/v2/Trade) makes it easy for consumers such as your UI to switch over from one version to the other.
- Manage breaking changes by duplicating fields. If you are moving a field from one location on your object to another, duplicate it in the short term. Old consumers will still work alongside new consumers, and JSON deserializers will simply ignore fields they don't understand.
- Feature Toggling. UI done but the backend isn't? Put the feature behind a toggle variable. Once the backend service is released, you can just toggle it on. I have seen this become more and more common recently.
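As a minimal sketch of the last two techniques, assuming a hypothetical Trade payload and a toggle read from configuration (all names here are made up for illustration):

// Duplicating a field while it moves: 'book' is being relocated under
// a new 'details' object, so for a few releases the producer writes it
// in both places. Old consumers keep reading the top-level field and
// ignore 'details'; new consumers read the new location.
interface TradePayload {
  legalEntity: string;
  book: string;                 // old location, still populated
  details?: { book: string };   // new location
}

function serializeTrade(legalEntity: string, book: string): string {
  const trade: TradePayload = { legalEntity, book, details: { book } };
  return JSON.stringify(trade);
}

// Feature toggling: the UI ships first with the feature dark, then it
// is switched on once the backend is released.
const features: Record<string, boolean> = { showDeskColumn: false };

function riskReportColumns(): string[] {
  const columns = ['LegalEntity', 'Counterparty', 'Book'];
  if (features['showDeskColumn']) {
    columns.push('Desk'); // only appears once the backend supports it
  }
  return columns;
}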
Shift Left!
Shift Left testing was coined back in 2001 and, because of its effectiveness, has since been picked up by other areas such as DevSecOps. The concept is to think about and implement testing early in your SDLC rather than leaving it as something that comes at the end. Those familiar with ATDD will recognize it as an extreme version of shifting left. An important part of the Shift Left concept is integrating it into your CI/CD.
A nirvana state in microservices for me is to have multirepos that each contain a micro-frontend, a micro-service, database scripts and test packs. Every time you check in and push your feature, the CI/CD has everything it needs to build and test the service (as well as deploy it later). More importantly, we have broken down our big test pack. IMHO, best efforts should be made to reduce or remove any scheduled tests or manually run processes at the end. UAT should be a matter of showing the end user the new feature in your sprint demo. If you have automated the testing and have the evidence, there should be no need for someone to spend a week manually testing things.
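As an illustration, one of those repos might look something like this (the layout and file names are hypothetical):

trade-capture/
  ui/           micro-frontend
  service/      the micro-service itself
  db/           database migration scripts
  features/     Gherkin feature files and step definitions
  pipeline.yml  CI/CD definition: build, test, package, deploy

Everything the pipeline needs to verify a push travels with the code, so no separate team or schedule has to be coordinated to get a feature tested.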
It’s a Team thing
Most importantly, a team needs to do what's right for them and work together with the objective of Keeping It Simple. Hold a retrospective, work out the issues and be involved throughout the SDLC. Being Agile should be less regimented than waterfall: doing a stand-up every day in a sprint doesn't make you Agile. The ceremonies are there as tools, not commandments. Be open to change as a team and work out what works best for you.
Have you had a similar experience? Are there other things you have seen or solved? I would love to hear about them.