At the beginning of this series, we had our first encounter with Bottlenecks and Throughput, and how they affect our development workflow. We’ve seen the two effects of Back Pressure and Upstream Pressure, which can create new Bottlenecks once enough Throughput is gained.
The first was between our engineers and our deployments, which we handled by splitting deployments (among other methods). The second was between our engineers and our applications, which we tried to handle by splitting Throughput. Unfortunately, we’ve seen that this is not enough, as an application’s design determines how soon it will become a Bottleneck again.
We’ve conveniently done all of the above under the assumption that our two applications are mutually exclusive and share nothing at all. In this chapter, we’re going to start complicating things, this time with two applications that share a simple client-server relationship. With it, we’ll see how this relationship also complicates our development workflow.
Let’s recall one of the Rolling Stones’ greatest hits:
You can’t always get what you want
But if you try sometimes, well, you might find
You get what you need
In the previous series of Change Driven Design, we’ve talked about the principle of mutual exclusivity and how it helps us break an application internally into Modules. We’ve also talked about a follow-up principle: “what shouldn’t Change together, shouldn’t Change together, unless we really, really, really need to”.
Even if we do wish as hard as we can for Modules or applications to Change together, sometimes it won’t happen, for the plain reason that it is technically impossible: not all of our applications are physically deployed to the same playing field. We might have one application deployed to our own servers and another deployed to our customer’s servers as an agent (which we’ll further explore in the next chapter). Or we might have a B2C physical device, a companion mobile application on iPhone and Android, and multiple backend applications deployed to our servers.
But most likely, at some point in our career, we’d have a frontend application and a companion backend one. Let’s ride on top of this one, as a perfect yet simple example of a split application. Although it has existed for much longer, the backend-frontend split only became widespread about 10 years ago. So let’s go back to the past, and to the chapter Fragments of Change of the Change Driven Design series.
Back then, our entire web application was nothing more than a single server application. One application that handled everything end-to-end, dissected into layers. Only one of these was the UI: a rendering process whose output, static HTML, was served to the browser.
There used to be engineers dedicated only to UI/HTML rendering, from which frontend development probably emerged and became a distinct skill set. Although working on something else entirely, they worked along with many other engineers on this single application, so all of them suffered from a deployment dependency, a Bottleneck in their development workflow. Or maybe not, because problems used to be much smaller back then.
Let’s take this antique layered application and modernize it a little bit, because today, in any modern enough system, the UI is handled by a frontend application. So let’s split it. To avoid the refactoring this would require, let’s conveniently assume our application consists of two mutually exclusive and properly intersecting Modules: one Module for the UI and another named Non-UI.
A third physical boundary was thus added between our two applications. One would be deployed to our servers, the other to our customers’ browsers. During run time they would have to communicate and interact with one another, so their intersection would need to cross the network boundary. The split entailed a client-server relationship. The problems that arise from networking are a challenge of their own in distributed systems, which is beyond the scope of this book but should not be taken for granted.
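To make the crossing concrete, here is a minimal sketch in TypeScript. All names are illustrative, and the `/api/greeting` endpoint is an assumption for the example: what used to be a direct in-process call between the two Modules becomes a network request, one that can now fail.

```typescript
// Before the split: the UI Module invokes the Non-UI Module directly,
// through a plain in-process function call.
interface NonUi {
  greeting(name: string): string;
}

function renderGreeting(nonUi: NonUi, name: string): string {
  return `<h1>${nonUi.greeting(name)}</h1>`;
}

// After the split: the very same intersection crosses the network boundary.
// The endpoint URL is a hypothetical example, not from the original text.
async function renderGreetingRemote(name: string): Promise<string> {
  const res = await fetch(`/api/greeting?name=${encodeURIComponent(name)}`);
  if (!res.ok) throw new Error(`greeting request failed: ${res.status}`); // networking can fail now
  return `<h1>${await res.text()}</h1>`;
}
```

The signature change alone tells the story: a synchronous call became an asynchronous one with a failure mode.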
During coding and deployment, this boundary sometimes has an effect on our development workflow. Even before the split, we had two kinds of Intentions and Directions of Change:
- The Intention to Change only the UI, our modern frontend application.
- The Intention to Change only the Non-UI, our modern backend application.
As long as our Change has one of these Intentions, it is no different for our development workflow than having two independent deployments of two independent applications.
But actually, even before the split we had a third Intention: the Intention to Change both of them together. That’s when and where it gets complicated. The added physical boundary seems to have caused a mismatch with the Change Stream, one which could lead to Inefficiencies.
Can’t Change Together
When it comes to our engineers, these newly added physical boundaries make their lives a little harder. It is mentally harder to work concurrently on multiple applications and to constantly jump between them: constant task switching, and maybe even context switching. It can somewhat decrease our Throughput, to be regained by bettering our practices.
It is said that mono-repos help with that. It is also possible in some IDEs to consolidate multiple applications into a single project. My experience suggests it is quite hard to set up and maintain. Do notice the difference between having a single repository/project for the entire company, and a repository/project that holds together only a subset of the company’s applications.
Our deployments are also affected by these physical boundaries. A deployment of one application can start only after the other has finished. It forces an order on them. It makes them dependent via deployment.
Although coded in parallel, two consecutive and dependent deployments create a longer idle time for our engineer, who would be able to verify his work only after both were completed.
This longer idle time is not an Inefficiency, because it is a technical must. There is no workaround, as the two can not be physically deployed to the same playing field: one goes to our servers and the other to our customers’ browsers. In these cases, the source of potential Inefficiencies would be the deployments’ durations.
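The forced ordering can be sketched as follows; the deploy steps are hypothetical placeholders for real tooling (an image push, a static-asset upload, and so on):

```typescript
// Two dependent deployments must run strictly in sequence.
type DeployStep = () => void;

function rollout(deployBackend: DeployStep, deployFrontend: DeployStep): string[] {
  const log: string[] = [];

  log.push("backend: deploying");
  deployBackend(); // must fully finish (and pass its checks) first...
  log.push("backend: done");

  log.push("frontend: deploying");
  deployFrontend(); // ...only then may the dependent frontend start
  log.push("frontend: done");

  return log; // the engineer can verify the Change only after this returns
}
```

The engineer’s idle time is the sum of both durations, which is why the durations themselves become the place to look for Inefficiencies.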
If splits that are a technical must do not cause Inefficiencies, would optional splits? By the end of this series, we will figure this out as well.
We had previously argued that making two small Changes instead of one big Change is eventually beneficial. Because this scenario is a technical must, we are actually forced to practice it. It is still beneficial, because fewer Changes would be bundled together, simply because they can’t be. It would make it easier to revert when an Instability is introduced right after the first deployment. But it would make it much harder to trace an Instability after the second was deployed, because it would need to be traced across two applications and not one.
Furthermore, any follow-up Change would require another two sequential deployments. And Product related Changes would still be cutting through layers, only this time not within a single application but between our backend and our frontend.
Honestly, I know of no way to ease or avoid it besides careful pre-planning, design and practice. Carefully verify and test the first application before we deploy the second. Code the first as if the second does not exist, and vice versa. This is what we should do anyhow when it comes to small sequential Changes done to internal Modules.
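One way to practice “code the first like the second does not exist” is to keep each Change backward compatible across the intersection. A minimal sketch, with illustrative field names (`nickname` is an assumption, not from the text): the backend adds an optional field, and the frontend never assumes it is present, so either deployment order is safe.

```typescript
// The response shape shared across the intersection.
interface UserResponse {
  name: string;
  nickname?: string; // newly added on the backend; older backends omit it
}

// Frontend code written as if the new backend "does not exist":
// it works whether it is deployed before or after the backend.
function displayName(user: UserResponse): string {
  return user.nickname ?? user.name;
}
```

Each half is then verifiable on its own, before the other half ships.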
Or it might just be my binary-thinking mind saying it is inevitable. As always, it is a question of frequency. There is a big difference between the above happening a few times a day and happening once a week. We’ve previously talked about the distributions within the Change Stream: there is a frequency to the Intention to Change them both together. Splitting our Modules differently between our applications would be setting this frequency:
On the left, the odds of deploying both applications depend on the cohesion between the Business Logic (BL) and the UI Modules. On the right, they depend on the cohesion between BL and DATA. Here’s a third option: to break the BL into two Modules.
BL(I) would be cohesive with the DATA Module. BL(II) would be cohesive with the UI Module. Maybe if we distribute and break our Modules according to Cohesion of Causes, we’d find something else entirely.
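A small sketch of what such a split might look like, with entirely illustrative functions: BL(II) holds logic that Changes with the UI, while BL(I) holds logic that Changes with the DATA.

```typescript
// BL(II), shipped with the frontend: presentation-facing business logic.
// Changing how a price is displayed deploys only the frontend.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// BL(I), shipped with the backend, next to the DATA Module.
// Changing the discount rules deploys only the backend.
function applyDiscount(cents: number, percent: number): number {
  return Math.round(cents * (1 - percent / 100));
}
```

With this distribution, having to deploy both applications together is reserved for the (hopefully rarer) Changes that cut through both halves of the BL.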
There are a lot of considerations on how to split and regroup our Modules between applications, such as security and secrecy. We can not put our “money making” algorithm in someone’s browser, or it would be easily stolen and copied. These topics are beyond the scope of this book, but in the next chapter we’ll touch one that is within it. Because not only do we need to cross physical boundaries, we need to cross time boundaries as well.