A fundamental truth about modern business is that how we collect and utilize data has a profound impact on our success or failure. In turn, the code that we create and maintain to manage our data becomes a critical factor for our business success.
What is the problem with our traditional methods of building complex business systems? Many organisations are running on legacy systems that are barely maintainable. They spend large amounts of time and resources patching fragile integrations between obsolete and sometimes unsupported applications. Adding new functionality, even a few extra fields, can be a major project and present a significant cost and risk to the business.
We are constantly trying to address these software development problems with new technology, architectural patterns, and DevOps processes. The key question is: are we addressing the right problem? Going forward, a strong guiding principle we should follow is “Functionality is an asset, code is a liability” – a thought which Kirk Pepperdine tweeted to Kevlin Henney. In other words, writing code is a “necessary evil” to create the functionality that ultimately delivers business value. A large codebase demands a correspondingly large, ongoing effort to maintain and secure it.
Maybe the question we should be asking is, are we writing too much code to get the required functionality? Can we get the same or more functionality whilst writing less code? Can we write simpler and less complex code? Or even better - can we do away with the code altogether?
Let’s start by thinking about what we mean by functionality and code.
What is the Traditional Solution Development approach?
At the start of a new software project we try to determine the scope of what will be needed by attempting to define the requirements in the following areas:
- The data model we will need to support the functionality
- The user experience (UX) that will expose the functionality
- The functionality that will be automated within the solution
- How data is exchanged with other internal solutions and external third-party applications
For most projects, our initial requirements will be high-level and somewhat ambiguous. As the project evolves the requirements will be refined and, in many cases, redefined. This leads to a sequence of change management activities and re-baselining the project plan.
To illustrate the typical solution development process, let’s consider a “simple” solution for an insurance policy management system that will gather data for a quotation, determine the rate, and issue an insurance policy. For clarity, we might decide to break this solution into two micro-services: Insurance Policy and Quotation.
Let us begin with the Insurance Policy micro-service and the concept of the Policy itself – we will need a few attributes:
- Policy Number
- Effective Date Range
- Legal Name of Policy Holder
- Address of Policy Holder
We can define the names of the fields we need, indicate their data types and some validation rules…
At this point we have enough to begin our database design and we can create our first table – progress has been made!
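To make this first step concrete, here is a minimal sketch of what that first table might look like, using an in-memory SQLite database via Python purely for illustration. The column names, types, and validation rules are assumptions based on the attribute list above, not a definitive schema:

```python
import sqlite3

# In-memory database standing in for the project's real RDBMS.
conn = sqlite3.connect(":memory:")

# A first cut at the Policy table: field names, data types, and some
# basic validation rules (all columns here are illustrative guesses).
conn.execute("""
    CREATE TABLE Policy (
        PolicyNumber     TEXT PRIMARY KEY,
        EffectiveFrom    TEXT NOT NULL,       -- ISO-8601 date
        EffectiveTo      TEXT NOT NULL,
        PolicyHolderName TEXT NOT NULL,
        PolicyHolderAddr TEXT NOT NULL,
        CHECK (EffectiveFrom < EffectiveTo)   -- date-range validation rule
    )
""")

conn.execute(
    "INSERT INTO Policy VALUES (?, ?, ?, ?, ?)",
    ("POL-0001", "2024-01-01", "2025-01-01", "Jane Doe", "1 High St"),
)
row = conn.execute("SELECT PolicyHolderName FROM Policy").fetchone()
```

Even this toy version shows the pattern: every attribute we define becomes a column, a type declaration, and one or more constraints that will need to be maintained for the life of the system.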
Now we can add some more concepts such as Coverage, Exclusion, Wording and Beneficiary which will be children of the Policy and enclosed by the Insurance Policy micro-service. Our data model is now emerging, and we can start to think about the rating functionality and the data we are going to need to create the finished Insurance Policy.
We can start to consider the user experience for our Insurance Policy micro-service. We could consider a back-office user role for the broker that would allow them to examine and modify the Policy data. We might want to build this user experience as a web application over a database – we break out a set of user stories and assign them to our agile project team to build.
If we are working in a mature development environment, we might already have a standard architectural framework and DevOps processes in place. If not, we are going to need to set some baseline standards and processes before our teams can get started.
Typically, we would now expect our DevOps team to begin creating a database with a table for each concept. Using a standard Create, Read, Update, and Delete (CRUD) model, our team would now begin to create a set of stored procedures to maintain the underlying tables. So, for each concept we now have 1 table and at least 4 stored procedures (we might end up creating extra stored procedures for specialized searches and partial record updates).
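The CRUD pattern described above can be sketched as four functions over the Policy table. In the article's scenario these would be stored procedures in the database; plain parameterised SQL in Python stands in for them here, and the table layout is a simplified assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Policy (PolicyNumber TEXT PRIMARY KEY, HolderName TEXT)")

# One function per CRUD verb - the minimum of four per concept the
# text describes. A real system would add more for searches and
# partial updates.
def create_policy(number, holder):
    conn.execute("INSERT INTO Policy VALUES (?, ?)", (number, holder))

def read_policy(number):
    return conn.execute(
        "SELECT PolicyNumber, HolderName FROM Policy WHERE PolicyNumber = ?",
        (number,)).fetchone()

def update_policy(number, holder):
    conn.execute("UPDATE Policy SET HolderName = ? WHERE PolicyNumber = ?",
                 (holder, number))

def delete_policy(number):
    conn.execute("DELETE FROM Policy WHERE PolicyNumber = ?", (number,))

create_policy("POL-0001", "Jane Doe")
update_policy("POL-0001", "Jane A. Doe")
holder = read_policy("POL-0001")[1]
```

Multiply this boilerplate by 20-30 concepts and the volume of near-identical code starts to become apparent.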
The team would then create a set of business objects (one per concept) to call the stored procedures and provide an abstraction layer over the storage for the user experience.
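A business object of this kind might look like the following sketch: a class that hides the storage details from the user experience layer. The class shape and method names are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Policy (PolicyNumber TEXT PRIMARY KEY, HolderName TEXT)")

class Policy:
    """Business object: an abstraction layer over the storage, so the
    UX code never touches SQL or stored procedures directly."""

    def __init__(self, number, holder):
        self.number, self.holder = number, holder

    def save(self):
        # Would call the create/update stored procedures in a real system.
        conn.execute("INSERT OR REPLACE INTO Policy VALUES (?, ?)",
                     (self.number, self.holder))

    @classmethod
    def load(cls, number):
        # Would call the read stored procedure in a real system.
        row = conn.execute(
            "SELECT PolicyNumber, HolderName FROM Policy "
            "WHERE PolicyNumber = ?", (number,)).fetchone()
        return cls(*row) if row else None

Policy("POL-0001", "Jane Doe").save()
p = Policy.load("POL-0001")
```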
At this point we may also start writing our automated unit tests at the database and business object level. The size and complexity of the unit test software we will need to create can easily be of the same order of magnitude as – or larger than – the actual functional code we will be testing.
Once we have some business objects, either actual or test stubs, we can begin to create the user experience. In this case we are going to have to create multiple forms for each concept; we need to be able to:
- search, find and select instances of the concept
- display and possibly edit the attributes of the instance
- invoke automation on the instance (e.g. rate the policy)
Now all that’s left is to add some automated test scripts for the user experience and we are done, right?
At this point we can easily be up to 20-30 tables with a couple of hundred attributes/columns, 100+ stored procedures, 20-30 business objects with 100+ functions, and 50-60 forms in our user experience. We could easily have about 100,000 lines of actual functional code and another 100,000 lines of test harnesses and scripts. That all takes time to write, for which the business must pay the programmers. Taking a conservative cost of $1/line our project is already up to $200K+ just to cover the basic data model and a simple user experience for our brokers.
What about security? We need to ensure that not everybody has access to the Insurance Policy data. We are going to need some mechanism for securing access to our micro-service and restricting the policies visible to a user. This implies we need to add some extra tables to map users to policies and then we are going to need some more stored procedures, business objects, user experience forms and testing software. Hopefully, we find this out towards the beginning of the project – security is better if it is built into the design from the beginning and not added as an afterthought!
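The extra mapping table and filtered reads described above might look like this sketch, again using SQLite via Python for illustration; the table and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Policy (PolicyNumber TEXT PRIMARY KEY, HolderName TEXT);

    -- The extra table the text calls for: which users may see which
    -- policies.
    CREATE TABLE UserPolicy (
        UserId TEXT, PolicyNumber TEXT,
        PRIMARY KEY (UserId, PolicyNumber)
    );
""")
conn.executemany("INSERT INTO Policy VALUES (?, ?)",
                 [("POL-0001", "Jane Doe"), ("POL-0002", "John Roe")])
conn.execute("INSERT INTO UserPolicy VALUES ('broker1', 'POL-0001')")

def policies_visible_to(user_id):
    # Every read must now be filtered through the mapping table,
    # which means touching every existing read stored procedure.
    return [r[0] for r in conn.execute("""
        SELECT p.PolicyNumber
        FROM Policy p
        JOIN UserPolicy up ON up.PolicyNumber = p.PolicyNumber
        WHERE up.UserId = ?""", (user_id,))]

visible = policies_visible_to("broker1")
```

Note that the cost is not just the new table: every existing read path must be revisited to apply the filter, which is exactly why retrofitting security is so expensive.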
What about audit? Increasingly, we are seeing system requirements to track changes to records over time. This implies that if we have 20 tables of primary policy data we need to add another 20 tables to track changes to the data – one per primary table. We can add a version number to the primary data and a database trigger that copies the new row from the primary table into the associated version history table. We will also need some new read stored procedures to pull specified versions of the instance from the version history tables. We typically don’t need to create extra business objects or user experience to handle this data – we could just add a little extra complexity to request a previous version of the record and ensure that the user can display it but not edit it or perform actions against it.
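The version-number-plus-trigger mechanism can be sketched as follows. SQLite via Python is used for illustration; here the version bump happens in the update statement and the trigger copies the new row into the history table, as the text describes. All names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Policy (
        PolicyNumber TEXT PRIMARY KEY,
        HolderName   TEXT,
        Version      INTEGER NOT NULL DEFAULT 1
    );

    -- One history table per primary table, mirroring its columns.
    CREATE TABLE PolicyHistory (
        PolicyNumber TEXT, HolderName TEXT, Version INTEGER
    );

    -- Trigger that copies the new row from the primary table into the
    -- associated version history table on every update.
    CREATE TRIGGER policy_audit AFTER UPDATE ON Policy
    BEGIN
        INSERT INTO PolicyHistory
        VALUES (NEW.PolicyNumber, NEW.HolderName, NEW.Version);
    END;
""")

conn.execute("INSERT INTO Policy VALUES ('POL-0001', 'Jane Doe', 1)")
conn.execute(
    "UPDATE Policy SET HolderName = 'Jane A. Doe', Version = Version + 1 "
    "WHERE PolicyNumber = 'POL-0001'")
history = conn.execute(
    "SELECT HolderName, Version FROM PolicyHistory").fetchall()
```

Every schema change to the primary table must now also be applied to the history table and the trigger, which is precisely the ripple effect discussed below.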
We have now added an extra $50K-$100K of software to implement security and audit – we must be done now, right? Well, this gets us a simple Insurance Policy micro-service into User Acceptance Testing (UAT) and our business users are now going to get immersed into testing it and figuring out what they missed in the initial requirements. This is where we find out:
We missed an attribute on the Policy object – no big deal: we just add a column to the Policy table, the Policy History table, the trigger that populates the Policy History, all of the CRUD stored procedures used to maintain Policy/Policy History data, the business object, the user experience forms, and all of the test harnesses and scripts. Perhaps it will take a day to get it all done and everything back through the DevOps Continuous Integration / Continuous Deployment (CI/CD) pipeline? Unfortunately, it’s much more expensive and risky to add the field now that we have built the UX.
“We need to redesign this form – I don’t like where these fields are located.” At last, an easy one: we just need to change the form layout and tab ordering, and update the test scripts.
Wait – this only works for Home Insurance Policies! We just expanded our business into Vehicle Insurance Policies. Do we need a whole new micro-service from scratch? Can we clone and modify the existing micro-service, or can we extend it to deal with a new type of Insurance Policy?
Our costs are now really starting to mount up. Small changes to concepts are taking longer to implement and a big request like a new type of policy could trigger a whole new project. If we are lucky, we will trap these issues during UAT. If we are unlucky, we will already be in production. When you consider that our micro-service may be in production for many years, these enhancements become a near certainty, and now we must deal with the impacts of data model changes on production data. Just adding a field becomes much more difficult, expensive, and fraught with real business risk. In many businesses it becomes increasingly difficult to get even simple changes through, and the ability of IT to support the business diminishes over time until IT becomes an anchor around the neck of the business.
It’s time to level up!
We have undoubtedly made great strides in our ability to build complex systems using modern DevOps, Agile Project Management, Domain Driven Design and micro-service architecture. Although our industry's knowledge base is maturing, many organisations are still having major issues with adoption and implementation of modern development practices, leaving many IT departments stuck in a rut.
Part of the problem is that we are still trying to solve these issues in fundamentally the same way. Our development projects are still at risk from fuzzy requirements. We are still writing lots of software (creating large liabilities) to provide even basic functionality (minimal assets). Our business solutions are still fragile, complex, and expensive to change. Micro-services are a step in the right direction, but we are still seeing implementation issues caused by further fragmenting data between services, and by the caching and integration workarounds needed to maintain adequate performance.
The Return on Investment (ROI) calculation for our approach to developing software is still fundamentally flawed. To change the ROI equation in our favor, we must take a different approach to software development. We need to ask some basic questions about how we build software based on everything we have learned about software engineering over the past three decades.
- What if we replaced most of the software with data?
- What if we created truly re-usable components that we can assemble into new solutions?
- What if we eliminated all data silos and created Data Agnostic Services?
The idea behind Data Agnostic Services is to systematically reduce the need for code whilst delivering more functionality.