The Problem With Traditional Software Development

A fundamental truth about modern business is that how we collect and utilize data has a profound impact on our success or failure. In turn, the code that we create and maintain to manage our data becomes a critical factor for our business success.

What is the problem with our traditional methods of building complex business systems? Many organisations are running on legacy systems that are barely maintainable. They spend large amounts of time and resources patching fragile integrations between obsolete and sometimes unsupported applications. Adding new functionality, even a few extra fields, can be a major project and present a significant cost and risk to the business.

We are constantly trying to address these software development problems with new technology, architectural patterns, and DevOps processes. The key question is: are we addressing the right problem? Going forward, a strong guiding principle we should follow is “Functionality is an asset, code is a liability” - a thought which Kirk Pepperdine tweeted to Kevlin Henney. In other words, writing code is a “necessary evil”: it is the functionality, not the code itself, that ultimately delivers business value, and a large body of code requires a large effort to maintain and secure.

Maybe the question we should be asking is, are we writing too much code to get the required functionality? Can we get the same or more functionality whilst writing less code? Can we write simpler and less complex code? Or even better - can we do away with the code altogether?

Let’s start by thinking about what we mean by functionality and code.

Functionality is an Asset, Code is a Liability

Functionality is an Asset

In a perfect world, Information Technology would be invisible. It would:

  • Always be available
  • Present the same data on any channel
  • Do exactly what the business needs today
  • Change effortlessly as the needs of the business evolve
  • Drive business growth and opportunities
  • Provide security and privacy for our customers

Unfortunately, functionality comes at a price – we need to create code to provide it.

Code is a Liability

Let’s think about code in its broadest sense:

  • Documentation that specifies, designs, implements, and tests a business solution
  • Source code that provides the functionality
  • Test cases that verify our code works
  • Servers and services we deploy the solution on

All of these artefacts cost us: when we create them initially, as they evolve over the course of the initial development project, and as they are adapted to reflect changes in the business over the solution’s lifecycle.

What is the Traditional Solution Development approach?

At the start of a new software project we try to determine the scope of what will be needed by attempting to define the requirements in the following areas:

  • The data model we will need to support the functionality
  • The user experience (UX) that will expose the functionality
  • The functionality that will be automated within the solution
  • How data is exchanged with other internal solutions and external third-party applications

For most projects, our initial requirements will be high-level and somewhat ambiguous. As the project evolves the requirements will be refined and, in many cases, redefined. This leads to a sequence of change management activities and re-baselining the project plan.

To illustrate the typical solution development process, let’s consider a “simple” solution for an insurance policy management system that will gather data for a quotation, determine the rate, and issue an insurance policy. For clarity, we might decide to break this solution into two micro-services: Insurance Policy and Quotation.

Let us begin with the Insurance Policy micro-service and the concept of the Policy itself – we will need a few attributes:

  • Policy Number
  • Effective Date Range
  • Legal Name of Policy Holder
  • Address of Policy Holder
  • Etc…

We can define the names of the fields we need, indicate their data types and some validation rules… At this point we have enough to begin our database design and we can create our first table – progress has been made!
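As a sketch of this first step, the attributes and a couple of validation rules might look like the following (the field names and rules here are illustrative assumptions, not an actual design):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Policy:
    """Illustrative first cut of the Policy concept."""
    policy_number: str
    effective_from: date
    effective_to: date
    holder_legal_name: str
    holder_address: str

    def validate(self) -> list[str]:
        """Return a list of validation errors (empty means valid)."""
        errors = []
        if not self.policy_number:
            errors.append("Policy Number is required")
        if self.effective_to <= self.effective_from:
            errors.append("Effective Date Range must end after it starts")
        if not self.holder_legal_name.strip():
            errors.append("Legal Name of Policy Holder is required")
        return errors
```

Even this tiny sketch shows the pattern: every attribute we add later will touch the field list, the validation logic, and everything built on top of them.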

Now we can add some more concepts such as Coverage, Exclusion, Wording and Beneficiary which will be children of the Policy and enclosed by the Insurance Policy micro-service. Our data model is now emerging, and we can start to think about the rating functionality and the data we are going to need to create the finished Insurance Policy.

We can start to consider the user experience for our Insurance Policy micro-service. We could consider a back-office user role for the broker that would allow them to examine and modify the Policy data. We might want to build this user experience as a web application over a database – we break out a set of user stories and assign them to our agile project team to build.

If we are working in a mature development environment, we might already have a standard architectural framework and DevOps processes in place. If not, we are going to need to set some baseline standards and processes before our teams can get started.

Typically, we would now expect our DevOps team to begin creating a database with a table for each concept. Using a standard Create, Read, Update, and Delete (CRUD) model, our team would now begin to create a set of stored procedures to maintain the underlying tables. So, for each concept we now have 1 table and at least 4 stored procedures (we might end up creating extra stored procedures for specialized searches and partial record updates).
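The shape of that CRUD quartet can be sketched in a few lines; here SQLite stands in for the real database and plain functions stand in for stored procedures (the table and function names are illustrative assumptions):

```python
import sqlite3

def connect() -> sqlite3.Connection:
    """Create an in-memory database with one concept table."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE policy ("
        " id INTEGER PRIMARY KEY,"
        " policy_number TEXT NOT NULL,"
        " holder_legal_name TEXT NOT NULL)"
    )
    return conn

# One concept (Policy) needs at least these four operations --
# and the same quartet is repeated for every other concept.
def create_policy(conn, policy_number, holder_legal_name):
    cur = conn.execute(
        "INSERT INTO policy (policy_number, holder_legal_name) VALUES (?, ?)",
        (policy_number, holder_legal_name))
    conn.commit()
    return cur.lastrowid

def read_policy(conn, policy_id):
    return conn.execute(
        "SELECT id, policy_number, holder_legal_name FROM policy WHERE id = ?",
        (policy_id,)).fetchone()

def update_policy(conn, policy_id, holder_legal_name):
    conn.execute("UPDATE policy SET holder_legal_name = ? WHERE id = ?",
                 (holder_legal_name, policy_id))
    conn.commit()

def delete_policy(conn, policy_id):
    conn.execute("DELETE FROM policy WHERE id = ?", (policy_id,))
    conn.commit()
```

Multiply this by 20-30 concepts, plus the specialized search and partial-update variants, and the volume of near-identical plumbing code becomes clear.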

The team would then create a set of business objects (one per concept) to call the stored procedures and provide an abstraction layer over the storage for the user experience.

At this point we may also start creating our automated unit tests at the database and business object level. The size and complexity of the unit test software we will need to create can easily be of the same order of magnitude as - or larger than - the actual functional code we will be testing.
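To make the point about test volume concrete, even a single small business rule picks up several test cases; a minimal sketch using Python's unittest (the rating rule and all names here are hypothetical):

```python
import unittest
from datetime import date

def rate_policy(effective_from: date, effective_to: date, base_rate: float) -> float:
    """Hypothetical rating rule: charge the base rate per whole year of cover."""
    if effective_to <= effective_from:
        raise ValueError("effective_to must be after effective_from")
    years = max(1, (effective_to - effective_from).days // 365)
    return round(base_rate * years, 2)

class RatePolicyTests(unittest.TestCase):
    def test_one_year_policy_charges_base_rate(self):
        self.assertEqual(rate_policy(date(2024, 1, 1), date(2025, 1, 1), 100.0), 100.0)

    def test_multi_year_policy_scales_with_years(self):
        self.assertEqual(rate_policy(date(2024, 1, 1), date(2026, 1, 1), 100.0), 200.0)

    def test_short_policy_still_charges_minimum_year(self):
        self.assertEqual(rate_policy(date(2024, 1, 1), date(2024, 6, 1), 100.0), 100.0)

    def test_inverted_date_range_is_rejected(self):
        with self.assertRaises(ValueError):
            rate_policy(date(2025, 1, 1), date(2024, 1, 1), 100.0)
```

Four test methods for a five-line function; multiply that across every stored procedure, business object, and form and the test harness easily outgrows the functional code.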

Once we have some business objects, either actual or test stubs, we can begin to create the user experience. In this case we are going to have to create multiple forms for each concept; we need to be able to:

  • search, find and select instances of the concept
  • display and possibly edit the attributes of the instance
  • invoke automation on the instance (e.g. rate the policy)

Now all that’s left is to add some automated test scripts for the user experience and we are done, right?

At this point we can easily be up to 20-30 tables with a couple of hundred attributes/columns, 100+ stored procedures, 20-30 business objects with 100+ functions, and 50-60 forms in our user experience. We could easily have about 100,000 lines of actual functional code and another 100,000 lines of test harnesses and scripts. That all takes time to write, and the business must pay the programmers for that time. Taking a conservative cost of $1/line, our project is already up to $200K+ just to cover the basic data model and a simple user experience for our brokers.

What about security? We need to ensure that not everybody has access to the Insurance Policy data. We are going to need some mechanism for securing access to our micro-service and restricting the policies visible to a user. This implies we need to add some extra tables to map users to policies and then we are going to need some more stored procedures, business objects, user experience forms and testing software. Hopefully, we find this out towards the beginning of the project – security is better if it is built into the design from the beginning and not added as an afterthought!
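The extra machinery for restricting which policies a user can see boils down to one mapping table and a join on every read path; a sketch with SQLite (the table and column names are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policy (id INTEGER PRIMARY KEY, policy_number TEXT);
    -- Extra table whose only job is deciding who may see which policy
    CREATE TABLE user_policy_access (user_id TEXT, policy_id INTEGER);
""")
conn.execute("INSERT INTO policy VALUES (1, 'POL-001'), (2, 'POL-002')")
conn.execute("INSERT INTO user_policy_access VALUES ('broker-7', 1)")

def visible_policies(user_id: str) -> list[str]:
    """Every read now has to join through the access-mapping table."""
    rows = conn.execute(
        "SELECT p.policy_number FROM policy p "
        "JOIN user_policy_access a ON a.policy_id = p.id "
        "WHERE a.user_id = ?", (user_id,)).fetchall()
    return [r[0] for r in rows]
```

Every existing search and read stored procedure now needs this filtering, which is why security retrofitted late ripples through so much code.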

What about audit? Increasingly, we are seeing system requirements to track changes to records over time. This implies that if we have 20 tables of primary policy data we need to add another 20 tables to track changes to the data – one per primary table. We can add a version number to the primary data and a database trigger that copies the new row from the primary table into the associated version history table. We will also need some new read stored procedures to pull specified versions of the instance from the version history tables. We typically don’t need to create extra business objects or user experience to handle this data – we could just add a little extra complexity to request a previous version of the record and ensure that the user can display it but not edit it or perform actions against it.
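The mechanism just described – a version number on the primary row plus a trigger that copies each new version into the history table – can be sketched with sqlite3 (the schema is an illustrative assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policy (
        id INTEGER PRIMARY KEY,
        holder_legal_name TEXT,
        version INTEGER NOT NULL DEFAULT 1);

    -- One shadow table per primary table...
    CREATE TABLE policy_history (
        id INTEGER, holder_legal_name TEXT, version INTEGER);

    -- ...plus a trigger that copies each new row version across
    CREATE TRIGGER policy_audit AFTER UPDATE ON policy
    BEGIN
        INSERT INTO policy_history
        VALUES (NEW.id, NEW.holder_legal_name, NEW.version);
    END;
""")

conn.execute("INSERT INTO policy (id, holder_legal_name) VALUES (1, 'Jane Doe')")

def update_holder(policy_id: int, name: str) -> None:
    """Application code bumps the version; the trigger files the copy."""
    conn.execute(
        "UPDATE policy SET holder_legal_name = ?, version = version + 1 WHERE id = ?",
        (name, policy_id))

def read_version(policy_id: int, version: int):
    """The extra 'read a specified version' path mentioned above."""
    return conn.execute(
        "SELECT holder_legal_name FROM policy_history WHERE id = ? AND version = ?",
        (policy_id, version)).fetchone()
```

Note that every later schema change must now be made twice – once to the primary table and once to its shadow – and the trigger updated to match.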

We have now added an extra $50K-$100K of software to implement security and audit – we must be done now, right? Well, this gets us a simple Insurance Policy micro-service into User Acceptance Testing (UAT) and our business users are now going to get immersed into testing it and figuring out what they missed in the initial requirements. This is where we find out:

We missed an attribute on the Policy object – no big deal, we just add a column to the Policy table, the Policy History table, the trigger that populates the Policy History, all of the CRUD stored procedures used to maintain Policy/Policy History data, the business object, the user experience forms, and all of the test harnesses and scripts. Perhaps it will take a day to get it all done and everything back through the DevOps Continuous Integration / Continuous Deployment (CI/CD) pipeline? Unfortunately, it’s much more expensive and risky to add the field now that we have built the UX.

“We need to redesign this form – I don’t like where these fields are located.” At last, an easy one: we just need to change the form layout and tab ordering, and update the test scripts.

Wait – this only works for Home Insurance Policies! We just expanded our business into Vehicle Insurance Policies. Do we need a whole new micro-service from scratch, can we clone and change the existing micro-service, or can we expand the micro-service to deal with a new type of Insurance Policy?

Our costs are now really starting to mount up. Small changes to concepts are taking longer to implement and a big request like a new type of policy could trigger a whole new project. If we are lucky, we will trap these issues during UAT. If we are unlucky, we will already be in production. When you consider that our micro-service may be in production for many years, these enhancements become a near certainty, and now we must deal with the impacts of data model changes on production data. Just adding a field becomes much more difficult, expensive, and fraught with real business risk. In many businesses it becomes increasingly difficult to get even simple changes through, and the ability of IT to support the business diminishes over time until IT becomes an anchor around the neck of the business.

Why do development projects fail?

Scope and Requirements

Getting perfect requirements at the start of the project rarely happens! Usually, our requirements evolve as we show the real business users the solution in User Acceptance Testing (UAT) or production. As an industry, we have begun to embrace agile project management methods to address this risk, and we have certainly had a lot of success compared to the older waterfall models. We are noticing problems faster, but “fuzzy” requirements are still the biggest risk to ultimate project success.

Project Management

Some organisations are still using “waterfall” project management approaches. Larger enterprises still retain formal methodologies requiring distinct Requirements, Design, Code and Test Phases with “gates” that need to be passed before work can begin on the next phase. Even when “agile” practices are adopted many larger organisations still view “agile” as doing a code drop to test every few weeks but otherwise retain their waterfall process. At the other extreme are organisations that embrace “agile” as an almost chaotic approach to building services with little in the way of structure or documentation. We need a balanced approach where we combine true agility of action with the discipline of a structured and documented framework.

Lack of Architectural Vision

Many organisations allow their solutions to grow “organically” without a pre-defined architectural framework in place. This “organic” architectural approach is not the same as “evolutionary” architecture. In the organic approach, new technology is added piecemeal with no ultimate end state in mind. Projects based on organic architectures tend to be focused on short-term tactical solutions with no concern for the big picture. On the other hand, evolutionary architectures have a desired end state in mind, and new technology is introduced to move the solution towards that end.

Proliferation of Technology

When we create a new micro-service we should use the “best” technology for the job, right? Usually this comes down to the personal preference of the development team tasked with building the service. Most services could be equally well implemented in C++, Java, PHP, C#, or VB – just pick your preferred technology, language, and development model and libraries! This leads to many different technologies being adopted within an IT organisation and causes a lot of downstream problems with sustainability over the longer-term operations lifecycle.

Too much code

Many organisations don’t create well-organised libraries of code that can be easily reused or composed into new solutions. This means that developers struggle to find the correct code to use, so they decide to create their own. Developers tend to focus on building code for a specific purpose rather than creating a generic component that could be reused. In some cases, developers will cut and paste together fragments from Stack Overflow (or other developer communities) without really understanding what they do. Sometimes this introduces unknown dependencies on external libraries that open vulnerabilities in the solution.

Data Migration

A large part of the cost and risk of many development projects is the migration of data from old applications and services into the new service. Trying to maintain many years of legacy data can compromise design. The new code will have to deal with both the functionality required by the new system and the overhead of trying to match new and old data models. Error handling will need to be added to compensate for old data that is “incomplete” or “invalid” under the new data model rules.

Why do IT operations fail to support the business?

Poor Strategic Alignment

Many organisations suffer from poor communication between the Business and IT departments. In some cases, poor communication within the IT department can also lead to issues with the delivery of IT solutions to the business. A failure to communicate leads to the Business and IT pursuing different objectives, so IT spends time and effort on deliverables that don’t directly support the immediate business needs. An effective communications and governance process should help Business and IT agree what the priorities are and focus effort on deliverables that add value to the business.

The “Integration Hairball”

Many organisations have multiple legacy applications and services containing data silos. Data may be “shared” between applications using batch ETL (Extract, Transform, Load) at various frequencies, or sometimes even in real-time through APIs (Application Program Interfaces). Over time, the addition of applications and services increases the complexity of integration, making the overall business solution fragile. The IT team comes to fear making changes because of the business risk and cost. Migrating to micro-services isn’t always an easy answer – often we just end up creating even more micro-silos of data. This in turn creates more integrations, reduces application performance, and does nothing to reduce the fragility of our overall solution.

Data Analysis and Reporting

Data that is distributed across many applications and services is hard to assemble and integrate in real-time. Challenges include the underlying security models of the legacy applications and the correlation of data between data silos. For example, an “Order” in one application may be assigned a unique internal numeric id, whereas the “same” order in a different application may be identified by its “Order Number”. For many organisations, reports that span multiple applications must be created by manual processes using tools such as Excel. Larger organisations spend lots of time and money creating data warehouses, which increases the fragility of their solutions by introducing an extra layer of application data integration.
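The correlation problem can be sketched in a few lines: the same order carries a different key in each silo, so reporting code must maintain a cross-reference between them (all records and names here are invented for illustration):

```python
# Hypothetical extracts from two silos: one keys orders by an internal
# numeric id, the other by a business "Order Number".
billing = [
    {"order_id": 101, "amount": 250.0},
    {"order_id": 102, "amount": 90.0},
]
fulfilment = [
    {"order_number": "ORD-2024-0001", "status": "shipped"},
    {"order_number": "ORD-2024-0002", "status": "pending"},
]

# A manually maintained cross-reference is often the only link between them.
xref = {101: "ORD-2024-0001", 102: "ORD-2024-0002"}

def combined_report() -> list[dict]:
    """Join the two silos into one report via the cross-reference."""
    status_by_number = {f["order_number"]: f["status"] for f in fulfilment}
    return [
        {"order_number": xref[b["order_id"]],
         "amount": b["amount"],
         "status": status_by_number.get(xref[b["order_id"]], "unknown")}
        for b in billing
    ]
```

The cross-reference itself is a liability: every new silo adds another key scheme, and a stale mapping silently produces wrong reports.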

Single Customer View

Many organisations find it difficult to create a single view of their customer data, as information relating to the customer’s services may be stored in several different applications. This leads to a proliferation of “swivel ware” (a user switching between screens in different applications) in organisations – especially those that have grown through a merger and acquisition, so IT systems have not been fully integrated. This causes issues with customer service and satisfaction; it is difficult or even impossible to handle customer requests seamlessly when they require the use of multiple applications. An obvious consequence of this problem is a dependency on manual workflow to complete simple tasks. It consumes additional time, effort, and cost, and risks introducing bad data.

Security and audit

Security and auditability are often considered a “nice to have” by developers, and are added at the end of the project as an afterthought. Many legacy applications require a user to sign in before they can access any application functionality and data. However, the division of access to functionality and information is based on user roles, which take time to define. Organisations often make do with existing user roles, leaving users with access to functionality and data they do not need. Building proper security and privacy rules into applications is just emerging as standard practice in mature development teams, but it still needs to be widely adopted across our industry. New privacy regulations such as the General Data Protection Regulation (GDPR) require that security and privacy are built in.

Poor adoption of standards and processes

Many Small- and Medium-sized Enterprises (SMEs) do not have formal standards and processes that they use consistently in the production of IT deliverables. Deliverables are created on a “best effort” basis by a series of developers. Frequently, software is re-developed because the new developers didn’t like the way the previous developer wrote the code. Non-standardized software creation leads to a proliferation of languages, libraries, and frameworks in use within the same organisation. This causes sustainability issues as developers join and leave the team. Not enshrining “best practices” within standards and processes can lead to inefficiencies in software development, unnecessary errors in deployment, and more fragile IT solutions.

Inflexible Workforce

In organisations that have grown through merger and acquisition, often the underlying IT solutions are never fully integrated. One reason is that the IT solutions for each part of the business were often developed using different languages, libraries and frameworks. This means that the new business now has several IT solutions with potentially overlapping functionality that cannot be merged easily. Each of the merged organisations had its own team of developers that specialised in their own technology and may be unwilling or unable to adapt to the new situation. Moving people between teams as demand fluctuates over time is difficult or impossible which means that some people are overloaded whilst others are idle. This can also cause delays in changing one set of applications because of a lack of available resources with the relevant skillset.

Unsustainable Solutions

We have all found ourselves taking ownership of existing business solutions and scratching our heads, thinking “why did the previous developers do that?” As software developers, we see that many of these solutions simply aren’t sustainable. Maybe they rely on applications or technologies that are no longer supported. Maybe there are so many dependencies between application systems that the risk of making a change is simply too high. Maybe the software is poorly structured or overly complex. This can leave us feeling backed into a corner, with no good way to make the changes that the business is now desperate to deliver.

It’s time to level up!

We have undoubtedly made great strides in our ability to build complex systems using modern DevOps, Agile Project Management, Domain Driven Design and micro-service architecture. Although our industry's knowledge base is maturing, many organisations are still having major issues with adoption and implementation of modern development practices, leaving many IT departments stuck in a rut.

Part of the problem is that we are still trying to solve these issues in fundamentally the same way. Our development projects are still at risk from fuzzy requirements. We are still writing lots of software (creating large liabilities) to provide even basic functionality (minimal assets). Our business solutions are still fragile, complex, and expensive to change. Micro-services are a step in the right direction, but we are still seeing issues in the implementation caused by further fragmenting data between services, and trying to find caching and integration approaches to maintain adequate performance.

The Return on Investment (ROI) calculation for our approach to developing software is still fundamentally flawed. To change the ROI equation in our favor, we must take a different approach to software development. We need to ask some basic questions about how we build software based on everything we have learned about software engineering over the past three decades.

  • What if we replaced most of the software with data?
  • What if we created truly re-usable components that we can assemble into new solutions?
  • What if we eliminated all data silos and created Data Agnostic Services?

The idea behind Data Agnostic Services is to systematically reduce the need for code whilst delivering more functionality.

What are Data Agnostic Services?

A Data Agnostic Service treats data consistently wherever it exists:

  • at Rest – within a persistent data storage medium, and as part of an omni-channel user experience
  • in Action – when we perform business logic on it
  • in Motion – between data stores