The “big problem” with modern Enterprise Architecture

Nov 28, 2022 | Blog, Dev Team

For most enterprises, IT business solutions are implemented as a set of applications and/or services. The diagram below illustrates a “simple” set of applications and services that might exist within a small to medium-sized enterprise. In practice, many enterprises run hundreds of applications, often implemented with a wide variety of technologies.

Each circle in the diagram represents a distinct application or service. It would typically encapsulate one or more distinct data stores such as databases, file stores, mailboxes, etc. It would share data with zero or more other applications/services through a range of mechanisms, across a multitude of different transfer technologies and file formats. Many of these would be internal to the enterprise, but some would connect to external applications and services.

Whilst the technology stack within an application/service is usually self-consistent, the various applications/services are generally different. Differences might include:

  • Security model and auditing capability
  • Hosting model: Cloud/on-premises – hybrids of SaaS, PaaS, IaaS
  • COTS (many with a lot of customization) or complete custom builds
  • Programming languages
  • Application architecture, or design patterns
  • Databases
  • Different versions and combinations of all of the above

This application-centric model throws up a set of problems for an Enterprise IT department:

  • How to provide high availability at the application level, and how does this fit into a disaster recovery model?
  • How to balance staff resource mobility between applications/services as business needs change?
  • How to support business-critical applications/services as staff move away?
  • What happens when one of the “big rock” applications needs to be upgraded or moved? How many integrations get broken?
  • How to deal with mergers as new lines of business are acquired?
  • How to deal with divestments as business units are spun off?

When you look deeper, you realize that there are even more subtle problems that arise simply because the functionality has been implemented by an application/service. Essentially the “functionality” provided by the application defines a rigid scope, has been written in stone, is hard to change, and becomes a “big rock” that anchors the enterprise to the past, stifling innovation.

When we analyze a group of applications/services with similar functionality, we often find many overlaps, as shown below:

Let’s imagine a concrete example to illustrate the problem. We are going to talk about basic “Customer, Order, and Invoice” applications. Historically, our Enterprise is a merger of three different Lines of Business (LoB) from two different companies. In one LoB, the CRM encompasses Customer, Order, and Invoice. The second LoB’s CRM covers Customers and Orders, but Invoicing is its own application. In the final LoB, all three functions are separate applications. So, in total, our example Enterprise has implemented six different applications to cover these three functions:

  • A – Customers, Orders and Invoices
  • B – Customers and Orders
  • C – Customers
  • D – Orders for (C)
  • E – Invoices for (B)
  • F – Invoices for (C)

Even when we restrict our analysis to just the Customer, we won’t see a complete overlap of either functionality or data. One of our applications (A) is focused on B2B Customers, whilst the other two (B and C) serve residential B2C Customers. The data that we need to store is different for B2B and B2C Customers. There will be some common fields, but there will also be fields unique to each Customer type; even for B2C Customers, the B and C Customer records may have different fields. If we are considering data from the different applications, there may also be a difference in primary keys: one application may key B2C Customers by Legal Name, the other by Primary Address. There may be an overlap where the same B2C Customer is registered in both applications, but we can’t easily link or synchronize them.
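To make the divergence concrete, here is a minimal sketch of how the three Customer record shapes might differ. The field names and keys are purely illustrative assumptions, not taken from any real system.

```typescript
// Hypothetical sketch of how the three Customer record shapes might diverge.
// Field names and keys are illustrative, not taken from any real system.

interface CustomerA {            // Application A: B2B customers
  companyRegistrationNo: string; // primary key in A
  legalName: string;
  billingContact: string;
  creditLimit: number;
}

interface CustomerB {            // Application B: residential B2C customers
  legalName: string;             // primary key in B
  dateOfBirth: string;
  loyaltyTier?: "bronze" | "silver" | "gold";
}

interface CustomerC {            // Application C: residential B2C customers
  primaryAddress: string;        // primary key in C
  fullName: string;
  marketingOptIn: boolean;
}

// The same person could exist as both a CustomerB and a CustomerC record, but
// with different keys (legalName vs primaryAddress) there is no reliable join
// between them without a separate matching exercise.
```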

We immediately see the issues with functionality being implemented by applications. Each of A, B and C has its own data store. If we wanted to merge data from A and C – how would we do that? Copy Customer data from C into A? Would the underlying data model of A support that? Even if it did, what would happen to the downstream applications D for Orders, and F for Invoicing? What would be the impact on A’s user experience given that we now have C Customer records in the data store? These considerations are why even years, or decades, after a merger we see distinct LoB applications that reflect the history of the enterprise.

This isn’t just an IT problem

Whilst this certainly poses challenges for an IT department, it also causes problems for enterprise operations. Having multiple CRM applications fragments customer data and requires checking multiple systems to get a single customer view. Having the “same” customer in multiple systems also means that you have potential data quality issues. For example, which is the “correct” address? Are you linking the correct two customers?

We have frequently encountered the “swivel-ware” issue in enterprises. This occurs when, to complete a single operational task, a user has to sign in to multiple applications and enter some of the same data multiple times. Dual keying of the same data into multiple applications is a cost in time, effort, and quality. To complete the task, each action must often be completed successfully and in the correct sequence. Failure to complete an action in sequence could mean that some, or all, of the previously completed actions need to be rewound as part of a compensating transaction.

This also causes issues with implementing automation across applications. The rigid functionality implemented by each application must in some way be worked around to enable a “smooth” flow of actions. In many applications, the user experience imposes business rules or actions as it updates the underlying data stores; in some cloud SaaS applications, those data stores may not be easily accessible. Just implementing a “happy path” workflow can be a technical and logistical challenge. When something goes wrong, we hit the compensating transaction issue. When one of the applications affected by the automation needs to be updated, the whole automation process needs to be reviewed and possibly updated.
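As a rough illustration, here is a minimal saga-style sketch of automating one task across the example applications C, D and F. The client objects and their methods are hypothetical; real integrations would depend entirely on what each application actually exposes.

```typescript
// Hypothetical API clients for applications C, D and F from the example above.
declare const crmC: {
  createCustomer(c: unknown): Promise<string>;
  deleteCustomer(id: string): Promise<void>;
};
declare const ordersD: {
  createOrder(customerId: string, order: unknown): Promise<string>;
  cancelOrder(id: string): Promise<void>;
};
declare const invoicingF: {
  raiseInvoice(orderId: string): Promise<void>;
};

type Undo = () => Promise<void>;

export async function onboardCustomerWithOrder(customer: unknown, order: unknown) {
  const completed: Undo[] = []; // compensating actions for steps that succeeded
  try {
    const customerId = await crmC.createCustomer(customer);
    completed.push(() => crmC.deleteCustomer(customerId));

    const orderId = await ordersD.createOrder(customerId, order);
    completed.push(() => ordersD.cancelOrder(orderId));

    await invoicingF.raiseInvoice(orderId); // final step in the sequence
  } catch (err) {
    // A step failed mid-sequence: rewind the earlier steps in reverse order.
    for (const undo of completed.reverse()) {
      await undo();
    }
    throw err;
  }
}
```

Even this “happy path plus rewind” sketch has to know the correct sequence and the correct undo action for every application it touches, which is exactly the fragility described above.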

Microservices is the answer, right?

We have been moving towards the implementation of microservices for decades, and as an industry we have made some progress. However, in most enterprises the move to microservices has simply proliferated the underlying issues described above. Many “microservices” really became small applications. In our example, we ended up with three distinct Invoice application/service implementations – two of which were originally supposed to be “microservices”. There was no separation, or re-use, of what could have been common code: there was no shared “Calculate Sales Tax” or “Pay Invoice” service; each Invoice application effectively duplicated this code using its own technology and data storage.
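For contrast, here is a sketch of what a single shared, stateless “Calculate Sales Tax” function could look like, which all three Invoice implementations might call instead of duplicating the logic. The rate table and jurisdiction codes are purely illustrative.

```typescript
// Sketch of a shared, stateless sales tax calculation. Rates and jurisdiction
// codes are illustrative; a real service would look them up from reference data.

interface InvoiceLine {
  netAmount: number;       // price before tax
  taxJurisdiction: string; // e.g. "CA-ON", "US-NY"
}

const RATES: Record<string, number> = { "CA-ON": 0.13, "US-NY": 0.08875, "GB": 0.2 };

export function calculateSalesTax(lines: InvoiceLine[]): number {
  return lines.reduce(
    (total, line) => total + line.netAmount * (RATES[line.taxJurisdiction] ?? 0),
    0,
  );
}
```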

Many enterprises that went down the path of fine-grained microservices hit issues with the length of call chains. If you need to chain together 10–20 microservice calls to complete a task, and each microservice is on its own server, then the cumulative time cost of making those calls can easily become excessive. This led to the creation of larger coarse-grained services and the adoption of anti-patterns including caching and eventual consistency.
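A back-of-envelope calculation shows why the chain length matters. The per-call numbers below are assumptions for a typical intra-datacenter hop, not measurements.

```typescript
// Back-of-envelope illustration of call-chain cost (assumed numbers).

const calls = 15;            // microservice calls chained to complete one task
const networkOverheadMs = 5; // serialization + network round trip per call
const processingMs = 10;     // work done inside each microservice

const totalMs = calls * (networkOverheadMs + processingMs);
const overheadMs = calls * networkOverheadMs;

console.log(`~${totalMs} ms end-to-end, of which ~${overheadMs} ms is pure call overhead`);
// ~225 ms end-to-end, of which ~75 ms is pure call overhead
```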

The fundamental issue here is that the individual microservices still encapsulate the data they maintain. Although implementing functionality with microservices is a move in the right direction, many enterprises didn’t go far enough, in that they failed to extract the data from the service.

Solution: Build “functionality” without the need for applications – Virtualize Functionality

Let’s define “functionality” in this context as a set of user experiences and workflows that deliver business value. This “functionality” relies on secure access to business data and a data-agnostic framework service that allows users to create and maintain it. The user experience can be provided as configuration that does not rely on underlying applications. The workflow can be provided as stateless microservices that can be invoked from the configured user experience.
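To make that concrete, here is one possible shape for a “virtualized functionality” definition: the user experience captured as configuration, plus bindings to stateless workflow services. All names here are illustrative assumptions, not the schema of any particular product.

```typescript
// One possible shape for a "virtualized functionality" definition (illustrative).

interface FunctionalityDefinition {
  name: string;                  // e.g. "Invoice Management"
  recordTypes: string[];         // the business record types it reads and writes
  pages: PageConfig[];           // configured user experiences, rendered by a framework
  workflows: WorkflowBinding[];  // stateless services invoked from the pages
}

interface PageConfig {
  recordType: string;
  layout: Record<string, unknown>; // interpreted by a dynamic page generator at runtime
}

interface WorkflowBinding {
  trigger: string;     // e.g. a "Pay Invoice" action on a page
  serviceUrl: string;  // endpoint of a stateless workflow microservice
}
```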

First, we need a framework service that will enable us to securely store any type of record data. This will act as the system of record for all our business data over time. It will need to be horizontally scalable, highly available, and easy to replicate for disaster recovery scenarios. There are many current technologies that can be used to build such a framework service. Decoupling “functionality” data silos from applications and services will reduce the need for complex security, integration, backup, and recovery solutions.
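The sketch below shows what the interface to such a data-agnostic record store could look like. The method names and the idea of versioned records are assumptions about how such a framework might work, not a description of a specific implementation.

```typescript
// Sketch of a data-agnostic record storage interface (assumed design).

interface StoredRecord {
  id: string;
  recordType: string;            // e.g. "Customer", "Order", "Invoice"
  version: number;               // each change creates a new version: a system of record over time
  data: Record<string, unknown>; // the record content itself, schema-agnostic
}

interface RecordStore {
  read(recordType: string, id: string): Promise<StoredRecord>;
  save(recordType: string, id: string, data: Record<string, unknown>): Promise<StoredRecord>;
  query(recordType: string, filter: Record<string, unknown>): Promise<StoredRecord[]>;
}
```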

Secondly, we need a framework service that will enable us to securely render any type of record data from our secure storage. It should be possible to configure the user experience required by our desired functionality without creating an application or service. In recent years the technology for building Dynamic Page Generators has become readily available. It is now possible to create a detailed configuration for each page presented to the user, rendered at runtime taking into account the roles of the user, the current state of the record, and the current content of the record. In practice, we can now render any desired user experience across any platform (web, desktop, tablet, or mobile phone).
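An illustrative page configuration for an Invoice record might look like the sketch below. The property names are hypothetical; the point is that the page is data interpreted at runtime, not compiled application code.

```typescript
// Illustrative page configuration for an Invoice record (hypothetical schema).

const invoicePage = {
  recordType: "Invoice",
  fields: [
    { name: "customerName", control: "text", readOnly: true },
    { name: "amountDue",    control: "currency" },
    { name: "status",       control: "dropdown", options: ["Draft", "Sent", "Paid"] },
  ],
  // Rules evaluated at render time against the user's roles and the record's
  // current state and content.
  rules: [
    { when: { role: "Clerk" },                   hide: ["creditNotes"] },
    { when: { field: "status", equals: "Paid" }, disableActions: ["Pay Invoice"] },
    { when: { field: "amountDue", greaterThan: 10000 },
      requireRole: "Manager", forActions: ["Approve"] },
  ],
};
```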

Thirdly, we still need microservices to implement custom workflow tasks. However, we should build them completely stateless and data-less. They should interact directly with the framework data storage service. This means that they always operate directly on the latest version of the data, without the need for caching or eventual consistency.
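A minimal sketch of such a task is shown below. It holds no data and no cache of its own: it reads the latest Invoice from the record store, applies the business rule, and writes the result back. It assumes the `RecordStore` interface sketched earlier, and the field names are illustrative.

```typescript
// Sketch of a stateless, data-less workflow task working directly against the
// record store. Assumes the RecordStore interface sketched above.

export async function payInvoice(store: RecordStore, invoiceId: string, amount: number) {
  const invoice = await store.read("Invoice", invoiceId); // always the latest version
  const remaining = Number(invoice.data.amountDue) - amount;

  await store.save("Invoice", invoiceId, {
    ...invoice.data,
    amountDue: remaining,
    status: remaining <= 0 ? "Paid" : "PartPaid",
  });
}
```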

Is this a “quick win”?

The enabling technologies for this approach already exist. Our trellispark platform is a pragmatic working example. You can download trellispark and study its architecture on our website.

Like all enterprise architecture transformations, it will likely take years, if not decades, for this approach to become an industry standard. Remember how long it took the industry to migrate from physical servers to VMs? And from enterprise data centers to the cloud? We have been adopting microservice architectures for decades and are not there yet!

We are at the start of a path to the future where “functionality” itself becomes a configurable service and we can finally let go of our “big rock” applications.

Next Steps

Over the coming days we will be publishing a couple more blogs on the topic of Functionality Virtualization as a pragmatic solution to some of the big problems facing Enterprise Architecture.

Why Virtualize Functionality? – This blog will outline the benefits of virtualizing functionality and describe why virtualization will enable you to do more with less.

How to Virtualize Functionality? – It can start with an easy increment to what you are already doing in your Enterprise Architecture. This blog will outline how to start along the path of functionality virtualization, how to prove that it will work in your Enterprise, and how to get started with an initial project and then incorporate it into your strategic transformation roadmap at your own pace.