The Virtualization of UX and CRUD Functionality

Dec 7, 2022 | Blog, Dev Team

What is Virtualized UX and CRUD Functionality?

For the purposes of this discussion, “Functionality” means the user experience presented by an application: how a user creates and maintains data, and how they invoke workflow on existing data.

“Virtualizing UX and CRUD functionality” is a mechanism by which the user experience is no longer rendered by code in the application. Instead, non-technical Business Analysts and Enterprise SMEs can configure most of the user experience and CRUD functionality as a dataset without the need for developers. The user experience is rendered in real-time on the user’s device by combining the configuration dataset with the data to be displayed. Where a developer is required, the scope of the customization task will be small and well-defined, such as creating a custom microservice component to accomplish a single task like “Calculate Sales Tax”.
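
To make that concrete, here is a minimal sketch (in TypeScript) of what such a configuration dataset might look like. The shape, field names, and the microservice hook are illustrative assumptions, not an actual framework schema.

```typescript
// Hypothetical shape of a UX/CRUD configuration dataset.
// Field names and structure are illustrative only.
interface FieldConfig {
  name: string;                // entity attribute this field binds to
  label: string;               // label rendered on the form
  type: "text" | "number" | "currency" | "date";
  required?: boolean;
  readOnly?: boolean;
}

interface ActionConfig {
  label: string;               // button text rendered in the UX
  microservice?: string;       // hypothetical microservice to invoke, e.g. "calculate-sales-tax"
}

interface PageConfig {
  entity: string;              // logical entity in data agnostic storage
  title: string;
  fields: FieldConfig[];
  actions: ActionConfig[];
}

// A Business Analyst could author this as data, with no application code:
const orderPage: PageConfig = {
  entity: "Order",
  title: "Create Order",
  fields: [
    { name: "customerName", label: "Customer", type: "text", required: true },
    { name: "orderDate",    label: "Order Date", type: "date", required: true },
    { name: "subtotal",     label: "Subtotal", type: "currency", required: true },
    { name: "salesTax",     label: "Sales Tax", type: "currency", readOnly: true },
  ],
  actions: [
    // The one place a developer is needed: a small, well-defined microservice.
    { label: "Calculate Sales Tax", microservice: "calculate-sales-tax" },
    { label: "Save" },
  ],
};
```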

Virtualizing UX and CRUD functionality is a new and complex discussion, so let’s start by thinking about another type of virtualization that is a little more familiar.

Let’s start with why we virtualized applications

Back in the old days when the internet was still made of wood (old joke), we deployed our applications on physical servers. We eventually migrated most of our applications onto virtual servers in private clouds in our own, or third party, data centers. As public cloud became more mainstream and trusted, we started redeploying some virtual servers into public cloud using Infrastructure as a Service (IaaS). At this point our applications were still being deployed on top of a server’s operating system.

As cloud-based Platform as a Service (PaaS) solutions became available and trusted, we started to break apart the components of our applications and abstract them from the underlying operating system. By deploying the components using PaaS we are able to let the cloud provider deal with the maintenance of the servers and operating systems.

The rise of Software as a Service (SaaS) applications has provided distinct sets of functionalities that we can subscribe to. The underlying software application is managed by the cloud provider and our data is maintained in their systems. We have limited ability to customize the cloud service to our requirements and are constrained by the scope of the provided service. If we need two distinct sets of functionalities, then we need multiple cloud SaaS applications or must create our own custom solutions.

If we discussed current Enterprise Architecture options with a CTO/CIO twenty-five years ago, would they have believed that virtualization of applications from physical servers was achievable? Whilst we didn’t realize all of the claims made when we started the journey, we realized enough of them that the journey was worthwhile.

Bottom line: Virtualizing applications from physical servers has enabled us to systematically reduce costs and eliminate many operational risks.

We face even more challenges today than we faced twenty-five years ago. Our move to virtualize applications has created even more complex Enterprise Architectures – it’s time to consider how to simplify our approach.

Could the next step be virtualizing UX and CRUD functionality?

Our basic premise is that “Users don’t care about technology or applications – they just want secure access to data and business functionality.”

Let’s consider a future state, a decade in the future, where we have found a way to virtualize UX and CRUD functionality. Our virtualized UX and CRUD functionality framework implementation includes three key elements (a rough sketch in code follows the list):

  • A framework level data agnostic storage service that can securely store any required data and make it available directly to any other service.
  • A framework level dynamic page builder that can render any user experience using a configuration dataset on any user device.
  • A set of microservices that provide workflow functionality that either runs in the background or can be directly invoked from the user experience.
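
As a rough sketch of how those three elements might relate as code contracts, consider the interfaces below. All interface and method names are assumptions made for illustration only.

```typescript
// Hypothetical contracts for the three framework elements (names are illustrative).

// 1. Data agnostic storage: stores any entity as a keyed document.
interface DataAgnosticStorage {
  get(entity: string, id: string): Promise<Record<string, unknown> | null>;
  put(entity: string, id: string, record: Record<string, unknown>): Promise<void>;
  query(entity: string, filter: Record<string, unknown>): Promise<Record<string, unknown>[]>;
}

// 2. Dynamic page builder: combines a configuration dataset with a record
//    and renders the user experience on the client device.
interface DynamicPageBuilder {
  render(pageConfig: object, record: Record<string, unknown>): void;
}

// 3. Microservice: a small unit of workflow that reads/writes storage directly,
//    invoked from the UX or run in the background.
interface WorkflowMicroservice {
  name: string;
  invoke(storage: DataAgnosticStorage, input: Record<string, unknown>): Promise<Record<string, unknown>>;
}
```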

Functionality that used to be hard coded in the application user experience is now configured in data. If we need to add new functionality for our Enterprise, we can purchase/download/create a configuration dataset that provides the required functionality. We can customize that configuration dataset at any time to meet our specific business needs and then immediately start creating new data using that functionality. We do not need to create any new applications/services to render the new user experience.

The new data is going into the framework’s data agnostic storage service and so is directly and immediately available to any other service connected to the virtualized UX and CRUD functionality framework. There is one system of record and no additional data integrations are required.

If the new functionality comes with new automation or analytics, they can be created/customized as required and deployed as microservices. They will be connected to the framework and so have direct and immediate access to all other Enterprise data as required.

From the Enterprise perspective, the hard coded applications that tied us to old and inefficient processes are now gone. We replaced them as part of our ongoing modernization roadmap. We would have replaced them anyway; we just chose to virtualize the application’s UX and CRUD functionality instead of hard coding it again. Best of all, it was cheaper, faster and less risky to virtualize the application functionality as opposed to hard coding it again. We have cut operational costs and risks for both IT and the business. We now have a truly agile framework for continuous improvement.

Is it even possible to virtualize UX and CRUD functionality?

Our virtualized UX and CRUD functionality framework implementation depends on adding three simple things to our existing Enterprise Architecture toolbox. The good news is that we have been using two of them for a long time now:

  • Microservices are a well-understood mechanism for building automation functionality that can either be invoked on demand from the user experience or run as background tasks. The only difference proposed here is that they no longer encapsulate data but read it directly from data agnostic storage, which means that many antipatterns can be avoided (see the sketch after this list).
  • Data agnostic storage has also been available for a long time. Simple document database solutions have been around for a while, and we have investigated a number of hybrid approaches to optimize filtering and search over unstructured data.
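
The sketch below illustrates the idea of a microservice that owns no data of its own, reading and writing through the shared data agnostic storage service instead. The storage interface, the function name, and the flat tax rate are illustrative assumptions.

```typescript
// Minimal sketch of a microservice that encapsulates no data:
// it reads and writes through the shared data agnostic storage service.

interface DataAgnosticStorage {
  get(entity: string, id: string): Promise<Record<string, unknown> | null>;
  put(entity: string, id: string, record: Record<string, unknown>): Promise<void>;
}

async function calculateSalesTax(storage: DataAgnosticStorage, orderId: string): Promise<void> {
  const order = await storage.get("Order", orderId);
  if (!order) throw new Error(`Order ${orderId} not found`);

  const subtotal = Number(order.subtotal ?? 0);
  const taxRate = 0.07; // placeholder rate; a real service would look this up

  // Write the result back to the single system of record,
  // where it is immediately visible to every other service and user.
  await storage.put("Order", orderId, { ...order, salesTax: subtotal * taxRate });
}
```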

The missing piece has been the ability to create a framework level dynamic page builder. Recent advances in user experience implementation technologies have made construction of practical dynamic page builder framework components possible.
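
As a rough sketch of the idea, the snippet below renders a form by combining a configuration dataset with a data record at run time on the client. The configuration shape and the HTML output are assumptions chosen for brevity; a real page builder would target native or framework components.

```typescript
// Sketch of the dynamic page builder idea: the client fetches a configuration
// dataset and a data record, then renders the form from the configuration.

interface FieldConfig { name: string; label: string; readOnly?: boolean }
interface PageConfig { title: string; fields: FieldConfig[] }

function renderPage(config: PageConfig, record: Record<string, unknown>): string {
  // Produce simple HTML for illustration.
  const fields = config.fields
    .map(f => {
      const value = String(record[f.name] ?? "");
      const readOnly = f.readOnly ? " readonly" : "";
      return `<label>${f.label}<input name="${f.name}" value="${value}"${readOnly}></label>`;
    })
    .join("\n");
  return `<h1>${config.title}</h1>\n<form>\n${fields}\n</form>`;
}

// Usage: the same builder renders any entity, driven purely by configuration.
const html = renderPage(
  { title: "Customer", fields: [{ name: "name", label: "Name" }, { name: "email", label: "Email" }] },
  { name: "Acme Ltd", email: "info@acme.example" }
);
console.log(html);
```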

Read the “How to Virtualize UX and CRUD Functionality” blog if you want to see a pragmatic implementation of a virtualized UX and CRUD functionality framework. That blog will direct you to:

  • Free resources for understanding the underlying architecture and design of the required framework components
  • A fully functional release of the framework components to download and explore how this is accomplished
  • An overview of an implementation project approach to install the framework and build your first projects

Sounds simple enough – why should I do it?

Whenever somebody comes up with a new idea, they also make a bunch of claims about what the benefits will be. Over time, some of those claims are realized, some turn out to be interesting but not materially impactful, and some never materialize.

To try to gauge the possible benefits of virtualizing UX and CRUD functionality we have added a number of distinct sections below. Each starts by outlining a problem we see today in real enterprises and then, in italics, identifies how our future-state virtualized UX and CRUD functionality could help.

Data sets used in the current state (normal font) are illustrative, drawn from actual project experience. For our example, we have considered a mid-sized Enterprise blended from many projects over the past couple of decades.

  • 100 applications – utilizing a wide variety of technology. Some are relatively recent, but many are years old, a few are decades old.
  • 350 databases – utilizing most of the available database engines used in the past 20 years. Most of the database engines are at least two versions behind current and some are no longer even supported.
  • 250 data migrations – most are batch ETL on a daily or weekly schedule; 50 are point-to-point integrations using the application’s version-specific API.
  • 700 Servers – mostly virtualized but some are still physical. A mixture of various operating systems and versions. Some approaching (or past) end of support/life.
  • 100 cloud services – we know about half of what the Enterprise staff are using. There are a lot of shiny objects out there that we haven’t been told about, but the Enterprise still depends upon them for critical functions.
  • 6 different authentication providers – most of our users have multiple user accounts to access all of the applications they need to use on a day-to-day basis to complete their operational tasks. Many applications use a standalone user/password authentication method that does not have MFA.
  • 85 developers and operations staff have some form of administration access to a subset of the infrastructure, applications, servers and databases.

Any numbers in the visionary future state (italics) are best estimates of the consolidation made possible by virtualizing UX and CRUD functionality.

For this example, we are going to assume that we have nearly completed our Functionality Virtualization. We have virtualized the UX and CRUD functionality of 90 applications into our framework which will significantly reduce the number of technologies, servers and databases in our infrastructure.

Proliferation of technology

The rate of advance of technology is accelerating, and it seems that every new application project wants to adopt some new architecture, tool, or service. That would be great if all of our legacy applications could be upgraded at the same time. The reality is that we all have legacy applications that are decades old and still critical to some line of business. It’s not just the applications themselves, of course; we also see many different technologies in use for data integrations and other supporting services, including automation, analytics, machine learning and AI.

Most of our data is now stored in one framework data agnostic storage service. All user experience is rendered by a small number of dynamic page builders. We have dramatically reduced the number of technologies in our Enterprise Architecture. If new technology becomes available for either data storage or user experience rendering, we can simply update the framework component and all of our virtualized user experience functionality updates at the same time.

We are still using a variety of different technologies to create workflow automation, analytics, machine learning and AI. They are implemented by microservices and are completely abstracted from user experience applications, so there are significantly fewer technologies in play here as well.

Technical Support

Your best creative developers are always going to want to use the latest technology on the newest projects. As well as feeding the “proliferation of technology” problem, you are also left with a technical support problem for your older applications. As applications age, the original development team migrates to new projects. This is especially true if you outsourced the application build to a third-party development team or used independent consultants.

It is not uncommon to find applications that are decades old being supported by a small team of developers who just keep the lights on. These developers generally have a very specific skillset and so can’t be assigned as needed to other critical projects even if they have spare capacity. On the other hand, they can’t easily be replaced, as new hires frequently don’t want to work on supporting old applications.

We now have few, if any, applications outside the framework. Most of our business functionality is controlled and exposed by the latest technology. We need significantly fewer staff to manage all of the functionality in the framework, and they are easier to find and onboard because we are using current mainstream technology.

Fragmentation of data

Our business-critical data is now more fragmented than ever. We have application data silos within our own data centers and across multiple cloud IaaS, PaaS and SaaS. As our business users increasingly take advantage of “shiny” cloud services to implement “informal” business solutions we can even lose track of where critical data is being stored. A greater percentage of our business-critical data is now under the control of third parties than ever before.

Most of the business-critical Enterprise data is now in one framework data agnostic storage service under the direct control of the Enterprise. Some data may still reside in legacy applications, but the overall fragmentation is now significantly reduced. The ability of the business to rapidly add new functionality to the framework is reducing the tendency to look for shiny cloud services.

Data Integration

As the number of data silos increases, we need more complex data integration services so that we can automate workflow and reporting across multiple lines of business applications. For many of our application data silos, the underlying data is not readily accessible. Many applications have some measure of logic associated with updating records – the very functionality that we use the application for!

Creating data integrations can be a complex project in itself and ties together specific versions of the connected applications. Upgrading applications to newer versions can easily break existing integrations. The web of applications and data integrations can make it almost impossible to update some core applications.

Not all data integrations happen in real-time. A significant number of batches run overnight, weekly, monthly or quarterly. Data in many applications can be stale between refreshes.

We now have considerably fewer application data silos to maintain, and most data has already been consolidated into a single system of record. The data agnostic storage service has been designed to be easy to integrate with and becomes the focus of an Entity Synchronization Layer. This means that the remaining applications need only synchronize with the framework data, which will then distribute appropriate updates as required.
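
A minimal sketch of what such an Entity Synchronization Layer could look like is shown below. The class, its methods, and the in-memory store that stands in for the data agnostic storage service are all illustrative assumptions.

```typescript
// Sketch of the entity synchronization idea: a remaining legacy application
// pushes changes to the framework's storage, which fans the update out to any
// other subscribed consumer.

type EntityRecord = Record<string, unknown>;
type Subscriber = (entity: string, id: string, record: EntityRecord) => void;

class EntitySyncLayer {
  private subscribers: Subscriber[] = [];
  private store = new Map<string, EntityRecord>(); // stands in for data agnostic storage

  subscribe(fn: Subscriber): void {
    this.subscribers.push(fn);
  }

  // A legacy application (or its adapter) calls this when a record changes.
  upsert(entity: string, id: string, record: EntityRecord): void {
    this.store.set(`${entity}:${id}`, record);               // single system of record
    this.subscribers.forEach(fn => fn(entity, id, record));  // distribute the update
  }
}

// Usage: the remaining applications only talk to the sync layer, never to each other.
const sync = new EntitySyncLayer();
sync.subscribe((entity, id) => console.log(`analytics refreshed for ${entity} ${id}`));
sync.upsert("Customer", "c-42", { name: "Acme Ltd", city: "Springfield" });
```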

“Swivel Ware”

It’s not uncommon for a user to have multiple distinct logon credentials to access several applications, where they enter multiple copies or variations of the same data to complete a single task. In some cases they will manually copy/paste data between applications, or have to rekey slight variations of the same data. For example, when processing an address change, they may need to retype slightly different formats of the same address into several systems. Mistakes inevitably occur.

Most lines of business functionality are now exposed through the framework user experience. This significantly reduces the number of applications required to complete any business task. A user who logs into the framework can access any type of data required by their role in a transparent and secure manner. They access an immediate “single pane of glass” view of all related data so they can make better decisions and can initiate any required workflow directly from the user experience. Duplicate data entry is significantly reduced, and fewer mistakes are made.

Data Quality and Quantity

The quality of our data is compromised by increased data silo fragmentation, an increased need for “swivel ware” data entry, and incomplete data integration. At the same time, our Enterprise demands ever more data for advanced analytics, machine learning and AI services.

We often have multiple applications that store overlapping data sets. For example, customer data can be held in many different applications. Being able to relate the same customer in multiple applications can be a challenge. Ensuring that they contain the “same” data may be practically impossible. This calls into question the aggregate quality of customer records – which is the source of truth?

Our framework data agnostic storage can scale horizontally and is able to maintain and secure access to as much data as the Enterprise needs.

The number of data silos has been significantly reduced and there is now a single source of truth for any record maintained by the framework. The latest version of the record is immediately available to any workflow, analytics, machine learning, AI, or user. Because all services access a common record store, records can be locked as required during updates to ensure that only one update at a time is applied across all of the virtualized functionality.
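
One common way to enforce “only one update at a time” against a shared record store is an optimistic version check on write; the sketch below illustrates the idea. The store and its API are assumptions for illustration, not part of any specific framework.

```typescript
// Sketch of one way a shared record store could enforce single-update semantics:
// an optimistic version check on write.

interface VersionedRecord {
  version: number;
  data: Record<string, unknown>;
}

class RecordStore {
  private records = new Map<string, VersionedRecord>();

  get(id: string): VersionedRecord | undefined {
    return this.records.get(id);
  }

  // The write succeeds only if the caller saw the latest version;
  // a concurrent writer that read the same version will be rejected.
  update(id: string, expectedVersion: number, data: Record<string, unknown>): boolean {
    const current = this.records.get(id) ?? { version: 0, data: {} };
    if (current.version !== expectedVersion) return false; // someone else updated first
    this.records.set(id, { version: expectedVersion + 1, data });
    return true;
  }
}
```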

Staff Mobility Issues

As technologies proliferate within the Enterprise, the size of the team needed to support them increases. Most IT staff have a specific area of expertise that they wish to employ or develop. This leads to situations where we may have spare capacity in some skillsets but can’t easily redeploy staff to applications or projects that are under-resourced in other skillsets. This creates staffing issues where some people are overworked whilst they see under-utilized teammates on other projects or applications. Bringing in temporary staff is typically more expensive and has both ramp-up costs and loss of knowledge as they leave.

The main role for developers would be the creation or customization of workflow automation, analytics, machine learning, or AI. All developers would be using the same technology as part of the virtualization framework. Staff mobility is easy because changes to functionality are likely to be much smaller in scope than the large application projects we see today. We are likely to see lots of little tasks rather than big projects in this space, which will make it easier and faster to build and deploy custom functionality – improving business agility.

Third Party Support

Timely access to third party support services can be expensive. Even when we get a support engineer on the call, it takes them time to ramp up on our problem and offer a solution. If it’s a custom build, it is unlikely that the support engineer was involved in the customization, and they will need to learn what was customized before they can effectively solve the issue.

Ultimately, the primary responsibility of the third-party support engineer is to protect their business and not ours. This can lead to issues being deflected to applications or services provided by other vendors.

Far fewer third parties are involved, and they mostly support the remaining legacy applications. The core framework is very lightweight compared with many traditional applications and is largely deployed and supported within the Enterprise’s own scope of control. It may be hosted on-premises or in the cloud.

Fragmented Security

Multiple applications also come with the added vulnerability of multiple security approaches. This can create gaps where data can be exfiltrated from the Enterprise. This complicates adoption of Data Loss Prevention policies and makes it harder to apply consistent security to all data. Legacy applications may be less secure and harder to patch/update.

All the business-critical data has been moved into the framework data agnostic storage service. The framework data is protected behind the latest layered security defense, essentially creating a virtual “air-gap” from the business users. All access to data is tightly controlled, allowing enforcement of Data Loss Prevention policies and a consistent security posture across all data.

Business Agility

For many Enterprises, “Business Agility” is an oxymoron. It may take weeks, or months, to get a simple change to a form (such as adding a new field) approved, tested, and deployed. Changing functionality to add a new product or customer type may take many months, or even years. Even minor changes to large applications require careful consideration and extensive regression testing. There is significant business risk in changing the application itself, as well as potential knock-on effects to other integrated applications.

In general terms, the larger the application, the longer the backlog list of things to be done in the next release. Multiply this by a large number of interconnected applications and the IT support for innovation slowly stagnates, losing competitive advantage.

Most of our business functionality is now expressed as a configuration dataset. Simple changes like adding new fields to forms can be safely completed in minutes and deployed from Dev to QA to Production in hours, if necessary, without bringing down the framework services.
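
As an illustration of how small such a change could be, the sketch below treats “adding a field” as a pure data change to the configuration dataset that can then be promoted from Dev to QA to Production. The configuration shape and helper function are assumptions.

```typescript
// Sketch of a "simple change": adding a field to a form is a data change to the
// configuration dataset, not a code change.

interface FieldConfig { name: string; label: string; type: string; required?: boolean }
interface PageConfig { entity: string; fields: FieldConfig[] }

// Adding a field returns a new configuration that can be promoted
// from Dev to QA to Production like any other piece of data.
function addField(config: PageConfig, field: FieldConfig): PageConfig {
  return { ...config, fields: [...config.fields, field] };
}

const devConfig: PageConfig = {
  entity: "Customer",
  fields: [{ name: "name", label: "Name", type: "text", required: true }],
};

// The Business Analyst adds a "Preferred Contact Method" field in minutes...
const qaConfig = addField(devConfig, {
  name: "preferredContact",
  label: "Preferred Contact Method",
  type: "text",
});
// ...and the same dataset is deployed to Production without redeploying any services.
console.log(JSON.stringify(qaConfig, null, 2));
```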

Configuring a new user experience for a new product or customer type may take days. Adding new workflow automation, analytics, machine learning and AI may take weeks to develop, test and deploy.

The framework can deliver new functionality faster with significantly reduced business risk. This enables the business to develop and maintain a competitive advantage by delivering new value faster.

Consistent User Experience Across Platforms

The applications used by our back-end staff are typically different from those used by our customers. For example, we have an old VB6 custom built CRM system for customer service in one line of business. We provide a web-based portal for our customers so that they can see some of their customer data, orders, and invoices. The user experience is very different for each user type, and we have to train our back-end staff to understand what the customer is seeing on the website. To make it more fun, the customer’s website is pulling data from three different back-end applications. The back-end user has to be familiar with all of them and be looking at the same customer record in each. There may also be lag between an update being made on one device and then being visible on another.

Since all page builders connect to the same framework data, changes on one device are immediately available to all other devices. The customer and back-end staff are always looking at the same data using a common user experience. This makes it much easier for our staff to support our customers and provide a better experience.

Operations Training

Our applications each target a single device type (web, desktop, tablet, or phone). There is no consistency in how these applications perform basic functions through their user experience. Navigation, search, filtering, saving and auto-saving, etc. are implemented differently. Just training staff on the basic capabilities of the half dozen applications they need to complete their tasks can take days or weeks.

On top of that, we also need to train our users on where the various applications overlap. Where is the customer data stored in different applications? How do you change the address? Where do you place this type of order? Bringing new users up to speed can take months of training and initial supervision as they enter a new role. Rolling out changes in an application then means that we need to retrain all users of that application. Minor changes are not a big problem, but rolling out a new version of the application where the menus have changed, or where functionality has been added, modified or removed, is always challenging.

Virtualizing functionality means that we now only have to train users on one way of interacting with applications across all devices. All the data they need is at their fingertips and is available with simple navigation cues. There is only one customer, one place to change the address, one place to add an order (no matter what type). Most tasks are now automated using wizards to guide the user through the process.

Infrastructure Footprint and Costs

Although most of our 700 servers are virtualized, we are still paying license and management costs for each. We can’t afford a hot secondary data center for disaster recovery for all applications; 40 applications have been identified as critical and are pre-installed on 50 virtual servers in the secondary data center.

After virtualizing functionality of 90 applications, we have reduced our server footprint significantly. We still have 35 servers for our remaining 10 applications.

We have 40 servers that implement our data agnostic storage service. We have been able to consolidate a lot of the CPU/RAM/Storage as we have far fewer databases and servers to manage. There is less data to store and significantly fewer data migrations.

Since we are using dynamic page builder technology, we can significantly scale back our server requirements for rendering user experience which will now be done on the client. We just need to be able to serve the client with the page builder and provide a set of open APIs that they can access to get at the required data. This brings our user experience server footprint down to 30 servers.

All of our workflow functionality has been pushed into microservices or assimilated into the data agnostic storage service itself. We now only need to support 40 servers.

The 40 + 30 + 40 = 110 servers used to host our virtualized functionality are evenly split between our primary and secondary data centers. They can be utilized together to support unexpected peak loads if necessary and offer a seamless fail-over and fail-back in the event of disaster.

Licensing Costs

We have 100 applications with 350 distinct databases and associated licenses. We are also paying license costs on 70 applications, some of which are managed services billed by user.

We are down to 10 applications with 25 databases and associated licenses. We are now only paying license costs on 5 of the 10 remaining applications and only one of those is a managed service.

The 90 applications whose functionality we virtualized are now running on 40 database servers which need to be licensed. We are maintaining our own dynamic page builders and custom workflow, so we have no license costs for that.

Role of Developers (Technical Staff)

We have 70 developers supporting our applications, but the level of support for each is patchy. Some applications have 7 developers, most have 1 or 2, and about 30% have no internal support at all.

Each application uses its own technology stack and requires a lot of experience to understand its implementation and configuration. Onboarding new developers is difficult. Most of the developers’ time is spent making minor changes to the user experience, creating reports, or extracting datasets. The datasets are used to drive data migrations or reporting/analytics engines. A lot of developer time is also spent dealing with operational support issues – fixing data when a user has made a mistake in the application or correcting failed data migrations.

About 30 of our developers are heavily over-utilized, doing 10-15 hours of overtime per week. 20 are breaking even and the remaining 20 are significantly under-utilized while waiting for an incident to occur on their applications. Scheduling time for developing new functionality has always been an issue, and most projects overrun due to the unexpected complexity of tasks and the limited availability of resources, who regularly have to deal with operational emergencies.

Our remaining 10 applications have a total of 12 developers assigned. 4 of these applications have no support at all.

The 90 applications whose functionality was virtualized are now supported by a team of 15.

  • 5 of these are tasked with support of the data agnostic storage services and dynamic page builders. They mainly prepare datasets for reports or help customize elements of the user experience.
  • 10 developers are tasked with building custom workflow and analytics, and integrating machine learning and AI.

Our 15 developers are able to distribute the load evenly amongst themselves, since each has the appropriate technical skills to do any of the required tasks. The tasks themselves are generally fairly small and can be done within a working day, making it much easier to estimate and schedule workload. This enables the development team as a whole to deploy high-value deliverables faster, making the Enterprise more agile.

Role of Business Analysts (Non-technical Staff)

We have 50 Business Analysts who work with Enterprise staff to determine the requirements for new applications or changes to existing applications. At present they spend a lot of their time creating wireframes in Visio and documenting requirements in meetings. They then have to engage with developers to convert their notes into prototypes, which can take weeks. The prototypes are frequently lacking in some aspects of functionality and need to be iterated a couple of times before being brought back to the users. When the users can interact with the prototype, they frequently have additional changes that also need to be incorporated.

If changes are required in the implementation, these can easily have bigger knock-on effects in the application as a whole that will require extensive rework to other application functionality.

Changes made in one application also frequently cascade into other applications. Most of the time these are caught during initial implementation because of the institutional experience of the developers with their applications. Occasionally something will slip through and cause down-stream issues in data migrations and other applications.

As the work comes to a conclusion, the Business Analysts will draw up a change management plan with the Project Manager which will describe all the training required by staff to take advantage of the changed functionality across all affected applications.

With our virtualized functionality we have significantly reduced the probability of unintended consequences when making a change, as there are fewer data migrations and down-stream applications.

Business Analysts – or any non-technical staff member – can now configure most of the desired user experience in real-time without needing the help of a developer. Where they identify a place in the user experience that needs developer customization, they can configure the context and tag it with the requirements. The developer then doesn’t have to infer when or how the customization will be used; they just need the customization requirements. This makes it much more likely that the developer can implement the customization right the first time. If there is an issue with the implementation, it is isolated and easily fixed.
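
The sketch below illustrates one way a customization point might be captured in configuration: the Business Analyst records the context and the requirements, and the developer registers an implementation against a stable identifier. All names and shapes here are illustrative assumptions.

```typescript
// Sketch of how a Business Analyst might tag a customization point in the
// configuration for a developer to implement later.

interface CustomizationRequest {
  hookId: string;        // stable identifier the developer implements against
  requirements: string;  // plain-language requirements captured by the BA
  status: "requested" | "implemented";
}

interface ActionConfig {
  label: string;
  customization?: CustomizationRequest;
}

// The BA configures the context and the requirements in place:
const calculateTax: ActionConfig = {
  label: "Calculate Sales Tax",
  customization: {
    hookId: "calculate-sales-tax",
    requirements: "Apply the state tax rate for the shipping address; round to 2 decimals.",
    status: "requested",
  },
};

// The developer only needs to register an implementation for the hookId;
// where and when it runs is already defined by the configuration.
const implementations = new Map<string, (input: Record<string, unknown>) => Record<string, unknown>>();
implementations.set("calculate-sales-tax", order => ({
  ...order,
  salesTax: Number(order.subtotal ?? 0) * 0.07, // placeholder rate for illustration
}));
```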

There is now only one user experience, so it is easier to train Enterprise users on any changes to functionality. If a change to a process is being made, it is frequently encapsulated in a wizard that guides the user through the task.

Role of Testers

We have 60 testers in our Enterprise who are responsible for ensuring that our applications, data migrations, workflows, reports etc. all work as expected. Each of the testers is an expert in a set of applications and has built up an extensive knowledge of how the application is supposed to work. They spend most of their time creating test cases and testing the application user experience as well as regression testing all applications that may be impacted by any change. Final user acceptance testing of the applications is performed by a combination of the testers and Enterprise subject matter experts (SMEs) for each application.

With most of our functionality virtualized and the user experience being directly configured by our Business Analysts and our Enterprise SMEs, the testers are no longer required to conduct extensive testing of the user experience. The testers are now free to focus their attention on higher value changes to workflow, reporting, analytics, machine learning and AI.

Role of Project Managers

We have 20 Project Managers working through our Project Management Office. They coordinate changes required by the Enterprise with the various DevOps resources required to implement them. They frequently have issues when trying to build business cases and establish the timelines, costs and risks. Resource availability of developers is a key challenge as there are constant operational emergencies that need an immediate response. There are issues with fuzzy business requirements as Business Analysts and Enterprise SMEs are constantly changing their minds about which functionality is the most important or even what is actually required. Most new application projects are going to utilize new technology so there are challenges around getting the application architecture defined and approved. All of this adds cost, time and risk to the projects.

Adding new virtualized functionality is much easier. The architecture is already defined, which reduces risk, and the Business Analysts and Enterprise SMEs can configure exactly the user experience they require, mostly without the need for a developer. The customization tasks identified during user experience configuration are generally small and can be assigned to the next free developer, making scheduling more predictable.

Overall governance is improved, and progress is easier to see. Functionality is delivered incrementally and faster. Changes in priorities can be easily accommodated.

Role of Operations

We have 15 operations team members responsible for maintaining and running our applications, servers and cloud services. The infrastructure footprint is large and complex – there are multiple changes being deployed on a weekly basis. Just keeping track of the entire infrastructure configuration and data flows is problematic. Monitoring/alerting, performance management, and incident response are all difficult given the number of different technologies, applications, and services they need to support.

Each operations team member is capable of maintaining only a subset of the infrastructure, and as a group they are reluctant to commit to taking on new applications that rely on unfamiliar technology. A new technology means extra training and then trying to figure out how to incorporate it into overall operations. How will it impact HA or DR? How will it scale? How will we monitor it? What security vulnerabilities does it create?

Moving to virtualized functionality has drastically reduced the number of technologies, applications and services. There is now a well-defined HA/DR architecture. Security boundaries are clearly defined and easily enforced. Most of the new workflow, analytics, machine learning, and AI functionality is being deployed on known technology. Having newer and significantly fewer technologies in the infrastructure makes it easier to monitor. Having a virtualized framework makes it easier to balance load across available resources and allows for predictable scaling as demand grows.

Disaster Recovery

Maintaining a consistent set of point in time backups is challenging and expensive. We can’t afford a hot secondary site for all of our applications and data. We have had to prioritize which business-critical applications will fail over quickly and are likely to see some data loss. We anticipate that business operations will be degraded and many of the lower priority business functions will have to revert to a manual (paper-based) process.

Fail-back from a disaster will be a project in its own right and may take several days to complete. We anticipate some degraded operations during fail-back. Of course, we will then be faced with updating all the operational applications that reverted to manual processing.

For the applications we virtualized, all the data is in a data agnostic storage service (DASS) that is replicated in real-time to the secondary data center. Failover will be instantaneous with no data loss or inconsistency across the virtualized functionality. All virtualized functionality will be available to the Enterprise, which will significantly reduce the need for degraded or manual operations. Fail-back will be equally trouble-free when the primary data center is restored.

We have significantly reduced our RPO (Recovery Point Objective) and RTO (Recovery Time Objective) whilst reducing our costs and risks.

Security

Many of our applications use a standalone security model based on user and password. Making sure that each user has the correct credentials and application roles is a significant challenge. We know that some users still have access to applications/roles from previous positions in the Enterprise that they no longer need. Role granularity on some of our applications is very coarse grained which means that we have a lot of users with access to all of the records within the application. We have limited means of enforcing a Data Loss Prevention policy across the Enterprise.

Our data storage uses multiple database engines and versions and is scattered over hundreds of different virtual servers and cloud services. We have issues maintaining the latest patches across some of our applications. We have issues with older operating system versions on some servers. Most of our 70 developers and 15 operations staff have some level of administration access to the applications, services, and servers that they support.

We have tried to implement SIEM across the infrastructure with limited success. The solution as a whole is complex, with many applications utilizing legacy technology that has limited monitoring endpoints.

Our Enterprise users’ devices are being compromised by malware on a regular basis. So far, the attacks have been contained at the device, but a ransomware penetration of our service infrastructure seems like only a question of time.

By virtualizing the functionality of 90 applications we have been able to move the majority of our business-critical data into a framework data agnostic storage service (DASS). This single service is essentially “air-gapped”. There is no possible flow of credentials from any device, application or service across the firewall. The credential manager of the DASS is within the DASS itself. Only 3 very trusted users have administrator access to the DASS, and they can only access the DASS from a server within the DASS that requires MFA from the user and a JIT confirmation from another admin user. The DASS admin server has no email or messaging services. Any file transferred to the server is virus checked and the server is hardened to prevent installation of malware.

We are also deploying the latest SIEM technology to monitor all of our framework resources (especially the DASS). Any anomalies are automatically flagged for immediate attention and the originating device is immediately disconnected from the framework until the issue is resolved.

Whilst this is still not a perfect guarantee of security – we have significantly reduced the threat of ransomware to the business. We have split our DASS infrastructure evenly between our primary and secondary data centers and “air-gapped” both from each other. A ransomware attack on the primary automatically causes a fail-over to the secondary whilst we restore the primary.

By consolidating the security models of 90 applications we have also made it much simpler to administer the access and roles granted to each of the Enterprise users. We can now run a weekly report that compares the access/roles of each user against a template for their current position in the Enterprise and flag any discrepancies. By consolidating our business-critical data in the DASS we have also made it possible to implement a Data Loss Prevention policy on all outbound data flows.
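
The sketch below illustrates the kind of weekly access review described above: each user’s granted roles are compared against a template for their current position, and any excess access is flagged. The data shapes and role names are illustrative assumptions.

```typescript
// Sketch of the weekly access review: compare each user's granted roles
// against a template for their position and flag discrepancies.

interface UserAccess { userId: string; position: string; roles: string[] }

const positionTemplates: Record<string, string[]> = {
  "customer-service": ["customer.read", "customer.update", "order.read"],
  "finance-analyst": ["invoice.read", "report.run"],
};

function flagDiscrepancies(users: UserAccess[]): { userId: string; extraRoles: string[] }[] {
  return users
    .map(u => {
      const allowed = new Set(positionTemplates[u.position] ?? []);
      return { userId: u.userId, extraRoles: u.roles.filter(r => !allowed.has(r)) };
    })
    .filter(result => result.extraRoles.length > 0); // only report users with excess access
}

// Usage: a role left over from a previous position shows up in the report.
console.log(flagDiscrepancies([
  { userId: "u-101", position: "customer-service", roles: ["customer.read", "customer.update", "order.read", "invoice.read"] },
]));
```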

Conclusion

We know that we will never achieve all the above suggested benefits.

Clearly, virtualizing functionality will consolidate a lot of the infrastructure and reduce the complexity of the technology being deployed. Being able to configure user experience functionality instead of writing application code means that IT staff can deliver value faster and increase Enterprise agility.

Changes in the various DevOps team roles will have a significant impact on headcount. Whilst staff numbers could be reduced, it is likely that some staff will be retrained and assigned to deliver high-value components like analytics, machine learning and AI.

Collapsing a lot of applications into the virtualization framework will likely reduce server footprint and licensing costs. Bringing all the business-critical data together makes it much easier to secure and offers much better threat detection and isolation. Being able to deploy seamlessly across two data centers also minimizes the risks of ransomware and future potential security threats.

Taken as a whole, it would seem that Functionality Virtualization is likely to offer significant benefits to most Enterprises.

The next question to answer is: “How to Virtualize Functionality and what will it cost?”