Securing and maximizing the ROI of your business-critical data


Why are we discussing security and the ROI of data in the same blog post? They are more intimately related than you might think!

Why is your business critical data vulnerable?

Most businesses run anywhere from a handful to hundreds of software applications, each supporting some aspect of their business operations. Each of these applications typically stores its records and files in a number of data silos.

Completing a single business task may involve updating records in multiple applications and manually processing actions along the way. Users are given access to multiple applications, and it is not uncommon for them to be granted elevated access rights to data silos just to complete their tasks. Those access rights are not always updated correctly as users change roles or leave the business.

At the IT support level, there are likely to be multiple IT staff with administration access to the applications and their data stores. These applications are typically not homogeneous: they differ in age, technology, support, and implementation, and some may be hosted on-premises while others run in cloud IaaS, PaaS, or SaaS.

This application complexity creates vulnerabilities at the architecture level that can be exploited by ransomware and other security threats.

Why did we create so many data silos?

The traditional approach to building a software application is to define its scope of functionality and then hard-code both the user experience and the Create, Read, Update, and Delete (CRUD) access to its underlying data. As the number of applications grows, we see a proliferation of technologies and implementations that makes it hard to reliably secure the data or enforce common security policies. It has also been challenging to quickly build an optimal user experience tailored to each business task. This has led to the emergence of a wide range of “low-code” and “no-code” SaaS applications geared to niche business purposes, which has further increased the fragmentation of business data, IT costs, security vulnerabilities, and workflow automation complexity.

Data integration requirements then add another layer of vulnerability, as data needs to be transferred between these application data silos.

What state would your critical business data be in after a disaster?

Most organizations have tens or hundreds of software applications that maintain fragments of their business-critical data in data silos. In our work with many enterprises we have seen that some of these data silos can be decades old (we have seen systems over 40 years old!).

These data silos don’t exist in splendid isolation – they need to share data to complete business processes. Sometimes this is done by users manually dual-keying the same data into multiple applications/data silos. As automation is introduced, some of these manual updates are replaced by Extract, Transform and Load (ETL) batch processes that run on a schedule. We also see real-time point-to-point integrations where data is shared between two or more applications via APIs, messaging, events, or service bus technologies.
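To make that concrete, here is a minimal sketch of such a scheduled ETL batch in Python, using SQLite and invented table names purely for illustration; a real job would also track a change watermark and handle errors:

```python
import sqlite3

# Two hypothetical silos: a CRM database and a billing database, each
# holding its own copy of customer records. All names are illustrative.
src = sqlite3.connect("crm.db")
dst = sqlite3.connect("billing.db")
src.execute("CREATE TABLE IF NOT EXISTS customers "
            "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
dst.execute("CREATE TABLE IF NOT EXISTS billing_customers "
            "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def etl_customers():
    """One scheduled batch run: Extract, Transform, Load."""
    # Extract: pull the source rows (a real job would only pull rows
    # changed since the last run).
    rows = src.execute("SELECT id, name, email FROM customers").fetchall()

    # Transform: normalize email casing to the target silo's convention.
    rows = [(i, n, e.lower() if e else e) for (i, n, e) in rows]

    # Load: upsert into the target silo (requires SQLite 3.24+).
    dst.executemany(
        "INSERT INTO billing_customers (id, name, email) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, "
        "email = excluded.email",
        rows,
    )
    dst.commit()

etl_customers()
```

Every job like this is one more flow that has to be secured, monitored, and re-synchronized when something goes wrong.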

As you can appreciate, the data layer of many enterprises is very complex, and issues of data synchronization are critical to the quality of the data and the efficient operation of the business.

So what happens when disaster strikes and some, or all, of the data silos are compromised? For example, by a ransomware attack?

Fun fact: disasters don’t always strike everywhere at once. This is especially true if your data silos are scattered over multiple data centers or SaaS providers, or are synchronizing with third-party applications. Even if you could recover your data silo from a backup, it is likely to be inconsistent with other data silos, which may hinder your normal business operations for an extended period.

Disaster Recovery (DR): Fail-Over and Fail-Back

After implementing DR solutions for many organizations, I’m struck by the fact that it isn’t the volume of data that determines the final cost. Most frequently it’s the number and complexity of the data silos and flows.

If the primary data center has hundreds of servers, applications, data silos, and flows, then your secondary site isn’t going to be small and inexpensive. If you have aggressive Recovery Time Objective and Recovery Point Objective (RTO/RPO) targets, then DR for a complex primary site is going to be expensive and challenging to implement. In many cases, not all of your applications are going to be in the highest priority group, and you may need to significantly compromise business operations in the failed state.

Even when we have a good solution for a phased fail-over, the fail-back process can often be a separate project in its own right as we attempt to move and re-synchronize data silos and flows back to the primary site.

How can we reduce our vulnerability and maximize ROI?

A good place to start is to use a requirements virtualization platform to consolidate the business functionality and data silos distributed across multiple software applications into a single platform, reducing duplication of resources and data.

The vast majority of the code in our software applications simply moves data out of a database and renders it on a screen for a user. All of this can be replaced by a couple of services that use requirements configuration data to virtualize the application user experience. Our trellispark platform is built around the following architectural components:

  • Data is no longer encapsulated by individual applications; it is shared through a Data Agnostic Storage Service (DASS). The service is horizontally scalable and can safely store all of the data required by the enterprise. The DASS is a hybrid document/SQL database that uses the Command Query Responsibility Segregation (CQRS) pattern to store all data in a document format and query it using relational SQL. This approach means we can create horizontally scalable, secure storage for any type of data that can still be retrieved efficiently by SQL queries. Any application or service can retrieve the latest version of the data directly from the DASS without going through some other intermediary application or service. The technology required to build an efficient DASS for your organization has been around for over a decade; we favor building on top of MS SQL Server, but you could implement it over any mainstream database. (A minimal sketch of this pattern follows the list.)
  • User experience is no longer hard-coded by an application; it is rendered in real time by a dynamic page builder on any device, for any user.
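Here is the promised sketch of the document/SQL hybrid pattern, written in Python over SQLite’s JSON1 functions rather than MS SQL Server; the schema and names are illustrative assumptions, not trellispark’s actual design:

```python
import json
import sqlite3

# A minimal sketch of the document/SQL hybrid idea behind a DASS, using
# SQLite's JSON1 functions in place of MS SQL Server. The schema and
# names are illustrative assumptions only.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE records (
        record_id   TEXT,
        version     INTEGER,
        record_type TEXT,
        document    TEXT,            -- the full record as a JSON document
        PRIMARY KEY (record_id, version)
    )
""")

def write_record(record_id, record_type, version, payload):
    """Command side: append a new immutable version of the document."""
    db.execute(
        "INSERT INTO records (record_id, version, record_type, document) "
        "VALUES (?, ?, ?, ?)",
        (record_id, version, record_type, json.dumps(payload)),
    )

def latest_customers_in(region):
    """Query side: relational SQL over fields extracted from the documents."""
    return db.execute("""
        SELECT r.record_id, json_extract(r.document, '$.name') AS name
        FROM records r
        WHERE r.record_type = 'customer'
          AND json_extract(r.document, '$.region') = ?
          AND r.version = (SELECT MAX(version) FROM records
                           WHERE record_id = r.record_id)
    """, (region,)).fetchall()

write_record("c1", "customer", 1, {"name": "Acme", "region": "EU"})
write_record("c1", "customer", 2, {"name": "Acme Ltd", "region": "EU"})
print(latest_customers_in("EU"))   # [('c1', 'Acme Ltd')]
```

The split is the CQRS idea from the first bullet: the command side only ever appends documents, the query side reads them with ordinary relational SQL, and “latest version” retrieval falls out of the versioned key. In the same hypothetical spirit, the second bullet amounts to rendering screens from configuration data instead of hard-coding them; a toy text renderer shows the shape of it:

```python
# Equally a sketch: rendering a screen from configuration data instead of
# hard-coded UI. A real page builder would emit HTML or native widgets;
# the form definition would live in the DASS as just another document.
form_config = {
    "title": "Customer",
    "fields": [
        {"name": "name",   "label": "Name"},
        {"name": "region", "label": "Region"},
    ],
}

def render_form(config, record):
    lines = [f"== {config['title']} =="]
    for field in config["fields"]:
        lines.append(f"{field['label']}: [{record.get(field['name'], '')}]")
    return "\n".join(lines)

print(render_form(form_config, {"name": "Acme Ltd", "region": "EU"}))
```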

By moving critical business data into the platform’s data agnostic storage service, we can centralize data security and significantly reduce the number of people with direct access to any of the underlying data sources. This sharply reduces the likelihood of a successful ransomware attack or data breach.

The data agnostic storage service can be hardened with a small, tightly controlled access perimeter and a handful of administrators authenticated within the service itself. We can create separate DEV, QA, and Production environments and tightly control the migration of scripts and other files between them. We can implement the highest possible level of security, focused in one place.
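Continuing the hypothetical Python sketches above, “authenticated within the service itself” can look roughly like this; the role names and policy table are assumptions for illustration:

```python
from dataclasses import dataclass

# Hypothetical illustration of the "small, tightly controlled access
# perimeter": every read or write goes through the service, which checks
# the caller's roles itself. Roles and policy are invented for this sketch.
@dataclass(frozen=True)
class Caller:
    user_id: str
    roles: frozenset

class AccessDenied(Exception):
    pass

class DataAgnosticStorageService:
    # Which roles may perform which operation on which record type.
    POLICY = {
        ("customer", "read"):  {"staff", "admin"},
        ("customer", "write"): {"admin"},
    }

    def _authorize(self, caller, record_type, operation):
        allowed = self.POLICY.get((record_type, operation), set())
        if caller.roles.isdisjoint(allowed):
            raise AccessDenied(f"{caller.user_id}: {operation} on {record_type}")

    def read(self, caller, record_type, record_id):
        self._authorize(caller, record_type, "read")
        ...  # fetch the latest document version, as sketched above

    def write(self, caller, record_type, record_id, payload):
        self._authorize(caller, record_type, "write")
        ...  # append a new document version
```

Because the policy lives in one service rather than being scattered across every application, revoking access when a user changes roles or leaves the business becomes a single update instead of dozens.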

If you would like to discuss the content and ideas raised in this blog, please reach out to me at Tony@GreatIdeaz.com. I’d love to talk to you!

Like to learn more?

If you would like to schedule a virtual meeting or learn more about trellispark, please contact us and provide a brief description of your interest. Or simply drop us an email at info@greatideaz.com.