One of the first things we do when we initiate a new software application project is attempt to gather a set of software requirements. These are often broken down into Functional and Non-Functional Requirements. The Functional Requirements are then generally further sub-divided into User Experience (UX), Data (CRUD), and Workflow.
The success of our software application project ultimately depends on the quality of our software requirements. We need to validate that our software requirements are correct and complete as close to the start of the project as possible.
You can try to validate your business requirements by creating requirements documents and getting sign-off before you start development. This is problematic for many reasons including: contradictory/conflicting requirements, undocumented requirements, silent stakeholders, changing UI preferences, stakeholders with different opinions/priorities, etc.
When you pass your “approved” requirements document over to your developers, there can be miscommunication, invalid assumptions, missing/incomplete requirements, etc. This leads to multiple iterations of requirements gathering and rework.
In parallel with our developers, our testers are also interpreting the requirements to build test cases that will validate that the requirements have been met by the application. This creates still more opportunities for miscommunication, invalid assumptions, and missing/incomplete requirements.
We could try building a prototype to validate our requirements – but how do we guarantee that it gets converted into a production application? Given all of the time that has elapsed – how do we even know that the business requirements haven’t changed again?
Obviously, we need to be agile – but whilst that works well for small projects, for large and complex applications, this can lead to a lot of technical debt and refactoring. We have the illusion of progress in the short term, that we will pay for over the middle/long term.
Interestingly, at the end of the process, it comes back to the stakeholders to do final user acceptance testing and validate that the application meets its requirements.
An alternative approach to address functional requirements is to go directly to deployed functionality without generating a lot of requirements documentation, code, and test cases. Essentially, cut out a lot of the requirements documentation, software development, and test cases described above. This is replaced with an immersive process, where the desired functionality is configured and validated immediately by using a requirements virtualization platform.
We can use a requirements virtualization platform like trellispark to configure and validate most of the functionality with stakeholders in requirements meetings. We can configure custom user experience, data models, state transitions, user roles, access rights, and identify workflow interactions. The resulting configuration data is used to dynamically render the application and can be deployed directly into production.
Creating Form Definitions using trellispark UX Creator
User Experience Requirements
User Interface (UI) design tools can generate really great looking UI mock-ups, but they are limited because they are only passive representations of an active user experience (UX).
As business analysts, we can spend a lot of time creating an attractive UI without fully understanding the requirements of the UX. This is because when a business user is looking at a UI mock-up, we are relying on their imagination to fill in the blanks as to what the UX will actually do. The requirements gaps often experienced when using mock-ups include: miscommunication / misaligned expectations, incorrect assumptions, conflicting requirements, and gaps between creative and engineering. At the end of this process, we still haven’t validated that our requirements are either correct or complete.
The best way to be sure that our UX requirements are complete and correct is to build an active representation of the functionality first – we can always polish the UI later.
We have found the fastest way to get to an active UX that can be shared with business users is to gather the UX requirements as configuration data. This configuration data can then be used to power a requirements virtualization platform.
When gathering UX requirements for forms-based functionality, we begin with a simple hierarchy: Forms contain one or more Tabs; and Tabs contain one or more Fields.
Most of our UX requirements now resolve to:
- What is the user allowed to see? Which tabs and fields are visible?
- What is the user allowed to do? Which fields are modifiable, or clickable, to invoke an action on the record?
To answer these questions, we need to add a simple permission model to our Forms, Tabs, and Fields, that takes into account: the roles of the current user; the state of the current record; the contents of the current record; the platform upon which the record will be rendered.
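As a sketch only (the field names and permission keys here are hypothetical, not the actual trellispark schema), the Forms/Tabs/Fields hierarchy and its permission model might be captured as configuration data like this:

```python
# Hypothetical form definition expressed as configuration data.
# The structure mirrors the hierarchy: a Form contains Tabs,
# and Tabs contain Fields, each with simple permission rules.
order_form = {
    "name": "Order",
    "tabs": [
        {
            "name": "Details",
            "visible_to_roles": ["Customer", "Sales"],
            "fields": [
                {
                    "name": "OrderTotal",
                    "visible_to_roles": ["Customer", "Sales"],
                    "editable_to_roles": ["Sales"],
                    "editable_in_states": ["Draft"],
                },
            ],
        },
    ],
}

def field_is_editable(field, user_roles, record_state):
    """A field is modifiable only when the current user holds an
    allowed role AND the current record is in an allowed state."""
    return (
        any(role in field["editable_to_roles"] for role in user_roles)
        and record_state in field["editable_in_states"]
    )
```

A rendering engine can walk this structure to decide, per user and per record, exactly which tabs and fields to show and which to enable.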
Using these simple ideas we can quickly gather requirements for any custom UX, and validate them with our business users as they interact with our active UX. This process can be performed in requirements workshops, enabling us to rapidly come to consensus on what the final solution will actually do.
Data Architecture (CRUD) Requirements
Over the course of many projects, we have found that the critical success factors for data architecture include: building flexible and extensible data schemas; focusing on domain-based architecture aligned with business needs; eliminating data silos across organizations; validating all data at point of entry; striving for consistency; documenting everything; and avoiding data duplication.
Our experience is that gathering and validating the user experience (UX) requirements is a good place to start our deeper dive into data architecture (Create, Read, Update, and Delete (CRUD)) requirements. After all, those UX fields need to exist somewhere in our data model, and to be useful our data needs to be surfaced onto a user experience!
We expanded our initial UX requirements data modelling because:
- Some of the fields we need to maintain in our record data might not appear on a form-based user experience.
- We need to understand the hierarchic and referential nature of our data architecture. For example: Customers are parent records of Orders and Invoices; Invoices reference Orders, etc.
- Many of our record types represent real-world objects that have a distinct life-cycle. Records of such types need to keep track of the current status of the object. We should be able to define all possible States, Events and Allowed State Transitions of these record types.
- We need to define the validation rules of each of our data fields so that we can automate the import of data into our solutions.
- We need to define the security controls applied to our records, and how long we will retain version history and audit trails.
- We want to identify opportunities to optimize the storage and indexing of data to improve the performance of common query operations.
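The expansion described above can be illustrated as an extended record-type configuration. This is a hypothetical sketch (the shape and names are illustrative, not the trellispark format) covering hierarchy, references, life-cycle states, allowed transitions, and point-of-entry validation rules:

```python
# Hypothetical record-type configuration extending the UX data
# with CRUD requirements. All names are illustrative.
order_type = {
    "name": "Order",
    "parent": "Customer",           # hierarchic: Customers are parents of Orders
    "references": ["Invoice"],      # referential: Invoices reference Orders
    "states": ["Draft", "Submitted", "Fulfilled", "Cancelled"],
    "events": {
        # event -> list of allowed (from_state, to_state) transitions
        "Submit": [("Draft", "Submitted")],
        "Fulfil": [("Submitted", "Fulfilled")],
        "Cancel": [("Draft", "Cancelled"), ("Submitted", "Cancelled")],
    },
    "fields": {
        "Quantity": {"type": "int", "min": 1},
    },
}

def validate_field(spec, value):
    """Validate a value at point of entry against the field's rules,
    so that automated data import can reuse the same checks."""
    if spec["type"] == "int":
        return isinstance(value, int) and value >= spec.get("min", 0)
    return True
```

Because the validation rules live in configuration rather than code, the same rules can gate both interactive data entry and bulk import.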
We expanded our UX requirements configuration dataset to incorporate a richer set of data (CRUD) requirements. We then interpret this configuration data using a requirements virtualization platform to produce a working version of our business application.
Our business users interact with our virtualized functionality during requirements workshops. This enables us to rapidly come to consensus on what the final solution will actually do – essentially we do UAT as we configure the requirements!
Using our requirements virtualization platform means that we can focus on implementing best practices for securing data whilst creating a highly available and scalable data architecture. This also enables us to eliminate data silos and data duplication.
We use automated testing tools in our CI/CD pipeline to ensure the performance and quality of the requirements virtualization platform as we deploy updates.
When we consider how to improve the productivity of business analysts through better tooling, the trend seems to be a merging of their role with that of developers. By using a requirements virtualization platform to render the desired business functionality, we can eliminate over 90% of the code that we would normally require our developers to write. As AI is adopted to create workflow code, we see even more opportunity for a business analyst to generate at least some of the code traditionally produced by developers. Already we see posts suggesting that the next important language for a developer to learn is English (that is, the ability to use natural language to ask a generative AI to produce the desired code), and that to keep adding value developers will need more business domain knowledge.
Over the course of many projects, we have found that after gathering and validating the user experience (UX) and data (CRUD) requirements we can begin to layer on the workflow requirements. The UX requirements identify the context when it should be possible for a user to invoke workflow. For example, when an “Order” is in the “Draft” state and the current user has the “Customer” role, then the “Submit” workflow is available. Of course, the granularity of enabling the user to initiate the action goes well beyond this simple example and could include the platform, the content of the Order, the standing of the Customer, etc.
By specifying in the UX requirements the precise conditions under which a user may initiate an action, we have already configured and validated most of the workflow action requirements that would normally have to be specified separately.
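The “Submit” example can be sketched as a simple availability predicate. This is a minimal illustration (the rule table and record shape are hypothetical): the action is offered only when the record is an Order in the Draft state and the current user holds the Customer role.

```python
# Hypothetical availability rules: (record type, action) -> conditions.
AVAILABILITY_RULES = {
    ("Order", "Submit"): {"state": "Draft", "role": "Customer"},
}

def action_available(action, record, user_roles):
    """Return True when the action may be initiated by the user
    in the current context (record type, record state, user role)."""
    rule = AVAILABILITY_RULES.get((record["type"], action))
    if rule is None:
        return False
    return record["state"] == rule["state"] and rule["role"] in user_roles
```

A fuller rule set could also test the platform, the contents of the Order, or the standing of the Customer, as noted above, without changing the shape of the check.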
What we are left with are the requirements for exactly what the workflow action is supposed to do. There are two distinct cases to consider:
- Events that cause a state transition on the selected record and its children.
- Commands that manipulate the record, or related records, in the data store.
Let’s start by considering the different events that cause record state transitions. Most of the requirements for the state transition can be gathered and validated as part of the Data (CRUD) requirements: what states are allowed; what events are available; whether the transition is gated; how a record changes state; how the change is cascaded down to child records; what commands may be invoked before or after the transition. All of this is just configuration data that can be rendered directly by our requirements virtualization platform and validated directly by users before we even need to involve technical resources to build workflow.
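A state transition driven purely by configuration, including the cascade down to child records, might be sketched as follows (a minimal illustration; a real platform would also handle gating conditions and the before/after commands mentioned above):

```python
def apply_event(record, event, transitions):
    """Apply an event to a record: if the transition is allowed from
    the record's current state, move to the new state and cascade the
    same event down to child records. `transitions` maps an event name
    to a list of allowed (from_state, to_state) pairs."""
    for from_state, to_state in transitions.get(event, []):
        if record["state"] == from_state:
            record["state"] = to_state
            for child in record.get("children", []):
                apply_event(child, event, transitions)
            return True
    return False  # the event is not allowed from this state
```

Because the allowed transitions are data, business users can review and correct them in a requirements workshop before any workflow code exists.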
Now let’s talk about the commands that will actually perform workflow actions. The initiation context is fully defined and tested so all we need to focus on is what are the requirements for the workflow action itself. For example: what does it mean to “Submit” an “Order”? We should capture a description of both the actual requirement and how we intend to test that the requirement has been successfully implemented. Collecting both types of requirement at the same time improves the likelihood that our requirements will be complete and correct.
In our requirements virtualization platform, all workflow commands are invoked through a common interface. This makes it easy to scaffold most of the code required to implement and test them. Adding the few tens of lines of code typically used to implement the required functionality is then an ideal task for either a junior developer, or a business analyst augmented with AI code generation. Since all of our workflow code is encapsulated in loosely-coupled services invoked from the requirements virtualization platform, we can easily add automated testing as part of the associated CI/CD pipeline.
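One common way to implement such an interface (this is an illustrative pattern, not the trellispark implementation) is a command registry: every workflow command is registered against a name and invoked through a single dispatch function, so scaffolding and testing stay uniform across commands.

```python
# Hypothetical common interface for workflow commands.
COMMANDS = {}

def command(name):
    """Decorator that registers a workflow command under a name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("Submit")
def submit_order(record):
    # The few tens of lines implementing "Submit" would go here;
    # this stub simply marks the record as submitted.
    record["state"] = "Submitted"
    return record

def invoke(name, record):
    """Invoke any registered workflow command through one interface."""
    return COMMANDS[name](record)
```

Because every command shares the same entry point, each one can be exercised in isolation by automated tests in the CI/CD pipeline.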
We have also found that compartmentalizing our workflow this way significantly reduces the effort for system, integration, and regression testing as requirements change.
After we have finished validating the UX and CRUD requirements with our business users, we can pull together all of the outstanding workflow requirements into a project for implementation. This allows us to make a clear assessment of the scope, budget, and timelines.
We are already seeing the emergence of AI technologies that can take a descriptive requirement and produce the software required to implement it. In the next few years, AI code generation may mean that our workflow requirements could also be virtualized into our requirements virtualization platform.
If you would like to discuss the content and ideas raised in this blog, please reach out to me at Tony@GreatIdeaz.com. I’d love to talk to you!
Fast Track Requirements Analysis with a Real App
Like to learn more?
If you would like to schedule a virtual meeting or learn more about trellispark, please contact us and provide a brief description of your interest. Or simply drop us an email at email@example.com.