Testing is frequently the most expensive part of developing your software application.

A single line of application code may require several lines of test code to fully exercise all possible logic paths. This is true no matter how that application code is created: handwritten or generated, it is still code that needs to be tested.

Early in my career I helped develop software for advanced weapons systems for the UK military. For critical systems where human life is directly at stake, the ratio of testing effort to writing application code can be 20:1 or higher! The bar is set that high because a single software error could result in the loss of lives, and there is no path back from that. So, if 20:1 is what we need to be sure we have covered all possible code paths, what about less sensitive applications?

For many business-critical systems we typically see a ratio of 5:1 or higher. Not every code path can be covered, but this is often considered an acceptable risk because the cost of a software error isn't literal life or death. At the bottom end, non-critical business systems generally have ratios around 1:1 or 2:1.

Even tests have to be designed and “tested”. They are, of course, “tested” against the application code and if there is a discrepancy then either the application or test code is adjusted until they agree.

This means that the cost of testing for even non-critical applications can be as much as half of the total application cost, rising substantially depending upon the criticality of the application.

This raises the question: "How can we safely reduce testing costs without compromising quality?"

Start by reducing the number of lines of code

Obvious but true – fewer lines of code means less testing and lower costs. Start with strategies that allow you to refactor and eliminate application code. Look for places where code can be replaced by configuration data. If you have similar code in multiple places, like sending an email, refactor it into its own generic service and pass in parameters or other configuration data instead. If that email code is formatting a message body content, then look into a formatting solution like passing XML and XSLT as parameters.
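As a sketch of the refactoring idea above, duplicated email-sending code can collapse into one generic, parameterised service. Everything here (the request shape, template names, field names) is an illustrative assumption, not from the original text:

```python
# Hypothetical sketch: a single generic notification service replaces
# email code duplicated across the application. Callers pass data;
# formatting lives in configuration (the template table), not in code.
from dataclasses import dataclass, field

@dataclass
class EmailRequest:
    to: str
    template_id: str            # which configured template to render
    params: dict = field(default_factory=dict)

# Message bodies as configuration data rather than per-caller code.
TEMPLATES = {
    "order_confirmed": "Hello {name}, your order {order_id} is confirmed.",
}

def send_email(request: EmailRequest) -> str:
    """Render the configured template; return the body so callers
    (and a single shared test suite) can inspect it."""
    body = TEMPLATES[request.template_id].format(**request.params)
    # mailer.send(request.to, body)  # real delivery omitted in this sketch
    return body
```

One set of tests now covers every caller's email behaviour, instead of testing the same formatting logic in each place it was copied.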

The most fruitful area for replacing code with configuration is the application user interface. Most user experience is about rendering multiple fields and actions on a two-dimensional grid and maintaining a database. The logic to do this is traditionally coded on separate "pages" and often includes rules that determine which fields are available and editable based on factors such as the role of the user or the state of the record. When you think about it, most of this logic is generic (as much as 95% of most applications' code) and can be replaced with a common service and configuration data.
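A minimal sketch of that idea, with an invented field table: visibility and editability become configuration records, and one generic function serves every "page":

```python
# Illustrative assumption: per-field UI behaviour expressed as data.
# The roles, states and field names are invented for this sketch.
FIELD_CONFIG = [
    {"field": "status",   "editable_by": {"manager"},          "visible_in": {"draft", "approved"}},
    {"field": "discount", "editable_by": {"manager", "sales"}, "visible_in": {"draft"}},
]

def render_fields(role: str, state: str) -> list:
    """Generic service: decide from configuration alone which fields
    to show, and whether each is editable, for this role and state."""
    return [
        {"field": f["field"], "editable": role in f["editable_by"]}
        for f in FIELD_CONFIG
        if state in f["visible_in"]
    ]
```

Adding a field or changing a role's rights becomes a data change, tested once through the generic service rather than on every page.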

Enforce Strong Standards and Templates

Given that we will always write code, it is important to adopt a set of appropriate coding standards and templates for implementation.

This isn’t just about how we layout our code, or how we name our variables. Adopting standards and templates around error detection, error handling, calling APIs, creating APIs, accessing databases, etc. are important for improving the consistency and quality of our code. Using standard code templates also means that we can use standard test templates to validate them.

Enforcement is done by “static testing” or “code reviews” that identify places where standards have not been met which may compromise code quality or future maintainability.

Enforcement of standards significantly improves code quality, mitigates the risk of errors, and reduces the need to test all code paths.

Use Appropriate Code Generation Tools

Given that we have standard templates for some sections of our application code, we can write code generators for both application and test code.

A code generator will need some form of configuration data that can either be records in a database, the structure of an existing database, the contents of a Model class, etc. The output of the code generator is a set of text files created by applying a set of rules to the configuration data to produce the standard template. The files are then placed into context within a project and can be compiled or otherwise used as required.
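The generator-as-template-plus-rules pattern can be sketched in a few lines. The template and configuration here are invented for illustration; a real generator would read its configuration from a database or Model class as described above:

```python
# Minimal sketch of a template-driven code generator:
# configuration records in, standard-template source text out.
CLASS_TEMPLATE = """class {name}:
    def __init__(self{args}):
{assigns}
"""

def generate_class(name: str, fields: list) -> str:
    """Apply the standard template to configuration data (a field
    list) to produce the text of a source file."""
    args = "".join(f", {f}" for f in fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return CLASS_TEMPLATE.format(name=name, args=args, assigns=assigns)
```

The same configuration records can drive a second template that emits the matching test file, which is how generated application code and generated test code stay in step.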

Here at Great Ideaz, we have made extensive use of code generation to create application code, test code, test data sets, and expected test result sets. There really is no limit to how you apply this idea to fast track a project.

Code generators are a great way to ensure that at least the first version of a file conforms to your standards. This allows you to quickly create a lot of high-quality code that you have confidence will work.

Of course, you may be required to modify the generated code. This means that you will still need to do some testing, but you may be able to generate those test cases as well.

Keep It Simple and Structured (KISS)

Most developers thinking about "keeping it simple" use a rule based on the number of lines of code in a function – maybe up to 50-100 lines is OK. Personally, I prefer to use the number of logic statements as a guide. If you have to nest more than 2-3 logic statements, then you should think about refactoring into multiple functions.

Functions with lots of logical paths are clearly more complex. They are harder to read and review, which also means it's harder to follow the logic and understand them. More complex functions are more likely to contain errors and are harder to test. As you increase the number of logic flow control statements, you exponentially increase the number of tests required to ensure that you have successfully mapped all possible paths.

Think about it this way: if you have a function with 8 nested IF statements you might need 2^8 (=256) tests just to push your code down each path. If you can refactor that as 2 functions each with 4 nested IF statements, you might need 2 * 2^4 (=32) tests. Different arrangements of the IF statements might alter the math, but I think you can see the point. Complexity significantly increases the testing effort required.
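A toy illustration of the refactoring (the validation rules are invented): splitting one nested validation into two small functions means each is tested over its own handful of paths, plus a few tests for the composition:

```python
# Sketch: two small, independently testable functions instead of one
# function with all the conditions nested together.
def valid_customer(c: dict) -> bool:
    # Two conditions here...
    return bool(c.get("name")) and bool(c.get("email"))

def valid_order(o: dict) -> bool:
    # ...and two here, rather than four nested IFs in one function.
    return o.get("quantity", 0) > 0 and o.get("total", 0) >= 0

def can_invoice(customer: dict, order: dict) -> bool:
    # The composition needs only a few extra tests, not 2**4.
    return valid_customer(customer) and valid_order(order)
```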

The other half of this section is to keep your code well-structured. Avoid exiting out of functions at multiple points by scattering RETURN statements all over the function. As an industry, we have frowned on the use of LABELS and GOTO statements for decades so structured programming should be the norm. I have seen cases where developers have dropped shortcuts into code that compromise the structure – especially when it comes to error detection and handling.
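To make the single-exit point concrete, here is a small invented example of the structured style described above: one result variable and one RETURN, so every path flows through the same exit where error handling can live:

```python
# Illustrative contrast to scattered RETURNs: a single, well-structured
# exit path. The record shape and messages are assumptions.
def process_record(record: dict) -> str:
    result = "error: empty record"      # one result variable...
    if record:
        if "id" in record:
            result = f"processed {record['id']}"
        else:
            result = "error: missing id"
    return result                        # ...and a single RETURN
```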

Use Loosely Coupled Components

When your application is built from loosely coupled components, a change can be restricted to a single component, limiting the scope of detailed testing required. When you are ready to re-deploy a component you may also have to perform regression/integration tests on the application as a whole with the new component.

The key question here is: how do you minimise the amount of regression/integration testing required?

The answer is determined by the granularity of your components. Smaller is better!

Let’s consider an example of a CRM application processing an Order. I might need to create a function that calculates the Sales Tax due on the Order. If that function is a separately deployable component, then I just have to test the calculation in detail and then sanity check every place where that calculation is required. In this scenario, the scope of my regression testing is minimized.
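A sketch of what that fine-grained component might look like (the rate table and rounding rule are assumptions for illustration, not real tax advice): a small pure function that carries its own detailed tests, so callers only need a sanity check:

```python
# Fine-grained, separately deployable component: the sales-tax
# calculation as a pure function. Rates are illustrative only.
TAX_RATES = {"UK": 0.20, "CA-ON": 0.13}

def sales_tax(subtotal: float, region: str) -> float:
    """Detailed tests live with this function; the Order Processing
    code that calls it only needs a sanity check of its inputs."""
    return round(subtotal * TAX_RATES.get(region, 0.0), 2)
```

Because the function has no dependency on order creation or invoicing, changing a rate or rounding rule re-triggers only this component's tests plus light regression checks at its call sites.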

Alternatively, my sales tax calculation function might have been part of a larger Order Processing service that includes code to create, process and invoice orders. In this case, I should do detailed testing on the whole Order Processing service and then perform regression/integration testing everywhere that service is used.

DevOps and Automated Testing

Automated Build and Release pipelines significantly reduce the number of errors made as application components are created, modified, built and deployed.

Most DevOps tooling includes a capability to include Automated Testing.

Automated testing of components can be very cost effective. Most developers now use some level of automated testing tools to create test cases for their classes and functions. Incorporating these tests into build pipelines can ensure that if the code is updated, existing test cases are not broken. A good practice is to ensure that test cases are reviewed and updated to ensure that they include new tests for any changes introduced.
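As a minimal sketch of a pipeline-friendly automated test, here is a standard-library unittest case; the function under test is invented for illustration. A build pipeline would run the whole suite and fail the build if any existing case breaks:

```python
# A minimal automated test as it might run in a build pipeline,
# using only the standard library. add_line_totals is a stand-in
# for real application code.
import unittest

def add_line_totals(lines):
    """Sum quantity * price over (quantity, price) pairs."""
    return sum(qty * price for qty, price in lines)

class OrderTests(unittest.TestCase):
    def test_totals(self):
        self.assertEqual(add_line_totals([(2, 5.0), (1, 3.0)]), 13.0)

    def test_empty_order(self):
        self.assertEqual(add_line_totals([]), 0)
```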

It is generally much easier to automate a test for a function in a class as opposed to a form in a user experience. User Experience testing tools are frequently more problematic, and you may require more manual testing depending on the complexity of your user experience.

Conclusion

Creating automated test scripts, or manually testing applications, is often viewed as a risk management exercise balanced against cost and time to market.

Given that, we need to determine how we can best reduce or mitigate the risk of errors.

First, we can make sure that we improve the quality of our code. This comes down to enforcing standards and templates so that we create clean code to start with. Keeping it simple and well-structured makes our code more understandable and deploying loosely coupled fine-grained components reduces the need for regression testing.

Code quality is a good place to start, but by far the best strategy is not to have to write and ultimately test that code in the first place.

Creating common services that use configuration data to replace logic that would otherwise be implemented as code makes a significant difference to application costs over the full course of the application lifecycle. And often the best place to look for reducing or simplifying code is in the user experience.