Most organisations understand the benefits that Agile development brings, and recognise the importance of testing often and early. Performance testing, however, rarely happens until just before deployment. In this article, Lam Pham, Senior Test Team Manager at NashTech, discusses how to implement performance testing alongside Scrum Agile, including the use of test automation.
The business value offered by Agile development and DevOps — in terms of better-quality software and faster time to market — is well understood today. There's also widespread recognition that testing often and early is critical to Agile approaches. However, performance testing is rarely included in sprints. Generally, it's carried out on the release candidate just before deployment, and this can lead to project risk.
At NashTech, we recommend implementing performance testing alongside Scrum Agile. By conducting performance testing during sprints, and defining non-functional acceptance criteria for each story as part of the Definition of Done (DoD), performance can be proven to meet requirements before agreeing that any functionality is complete.
Despite the benefits of conducting performance testing within sprints, doing so presents a number of challenges.
At NashTech, we've developed a 3-part strategy to address those challenges.
Performance testing plays a critical role in establishing acceptable quality levels for the end user as it evaluates the quality of the system under a certain workload. Shifting performance testing left — making sure it happens as close to development as possible — gives developers an early feedback loop for performance issues.
Given the growing need for faster release cycles, continuous integration (CI) and continuous delivery (CD) help improve product quality while reducing project cost.
Plainly, then, performance testing is a necessary part of the CI/CD pipeline. Identifying the goals (including the volume metrics) and acceptance values in a specific environment is critical to successful performance testing. Depending on the goals, we identify the threshold for each module and set its DoD in the sprint. Performance testing is then executed when the new build is ready. And if something fails, we're notified straightaway.
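As a minimal sketch of such a gate, the snippet below checks load-test results against per-module thresholds. The metric names and limit values are illustrative only, not actual NashTech figures; in a real pipeline, the measured values would come from the load-test run and a non-empty violation list would fail the build:

```python
# Sketch of a CI gate that fails the build when performance thresholds
# (the non-functional part of the DoD) are breached. Metric names and
# threshold values below are illustrative only.

# Per-module thresholds agreed as part of the sprint's Definition of Done
THRESHOLDS = {
    "checkout.p95_ms": 800,       # 95th-percentile response time
    "checkout.error_rate": 0.01,  # at most 1% failed requests
}

def gate(measured):
    """Return human-readable violations; an empty list means the build passes."""
    violations = []
    for metric, limit in THRESHOLDS.items():
        value = measured.get(metric)
        if value is None or value > limit:
            violations.append(f"{metric}: {value} exceeds limit {limit}")
    return violations

# In a real pipeline a non-empty result would end the job with a
# non-zero exit code, notifying the team straightaway.
print(gate({"checkout.p95_ms": 950, "checkout.error_rate": 0.004}))
```

Keeping the thresholds in one reviewable place also makes it easy to adjust them sprint by sprint as the DoD evolves.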
There's no single right answer for all situations: we have to adapt the performance-testing approach to the individual case. The following aspects will need to be considered.
Measurements and metrics. You first need to understand the required performance measurements and metrics. To make them realistic, we take into account the system's technology stack, the business rules, and how the application will be used in production. We then define the performance-testing goals and evaluation methods.
Taking too many measurements will make analysis difficult, and may also negatively impact the application's actual performance. That's why it's vital to understand which measurements and metrics are most relevant to achieving the testing goals.
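A small sketch of this idea, assuming raw per-request latency samples are available: rather than tracking every measurement the tools can produce, distil the samples into the few figures that usually matter, such as the median and the 95th percentile.

```python
# Distil raw latency samples into a handful of relevant metrics,
# using the nearest-rank percentile method. Sample values are made up.
import math
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 135, 140, 160, 180, 210, 230, 400, 950, 1200]
summary = {
    "p50_ms": percentile(latencies_ms, 50),   # typical user experience
    "p95_ms": percentile(latencies_ms, 95),   # tail experience
    "mean_ms": statistics.fmean(latencies_ms),
}
print(summary)
```

Note how the mean alone would hide the long tail: here the p95 latency is more than three times the mean, which is exactly the kind of signal a well-chosen metric surfaces.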
Test early. Performance testing should be done as early as possible, starting with individual components of the system. This is often cheaper than testing the whole system, and it can be conducted as soon as a component is developed. Each new build in the CI process triggers performance test execution. However, tests should ultimately exercise all aspects of the technology, including the infrastructure and data volume: testing must provide assurance that the system is ready for its intended audience the moment it's rolled out.
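A component-level check of this kind can be very lightweight. The sketch below times a single hypothetical component against a per-component budget; both the component (a sort-based index build) and the 500 ms budget are illustrative assumptions, not part of any real project:

```python
# Sketch of a component-level performance check that can run on every
# CI build, long before the whole system is integrated.
import time

def build_index(records):
    """Hypothetical component under test: build a sorted lookup index."""
    return sorted(records)

def time_component(fn, arg, repeats=5):
    """Best-of-N wall-clock timing of fn(arg), in milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(arg)
        best = min(best, time.perf_counter() - start)
    return best * 1000

BUDGET_MS = 500  # illustrative per-component budget from the sprint's DoD
elapsed_ms = time_component(build_index, list(range(100_000, 0, -1)))
assert elapsed_ms < BUDGET_MS, f"component too slow: {elapsed_ms:.1f} ms"
```

Because such a check needs nothing beyond the component itself, it can run on every build from the first sprint onwards, giving developers the early feedback loop described above.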
Test scenario. In general, we should test using the most realistic scenario possible. To that end, we base the scenario on as much information as possible about the test, use case, purpose and environment, gathered in advance. This kind of test is carried out once all the components are integrated.
There's a common belief that performance testing can be done on a simple, low-profile environment, and that a prediction can then be made for the real environment based on analysis of the results. But there are many risks with this approach. In particular, the size and structure of the data could dramatically affect load test results.
The closer the test environment is to the production environment in data size, structure and infrastructure, the more reliable the performance test results will be.
Subject to our earlier point about metrics, we collect as much information as we can using the testing tools and monitoring services.
Running regular automated performance tests is helpful for two main reasons: it builds a better understanding of the system's behaviour, and it identifies the pain points faster. Together, these allow us to more quickly provide an accurate forecast of how the system will perform in production.
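One practical way to benefit from regular runs, sketched below under assumed metric names and a 10% tolerance, is to compare each run against a stored baseline so that gradual regressions surface immediately rather than just before release:

```python
# Sketch: flag metrics that have regressed beyond a tolerance relative
# to a stored baseline. Metric names, values and tolerance are illustrative.

BASELINE = {"login.p95_ms": 300, "search.p95_ms": 650}
TOLERANCE = 0.10  # flag anything more than 10% slower than baseline

def regressions(current, baseline=BASELINE, tolerance=TOLERANCE):
    """Return {metric: (baseline, current)} for metrics that regressed."""
    flagged = {}
    for metric, base in baseline.items():
        value = current.get(metric)
        if value is not None and value > base * (1 + tolerance):
            flagged[metric] = (base, value)
    return flagged

# A run where search slowed down by ~20% would be flagged,
# while login's ~3% drift stays within tolerance:
print(regressions({"login.p95_ms": 310, "search.p95_ms": 780}))
```

After each accepted run, the baseline can be refreshed so the comparison tracks the system as it evolves sprint by sprint.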
There are significant benefits to uncovering performance issues in each Agile sprint. If you leave performance testing until just before deployment, issues have to be fixed immediately, which may delay the release.
As well as increasing the likelihood of your project going live on time, raising awareness of performance issues as soon as possible will lead to development of better architected, more scalable applications.
To learn more about NashTech Software Testing Services, email email@example.com and a member of the team will be in touch.