Our Quality Engineering team’s goal is to continue to shift left, testing earlier in the lifecycle and providing our developers with as much information about the software under test as early and as often as possible. The benefits include earlier bug detection (and therefore better code quality), reduced development and testing costs, and more effective use of time and resources.
An initial step in building any Quality Engineering practice is to automate as much functional testing as possible. This is usually the low-hanging fruit, and it frees up the Quality Engineers to conduct higher-value exploratory testing and to test more complex scenarios. Once functional test automation is defined and implemented, a team has many options for automating other types of testing, such as security, usability, or performance testing.
For our practice, we found that performance testing of back-end code followed concrete, repeatable steps and yielded measurable results that could be compared. So we decided to focus first on automating our back-end performance testing.
Our automated performance testing solution consists of a C#-based application that queries our test databases and constructs sample datasets. These datasets serve as the input to our JMeter scripts, which are generated dynamically based on the endpoint we’re targeting, the number of simultaneous users, and the number of rows in the dataset. With the input data and test script in hand, we execute the script against the developer’s changes while also running it against an instance of our production code. When the tests complete, we not only compare the performance metrics between the two branches but also, where applicable, compare the content of the responses that are returned. This allows us to conduct functional testing as well as performance testing, all within the continuous integration pipeline.
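As a rough illustration of the script-generation step (a simplified sketch, not our production code; the template file, placeholder names, and paths are hypothetical), the generator fills a JMX template with the run parameters:

    // Minimal sketch: build a JMeter test plan from run parameters and a sample dataset.
    // The template file, placeholder names, and output paths are illustrative assumptions.
    using System.IO;

    public static class JMeterScriptBuilder
    {
        public static string Build(string endpoint, int userCount, string datasetCsvPath)
        {
            // Size the run from the generated dataset.
            int rowCount = File.ReadAllLines(datasetCsvPath).Length;

            // Load a JMX template containing placeholders and substitute the run parameters.
            string template = File.ReadAllText("Templates/endpoint-test.jmx.template");
            string plan = template
                .Replace("{{ENDPOINT}}", endpoint)
                .Replace("{{THREADS}}", userCount.ToString())
                .Replace("{{LOOPS}}", rowCount.ToString())
                .Replace("{{DATASET}}", datasetCsvPath);

            Directory.CreateDirectory("GeneratedPlans");
            string outputPath = Path.Combine(
                "GeneratedPlans",
                $"{Path.GetFileNameWithoutExtension(datasetCsvPath)}-{userCount}u.jmx");
            File.WriteAllText(outputPath, plan);
            return outputPath;
        }
    }

The resulting .jmx file is then handed to the JMeter command line twice: once against the developer’s branch and once against the production baseline.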
As both the code and the suite of tests grow, the execution time of our test suite increases, which conflicts with our goal of providing results as quickly as possible. To manage this, the performance tests monitor changes to files, and only the relevant tests are executed on each commit, keeping the time to a result as short as possible.
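A simplified sketch of how that selection might work, assuming a hand-maintained mapping from source directories to test suites (the paths and suite names below are illustrative, not our actual layout):

    // Minimal sketch of selective execution: map changed files (e.g. from "git diff --name-only")
    // to the performance test suites that cover them. The mapping rules are hypothetical.
    using System;
    using System.Collections.Generic;

    public static class TestSelector
    {
        // Directory prefix -> performance test suite that exercises that area of the code.
        private static readonly Dictionary<string, string> Map = new()
        {
            ["Services/Orders/"] = "orders-endpoint-tests",
            ["Services/Accounts/"] = "accounts-endpoint-tests",
            ["Shared/"] = "full-suite" // shared code can affect every endpoint
        };

        public static IReadOnlyCollection<string> SelectTests(IEnumerable<string> changedFiles)
        {
            var selected = new HashSet<string>();
            foreach (var file in changedFiles)
            {
                foreach (var (prefix, suite) in Map)
                {
                    if (file.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                        selected.Add(suite);
                }
            }
            return selected;
        }
    }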
In addition to testing individual branches, we execute our entire suite of performance tests periodically throughout each development sprint. This regression testing enables us to measure the performance of the whole system once all branches have been merged.
We store the results of all our performance test runs. A separate process then analyzes the stored data and reports daily, comparing each run against previous runs across several reporting periods. This allows us to stop slow-performing code from being released into production, to view the performance trend of each endpoint over time, and to take corrective action, if needed, before our customers experience a degradation in performance.
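Conceptually, that daily analysis boils down to comparing each endpoint’s latest numbers against its recent history and flagging anything that has slowed down beyond a tolerance. Here is a minimal sketch of that comparison; the data shapes, field names, and the 10% threshold are assumptions for illustration only:

    // Minimal sketch of the daily analysis step: compare a run's average response time
    // per endpoint against the mean of previous runs and flag regressions beyond a tolerance.
    using System.Collections.Generic;
    using System.Linq;

    public record EndpointResult(string Endpoint, double AvgResponseMs);

    public static class RegressionAnalyzer
    {
        public static IEnumerable<string> FindRegressions(
            IEnumerable<EndpointResult> currentRun,
            IReadOnlyDictionary<string, List<double>> history,
            double tolerance = 0.10) // 10% slowdown allowed before we flag it
        {
            foreach (var result in currentRun)
            {
                if (!history.TryGetValue(result.Endpoint, out var previous) || previous.Count == 0)
                    continue; // no baseline yet for this endpoint

                double baseline = previous.Average();
                if (result.AvgResponseMs > baseline * (1 + tolerance))
                    yield return $"{result.Endpoint}: {result.AvgResponseMs:F0} ms vs baseline {baseline:F0} ms";
            }
        }
    }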
We’ve seen some great measurable benefits since implementing automated performance testing! For example, we recently completed a comparison of ten proposed code changes, measuring the performance and functionality of each branch against the others. We were able to see precisely which code change had the least impact, rejecting nine of the branches because they didn’t meet our performance requirements. This testing would normally have taken us a day or more to complete; instead, we provided results within a couple of hours and without taking up any manual testing time!
The next step for our team will be to integrate application monitoring (CPU usage, memory usage) into the test runs. This will give us an even richer dataset and enable us to be proactive in preventing issues we spot during testing. We’ll also explore how to apply the lessons learned from our back-end performance testing implementation to front-end performance testing, so that our team can benefit from these gains across our entire development lifecycle!