There is a well-known principle in the testing community: "100% testing of a product is not possible." Tests must therefore be planned by risk and priority, within the project's economic constraints. Load testing is one of them. If you want your product or service to keep serving customers under high demand, as designed, then a load test is one of the most important activities you can plan, because it provides concrete information about your website's performance.

As the Loadium team, we carried out a very productive engagement with our customer Modanisa ahead of the March 8th International Women's Day campaign, and the campaign period that followed was equally successful. In this article, I will walk through the technical details of that work and the campaign period, along with the data obtained from the tests.

First test and solving the access issue…

Before the tests, our client had moved its servers to another provider, following a decision made at the beginning of the year. We expected this to eliminate some, but not all, of the problems seen during previous campaign periods.

During the first tests, we configured our side for access and for generating the desired user load.

We produced the load as virtual users over 50 dedicated IPs. After confirming that we could reach the desired numbers with Loadium, the test tool designed for system performance tests, record & play, and API scripting, we started running the tests against web, mobile (iOS & Android), and mobile web.
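The article does not show Loadium's internals, but the idea of driving many virtual users concurrently and collecting per-request results can be sketched in plain Python. The `request_fn` stub below is a placeholder for a real HTTP call; all names here are illustrative, not Loadium's API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request_fn, virtual_users, requests_per_user):
    """Fire requests concurrently and collect (ok, elapsed_seconds) samples."""
    def one_user(_):
        samples = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            ok = request_fn()  # stand-in for a real HTTP request
            samples.append((ok, time.perf_counter() - start))
        return samples

    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        per_user = list(pool.map(one_user, range(virtual_users)))
    return [sample for user in per_user for sample in user]

# Stub request function standing in for a real HTTP call.
samples = run_load(lambda: True, virtual_users=5, requests_per_user=4)
success_rate = sum(ok for ok, _ in samples) / len(samples)
print(len(samples), success_rate)
```

A real harness would swap the lambda for an HTTP client call and distribute the workers across the dedicated IPs; the collection shape stays the same.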

Once we had prepared the environment and the test plan, we saw that the change of server provider contributed roughly a 30% improvement, but everyone agreed this alone would not be enough for the campaign period. There was also another consideration: we believed that surfacing the load tests through Google Analytics data, a metric everyone already spoke in, would contribute greatly to our estimates.

Triggering Google Analytics Data

To report to top management and to keep tracking the data everyone already followed, we fired a hit to Google Analytics at the start of each virtual-user request. This also let everyone watch the activity in real time (within the limits of what Google calls "real time").
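The article does not say which mechanism was used to send these hits; one plausible way at the time was the legacy Universal Analytics Measurement Protocol (v1), where a pageview is reported by requesting a URL with a handful of query parameters. The tracking ID and client ID below are placeholders.

```python
from urllib.parse import urlencode

def build_ga_hit(tracking_id, client_id, page_path):
    """Build a legacy GA Measurement Protocol (v1) pageview hit URL."""
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # GA property ID (placeholder below)
        "cid": client_id,    # anonymous client ID
        "t": "pageview",     # hit type
        "dp": page_path,     # document path being reported
    }
    return "https://www.google-analytics.com/collect?" + urlencode(params)

url = build_ga_hit("UA-XXXXX-Y", "555", "/campaign/womens-day")
print(url)
```

Issuing an HTTP GET to that URL from each virtual user is enough for the traffic to appear in the real-time Analytics view.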

At the end of each test, we also published reports covering requests per second and the numbers of successful and failed requests. More importantly, we made sure everyone was speaking a common language.
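A minimal sketch of how such a per-test report can be derived from the raw samples (the report fields mirror the metrics named above; the function and field names are my own, not from the article):

```python
def summarize(samples, duration_seconds):
    """samples: list of (ok: bool, elapsed: float). Returns a report dict."""
    total = len(samples)
    failed = sum(1 for ok, _ in samples if not ok)
    return {
        "requests_per_second": total / duration_seconds,
        "successful": total - failed,
        "failed": failed,
    }

# Four requests observed over a 2-second window, one of them failing.
report = summarize(
    [(True, 0.12), (True, 0.30), (False, 1.05), (True, 0.20)],
    duration_seconds=2.0,
)
print(report)
```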

Monitoring the source of the problems during the tests, and the fixes

Now we could trigger Analytics data, and everyone spoke a common language when technical problems started surfacing.

During the load tests, the scenarios (random navigation, sign-up/login, cache, product selection, and shopping behaviors on the site) generated more than three times as many requests as the previous campaign period.
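Mixing those behaviors realistically usually means each virtual user picks its next action at random with scenario weights. The weights below are purely illustrative; the article does not state the actual mix.

```python
import random

# Illustrative scenario weights; the real distribution is not published.
ACTIONS = {
    "browse": 0.40,
    "sign_up_or_login": 0.15,
    "select_product": 0.25,
    "checkout": 0.20,
}

def next_action(rng):
    """Pick a virtual user's next action according to the scenario weights."""
    return rng.choices(list(ACTIONS), weights=list(ACTIONS.values()), k=1)[0]

rng = random.Random(42)  # fixed seed so runs are reproducible
picks = [next_action(rng) for _ in range(1000)]
print(picks.count("browse"))
```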

After a total of four runs, and while staying within the acceptable response times, the system could handle 3 to 3.5 times as much load by the final runs.
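"Staying within acceptable response times" is typically checked against a percentile rather than the average, since averages hide slow outliers. A nearest-rank percentile check might look like this (thresholds and numbers are made up for illustration):

```python
def percentile(values, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(values)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical response times in seconds from one run.
times = [0.18, 0.22, 0.25, 0.31, 0.40, 0.47, 0.55, 0.62, 0.80, 1.90]
p95 = percentile(times, 95)
print(p95 <= 2.0)  # pass/fail against an illustrative 2-second budget
```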

So how did this work?

The test runs showed where improvements were needed: the database, service calls, the login structure, and cache usage.
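The article does not detail the cache improvements, but the principle behind that kind of fix is simple: avoid repeating an expensive database or service call for the same input. A toy illustration with Python's standard memoization decorator:

```python
import functools

calls = {"count": 0}  # counts how often the expensive path actually runs

@functools.lru_cache(maxsize=128)
def product_page(product_id):
    calls["count"] += 1  # stands in for an expensive DB/service call
    return f"<page for {product_id}>"

product_page(1)
product_page(1)  # served from cache; the expensive path is skipped
product_page(2)
print(calls["count"])  # → 2
```

A real site would cache at the HTTP or application layer with an expiry policy, but the effect on the backend call count is the same.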

With each of these improvements, our software development team could see the average response times fall test by test, making the site more responsive for customers.

Thus, freed from the fear of the overloaded-server problem that haunts every campaign period, we had the pride of seeing hourly turnover grow y times and total turnover x times. During the busiest hours of the campaign, stress was replaced with celebration.