Performance Testing

Software Testing Part 11


What is Performance Testing?

  • Performance testing evaluates the response time, throughput, and resource utilization of a system as it executes its required functions, in comparison with different versions of the same product or with competing products.
  • Performance testing is complex and expensive because of its large resource requirements and the time it takes.
  • A good number of the defects uncovered during performance testing require design and architecture changes.
  • Performance test cases are repetitive in nature: they are normally executed again and again for different parameter values, load conditions, configurations, and so on.
  • Performance testing is a laborious process involving considerable time and effort.
  • Not all operations/business transactions can be included in performance testing, so high-priority test cases are implemented before the others.

Factors Governing Performance Testing

  1. Throughput - The capability of the system or product to handle multiple transactions in a given period.
  2. Response Time - The delay between the point of request and the first response from the product.
  3. Network Latency - The delay in response time caused by the network.
  4. Product Latency - The delay in response time caused by the product itself.
  5. Tuning - Enhancing the product's performance by setting different values for the parameters (variables) of the product, the operating system, and other components.
  6. Benchmarking - Comparing competitive products against each other to analyze their strengths and weaknesses.
  7. Capacity Planning - The exercise of finding out what resources and configurations are needed for a given load.
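
To make the first two factors concrete, here is a minimal Python sketch that measures response time and throughput for a series of HTTP requests. The endpoint URL and sample size are illustrative assumptions, not part of any particular product's test plan:

```python
import time
import urllib.request

URL = "https://example.com/"   # hypothetical endpoint; substitute the product's API
N_REQUESTS = 50                # assumed sample size

response_times = []
start = time.perf_counter()
for _ in range(N_REQUESTS):
    t0 = time.perf_counter()
    urllib.request.urlopen(URL).read()               # one business transaction
    response_times.append(time.perf_counter() - t0)  # per-request response time
elapsed = time.perf_counter() - start

# Throughput: transactions handled per unit time.
print(f"throughput: {N_REQUESTS / elapsed:.1f} requests/sec")
# Response time: delay between issuing the request and receiving the response.
print(f"avg response time: {sum(response_times) / N_REQUESTS * 1000:.1f} ms")
```

Timings taken from a remote client include network latency; running the same loop on the product's own host approximates product latency alone.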

Why is Performance Testing done?

Performance Testing is done to ensure that a product:

  • Processes the required number of transactions in any given interval (throughput).
  • Is available and running under different load conditions (availability).
  • Responds fast enough for different load conditions (response time).
  • Delivers a worthwhile return on investment for the hardware and software resources, and helps decide what kind of resources the product needs under different load conditions (capacity planning).
  • Performs comparably to, or better than, competing products on different parameters (competitive analysis and benchmarking).

Methodology For Performance Testing

Performance testing involves the following steps:

  1. Collecting requirements
  2. Writing test cases
  3. Automating performance test cases
  4. Executing performance test cases
  5. Analyzing performance test results
  6. Performance tuning
  7. Performance benchmarking
  8. Recommending the right configuration for customers (Capacity Planning)

1. Collecting Requirements

  • There are two types of requirements - generic and specific.
  • Generic requirements are those that are common across all products in the domain.
  • Specific requirements are those that depend on the implementation of a particular product and differ from one product to another in a given domain.

Some resources for deriving performance requirements:

  1. Performance compared to the previous release of the same product.
  2. Performance compared to the competitive products.
  3. Performance compared to absolute numbers derived from actual need.
  4. Performance numbers derived from architecture and design.

2. Writing Test Cases

A test case for performance testing should have the following details:

  1. List of operations or business transactions to be tested.
  2. Steps for executing those operations/transactions.
  3. List of product and OS parameters that impact performance, and their values.
  4. Loading pattern.
  5. Resources and their configurations (network, hardware, and software configurations).
  6. The expected results (that is, expected response time, throughput, latency).
  7. The product versions or competitive products to be compared with, and related information such as their corresponding fields.
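
One lightweight way to keep these details together is a structured record per test case. The sketch below is a hypothetical Python dataclass (the field names are my own, not a standard format); it requires Python 3.9+ for the built-in generic types:

```python
from dataclasses import dataclass, field

@dataclass
class PerfTestCase:
    """One performance test case, mirroring the seven details listed above."""
    operations: list[str]          # 1. operations/business transactions to test
    steps: list[str]               # 2. steps for executing them
    parameters: dict[str, str]     # 3. product/OS parameters and their values
    load_pattern: str              # 4. e.g. "ramp 0 -> 500 users over 10 min"
    configuration: dict[str, str]  # 5. network/hardware/software configuration
    expected: dict[str, float]     # 6. expected response time, throughput, latency
    baselines: list[str] = field(default_factory=list)  # 7. versions/competitors
```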

3. Automating Performance Test Cases

Automation helps performance testing for the following reasons:

  1. Performance testing is repetitive.
  2. Performance test cases cannot be effective without automation; in most cases it is practically impossible to do performance testing manually.
  3. The results of performance testing need to be accurate, and manually calculating the response time, throughput, and so on can introduce inaccuracy because of human error.
  4. Performance testing takes into account several factors. There are far too many permutations and combinations of those factors, and it is difficult to remember and apply them all if the tests are done manually.
  5. The analysis of performance results and failures needs to take into account related information such as resource utilization, log files, trace files, and so on that are collected at regular intervals. It is impossible to do this testing and perform the book-keeping of all related information and analysis manually.

Thus, end-to-end automation is required for performance testing.
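
As a sketch of what such automation can look like, the snippet below replays a transaction concurrently at several load levels and records every response time. The transaction body is a placeholder (a sleep), standing in for whatever operation the product under test performs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> float:
    """Placeholder for one automated business transaction; returns its duration."""
    t0 = time.perf_counter()
    time.sleep(0.01)               # replace with the real operation under test
    return time.perf_counter() - t0

def run_load(users: int, iterations: int) -> list[float]:
    """Replay the transaction concurrently and collect every response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(lambda _: transaction(), range(iterations)))

for users in (1, 10, 50):          # repeat automatically for different loads
    times = run_load(users, iterations=200)
    print(f"{users:>3} users: avg {sum(times) / len(times) * 1000:.1f} ms")
```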

4. Executing Performance Test Cases

The most effort-consuming aspect of executing performance test cases is usually data collection.

The following data needs to be collected during test case execution:

  1. Start and end time of test case execution.
  2. Log and trace/audit files of the product and operating system, for future debugging and repeatability purposes.
  3. Utilization of resources such as CPU, memory, disk, and network, sampled on a periodic basis.
  4. Configuration of all environmental factors such as hardware, software, and other components.
  5. The response time, throughput, latency, and so on, as specified in the test case documentation, recorded at regular intervals.

What performance a product delivers for different configurations of hardware and network setup is another aspect that needs to be covered during execution. This mandates repeating the tests for different configurations, which is referred to as configuration performance tests.
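
For point 3, resource utilization is typically sampled by a small agent that runs alongside the test. A minimal sketch using the third-party psutil library (pip install psutil) is shown below; the interval and CSV layout are arbitrary choices:

```python
import csv
import time

import psutil  # third-party library: pip install psutil

def sample_resources(duration_s: int, interval_s: int, path: str) -> None:
    """Record CPU, memory, disk, and network counters at a fixed interval."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct",
                         "disk_read_bytes", "net_sent_bytes"])
        for _ in range(duration_s // interval_s):
            writer.writerow([
                time.time(),
                psutil.cpu_percent(interval=interval_s),  # blocks for interval_s
                psutil.virtual_memory().percent,
                psutil.disk_io_counters().read_bytes,
                psutil.net_io_counters().bytes_sent,
            ])

# e.g. sample every 5 seconds for an hour while the test runs:
# sample_resources(3600, 5, "utilization.csv")
```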

5. Analyzing the Performance Test Results

Before the test results can be analyzed, some calculation and organization of the data are required:

  1. Calculating the mean of the performance test result data.
  2. Calculating the standard deviation.
  3. Removing the noise (noise removal) and re-plotting and recalculating the mean and standard deviation.
  4. Differentiating the data served from a cache from the data actually processed by the product. Most client-server, Internet, and database applications store the results of a query in a local high-speed buffer so that the same request can be served quickly the next time; this is called caching, and cached responses can skew the measured numbers.
  5. Differentiating the performance data collected when the resources were completely available from the data collected while background activities were going on.
The analysis of performance data is carried out to conclude the following:

  1. Whether performance of the product is consistent when tests are executed multiple times.
  2. What performance can be expected for what type of configuration resources, both hardware and software.
  3. What parameters impact performance and how they can be used to derive better performance.
  4. What is the effect of scenarios involving a mix of several operations on the performance factors.
  5. What is the effect of product technologies such as caching on performance improvements.
  6. Up to what load are the performance numbers acceptable.
  7. What is the optimum throughput and response time of the product for a set of factors such as load, resources, and parameters.
  8. What performance requirements are met and how the performance looks when compared to the previous version.
  9. Sometimes a high-end configuration may not be available for performance testing; in that case, the current data and charts are used to extrapolate the expected results, as sketched below.
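
For point 9, a simple way to extrapolate is to fit a line to the configurations that were measured. The numbers below are purely illustrative, and statistics.linear_regression requires Python 3.10+; real products rarely scale this linearly, so treat the projection as an estimate:

```python
from statistics import linear_regression  # Python 3.10+

# Illustrative throughput measured on the configurations we could test:
cpus = [2, 4, 8, 16]
throughput = [210.0, 420.0, 815.0, 1580.0]  # requests/sec

# Fit a line and project to a 32-CPU machine that was not available.
slope, intercept = linear_regression(cpus, throughput)
print(f"projected at 32 CPUs: {slope * 32 + intercept:.0f} requests/sec")
```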

6. Performance Tuning

Analyzing performance data helps in narrowing down the list of parameters that really impact the performance results and improving product performance.

The results of performance tuning are normally published in the form of a guide called the performance tuning guide for customers so that they can benefit from the results.

Two steps to get the best results from performance tuning:

  1. Tuning the product parameters.
  2. Tuning the operating system parameters.

Tuning Product Parameters
Important points to consider while tuning the product parameters:

  1. Repeat the performance tests for different values of each parameter that impact performance, by keeping other parameters unchanged. This reveals the effect a parameter has on the performance of the product.
  2. Sometimes changing a particular parameter value requires changes in other parameters, as two parameters can be interdependent. Repeat the performance tests for groups of related parameters and their different values.
  3. Repeat the performance tests for default values of all parameters, which are called factory settings tests.
  4. Repeat the performance tests for low and high values of each parameter and combinations.
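
A sweep over parameter combinations can be generated mechanically, as in the hypothetical sketch below (parameter names and values are invented for illustration; run_perf_test stands in for the automated suite):

```python
import itertools

# Hypothetical product parameters with low/default/high values to sweep,
# keeping every parameter not listed here at its factory setting.
params = {
    "cache_size_mb":  [64, 256, 1024],
    "worker_threads": [4, 16, 64],
    "batch_size":     [10, 100],
}

def run_perf_test(config: dict) -> float:
    """Placeholder: run the automated test suite, return avg response time."""
    ...

for combo in itertools.product(*params.values()):
    config = dict(zip(params.keys(), combo))
    print(config)              # call run_perf_test(config) in a real sweep
```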

Tuning Operating System Parameters
Tuning the OS parameters is another step towards getting better performance. The operating system provides various sets of parameters under different categories:

  1. File system related parameters, like the number of open files permitted.
  2. Disk management parameters, like the number of simultaneous disk reads/writes.
  3. Memory management parameters, like the virtual memory page size and number of pages.
  4. Processor management parameters, like enabling/disabling processors in a multiprocessor environment.
  5. Network parameters, like setting the TCP/IP timeout.
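
As one concrete example of category 1, the number of open files permitted can be inspected and raised from Python's standard resource module (POSIX systems only; raising the hard limit normally requires root privileges):

```python
import resource  # standard library, POSIX only

# File-system parameter: the number of open files this process may hold.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit to the hard limit for a file-heavy test run.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```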

7. Performance Benchmarking

Steps involved in performance benchmarking are the following:

  1. Identifying the transactions/scenarios and the test configuration.
  2. Comparing the performance of different products.
  3. Tuning the parameters of each product being compared, in a fair and equal manner, so that each delivers its best performance.
  4. Publishing the results of performance benchmarking.
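
A toy sketch of steps 1-3 follows: the two stub functions are hypothetical stand-ins for each product's transaction, and in a real benchmark both would run the identical workload on identical, individually tuned configurations:

```python
import time

def product_a() -> None:
    time.sleep(0.001)  # stub standing in for product A's transaction

def product_b() -> None:
    time.sleep(0.002)  # stub standing in for product B's transaction

def benchmark(run_transaction, iterations: int = 200) -> float:
    """Run the same workload against one product; return throughput (txn/sec)."""
    t0 = time.perf_counter()
    for _ in range(iterations):
        run_transaction()
    return iterations / (time.perf_counter() - t0)

for name, fn in [("product A", product_a), ("product B", product_b)]:
    print(f"{name}: {benchmark(fn):.0f} transactions/sec")
```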

8. Capacity Planning

Capacity planning produces recommended configurations corresponding to short-, medium-, and long-term requirements:

  1. Minimum required configuration - Denotes that with anything less than this configuration, the product may not even work. Thus, configurations below the minimum required configuration are usually not supported.
  2. Typical configuration - Denotes that under that configuration the product will work fine for meeting the performance requirements of the required load pattern and can also handle a slight increase in the load pattern.
  3. Special configuration - Denotes that capacity planning was done considering all future requirements.

Role of Load Balancing & High Availability in Capacity Planning

  • Load balancing ensures that the multiple machines available are used equally to service the transactions.
  • This ensures that by adding more machines, more load can be handled by the product.
  • Machine clusters are used to ensure availability.
  • In a cluster there are multiple machines with shared data so that in case one machine goes down, the transactions can be handled by another machine in the cluster.
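
A round-robin scheme is the simplest form of load balancing; the sketch below is a minimal illustration (machine names are invented), not a production balancer:

```python
import itertools

class RoundRobinBalancer:
    """Distribute transactions equally across the machines in a cluster."""

    def __init__(self, machines: list[str]):
        self._cycle = itertools.cycle(machines)

    def route(self, transaction: str) -> str:
        return next(self._cycle)   # each machine receives an equal share

lb = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for txn in ["t1", "t2", "t3", "t4", "t5", "t6"]:
    print(txn, "->", lb.route(txn))
```

Adding a machine to the list lets the cluster absorb more load, and because the nodes share data, a surviving node can take over the transactions of one that goes down.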

Process For Performance Testing

  1. Gather resource requirements.
  2. Set up the test lab.
  3. Assign all the responsibilities to multiple teams and their members.
  4. Set up product audits, logs, and internal and external traces.
  5. Decide the entry and exit criteria.

Conclusion

You can read other articles written by me through these links.

Software Testing Series
1. Fundamental Principles of Software Testing
2. Software Development Life Cycle Models
3. Quality Assurance vs Quality Control
4. Testing Verification vs Testing Validation
5. Process & Life Cycle Models For Testing Phases
6. White Box Testing
7. Black Box Testing
8. Integration Testing
9. System Testing
10. Regression Testing
11. Performance Testing
12. Ad Hoc Testing
13. Checklist & Template For Test Plan & Management
14. Software Test Automation

Operating System Series
1. Introduction & Types of OS
2. Process States & Lifecycle
3. System Calls
4. User Mode vs Kernel Mode
5. CPU Process Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Disk Management & Scheduling
10. File System in OS
11. Protection & Security

System Design Series
Introduction To Parallel Computing
Deep Dive Into Virtualization
Insights Into Distributed Computing

Cloud Computing Series
1. Cloud Service Models
2. Cloud Deployment Models
3. Cloud Security
4. Cloud Architecture
5. Cloud Storage
6. Networking In The Cloud
7. Cloud Cost Management
8. DevOps In Cloud & CI/CD
9. Serverless Computing
10. Container Orchestration
11. Cloud Migration
12. Cloud Monitoring & Management
13. Edge Computing In Cloud
14. Machine Learning In Cloud

Computer Networking Series
1. Computer Networking Fundamentals
2. OSI Model
3. TCP/IP Model : Application Layer
4. TCP/IP Model : Transport Layer
5. TCP/IP Model : Network Layer
6. TCP/IP Model : Data Link Layer

Version Control Series
1. Complete Guide to Git Commands
2. Create & Merge Pull Requests
3. Making Open Source Contributions

Linux
Complete Guide to Linux Commands

Thanks For Reading! 💙
Garvit Singh
