What is System Testing?
- System testing is a testing phase conducted on the complete, integrated system to evaluate the system's compliance with its specified requirements.
- It is done after the unit, component, and integration testing phases.
- Testing conducted on complete, integrated products and solutions to evaluate system compliance with specified requirements on functional and non-functional aspects is called system testing.
- System testing brings out issues that are fundamental to the design, architecture, and code of the whole product.
- System testing is the only phase of testing that tests both the functional and non-functional aspects of the product.
- Two types: functional and non-functional.
What is a System?
- A system is a complete set of integrated components that together deliver product functionality and features.
- A system can also be defined as a set of hardware, software, and other parts that together provide product features and solutions.
Why is System Testing Done?
For these reasons:
- Provide independent perspective in testing
- Bring in customer perspective in testing
- Provide a “fresh pair of eyes” to discover defects not found earlier by testing
- Test product behavior in a holistic, complete, and realistic environment
- Test both functional and non-functional aspects of the product
- Build confidence in the product
- Analyze and reduce the risk of releasing the product
- Ensure all requirements are met and prepare the product for acceptance testing.
Functional Testing vs Non Functional Testing
| Testing Aspect | Functional Testing | Non-Functional Testing |
|---|---|---|
| Involves | Product features and functionality | Quality factors |
| Tests | Product behavior | Behavior and experience |
| Result conclusion | Simple steps written to check expected results | Huge data collected and analyzed |
| Results vary due to | Product implementation | Product implementation, resources, and configurations |
| Testing focus | Defect detection | Qualification of product |
| Knowledge required | Product and domain | Product, domain, design, architecture, statistical skills |
| Failures normally due to | Code | Architecture, design, and code |
| Testing phase | Unit, component, integration, system | System |
| Test case repeatability | Repeated many times | Repeated only in case of failures and for different configurations |
| Configuration | One-time setup for a set of test cases | Configuration changes for each test case |
Functional System Testing
- When functional testing is performed at different phases, two problems arise: duplication and gray areas.
- Duplication refers to the same tests being performed multiple times.
- A gray area refers to certain tests being missed out in all the phases.
- Gray areas in testing happen due to lack of product knowledge, lack of knowledge of customer usage, and lack of coordination across test teams.
Ways system functional testing is performed:
- Design/architecture verification
- Business vertical testing
- Deployment testing
- Beta testing
- Certification, standards, and testing for compliance.
Design/Architecture Verification
Design/architecture verification helps validate the product features, which are written based on customer scenarios, and verify them against the product implementation.
Certain test cases are identified and moved to earlier phases of testing, such as integration or component testing, to catch defects early and avoid major defects later.
Guidelines used to reject test cases:
- Is this focusing on code logic, data structures, and a unit of the product? If yes, then it belongs to unit testing.
- Is this specified in the functional specification of any component? If yes, then it belongs to component testing.
- Is this specified in design and architecture specification for integration testing? If yes, then it belongs to integration testing.
- Is it focusing on product implementation but not visible to customers? This is focusing on implementation - to be covered in unit/component/integration testing.
- Is it the right mix of customer usage and product implementation? Customer usage is a prerequisite for system testing.
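The guideline questions above can be sketched as a simple routing function that assigns a candidate test case to a phase. The flag names and phase labels below are illustrative, not from the article:

```python
# Hypothetical sketch: applying the rejection guidelines in order to route
# a candidate test case to the phase where it belongs.

def assign_phase(focuses_on_code_logic: bool,
                 in_component_spec: bool,
                 in_design_spec: bool,
                 implementation_only: bool,
                 reflects_customer_usage: bool) -> str:
    """Return the testing phase a candidate test case belongs to."""
    if focuses_on_code_logic:
        return "unit"                        # code logic, data structures
    if in_component_spec:
        return "component"                   # functional spec of a component
    if in_design_spec:
        return "integration"                 # design/architecture spec
    if implementation_only:
        return "unit/component/integration"  # not visible to customers
    if reflects_customer_usage:
        return "system"                      # customer usage is the prerequisite
    return "reject"

# A test written from a customer usage scenario stays in system testing.
print(assign_phase(False, False, False, False, True))  # system
```

The order of the checks matters: a test is pushed to the earliest phase that can own it, and only tests reflecting real customer usage remain in system testing.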
Business Vertical Testing
Overview
- General-purpose products like workflow automation systems are designed to cater to various businesses and services.
- Business vertical testing involves adapting and testing the product for different industry verticals such as insurance, banking, and asset management.
Customization in Business Vertical Testing
- The product's procedures are altered to align with specific business processes.
- User objects (e.g., clerk, officer) are created and associated with operations to customize the product for different business needs.
- Role-based operations are implemented to ensure certain tasks are performed by specific user roles.
Terminology Considerations
- The product adapts its terminology to match industry-specific language.
- For example, an incoming item such as an email might be referred to as a claim in an insurance system, while in a loan processing system it may be called a loan application.
Importance of Understanding Business Processes
- The product needs to comprehend the intricacies of business processes to effectively customize its workflow for different verticals.
- Customization features are crucial to accommodate the unique requirements of various business domains.
Syndication in Business Vertical Testing
- Some tasks related to business verticals may be handled by solution integrators or service providers.
- Licensing agreements may involve changing product names, company names, and copyrights to reflect the identity of the solution integrators or service providers.
Testing Approaches
- Business vertical testing can be performed through simulation or replication.
- Simulation involves assuming requirements and testing the business flow.
- Replication involves obtaining customer data and customizing the product accordingly.
Scenario Testing in Integration and System Phases
- Integration testing creates business vertical scenarios, focusing on interfaces and interactions.
- System testing evaluates business verticals in a real-life customer environment, considering customization, terminology, and syndication aspects.
Deployment Testing
Overview
- System testing is the final phase before product delivery.
- Ensures that customer deployment requirements are met, assessing short-term success or failure based on customer satisfaction.
Offsite Deployment Testing
- Conducted to simulate customer deployment requirements.
- Aims to ensure that the product is ready for customers who await it.
Onsite Deployment Testing
- Conducted after the release of the product at customers' locations.
- Involves collaboration between the product development organization and the organization using the product.
Stages of Onsite Deployment Testing
Stage 1
- Mirrored deployment machines with similar configurations are set up.
- Actual data from the live system is taken, and user operations are rerun on the mirrored deployment machine.
- Verifies whether the enhanced or similar product can perform existing functionality without affecting users.
- Intelligent recorders may be used to maintain identical mirrored and live systems regarding business transactions.
Stage 2
- The mirrored system from Stage 1 is made the live system, running the new product.
- Regular backups are taken, and alternative methods are used to record incremental transactions.
- Recorded transactions from the mirrored system are preserved with the old live system as a fallback.
- If no failures are observed in this stage for an extended period, the onsite deployment is considered successful, and the old live system is replaced by the new system.
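Stage 1 amounts to record-and-replay verification: transactions recorded on the live system are re-run on the mirror and the results compared. A minimal sketch, with both systems modeled as stand-in functions (the transaction fields and behavior are invented for illustration):

```python
# Hypothetical sketch of Stage 1 onsite deployment testing: replay
# transactions recorded on the live system against the mirrored system
# and flag any transaction whose result differs.

def live_system(txn):
    return txn["amount"] * 2          # existing product behavior (stand-in)

def mirrored_system(txn):
    return txn["amount"] * 2          # new product; must match existing behavior

def replay_and_compare(recorded_txns):
    """Re-run each recorded transaction on the mirror; return mismatched IDs."""
    mismatches = []
    for txn in recorded_txns:
        if mirrored_system(txn) != txn["live_result"]:
            mismatches.append(txn["id"])
    return mismatches

# Recorder output: each transaction carries the result the live system gave.
recorded = [{"id": i, "amount": i * 10,
             "live_result": live_system({"amount": i * 10})}
            for i in range(1, 4)]
print(replay_and_compare(recorded))  # [] -> mirror matches live, Stage 1 passes
```

An empty mismatch list corresponds to the verification goal of Stage 1: the enhanced product performs the existing functionality without affecting users.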
Beta Testing
Sending the product that is under test to the customers and receiving the feedback is called beta testing.
During the entire duration of beta testing, there are various activities that are planned and executed according to a specific schedule. This is called a beta program.
Some of the activities involved in the beta program are as follows:
- Collecting the list of customers and their beta testing requirements along with their expectations on the product.
- Working out a beta program schedule and informing the customers.
- Sending some documents for reading in advance and training the customer on product usage.
- Testing the product to ensure it meets 'beta testing entry criteria'.
- Sending the beta product to the customers and enabling them to carry out their own testing.
- Collecting the feedback periodically from the customers and prioritizing the defects for fixing.
- Responding to customer feedback with product fixes or documentation changes and closing the communication loop with the customers in a timely fashion.
- Analyzing and concluding whether the beta program met the exit criteria.
- Communicating the progress and action items to customers and formally closing the beta program.
- Incorporating the appropriate changes in the product.
Certification, Standards and Testing for Compliance
- A product needs to be certified against popular hardware, operating systems, databases, and other infrastructure pieces. This is called certification testing.
- There are many standards for each technology area and the product may need to conform to those standards.
- This is very important as adhering to these standards makes the product interact easily with other products.
- Testing the product to ensure that these standards are properly implemented is called testing for standards. Once the product is tested for a set of standards, the supported standards are published in the release documentation.
- Testing the product for contractual, legal, and statutory compliance is one of the critical activities of the system testing team.
Non Functional Testing
- Non-functional testing is similar to functional testing but differs in complexity, the knowledge required, the effort needed, and the number of times the test cases are repeated.
- Because repeating non-functional test cases involves more time, effort, and resources, the process for non-functional testing has to be more robust than that for functional testing, to minimize the need for repetition.
- This is achieved by having more stringent entry/exit criteria, better planning, and by setting up the configuration with data population in advance for test execution.
Set Up The Configuration
- Setup is done in one of two ways: a simulated environment or a real-life customer environment.
- Setting up a scenario that is exactly real-life is difficult.
- Simulated setup is used for non-functional testing where actual configuration is difficult to get.
- In order to create a “near real-life” environment, details of customer's hardware setup, deployment information and test data are collected in advance.
Come up with Entry/Exit Criteria
Entry and exit criteria are decided for various test cases based on parameters like Maximum limits, response time, throughput, latency, failures per iteration, failures per test duration, stressing the system beyond limits and so on.
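Such criteria can be encoded as explicit thresholds that each test run is checked against. A minimal sketch; the parameter names and threshold values below are invented examples, not from the article:

```python
# Hypothetical sketch: checking a non-functional test run against exit
# criteria such as maximum response time, minimum throughput, and
# allowed failures per iteration.

EXIT_CRITERIA = {
    "max_response_time_ms": 200,     # illustrative threshold
    "min_throughput_tps": 500,       # illustrative threshold
    "max_failures_per_iteration": 0,
}

def meets_exit_criteria(measured: dict) -> bool:
    """Return True only if every measured value satisfies its criterion."""
    return (measured["response_time_ms"] <= EXIT_CRITERIA["max_response_time_ms"]
            and measured["throughput_tps"] >= EXIT_CRITERIA["min_throughput_tps"]
            and measured["failures"] <= EXIT_CRITERIA["max_failures_per_iteration"])

run = {"response_time_ms": 150, "throughput_tps": 620, "failures": 0}
print(meets_exit_criteria(run))  # True
```

Making the criteria explicit up front is what allows a non-functional run to be declared complete without repetition.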
Balancing Key Resources
There are four key resources: CPU, disk, memory, and network. They need to be judiciously balanced to enhance the quality factors of the product.
Basic assumptions that can be made about resources and configuration:
- The CPU can be fully utilized as long as it can be freed when a high priority job comes in.
- The available memory can be completely used by the product as long as the memory is relinquished when another job requires memory.
- The cost of adding CPU or memory is not as high as it used to be.
- The product can generate many network packets as long as network bandwidth and latency are available and do not cost much.
- More disk space or the complete I/O bandwidth can be used by the product as long as they are available. While disk costs are getting cheaper, I/O bandwidth is not.
- The customer gets the maximum ROI only if the resources such as CPU, disk, memory, and network are optimally used. Software has to be designed intelligently for optimal performance.
- Graceful degradation in non-functional aspects can be expected when resources in the machine are also utilized for different activities in the server.
- Predictable variations in performance or scalability are acceptable for different configurations of the same product.
- Variation in performance and scalability is acceptable when some parameters are tuned, as long as we know the impact of adjusting each of those tunable parameters.
- The product can behave differently for non-functional factors for different configurations such as low-end & high-end servers as long as they support return on investment. This in fact motivates the customers to upgrade their resources.
- Once such sample assumptions are validated by the development team and customers, then non-functional testing is conducted.
Scalability Testing
- To find out the maximum capability of the product parameters.
- High resources required.
- A high-end configuration is selected and the scalability parameter is increased step by step to reach the maximum capability.
- When the customer's requirements exceed what the design/architecture can provide, scalability testing is suspended, the design is reworked, and scalability testing is resumed to check the scalability parameters.
- Having a highly scalable system that considers the future requirements of the customer helps a product to have a long lifetime.
- Failures during scalability test include the system not responding, system crashing etc.
- Scalability tests help in identifying the major bottlenecks in a product.
- Sometimes the underlying infrastructure such as the operating system or technology can also become a bottleneck. In that case, the product organization is expected to work with the OS vendor to resolve the issues.
- Scalability tests are performed on different configurations to check the product's behavior. For each configuration, data are collected and analyzed.
- The demand of resources tends to grow exponentially when the scalability parameter is increased.
- The scalability reaches a saturation point beyond which it cannot be improved. This is called the maximum capability of a scalability parameter. Even though resources may be available, product limitation may still not allow scalability.
- This is called a product bottleneck. Identification of such bottlenecks and removing them in the testing phase as early as possible is a basic requirement for resumption of scalability testing.
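The step-by-step increase described above can be sketched as a loop that raises the scalability parameter until the system stops coping, recording the last value handled successfully as the maximum capability. The stand-in product and its saturation point below are invented for illustration:

```python
# Hypothetical sketch of a scalability run: increase a parameter (e.g.
# concurrent connections) step by step until the system fails to respond,
# then report the last successful value as the maximum capability.

def system_handles(connections: int) -> bool:
    # Stand-in product under test: saturates at 800 concurrent connections.
    return connections <= 800

def find_max_capability(start: int, step: int, limit: int) -> int:
    max_ok = 0
    for n in range(start, limit + 1, step):
        if system_handles(n):
            max_ok = n       # last value the system handled successfully
        else:
            break            # saturation point reached; stop increasing
    return max_ok

print(find_max_capability(start=100, step=100, limit=2000))  # 800
```

In a real run, `system_handles` would submit actual load and collect resource data at each step for later analysis.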
Reliability Testing
Done to evaluate the product's ability to perform its required functions under stated conditions for a specified period of time or for a large number of iterations.
Product reliability is achieved by focusing on the following activities:
- Defined engineering processes.
- Review of work products at each stage.
- Change management procedures.
- Review of testing coverage.
- Ongoing monitoring of the product.
A reliability-tested product will have these characteristics:
- No errors or very few errors from repeated transactions.
- Zero downtime.
- Optimum utilization of resources.
- Consistent performance and response time of the product for repeated transactions for a specified time duration.
- No side-effects after the repeated transactions are executed.
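A reliability run can be sketched as repeating the same transaction many times and checking two of the characteristics above: a near-zero error rate and consistent response times. The transaction below is a stand-in with invented timing behavior:

```python
# Hypothetical sketch of a reliability run: execute one transaction many
# times, then check the error rate and the spread of response times.

import random

def run_transaction():
    # Stand-in for the operation under test: (success, response_time_ms).
    return True, 100 + random.uniform(-5, 5)

def reliability_run(iterations: int):
    failures, times = 0, []
    for _ in range(iterations):
        ok, rt = run_transaction()
        if not ok:
            failures += 1
        times.append(rt)
    error_rate = failures / iterations
    jitter = max(times) - min(times)   # spread in response time across the run
    return error_rate, jitter

error_rate, jitter = reliability_run(1000)
print(error_rate == 0.0, jitter < 20)  # True True
```

A bounded jitter over many iterations corresponds to "consistent performance and response time for repeated transactions" in the list above.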
Stress Testing
- Stress testing is done to evaluate a system beyond the limits of its specified requirements or resources, to ensure that the system does not break.
- Done to find out if the product's behavior degrades under extreme conditions and when it is denied the necessary resources, like insufficient memory, inadequate hardware etc.
- The product is over-loaded deliberately to simulate the resource crunch and to find out its behavior.
- The system is expected to degrade gracefully as the load increases, but it is not expected to crash at any point during stress testing.
- The process, data collection, and analysis required for stress testing are very similar to those of reliability testing.
- The difference lies in the way the tests are run.
- Reliability testing is performed by keeping the load constant until the test case is completed; the load is increased only in the next iteration of the test case.
- In stress testing, the load is generally increased through various means, such as increasing the number of clients, users, and transactions, until the resources are completely utilized and beyond.
- When the load keeps on increasing, the product reaches a stress point when some of the transactions start failing due to resources not being available.
- The failure rate may go up beyond this point.
- To continue the stress testing, the load is slightly reduced below this stress point to see whether the product recovers and whether the failure rate decreases appropriately.
- This exercise of increasing/decreasing the load is performed two or three times to check for consistency in behavior and expectations.
- The time required for the product to recover from those failures is represented by MTTR (mean time to recover).
Guidelines to select the tests for stress testing:
- Repetitive tests - Executing repeated tests ensures that at all times the code works as expected.
- Concurrency - Concurrent tests ensure that the code is exercised in multiple paths and simultaneously.
- Magnitude - This refers to the amount of load to be applied to the product to stress the system.
- Random variation - Stress testing depends on increasing/decreasing variable load.
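The increase/decrease cycle described above can be sketched as a loop that raises the load until failures appear (the stress point), then backs off slightly and checks that the failure rate recovers. The stand-in failure model below is invented for illustration:

```python
# Hypothetical sketch of one stress cycle: increase load until transactions
# start failing, then reduce the load below the stress point and verify
# that the product recovers.

def failure_rate_at(load: int) -> float:
    # Stand-in product: failures appear once load exceeds 1000 txn/sec.
    return 0.0 if load <= 1000 else (load - 1000) / load

def stress_cycle(start: int, step: int):
    load = start
    while failure_rate_at(load) == 0.0:
        load += step                     # keep increasing the load
    stress_point = load                  # first load level with failures
    reduced = stress_point - step        # back off below the stress point
    recovered = failure_rate_at(reduced) == 0.0
    return stress_point, recovered

stress_point, recovered = stress_cycle(start=200, step=200)
print(stress_point, recovered)  # 1200 True
```

Repeating this cycle two or three times, as the article suggests, checks that the stress point and the recovery behavior are consistent; timing the recovery gives the MTTR.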
Interoperability Testing
Interoperability testing is done to ensure that two or more products can exchange information, use that information, and work properly together.
Guidelines that help in improving interoperability:
- Consistency of information flow across systems - When an input is provided to the product, it should be understood consistently by all systems.
- Changes to data representation as per the system requirements - When two different systems are integrated to provide a response to the user, data sent from the first system in a particular format must be modified or adjusted to suit the next system's requirement.
- Correlated interchange of messages and receiving appropriate responses - When one system sends an input in the form of a message, the next system is in the waiting mode or listening mode to receive the input.
- Communication and messages - When a message is passed from system A to system B and the message is lost or gets garbled, the product should be tested to check how it responds, for example by asking the user to wait until the connection is recovered.
- Meeting quality factors - When two or more products are put together, there is an additional requirement of information exchange between them.
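The "changes to data representation" guideline can be sketched as an adapter: data leaving system A in one format is converted to the format system B expects before being passed on. The field names and formats below are invented examples:

```python
# Hypothetical sketch: adapting system A's output to the representation
# system B requires, as described in the interoperability guidelines.

from datetime import date

def system_a_output():
    # Stand-in for system A: emits ISO date strings and amounts in cents.
    return {"customer": "C-42", "issued": "2024-03-01", "amount_cents": 12550}

def adapt_for_system_b(record: dict) -> dict:
    """Convert A's record to the shape B expects (date object, whole units)."""
    return {
        "customer_id": record["customer"],
        "issue_date": date.fromisoformat(record["issued"]),
        "amount": record["amount_cents"] / 100,
    }

adapted = adapt_for_system_b(system_a_output())
print(adapted["amount"], adapted["issue_date"].year)  # 125.5 2024
```

Interoperability test cases would feed representative inputs through such an adapter and verify that system B interprets every converted field consistently.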
Conclusion
You can read other articles written by me through these links.
Software Testing Series
1. Fundamental Principles of Software Testing
2. Software Development Life Cycle Models
3. Quality Assurance vs Quality Control
4. Testing Verification vs Testing Validation
5. Process & Life Cycle Models For Testing Phases
6. White Box Testing
7. Black Box Testing
8. Integration Testing
9. System Testing
10. Regression Testing
11. Performance Testing
12. Ad Hoc Testing
13. Checklist & Template For Test Plan & Management
14. Software Test Automation
Operating System Series
1. Introduction & Types of OS
2. Process States & Lifecycle
3. System Calls
4. User Mode vs Kernel Mode
5. CPU Process Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Disk Management & Scheduling
10. File System in OS
11. Protection & Security
System Design Series
Introduction To Parallel Computing
Deep Dive Into Virtualization
Insights Into Distributed Computing
Cloud Computing Series
1. Cloud Service Models
2. Cloud Deployment Models
3. Cloud Security
4. Cloud Architecture
5. Cloud Storage
6. Networking In The Cloud
7. Cloud Cost Management
8. DevOps In Cloud & CI/CD
9. Serverless Computing
10. Container Orchestration
11. Cloud Migration
12. Cloud Monitoring & Management
13. Edge Computing In Cloud
14. Machine Learning In Cloud
Computer Networking Series
1. Computer Networking Fundamentals
2. OSI Model
3. TCP/IP Model : Application Layer
4. TCP/IP Model : Transport Layer
5. TCP/IP Model : Network Layer
6. TCP/IP Model : Data Link Layer
Version Control Series
1. Complete Guide to Git Commands
2. Create & Merge Pull Requests
3. Making Open Source Contributions
Linux
Complete Guide to Linux Commands
Thanks For Reading! 💙
Garvit Singh