Ad Hoc Testing

Software Testing Part 12

What is Ad Hoc Testing?

  • Testing done without using any formal testing technique and without any formal planning is called ad hoc testing.
  • It is done to explore the undiscovered areas in the product by using intuition, previous experience in working with the product, expert knowledge of the platform or technology, and experience of testing a similar product.
  • Ad hoc testing does not make use of any of the test case design techniques like equivalence partitioning, boundary value analysis, and so on.
  • Ad hoc testing is done as a confidence measure just before the release, to ensure that no areas were missed during testing.
  • The test cases are not documented.
  • Ad hoc testing can be done in all phases of testing.

Drawbacks of Ad Hoc Testing & Possible Resolutions

  • Drawback: Difficult to ensure that the lessons learned in ad hoc testing are used in the future.
    Resolution: Document ad hoc tests after test completion.
  • Drawback: A large number of defects are found in ad hoc testing.
    Resolution: Schedule a meeting to discuss defect impacts. Improve the test cases for planned testing.
  • Drawback: Lack of comfort on the coverage of ad hoc testing.
    Resolution: When producing test reports, combine planned tests and ad hoc tests. Plan for additional planned and ad hoc test cycles.
  • Drawback: Difficult to track the exact steps.
    Resolution: Write detailed defect reports in a step-by-step manner. Document ad hoc tests after test execution.
  • Drawback: Lack of data for metrics analysis.
    Resolution: Plan the metrics collection for both planned tests and ad hoc tests.

Ad Hoc Testing Techniques

Buddy Testing

  • Uses the “buddy system” where two team members are identified as buddies.
  • The buddies mutually help each other, with a common goal of identifying defects early and correcting them.
  • A developer and a tester usually become buddies.
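As a minimal sketch of how a buddy pair might work in practice, the hypothetical snippet below shows a developer's new function being checked by the tester buddy's quick unit tests (pytest style) before check-in. The function, its discount rule, and the test names are illustrative assumptions, not from the article.

```python
import pytest

# Developer's new code, shared with the tester buddy before check-in.
def apply_discount(price: float, coupon: str = "") -> float:
    """Apply a 10% discount for the hypothetical coupon 'SAVE10'."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if coupon == "SAVE10":
        return round(price * 0.9, 2)
    return price

# Tester buddy's quick checks, reviewed together with the developer.
def test_valid_coupon_applies_discount():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_coupon_leaves_price_unchanged():
    assert apply_discount(100.0, "XYZ") == 100.0

def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(-1.0, "SAVE10")
```

The point of the exercise is the early feedback loop between the two buddies, not the specific assertions.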

Pair Testing

  • Pair testing is testing done by two testers working simultaneously on the same machine to find defects in the product.
  • Two testers pair up to test a product's feature on the same machine.
  • The objective of this exercise is to maximize the exchange of ideas between the two testers.
  • When one person is executing the tests, the other person takes notes.
  • They can swap roles of “tester” and “scribe” during a session. They can mutually decide on the modus operandi.

Exploratory Testing

  • Exploratory testing is a technique used to find defects by exploring the product, covering more depth and breadth.
  • Unlike pure ad hoc testing, exploratory testing is carried out with specific objectives, tasks, and plans.

There are several ways to perform exploratory testing:

Guesses

  • Guesses are used to find the part of the program that is likely to have more errors.
  • Previous experience of working with a similar product, software, or technology helps in guessing.
  • This is because the tester has already faced similar testing situations with those products.
  • Tests derived from these guesses are run on the product to check for similar defects.
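A minimal sketch of testing from guesses (error guessing), assuming a hypothetical age-parsing function: the tests below target trouble spots a tester might guess from experience with similar input-handling code, such as surrounding whitespace, empty input, and out-of-range values.

```python
import pytest

# Hypothetical function under test; its rules are assumptions for the example.
def parse_age(text: str) -> int:
    """Parse a user-entered age; accept values 0-150 only."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Guessed trouble spots, based on defects seen in similar parsing code.
def test_leading_and_trailing_spaces_are_tolerated():
    assert parse_age("  42 ") == 42

def test_empty_string_is_rejected():
    with pytest.raises(ValueError):
        parse_age("")

def test_unrealistically_large_age_is_rejected():
    with pytest.raises(ValueError):
        parse_age("9999")
```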

Architecture Diagrams & Use Cases

  • Architecture diagrams depict the interactions and relationships between different components and modules.
  • Use cases give an insight into the product's usage from the end user's perspective.
  • A use case explains a set of business events, the input required, the people involved in those events, and the expected output.

Study of Past Defects

  • Studying the defects reported in previous releases helps in understanding the error-prone functionality/modules in a product development environment.

Error Handling

  • Error handling is the portion of the code that prints appropriate messages or takes appropriate actions in case of failures. Exploring these failure paths checks whether the messages and recovery actions are actually appropriate.
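As a hedged illustration, the sketch below shows one way to deliberately trigger a failure and check that the error-handling code produces a helpful message; the load_config function, ConfigError type, and file format are assumptions made up for the example.

```python
import json
import pytest

class ConfigError(Exception):
    """Hypothetical application-level error raised for configuration problems."""

def load_config(path: str) -> dict:
    """Error handling: translate low-level failures into clear user messages."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        raise ConfigError(f"Configuration file not found: {path}")
    except json.JSONDecodeError as exc:
        raise ConfigError(f"Configuration file is not valid JSON: {exc}")

# Exploratory check of the failure path: force a missing file and verify the message.
def test_missing_file_produces_helpful_message(tmp_path):
    missing = tmp_path / "no_such_file.json"
    with pytest.raises(ConfigError, match="not found"):
        load_config(str(missing))
```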

Discussions

  • Exploration may be planned based on the understanding of the system during project discussions or meetings.
  • Information can be picked up during these meetings regarding implementation of different requirements for the product.

Questionnaires & Checklists

  • Questions like “what, when, how, who and why” can provide leads to explore areas in the product.

Iterative Testing

  • Iterative testing aims at testing the product for all requirements, irrespective of the phase they belong to in the spiral model.
  • Iterative testing requires repetitive testing.
  • Developers create unit test cases to ensure that the program developed goes through complete testing.
  • Unit test cases are also generated from a black box perspective to test the product more completely.
  • After each iteration, unit test cases are added, edited, or deleted to keep up with the revised requirement for the current phase.
  • Regression tests may be repeated at least every alternate iteration so that the current functionality is preserved.
  • Automation helps in iterative testing.
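A minimal sketch of how automated tests might accumulate across iterations, assuming a hypothetical Cart class: the iteration 1 tests are kept as the regression suite and re-run unchanged, while a new test is added when a later iteration brings a new requirement.

```python
# Hypothetical code under test; the class and its behaviour are assumptions.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return round(sum(price for _, price in self.items), 2)

# Iteration 1 tests: preserved as the regression suite for every later iteration.
def test_total_of_added_items():
    cart = Cart()
    cart.add("pen", 1.50)
    cart.add("book", 10.00)
    assert cart.total() == 11.50

# Iteration 2 test: added when the "empty cart" requirement arrived.
def test_empty_cart_total_is_zero():
    assert Cart().total() == 0
```

Because the whole suite is automated, re-running it every (or every alternate) iteration costs little and confirms that existing functionality is preserved.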

Agile & Extreme Testing

  • Agile and extreme (XP) models take the processes to the extreme to ensure that customer requirements are met in a timely manner.
  • The agile and XP methodologies emphasize the involvement of the entire team, and their interactions with each other, to produce workable software that satisfies a given set of features.
  • Software is delivered as small releases, with features being introduced in increments.
  • Extreme programming and testing make frequent releases in a controlled way, with customers involved throughout.

Activities in XP Work Flow

  1. Develop user stories.
  2. Prepare acceptance tests.
  3. Code.
  4. Test.
  5. Refactor.
  6. Automate.
  7. Deliver.

The rules that are followed in extreme programming and testing are as follows:

  1. Cross Boundaries - Developers and testers cross boundaries to perform various roles.
  2. Make Incremental Changes - Both the product and the process evolve incrementally.
  3. Travel Light - Keep the least possible overhead for development and testing.
  4. Communicate - More focus on communication.
  5. Write Tests Before Code - Unit tests and acceptance tests are written before the coding and testing activities, respectively. All unit tests should pass 100% of the time. Write code from test cases (see the test-first sketch after this list).
  6. Make Frequent Small Releases.
  7. Involve Customers All The Time.
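To make the "write tests before code" rule concrete, here is a small test-first sketch in pytest style; the shipping-fee story, the threshold, and the function name are hypothetical and only illustrate the order of work: a failing test first, then the simplest code that makes it pass.

```python
# Step 1: tests written before any implementation exists (they fail at first).
def test_orders_over_50_ship_free():
    assert shipping_fee(order_total=60.0) == 0.0

def test_small_orders_pay_flat_fee():
    assert shipping_fee(order_total=20.0) == 5.0

# Step 2: the simplest implementation that makes both tests pass.
def shipping_fee(order_total: float) -> float:
    return 0.0 if order_total >= 50.0 else 5.0
```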

Defect Seeding

  • Defect seeding is a method of intentionally introducing defects into a product to check the rate of its detection and residual defects.
  • Also known as error seeding or bebugging.
  • Usually one group of members in the project injects the defects while another group tests to remove them.
  • The purpose of this exercise is that, while finding the known seeded defects, defects that were not seeded may also be uncovered.
  • Defects that are seeded are made similar to real defects, so they are not very obvious or easy to detect.
  • Total latent defects = (Defects seeded / Seeded defects found) × Original defects found. Latent defects are the defects that are yet to be found (a worked example follows this list).
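Plugging hypothetical numbers into the formula above: suppose 20 defects are seeded, testing finds 16 of them, and 40 original (real) defects are found.

```python
# Worked example of the seeding formula with hypothetical numbers.
defects_seeded = 20
seeded_defects_found = 16
original_defects_found = 40

total_latent = (defects_seeded / seeded_defects_found) * original_defects_found
print(total_latent)  # 50.0 -> by this estimate, roughly 10 real defects remain undetected
```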

Precautions while defect seeding:

  1. Care should be taken during the defect seeding process to ensure that all the seeded defects are removed before the release of the product.
  2. The code should be written in such a way that the errors introduced can be identified easily. A minimum number of lines should be added to seed defects so that the removal effort is reduced.
  3. It is necessary to estimate the efforts required to clean up the seeded defects along with the effort for identification. Effort may also be needed to fix the real defects found due to the injection of some defects.

Real Life Scenarios & Suitable Ad Hoc Techniques

  • Scenario: Randomly test the product after all planned test cases are done.
    Most effective technique: Monkey Testing
  • Scenario: Capture programmatic errors early, with developers and testers working together.
    Most effective technique: Buddy Testing
  • Scenario: Test a new product, domain, or technology.
    Most effective technique: Exploratory Testing
  • Scenario: Leverage the experience of senior testers and exploit the ideas of newcomers.
    Most effective technique: Pair Testing
  • Scenario: Deal with changing requirements.
    Most effective technique: Iterative Testing
  • Scenario: Make frequent releases with customer involvement in product development.
    Most effective technique: Agile/Extreme Testing

Conclusion

You can read other articles written by me through these links.

Software Testing Series
1. Fundamental Principles of Software Testing
2. Software Development Life Cycle Models
3. Quality Assurance vs Quality Control
4. Testing Verification vs Testing Validation
5. Process & Life Cycle Models For Testing Phases
6. White Box Testing
7. Black Box Testing
8. Integration Testing
9. System Testing
10. Regression Testing
11. Performance Testing
12. Ad Hoc Testing
13. Checklist & Template For Test Plan & Management
14. Software Test Automation

Operating System Series
1. Introduction & Types of OS
2. Process States & Lifecycle
3. System Calls
4. User Mode vs Kernel Mode
5. CPU Process Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Disk Management & Scheduling
10. File System in OS
11. Protection & Security

System Design Series
Introduction To Parallel Computing
Deep Dive Into Virtualization
Insights Into Distributed Computing

Cloud Computing Series
1. Cloud Service Models
2. Cloud Deployment Models
3. Cloud Security
4. Cloud Architecture
5. Cloud Storage
6. Networking In The Cloud
7. Cloud Cost Management
8. DevOps In Cloud & CI/CD
9. Serverless Computing
10. Container Orchestration
11. Cloud Migration
12. Cloud Monitoring & Management
13. Edge Computing In Cloud
14. Machine Learning In Cloud

Computer Networking Series
1. Computer Networking Fundamentals
2. OSI Model
3. TCP/IP Model : Application Layer
4. TCP/IP Model : Transport Layer
5. TCP/IP Model : Network Layer
6. TCP/IP Model : Data Link Layer

Version Control Series
1. Complete Guide to Git Commands
2. Create & Merge Pull Requests
3. Making Open Source Contributions

Linux
Complete Guide to Linux Commands

Thanks For Reading! 💙
Garvit Singh