A common misconception in software testing is that ad hoc QA amounts to a tester randomly clicking around a website or application in the hope of finding a defect. This could not be further from the truth. Ad hoc testing may be the flip side of pure test-case-driven QA, but it is just as important to the success of any quality assurance engagement. Test-case QA is regimented, closely following each step of a test plan with little deviation; ad hoc testing is exploratory, using the test plan more as a guideline.
Consider a typical ad hoc testing approach. The tester begins by addressing any areas or features of concern the client has mentioned, then works outward from there. Certain application functionality and web elements tend to be more brittle and bug-prone: file upload features and text entry fields, for example, can be gold mines for issues. Site- or app-specific tabs and buttons can give rise to race conditions, issues that can be forced to the surface by quickly clicking from one feature to another.
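The rapid-click probe can be illustrated with a toy model. Everything here is invented for illustration (the `App` class, its methods, and the error it raises); a real ad hoc session would drive an actual browser, for example via Selenium or Playwright, rather than a simulation.

```python
# Hypothetical sketch of the "rapid clicking" race-condition probe described
# above. The App class is a toy stateful UI invented for illustration: tab
# navigation takes one "tick" to settle, and firing another action before it
# settles reproduces the kind of race an ad hoc tester hunts for.

class App:
    """Toy UI model: navigation must settle before the next interaction."""

    def __init__(self):
        self.loading = False
        self.errors = []

    def click_tab(self, name):
        self.loading = True   # navigation starts; page is not settled yet

    def settle(self):
        self.loading = False  # the wait a patient, scripted pass performs

    def click_upload(self):
        if self.loading:
            # Clicking before the page settles surfaces the race.
            self.errors.append("upload fired during navigation")

# A test-case-driven pass waits between steps and finds nothing:
patient = App()
patient.click_tab("reports")
patient.settle()
patient.click_upload()
assert patient.errors == []

# The ad hoc pass clicks from feature to feature as fast as possible:
hasty = App()
hasty.click_tab("reports")
hasty.click_upload()          # no settle() in between
assert hasty.errors == ["upload fired during navigation"]
```

The point of the sketch is the contrast between the two passes: the scripted sequence never exposes the timing bug, while the impatient one does on the first try.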
Ad hoc testing is far more than simple “button mashing”. It involves figuring out which elements to hit, in which ways, and on which OS/browser or device configurations. Good ad hoc testing skills take time to develop: doing it efficiently and reporting the results clearly and concisely requires experience. A test lab that can complement test-case-driven testing with effective ad hoc testing is essential to ensuring the quality of your software product.
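The "which elements, in which ways, on which configurations" framing above can be sketched as a simple cross-product of charters to prioritize. The element, action, and platform lists below are invented examples; in practice they would come from the application under test and the client's support matrix.

```python
# Minimal sketch of planning an ad hoc session: enumerate combinations of
# fragile elements, aggressive actions, and target configurations, then let
# the tester prioritize rather than execute them blindly. All list contents
# are hypothetical examples.
from itertools import product

elements = ["file upload", "text entry field", "navigation tab"]
actions = ["rapid repeated clicks", "boundary input", "interrupt mid-operation"]
platforms = ["Windows/Chrome", "macOS/Safari", "Android/Chrome"]

charters = [
    f"Probe the {element} with {action} on {platform}"
    for element, action, platform in product(elements, actions, platforms)
]

print(len(charters))  # 3 x 3 x 3 = 27 candidate charters
print(charters[0])    # "Probe the file upload with rapid repeated clicks on Windows/Chrome"
```

Even this naive enumeration makes the skill visible: 27 charters from three short lists is already too many to run exhaustively, so the experienced tester's judgment about which combinations are most bug-prone is what makes the session efficient.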