In what manner are defects frequently found when performing manual testing?
Defects are frequently found while executing prepared test scripts. At times, defects are discovered during exploratory testing. For companies fortunate enough to have subject matter experts (SMEs) on staff, the SMEs may be aware of scenarios that have historically caused issues. What is not desirable is to find defects during user acceptance testing (UAT); at that point in the testing process, any bugs should already have been unearthed and resolved. With COTS (commercial off-the-shelf) software, there are times when regression testing will not initially find any defects. If the regression tests are not updated periodically, they may become stagnant and not be worth the time to execute.
Are there real benefits of having a Test Manager?
Depending upon the job requirements, there can be numerous benefits to having a Test Manager. Keeping testers on track could be accomplished by relying on a project manager; however, a seasoned test manager brings additional insight into the desired thoroughness of testing that a PM may miss. There is also a certain level of knowledge and experience that comes with a veteran test manager. Upper management can (and should) delegate a certain amount of decision making to the test manager.
What are Quality Assurance decision makers looking for from a metrics perspective?
When it comes to manual testing, they are looking at a number of key indicators. The percentage of test scripts completed is one of them. What counts as "completed" can vary, depending on who the ultimate decision maker is. To some, a test script is truly completed only when it has passed in full. Others feel that a script is completed once all of its steps have been executed. Here is an example: a 100-step script has been fully executed, but two of the steps failed. The script is counted as completed in the overall testing metrics. A second iteration will be created once a fix has been put in place. Eventually the second iteration will be executed and all 100 steps will pass; this second iteration will be counted in the testing metrics as well.
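The two definitions of "completed" described above can be sketched in a few lines of code. This is a minimal illustration only; the script records, field layout, and function name are assumptions, not a real reporting tool.

```python
# Hypothetical script records: (name, total steps, steps executed, steps passed).
scripts = [
    ("Login regression", 100, 100, 98),   # fully executed, 2 steps failed
    ("Checkout flow",    150, 150, 150),  # executed and fully passed
    ("Reports export",    80,  40,  40),  # execution still in progress
]

def pct_completed(scripts, strict):
    """Percentage of scripts counted as 'completed'.

    strict=True  -> completed only if every step passed
    strict=False -> completed if every step was executed
    """
    done = 0
    for _, total, executed, passed in scripts:
        if (passed == total) if strict else (executed == total):
            done += 1
    return 100.0 * done / len(scripts)

# Under the "all steps executed" definition, 2 of 3 scripts are complete;
# under the "all steps passed" definition, only 1 of 3.
print(pct_completed(scripts, strict=False))
print(pct_completed(scripts, strict=True))
```

The gap between the two numbers is exactly why the test strategy should state which definition the reported metric uses.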
Should an integration script be fully executed a second time if only a few steps fail?
When it comes to integration testing, it can be a challenge to assemble the necessary resources to complete a full end-to-end integration script. Take a 200-step integration script: if only 2 steps were to fail and all other steps were successfully executed, additional testing might only be necessary to verify the failed steps. What should be taken into consideration are the fixes that have been put in place. Could the fixes resolve the issues but create other issues? Are the fixes linked to other modules? Perhaps the fix is a browser upgrade. Do the end users of the application being tested rely on other web-based applications that are limited to a particular browser?
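The considerations above amount to a simple decision rule. The sketch below illustrates one way to encode it; the fix records, field names, and thresholds are hypothetical assumptions for illustration, not a prescribed process.

```python
# Hypothetical fix records; the field names are illustrative assumptions.
fixes = [
    {"name": "browser upgrade",
     "touches_shared_module": False, "changes_environment": True},
    {"name": "patch step 57 validation",
     "touches_shared_module": False, "changes_environment": False},
]

def retest_scope(fixes):
    """Suggest a full end-to-end re-run when any fix touches a shared
    module or changes the environment (e.g. a browser upgrade), since
    such a fix may break steps that previously passed; otherwise
    suggest re-running only the failed steps."""
    for fix in fixes:
        if fix["touches_shared_module"] or fix["changes_environment"]:
            return "full"
    return "failed-steps-only"

# The browser upgrade changes the environment, so a full re-run is suggested.
print(retest_scope(fixes))
```

In practice this judgment call belongs to the test manager, but writing the criteria down keeps the decision consistent across releases.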
What should testers be able to sign off on?
This type of question should really be addressed in the overall testing strategy; if not there, then in the test plan. If a tester is outsourced, they may not have been given the authority to offer an opinion on the overall testing status of a particular module. There are various levels of testing roles, and these too should be listed, along with their corresponding responsibilities, in the company's overall testing strategy.
Should I schedule a consultation with Q-Assurance today?