There are many basic expectations from a test automation framework: code reusability, human-readable report generation, easy maintenance, and so on.
These expectations are widely discussed, with few exceptions, in every forum, blog, and meetup.
However, once we build a framework that meets all these expectations and start using it on real projects, many other problems and requirements emerge.
Solving these may not be must-have features, but having them goes a long way toward making test automation successful.
One such problem we often encounter: across multiple regression cycles, the same test cases fail again and again because of the same known bug. Automation developers then have to re-validate these scripts during failure analysis in every execution iteration. This can paint a misleading picture that the number of failures is not decreasing from one regression cycle to the next, and developers spend time debugging a failure only to find out it is caused by an already reported bug, which inflates the time needed to analyze and report test results.
When the number of automated tests and failures is small, this may not look like a problem. But as the suite grows, say to 2,000 automated test cases, it becomes a real obstacle to analyzing failures and reporting bugs.
What can we do to improve?
We can integrate the automation framework with the bug management system we are using, and tag a bug ID to each affected automated test case. Tagging can be done through several mechanisms:
- In TestNG, create groups named after bug IDs and include or exclude those groups at execution time after checking the bug status.
- Keep a test-case-to-bug-ID map in a properties, XML, or JSON file. Before executing a test, have the framework look up the bug ID for that test in the mapping file, if any, and check the bug's status: if the bug is in New, Open, or Reopened status, skip the execution; if it is in Resolved, Verified, or Closed status, proceed with execution.
- Use custom annotations to attach the bug ID directly to the test method.
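The mapping-file approach can be sketched in plain Java as below. The test names, bug IDs, and the `shouldSkip` helper are all hypothetical; in a real framework the map would be loaded from a properties/XML/JSON file and the statuses fetched from the bug management system's API rather than hard-coded.

```java
import java.util.HashMap;
import java.util.Map;

public class BugAwareRunner {
    // Hypothetical test-case -> bug-ID map; normally loaded from a
    // properties, XML, or JSON file kept alongside the suite
    static final Map<String, String> TEST_TO_BUG = new HashMap<>();
    static {
        TEST_TO_BUG.put("testCheckoutDiscount", "BUG-1042");
    }

    // Decide whether to skip a test: skip only when it is tagged with a
    // bug whose current status is New, Open, or Reopened
    static boolean shouldSkip(String testName, Map<String, String> bugStatus) {
        String bugId = TEST_TO_BUG.get(testName);
        if (bugId == null) {
            return false; // no known bug tagged to this test
        }
        String status = bugStatus.getOrDefault(bugId, "Unknown");
        return status.equals("New") || status.equals("Open") || status.equals("Reopened");
    }

    public static void main(String[] args) {
        // Stubbed bug-tracker response; in practice this would come from
        // the bug management system's REST API
        Map<String, String> bugStatus = new HashMap<>();
        bugStatus.put("BUG-1042", "Open");

        System.out.println(shouldSkip("testCheckoutDiscount", bugStatus)); // true -> skip
        System.out.println(shouldSkip("testLogin", bugStatus));            // false -> run
    }
}
```

In a TestNG suite, the skip itself would typically be done by throwing `org.testng.SkipException` from a `@BeforeMethod` or a listener when `shouldSkip` returns true, so the test is reported as skipped rather than failed.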
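The custom-annotation option could look like the following sketch, using only the standard reflection API. The `@KnownBug` annotation name and the test method names are assumptions for illustration; a listener in the framework would read the annotation at runtime and apply the same status check before execution.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class KnownBugDemo {
    // Hypothetical custom annotation linking a test method to a bug ID
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface KnownBug {
        String id();
    }

    @KnownBug(id = "BUG-1042")
    void testCheckoutDiscount() { /* test body */ }

    void testLogin() { /* test body */ }

    // A framework listener would read the annotation via reflection and
    // decide whether to skip; here we just extract the tagged bug ID
    static String bugIdFor(String methodName) {
        try {
            Method m = KnownBugDemo.class.getDeclaredMethod(methodName);
            KnownBug kb = m.getAnnotation(KnownBug.class);
            return kb == null ? null : kb.id();
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(bugIdFor("testCheckoutDiscount")); // BUG-1042
        System.out.println(bugIdFor("testLogin"));            // null
    }
}
```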
By doing this, we avoid executing test cases that we already know will fail. The results are greener, new failures are easier to analyze, and the time needed to report issues to the respective stakeholders is reduced.