In many testing shops, counts of passed and failed ‘tests’ are the centerpiece of summary test reporting. But a ‘test’ result is just an aggregate of verification results, and reporting at that level obscures the actual situation.
Suppose, just for a very simple example, that we have 100 tests, each with 10 verification points. Suppose further that the report says that 5 tests failed.
What, exactly, does that mean? Well, actually, it doesn’t mean anything very exact.
At one extreme, it could mean that in the 5 tests, all 50 verifications failed. At the other it could mean that just one verification failed in each of the 5 tests — 5 failures in all.
So all we know is that the failure rate lies somewhere between 0.5% (5/1000) and 5% (50/1000). Pretty fuzzy, no?
That’s why I don’t report passed/failed ‘tests’; I report passed/failed verifications.
[Actually, I report passed/missed/failed verifications, where ‘missed’ means that the verification point in the test was not reached.]
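The contrast above can be sketched in a few lines. This is a hypothetical representation (the test names and data structures are invented for illustration): each test is simply a list of verification outcomes, and the example reproduces the scenario from the text, where 5 of 100 tests fail at their first verification point.

```python
from collections import Counter

# Hypothetical data: each test is a list of 10 verification outcomes,
# one of "passed", "failed", or "missed" (verification point not reached).
tests = [["passed"] * 10 for _ in range(95)]               # 95 fully passing tests
tests += [["failed"] + ["missed"] * 9 for _ in range(5)]   # 5 tests failing at the first point

# Test-level summary: a test "fails" if any of its verifications failed.
failed_tests = sum(1 for t in tests if "failed" in t)
print(f"failed tests: {failed_tests}/{len(tests)}")        # 5/100 — the fuzzy number

# Verification-level summary: count every individual verification outcome.
counts = Counter(v for t in tests for v in t)
total = sum(counts.values())
for outcome in ("passed", "missed", "failed"):
    print(f"{outcome}: {counts[outcome]}/{total} ({counts[outcome] / total:.1%})")
```

The test-level report says only “5 of 100 failed”; the verification-level report distinguishes 5 actual failures (0.5%) from 45 missed verifications, which is exactly the information the ‘tests failed’ count throws away.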