
A false positive is a test result that reports a failure even though the software actually works as intended. A false negative is the opposite: a test that passes even though a defect is present. Software testing and QA teams must be able to recognize these results in order to avoid them in the future.

Software Testing Analogy

Let’s begin with an analogy about software testing. Suppose for a moment that bugs are like medical conditions (no pun intended). The process we use to identify them resembles a differential diagnosis: we detect the harmful condition and offer a course of treatment. Yet, just like in the medical field, we are all familiar with situations where things can get complicated.

In software testing, one of the most challenging situations we can encounter involves a particular type of error: false positives and false negatives.

Understanding False Positives and False Negatives

False Positives

With false positive results, tests are marked as failed even though they should have passed and the software functions as it should. We report errors that don’t exist: the results tell us the software does not work as intended, yet it does.
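
As a hedged illustration (the function and the test below are hypothetical), a false positive often comes not from the code under test but from a brittle expectation inside the test itself:

```python
# Hypothetical illustration of a false positive: the code under test is
# correct, but a brittle expectation makes the test fail anyway.
import datetime


def format_invoice_date(date: datetime.date) -> str:
    # Code under test: formats a date as ISO 8601 (YYYY-MM-DD), as specified.
    return date.isoformat()


def test_format_invoice_date():
    # The test expects a locale-specific format the specification never
    # required, so it fails and reports a bug that does not exist.
    result = format_invoice_date(datetime.date(2023, 4, 1))
    assert result == "01/04/2023"  # fails: the actual value is "2023-04-01"
```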

From our experience, this type of error has an insidious impact. While it doesn’t affect the software itself, it tends to undermine the developers’ trust in the software delivery process.

Some may even begin to question the software testing company’s expertise. However, it’s usually unwise to penalize testers for false positives (or to base KPIs on them), because that only leads to an undesired situation: testers afraid to report issues for fear of backlash. Also, keep in mind that most false positives are related to unclear situations, e.g. missing documentation. As cliché as it might sound, it’s better to be safe than sorry.

False Negatives

With false negative results, tests are marked as passed even though they should have failed. We detect no problems at the moment of the test, yet they are present. The software continues to run with those glitches embedded, even though it shouldn’t.
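
As a hedged illustration (again with hypothetical code), a false negative typically hides behind an assertion that is too weak to catch the bug:

```python
# Hypothetical illustration of a false negative: the code under test is
# buggy, but the assertion is too weak to notice.


def apply_discount(price: float, percent: float) -> float:
    # Buggy code under test: the percentage is divided by 100 twice,
    # so the discount applied is far too small.
    return price - price * percent / 100 / 100


def test_apply_discount():
    # The assertion only checks that the result is not higher than the
    # original price, so it passes even though apply_discount(100, 10)
    # returns 99.9 instead of the expected 90.0.
    assert apply_discount(100.0, 10.0) <= 100.0
```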

In the best-case scenario, we detect them at a later stage of testing and fix them.

In a worse scenario, we notice them only after the software has been deployed.

In the worst-case scenario, the bugs remain in the application for an indeterminate amount of time.

The main problem with these errors is that they can hurt the business’s bottom line by “breaking” the software.

Avoiding False Positives and False Negatives

We think that one of the best ways of detecting false negatives is to deliberately insert errors into the software and verify whether the test cases discover them (an approach closely linked with mutation testing).
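
Here is a minimal sketch of that idea, using hypothetical functions rather than a real mutation testing tool: we hand-craft a “mutant” of the code under test and check whether the existing test notices the difference.

```python
# Minimal, hypothetical sketch of the idea behind mutation testing:
# deliberately "mutate" the code under test and check whether an existing
# test still fails. If the mutated version passes the test, we have likely
# found a false negative (a test that cannot detect that class of bug).


def is_adult(age: int) -> bool:
    # Original implementation.
    return age >= 18


def is_adult_mutant(age: int) -> bool:
    # Mutated implementation: the >= operator is replaced with >.
    return age > 18


def run_test(candidate) -> bool:
    # The existing test case, parameterized over the implementation.
    return candidate(21) is True and candidate(10) is False


if __name__ == "__main__":
    assert run_test(is_adult), "the test should pass on the original code"
    if run_test(is_adult_mutant):
        # The test missed the injected bug (the age == 18 boundary).
        print("Mutant survived: the test misses the boundary case")
    else:
        print("Mutant killed: the test detects the injected error")
```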

Some argue that reporting false positives is somewhat preferable to missing false negatives: while the first keeps things “internal”, the second has wider business implications, from bad software to unhappy end users.

We should keep in mind that both types of errors are by nature hard to detect. Their causes can vary from the way we approached the test, to the automation scripts we used, and even to test data integrity.

From our experience, having test case traceability in place works best to prevent both of them. Here are questions to consider when implementing test case traceability, to help figure out which test cases were most likely affected (see the sketch after this list).

  1. When was the first time the failure showed itself?
  2. Can we track it back in time?
  3. Was it linked with extra implementations?
  4. Did some software functionalities change?
  5. Does the test data look suspicious?
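
The sketch below is one hypothetical way to approach questions 3 and 4 programmatically, assuming each test case is tagged with the requirement IDs it covers (the IDs and test names are invented for illustration):

```python
# Hypothetical traceability sketch: each test case is linked to the
# requirement IDs it covers, so when a requirement (or the code behind it)
# changes we can list the test cases most likely affected.
from dataclasses import dataclass


@dataclass
class TracedTestCase:
    name: str
    requirements: set  # requirement IDs this test case traces back to


TRACEABILITY_MATRIX = [
    TracedTestCase("test_login_valid_credentials", {"REQ-AUTH-01"}),
    TracedTestCase("test_login_locked_account", {"REQ-AUTH-01", "REQ-AUTH-04"}),
    TracedTestCase("test_invoice_totals", {"REQ-BILL-02"}),
]


def affected_test_cases(changed_requirements: set) -> list:
    # Return the test cases tracing back to any requirement that changed.
    return [
        tc.name
        for tc in TRACEABILITY_MATRIX
        if tc.requirements & changed_requirements
    ]


if __name__ == "__main__":
    # Example: the latest release only touched the authentication requirement.
    print(affected_test_cases({"REQ-AUTH-01"}))
    # -> ['test_login_valid_credentials', 'test_login_locked_account']
```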

All things considered, we believe it all comes down to being responsible in software testing. It’s important to actually care about the tests and not just do a superficial track and report.

If you think you might be dealing with false positive or false negative errors in your software tests and need some guidance, drop us a line.