Testing from pseudoscience to science

A couple of years ago I started my doctoral studies in the area of software testing. I wanted to research “The differences in testing approaches across different organizational cultures”. Although I decided to postpone the degree because of my personal beliefs about the value of a doctoral diploma, applying the scientific method to software testing has been one of the biggest gains in my professional experience as a tester.

That was the moment when I learned about Exploratory Testing, Cem Kaner’s areas of scientific research, Parasuraman’s model of Types and Levels of Human Interaction with Automation (PDF source), and the challenge of running repeatable, measurable experiments when applying various software testing techniques.

But soon I found out that scientific research and industry practices share very little.

Although I highly appreciate the work of scientists and engineers, the management practices I’ve seen used to organize them are generally devoid of scientific rigor. I would call this pseudoscience.

We routinely green-light new projects on the basis of intuition more than facts. Any software testing effort should begin with a vision, and the steps that follow that vision are critical. I notice teams engaging in projects while selectively finding data that support their vision instead of exposing the software to true experiments, with no customer feedback or external accountability of any kind.

In other words, although we are well aware of our biased perceptions, we make little effort to apply scientific methods and become objective in our observations.

Anytime a team attempts to demonstrate cause and effect by placing highlights on a graph of gross metrics, it is engaging in pseudoscience. The cost of a defect over time and the test automation pyramid are examples of models used to demonstrate that something works, rather than models we try to demystify.

How do we know that the proposed cause and effect is true in general? We usually take the information for granted, without applying any critical thinking to it.

How do we know that it will fit our specific context? We don’t. Instead, we constantly justify our failures as lack of knowledge.

Learning is the key. As testers, we need to start working and thinking in iteration cycles: Build → Measure → Learn. Use the outcome of each iteration to produce validated learning, and feed that learning into your next cycle.

Based on validated learning, we can establish real facts about the context or situation and, from there, continue building models to validate the initial vision.
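To make the cycle concrete, here is a minimal sketch (my own illustration, not something from the original post or from Lean Startup tooling) of how a team might record each Build → Measure → Learn iteration; the `Experiment` fields and `run_cycle` helper are hypothetical names used only for illustration:

```python
from dataclasses import dataclass


@dataclass
class Experiment:
    """One Build -> Measure -> Learn cycle for a testing idea."""
    hypothesis: str      # what we expect to be true in our context
    measurement: str     # the concrete signal we will collect
    observed: str = ""   # what actually happened
    learning: str = ""   # validated learning carried into the next cycle


def run_cycle(previous_learning: str, hypothesis: str, measurement: str) -> Experiment:
    # Build: design the next experiment on top of what was already validated.
    experiment = Experiment(hypothesis=hypothesis, measurement=measurement)
    # Measure: in a real team this is where test results or metrics are collected.
    experiment.observed = f"(collected data for: {measurement})"
    # Learn: record what the data told us, not what we hoped it would say.
    experiment.learning = f"Refined '{previous_learning}' using {measurement}"
    return experiment


# Each cycle starts from the learning produced by the previous one.
learning = "initial vision: exploratory sessions find the riskiest defects first"
for hypothesis in [
    "sessions beat scripted runs on new features",
    "sessions need charters to stay focused",
]:
    cycle = run_cycle(learning, hypothesis, measurement="defects found per session")
    learning = cycle.learning
    print(cycle)
```

The point of keeping such a record is not the code itself but the discipline: every claim about the context is tied to a measurement, and only the validated learning moves forward into the next cycle.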

As software practitioners, we need to keep our distance from testing pseudoscience by making rational argument and empirical study part of our daily testing activities.

So why is this important?

As Huib Schoots very clearly suggests, we need to start learning in terms of efficiency and problem solving, instead of relying on templates, standards, or tricks.

By learning to think better, you are able to test better and improve yourself. Research and scientific studies are a good place to start.

Enjoy testing,

Andrei

Source of inspiration for this post: “The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses” by Eric Ries
