Understanding Feature Testing in an Interactive, Human-driven Environment

by Dmitry Kirsanov 14. November 2023 15:03

Software testers play an integral role in the software development cycle: their principal task is to ensure that the system's features work as specified. Whether you're developing new software or enhancing an existing product, your testing team will likely invest considerable time in feature verification.

However, feature testing isn't exclusive to in-house development. When adopting commercial off-the-shelf software, you're likely to undertake a good deal of feature-by-feature testing. Similarly, when software is developed iteratively, testers progressively verify each new feature as it reaches the deployed environment.

Before we delve deeper into the dynamics of feature testing, it's worth clarifying a few commonly used terms. I've called our topic 'feature testing', but 'feature' can refer to different aspects of the system and is often used in a narrower sense during development.

To illustrate, in a flight booking system the ability to search for flights by certain criteria might be considered a 'feature', even though searching is rarely the user's ultimate goal. The terms 'scenario', 'business scenario', and 'use case' are also common, and typically imply a larger scale of use than 'feature': 'booking a flight' would fall under use case or business scenario.

Irrespective of the semantics, this post uses 'feature confirmation testing' to cover all these aspects. The focus is on human-driven, interactive testing of the system's operational functionality.

This form of testing is generally driven by requirements. The tester begins with an understanding of how a feature or scenario is expected to operate and verifies that it functions as anticipated. This can include testing several alternate paths, representing the different ways a real user might engage with the system.
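To make this concrete, the flight-search example above can be sketched as a simple checklist of scenarios a tester might work through by hand. This is only an illustration: the feature name, the scenario list, and the reporting helper are hypothetical, not drawn from any real requirements document.

```python
# Hypothetical scenario checklist for a requirements-driven feature test.
# The feature ("search for flights") and its alternate paths are illustrative.

feature = "Search for flights"

scenarios = [
    "Search by origin and destination only",   # happy path
    "Search with a date range",                # alternate path
    "Search returning no matching flights",    # empty-result path
    "Search with an invalid airport code",     # error path
]

def report(feature, results):
    """Summarise pass/fail results per scenario for the team."""
    passed = sum(results.values())
    print(f"{feature}: {passed}/{len(results)} scenarios passed")
    for name, ok in results.items():
        print(f"  [{'PASS' if ok else 'FAIL'}] {name}")

# The tester records an outcome for each scenario after exercising it by hand.
results = {name: True for name in scenarios}
results["Search with an invalid airport code"] = False  # e.g. a bug was found
report(feature, results)
```

The point is not automation; the checklist merely captures the alternate paths the tester intends to walk through interactively.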

The preparatory phase involves identifying the scenarios that will demonstrate the feature's functionality. This approach isn't rigid; it is informed by the tester's ingenuity. In feature confirmation testing, the tester's principal objective is to determine whether a feature works, whether bugs exist, and whether the feature is ready for production. However, testers can report far more refined information than a simple verdict, which strengthens feature confirmation.
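One way to capture that richer information is to record each observation with more than a pass/fail flag. The sketch below is a minimal, assumed structure; the field names and severity levels are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A single piece of information a tester reports back to the team.

    Fields are illustrative; real teams will track different attributes.
    """
    scenario: str
    outcome: str            # e.g. "pass", "fail", "blocked"
    severity: str = "none"  # e.g. "none", "minor", "major", "critical"
    notes: str = ""         # observations beyond the bare verdict

findings = [
    Finding("Search by origin and destination only", "pass"),
    Finding("Search with an invalid airport code", "fail",
            severity="minor",
            notes="Error message shown, but the form loses its input."),
]

# The team or product owner, not the tester, decides what to do next.
blocking = [f for f in findings if f.severity in ("major", "critical")]
print(f"{len(findings)} findings, {len(blocking)} potentially blocking")
```

Recording severity and notes alongside the verdict is what turns the tester from a gatekeeper into an information broadcaster, as discussed below.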

Like all methods, feature-based testing has strengths and weaknesses that become more evident as we look at strategies for particular testing circumstances. Its main strength is the simplicity of deciding what to cover: what is new or altered? Testing begins with a list of functionalities, giving clear direction. Feature-based testing also serves as a learning tool, letting the tester study the system under test one feature at a time.

Yet a caveat exists. The requirements-driven nature of feature-based testing can limit a tester's exploratory instinct. The requirement documentation on which it is built can give testers tunnel vision, focusing them only on what's documented and causing them to miss variables the documentation never mentions. Some repetition can also occur, since the developers who built the feature were likely guided by the same requirement documents.

Testers are sometimes treated as 'gates': the final verdict on whether a feature goes into production. This can create an adversarial relationship and limit communication. The most effective testers instead act as information collectors and broadcasters, laying out their findings so the team or product owner can decide the course of action.

Despite these challenges, the role of feature confirmation testing is crucial in ensuring software works as intended, and its drawbacks can be mitigated through good testing strategies.
