Functional Test Automation, Part 2: The Subject, the Standard, and the Evidence

In my last post I wrote that the reality of automated functional testing has so far failed to live up to my expectations. In this post I’ll define what I mean by functional testing. What follows might not be the definition you’re familiar with, and I don’t mean to suggest that this is the only valid definition. It is certainly influenced by the industries I work with, where:

  • The subject of functional testing is a black box
  • The standard of functional testing is the set of functional requirements
  • The evidence of functional testing formally links test cases to the requirements they test

The Subject

All testing measures a subject against a standard. The subject of functional testing is that set of interfaces available to the product’s end users, and only those interfaces. For web applications, for example, the interface is the browser itself. Everything that sits behind the interface, including databases, web servers, third-party application programming interfaces (APIs) – anything your end users won’t interact with directly – is locked inside a “black box.” This black box criterion distinguishes functional testing from unit or integration testing, which can and should reach inside the box to test individual components.
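
To make the black-box criterion concrete, here is a minimal sketch of a browser-driven functional test in Python with Selenium. The URL, element IDs, and expected text are all hypothetical; the point is that the test touches only what the end user touches, and asserts only on what the end user can observe:

    # A minimal sketch of a black-box functional test driven through the
    # browser. The URL, element IDs, and expected text are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # hypothetical application URL

        # Interact only through what the end user sees: fields and buttons.
        driver.find_element(By.ID, "username").send_keys("test.user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()

        # Assert on observable behavior, never on database rows or log files.
        assert "Welcome" in driver.find_element(By.ID, "welcome-banner").text
    finally:
        driver.quit()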

Functional testing is not limited to graphical user interfaces, and the end user isn’t necessarily a human. If your product exposes a RESTful HTTP API or a file-drop endpoint for direct consumption, for example, those APIs are subjects for functional testing. As I’ll discuss in a later post, machine interfaces often occupy those hard-to-scratch places beyond the reach of many automated functional testing tools.
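
In the meantime, here is a sketch of what a functional test against such a machine interface might look like, again in Python. The endpoint and payload are hypothetical, and I’m assuming the requests library; the test exercises only the documented contract – status codes and response bodies – exactly as a consuming system would:

    # A sketch of a black-box functional test against a hypothetical
    # RESTful HTTP API, using the requests library.
    import requests

    BASE_URL = "https://api.example.com"  # hypothetical public endpoint

    def test_create_order_returns_confirmation():
        response = requests.post(
            f"{BASE_URL}/orders",
            json={"sku": "WIDGET-42", "quantity": 3},
            timeout=10,
        )
        # Assert only on the documented contract: status code and body.
        assert response.status_code == 201
        body = response.json()
        assert body["quantity"] == 3
        assert "order_id" in body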

The Standard

If end user interfaces are the subject of your functional tests, the standard by which you measure success is a set of documented functional requirements. A good solution passes all its functional tests, and good functional tests cover all the solution’s functional requirements. This criterion distinguishes functional tests from testing that follows other standards, such as source code design (unit tests), enterprise integration design (integration tests), or performance specifications (load and latency testing).

This standard does not imply that requirements be some ossified collection of “the system shall’s” trapped in a binary document somewhere on a shared drive, nor does it imply the use of a waterfall development methodology. At KSM we practice user story modeling, based on Mike Cohn’s book User Stories Applied: For Agile Software Development. Our requirements are acceptance tests associated with user stories in a Scrum release backlog, which we maintain in Atlassian Jira. They are living requirements built for change, but easy to snapshot when we need a stable baseline for testing.
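
One lightweight way to keep those living requirements and their tests in lockstep is to tag each automated test with the backlog key of the acceptance criterion it covers. Here is a sketch using pytest markers; the Jira keys and test names are hypothetical, and the “requirement” marker would need to be registered in pytest.ini:

    # A sketch of formally tagging tests with the requirement they cover.
    # Keys and test bodies are hypothetical; register the "requirement"
    # marker in pytest.ini so pytest does not warn about an unknown mark.
    import pytest

    @pytest.mark.requirement("PROJ-101")  # story: clerk can create an order
    def test_clerk_can_create_order():
        ...  # drive the UI or API and assert the acceptance criterion

    @pytest.mark.requirement("PROJ-102")  # story: confirmation number shown
    def test_confirmation_number_is_displayed():
        ...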

The Evidence

I’ve spent the bulk of my career delivering software solutions into regulated industries, including health care, pharmaceutical R&D and manufacturing, and electric utility operations and market automation. In this world (and especially in pharmaceutical R&D), functional testing is more than a best practice to ensure quality and long-term return on investment: it’s the ante that buys you a seat at the table. If you cannot prove that you did it, and did a thorough job of it at that, you don’t pass the vendor or project audit, and no one buys or implements your solution. Game over. There is no long term in which to return your customer’s investment, no matter how great your solution might actually be.

Proof, in this context, comprises evidence that formally links test cases to functional requirements. Test cases merely inspired by requirements, but not formally linked to them, won’t cut it. One commenter on my last post, someone who (like me) builds solutions for US 21 CFR Part 11 regulated industries, introduced the concept of traceability and described the set of reports they deliver to satisfy their quality auditors. Indeed, the most difficult challenges to automating functional testing lie in the generation of trace matrices, from requirement to test to result, in a format that auditors can understand and accept. Those challenges are compounded by the pain of integrating manual test results into that trace matrix.
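
As a sketch of the end product, here is one way to emit a simple requirement-to-test-to-result trace matrix as CSV. The requirement keys, test names, and results are hypothetical; in practice the rows would be harvested from your test runner, with manually executed tests merged in alongside the automated ones:

    # A sketch of generating a trace matrix (requirement -> test -> result)
    # as CSV, a format most auditors can open and review. All data below
    # is hypothetical.
    import csv

    results = [
        ("PROJ-101", "test_clerk_can_create_order", "PASS"),
        ("PROJ-102", "test_confirmation_number_is_displayed", "PASS"),
        ("PROJ-103", "manual_checkout_walkthrough", "FAIL"),  # manual test
    ]

    with open("trace_matrix.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Requirement", "Test Case", "Result"])
        writer.writerows(results)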

Automated testing is fast and regression-friendly, but it introduces complexity. Manual testing is slow and tedious, but it’s also simple and accessible. In my next post I’ll give two cheers for manual testing, in the hopes of recovering its advantages in our fix for functional test automation.
