If you’ve never played it before, a hand of manual testing misery poker plays out something like this:
“It took six of us eight weeks to plow through a three-and-a-half-foot stack of system test scripts.”
“That’s nothing. Our site acceptance testing alone took fifteen of us three months for a six-foot stack.”
“But were yours double-sided?”
“Then what took you so long?”
“Screenshots at every step.”
“Oh. I fold.”
We automate functional testing for a reason: the alternative is tedious, resource-intensive, and expensive. So why do test suites still comprise so many manual tests?
A grade school teacher once plunked bread, jelly, and peanut butter down on a table and asked my class how to make a peanut butter and jelly sandwich. She followed our instructions exactly, first by rubbing the jar of jelly across the bread because we didn’t tell her to open it. She slapped the jellied side of the bread face-down on the table to apply the peanut butter, which she did with her hands because we forgot to tell her to use a knife. Feigning ignorance of context, she approximated the response of an automaton.
Lesson learned: automatons are idiots. They respond only to very low-level commands. This is the fundamental problem with automating functional testing.
Our first instinct is often to build a better automaton. Continuing the analogy above, we replace brain, muscle, and nerve with microchip, servo, and sensor. We assemble machine-level instruction sets – programs – into reusable routines for grasping the jar, twisting the lid, and flipping the bread, then compose those routines into higher-order procedures like “spread the peanut butter” and, ultimately, “make a peanut butter and jelly sandwich.” We can crank out hundreds of sandwiches per hour, all perfect and perfectly consistent.
Then the requirements change: the customer wants sliced bananas in each sandwich. It will take weeks to build or buy equipment to peel and slice bananas, and more to program it. We learn the lesson again: automatons are idiots.
Fully automated functional tests are automatons. The limitations of automatons reveal, by contrast, two significant advantages of manual testing:
- Manual tests are written to be interpreted by humans. Humans understand context: they understand that to open a jar of peanut butter you must first remove it from the shelf. Humans learn quickly: it takes a minute to teach someone to peel and slice a banana. They understand instructions like “Choose study AGS-38693 from the ‘Study’ drop-down and click the ‘Submit’ button,” but shoot you dirty looks if you give them the XPath address of those controls on a web page. (Trust me on this one.)
- Manual tests don’t require specialized expertise to write. Manual test authors need only speak the language of the subject (the user interface, treated as a black box) and the standard (the functional requirements) of functional testing. Automated tests, on the other hand, require additional expertise in the low-level language of an automaton. The semantic distance between the language of specification and the language of implementation of a functional test is far greater for automated tests than for manual ones. This distance also explains the puzzled look on an auditor’s face when you show them your Java or C# test scripts. (Trust me on this one, too.)
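To make that semantic distance concrete, here is the drop-down step above rendered in the automaton’s language. The `StubDriver` class is a hypothetical stand-in for a real UI-automation API (think Selenium) that simply records the low-level actions it is told to take, so the sketch runs on its own:

```python
class StubDriver:
    """Pretend UI driver that records the low-level actions taken."""
    def __init__(self):
        self.actions = []
    def find_by_xpath(self, xpath):
        self.actions.append(f"find {xpath}")
        return self
    def select(self, value):
        self.actions.append(f"select {value}")
    def click(self):
        self.actions.append("click")

# The manual test step, as a human reads it:
#   "Choose study AGS-38693 from the 'Study' drop-down and click 'Submit'."
#
# The same step as the automaton hears it (the XPaths are illustrative):
def submit_study(driver, study_id):
    driver.find_by_xpath("//select[@id='study']").select(study_id)
    driver.find_by_xpath("//button[@id='submit']").click()

driver = StubDriver()
submit_study(driver, "AGS-38693")
```

A tester absorbs the manual phrasing in a second; the XPath version demands a second, entirely different vocabulary. That gap is the distance the rest of this series is about closing.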
A fresh approach to automating functional testing must retain these advantages of manual testing. It must shrink the semantic distance between the specification of tests and the languages in which tests are implemented (see the next post in the series). However, this is not enough; there will always be bananas to peel and slice, requirements that don’t lend themselves to easy automation. A fresh approach must also support seamless integration of manual test steps into otherwise automated processes.
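One minimal sketch of what that integration might look like: a runner that walks a suite of steps, executing the scripted ones and pausing to ask a human whenever automation runs out. The `automated`/`manual` constructors and the `ask_human` callback are hypothetical names, not any particular tool’s API; injecting the callback keeps the sketch self-contained where a real implementation would prompt a tester:

```python
def automated(name, action):
    # A scripted check: the action callable returns True on pass.
    return (name, lambda ask: action())

def manual(name, instruction):
    # A human check: the runner pauses and records the tester's verdict.
    return (name, lambda ask: ask(instruction))

def run(steps, ask_human):
    """Execute every step in order, whether scripted or human-performed."""
    return [(name, "pass" if step(ask_human) else "fail")
            for name, step in steps]

suite = [
    automated("open study form", lambda: True),
    manual("verify printout legibility",
           "Print the form and confirm the barcode scans."),   # the banana
    automated("submit form", lambda: True),
]

# Simulate a tester approving every manual prompt.
results = run(suite, ask_human=lambda instruction: True)
```

The point is not this toy runner but the shape of it: manual steps are first-class citizens of the suite, reported in the same results alongside their automated neighbors.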