Archive for the ‘Quality Assurance’ Category

Time Consuming Paper Tests

In March, we exhibited at the 33rd SQA Annual Meeting and Quality College and spoke with many people from regulated industries who tested web applications using paper-based manual tests.  Almost uniformly, people expressed frustration with paper-based testing, especially the amount of time it takes.  One woman in particular had just finished testing a web application with paper tests that took her five months to execute.  In our experience, it is common even for teams of five to ten people to take weeks or months to finish a suite of paper tests.

Paper-based web application testing takes too long to implement, and our desire to speed up this process is one of the factors that led us to create Toffee.  In our prior blog post, Manual Testing: Toffee is about Orchestration not only Automation, we explained how Toffee improves the speed and accuracy of manual testing by eliminating paper and promoting heads-up testing.  With Toffee, testers no longer need to switch contexts between the application under test, paper test scripts, and a text document for tracking screenshots.  Now, we want to share an experiment we performed to measure the time efficiency gains of Toffee tests versus traditional paper-based tests.

As part of our experiment, we created two sets of tests for the exact same functionality of a food ordering web application.  One set was a pure paper-based test consisting of 47 manual steps.  The second set was Toffee-based: 7 of the 47 steps were automated, and the rest were still manual.

Twenty different people executed both sets of tests, with half doing the paper-based test first and half starting with the Toffee-based test.  We timed how long it took each person to complete each test and found that the Toffee test took 31% less time to complete.  So, even though 85% of the Toffee test required the exact same interactions with the application under test as the paper test, people were able to complete the Toffee test in about two-thirds of the time it took to perform the paper test.

This is a huge gain in time efficiency, especially when tests take weeks or months to complete, and it is just one of the many reasons to use Toffee to orchestrate your testing.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

Testing Multi-User Interaction

How do you properly test the interaction between multiple users within a web application?  What if you need to test that user B receives a notification immediately after user A performs some action?  How do you ensure that user D’s access is cut off immediately when user C revokes it?  Testing this type of functionality can be difficult because it is not as simple as opening the application under test, typing in fields, and clicking buttons.  To ensure that the interaction functions properly and promptly, you often need to concurrently open, interact with, and monitor at least two unique instances of the application.

Consider a chat app that allows two users to instantly send messages to each other through your web interface.  To test this application, you must log two different users into their own instance of the app at the same time.  To test that the messages are instantaneous, you need to send a message from the first user and immediately check whether the message appears in the second user’s inbox.  You cannot simply log in to the app as one user, send a message, and then log in as the receiving user, because that sequence never tests the immediacy of the message exchange.

Collecting screenshot evidence of user interaction adds another layer of complexity because you have to position each user’s instance of the application on the screen so that they do not overlap with each other.

Manually testing user interaction quickly becomes time consuming and complicated as you need to test and gather evidence for more unique users.  Many web testing tools do not even offer the ability to run multiple instances of the application under test simultaneously.  Toffee, however, makes it easy with two features: multiple test sessions and comparative screenshots.

A Toffee test session is a single active instance of a browser that is then used to access the application under test.  Toffee allows you to open multiple test sessions at the same time.  This can be used to simulate multiple users because each test session can access the application as a different user.  As in the chat app example, Toffee can log in to two instances of the app, send messages from one user, and confirm that the messages are immediately received by the second user.
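A Toffee script for the chat example could be sketched roughly as follows.  The command wording and URLs here are illustrative, not exact Toffee syntax:

```
# Open two test sessions, one per user (syntax illustrative)
open session 1
in session 1, navigate to https://chat.example.com and log in as user A
open session 2
in session 2, navigate to https://chat.example.com and log in as user B

# Send from user A, then immediately verify in user B's session
in session 1, type "hello" into field with id message-input
in session 1, click button with id button-send
in session 2, verify text "hello" appears in element with id inbox
```

Because both sessions are open at the same time, the verification in session 2 runs immediately after the send in session 1, exercising the immediacy of the message exchange.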

Toffee makes it simple to document the user interaction with comparative screenshots.  Using Toffee dock or move commands, you can easily set up a comparative screenshot so that the test sessions do not overlap.  Dock commands place a session at the top, bottom, left, right, or any corner of the screen.  Move commands place a session at any user-specified screen coordinates.
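For the two-user chat scenario, the session placement might be sketched like this (command wording illustrative, not exact Toffee syntax):

```
# Put the sender on the left half and the receiver on the right half
dock session 1 left
dock session 2 right

# Or position a session at explicit (x, y) screen coordinates
move session 2 to 960, 0

# Capture both sessions side by side in one image
take comparative screenshot
```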

Testing multi-user interaction can be a complicated nuisance, but Toffee’s test sessions and comparative screenshots make it simple.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

Record/Replay vs. Programmable Tests: Toffee, the Best of Both

Record/replay and programmable web testing tools each have their own benefits and shortfalls.  These trade-offs are analyzed in a paper by Maurizio Leotta and colleagues titled “Capture-Replay vs. Programmable Web Testing: An Empirical Assessment during Test Case Evolution” (citation below).  Leotta concludes that while initial test creation is faster for record/replay tests, programmable web tests save time during regression testing as the application is updated.

As part of Leotta’s study, testers with some programming experience made two sets of tests for six different web applications.  Each web application had one set of tests made using a record/replay tool called Selenium IDE and another set of programmed tests written in Java using Selenium WebDriver.  The testers then updated each test suite for a newer version of each web application.  Leotta measured the time the testers spent creating and updating the test suites, comparing the record/replay time with the programmable test time.

Leotta found that the programmable tests took up to 112% more time to make from scratch than the record/replay tests.  This was mainly because creating the record/replay tests did not require any programming knowledge and was as simple as the tester recording themselves using the web application.

Next, Leotta’s research showed that when test suites were updated for new versions, the programmable tests took up to 51% less time to repair than the record/replay tests.  This time saving was mostly due to the developers using the page object design pattern.  In summary, a page object is a programming construct in which the code that locates elements and initiates actions on the web application is stored in one place.  Page objects can be reused in multiple tests.  When repairing programmable tests, testers only update the page objects, which can fix multiple tests at once.  Record/replay tests do not reuse anything, so every record/replay test must be fixed individually, or even recorded again, every time the application under test changes.
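The study’s programmable tests were written in Java with Selenium WebDriver, but the pattern itself is language-agnostic.  Here is a minimal Python sketch with a stub driver (all names here are hypothetical, chosen only to illustrate the idea):

```python
class LoginPage:
    """Page object: all locators and actions for the login page in one place.
    If a locator changes, it is repaired here once, fixing every test."""
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=button-login"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


class FakeDriver:
    """Stand-in for a real WebDriver that just records the calls made."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))


# Every test reuses LoginPage instead of repeating raw locators.
driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
print(driver.actions[-1])  # prints ('click', 'id=button-login')
```

A record/replay test, by contrast, bakes the raw locators into every recording, which is why each one must be repaired separately.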

As I discussed in a previous post, Toffee avoids the brittleness common to record/replay testing, replacing it with plain-language test commands and an interactive script development environment.  Toffee also saves time during test repairs with aliases and test modularization, which provide the same benefits as the page object pattern, but without requiring programming.  Toffee commands that are repeated throughout tests can be saved as an alias, so only the alias needs to be updated.  Also, a series of frequently used commands (think login) can be saved and maintained in a single “include” file and used in multiple tests.
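Sketched out, an alias and an include might look something like this (the syntax and file name are illustrative, not exact Toffee syntax):

```
# Define once:
alias "logout" = click button with id button-logout-user

# Reuse everywhere; if the button's id changes, only the alias is repaired:
logout

# A frequently used command sequence, such as login, can live in one
# shared include file and be pulled into many tests:
include "login"
```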

If the application will be updated more than once, Leotta recommends using programmable tests; otherwise, he recommends record/replay tests.  Toffee has the efficiencies of both record/replay tests and programmable tests, making it the best solution for whatever your web testing needs may be.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

M. Leotta, D. Clerissi, F. Ricca and P. Tonella, “Capture-replay vs. programmable web testing: An empirical assessment during test case evolution,” 2013 20th Working Conference on Reverse Engineering (WCRE), Koblenz, 2013, pp. 272-281.
© 2013 IEEE
doi: 10.1109/WCRE.2013.6671302

Familiar Reports for Regulated Industries


Web application testing for regulated industries often requires auditors, who are typically not developers, to confirm the testing results.  We developed Toffee so that non-developers can read the automated test results and quickly understand them.

We have personal experience with the challenges of incorporating automated testing into the computer system validation process for regulated industries.  Before we built Toffee, we wrote our automated tests in Java using Selenium.  After writing a large suite of automated tests that validated one of our client’s applications, we turned over the results to their auditor.  The auditor came back to us and explained that our results did not help her because, understandably, she did not know what any of the Java meant.  We ended up sitting down with her and explaining each line of code.

It quickly became apparent that describing lines of code was not a tenable exercise for regulated industry web testing.  We think auditors should not have to become developers, so instead we made Toffee to allow auditors or any non-developer to read automated tests.

After Toffee runs a test, it generates a report that follows the traditional web testing style used in manual/paper tests.  The report has five columns for each test step: the action taken, the expected result, the actual result, the verdict, and, if applicable, the screenshot evidence.
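A report row might look something like this (the layout and wording are illustrative):

```
Action                        | Expected Result       | Actual Result         | Verdict | Evidence
------------------------------+-----------------------+-----------------------+---------+--------------
Click button with id "submit" | Order summary appears | Order summary appears | Pass    | Screenshot 12
```

Because every row reads as plain language, an auditor can evaluate it the same way they would a traditional paper test.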

If the test uses screenshot evidence, a picture is taken for each step.  We discussed this in a prior post called Automatic Screenshots: Evidence Gathering Made Easy.  Each screenshot is included in the report appendix with a reference number, so all the information is in one place.  This makes it easy for an auditor to quickly evaluate the test.

We realized that non-developers are regularly a part of web testing, so we made Toffee to be user friendly for everyone regardless of their programming knowledge.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

The Fragility of Record/Replay Tests

The convenience of using record/replay web testing tools is often defeated by the brittleness of the tests they produce.  Mouna Hammoudi’s paper “Why Do Record/Replay Tests of Web Applications Break?” explores why these tests break so easily as the application under test changes.  Toffee overcomes many of these issues with record/replay testing.

Hammoudi used a record/replay tool called Selenium IDE, which allows a user to record themselves using the application under test and replay their actions at a later time to test functionality.  After creating a test for each of 300 releases across 5 web applications, Hammoudi found that roughly 73% of all breaks in her record/replay tests were due to locators.

Hammoudi explains that locator breaks occur when either an HTML element’s attributes or the HTML structure has changed.  In our experience, tests that use structure-based locators such as XPath are especially vulnerable: as web applications are updated, the structure changes regularly, and even small changes can invalidate structural locators.

Toffee offers two solutions that make Toffee tests more robust and less expensive to maintain compared to record/replay tests.  First, Toffee makes it easy to write test steps using attribute based locators like id, name, label, and others.  These locators tend to be more stable because they are not affected by common structural changes. Instead, developers have to intentionally change the attribute value.
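The contrast between the two kinds of locators can be sketched like this (the syntax is illustrative):

```
# Structural locator: tied to the page layout, so inserting a single
# <div> anywhere along this path breaks it
xpath=/html/body/div[2]/form/div[3]/button

# Attribute-based locator: survives layout changes, and breaks only if
# a developer deliberately renames the attribute
id=button-logout-user
```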

Second, Toffee provides users with the ability to create aliases.  Aliases are custom commands which are defined by the tester and refer to other Toffee commands.  For example, the command “click button with id button-logout-user” can be aliased as “logout”.  Then the “logout” alias can be used throughout your tests, instead of the original command.  This minimizes the necessary repair after the button’s id changes because only the “logout” alias must be updated.

The drawback of record/replay tests is their brittleness.  Toffee tests are more robust and reusable without sacrificing the accessibility of record/replay tools.


For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

Manual Testing: Toffee is about Orchestration not only Automation

We created Toffee not simply to automate QA testing but to orchestrate both manual and automated tests.  Manual tests are an important part of QA testing because it is either impractical or impossible to automate everything.  See our prior post on the virtues of manual testing: Two Cheers for Manual Testing.

Creating manual tests for Toffee is as straightforward as writing traditional manual tests.  Simply mark the test step as manual and then write the instructions for the tester to perform.  Toffee will know it is a manual step and will take care of the rest.

Manual tests are typically carried out using stacks of paper or lengthy text documents with instructions and a space to fill in the result.  Testers are stymied by going back and forth between the application under test and the test documentation, squandering time and risking mistakes every time they lose their place in the instructions.

Toffee eliminates these problems. Testers no longer have to deal with shuffling paper or scrolling through online documents.  Toffee provides test step instructions, and a place to record the results for only the current step, in a “heads up” display alongside the application under test.  This reduces human errors and is less taxing on testers.

As part of testing orchestration, Toffee allows manual steps to be integrated directly into otherwise automated tests.  Toffee executes the automated steps until it reaches a manual step, then pauses and requests the manual action from the tester.  After the tester completes the manual step and records the result, Toffee continues with the automated steps.  This way, if the majority of a test can be automated, a whole separate test does not need to be made just for the manual steps.  In addition, the results from the manual and automated steps all appear in the same report generated by Toffee.
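A mixed test might be sketched like this (command wording and URL illustrative, not exact Toffee syntax):

```
# Automated steps run without tester involvement
navigate to https://orders.example.com
click button with id button-new-order

# Execution pauses here; the instruction appears in the heads-up
# display, and the tester records the result before automation resumes
manual step "Insert the test payment card into the reader and confirm the light turns green"

# Automation picks up again once the manual result is recorded
verify text "Payment accepted" appears
```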

Manual tests are still a vital part of QA testing, and Toffee does not just accommodate manual testing; it makes testing more accurate and less of a headache for testers.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

Automatic Screenshots: Evidence Gathering Made Easy

When working with clients and performing tests ourselves, we found that documenting tests with screenshot evidence is one of the most time consuming and challenging aspects of testing, especially for manual tests.  With this in mind, we designed Toffee to make evidence gathering easy with automatic screenshot capture for both manual and automated tests.

By using the simple command “enable automatic screenshot capture”, Toffee will automatically take a screenshot at every test step, regardless of whether the step is manual or automated.  Testers no longer have to remember to do it themselves or repeat steps if they forget.  By default, the screenshot is taken within milliseconds of the test step completing, but the timing can be delayed as needed to accommodate slower applications.  The screenshot delay can be modified at any point within the test, giving you the flexibility to ensure the appropriate evidence is captured with every screenshot.
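In a script, this might look roughly like the following.  The “enable automatic screenshot capture” command comes from Toffee; the delay command and the other steps are illustrative, not exact Toffee syntax:

```
enable automatic screenshot capture

# Each step below now produces a screenshot automatically
navigate to https://orders.example.com
click button with id button-checkout

# Give a slow-rendering confirmation page time to load before its
# screenshot is taken (delay syntax illustrative)
set screenshot delay to 2 seconds
click button with id button-confirm
```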

One of the main frustrations Toffee’s screenshot capture solves is keeping screenshots connected to the correct test step results.  For example, without Toffee a tester often has to paste screenshots for manual tests into some additional word processing tool and document them appropriately so others know which screenshot corresponds to which test step.  This method is tedious and prone to error.  Toffee stores the results and screenshots together and can generate a single report containing all of the results and evidence.

Toffee makes gathering screenshot evidence easy by automatically capturing screenshots after test steps, whether manual or automated, and keeping them in one place with the test results.

For more information about Toffee, visit http://toffeetesting.io, or feel free to contact us with questions at info@toffeetesting.io.

Announcing Toffee

Toffee: Test Orchestration for the Enterprise

Since my last post on functional testing, KSM has been hard at work transforming the ideas from that post into a new product and service offering. Toffee (“Test Orchestration for the Enterprise”) allows QA professionals to build and execute automated tests in an interactive, online test environment, without requiring programming expertise. For those test cases that automation cannot easily reach, Toffee lets you include manual test steps in your scripts alongside automated ones. Automated screenshots capture your entire desktop, providing evidence for steps whether executed within or outside of the browser. Test results for both automated and manual tests are presented in a familiar step/expected result/actual result format, along with screenshot evidence.

Toffee started as a command-line based solution, which allowed us to focus on the syntax and scope of the Toffee command set. We used this first incarnation to test solutions we developed in house. The largest test suite achieved 100% automation with over 28,000 test steps, and completely replaced the Selenium tests we had written in Java.

On the trade show floor of the Society of Quality Assurance Annual Meeting last week, KSM previewed the next generation of Toffee, called Toffee Composer. Composer provides the same level of functionality as the initial version, but in a user-friendly web interface. Build your scripts, execute them, and store your results online, either in our cloud-based environment or in your own data center.

For more information about Toffee, visit http://toffeetesting.io. We would be happy to schedule an online demonstration for you; just send us an email at info@toffeetesting.io.

Two Cheers for Manual Testing (Functional Test Automation, part 3)

If you’ve never played it before, a hand of manual testing misery poker plays out something like this:

“It took six of us eight weeks to plow through a three and a half foot stack of system test scripts”

“That’s nothing.  Our site acceptance testing alone took fifteen of us three months for a six-foot stack.”

“But were yours double sided?”

“Erm, no”

“Then what took you so long?”

“Screenshots every step”

“Oh.  I fold.”

We automate functional testing for a reason: the alternative is tedious, resource-intensive, and expensive.  So why do test suites still comprise so many manual tests?

Functional Test Automation, Part 2: The Subject, the Standard, and the Evidence

In my last post I wrote that the reality of automated functional testing has so far failed to live up to my expectations. In this post I’ll define what I mean by functional testing. What follows might not be the definition you’re familiar with, and I don’t mean to suggest that this is the only valid definition. It is certainly influenced by the industries I work with, where:

  • The subject of functional testing is a black box
  • The standard of functional testing is the set of functional requirements
  • The evidence of functional testing formally links test cases to those requirements they test
