The most common challenge reported to me by software quality professionals in the pharmaceutical and medical device industries is the lack of time available to properly validate releases of software-as-a-service (SaaS). The frequency of releases, so they say, does not allow sufficient time for controlled configuration, testing and documentation. When I ask them how they configure and test new releases, they often describe familiar processes: configuration by checklist, manual testing and hand-crafted documentation.

These are the same processes we have used for years, long before vendors started delivering GxP solutions as SaaS. They were adapted to self-hosted delivery models, and they do not scale when applied to SaaS.

In the traditional self-hosted model of software deployment, vendors provide software that customers install and host on infrastructure they control. The baseline effort to configure and deploy a new software release is relatively constant. No matter how many new features or bug fixes that release delivers, the customer must:

  • upgrade underlying operating systems, databases, and application servers to the minimum versions required to host the new release;
  • qualify and catalog all that infrastructure, and document the same;
  • plan for, and execute, the migration of data from the old version of the software to the new;
  • install the new version of the software per vendor-provided instructions; and
  • qualify both the installation and operational fitness of the software, and document the same.

The vendor, for its part, must support hundreds of customers through each upgrade. It is more cost-effective for everyone involved for the vendor to batch multiple changes together into fewer releases. “Big bang” releases delivered every 6, 12, or even 18 months are common, and the customer enjoys the luxury of skipping releases they do not need – and of taking all the time in the world to validate the ones they do.

The ascendant SaaS delivery model changes all that by relieving customers of the responsibility for hosting the software. Instead, the vendor maintains and controls the hosting infrastructure, and installs and qualifies the software on it. By consolidating hundreds of on-site deployments into one, the vendor drives down the cost and complexity of deployment and support. The justification for bunching lots of changes into “big bang” software releases is gone, and vendors are able to release changes more frequently. The interval between feature releases shrinks from 6-12 months to a quarter, a month, or even a week. The customer receives new features sooner, but loses some flexibility to skip releases they do not need – and has less time to validate them.

A quality assurance regimen composed of ponderous manual processes cannot keep pace, especially for those who must validate tens or hundreds of configurations of that software across sites. In subsequent posts, I will explore how judicious use of automation in QA processes can increase agility and reduce errors. Most importantly, it lets us spend less time on onerous (but important) box-checking tasks, and more time on high-value activities like risk identification and mitigation.
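To make that concrete ahead of those posts, the sketch below shows one small example of the kind of check that lends itself to automation: comparing a platform’s exported configuration against the baseline approved during validation, and flagging any deviation for review. It is a minimal illustration only – the file names and the flat key/value structure are assumptions made for the sketch, not any particular vendor’s export format or a validated tool.

```python
import json
import sys

# Illustrative placeholders: these file names and the flat key/value layout are
# assumptions for this sketch, not a prescription for any specific SaaS platform.
APPROVED_BASELINE = "approved_config_baseline.json"   # configuration approved during validation
CURRENT_EXPORT = "current_config_export.json"         # configuration exported after a vendor release


def load(path):
    """Load a flat dictionary of configuration settings from a JSON file."""
    with open(path) as f:
        return json.load(f)


def diff_configs(baseline, current):
    """Return settings that were added, removed, or changed relative to the baseline."""
    added = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {
        k: (baseline[k], current[k])
        for k in baseline.keys() & current.keys()
        if baseline[k] != current[k]
    }
    return added, removed, changed


if __name__ == "__main__":
    added, removed, changed = diff_configs(load(APPROVED_BASELINE), load(CURRENT_EXPORT))
    if not (added or removed or changed):
        print("Configuration matches the approved baseline.")
        sys.exit(0)
    # Any deviation is flagged for human review and risk assessment; the script
    # documents the comparison, it does not decide whether a deviation is acceptable.
    for key, value in added.items():
        print(f"ADDED   {key} = {value!r}")
    for key, value in removed.items():
        print(f"REMOVED {key} (baseline value {value!r})")
    for key, (old, new) in changed.items():
        print(f"CHANGED {key}: {old!r} -> {new!r}")
    sys.exit(1)
```

The value is not in the handful of lines of code, but in the fact that the comparison runs the same way after every release and leaves behind a record a reviewer can attach to the validation file.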

Jan 10, 2020

KSM Technology Partners