Reliable Tests, Proper Test Data, & Testing Dependencies: Defeating the Triple Threat

Sponsored Post

Reliable testing requires that tests, data, and dependencies all work consistently, accurately, and in a way that represents reality. But with the explosion of APIs, mounting data security concerns, and the need to move fast, that's easier said than done.

Proactive software testing needs a new approach that orchestrates these three elements in a unified manner. Enter: Speedscale. By recording production traffic, Speedscale provides a self-service way for engineers to autogenerate these assets without scripting, allowing tests to run early and often. Speedscale captures transactions and scenarios (data), essentially “listening” to your app and understanding how your service is used by your customers. By isolating your API and varying the inputs (tests) and backends (dependencies), you can systematically control variables and apply the scientific method to test your code before release.

Testing, Dependencies & Data

During my 10 years in the DevOps tools and digital transformation consulting space, only a handful of companies stood out for a truly streamlined software delivery methodology. The commonality between these companies? They invested a lot of time and energy (two years or more!) to develop the automation, SOPs, and self-service needed to ensure three critical components work together in unison:

  1. Reliable tests
  2. Accurate data
  3. Testing dependencies or environments

At Speedscale, we often refer to these three elements as the Triple Threat.

But in today’s complex, API-driven world, managing the “Triple Threat” together is much easier said than done. As a result, many companies get discouraged. It’s no wonder that setting up reliable tests, accurate data, and testing dependencies routinely gets ignored in favor of developing new features!

[Meme: software testing vs. building new features]

Why is it so hard? Let’s dig in.

Introducing the ‘Triple Threat’ of software testing

Automating Software Tests

Historically, most companies tackle software test automation first, pouring millions of dollars into automation frameworks and defect management suites in hopes of fulfilling the promise of tests you write once and use anywhere. It’s the most obvious choice if you’re trying to increase automation.

[Meme: software test scripts]

But here’s the problem: these tests aren’t truly reusable, because they’re usually a pile of scripted tests built around the UI layer, which typically changes more than any other part of the application. A scripted test also takes longer to write if it’s dynamic rather than hardcoded, so in the interest of time, test cases usually end up brittle. Furthermore, there are never enough environments to run these tests in. After a while, testers get so burned out and jaded that anything in the test tools space might as well cease to exist.

(This is why, at Speedscale, we decided to focus on the systems beneath the UI layer, but we’ll get to that later.)
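To make “beneath the UI” concrete, here’s a minimal sketch in Go. It’s a generic `httptest` example, not Speedscale’s API, and the `statusHandler` endpoint is invented for illustration. Because the test talks to the service’s HTTP layer directly, it keeps working no matter how often the front end is redesigned:

```go
package status

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

// statusHandler is a stand-in for the service endpoint under test.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

// TestStatusHandler exercises the API directly, beneath the UI layer,
// so the assertion survives any front-end redesign.
func TestStatusHandler(t *testing.T) {
	req := httptest.NewRequest(http.MethodGet, "/status", nil)
	rec := httptest.NewRecorder()

	statusHandler(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
	var body map[string]string
	if err := json.NewDecoder(rec.Body).Decode(&body); err != nil {
		t.Fatalf("invalid JSON response: %v", err)
	}
	if body["status"] != "ok" {
		t.Fatalf("expected status ok, got %q", body["status"])
	}
}
```

Save it in a `_test.go` file and `go test` runs it with no browser, UI framework, or shared environment in sight.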

Securing The Right Test Data

Data is a critical part of understanding what your service will do in production. Data carries the context and use case that determine which code path gets exercised; without proper test data, your test runs can yield inaccurate results. Consider this test case: say you’re a developer at Delta Airlines building a new feature for Gold Medallion members, but the only data available to test with belongs to Silver Medallion members. How can you be sure your feature will work for the audience it’s intended for? Answer: you can’t.
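One common guard against that gap is a table-driven test where every audience segment gets a row. The sketch below is hypothetical (the `upgradePriority` function and tier names are invented), but it shows the principle: if no “gold” row exists in your test data, the Gold code path simply never executes.

```go
package loyalty

import "testing"

// upgradePriority is a hypothetical feature under test: Gold members
// should receive a higher upgrade priority than Silver members.
func upgradePriority(tier string) int {
	switch tier {
	case "gold":
		return 2
	case "silver":
		return 1
	default:
		return 0
	}
}

// TestUpgradePriority is table-driven so each audience segment,
// including the Gold members the feature actually targets, has a row.
// Omitting the "gold" row is exactly the data gap described above.
func TestUpgradePriority(t *testing.T) {
	cases := []struct {
		tier string
		want int
	}{
		{"silver", 1},
		{"gold", 2}, // without Gold test data, this path never runs
		{"basic", 0},
	}
	for _, c := range cases {
		if got := upgradePriority(c.tier); got != c.want {
			t.Errorf("upgradePriority(%q) = %d, want %d", c.tier, got, c.want)
		}
	}
}
```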

Unfortunately, fewer companies are addressing the data problem these days, since security teams can bring the ax down on it quite quickly. The processes and technologies to manage data (e.g. data scrubbing, virtualization, and ETL), however, are alive and well. Speedscale is one of the first solutions to leverage actual data for testing, with the added capability of sanitizing and transforming it so that it stays replayable while fulfilling security requirements.
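For illustration only, here’s a hedged sketch of the general scrubbing idea; it is not Speedscale’s actual implementation, and the field names are invented. The point is to redact sensitive values in a recorded payload while keeping its structure intact, so the traffic remains replayable:

```go
package scrub

import "encoding/json"

// sensitiveFields lists keys to redact before recorded traffic is
// stored or replayed. Purely illustrative; real tooling is configurable.
var sensitiveFields = map[string]bool{
	"ssn":        true,
	"creditCard": true,
	"email":      true,
}

// Sanitize replaces sensitive values in a recorded JSON payload with a
// fixed token so the payload stays replayable but leaks nothing.
// It handles flat payloads only, for brevity; real tooling recurses
// into nested objects and arrays.
func Sanitize(payload []byte) ([]byte, error) {
	var doc map[string]any
	if err := json.Unmarshal(payload, &doc); err != nil {
		return nil, err
	}
	for k := range doc {
		if sensitiveFields[k] {
			doc[k] = "REDACTED"
		}
	}
	return json.Marshal(doc)
}
```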

Managing The Complexity Of Environments And Testing Dependencies

In today’s development landscape, the usage guidelines, processes, and general understanding around dependency testing and environments seem more mature than for the other two elements. Servers and VMs are often a closely monitored, fixed capital expense, with strict budgets and chargebacks dictating how available they are to engineering groups. But with cloud, containers, and functions, that’s all changing. With containers standardizing the landscape, consistently simulating external software dependencies and environments is within reach for everyone.

Still, standardizing cloud infrastructure in a self-serve, repeatable, and scalable way is more difficult than it sounds. While the containerized cloud sells the promise of infrastructure that’s on only when you need it, many platform engineering teams are surprised at the complexity they must navigate. When it comes to dependency testing, it’s not surprising that around 44% of enterprise cloud environments are in fact long-lived pre-prod instances eating up the bill.
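This is where ephemeral fakes change the economics: a dependency can be simulated for exactly the lifetime of a test instead of idling in a long-lived environment. Here’s a minimal Go sketch assuming a hypothetical payment-gateway dependency (the endpoint and canned response are invented):

```go
package payments

import (
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// TestChargeAgainstMockGateway spins up an ephemeral stand-in for a
// downstream payment gateway, so no long-lived pre-prod environment
// is needed. The canned response mimics behavior observed in production.
func TestChargeAgainstMockGateway(t *testing.T) {
	gateway := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)
			io.WriteString(w, `{"result":"approved"}`)
		}))
	defer gateway.Close() // the "environment" lives only for this test

	resp, err := http.Post(gateway.URL+"/charge", "application/json",
		strings.NewReader(`{"amount":100}`))
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "approved") {
		t.Fatalf("unexpected gateway response: %s", body)
	}
}
```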

Orchestrating the ‘Triple Threat’ in unison

Streamlined software delivery requires orchestrating reliable tests, proper test data, and testing dependencies together. If you have folders upon folders of test scripts but the backend systems aren’t ready to run them, the scripts are unusable. If you have plenty of cloud capacity or brilliant IaC automation but rely on an offshore consultancy for manual testing, you’re similarly beholden. Perhaps you have sufficient test scripts and environments, but the ETL processes to replicate, scrub, and stage test data take a long time between data refreshes. Or maybe you’re not even sure which data scenarios to run. Or, worse still, you can only test the happy path, because applications rarely misbehave in lower environments the way they error out in production. All of this can lead to a false sense of security.

The “fail fast” approach of yesterday

With mounting pressure to release new features quickly and stay ahead of competitors, most companies test in production and rely on monitoring tools to tell them what’s broken. By focusing all their effort on making rollouts and rollbacks as fast as possible, they can patch or pull builds at the first sign of trouble. Except therein lies the dilemma.

[Meme: testing in production, Austin Powers]

Fast rollouts and rollbacks were popularized by the Netflixes, Facebooks, and Ubers of the world, who have a huge pool of users they can leverage to test their updates. If a few folks have a bad experience, the business is none the worse for wear, since there are millions more users behind them.

Certain industries like fintech, insurance, and retail, however, cannot risk having customer-facing functionality, such as monetary transactions, fail. These industries are typically heavily regulated, run critical processes, or operate on razor-thin margins where every visitor must generate revenue.

Orchestrating reliable tests, proper data, and testing dependencies in unison is the secret to streamlined software delivery.

You can’t do arm workouts every day and expect to gain leg muscles

Many dev teams are still convinced that the wrong tactics will get them the results they want. They think the faster they release and roll back, the better they execute canary deploys, or the smarter they are about which parts of the code they release first, the more stable and robust their software will become.

But you can’t do arm workouts every day and expect to gain leg muscles.

No amount of rollbacks, canary deploys, or blue/green deployments actually improves the chances of production code working right out of the gate, with minimal disruption, for every release. Put in perspective, it becomes clear that many development teams are working the wrong muscle.

According to the DORA State of DevOps Report 2019, the change failure rate for Elite, High, and Medium performers was the same: up to 15%.

In fact, despite being categorized as “Elite performers”, those teams saw the same average failure rate as “Medium performers”. The classification was largely based on how often teams release and how quickly they react to issues. I’m not downplaying that capability; however, we can’t expect production outages and defects to decrease if we’re only ever concerned with how quickly we react.

Software quality has to be proactive.

The solution for proactive API software testing

Before I introduce Speedscale, let me share a testing scenario from a different industry:

In electrical engineering, chipsets are put into test harnesses to verify functionality independent of other components. Engines are put on machines called dynamometers (pictured below) to confirm power output before installation into vehicles.

[Image: a dynamometer machine testing a vehicle engine to confirm its power output]

Translation: You need to test in a simulated environment before you put everything together.

Consider the aviation industry: plane manufacturers have logged hundreds of hours on the engines, modeled the wings in a wind tunnel, and tested the software in a simulator well before the plane takes off for the first time.

Software is one of the few industries where we put everything together, turn it on, and hope it works.

Speedscale was founded to bring a more proactive and automated approach to ensuring the quality of production code.

How exactly do we do that?

Introducing Speedscale for proactive, automated software testing

Speedscale captures real-life transactions and scenarios (the proper data) by recording production traffic. By “listening” to your app, Speedscale is able to understand how your service is actually used by your customers. Speedscale essentially isolates your API and varies the inputs (the reliable tests) and backends (the testing dependencies). This way, you can systematically control variables and apply the scientific method to a variety of test cases before release.
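To picture the mechanics, here’s a minimal, hypothetical sketch of the record-and-replay idea in Go. It is not Speedscale’s implementation (the `Transaction` type and `Replay` helper are invented), but it shows how captured traffic can serve as both the test inputs and the expected outcomes:

```go
package replay

import (
	"io"
	"net/http"
	"strings"
)

// Transaction is a simplified recorded request/response pair, the kind
// of artifact that traffic capture produces.
type Transaction struct {
	Method, Path, Body string
	WantStatus         int
}

// Replay re-issues recorded transactions against an isolated service
// and reports any response that diverges from what production returned.
func Replay(baseURL string, recorded []Transaction) []string {
	var diffs []string
	for _, tx := range recorded {
		req, err := http.NewRequest(tx.Method, baseURL+tx.Path,
			strings.NewReader(tx.Body))
		if err != nil {
			diffs = append(diffs, tx.Path+": "+err.Error())
			continue
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			diffs = append(diffs, tx.Path+": "+err.Error())
			continue
		}
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		// Real tooling also diffs bodies and headers; status alone
		// keeps this sketch short.
		if resp.StatusCode != tx.WantStatus {
			diffs = append(diffs, tx.Path+": got status "+
				http.StatusText(resp.StatusCode))
		}
	}
	return diffs
}
```

Even this status-only comparison catches regressions that scripted happy-path tests tend to miss, because the inputs come from real user behavior rather than a developer’s guess.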

Speedscale was built to ensure software quality by providing a more proactive and automated approach to testing.

As a best practice:

❌ DON’T try to guess how users will use your app and script tests to simulate it.

✅ DO examine real traffic and use it to auto-generate your tests.

❌ DON’T rely solely on huge, cumbersome end-to-end environments.

✅ DO auto-identify the necessary backends for your SUT (system under test) and automatically generate mocks that simulate their behavior, modeled from real user traffic.

❌ DON’T manually test every release and expect to keep up.

✅ DO run traffic replays as part of your automated CI/CD pipeline, validating regression, functional, and performance behavior on every code commit and build.

Try it for yourself free for 30 days by signing up here or schedule a demo to see if Speedscale is right for your needs!