
Troubleshooting Microservices with AI

Ever found yourself saying, "But it works on my machine!" when a bug pops up in a microservices environment? It's a common and frustrating problem. Unlike a monolithic application, microservices are a collection of independently deployed services that communicate with each other. This complexity makes it difficult to reproduce real-world issues on your local machine, as you may not have all the necessary services and dependencies running. But what if you could take a snapshot of a running application's behavior and bring it home for debugging?

Looking Back, Looking Ahead: Thoughts on My First Year at Speedscale

When I started at Speedscale, I looked like this: And after one year of learning, growing, and keeping pace with innovation, well, let’s just say the journey has left its mark: Of course, I’m joking (sort of). The truth is, this past year has been intense, energizing, and filled with new challenges. If anything, it’s made me feel younger in spirit, even if the mirror might disagree some mornings.

Simulating Multi-Agent Workflows to Find Hidden API Vulnerabilities

API gateways are often viewed as the centralized entry point for client HTTP requests in a distributed system. They act as intermediaries between clients and backend services, managing API request routing, load balancing, rate limiting, access control, and traffic shaping across multiple backend services. This API management is vital for many services and products, but many organizations put too much stock in it.

Configuring Data Loss Prevention

Redacting PII (DLP): Speedscale can be configured to redact personally identifiable information (PII) and other sensitive data from traffic via its data loss prevention (DLP) features. This redaction happens before data leaves your network, preventing the Speedscale service from seeing the data at all. However, the overall shape or structure of the data is retained in order to facilitate useful testing against systems.
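To illustrate the "shape is retained" idea (not Speedscale's actual implementation, just a generic sketch of the concept), the function below masks every leaf value in a JSON-like payload while preserving keys, nesting, list lengths, and string lengths:

```python
def redact(value):
    """Recursively mask leaf values in a JSON-like structure while
    preserving its shape: keys, nesting, list lengths, string lengths."""
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, str):
        return "X" * len(value)   # same length, content removed
    if isinstance(value, bool):   # check bool before int: bool subclasses int
        return False
    if isinstance(value, (int, float)):
        return 0
    return value                  # None passes through unchanged

record = {"name": "Ada Lovelace", "ssn": "078-05-1120", "orders": [{"id": 42}]}
masked = redact(record)
```

Because the structure survives, a downstream test can still validate field presence, types, and payload size even though the sensitive values never leave the network.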

Finding the Ghost in the Machine

The industry is rapidly moving towards deeper AI integration than ever before. What was once simply focused on chatbots or recommendation engines has pivoted significantly to AI systems communicating with other AI systems. These AI tools are leveraging multi-agent workflows to accomplish complex tasks that traditional systems have struggled with. Innovation without validation is a liability. Any developer worth their salt will know that these systems require ample testability and validation.

Mastering Kubernetes Testing with Traffic Replay

Kubernetes has become the backbone of many modern application deployment pipelines, and for good reason. As a container orchestration platform, Kubernetes automates the deployment, scaling, and management of workloads, letting developers run their applications at scale without worrying about their service’s dependencies, their users’ operating systems, or the intricacies of their data center or infrastructure provider.

Considerations for Testing gRPC Streams

If you’ve spent any time building cloud-native systems, you’ve probably tripped over the tricky beast that is gRPC streaming. It’s powerful, flexible, and feels like magic when it works. But the minute you need to test it? Suddenly, you’re in “hold my coffee, I need a week” territory. One of the most common places we see gRPC streams in the wild is when clients connect to asynchronous message buses like Google Pub/Sub.
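Part of what makes streaming hard to test is that a stream has no natural "done" signal, so every test must impose its own termination condition. The sketch below models that with a plain in-process queue standing in for a Pub/Sub-style stream; the function name, message format, and timeout value are all hypothetical, and this is not gRPC itself, just the testing pattern:

```python
import queue
import threading

def consume_stream(messages: "queue.Queue[str]", max_messages: int,
                   timeout: float = 1.0) -> list[str]:
    """Drain a message stream into a list, stopping at max_messages or after
    `timeout` seconds of silence -- the explicit termination condition a
    streaming test needs, since the stream itself never says 'done'."""
    received = []
    while len(received) < max_messages:
        try:
            received.append(messages.get(timeout=timeout))
        except queue.Empty:
            break  # treat sustained silence as end-of-stream for the test
    return received

# A fake publisher standing in for an asynchronous message bus.
stream: "queue.Queue[str]" = queue.Queue()

def publish():
    for i in range(3):
        stream.put(f"event-{i}")

threading.Thread(target=publish).start()
events = consume_stream(stream, max_messages=5, timeout=0.5)
```

The same cap-or-timeout pattern applies when the other end is a real server-streaming RPC: without it, a test against a long-lived stream simply hangs.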