
Identity Passthrough for AI: Why Your LLM Needs to Know Who's Asking

When a user asks your AI assistant a question, who actually runs the database query? In most enterprise AI deployments, the answer is troubling: a shared service account with broad access to everything. The user's identity evaporates the moment their request enters the AI system. This architectural pattern creates security gaps, compliance failures, and data leakage risks that undermine enterprise AI adoption.
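The contrast between a shared service account and identity passthrough can be sketched in a few lines. This is a conceptual illustration only (the table, user names, and helper functions are hypothetical, and a production system would use database-native row-level security rather than an application-side filter):

```python
import sqlite3

def setup_demo_db() -> sqlite3.Connection:
    # Hypothetical data: documents owned by different users.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE documents (id INTEGER, owner TEXT, body TEXT)")
    conn.executemany(
        "INSERT INTO documents VALUES (?, ?, ?)",
        [(1, "alice", "alice's roadmap"), (2, "bob", "bob's salary data")],
    )
    return conn

def query_as_service_account(conn: sqlite3.Connection):
    # Anti-pattern: the requester's identity is gone, so the AI's query
    # sees every row regardless of who asked.
    return conn.execute("SELECT body FROM documents").fetchall()

def query_as_user(conn: sqlite3.Connection, user_id: str):
    # Identity passthrough: the end user's identity travels with the
    # request and scopes what the query can return.
    return conn.execute(
        "SELECT body FROM documents WHERE owner = ?", (user_id,)
    ).fetchall()

conn = setup_demo_db()
print(len(query_as_service_account(conn)))  # 2: bob's data leaks to anyone
print(query_as_user(conn, "alice"))         # only alice's own documents
```

With passthrough, an answer generated for alice can never be assembled from rows alice is not entitled to see, which is the property audits and compliance reviews actually check for.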

What Is MCP? Connecting AI Across the Software Delivery Lifecycle

AI promises speed and automation, but most teams are still stuck jumping between disconnected tools across development, testing, and operations. In this video, we introduce the Model Context Protocol (MCP) and show how it enables AI assistants to securely access tools, systems, and real-time context across the software delivery lifecycle. MCP is the foundation of Perforce Intelligence. The result: less friction, faster feedback, and AI that works with your existing systems rather than around them.

Chat with Your Data: The Official Databox MCP

Your AI is brilliant, but it’s blind. Until now. We are thrilled to launch the official Databox MCP (Model Context Protocol). This open standard server bridges the gap between your business data and your favorite AI tools, turning general-purpose LLMs into specialized analysts that know your business. Stop manually exporting CSVs or taking screenshots of dashboards. With Databox MCP, you can connect 130+ data sources (Google Analytics, HubSpot, Salesforce, Stripe, and more) directly to tools like Claude, ChatGPT, Cursor, and n8n.

AI-Powered Loan Management Software Development

Widespread digital transformation has changed the FinTech sector drastically. In light of this evolution, it has become imperative for lenders to adapt and refine their operations with a well-defined Loan Management System.

How to build a Copilot agent

A customer recently shared their debugging workflow with me. When an error shows up in Honeybadger, they import it to Linear, manually add context about where to look in the codebase, then assign GitHub Copilot to investigate. It works, but they asked a good question: could Copilot just access Honeybadger directly? The answer is yes—and it's easier than I expected.

Top 25 Test Generating Tools

Software testing was once a slow and repetitive process that developers accepted as unavoidable, often consuming significant time without delivering proportional value. Traditional manual testing struggled to scale with growing application complexity and rapid release cycles. In 2026, test generation tools have reshaped this landscape by introducing automated test generation, AI-driven logic, and intelligent coverage strategies.
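One strategy these tools automate is boundary-value test generation: deriving cases at and around the edges of a valid input range. A minimal sketch of the idea, with a hypothetical function under test:

```python
def generate_boundary_cases(lo: int, hi: int) -> list[int]:
    # Classic boundary-value inputs for a valid range [lo, hi]:
    # just below, at, and just above each edge.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_percentage(x: int) -> bool:
    # Hypothetical function under test.
    return 0 <= x <= 100

# A generated suite: each case pairs an input with its expected validity,
# derived from the stated range rather than written by hand.
cases = [(x, 0 <= x <= 100) for x in generate_boundary_cases(0, 100)]
for x, expected in cases:
    assert is_valid_percentage(x) == expected
print(f"{len(cases)} generated cases passed")
```

Real tools layer AI-driven input synthesis and coverage feedback on top, but the payoff is the same: edge cases enumerated mechanically instead of remembered manually.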

Multi-Node Training with ClearML

Orchestrating distributed AI workloads

Distributed (multi-node) training has become a requirement rather than an optimization for many modern AI workloads. As model sizes grow, datasets expand, and training timelines tighten, teams increasingly rely on multiple machines, often with multiple GPUs each, to complete training efficiently.
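The core of data-parallel multi-node training is simple to state: each node computes gradients on its own data shard, then an all-reduce averages them so every node applies the same update. A plain-Python sketch of that arithmetic (a real deployment would use a collective-communication library such as NCCL rather than this simulation):

```python
def all_reduce_mean(per_node_grads: list[list[float]]) -> list[float]:
    # Simulated all-reduce: average gradients element-wise across nodes.
    n_nodes = len(per_node_grads)
    n_params = len(per_node_grads[0])
    return [
        sum(grads[i] for grads in per_node_grads) / n_nodes
        for i in range(n_params)
    ]

# Two nodes, two parameters: each node's gradients come from its own
# shard of the dataset (values here are illustrative).
node_grads = [[0.5, -1.0], [1.5, 0.0]]
avg = all_reduce_mean(node_grads)
print(avg)  # [1.0, -0.5], the identical update every node applies
```

Keeping that averaged update identical on every node is what orchestration layers like ClearML coordinate, alongside launching the workers and tracking the runs.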

A Developer's Guide to MCP Servers: Bridging AI's Knowledge Gaps

Have you ever asked an AI assistant to generate code for a framework it doesn't quite understand? Maybe it produces something that looks right, but the syntax is slightly off, or it uses deprecated patterns. The AI is working hard, but it lacks the specific context it needs to truly help you. The Model Context Protocol (MCP) was designed to bridge this knowledge gap by giving AI assistants access to domain-specific knowledge and capabilities they don't have built in.
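The mechanism behind that bridging can be shown with a toy tool registry. This is an illustration of the MCP idea, not the real SDK: the tool name, doc store, and class are hypothetical, but the shape matches the protocol's flow, where a server advertises named tools, the assistant lists them, then invokes one to pull domain context it lacks:

```python
import json

class ToyMCPServer:
    """Toy stand-in for an MCP server: registers and dispatches tools."""

    def __init__(self):
        self._tools = {}

    def tool(self, name: str, description: str):
        # Register a callable as a named tool, decorator-style.
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # What an assistant sees when it asks for the server's capabilities.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name: str, arguments: dict) -> str:
        # Dispatch a tool call and return a JSON-serialized result.
        return json.dumps(self._tools[name]["fn"](**arguments))

server = ToyMCPServer()

@server.tool("framework_docs", "Look up current syntax for a framework API")
def framework_docs(symbol: str) -> dict:
    # Hypothetical doc store; a real server would query live documentation.
    docs = {"useQuery": "useQuery(options) replaces useQuery(key, fn) in v5"}
    return {"symbol": symbol, "doc": docs.get(symbol, "not found")}

print(server.list_tools())
print(server.call_tool("framework_docs", {"symbol": "useQuery"}))
```

Because the assistant fetches current, domain-specific answers at generation time, it no longer has to guess at syntax frozen in its training data, which is exactly the deprecated-pattern failure mode described above.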

Can We Still Trust the Code? #speedscale #qualityassurance #digitaltwin #trust #devops

The "Velocity Gap" is real. AI tools like Claude and GitHub Copilot are pumping out code faster than ever, but there’s a catch: engineers don't trust it yet. We’re moving away from the old days of "clicking around" in a test environment, but how do we verify code at that speed? Ken breaks down why the future of QA isn't just "testing," it’s simulation. Video collab with @ScottMooreConsultingLLC. Learn more: speedscale.com.