
MCP in Production: Governing Agentic API Consumption | DeveloperWeek

As AI agents begin interacting with APIs, traditional API governance models need to evolve. In this DeveloperWeek session, Derric Gilling (WSO2) explains how organizations can manage and secure agent-driven API consumption using the Model Context Protocol (MCP). Unlike human applications, AI agents can generate large volumes of API calls from a single prompt. Without proper controls, this can lead to unexpected costs, security risks, and limited visibility into how APIs are being used.
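The core control problem described here is that one prompt can fan out into many API calls. A common mitigation is per-agent rate limiting keyed on the agent's identity. The following is a minimal sketch of that idea using a token-bucket limiter; all names (`TokenBucket`, `admit`, the limits) are illustrative assumptions, not part of MCP or any WSO2 product:

```python
import time

class TokenBucket:
    """Per-agent rate limiter: each agent identity gets its own bucket."""
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per agent identity, so a chatty agent cannot starve others.
buckets: dict[str, TokenBucket] = {}

def admit(agent_id: str, capacity: int = 5, refill_per_sec: float = 1.0) -> bool:
    bucket = buckets.setdefault(agent_id, TokenBucket(capacity, refill_per_sec))
    return bucket.allow()
```

In practice a gateway would key the bucket on an authenticated agent identity (not a free-form string) and pair it with cost budgets and audit logging, but the burst-absorbing shape of the control is the same.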

EP20: The Agentic Enterprise

In this episode, *Dr. Sanjiva Weerawarana* and *Asanka Abeysinghe* are joined by WSO2 Chief AI Officer *Rania Khalaf* to discuss what the agentic enterprise really means. The conversation looks beyond AI pilots and explores the architectural foundations needed to make agents practical at enterprise scale. Topics include agents as first-class actors, the platform capabilities required to support them, and why identity, policy, observability, and audit matter in an agentic world. The episode closes with a practical view of what architects should start doing now.

Full Stack AI for Healthcare: Optimizing Clinical Workflows with Conversational AI for Authorization

Prior authorization is one of the biggest drivers of clinician burnout and care delays, costing the U.S. healthcare system billions in administrative waste every year. Traditional automation hasn't been able to handle the complexity of real-world clinical documentation. Until now. In this session, we go beyond the AI hype to show real outcomes of AI in healthcare, demonstrating how Agentic Conversational AI, integrated directly into EHR workflows, is transforming the prior authorization process.

Why do AI agents fail in the enterprise? #aiagents #shorts

Intelligence isn't enough. To make smart decisions, AI agents need context. Shafreen Anfar (WSO2) breaks down why integration is the secret sauce to moving AI from a pilot project to a high-performing "agentic" workforce. Learn how connecting your siloed systems provides the "informed decision-making" power agents need to actually get work done.

Why Do 90% of AI Projects Never Leave the Pilot Phase? #ai #shorts #softwarearchitect

Struggling to scale your AI? You aren't alone. Shafreen Anfar from WSO2 identifies the bottleneck holding companies back: data silos. Without integration, your AI agents lack the "context" needed to be useful in a production environment. Learn how to bridge the gap between a "cool pilot" and a "scalable enterprise agent" by fixing your fragmented workflows.

WSO2 AI Guardrails: PII Masking, Prompt Injection & Safety

Generative AI offers incredible potential, but it comes with real risks like data leakage and prompt attacks. In this video, we demonstrate how WSO2 AI Guardrails act as an intelligent filter to secure your AI integrations and ensure compliance. We walk through the configuration of four critical advanced guardrails to inspect both incoming requests and outgoing responses, helping you move from risky experiments to safe, reliable production services.
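The PII-masking guardrail described above inspects payloads and redacts sensitive spans before they reach an LLM or a client. As a rough illustration of the idea only (this is not WSO2's implementation, and the patterns are deliberately simplistic stand-ins for real detectors):

```python
import re

# Illustrative patterns only; a production guardrail uses far richer
# detection (NER models, locale-aware formats, checksums, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Applying the same filter to both incoming requests and outgoing responses, as the video does with its four guardrails, means a leak is blocked whichever direction it travels.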

The Role of Integration in the Agentic Enterprise

In this episode, *Steve Jordan* and *Shafreen Anfar* from WSO2 explore how integration is paving the way for the agentic enterprise, where humans and AI agents collaborate to drive business success. They discuss how seamless connectivity across systems gives agents the real-time context and the ability to act that they need to scale AI from simple pilots to full-scale production. The conversation also highlights the importance of robust security, governance, and observability in managing this new digital workforce.

WSO2 AI Gateway: Prompt Management & Semantic Caching

Learn how to ensure consistent AI interactions and drastically reduce latency using the WSO2 AI Gateway. This step-by-step tutorial demonstrates how to standardize your LLM requests for quality and efficiency while cutting down on redundant API costs. We explore "Prompt Management" to enforce organizational guidelines using templates and decorators, and "Semantic Caching" to leverage vector embeddings—serving instant, cached responses for semantically similar queries to minimize expensive LLM calls.
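Semantic caching as described here keys the cache on meaning rather than exact text: a new prompt whose embedding is close enough to a cached one reuses the stored response instead of triggering a fresh LLM call. A minimal sketch of the mechanism, using a toy bag-of-words "embedding" in place of a real vector model (all class and function names are hypothetical, not the WSO2 AI Gateway API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Serve a cached response when a new prompt is close enough to a past one."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []

    def get(self, prompt: str):
        vec = embed(prompt)
        for cached_vec, response in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return response  # cache hit: skip the expensive LLM call
        return None

    def put(self, prompt: str, response: str):
        self.entries.append((embed(prompt), response))
```

A real gateway would use a proper embedding model and an approximate nearest-neighbor index instead of a linear scan, but the trade-off is the same: the similarity threshold balances cost savings against the risk of serving a stale or subtly wrong cached answer.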