Systems | Development | Analytics | API | Testing

The Future of Data & AI is Anywhere Cloud! #Cloudera #AI #Tech #Shorts

Experience a true anywhere cloud with the only data and AI platform that delivers a complete cloud experience regardless of your location. With unified security and governance, you can securely access 100% of your data across both on-premises and cloud environments.

Hot Sauce Releases - Real Device Access API

Future-Proof Your Mobile Testing with Unrestricted Device Access. For years, Platform Engineering teams have faced a painful choice: build a fragile, expensive internal device lab to get full control, or use a rigid public cloud and lose access to the system internals. That choice ends now. Join us for the launch of the Real Device Access API, the first solution that treats mobile devices as Infrastructure-as-Code.

Designing error models in OpenAPI for agent-safe APIs | Swagger Studio

Poorly documented or inconsistent error models lead to brittle clients and unreliable automation. Whether you're building APIs for human developers or AI agents, proper error handling is crucial for automation and reliability. In this guided tutorial, SmartBear Solutions Engineer Rosemary Charnley demonstrates how to design robust error models in OpenAPI specifications using Swagger Studio.
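To make the idea concrete, here is a minimal sketch of a consistent, machine-readable error model in the spirit of RFC 9457 ("Problem Details for HTTP APIs"). The field names and the `make_error` helper are illustrative assumptions, not taken from the tutorial; the tutorial itself works in OpenAPI/Swagger Studio rather than Python.

```python
# A single error schema shared by every endpoint keeps clients (human or
# AI agent) from having to guess at each route's failure shape.
ERROR_SCHEMA = {
    "type": "object",
    "required": ["type", "title", "status"],
    "properties": {
        "type":   {"type": "string",  "description": "URI identifying the error class"},
        "title":  {"type": "string",  "description": "Short, human-readable summary"},
        "status": {"type": "integer", "description": "HTTP status code"},
        "detail": {"type": "string",  "description": "Instance-specific explanation"},
    },
}

def make_error(status: int, title: str, detail: str,
               error_type: str = "about:blank") -> dict:
    """Build an error body that conforms to ERROR_SCHEMA."""
    return {"type": error_type, "title": title, "status": status, "detail": detail}

body = make_error(404, "Not Found", "No order with that id exists.")
print(body["status"], body["title"])
```

Because every error carries the same required fields, an automated client can branch on `status` and `type` instead of parsing free-form messages.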

The Breakdown | API calls and mobile apps

You used an API this morning. Probably before you even got out of bed. That weather app? It's your phone communicating with a server in the cloud — sending a request, getting data back, and displaying it on your screen in seconds. Location. Request format. Expected response. That's the anatomy of an API call. And it's happening constantly across nearly every app on your phone. Hugo Guerrero and Amanda Alcamo break it all down in Episode 2 of The API & AI Breakdown. No jargon. No fluff. Just clarity.
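The three parts named above (location, request format, expected response) can be sketched in a few lines. The weather endpoint, its query parameters, and the response fields below are hypothetical, invented purely to illustrate the shape of a call.

```python
import json
from urllib.parse import urlencode

# Location: where the request goes.
BASE_URL = "https://api.example.com/v1/weather"

# Request format: parameters encoded into the URL's query string.
params = {"city": "Lisbon", "units": "metric"}
request_url = f"{BASE_URL}?{urlencode(params)}"

# Expected response: JSON the app can parse and display.
# (A canned payload stands in for a live server here.)
raw = '{"city": "Lisbon", "temp_c": 19, "conditions": "clear"}'
payload = json.loads(raw)

print(request_url)
print(f"{payload['city']}: {payload['temp_c']} C, {payload['conditions']}")
```

That round trip (build a URL, send it, parse the JSON that comes back) is what your weather app does before you get out of bed.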

WSO2 AI Gateway: Prompt Management & Semantic Caching

Learn how to ensure consistent AI interactions and drastically reduce latency using the WSO2 AI Gateway. This step-by-step tutorial demonstrates how to standardize your LLM requests for quality and efficiency while cutting down on redundant API costs. We explore "Prompt Management" to enforce organizational guidelines using templates and decorators, and "Semantic Caching" to leverage vector embeddings—serving instant, cached responses for semantically similar queries to minimize expensive LLM calls.

Ep 64 | AI Managed Services: A Smarter Path for SMEs

AI adoption is accelerating across small and medium-sized enterprises (SMEs), but many businesses lack the in-house expertise to build and manage AI infrastructure effectively. In this episode of The AI Forecast, Paul Muller speaks with Hyve’s Marketing and Operations Director, Charlotte Webb, about how managed service providers (MSPs) are reshaping AI adoption for SMEs. They explore the build vs. buy debate in AI solutions and why cloud computing alone doesn’t guarantee lower costs, better performance, or compliance.

Evolve25: Customer Fireside Chat with Banco do Brasil

Learn how the oldest bank in Brazil manages over 800 AI solutions and 5,500 GenAI use cases while maintaining a "Responsible AI" framework. Discover the bank's three-block ROI strategy focusing on operational efficiency, customer satisfaction, and new business models. This session is a must-watch for enterprise leaders navigating the intersection of legacy infrastructure, culture shifts, and Agentic AI.

Stop GenAI Rate Limits: Model Routing & Token Throttling with WSO2 AI Gateway

Learn how to mitigate skyrocketing AI costs and prevent model outages using the WSO2 AI Gateway. This step-by-step tutorial shows you how to move beyond simple request limits and implement smart, token-based usage policies. We also demonstrate "Adaptive Model Routing," showing how to automatically switch between models when a rate limit is hit, and how to distribute traffic using weighted round-robin to balance cost and performance.