
Models to Meaning: AI Value in Production w/ Open Source - MLOps Live #42 w/ QuantumBlack

In this session of MLOps Live, Joseph Perkins, Product Manager at Vizro by QuantumBlack, and Gilad Shaham, Director of Product Management at Iguazio (A McKinsey Company), discuss how modern AI teams are moving beyond heavy engineering to deliver production-ready, business-visible AI systems using open-source frameworks like MLRun and Vizro. The session includes a live demo of a gen AI application, showing how MLRun and Vizro work together to deliver both operational control and business visibility in production.

Using Agentic Frameworks to Build New AI Services

The original promise of AI was that it would write most of the code for us. In reality, we're not there yet. So where can AI meaningfully improve developer productivity today? In this post, we look at how AI improves developer productivity across the SDLC, which practical tools to use, and frameworks for overcoming AI operationalization bottlenecks.

7 RAG Evaluation Tools You Must Know

RAG evaluation measures how effectively a system retrieves relevant context and uses it to generate grounded answers. These evaluations detect hallucinations, measure retrieval precision and reveal where pipelines degrade after model updates or knowledge-base changes. Engineers rely on these tools to maintain output quality, prevent regressions, validate prompt and architecture choices and ensure that production answers stay aligned with trusted sources.
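The two core measurements above, retrieval precision and grounding, can be sketched in a few lines. This is a minimal, tool-agnostic illustration, not the API of any of the seven tools covered in the post; the function names and the token-overlap groundedness proxy are our own simplifications (production tools typically use LLM judges or NLI models instead):

```python
def retrieval_precision(retrieved, relevant):
    """Fraction of retrieved chunks that are actually relevant."""
    if not retrieved:
        return 0.0
    return sum(1 for chunk in retrieved if chunk in relevant) / len(retrieved)

def grounded_ratio(answer, context):
    """Naive groundedness proxy: share of answer tokens that also
    appear in the retrieved context. A low score flags answers that
    may not be supported by trusted sources."""
    answer_tokens = answer.lower().split()
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return sum(1 for t in answer_tokens if t in context_tokens) / len(answer_tokens)

# Example: two of the three retrieved chunks are relevant
retrieved = ["chunk_a", "chunk_b", "chunk_c"]
relevant = {"chunk_a", "chunk_c"}
print(round(retrieval_precision(retrieved, relevant), 3))  # 0.667
```

Tracking these scores over time is what reveals regressions after a model update or a knowledge-base change.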

Introducing MLRun v1.10: New tools for building agents and monitoring gen AI

MLRun 1.10, the latest version of our open source AI orchestration framework, is available today to all users. Iguazio started out as a platform to operationalize enterprise machine learning projects. Though we’ve been through quite a few waves of AI in just a short time, the underlying challenges are the same: getting from experimentation to production remains a major blocker.

Banking on Gen AI: Driving Profitable and Scalable Client Engagement with Gen AI Copilots

Wealth management has always been about personal touch. Relationship managers provide white-glove service to elite clientele: guiding investments, financial plans and more. However, they're under growing pressure to serve more clients and drive bank revenue, without diluting that personal connection and service quality. This dual mandate places relationship managers in a catch-22: if they serve more clients, their ability to provide personalized service diminishes, and vice versa.

LLM Observability Tools in 2025

1. Organizations have moved beyond pilots and are embedding LLMs into production workflows across customer support, finance, security, and software delivery.
2. LLM observability mitigates risks like hallucinations, bias, compliance breaches, and runaway costs.
3. LLM observability requires prompt/response tracking, hallucination detection, drift monitoring, RAG pipeline visibility, and long-term context tracing.
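The first requirement, prompt/response tracking, is the foundation the others build on. A minimal sketch of the idea, with `llm_fn` standing in for any model client and an in-memory list standing in for a real trace store (both are illustrative assumptions, not part of any specific observability tool):

```python
import time
import uuid

def traced_call(llm_fn, prompt, log):
    """Wrap an LLM call so every prompt/response pair is recorded with
    a trace id and latency -- the raw material for drift monitoring,
    hallucination review, and cost tracking."""
    trace_id = str(uuid.uuid4())
    start = time.perf_counter()
    response = llm_fn(prompt)
    log.append({
        "trace_id": trace_id,
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.perf_counter() - start, 4),
    })
    return response

# Stand-in model for demonstration only
fake_llm = lambda p: "echo: " + p
records = []
answer = traced_call(fake_llm, "What is MLOps?", records)
print(records[0]["response"])  # echo: What is MLOps?
```

In production, the same wrapper pattern would ship each record to a tracing backend instead of a local list, and attach token counts for cost attribution.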

Managing AI Risks When Implementing Gen AI

As enterprises embed gen AI into their workflows, many are discovering a minefield of risks. Data privacy breaches, misinformation, adversarial attacks and hidden bias are just a few of the challenges that can derail gen AI initiatives. These aren't just technical concerns; they're business-critical issues that can erode trust, trigger legal consequences and tarnish reputations.

Accelerating and Scaling AI Deployments Across Hybrid Environments - MLOps Live #40 with Safaricom

Safaricom, one of the most AI-mature mobile operators, delivers predictive modeling and hyper-personalized financial services to millions of users. But operational challenges were slowing down deployments, limiting their ability to scale and act in real time. In this session, Safaricom's AI team shares how they overcame bottlenecks, scaled deployments faster, and unlocked real-time impact at massive scale with Iguazio's technology.

Best Practices to Develop, Deploy, and Manage Gen AI Copilots

Generative AI copilots are moving from experimental tools to core enterprise solutions. But too often, organizations rush into development, only to discover adoption stalls because the copilot doesn’t solve a specific user problem, lacks trust safeguards, or can’t scale reliably. This guide lays out best practices across the entire lifecycle, from planning and building, to deployment, monitoring, and long-term maintenance.

Orchestrating Multi-Agent Workflows with MCP & A2A

Multi-agent workflows are among the latest advancements in gen AI. In this blog, we explore how to develop such systems, overcome operational challenges, improve system observability, and enable seamless collaboration between agents in complex AI pipelines. We'll cover architecture, the A2A and MCP protocols, and introduce Google Cloud's agentic marketplace.