  |  By Alexandra Quinn
The original promise of AI was that it would write most of the code for us. In reality, we’re not there yet. So where can AI meaningfully improve developer productivity today? In this post, we look at how AI powers developer productivity across the SDLC, which practical tools to use, and frameworks for overcoming AI operationalization bottlenecks.
  |  By Alexandra Quinn
RAG evaluation measures how effectively a system retrieves relevant context and uses it to generate grounded answers. These evaluations detect hallucinations, measure retrieval precision and reveal where pipelines degrade after model updates or knowledge-base changes. Engineers rely on these tools to maintain output quality, prevent regressions, validate prompt and architecture choices and ensure that production answers stay aligned with trusted sources.
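Retrieval quality is typically scored against gold relevance judgments. A minimal sketch of two of the metrics mentioned above — precision@k and recall@k — is shown below; the document IDs and relevance labels are hypothetical examples, not part of any real evaluation suite:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are actually relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant) / len(top_k)

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents that appear in the top-k results."""
    if not relevant:
        return 0.0
    return sum(1 for doc in retrieved[:k] if doc in relevant) / len(relevant)

retrieved = ["doc_3", "doc_7", "doc_1", "doc_9"]  # ranked retriever output
relevant = {"doc_3", "doc_1", "doc_5"}            # gold relevance judgments

print(precision_at_k(retrieved, relevant, 3))  # 2 of top 3 relevant -> 0.666...
print(recall_at_k(retrieved, relevant, 3))     # 2 of 3 relevant found -> 0.666...
```

Tracking these numbers over time is what surfaces silent regressions after a model update or a knowledge-base change.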
  |  By Gilad Shaham
MLRun 1.10, the latest version of our open source AI orchestration framework, is available today to all users. Iguazio started out as a platform to operationalize enterprise machine learning projects. Though we’ve been through quite a few waves of AI in just a short time, the underlying challenges are the same: getting from experimentation to production remains a major blocker.
  |  By Asaf Somekh
Wealth management has always been about personal touch. Relationship managers provide a white-glove service to elite clientele: guiding investments, financial plans and more. However, they’re under growing pressure to serve more clients and drive bank revenue without diluting that personal connection and service quality. This dual mandate places relationship managers in a catch-22: if they serve more clients, their ability to provide personalized service diminishes, and vice versa.
  |  By Michal Eschar
1. Organizations have moved beyond pilots and are embedding LLMs into production workflows across customer support, finance, security, and software delivery.
2. LLM observability mitigates risks like hallucinations, bias, compliance breaches, and runaway costs.
3. LLM observability requires prompt/response tracking, hallucination detection, drift monitoring, RAG pipeline visibility, and long-term context tracing.
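Prompt/response tracking, the first observability pillar listed above, can be as simple as a wrapper around the model call. The sketch below is illustrative: `fake_llm` stands in for a real model client, and the record schema (prompt, response, latency, timestamp) is an assumption, not a fixed standard:

```python
import time
from datetime import datetime, timezone

TRACE_LOG = []  # in production this would feed a log store, not an in-memory list

def tracked(llm_call):
    """Wrap an LLM call so every prompt/response pair is recorded with latency."""
    def wrapper(prompt):
        start = time.perf_counter()
        response = llm_call(prompt)
        TRACE_LOG.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": response,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return response
    return wrapper

@tracked
def fake_llm(prompt):
    # Placeholder for a real model call.
    return f"echo: {prompt}"

fake_llm("What is our refund policy?")
print(len(TRACE_LOG), TRACE_LOG[0]["prompt"])
```

Once every interaction is captured this way, downstream checks — hallucination detection, drift monitoring, cost accounting — all consume the same trace records.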
  |  By Alexandra Quinn
As enterprises embed gen AI into their workflows, many are discovering a minefield of risks. Data privacy breaches, misinformation, adversarial attacks and hidden bias are just a few of the challenges that can derail gen AI initiatives. These aren't just technical concerns, they're business-critical issues that can erode trust, trigger legal consequences, and tarnish reputations.
  |  By Alexandra Quinn
Generative AI copilots are moving from experimental tools to core enterprise solutions. But too often, organizations rush into development, only to discover adoption stalls because the copilot doesn’t solve a specific user problem, lacks trust safeguards, or can’t scale reliably. This guide lays out best practices across the entire lifecycle, from planning and building, to deployment, monitoring, and long-term maintenance.
  |  By Alexandra Quinn
Multi-agent workflows are among the latest advancements in gen AI. In this blog, we explore how to develop such systems, overcome operational challenges, improve system observability, and enable seamless collaboration between agents in complex AI pipelines. We’ll cover architecture and the A2A and MCP protocols, and introduce Google Cloud’s agentic marketplace.
  |  By Alexandra Quinn
As LLMs become central to AI-driven products like copilots and customer support chatbots, data science teams need to ensure the LLM performs well for the use case. The process of LLM evaluation ensures reliability, safety and performance in production AI systems. In this guide, we explore how to approach evaluations across development and production lifecycles, what frameworks to use, and how the integration between open-source MLRun and Evidently AI enables more scalable, structured testing.
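At its core, a development-time evaluation is a loop: run a fixed test set through the model, score each answer, and gate on an aggregate threshold. The sketch below uses simple keyword-containment scoring for illustration — real suites (including those built with MLRun and Evidently) use much richer metrics — and the model stub and test cases are hypothetical:

```python
def keyword_score(answer, required_keywords):
    """Return 1.0 if every required keyword appears in the answer, else 0.0."""
    answer_lower = answer.lower()
    return 1.0 if all(k.lower() in answer_lower for k in required_keywords) else 0.0

def evaluate(model, cases, threshold=0.8):
    """Score each test case and gate the run on the mean score."""
    scores = [keyword_score(model(c["prompt"]), c["keywords"]) for c in cases]
    mean = sum(scores) / len(scores)
    return {"mean_score": mean, "passed": mean >= threshold}

def stub_model(prompt):
    # Placeholder for a real LLM client call.
    return "Refunds are issued within 14 days of purchase."

cases = [
    {"prompt": "What is the refund window?", "keywords": ["14 days"]},
    {"prompt": "How are refunds issued?", "keywords": ["refund"]},
]

report = evaluate(stub_model, cases)
print(report)  # {'mean_score': 1.0, 'passed': True}
```

Running the same harness in CI, before and after a prompt or model change, is what turns ad-hoc spot checks into regression protection.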
  |  By Alexandra Quinn
Customer service chatbots, co-pilots and smart call center analysis applications are prime use cases for AI and generative AI. These AI systems and agents can provide real-time recommendations, support customer service at scale, generate insights that can be used in downstream applications to reduce churn and increase revenue, and more. How can customer service organizations grow and optimize their use of data and AI?
  |  By Iguazio (Acquired by McKinsey)
In this session of MLOps Live, Joseph Perkins, Product Manager at Vizro by QuantumBlack, and Gilad Shaham, Director of Product Management at Iguazio (A McKinsey Company), discuss how modern AI teams are moving beyond heavy engineering to deliver production-ready, business-visible AI systems using open-source frameworks like MLRun and Vizro. The session includes a live demo of a gen AI application, showing how MLRun and Vizro work together to deliver both operational control and business visibility in production.
  |  By Iguazio (Acquired by McKinsey)
Safaricom, one of the most AI-mature mobile operators, delivers predictive modeling and hyper-personalized financial services to millions of users. But operational challenges were slowing down deployments, limiting their ability to scale and act in real time. In this session, Safaricom’s AI team shares how they overcame bottlenecks, scaled faster, and unlocked real-time impact at massive scale with the Iguazio technology.
  |  By Iguazio (Acquired by McKinsey)
Hear Kaegan Casey, AI/ML Solutions Architect at Seagate, discuss how his team uses MLRun to train thousands of models in parallel.
  |  By Iguazio (Acquired by McKinsey)
In this webinar, we explored cutting-edge tools enabling scalable AI workflows. Discover how MCP (Model Context Protocol) and A2A (an agent-to-agent communication layer) empower teams to design, build, and manage multi-agent workflows with precision.
  |  By Iguazio (Acquired by McKinsey)
In this webinar, we heard firsthand about the challenges and opportunities presented by LLM observability.
  |  By Iguazio (Acquired by McKinsey)
Scaling and maintaining thousands of models in production presents complex, non-trivial challenges. Join us to hear first-hand the secrets to successful deployment, orchestration and management of AI applications in real-time and at scale. Kaegan Casey, AI/ML Solutions Architect at Seagate, shared two of their newest predictive manufacturing use cases, using both batch and real-time functions.
  |  By Iguazio (Acquired by McKinsey)
This demo presents a Telco-focused GenAI agent co-pilot in action, assisting a representative during a live conversation with a customer. In the process, the co-pilot recommends a highly personalized upsell opportunity.

The Iguazio Data Science Platform automates MLOps with end-to-end machine learning pipelines, transforming AI projects into real-world business outcomes. It accelerates the development, deployment and management of AI applications at scale, enabling data scientists to focus on delivering better, more accurate and more powerful solutions instead of spending their time on infrastructure.

The platform is open and deployable anywhere: multi-cloud, on-premises or edge. Iguazio powers real-time data science applications for financial services, gaming, ad-tech, manufacturing, smart mobility and telecoms.

Dive Into the Machine Learning Pipeline:

  • Collect and Enrich Data from Any Source: Ingest multi-model data at scale in real time, including event-driven streaming, time series, NoSQL, SQL and files.
  • Prepare Online and Offline Data at Scale: Explore and manipulate online and offline data at scale, powered by Iguazio's real-time data layer and using your favorite data science and analytics frameworks, already pre-installed in the platform.
  • Accelerate and Automate Model Training: Continuously train models in a production-like environment, dynamically scaling GPUs and managed machine learning frameworks.
  • Deploy in Seconds: Deploy models and APIs from a Jupyter notebook or IDE to production in just a few clicks and continuously monitor model performance.

Bring Your Data Science to Life.