LLM Evaluation and Testing for Reliable AI Apps - MLOps Live #38 with Evidently AI

In this webinar, we heard firsthand about the challenges and opportunities presented by LLM observability.

We discussed:

  • Real-world risks: real examples of LLM failures in production environments, including hallucinations and vulnerabilities.
  • Practical evaluation techniques: tips for synthetic data generation, building representative test datasets, and applying LLM-as-a-judge methods (a minimal sketch of the judge pattern follows this list).
  • Evaluation-driven workflows: how to integrate evaluation into your LLM product development and monitoring processes.
  • Production monitoring strategies: how to add model monitoring capabilities to deployed LLMs, both in the cloud and on-premises.

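To make the LLM-as-a-judge idea concrete, here is a minimal sketch of the pattern: a second model grades each generated answer against a rubric and returns a structured verdict. It assumes the OpenAI Python client; the judge model, rubric, and JSON schema are illustrative placeholders to adapt, not specifics from the webinar.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative rubric; real judge prompts are tuned per use case.
JUDGE_PROMPT = (
    "You are an impartial evaluator. Given a question and an answer, rate the "
    "answer's correctness on a 1-5 scale and flag likely hallucinations. "
    'Reply in JSON: {"score": <int>, "hallucination": <bool>, "reason": "<why>"}'
)

def judge_answer(question: str, answer: str) -> dict:
    """Grade one generated answer with a second model (LLM-as-a-judge)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
        response_format={"type": "json_object"},  # ask for machine-readable output
        temperature=0,  # keep grading deterministic
    )
    return json.loads(response.choices[0].message.content)

# Example: a deliberately wrong answer should come back flagged.
print(judge_answer("Who created Python?", "Python was created by Linus Torvalds."))
```

Run a judge like this over a representative test dataset, including synthetic cases, and aggregate the scores to track quality across releases.
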
Relevant Links:

  • LLM monitoring in MLRun: https://docs.mlrun.org/en/latest/tutorials/genai-02-model-monitor-llm.html
  • Monitoring in MLRun with the Evidently base class: https://docs.mlrun.org/en/latest/api/mlrun.model_monitoring/index.html#mlrun.model_monitoring.applications.e[…]identlyModelMonitoringApplicationBase
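
For the production-monitoring side, the skeleton below sketches subclassing the Evidently base class from the second link. The class and result names follow the MLRun API reference, but import paths, method signatures, and the context attributes vary across MLRun versions, so treat everything here as an assumption to verify against the linked docs for the version you deploy.

```python
from mlrun.model_monitoring.applications import ModelMonitoringApplicationResult
from mlrun.model_monitoring.applications.evidently import (
    EvidentlyModelMonitoringApplicationBase,
)
# Assumption: result enums live here in recent MLRun versions; path may differ.
from mlrun.common.schemas.model_monitoring.constants import (
    ResultKindApp,
    ResultStatusApp,
)

class LLMQualityApp(EvidentlyModelMonitoringApplicationBase):
    """Runs on each monitoring window of sampled LLM traffic."""

    def do_tracking(self, monitoring_context) -> ModelMonitoringApplicationResult:
        # Assumption: the context exposes the window's sampled inputs/outputs
        # as a DataFrame; an Evidently report would be computed over it here.
        df = monitoring_context.sample_df
        drift_share = 0.0  # placeholder: reduce the Evidently report to one number
        return ModelMonitoringApplicationResult(
            name="llm_drift_share",
            value=drift_share,
            kind=ResultKindApp.data_drift,
            status=ResultStatusApp.no_detection,
        )
```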