
Machine Learning

How ClearML Helps Teams Get More out of Slurm

It is a fairly recent trend for companies to amass GPU firepower to build their own AI computing infrastructure and support a growing number of compute requests. Many recent AI tools enable data scientists to work on data, run experiments, and train models seamlessly, submitting jobs and monitoring their progress along the way. However, for many organizations with mature supercomputing capabilities, Slurm has long been the scheduling tool of choice for managing computing clusters.

ClearML Supports Seamless Orchestration and Infrastructure Management for Kubernetes, Slurm, PBS, and Bare Metal

Our early 2024 roadmap has largely focused on improving orchestration and compute infrastructure management capabilities. Last month we released a Resource Allocation Policy Management Control Center with a new, streamlined UI to help teams visualize their compute infrastructure and understand which users have access to which resources.

Improving LLM Accuracy & Performance - MLOps Live #28 with Databricks

Watch session #28 in our MLOps Live Webinar Series, featuring Databricks, where we discuss improving LLM accuracy & performance. Hear Margaret Amori (Databricks), Vijay Balasubramaniam (Databricks), and Yaron Haviv (Iguazio) share best practices and pragmatic advice on improving the accuracy and performance of LLMs while mitigating challenges like risk and escalating costs. See real examples, including techniques to overcome common challenges using tools such as Databricks Mosaic AI and their new open LLM, DBRX.

Integrating LLMs with Traditional ML: How, Why & Use Cases

Ever since the release of ChatGPT in November 2022, organizations have been trying to find new and innovative ways to leverage gen AI to drive organizational growth. LLM capabilities like contextual understanding and response to natural language prompts enable the development of applications like automated AI chatbots and smart call center apps, including for industries such as financial services.

LLM Metrics: Key Metrics Explained

Organizations that monitor their LLMs will benefit from higher-performing, more efficient models while meeting ethical considerations like ensuring privacy and eliminating bias and toxicity. In this blog post, we present the top LLM metrics we recommend measuring and explain when to use each one. Finally, we explain how to implement these metrics in your ML and gen AI pipelines.

Why RAG Has a Place in Your LLMOps

With the explosion of generative AI tools available for providing information, making recommendations, or creating images, LLMs have captured the public imagination. Although an LLM cannot be expected to have all the information we want, and may sometimes even produce inaccurate information, consumer enthusiasm for generative AI tools continues to build.

Generative AI in Call Centers: How to Transform and Scale Superior Customer Experience

Customer care organizations are facing the disruptions of an AI-enabled future, and gen AI is already impacting customer care across use cases like agent co-pilots, call summarization and insight extraction, chatbots, and more. In this blog post, we dive deep into these use cases and their business and operational impact. Then we show a demo of a gen AI-based call center app that you can follow along with.

Predict Known Categorical Outcomes with Snowflake Cortex ML Classification, Now in Public Preview

Today, enterprises are focused on enhancing decision-making with the power of AI and machine learning (ML). But the complexity of ML models and data science techniques often leaves behind organizations that lack data scientists or have limited data science resources. And even for organizations with strong data analyst resources, complex ML models and frameworks may seem overwhelming, potentially preventing them from driving faster, higher-quality insights.

LLM Validation & Evaluation MLOps Live #27 with Tasq.ai

In this session, Yaron Haviv, CTO of Iguazio, was joined by Ehud Barnea, PhD, Head of AI at Tasq.ai, and Guy Lecker, ML Engineering Team Lead at Iguazio, to discuss how to validate, evaluate, and fine-tune an LLM effectively. They shared firsthand tips on solving the production hurdles of LLM evaluation, improving LLM performance, and eliminating risks, along with a live demo of a fashion chatbot that leverages fine-tuning to significantly improve the model's responses.