
Latest News

How ClearML Helps Teams Get More out of Slurm

In recent years, companies have begun amassing GPU firepower to build their own AI computing infrastructure and keep up with a growing number of compute requests. Many modern AI tools let data scientists work on data, run experiments, and train models seamlessly, submitting jobs and monitoring their progress along the way. However, for many organizations with mature supercomputing capabilities, Slurm remains the scheduler of choice for managing computing clusters.
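As a rough illustration (not taken from the post itself), the sketch below shows how a data scientist might enqueue a training script through the ClearML SDK; the queue name and hyperparameters are assumptions, and an agent backing that queue, for example one bridging to a Slurm cluster, would pick up and execute the job.

```python
# Minimal sketch: enqueue a training job via the ClearML SDK.
# Assumes a ClearML server is configured and an agent is listening
# on the "slurm_queue" queue (the queue name is hypothetical).
from clearml import Task

task = Task.init(project_name="demo-project", task_name="train-model")

# Hyperparameters are logged so they can be inspected and overridden later.
params = {"epochs": 10, "batch_size": 64, "lr": 1e-3}
task.connect(params)

# Stop executing locally and re-schedule this task on the remote queue;
# whichever agent services the queue (e.g. one running against a Slurm
# cluster) will run the rest of the script.
task.execute_remotely(queue_name="slurm_queue", exit_process=True)

# --- everything below runs only on the remote worker ---
print(f"Training for {params['epochs']} epochs at lr={params['lr']}")
```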

ClearML Supports Seamless Orchestration and Infrastructure Management for Kubernetes, Slurm, PBS, and Bare Metal

Our roadmap in early 2024 has focused largely on improving orchestration and compute infrastructure management. Last month we released the Resource Allocation Policy Management Control Center, with a new, streamlined UI that helps teams visualize their compute infrastructure and understand which users have access to which resources.

Integrating LLMs with Traditional ML: How, Why & Use Cases

Ever since the release of ChatGPT in November 2022, organizations have been trying to find new and innovative ways to leverage gen AI to drive organizational growth. LLM capabilities such as contextual understanding and responding to natural language prompts enable applications like automated AI chatbots, smart call center apps, and tools for financial services.

LLM Metrics: Key Metrics Explained

Organizations that monitor their LLMs benefit from higher-performing, more efficient models while meeting ethical considerations such as preserving privacy and reducing bias and toxicity. In this blog post, we cover the top LLM metrics we recommend measuring and when to use each one. Finally, we explain how to implement these metrics in your ML and gen AI pipelines.
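To make the idea concrete, here is a small, self-contained sketch (not from the post) of one common LLM metric, perplexity, computed from per-token log-probabilities; in practice these log-probs would come from your model's output rather than the hard-coded values used here.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(-mean log-probability) over the generated tokens."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# Hypothetical per-token log-probs returned by a model for one response.
logprobs = [-0.21, -1.35, -0.08, -2.10, -0.45]
print(f"perplexity: {perplexity(logprobs):.2f}")  # lower is better
```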

Why RAG Has a Place in Your LLMOps

With the explosion of generative AI tools available for providing information, making recommendations, or creating images, LLMs have captured the public imagination. Although an LLM cannot be expected to have all the information we want, and may sometimes even return inaccurate information, consumer enthusiasm for generative AI tools continues to build.
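For readers new to the pattern, the following is a minimal sketch of retrieval-augmented generation under simplifying assumptions: documents are ranked by cosine similarity against the query embedding, and the best matches are prepended to the prompt. The `embed` and `generate` placeholders stand in for whatever embedding model and LLM you actually use.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, doc_vecs: list[np.ndarray],
             docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query embedding."""
    scored = sorted(zip(docs, doc_vecs),
                    key=lambda pair: cosine_similarity(query_vec, pair[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:top_k]]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Prepend retrieved context so the LLM grounds its answer in it."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# embed() and generate() are placeholders for your embedding model and LLM:
# prompt = build_prompt(question, retrieve(embed(question), doc_vecs, docs))
# answer = generate(prompt)
```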

Generative AI in Call Centers: How to Transform and Scale Superior Customer Experience

Customer care organizations are facing the disruptions of an AI-enabled future, and gen AI is already making an impact across use cases such as agent co-pilots, call summarization and insight extraction, chatbots, and more. In this blog post, we dive deep into these use cases and their business and operational impact, and then walk through a demo of a gen AI-based call center app that you can follow along with.
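As a taste of the call-summarization use case, here is a small hypothetical sketch of how a transcript might be turned into a summarization prompt; `complete()` stands in for whichever LLM API the demo in the post actually uses.

```python
def summarization_prompt(transcript: str) -> str:
    """Build a prompt asking an LLM to summarize a support call."""
    return (
        "Summarize the following call center transcript in 3 bullet points: "
        "the customer's issue, the resolution offered, and any follow-up actions.\n\n"
        f"Transcript:\n{transcript}"
    )

transcript = (
    "Agent: Thanks for calling, how can I help?\n"
    "Customer: My last invoice was charged twice.\n"
    "Agent: I see the duplicate charge; I've issued a refund, "
    "it should appear within 5 business days."
)

prompt = summarization_prompt(transcript)
# summary = complete(prompt)  # complete() is a placeholder for your LLM client
print(prompt)
```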

Predict Known Categorical Outcomes with Snowflake Cortex ML Classification, Now in Public Preview

Today, enterprises are focused on enhancing decision-making with the power of AI and machine learning (ML). But the complexity of ML models and data science techniques often leaves behind organizations that lack data scientists or have limited data science resources. Even for organizations with strong data analyst teams, complex ML models and frameworks can seem overwhelming, preventing them from driving faster, higher-quality insights.

Open Source Fractional GPUs for Everyone, Now Available from ClearML

If you’ve been following our news, you know we just announced free fractional GPU capabilities for open source users. As part of our open source and free tier offering, this enables multi-tenancy on NVIDIA GPUs, letting users optimize their GPU utilization and run multiple AI workloads on the same hardware.
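This is not ClearML's container-level implementation; purely as a conceptual illustration of slicing a GPU's memory between workloads, the snippet below uses PyTorch's own per-process memory cap.

```python
import torch

# Conceptual illustration only: cap this process at roughly half of GPU 0's
# memory so another workload can share the card. ClearML's open source
# fractional GPU support works at the container level, not via this API.
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)
    x = torch.randn(1024, 1024, device="cuda:0")  # allocations beyond the cap raise OOM
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1e6:.1f} MB")
else:
    print("No CUDA device available; nothing to demonstrate.")
```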