
Future-Proofing Your App: Strategies for Building Long-Lasting Apps

The generative AI industry is changing fast. New models and technologies (Hello GPT-4o) are emerging regularly, each more advanced than the last. This rapid development cycle means that what was cutting-edge a year ago might now be considered outdated. The rate of change demands a culture of continuous learning and technological adaptation.

LLM Validation and Evaluation

LLM evaluation is the process of assessing the performance and capabilities of LLMs. It helps determine how well a model understands and generates language, ensuring that it meets the specific needs of an application. There are multiple ways to perform LLM evaluation, each with different advantages. In this blog post, we explain the role of LLM evaluation in AI lifecycles and the different types of LLM evaluation methods. Finally, we show a demo of a chatbot that was developed with crowdsourcing.
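
To make the reference-based flavor of evaluation concrete, here is a minimal, illustrative sketch in Python (not taken from the blog post itself): it scores a model's answers against reference answers with exact match and token-level F1, assuming a hypothetical generate(prompt) callable that wraps whatever LLM you are evaluating.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a model answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Illustrative evaluation set; in practice this comes from a labeled dataset.
eval_set = [
    {"prompt": "What is the capital of France?", "reference": "Paris"},
]

def evaluate(generate, eval_set):
    """Run a generate(prompt) callable over the eval set and average the F1 scores."""
    scores = [token_f1(generate(item["prompt"]), item["reference"]) for item in eval_set]
    return sum(scores) / len(scores)
```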

Improving LLM Accuracy & Performance - MLOps Live #28 with Databricks

Watch session #28 in our MLOps Live Webinar Series featuring Databricks, where we discuss improving LLM accuracy & performance. Hear Margaret Amori (Databricks), Vijay Balasubramaniam (Databricks), and Yaron Haviv (Iguazio) share best practices and pragmatic advice on improving the accuracy and performance of LLMs while mitigating challenges like risks and escalating costs. See real examples, including techniques to overcome common challenges using tools such as Databricks Mosaic AI and their new open LLM, DBRX.

Integrating LLMs with Traditional ML: How, Why & Use Cases

Ever since the release of ChatGPT in November 2022, organizations have been trying to find new and innovative ways to leverage gen AI to drive organizational growth. LLM capabilities like contextual understanding and responding to natural language prompts enable the development of applications such as automated AI chatbots, smart call center apps and tools for financial services.

LLM Metrics: Key Metrics Explained

Organizations that monitor their LLMs benefit from higher-performing, more efficient models while addressing ethical considerations like ensuring privacy and eliminating bias and toxicity. In this blog post, we cover the top LLM metrics we recommend measuring and when to use each one. Finally, we explain how to implement these metrics in your ML and gen AI pipelines.
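
As a rough sketch of what operational LLM metrics can look like in code (illustrative only; generate stands in for whatever client call your pipeline uses, and the pricing constant is a placeholder), you might record latency, token throughput and estimated cost per request:

```python
import time

def measure_call(generate, prompt: str, price_per_1k_tokens: float = 0.0):
    """Wrap a single generate(prompt) call and record basic operational metrics.

    `generate` is any callable returning the model's text response; the pricing
    argument is illustrative and should be replaced with your provider's rates.
    """
    start = time.perf_counter()
    response = generate(prompt)
    latency = time.perf_counter() - start

    # Rough token count via whitespace split; swap in a real tokenizer if available.
    output_tokens = len(response.split())
    return {
        "latency_seconds": round(latency, 3),
        "output_tokens": output_tokens,
        "tokens_per_second": round(output_tokens / latency, 1) if latency else None,
        "estimated_cost": round(output_tokens / 1000 * price_per_1k_tokens, 6),
    }
```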

Generative AI in Call Centers: How to Transform and Scale Superior Customer Experience

Customer care organizations are facing the disruptions of an AI-enabled future, and gen AI is already impacting them across use cases like agent co-pilots, call summarization and insight generation, chatbots and more. In this blog post, we dive deep into these use cases and their business and operational impact. Then we show a demo of a gen AI-based call center app that you can follow along with.

LLM Validation & Evaluation - MLOps Live #27 with Tasq.ai

In this session, Yaron Haviv, CTO of Iguazio, was joined by Ehud Barnea, PhD, Head of AI at Tasq.ai, and Guy Lecker, ML Engineering Team Lead at Iguazio, to discuss how to validate, evaluate and fine-tune an LLM effectively. They shared firsthand tips on solving the production hurdle of LLM evaluation, improving LLM performance and eliminating risks, along with a live demo of a fashion chatbot that leverages fine-tuning to significantly improve the model's responses.

Why You Need GPU Provisioning for GenAI

GPU as a Service (GPUaaS) is a cost-effective solution for organizations that need more GPUs for their ML and gen AI operations. By optimizing the use of existing resources, GPUaaS allows organizations to build and deploy their applications without waiting for new hardware. In this blog post, we explain how GPUaaS works, how it can close the GPU shortage gap, when to use GPUaaS and how it fits with gen AI.
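
As a minimal illustration of the resource-sharing idea behind GPUaaS (this is not Iguazio's implementation, just a PyTorch-based sketch with a hypothetical worker scheme), a scheduler could cap each worker's memory share so several workloads fit on existing cards instead of waiting for new ones:

```python
import torch

def assign_worker_to_gpu(worker_id: int, memory_fraction: float = 0.25) -> torch.device:
    """Pick a GPU for a worker and cap its memory share so several workers
    can share one card rather than each requiring dedicated hardware."""
    if not torch.cuda.is_available():
        return torch.device("cpu")  # graceful fallback when no GPU is present
    device_index = worker_id % torch.cuda.device_count()
    torch.cuda.set_per_process_memory_fraction(memory_fraction, device=device_index)
    return torch.device(f"cuda:{device_index}")
```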

Gen AI for Customer Service Demo

Iguazio would like to introduce two practical demonstrations showcasing our call center analysis tool and our innovative GenAI assistant. These demos illustrate how our GenAI assistant supports call center agents with real-time advice and recommendations during customer calls. This technology aims to improve customer interactions and boost call center efficiency. We're eager to share how our solutions can transform call center operations.

Best 10 Free Datasets for Manufacturing [UPDATED]

The manufacturing industry can benefit from AI, data and machine learning to advance manufacturing quality and productivity, minimize waste and reduce costs. With ML, manufacturers can modernize their businesses through use cases like forecasting demand, optimizing scheduling, preventing malfunctions and managing quality, all of which contribute significantly to bottom-line improvement.
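
As a toy illustration of the demand-forecasting use case (the function and numbers are made up, not drawn from any of the listed datasets), a simple moving-average baseline in Python could look like this:

```python
def moving_average_forecast(demand_history: list[float], window: int = 3) -> float:
    """Forecast the next period's demand as the mean of the last `window` observations."""
    if len(demand_history) < window:
        raise ValueError("Need at least `window` observations to forecast.")
    recent = demand_history[-window:]
    return sum(recent) / window

# Illustrative monthly unit demand for a single product line.
monthly_units = [120, 135, 128, 142, 150, 147]
print(moving_average_forecast(monthly_units, window=3))  # -> ~146.3
```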