
Iguazio

Scaling NLP Pipelines at IHS Markit - MLOps Live #17

The data science team at IHS Markit will share practical advice on building sophisticated NLP pipelines that work at scale. Using a robust, automated MLOps process, they run complex models that make massive amounts of unstructured data searchable and indexable. In this session, they will share their MLOps journey and offer practical advice for other data science teams looking to do the same.

Automating MLOps for Deep Learning

MLOps holds the key to accelerating the development, deployment and management of AI, so that enterprises can derive real business value from their AI initiatives. Deploying and managing deep learning models in production carries its own set of complexities. In this talk, we will discuss real-life examples from customers that have built MLOps pipelines for deep learning use cases, such as predicting rainfall from CCTV footage to prevent flooding.

ODSC West: Building Operational Pipelines for Machine and Deep Learning

MLOps holds the key to accelerating the development and deployment of AI, so that enterprises can derive real business value from their AI initiatives. From the first model deployed to scaling data science across the organization, the foundation you set will enable your team to build and monitor a growing number of AI applications in production. In this talk, we will share best practices from our experience with enterprise customers who have effectively built and deployed composite machine and deep learning pipelines.

ODSC West AI Expo Talk: Real-Time Feature Engineering with a Feature Store

Given the growing number of AI projects, the complexities of bringing them to production, and specifically the challenges of feature engineering, the industry needs a way to standardize and automate the core of feature engineering. Feature stores give enterprises a competitive edge by expediting and simplifying the path from lab to production. They enable sharing and reuse of features across teams and projects, saving time and effort and ensuring consistency between training and inference.
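The training/inference consistency point can be sketched in a few lines: define each feature transformation once, register it in a shared catalog, and have both the training and serving paths build their feature vectors from that single definition. This is a toy illustration of the idea, not the API of any real feature store; all names here are made up.

```python
# Minimal sketch of the feature-store idea: one registry of feature
# definitions, reused verbatim by training and inference so the two
# paths cannot drift apart. All names are illustrative.

FEATURES = {}

def feature(name):
    """Decorator that registers a feature transformation under a name."""
    def register(fn):
        FEATURES[name] = fn
        return fn
    return register

@feature("amount_zscore")
def amount_zscore(record, mean=50.0, std=10.0):
    # Hypothetical feature: standardized transaction amount.
    return (record["amount"] - mean) / std

def build_vector(record):
    # Both offline training and online inference call this same function,
    # guaranteeing identical feature computation in both paths.
    return [FEATURES[name](record) for name in sorted(FEATURES)]

train_row = build_vector({"amount": 70.0})  # offline path
serve_row = build_vector({"amount": 70.0})  # online path
assert train_row == serve_row
```

Because the transformation lives in one place, a change to the feature definition automatically applies to both paths, which is the consistency guarantee the abstract describes.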

ODSC West MLOps Keynote: Scaling NLP Pipelines at IHS Markit

The data science team at IHS Markit has been hard at work building sophisticated NLP pipelines that work at scale using the Iguazio MLOps platform and the open-source MLRun framework. Today they will share their journey and provide advice for other data science teams looking to do the same. Nick (IHS Markit) and Yaron (Iguazio) will share their approach to automating the NLP pipeline end to end, and will detail how capabilities such as Spot integration and Serving Graphs can reduce costs and improve the data science process.
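The serving-graph concept mentioned above can be illustrated with a toy pipeline: a chain of named steps where each step's output becomes the next step's input. This is a plain-Python sketch of the pattern, not the actual MLRun serving-graph API; the step names and functions are invented for illustration.

```python
# Toy illustration of a serving graph: an ordered chain of named steps.
# Each step receives the previous step's output, similar in spirit to
# the serving graphs described in the talk (not a real MLRun API).

class Graph:
    def __init__(self):
        self.steps = []

    def to(self, name, fn):
        # Append a named step and return self to allow fluent chaining.
        self.steps.append((name, fn))
        return self

    def run(self, event):
        # Push the event through every step in order.
        for name, fn in self.steps:
            event = fn(event)
        return event

# Hypothetical NLP pre-processing chain for unstructured text.
graph = (Graph()
         .to("clean", lambda text: text.strip().lower())
         .to("tokenize", lambda text: text.split())
         .to("count", lambda tokens: {"n_tokens": len(tokens)}))

print(graph.run("  Unstructured TEXT to index  "))  # {'n_tokens': 4}
```

Structuring the pipeline as a graph of small steps is what makes it practical to scale individual stages independently or move them onto cheaper compute such as Spot instances.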

Introduction to TF Serving

Machine learning (ML) model serving refers to the series of steps that allow you to create a service out of a trained model that a system can then ping to receive a relevant prediction output for an end user. These steps typically involve required pre-processing of the input, a prediction request to the model, and relevant post-processing of the model output to apply business logic.
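The three steps above (pre-process, predict, post-process) can be sketched as a single serving function. The model call is stubbed out here with a fixed score; in a real TF Serving deployment it would be an HTTP request to the model server's `:predict` REST endpoint. The function names, tokenization scheme, and threshold are all illustrative assumptions.

```python
# Sketch of the three serving steps: pre-process the input, request a
# prediction, then post-process the output with business logic.
# `predict` is a stub standing in for a call to a model server.

def preprocess(raw_text):
    # Example pre-processing: normalize and tokenize the raw input
    # into the request payload shape the model expects.
    tokens = raw_text.lower().split()
    return {"instances": [tokens]}

def predict(request):
    # Stub for the model server; returns a hard-coded score so the
    # sketch is self-contained and runnable.
    return {"predictions": [[0.92]]}

def postprocess(response, threshold=0.5):
    # Business logic applied to the raw model output: map the score
    # to a label the end user can act on.
    score = response["predictions"][0][0]
    label = "positive" if score >= threshold else "negative"
    return {"label": label, "score": score}

def serve(raw_text):
    # The full serving path: pre-process -> predict -> post-process.
    return postprocess(predict(preprocess(raw_text)))

print(serve("Great product!"))  # {'label': 'positive', 'score': 0.92}
```

Keeping the three steps as separate functions makes each one independently testable, and lets the stubbed `predict` be swapped for a real model-server call without touching the pre- or post-processing logic.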

It Worked Fine in Jupyter. Now What?

You got through all the hurdles of getting the data you need; you worked hard training that model, and you are confident it will work. You just need to run it with a larger data set, more memory and maybe GPUs. And then... well. Running your code at scale, in an environment other than your own, can be a nightmare. You have probably experienced this yourself or read about it in the ML community. How frustrating is that? All your hard work and nothing to show for it.

How to Bring Breakthrough Performance and Productivity To AI/ML Projects

By Jean-Baptiste Thomas, Pure Storage & Yaron Haviv, Co-Founder & CTO of Iguazio

You trained and built models using interactive tools over data samples, and are now building an application around them to bring tangible value to the business. A year later, however, you find that you have spent an endless amount of time and resources, but your application is still not fully operational, or isn't performing as well as it did in the lab. Don't worry, you are not alone.