
Iguazio

Iguazio Named a Leader and Outperformer In GigaOm Radar for MLOps 2022

The GigaOm Radar reports help leaders evaluate technologies with an eye toward the future. In this year's Radar for MLOps report, GigaOm gave Iguazio top scores on multiple evaluation metrics, including Advanced Monitoring, Autoscaling & Retraining, CI/CD, and Deployment. Iguazio was named a Leader and also classified as an Outperformer for its rapid pace of innovation.

Deploying Your Hugging Face Models to Production at Scale with MLRun

Hugging Face is a popular model repository that provides simplified tools for building, training and deploying ML models. The growing adoption of Hugging Face among data professionals, alongside the increasing global need for efficiency and sustainability when developing and deploying ML models, makes Hugging Face an important technology and platform to learn and master.

How to Easily Deploy Your Hugging Face Models to Production - MLOps Live #20 - With Hugging Face

Watch Julien Simon (Hugging Face), Noah Gift (MLOps Expert) and Yaron Haviv (Iguazio) discuss how you can deploy models into real business environments, serve them continuously at scale, manage their lifecycle in production, and much more in this on-demand webinar!

How to Run Workloads on Spark Operator with Dynamic Allocation Using MLRun

Spark on Kubernetes became production-ready with the Apache Spark 3.1 release in early 2021, and it has since become the new standard for deploying Spark. In the Iguazio MLOps platform, we built the Spark Operator into the platform to make deploying Spark workloads much simpler.
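As a rough illustration of what running under the Spark Operator looks like, the sketch below shows a minimal SparkApplication manifest with dynamic allocation enabled. The application name, image tag, and resource values are illustrative assumptions, not taken from the post:

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-dynamic-example        # illustrative name
  namespace: default
spec:
  type: Python
  mode: cluster
  image: "gcr.io/spark-operator/spark-py:v3.1.1"   # example image tag
  mainApplicationFile: "local:///opt/spark/examples/src/main/python/pi.py"
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark
  executor:
    cores: 1
    memory: "512m"
  dynamicAllocation:
    enabled: true       # let Spark scale executors with the workload
    initialExecutors: 1
    minExecutors: 1
    maxExecutors: 5
```

With `dynamicAllocation` enabled, the operator lets Spark request and release executor pods between the configured minimum and maximum as the workload demands, rather than pinning a fixed executor count for the job's lifetime.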

Building an Automated ML Pipeline with a Feature Store Using Iguazio & Snowflake

When operationalizing machine and deep learning, a production-first approach is essential for moving from research and development to scalable production pipelines much faster and more effectively. Without the need to refactor code, add glue logic and spend significant effort on data and ML engineering, more models make it to production, with fewer issues such as drift.