
How Seagate Runs Advanced Manufacturing at Scale With Iguazio

Seagate is the world’s leading data storage solutions company. Together with Iguazio, Seagate manages data engineering at scale while harnessing petabytes of data, efficiently utilizing resources, bridging the gap between data engineering and data science, and creating one production-ready environment with enterprise capabilities. In this new webinar, Vamsi Paladugu, Sr.

McKinsey Acquires Iguazio: Our Startup's Journey

Eight years ago, when I founded Iguazio together with my co-founders Yaron Haviv, Yaron Segev and Orit Nissan-Messing, I never thought I would be making this announcement on our company blog: McKinsey has acquired Iguazio! When we first embarked on this journey, we realized that while AI has the ability to transform any industry - from banking to retail to manufacturing - in reality most data science projects fail.

Distributed Feature Store Ingestion with Iguazio, Snowflake, and Spark

Enterprises that are actively increasing their AI maturity in a bid to achieve business transformation often find that with increased maturity comes increased complexity. For use cases that require very large datasets, the tech stacks required to meet business needs quickly become unwieldy.

Looking into 2023: Predictions for a New Year in MLOps

In 2022, AI and ML came into the mainstream consciousness, with generative AI applications like Dall-E and GPT AI becoming massively popular among the general public, and ethical questions of AI usage stirring up impassioned public debate. No longer a side project for forward-thinking businesses or CEOs who find it intriguing, AI and ML are now moving towards the center of the business.

Iguazio Named a Major Player in the IDC MLOps MarketScape 2022

The IDC MarketScape: Worldwide Machine Learning Operations Platforms 2022 Vendor Assessment is an annual study that evaluates technology vendors based on a comprehensive framework. It provides an in-depth quantitative and qualitative assessment of MLOps solution vendors in a long-form research report, to help buyers make important technology decisions that will create long-term business success.

Iguazio Named a Leader and Outperformer In GigaOm Radar for MLOps 2022

The GigaOm Radar reports support leaders looking to evaluate technologies with an eye towards the future. In this year's Radar for MLOps report, GigaOm gave Iguazio top scores on multiple evaluation metrics, including Advanced Monitoring, Autoscaling & Retraining, CI/CD, and Deployment. Iguazio was therefore named a leader and also classified as an Outperformer for its rapid pace of innovation.

Deploying Your Hugging Face Models to Production at Scale with MLRun

Hugging Face is a popular model repository that provides simplified tools for building, training and deploying ML models. The growing adoption of Hugging Face among data professionals, alongside the increasing global need for more efficient and sustainable ML model development and deployment, makes Hugging Face an important technology and platform to learn and master.

How to Run Workloads on Spark Operator with Dynamic Allocation Using MLRun

Since the Apache Spark 3.1 release in early 2021, the Spark on Kubernetes project has been production-ready, and Spark on Kubernetes has become the new standard for deploying Spark. We built the Spark Operator into the Iguazio MLOps platform to make deploying Spark workloads much simpler.
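For reference, dynamic allocation in Spark is controlled by a handful of standard configuration keys. The keys below are real Spark settings; the values are illustrative choices only, not the platform's defaults:

```python
# Illustrative Spark configuration enabling dynamic allocation on Kubernetes.
# The keys are standard Spark settings; the values are example choices only.
spark_conf = {
    "spark.dynamicAllocation.enabled": "true",
    # On Kubernetes, shuffle tracking replaces the external shuffle service.
    "spark.dynamicAllocation.shuffleTracking.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "1",
    "spark.dynamicAllocation.maxExecutors": "10",
    # Executors idle longer than this are released back to the cluster.
    "spark.dynamicAllocation.executorIdleTimeout": "60s",
}
```

With settings like these, Spark scales the executor count between the min and max bounds based on the pending task backlog, so a job only holds cluster resources while it actually needs them.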

Building an Automated ML Pipeline with a Feature Store Using Iguazio & Snowflake

When operationalizing machine and deep learning, a production-first approach is essential for moving from research and development to scalable production pipelines in a much faster and more effective manner. Without the need to refactor code, add glue logic and spend significant effort on data and ML engineering, more models will make it to production, with fewer issues like drift.

Iguazio Product Update: Optimize Your ML Workload Costs with AWS EC2 Spot Instances

Iguazio users can now run their ML workloads on AWS EC2 Spot instances. When running ML functions, you might want to control whether to run on Spot nodes or On-Demand compute instances. When deploying the Iguazio MLOps platform on AWS, users can now choose to run a job (e.g., model training) or deploy a serving function on AWS EC2 Spot compute instances.
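At the Kubernetes level, a "run on Spot" choice like this typically maps to scheduling hints on the pod. The sketch below is a hypothetical plain-Python illustration of that mechanism, not the Iguazio or MLRun API; the `node-lifecycle` label/taint name is an assumption for the example:

```python
# Hypothetical illustration of how a Spot vs. On-Demand choice usually maps
# to Kubernetes scheduling hints (node selector + toleration). This is NOT
# the Iguazio/MLRun API, just a sketch of the underlying mechanism.

def pod_scheduling_for(spot: bool) -> dict:
    """Return scheduling hints for a training or serving pod."""
    if not spot:
        # On-Demand: no special constraints; any node may run the pod.
        return {"nodeSelector": {}, "tolerations": []}
    # Spot: pin the pod to spot-capacity nodes and tolerate their taint,
    # accepting that it may be preempted when AWS reclaims the instance.
    return {
        "nodeSelector": {"node-lifecycle": "spot"},
        "tolerations": [{
            "key": "node-lifecycle",
            "operator": "Equal",
            "value": "spot",
            "effect": "NoSchedule",
        }],
    }

spot_spec = pod_scheduling_for(spot=True)
ondemand_spec = pod_scheduling_for(spot=False)
```

The trade-off this encodes: Spot nodes are substantially cheaper but preemptible, so they suit restartable work like training jobs, while latency-sensitive serving functions may be better left on On-Demand capacity.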