How to Run Workloads on Spark Operator with Dynamic Allocation Using MLRun

With the Apache Spark 3.1 release in early 2021, the Spark on Kubernetes project was declared production-ready, and in the years since, Spark on Kubernetes has become the new standard for deploying Spark. In the Iguazio MLOps platform, we built the Spark Operator into the platform to make deploying Spark workloads much simpler.
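
To make this concrete, here is a minimal sketch of the kind of `SparkApplication` manifest the operator consumes, with dynamic allocation enabled. It assumes the upstream spark-on-k8s-operator `v1beta2` CRD; the application name, image, and executor counts are illustrative, not taken from the article:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi-dynamic          # illustrative name
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "apache/spark:3.1.1"     # any Spark 3.1+ image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: "512m"
  executor:
    cores: 1
    memory: "512m"
  dynamicAllocation:
    enabled: true                 # let Spark scale executors with load
    initialExecutors: 2
    minExecutors: 2
    maxExecutors: 10
```

With `dynamicAllocation.enabled`, the operator configures Spark to track shuffle state and scale the executor count between the min and max bounds as the workload demands, instead of pinning a fixed number of pods.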

Pros & Cons of Using a Customer Data Platform as Your Data Warehouse

Does your ecommerce business team understand the customer journey? By tracking individual customers' behavior and interactions across different channels, your organization can better understand what motivates your audience, and cater to them with the right marketing campaigns.

How to Accelerate HuggingFace Throughput by 193%

Deploying models is becoming easier every day, especially thanks to excellent tutorials like Transformers-Deploy. That tutorial walks through converting and optimizing a Hugging Face model and deploying it on the NVIDIA Triton inference server. NVIDIA Triton is an exceptionally fast and solid tool, and it should be very high on the list when searching for ways to deploy a model. Our developers know this, of course, so ClearML Serving uses NVIDIA Triton as its backend when a model needs GPU acceleration.
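
For context on what serving such a model on Triton involves, the sketch below shows a minimal Triton model configuration (`config.pbtxt`) for a transformer exported to ONNX. The model name, tensor names, and batch size are illustrative assumptions; the actual tensor names and shapes depend on how the model was exported:

```
# config.pbtxt -- illustrative Triton model configuration (not from the article)
name: "transformer_onnx"            # hypothetical model repository directory name
platform: "onnxruntime_onnx"
max_batch_size: 16
input [
  { name: "input_ids",      data_type: TYPE_INT64, dims: [ -1 ] },
  { name: "attention_mask", data_type: TYPE_INT64, dims: [ -1 ] }
]
output [
  { name: "logits", data_type: TYPE_FP32, dims: [ -1 ] }
]
dynamic_batching { }                # batch concurrent requests server-side
instance_group [ { kind: KIND_GPU } ]
```

Server-side dynamic batching is one of the main levers behind throughput gains like the one in the title: Triton groups concurrent requests into a single GPU pass instead of running them one at a time.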