
Machine Learning

Orchestrating ML Pipelines at Scale with Kubeflow

Still waiting for ML training to finish? Tired of running experiments manually? Not sure how to reproduce results? Wasting too much of your time on DevOps and data wrangling? Spending lots of time tinkering with data science is fine if you're a hobbyist, but data science models are meant to be incorporated into real business applications. Businesses won't invest in data science if they don't see a positive ROI.
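The post walks through the full setup; as a rough sketch of what orchestrating a pipeline looks like in code, the snippet below uses the Kubeflow Pipelines (kfp) v2 SDK. The component names (prepare_data, train_model) and the dummy logic inside them are illustrative placeholders, not steps taken from the post.

```python
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def prepare_data(rows: int) -> int:
    # Placeholder data-preparation step; a real component would read,
    # clean, and write datasets to shared storage.
    return rows


@dsl.component(base_image="python:3.11")
def train_model(rows: int) -> float:
    # Placeholder training step; returns a dummy "accuracy" metric.
    return 0.9 if rows > 0 else 0.0


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(rows: int = 1000):
    # Each call becomes a containerized step; Kubeflow resolves the
    # dependency between them from the data passed along.
    prep = prepare_data(rows=rows)
    train_model(rows=prep.output)


if __name__ == "__main__":
    # Compile to a pipeline spec that can be uploaded to a Kubeflow cluster.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```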

What Are Feature Stores and Why Are They Critical for Scaling Data Science?

A feature store provides a single pane of glass for sharing all available features across the organization. When data scientists start a new project, they can browse this catalog and quickly find the features they are looking for. But a feature store is not only a data layer; it is also a data transformation service that lets users manipulate raw data and store it as features ready to be used by any machine learning model.
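To make those two roles concrete, here is a minimal, illustrative sketch of a catalog that both registers features and applies their transformations to raw records. It is not any particular feature store product, and all names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class FeatureStore:
    """Toy feature store: a catalog for discovering features plus a
    transformation layer that turns raw records into model-ready values."""
    _registry: Dict[str, Callable[[dict], Any]] = field(default_factory=dict)

    def register(self, name: str, transform: Callable[[dict], Any]) -> None:
        # Catalog role: publish a feature so any team can find and reuse it.
        self._registry[name] = transform

    def list_features(self) -> list[str]:
        # What a data scientist browses when starting a new project.
        return sorted(self._registry)

    def get_features(self, names: list[str], raw_record: dict) -> dict:
        # Transformation role: apply the stored logic to raw data on demand.
        return {n: self._registry[n](raw_record) for n in names}


store = FeatureStore()
store.register("order_total_usd", lambda r: r["quantity"] * r["unit_price"])
store.register("is_weekend", lambda r: r["weekday"] in ("sat", "sun"))

row = {"quantity": 3, "unit_price": 9.5, "weekday": "sat"}
print(store.list_features())                          # ['is_weekend', 'order_total_usd']
print(store.get_features(["order_total_usd"], row))   # {'order_total_usd': 28.5}
```

A production feature store adds the pieces this sketch leaves out: persistent offline and online storage, point-in-time correctness, and shared access across teams.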

Automating MLOps for Deep Learning: How to Operationalize DL With Minimal Effort

Operationalizing AI pipelines is notoriously complex. For deep learning applications, the challenge is even greater because of the complexity of the data involved. Without a holistic view of the pipeline, operationalization can take months and require many data science and engineering resources. In this blog post, I'll show you how to move deep learning pipelines from the research environment to production, with minimal effort and without a single line of code.

Of Muffins and Machine Learning Models

While it is a little dated, one amusing example that has been the source of countless internet memes is the famous "is this a chihuahua or a muffin?" classification problem.

Figure 01: Is this a chihuahua or a muffin?

In this example, the machine learning (ML) model struggles to differentiate between a chihuahua and a muffin.
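If you want to see this kind of confusion for yourself, one way is to run a photo through an off-the-shelf ImageNet classifier and inspect the top predictions. The sketch below assumes PyTorch with torchvision 0.13 or later; mystery_photo.jpg is a placeholder path, and this is not the model behind the original meme.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

# Placeholder image path: swap in your own chihuahua-or-muffin photo.
img = Image.open("mystery_photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the five most likely ImageNet classes and their probabilities.
top5 = torch.topk(probs, k=5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx]:20s} {p.item():.3f}")
```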