
Machine Learning

AI and ML: No Longer the Stuff of Science Fiction

Artificial Intelligence (AI) has revolutionized how many industries operate in recent years. But with growing demands, there is a more nuanced need for enterprise-scale machine learning solutions and better data management systems. The 2021 Data Impact Awards aim to honor organizations that have shown exemplary work in this area.

Getting Started with CI/CD and Continual

While CI/CD is synonymous with modern software development best practices, today's machine learning (ML) practitioners still lack comparable tools and workflows for managing the ML development lifecycle at a level on par with software engineering. For background, this article traces a brief history of transformational CI/CD concepts and shows how they are missing from today's ML development lifecycle.

Using Elastic ML to Observe Your Kuma API Observability Metrics

Observability is catching on these days as the de facto way to provide visibility into essential aspects of systems. It would be unwise not to leverage it with Kuma service mesh, the place that allows your services to communicate with the rest of the world. However, many observability solutions restrict themselves to the basics: simple metric collection surfaced on dashboards. Expecting users to sit in their chairs and watch those metrics all day long is an invitation to failure; attention only lasts so long before people get tired and bored.
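The point above is that ML can watch the metrics so humans don't have to. As a toy illustration of the idea (not Elastic ML's actual implementation, which models seasonality, trends, and correlations), here is a simple z-score anomaly detector over hypothetical latency samples:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.5):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the mean of the series.

    A minimal stand-in for automated metric anomaly detection;
    real systems use far richer models.
    """
    if len(values) < 2:
        return []
    m, s = mean(values), stdev(values)
    if s == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - m) / s > threshold]

# Hypothetical request-latency samples (ms) scraped from a service mesh:
latencies = [12, 13, 11, 12, 14, 13, 250, 12, 11, 13]
spikes = zscore_anomalies(latencies)  # flags the 250 ms outlier at index 6
```

An alerting pipeline would run a detector like this continuously and page only when something is flagged, instead of relying on a human staring at a dashboard.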

Analysts Can Now Use SQL to Build and Deploy ML Models with Snowflake and Amazon SageMaker Autopilot

Machine learning (ML) models have become key drivers in helping organizations reveal patterns and make predictions that drive value across the business. Valuable as they are, building and deploying these models remains in the hands of only a small subset of expert data scientists and engineers with deep programming and ML framework expertise.

ODSC West: Building Operational Pipelines for Machine and Deep Learning

MLOps holds the key to accelerating the development and deployment of AI, so that enterprises can derive real business value from their AI initiatives. From the first model deployed to scaling data science across the organization, the foundation you set will enable your team to build and monitor a growing number of AI applications in production. In this talk, we will share best practices from our experience with enterprise customers who have effectively built and deployed composite machine and deep learning pipelines.

ODSC West AI Expo Talk: Real-Time Feature Engineering with a Feature Store

Given the growing number of AI projects and the complexity of bringing them to production, particularly the challenges of feature engineering, the industry needs a way to standardize and automate the core of feature engineering. Feature stores give enterprises a competitive edge by expediting and simplifying the path from lab to production. They enable sharing and reuse of features across teams and projects, saving time and effort and ensuring consistency between training and inference.
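The core idea behind the training/inference consistency mentioned above is that a feature is defined once, with its transformation logic, and both the batch (training) and online (serving) paths use that single definition. A toy in-memory sketch of the concept follows; all names are illustrative, and production stores add storage backends, versioning, and point-in-time correctness:

```python
class FeatureStore:
    """Minimal illustration of a feature store's core contract:
    one registered definition serves both training and inference."""

    def __init__(self):
        self._features = {}   # feature name -> transformation function
        self._online = {}     # (feature name, entity id) -> cached value

    def register(self, name, transform):
        """Register a feature and its transformation, shared by all consumers."""
        self._features[name] = transform

    def materialize(self, name, raw_rows):
        """Batch path: compute the feature for training and cache it online."""
        transform = self._features[name]
        out = {}
        for entity_id, raw in raw_rows.items():
            value = transform(raw)
            out[entity_id] = value
            self._online[(name, entity_id)] = value
        return out

    def get_online(self, name, entity_id):
        """Online path: serve the identical value at inference time."""
        return self._online[(name, entity_id)]

store = FeatureStore()
store.register("avg_order_value", lambda orders: sum(orders) / len(orders))
training_set = store.materialize("avg_order_value",
                                 {"u1": [10, 20], "u2": [30]})
```

Because `materialize` and `get_online` read the same registered transformation, a model trained on `training_set` sees exactly the values it will receive at serving time, which is the consistency guarantee the blurb describes.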

ODSC West MLOps Keynote: Scaling NLP Pipelines at IHS Markit

The data science team at IHS Markit has been hard at work building sophisticated NLP pipelines that work at scale using the Iguazio MLOps platform and open-source MLRun framework. Today they will share their journey and provide advice for other data science teams. Nick (IHS Markit) and Yaron (Iguazio) will share their approach to automating the NLP pipeline end to end. They'll also provide details on leveraging capabilities such as Spot integration and Serving Graphs to reduce costs and improve the data science process.

Introduction to TF Serving

Machine learning (ML) model serving refers to the series of steps that turn a trained model into a service that a system can query to receive a relevant prediction for an end user. These steps typically involve required pre-processing of the input, a prediction request to the model, and relevant post-processing of the model output to apply business logic.
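The three steps above (pre-process, predict, post-process) can be sketched as a thin client around TF Serving's REST API. The endpoint shape follows TF Serving's documented REST interface; the model name, input scaling, and business rule here are illustrative assumptions:

```python
import json
import urllib.request

def preprocess(pixels):
    """Pre-processing: scale raw 0-255 pixel values into the [0, 1]
    range a hypothetical image model expects."""
    return [p / 255.0 for p in pixels]

def predict(instances, model="my_model", host="localhost", port=8501):
    """Prediction request: POST instances to TF Serving's REST
    predict endpoint, POST /v1/models/<name>:predict."""
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]

def postprocess(scores, threshold=0.5):
    """Post-processing: apply business logic by turning raw model
    scores into accept/reject decisions."""
    return ["accept" if s >= threshold else "reject" for s in scores]
```

A caller would chain these as `postprocess(predict([preprocess(raw)]))` against a running TF Serving instance; wrapping all three behind a single service endpoint is what turns the trained model into a product-facing prediction service.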