Mastercard Reduces MTTR and Improves Query Processing with Unravel Data

Mastercard is one of the world’s top payment processing platforms, with more than 700 million cards in use worldwide; nearly 40% of US adults hold a Mastercard-branded card. And the company is going from strength to strength: despite losing more than a third of its valuation when the pandemic hit, it has doubled in value three times in the last nine years, recently reaching a market capitalization of more than $350B.

Unravel Data Featured in CRN's 2021 Big Data 100 List

In a press release issued today, Unravel Data announced its appearance on CRN’s Big Data 100 list for 2021, in the Data Management and Integration category. Also featured in this category are other rising stars such as Confluent, Fivetran, Immuta, and Okera, all of whom spoke at the new industry conference DataOps Unleashed, held in March.

"Reverse ETL" with Keboola

TL;DR: Yes, you can do it, and no, you don’t need a separate tool for it. “Reverse ETL” is a fairly recent addition to the data engineer’s dictionary. While you can read article upon article about it (there’s a pretty good primer on the Memory Leak blog), it can be summarized as the art and science of taking data out of your data warehouse and sending it somewhere other than BI, generally into other tools and systems where it becomes operational.
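The pattern described above can be sketched in a few lines: query a computed segment out of the warehouse, then push each record into an operational tool. This is a minimal illustration only; the table, column names, and the `send` callable are hypothetical stand-ins (in practice the source would be Snowflake/BigQuery/etc. and the sink a CRM or ad-platform API client).

```python
import sqlite3

def extract_segment(conn):
    """Pull a computed audience segment from the warehouse (SQLite stand-in here)."""
    cur = conn.execute(
        "SELECT email, lifetime_value FROM customers WHERE lifetime_value > 100"
    )
    return [{"email": email, "ltv": ltv} for email, ltv in cur.fetchall()]

def push_to_crm(records, send):
    """Sync each record downstream via a caller-supplied `send` callable
    (a real implementation would call a CRM or ad-platform API)."""
    for rec in records:
        send(rec)
    return len(records)

# Demo: an in-memory "warehouse" and a plain list standing in for the CRM.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, lifetime_value REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("a@x.com", 250.0), ("b@x.com", 40.0), ("c@x.com", 120.0)],
)

crm_inbox = []
sent = push_to_crm(extract_segment(conn), crm_inbox.append)
print(sent)        # 2 — only customers over the LTV threshold were synced
print(crm_inbox)   # the records now living in the operational tool
```

The point of the pattern: the warehouse stays the source of truth for the computed metric (here, lifetime value), and downstream tools merely receive a copy of it.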

How Data Affects Healthcare | Rise of The Data Cloud | Snowflake

Data-driven healthcare, anonymized data hackathons in a digital data sandbox, leveraging the power of data for good, compute on demand, how the pandemic has affected digital adoption, and how shifting to the cloud impacts patients are just some of the topics covered in today's episode of Snowflake's Rise of the Data Cloud. Join us as Ashok Chennuru, Chief Data and Analytics Officer at Anthem, gives us a peek into the world of AI and healthcare.

Automating and Governing AI over Production Data on Azure - MLOps Live #14 w/ Microsoft

Many enterprises today face numerous challenges around handling data for AI/ML: they find themselves manually extracting datasets from a variety of sources, which wastes time and resources. In this session, we discuss end-to-end automation of the production pipeline and how to govern AI in an automated way. We touch upon setting up a feedback loop, generating explainable AI, and doing all of this at scale.

Industrializing Enterprise AI with the Right Platform - MLOps Live #9 - With NVIDIA

We discuss how enterprises need a platform that brings together tools to streamline the data science workflow with leading-edge infrastructure that can tackle the most complex ML models: one that brings innovative concepts into production sooner, integrated within your existing IT/DevOps approach.

Simplifying Deployment of ML in Federated Cloud and Edge Environments - MLOps Live #12 - with AWS

We discuss some common applications for machine learning at the edge and the main challenges associated with deploying distributed cloud and edge applications. We then wrap up the session with a live demo showing how to run a distributed cloud or edge application on AWS and AWS Outposts with the Iguazio Data Science Platform.

How Feature Stores Accelerate & Simplify Deployment of AI to Production - MLOps Live #13

The breakdown:

00:00 - Intro
02:15 - MLOps Overview
05:03 - Feature Engineering
07:44 - MLOps Workflow
10:44 - Solution: Feature Store
14:25 - Feature Store Competitive Landscape
17:03 - Features of a Feature Store
21:01 - CTO: Feature Store Sneak Peek
25:55 - Python Code example
27:57 - ML Pipeline example
30:07 - Covid-19 Patient Deterioration
33:26 - LIVE DEMO
52:45 - Q&A
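The chapters above walk through why a feature store sits between feature engineering and model serving: features are ingested once and then served consistently to both training and inference. As a toy sketch of that idea only (an in-memory store with invented names, not Iguazio's or any vendor's actual API):

```python
from collections import defaultdict

class MiniFeatureStore:
    """Toy in-memory feature store: pipelines ingest keyed feature values,
    and consumers read back consistent feature vectors by entity ID."""

    def __init__(self):
        # entity_id -> {feature_name: value}
        self._features = defaultdict(dict)

    def ingest(self, entity_id, features):
        """Upsert feature values for one entity (e.g. from a batch pipeline)."""
        self._features[entity_id].update(features)

    def get_vector(self, entity_id, names):
        """Return features in a fixed order, as training and serving both would."""
        row = self._features[entity_id]
        return [row.get(name) for name in names]

store = MiniFeatureStore()
store.ingest("patient_42", {"age": 63, "resp_rate": 22})
store.ingest("patient_42", {"spo2": 94})  # a later pipeline adds another feature
vec = store.get_vector("patient_42", ["age", "resp_rate", "spo2"])
print(vec)  # [63, 22, 94]
```

The key property the sketch shows is the shared, named lookup: because both training and online inference fetch features through the same interface and ordering, the training/serving skew the MLOps workflow chapter describes is avoided. Production feature stores add what this toy omits: point-in-time correctness, low-latency serving, and feature versioning.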