Analytics

How Thrivent Uses Real-Time Data for AI-Driven Fraud Detection

In today’s fast-paced financial services landscape, customers have shorter attention spans than ever. To meet clients’ growing demands for real-time access to information and keep innovating in areas like fraud detection and personalized financial advice, Thrivent needed to overhaul its data infrastructure. With data scattered across siloed legacy systems, diverse tech stacks, and multiple cloud environments, the challenge was daunting. But by adopting Confluent Cloud, Thrivent was able to unify its disparate data systems into a single source of truth.
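
The pattern behind this kind of setup is worth sketching: fraud-relevant events land on a Kafka topic and a consumer scores them as they arrive. Below is a minimal, hypothetical Python sketch using the confluent_kafka client; the topic name, credentials, and score_transaction stub are illustrative placeholders, not Thrivent's actual pipeline.

```python
# Minimal sketch: score transaction events from a Confluent Cloud topic as they arrive.
# Topic name, credentials, and the scoring stub are illustrative placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
    "group.id": "fraud-scoring",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transactions"])  # hypothetical topic name

def score_transaction(txn: dict) -> float:
    """Placeholder for a real fraud model; returns a risk score in [0, 1]."""
    return 1.0 if txn.get("amount", 0) > 10_000 else 0.1

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        txn = json.loads(msg.value())
        if score_transaction(txn) > 0.9:
            print(f"flagging transaction {txn.get('id')} for review")
finally:
    consumer.close()
```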

Databricks + Unravel: Achieve Speed and Scale on the Lakehouse

Companies are under pressure to deliver faster innovation, enabled by cloud-based data analytics and AI. To deliver business value faster, data teams are looking to achieve speed and scale by improving the performance and efficiency of their data and AI pipelines. A recent MIT Technology Review Insights report finds that 72% of technology leaders agree that data challenges are the most likely factor to jeopardize AI/ML goals.

Adobe and Snowflake Deepen Partnership to Rewrite the Next Era of Customer Experience

Adobe has launched Adobe Experience Platform Federated Audience Composition, now generally available on Snowflake. By integrating Snowflake's AI Data Cloud with Adobe Real-Time Customer Data Platform (CDP) and Adobe Journey Optimizer, it allows organizations to unlock seamless interoperability for marketers.
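
The federated idea, in the abstract, is that an audience is defined as a query that runs where the data already lives, rather than as rows copied into the CDP. The sketch below uses the Snowflake Python connector with a hypothetical customer_profiles table and segment criteria; it is not Adobe's integration, just the general shape of a federated audience query.

```python
# Sketch of a federated audience query: the segment is computed in Snowflake,
# where the profile data already lives. Account details, table, and columns
# below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
    database="CUSTOMER_360",
    schema="MARTS",
)

AUDIENCE_SQL = """
    SELECT customer_id, email
    FROM customer_profiles
    WHERE lifetime_value > 1000
      AND last_purchase_date >= DATEADD(day, -90, CURRENT_DATE)
"""

cur = conn.cursor()
try:
    cur.execute(AUDIENCE_SQL)
    audience = cur.fetchall()
    print(f"{len(audience)} profiles qualify for the segment")
finally:
    cur.close()
    conn.close()
```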

Gen AI for Marketing - From Hype to Implementation

Gen AI has the potential to bring immense value to marketing use cases, from content creation to hyper-personalization to product insights, and many more. But if you’re struggling to scale and operationalize gen AI, you’re not alone. That’s where most enterprises struggle. To date, many companies are still in the excitement and exploration phase of gen AI. Few have deployed a number of initial pilots, and even fewer are running simultaneous pilots and building differentiating use cases.

SQL for data exploration in a multi-Kafka world

Every enterprise is modernizing its business systems and applications to respond to real-time data. Within the next few years, we predict that most of an enterprise's data products will be built using a streaming fabric – a rich tapestry of real-time data, abstracted from the infrastructure it runs on. This streaming fabric spans not just one Apache Kafka cluster, but dozens, hundreds, maybe even thousands of them.
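
To make that concrete, suppose each cluster in the fabric is fronted by a SQL layer that speaks the ksqlDB REST API. The sketch below fans one exploratory statement out to several such endpoints from a few lines of Python; the endpoints, stream name, and query are hypothetical placeholders.

```python
# Sketch: run one exploratory SQL statement against several Kafka clusters,
# each fronted by a ksqlDB-style /query endpoint. Endpoints and the stream
# name are hypothetical.
import requests

KSQL_ENDPOINTS = [
    "https://ksqldb.us-east.example.com",
    "https://ksqldb.eu-west.example.com",
]

QUERY = "SELECT * FROM orders_stream EMIT CHANGES LIMIT 5;"

for endpoint in KSQL_ENDPOINTS:
    resp = requests.post(
        f"{endpoint}/query",
        headers={"Accept": "application/vnd.ksql.v1+json"},
        json={"ksql": QUERY, "streamsProperties": {}},
        stream=True,  # rows arrive as a chunked stream of JSON
        timeout=30,
    )
    resp.raise_for_status()
    print(f"--- sample rows from {endpoint} ---")
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))
```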

Govern an Open Lakehouse with Snowflake Open Catalog, a Managed Service for Apache Polaris

To enhance security and ease operational burden, many organizations with data lakes or lakehouses want the flexibility to securely integrate their tools of choice on a single copy of data. Open standards for the storage format and catalog API have helped, but there's still a need for open standards for the catalog, including a consistent way to apply security access controls to data.
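
Apache Polaris implements the Apache Iceberg REST catalog protocol, so any engine or client that speaks that protocol can browse the same governed catalog. As a minimal sketch, assuming a REST endpoint and OAuth client credentials (the URI, credential, and warehouse values below are placeholders), PyIceberg can list the namespaces and tables it is allowed to see:

```python
# Sketch: browse a REST-based Iceberg catalog (such as one implementing the
# Apache Polaris / Iceberg REST protocol) with PyIceberg. The URI, credential,
# and warehouse values are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "open_catalog",
    **{
        "type": "rest",
        "uri": "https://<catalog-host>/api/catalog",   # placeholder endpoint
        "credential": "<client-id>:<client-secret>",   # placeholder OAuth credentials
        "warehouse": "my_open_catalog",                # placeholder catalog name
    },
)

# The same catalog (and its access controls) is visible to any engine that
# speaks the Iceberg REST protocol, so there is one copy of the data to govern.
for namespace in catalog.list_namespaces():
    print("namespace:", namespace)
    for table in catalog.list_tables(namespace):
        print("  table:", table)
```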

Ahead of the Curve: Why Self-Service Data Management Can't Be Ignored

This year's Gartner Hype Cycle for Data Management report mentions self-service data management. It’s a game-changer that gives business users the power to work with data without constantly relying on IT, boosting data quality and making data available for analytics and decision-making. But what is it, really? How do you achieve self-service? Let’s take a closer look.