
October 2024

What Made Current 2024 Unforgettable? Hear From Our Attendees | Current 2024

In this recap video from Current 2024, attendees share their favorite moments from the event. From insightful talks on data streaming innovation to hands-on workshops and networking opportunities, hear what participants found most valuable.

Shift Left: Headless Data Architecture, Part 2

The headless data architecture is the formalization of a data access layer at the center of your organization. Encompassing both streams and tables, it provides consistent data access for both operational and analytical use cases. Streams provide low-latency capabilities that enable timely reactions to events, while tables provide higher-latency but highly efficient batch querying. You simply choose the processing head most relevant to your requirements and plug it into the data.
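
As a minimal illustration of "plugging in a head", here is what a streaming head over the data layer might look like with Flink SQL run from Kotlin. The orders topic, its fields, and the broker address are hypothetical; a batch head would instead be a TableEnvironment in batch mode pointed at the table form of the same data (for example, an Iceberg table).

```kotlin
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.TableEnvironment

fun main() {
    // Streaming "head": plug into the stream form of the data for
    // low-latency reactions. A batch head would create a TableEnvironment
    // in batch mode over the table form of the same logical data instead.
    val env = TableEnvironment.create(EnvironmentSettings.inStreamingMode())
    env.executeSql(
        """
        CREATE TABLE orders (order_id STRING, amount DOUBLE) WITH (
          'connector' = 'kafka',
          'topic' = 'orders',
          'properties.bootstrap.servers' = 'localhost:9092',
          'format' = 'json',
          'scan.startup.mode' = 'latest-offset'
        )
        """.trimIndent()
    )
    // A continuous query: results update as new order events arrive.
    env.executeSql("SELECT order_id, amount FROM orders WHERE amount > 1000").print()
}
```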

Windowing with Table-Valued Functions | Apache Flink SQL

Apache Flink SQL makes it easy to implement analytics that summarize important attributes of real-time data streams. There are four different types of time-based windows in Flink SQL: tumbling, hopping, cumulating, and session. Learn how these various window types behave, and how to work with the table-valued functions that are at the heart of Flink SQL’s support for windowing.
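
For a taste of what the video covers, here is a hedged sketch of a tumbling window built with the TUMBLE table-valued function; the clicks table and its fields are invented for illustration, and the datagen connector supplies synthetic rows so the query runs without external infrastructure.

```kotlin
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.TableEnvironment

fun main() {
    val env = TableEnvironment.create(EnvironmentSettings.inStreamingMode())

    // A self-contained source: 'datagen' emits synthetic rows continuously.
    env.executeSql(
        """
        CREATE TABLE clicks (
            user_id INT,
            url     STRING,
            ts      TIMESTAMP(3),
            WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
        ) WITH ('connector' = 'datagen', 'rows-per-second' = '10')
        """.trimIndent()
    )

    // TUMBLE is a table-valued function: it takes a table and a time
    // attribute and returns the table augmented with window_start and
    // window_end columns, which the query then groups by.
    env.executeSql(
        """
        SELECT window_start, window_end, COUNT(*) AS clicks_per_window
        FROM TABLE(TUMBLE(TABLE clicks, DESCRIPTOR(ts), INTERVAL '1' MINUTES))
        GROUP BY window_start, window_end
        """.trimIndent()
    ).print()
}
```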

How Thrivent Uses Real-Time Data for AI-Driven Fraud Detection

In today’s fast-paced financial services landscape, customers have a shorter attention span than ever. To meet clients’ growing demands for real-time access to information and keep innovating in areas like fraud detection and personalized financial advice, Thrivent needed to overhaul its data infrastructure. With data scattered across siloed legacy systems, diverse tech stacks, and multiple cloud environments, the challenge was daunting. But by adopting Confluent Cloud, Thrivent was able to unify its disparate data systems into a single source of truth.

Shift Left: Headless Data Architecture, Part 1

The headless data architecture is an organic emergence of the separation of data storage, management, optimization, and access from the services that write, process, and query it. With this architecture, you can manage your data from a single logical location, including permissions, schema evolution, and table optimizations. And, to top it off, it makes regulatory compliance a lot simpler, because your data resides in one place, instead of being copied around to every processing engine that needs it.

Why Real-Time Data is Crucial to Developing Generative AI Models

Learn how GEP, an AI-powered supply chain and procurement company, harnesses real-time data streaming through Confluent Cloud to fuel its generative AI solutions. With seamless integration into Azure OpenAI services and GPT models, GEP’s generative AI chatbot delivers document summaries and risk management insights to its customers.

How Confluent Fuels Gen AI Chat Models with Real-Time Data

Discover how GEP, an AI-powered procurement company, utilized Confluent's data streaming platform to transform its generative AI capabilities. Integrating real-time data into their AI models enabled GEP to provide a contextual chat-based service, one that allowed GEP customers to build their own tools simply by communicating with it in plain English.

Preparing the Consumer Fetch: Kafka Producer and Consumer Internals, Part 3

Welcome back to the third installment of our blog series where we’re diving into the beautiful black box that is Apache Kafka to better understand how we interact with the cluster through producer and consumer clients. Earlier in the series, we took a look at the Kafka producer to see how the client works before following a produce request as it’s processed by the cluster.
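
The series itself walks through the cluster internals; as a companion, here is a minimal consumer sketch showing the client-side knobs that shape a fetch request. The broker address and topic name are hypothetical.

```kotlin
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

fun main() {
    val props = Properties().apply {
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        put(ConsumerConfig.GROUP_ID_CONFIG, "fetch-demo")
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        // Fetch tuning: the broker holds a fetch until at least 1 KiB is
        // available or 500 ms has elapsed, whichever comes first.
        put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1024")
        put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500")
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("orders"))  // hypothetical topic
        while (true) {
            // poll() often returns records already buffered by background
            // fetches, so many polls never touch the network at all.
            val records = consumer.poll(Duration.ofMillis(100))
            records.forEach { println("${it.partition()}/${it.offset()}: ${it.value()}") }
        }
    }
}
```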

Replication in Apache Kafka Explained | Monitoring & Troubleshooting Data Streaming Applications

Learn how replication works in Apache Kafka with a deep dive into its critical aspects. Whether you're a systems architect, a developer, or just curious about Kafka, this video provides valuable insights and hands-on examples. Don't forget to check out our GitHub repo to get all of the code used in the demo, and to contribute your own enhancements.
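
For a flavor of the configuration side of replication (a sketch under assumed names and sizes, not the demo code from the repo), here is how a topic with a replication factor of 3 and min.insync.replicas=2 can be created, the pairing that makes acks=all produces durable:

```kotlin
import org.apache.kafka.clients.admin.Admin
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.admin.NewTopic

fun main() {
    val props = mapOf<String, Any>(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092")
    Admin.create(props).use { admin ->
        // Replication factor 3: each partition gets a leader and two followers.
        val topic = NewTopic("payments", 6, 3.toShort())
            // With min.insync.replicas=2, a produce with acks=all succeeds only
            // once the leader and at least one follower have the record.
            .configs(mapOf("min.insync.replicas" to "2"))
        admin.createTopics(listOf(topic)).all().get()
    }
}
```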

APAC Data Streaming Deep Dive: Unlocking Business Agility and Innovation Across the Region

Throughout my career in enterprise technology, I've witnessed numerous transformations play out across the Asia-Pacific (APAC) region. But the shift we're seeing now with data streaming is truly unprecedented. What was once a supportive technology is rapidly becoming the very foundation of modern business in our region.

Confluent Cloud Is Now 100% KRaft and You Should Be Too

We are now in the final chapter of Apache Kafka’s multi-year journey to remove Apache ZooKeeper and fully transition to self-managed metadata in KRaft. Many Kafka users and customers are beginning to migrate to KRaft and are eager to understand its performance characteristics in production environments.
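
As a point of reference, a minimal KRaft server configuration looks roughly like this; it is a single-node "combined" mode sketch with illustrative values, not a production layout:

```properties
# One node acting as both broker and KRaft controller (combined mode).
process.roles=broker,controller
node.id=1
# The controller quorum: node.id@host:port for each voter.
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
```

Unlike a ZooKeeper-backed cluster, the storage directory must be formatted once before first startup (kafka-storage.sh format, supplying a cluster ID), since the metadata log now lives with the brokers themselves.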

Shift Left: Bad Data in Event Streams, Part 2

Alright, I’m back. Time for part 2. In the first part, I covered how we handle bad data in batch processing: in particular, cutting out the bad data, replacing it, and running it again. But this strategy doesn’t work for immutable event streams, as they are, well, immutable. You can’t cut out and replace bad data like you would in batch-processed data sets.
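
One common remedy, sketched here with hypothetical topic and key names rather than code from the post, is to append a corrected event under the same key, so that downstream consumers converge on the fixed value:

```kotlin
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

fun main() {
    val props = Properties().apply {
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
    }
    KafkaProducer<String, String>(props).use { producer ->
        // The bad event stays in the log untouched. Instead, append a
        // corrected event under the same key; consumers that keep the
        // latest value per key (and log compaction, on a compacted topic)
        // converge on the fix. A null value would act as a tombstone,
        // retracting the key entirely.
        val fix = ProducerRecord("user-profiles", "user-123", """{"email":"jane@example.com"}""")
        producer.send(fix).get()  // block until the broker acknowledges
    }
}
```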

Unlocking Data Value in the Age of AI and Data Streaming

Imagine getting into your car to head to work on a hot day. Your car already knows and sets the temperature, the ambient lighting, and the music you prefer. Not only that, it optimizes your route, and with Level 3 autonomy, it can even drive you there. But what does the automotive industry have to do on the backend in order to achieve this kind of personalization?

Spring Into Confluent Cloud with Kotlin - Part 2: Kafka Streams

After a short break, we’re back with Part 2 of this series on Spring Framework, Confluent Cloud, and the Kotlin language. Many organizations that write applications and microservices for the JVM have chosen Spring Framework, leveraging its many libraries for features such as REST services, data persistence across a variety of datastores, and messaging integration. These organizations have existing investments in building, testing, deploying, and monitoring applications with Spring.
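
As a flavor of the pattern (a hedged sketch using spring-kafka's @EnableKafkaStreams support, with made-up topic and store names, not code from the post), a Kafka Streams topology in Kotlin can be declared as a Spring bean:

```kotlin
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.KStream
import org.apache.kafka.streams.kstream.Materialized
import org.apache.kafka.streams.kstream.Produced
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.annotation.EnableKafkaStreams

@Configuration
@EnableKafkaStreams
class WordCountTopology {

    // spring-kafka builds and manages the KafkaStreams lifecycle; we only
    // declare the topology against the injected StreamsBuilder.
    @Bean
    fun wordCounts(builder: StreamsBuilder): KStream<String, String> {
        val sentences = builder.stream<String, String>("sentences")
        sentences
            .flatMapValues { text -> text.lowercase().split(" ") }
            .groupBy { _, word -> word }
            .count(Materialized.`as`("word-count-store"))
            .toStream()
            .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()))
        return sentences
    }
}
```

The application ID and default serdes would come from the Spring application configuration (the spring.kafka.streams.* properties) rather than being hard-coded in the topology.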

Enhancing Security with IAM Roles in Confluent Managed Connectors

As cloud environments evolve, so must the security measures that protect them. With Confluent’s latest enhancement—AWS IAM role integration for managed connectors—you can now adopt temporary security credentials, reducing both the risk of long-term credential exposure and the operational burden of key management. This feature tightens security and simplifies access management for your data flows between AWS and Confluent Cloud.
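
On the AWS side, this kind of role-based integration rests on a cross-account trust policy attached to the IAM role. The sketch below shows only the general shape; the principal ARN and external ID are pure placeholders, not Confluent's actual values, so consult the Confluent documentation for the real ones.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/example-connector-principal" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "<external-id-placeholder>" } }
    }
  ]
}
```

Because the connector obtains short-lived credentials through sts:AssumeRole, there is no long-lived access key to rotate or leak.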

Shift Left: Bad Data in Event Streams, Part 1

At a high level, bad data is data that doesn’t conform to what is expected. For example, an email address without the “@”, or a credit card expiry where the MM/YY format is swapped to YY/MM. “Bad” can also include malformed and corrupted data, such that it’s completely indecipherable and effectively garbage.
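
Both examples lend themselves to simple conformance checks. Here is a hedged Kotlin sketch, with validators invented for illustration, that flags such records before they ever reach a stream:

```kotlin
// Hypothetical validators illustrating "data that doesn't conform to what is expected".
val emailPattern = Regex("""[^@\s]+@[^@\s]+\.[^@\s]+""")
val expiryPattern = Regex("""(0[1-9]|1[0-2])/\d{2}""")  // MM/YY

fun isValidEmail(s: String) = emailPattern.matches(s)
fun isValidExpiry(s: String) = expiryPattern.matches(s)

fun main() {
    println(isValidEmail("jane.example.com"))  // false: missing '@'
    println(isValidExpiry("27/09"))            // false: YY/MM swap, 27 is not a month
    println(isValidExpiry("09/27"))            // true
    // Caveat: a swap is only detectable when the year digits can't be a
    // valid month; "03/05" swapped to "05/03" would still pass.
}
```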

How Booking.com Used Data Streaming to Put Travel Decisions into Customers' Hands

Booking.com wanted to give people a “connected trip” experience, allowing customers to seamlessly book flights, accommodations, car rentals, and excursions in one visit. The company realized the value of data streaming early on in reaching this goal, but the operational effort had become overwhelming. Learn how Booking.com found the answer in Confluent’s data streaming platform. With its automated configuration that required no ongoing maintenance, the team was able to prioritize innovation with data and provide the comprehensive booking experience they had been searching for.

Your Guide to the Apache Flink Table API: An In-Depth Exploration

Apache Flink offers a variety of APIs that provide users with significant flexibility in processing data streams. Among these, the Table API stands out as one of the most popular options. Its user-friendly design allows developers to express complex data processing logic in a clear and declarative manner, making it particularly appealing for those who want to efficiently manipulate data without getting bogged down in intricate implementation details.
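
As a small taste of that declarative style (the table name, fields, and datagen source are invented for illustration), the same filter you would write in SQL reads like this in the Table API:

```kotlin
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.Expressions.`$`
import org.apache.flink.table.api.TableEnvironment

fun main() {
    val env = TableEnvironment.create(EnvironmentSettings.inStreamingMode())
    env.executeSql(
        """
        CREATE TABLE orders (
            order_id STRING,
            amount   DOUBLE
        ) WITH ('connector' = 'datagen', 'rows-per-second' = '5')
        """.trimIndent()
    )

    // Equivalent to "SELECT order_id, amount FROM orders WHERE amount > 100",
    // expressed through the Table API's fluent expression DSL.
    env.from("orders")
        .filter(`$`("amount").isGreater(100))
        .select(`$`("order_id"), `$`("amount"))
        .execute()
        .print()
}
```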