
Agencies Win With Data Streaming: Evolving Data Integration to Enable AI

With data streaming, public sector organizations can better leverage real-time data and modernize applications. Ultimately, that means improving the reliability of services that agencies and citizens depend on, enhancing operational efficiency and thereby cutting costs, and delivering critical insights the moment they’re needed.

Guide to Consumer Offsets: Manual Control, Challenges, and the Innovations of KIP-1094

Consumer offsets are at the heart of Apache Kafka's robust data handling capabilities, as they determine how data is consumed, reprocessed, or skipped across topics and partitions. In this comprehensive guide, we delve into the intricacies of Kafka offsets, covering everything from the necessity of manual offset control to the nuanced challenges posed by offset management in distributed environments.
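To make the semantics concrete, here is a minimal, library-free Python sketch of the behavior the guide describes (a toy model, not the Kafka client API): a committed offset determines where a consumer resumes after a restart, and manually seeking the offset backward reprocesses records while seeking forward skips them. All class and method names here are illustrative.

```python
# Toy in-memory model of Kafka-style consumer offsets.
# Illustrative only; not the Apache Kafka client API.

class PartitionLog:
    """An append-only record log, like a single Kafka partition."""
    def __init__(self):
        self.records = []

    def append(self, value):
        self.records.append(value)

class Consumer:
    """Tracks a committed offset: the position a restart resumes from."""
    def __init__(self, log):
        self.log = log
        self.committed = 0  # offset of the next record to read

    def poll(self, max_records):
        """Read a batch starting at the committed offset (no auto-commit)."""
        end = min(self.committed + max_records, len(self.log.records))
        return self.log.records[self.committed:end], end

    def commit(self, offset):
        """Manually commit: progress up to this offset is durable."""
        self.committed = offset

    def seek(self, offset):
        """Rewind to reprocess records, or jump ahead to skip them."""
        self.committed = offset

log = PartitionLog()
for v in ["a", "b", "c", "d"]:
    log.append(v)

consumer = Consumer(log)
batch, next_offset = consumer.poll(2)   # reads ["a", "b"]
consumer.commit(next_offset)            # committed offset is now 2

# Had the consumer crashed before committing, it would re-read
# from offset 0; having committed, it resumes with ["c", "d"].
batch2, next_offset2 = consumer.poll(2)

# Seeking back to 0 replays the whole partition.
consumer.seek(0)
replay, _ = consumer.poll(4)
```

The same trade-off the guide explores lives in that `commit` call: commit too early and a crash skips unprocessed records; commit too late and a crash reprocesses them.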

Beyond Boundaries: Leveraging Confluent for Secure Inter-Organizational Data Sharing

Data is one of a company’s most valuable assets. Its value is often limited, however, by the challenge of sharing it across organizational boundaries in a secure, reliable, and scalable way. Traditional approaches to inter-organizational data sharing have contributed to this limitation: flat file sharing, API calls, and proprietary solutions all pose different challenges, from security concerns to scalability and development burden.

Why Is My Apache Flink Job Not Producing Results?

Imagine that you have built an Apache Flink job. It collects records from Apache Kafka, performs a time-based aggregation on those records, and emits a new record to a different topic. Excited, you run the job for the first time, only to discover that nothing happens. You check the input topic and see the data flowing, but when you look at the output topic, it’s empty. In many cases, this is an indication that there is a problem with watermarks.
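The symptom can be sketched in a few lines of plain Python (a toy model, not Flink’s API): a tumbling event-time window only fires once the watermark passes the window’s end, so if watermarks never advance, every window stays open forever and nothing reaches the output. The window size, event data, and watermark strategies below are all made up for illustration.

```python
# Toy model of event-time tumbling windows and watermarks.
# Illustrative only; not Apache Flink's API.

WINDOW_SIZE = 10  # each window covers [start, start + WINDOW_SIZE)

def window_start(ts):
    return (ts // WINDOW_SIZE) * WINDOW_SIZE

def run(events, watermark_fn):
    """events: (timestamp, value) pairs in arrival order.
    watermark_fn maps the max timestamp seen so far to a watermark."""
    windows = {}          # window start -> buffered values
    max_ts_seen = None
    emitted = []
    for ts, value in events:
        windows.setdefault(window_start(ts), []).append(value)
        max_ts_seen = ts if max_ts_seen is None else max(max_ts_seen, ts)
        watermark = watermark_fn(max_ts_seen)
        # A window fires only once the watermark passes its end.
        for start in sorted(list(windows)):
            if start + WINDOW_SIZE <= watermark:
                emitted.append((start, windows.pop(start)))
    return emitted

events = [(1, "a"), (4, "b"), (12, "c"), (25, "d")]

# Watermarks that never advance: no window ever closes, output is empty.
stuck = run(events, lambda max_ts: float("-inf"))

# Watermarks that track event time (allowing 2 units of lateness):
# completed windows fire; only the still-open final window is held back.
healthy = run(events, lambda max_ts: max_ts - 2)
```

Note that even in the healthy case the last window stays buffered until a later event (or an idleness timeout) pushes the watermark past its end, which is why low-traffic topics so often show this "empty output" symptom.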

Ep 6 - From Roadblocks to Results: How Shared Vision Drives Data Streaming Success

Whether you're an executive setting the strategy or an architect building the backbone, alignment is the key to turning transformation from a buzzword into results. Rick Hernandez, principal technical architect at EY, shares how to unlock shared vision and turn it into enterprise-wide data streaming success. In this episode, Rick joins Joseph to explore how organizations can connect leadership ambition with technical execution, and how top-down buy-in, clearly defined objectives, and strategic alignment with the right technology can give your organization “wings to a tiger.”

Unlocking Data Insights with Confluent Tableflow: Querying Apache Iceberg Tables with Jupyter Notebooks

What if you could analyze real-time and historical data with just a few clicks and minimal code? Whether you're a data scientist, engineer, or analyst working in Python, you don't need to be an Apache Kafka expert to unlock the power of streaming analytics. In this blog, we'll walk you through integrating Confluent Tableflow with Trino, which will enable you to query and visualize Apache Iceberg tables effortlessly in Jupyter Notebooks.

Enhancing Data Integration with Tableflow and Apache Iceberg

Integrating Tableflow and Iceberg tables streamlines the process of linking to external data lakes and data warehouses. Jeffrey Johnathan Jennings of signalRoom explains how this approach accelerates time to insight while ensuring a more efficient and cost-effective data architecture.