
How to Use Flink SQL, Streamlit, and Kafka: Part 1

Market data analytics has always been a classic use case for Apache Kafka. However, new technologies have been developed since Kafka was born. Apache Flink has grown in popularity for stateful processing with low latency output. Streamlit, a popular open source component library and deployment platform, has emerged, providing a familiar Python framework for crafting powerful and interactive data visualizations. Acquired by Snowflake in 2022, Streamlit remains agnostic with respect to data sources.
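As a taste of how these pieces fit together, here is a minimal sketch (not from the article) of a Streamlit app that polls a Kafka topic and renders the latest records; the broker address, topic name, and JSON payload shape are assumptions for illustration.

```python
# Minimal sketch: a Streamlit dashboard fed by a Kafka topic.
import json

import streamlit as st
from confluent_kafka import Consumer

st.title("Live market data")  # hypothetical dashboard title

# Assumed local broker and topic; swap in real connection details.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "streamlit-dashboard",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["market-data"])  # hypothetical topic name

placeholder = st.empty()  # a slot we keep overwriting with fresh data
rows = []
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    rows.append(json.loads(msg.value()))  # assumes JSON-encoded records
    placeholder.dataframe(rows[-100:])    # show the 100 most recent rows
```

In a full pipeline of this shape, Flink SQL would typically sit between the raw feed and the dashboard, computing aggregates before Streamlit reads them.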

Solving the Dual-Write Problem: Effective Strategies for Atomic Updates Across Systems

The dual-write problem occurs when two external systems must be updated in an atomic fashion. A classic example is updating an application’s database while pushing an event into a messaging system like Apache Kafka. If the database update succeeds but the write to Kafka fails, the system ends up in an inconsistent state. However, the dual-write problem isn’t unique to event-driven systems or Kafka. It occurs in many situations involving different technologies and architectures.
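To make the failure mode concrete, here is a minimal sketch (not from the article) contrasting the naive dual write with a transactional-outbox alternative. SQLite stands in for the application database; the table names are illustrative, and `producer` stands in for any Kafka producer.

```python
# Sketch of the dual-write problem and a transactional-outbox fix.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def naive_dual_write(producer, order_id):
    # Problem: two independent commits. If produce() fails after the
    # database commit, the two systems disagree about this order.
    db.execute("INSERT INTO orders VALUES (?, 'created')", (order_id,))
    db.commit()
    producer.produce("orders", json.dumps({"id": order_id}))  # may fail

def outbox_write(order_id):
    # Fix: record the event in an outbox table in the SAME transaction,
    # so the database stays the single source of truth.
    with db:  # one atomic transaction for both rows
        db.execute("INSERT INTO orders VALUES (?, 'created')", (order_id,))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"id": order_id}),))
```

With the outbox in place, a separate relay (for example, a change data capture connector) publishes the outbox rows to Kafka, retrying until it succeeds.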

Data Streaming Awards 2024: Nominations Are Now Open

The Data Streaming Awards is back for its third year! Designed to bring the data streaming community together, this one-of-a-kind industry award event recognizes organizations that are harnessing the power of this revolutionary technology to drive business and customer experience transformation. If you know a company (even your own team) that is using data streaming technology to transform their business and provide amazing value to their customers and communities, the time is now to submit a nomination.

Best Practices for Confluent Terraform Provider

Managing Confluent Cloud infrastructure efficiently poses challenges due to the complexities involved in deploying and maintaining various components like environments, clusters, topics, and authorizations. Without proper tooling and practices, teams struggle with manual configuration errors, lack of consistency, and potential security risks. The Confluent Terraform Provider addresses these challenges by letting teams define and version that infrastructure as code.

How to Set Up a Fully Managed Alerting Pipeline Using Confluent Cloud Audit Logs

In large organizations, Confluent Cloud is often simultaneously accessed by many different users along with business-critical applications, potentially across different lines of business. With so many individual pieces working together, the risk of an individual outage, error, or incident affecting other services increases. An incident could be caused by a user clicking the wrong button, an application misconfiguration, or just a bug—you name it.
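As a rough sketch of the alerting idea (not the article's implementation): Confluent Cloud delivers audit logs as CloudEvents-formatted JSON on the confluent-audit-log-events topic, so a consumer can filter them for suspicious activity. The broker placeholder and the exact event fields below are assumptions; consult the audit log schema for the real layout.

```python
# Sketch: consume Confluent Cloud audit log events and flag
# authentication-related ones for alerting.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<audit-log-cluster>:9092",  # placeholder
    "group.id": "audit-alerter",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["confluent-audit-log-events"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Hypothetical check: alert on any event whose type mentions
    # authentication; a real pipeline would route this to a sink.
    if "authentication" in event.get("type", "").lower():
        print("ALERT:", event.get("id"), event.get("type"))
```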

Serverless Decoded: Reinventing Kafka Scaling with Elastic CKUs

Apache Kafka has become the de facto standard for data streaming, used by organizations everywhere to anchor event-driven architectures and power mission-critical real-time applications. However, this rise has also sparked discussions on improving Kafka operations and cost-efficiency—streaming data is naturally prone to bursts and often unpredictable, resulting in inevitable variations in workloads and demand on your Kafka cluster(s).

Modernize Payments Architecture for ISO 20022 Compliance

The payments industry is evolving rapidly, fueled by technological advancements, changing consumer behaviors, and a growing appetite for real-time transactions. As this transformation unfolds, new standards have been introduced to ensure the payments ecosystem's safety, security, and efficiency.

Introducing Confluent Cloud OpenSearch Sink Connector

Amazon OpenSearch Service is a popular fully managed analytics engine that makes it easier for customers to do interactive log analytics, real-time application monitoring, and semantic and keyword searches. It can also be used as a vector engine that helps organizations build and augment GenAI applications without managing infrastructure (we’ll talk about this in future blogs). Additionally, the service provides a reliable, scalable infrastructure designed to handle massive data volumes.

Contributing to Apache Kafka: How to Write a KIP

I’m brand new to writing KIPs (Kafka Improvement Proposals). I’ve written two so far, and my hands sweat every time I hit send on an email with ‘KIP’ in the title. But I’ve also learned a lot from the process: about Apache Kafka internals, the process of writing KIPs, the Kafka community, and the most important motivation for developing software: our end users. What did I actually write? Let’s review KIP-941 and KIP-1020.