
Creating a data-driven culture with self service and data literacy

In this segment, Geraldine Wong, CDO of GXS Bank, explains how her bank's data strategy aims to promote inclusion through superior data insights and AI. Achieving this requires building a data-driven culture by providing employees with the right tools, access, and knowledge about the data.

#OpenSource: Does It Help with Career Progression?

Welcome to Test Case Scenario! In this episode, join our host Jason Baum and panelists from Sauce Labs as they dive deep into the thrilling world of open source. Nikolay Advolodkin, Principal Developer Advocate, Marcus Merrell, VP Technology Strategy, and Diego Molina, Open Source Program Lead, share insights on the future of open source, discuss exciting projects, and offer valuable lessons for those venturing into the realm of open source. Get ready for an engaging conversation on the latest open-source trends, developments, and expert advice.

Designing Event-Driven Systems

Many forces affect software today: larger datasets, geographical disparities, complex company structures, and the growing need to be fast and nimble in the face of change. Proven approaches such as service-oriented architecture (SOA) and event-driven architecture (EDA) are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but as this practical ebook demonstrates, they provide a more holistic and compelling approach when applied together.
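The core idea behind event-driven architecture can be shown in a few lines: producers publish immutable events, and decoupled consumers react to them without the producer knowing who is listening. The sketch below is a minimal in-process illustration, not code from the ebook; the names (`EventBus`, `"order.placed"`) are hypothetical.

```python
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Routes each published event to every subscriber of that event type."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], Any]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], Any]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each consumer reacts independently; the producer knows none of them.
        for handler in self._subscribers[event_type]:
            handler(payload)


bus = EventBus()
audit_log: list[dict] = []

# Two independent consumers of the same event stream.
bus.subscribe("order.placed", audit_log.append)
bus.subscribe("order.placed", lambda e: print(f"shipping order {e['id']}"))

bus.publish("order.placed", {"id": 42, "amount": 99.50})
```

In a production system the in-memory bus would be replaced by a durable log such as Apache Kafka, which adds persistence, replay, and cross-service delivery to the same publish/subscribe shape.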

Real-time Fraud Detection - Use Case Implementation

When it comes to fraud detection in financial services, streaming data with Confluent enables you to build the right intelligence, as early as possible, for precise and predictive responses. Learn how Confluent's event-driven architecture and streaming pipelines deliver a continuous flow of data, aggregated from wherever it resides in your enterprise, to whichever application or team needs to see it. Enrich each interaction, each transaction, and each anomaly with real-time context so your fraud detection systems have the intelligence to get ahead.
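To make the idea of real-time context concrete, here is an illustrative sliding-window velocity check, the kind of stateful logic a streaming fraud pipeline might apply per account. This is a hypothetical sketch, not Confluent's implementation; the class name and thresholds are invented for illustration.

```python
from collections import defaultdict, deque


class VelocityDetector:
    """Flags an account that exceeds a transaction count within a time window."""

    def __init__(self, window_seconds: float = 60.0, max_txns: int = 3) -> None:
        self.window = window_seconds
        self.max_txns = max_txns
        self._events: dict[str, deque] = defaultdict(deque)

    def is_suspicious(self, account: str, timestamp: float) -> bool:
        events = self._events[account]
        events.append(timestamp)
        # Evict transactions that have fallen out of the window.
        while events and timestamp - events[0] > self.window:
            events.popleft()
        return len(events) > self.max_txns


detector = VelocityDetector(window_seconds=60, max_txns=3)
# Four rapid transactions, then a fifth after the burst has aged out.
flags = [detector.is_suspicious("acct-1", t) for t in (0, 5, 10, 15, 90)]
print(flags)
```

In a stream processor this per-key windowed state would live in the framework (e.g., a Kafka Streams or Flink windowed aggregation) rather than in a Python dict, but the detection logic is the same shape.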

How to Tune Kafka Connect Source Connectors to Optimize Throughput

Kafka Connect is an open source data integration tool that simplifies the process of streaming data between Apache Kafka® and other systems. Kafka Connect has two types of connectors: source connectors and sink connectors. Source connectors allow you to read data from various sources and write it to Kafka topics. Sink connectors send data from the topics to another endpoint.
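As a concrete example, a source connector is registered by POSTing a JSON config to the Connect REST API. The snippet below uses the `FileStreamSourceConnector` bundled with Apache Kafka; the file path, topic name, and `producer.override.*` values are placeholders, and producer overrides only take effect if the worker's client config override policy permits them.

```json
{
  "name": "file-source-example",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "file-topic",
    "producer.override.batch.size": "262144",
    "producer.override.linger.ms": "10"
  }
}
```

Settings like `tasks.max` and the producer's `batch.size` and `linger.ms` are among the main knobs for tuning source connector throughput.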

ThoughtSpot for the Connected Google Workspace

I’m calling it now. The next battleground for analytics adoption among business users will be the productivity suite. Let’s unpack that statement by considering these two examples: Traditional BI has always forced you down a one-way street for answers—drop what you are doing, log in to the BI tool, and pray to the data deities that you can find the answer you’re looking for.

Deploy and scale high-performance background jobs with Koyeb Workers

Today, we are thrilled to announce that workers are generally available on Koyeb! You can now easily deploy high-performance workers to process background jobs in all of our locations. Deploying workers from a GitHub repository with our built-in CI/CD engine is simple: connect your repository and we build, deploy, and scale your workers on high-performance servers around the world.

Introducing Confluent Platform 7.5

Introducing Confluent Platform version 7.5, which offers a range of new features to enhance security, improve developer efficacy, and strengthen disaster recovery capabilities. Building on the innovative feature set delivered in previous releases, Confluent Platform 7.5 makes enhancements in those three categories: security, developer efficacy, and disaster recovery. The following explores each of these enhancements and dives deep into the major feature updates and benefits.

Flink in Practice: Stream Processing Use Cases for Kafka Users

In Part One of our “Inside Flink” blog series, we explored the critical role of stream processing and why developers are increasingly choosing Apache Flink® over other frameworks. In this second installment, we'll showcase how innovative teams of every size, across every industry, are putting stream processing into practice – from streaming data pipelines that train ML models and power more timely analytics, to fraud detection in finance and real-time inventory management in retail.