How Qlik Is Powering Bystronic's GenAI Transformation

Some data problems are universal — like dealing with unstructured data. At Bystronic, a global leader in sheet metal processing, we have mountains of it. From technical documentation to sales decks, HR policies, and IT knowledge bases, data is scattered across folders, servers, and systems. Industry research shows that 80% of enterprise data is unstructured, meaning it’s often invisible to the teams that need it most. As a result, up to 68% of that data goes unused. The impact is real.

How NeuBird's Hawkeye Automates Incident Resolution in Confluent Cloud

A joint post from the teams at NeuBird and Confluent. For organizations leveraging Confluent, ensuring smooth operations is mission-critical. While Confluent Cloud eliminates the operational burden of managing Apache Kafka, application teams still need to monitor and troubleshoot client applications connecting to Kafka clusters.

Confluent Cloud is now available in the new AWS Marketplace AI Agents and Tools category

Confluent announces the availability of Confluent Cloud in the new AI Agents and Tools category of AWS Marketplace. This lets AWS customers easily discover, buy, and deploy AI agent solutions, including Confluent's fully managed data streaming platform, Confluent Cloud, directly from their AWS accounts, accelerating AI agent and agentic workflow development.

AI at Scale Needs Control: Inside ClearML's Resource Allocation Policy Manager

By Erez Schnaider, Technical Product Marketing Manager, ClearML. AI engineering today goes far beyond simply training a model. Teams are fine-tuning large language models on high-end GPUs, running massive, distributed experiments, and orchestrating hybrid workflows spanning on-premises clusters, private and public clouds. With great power comes great responsibility, and with powerful hardware comes complexity. Without robust controls, things can quickly descend into costly chaos: Who’s using what?

Building Streaming Data Pipelines, Part 2: Data Processing and Enrichment With SQL

In my last blog post, I looked at the essential first part of building any data pipeline: exploring the raw source data to understand its characteristics and relationships. The data comprises river levels, rainfall, and other weather information provided by the UK Environment Agency via a REST API. I used the HTTP Source connector to stream this into Apache Kafka topics (one per REST endpoint), and then Tableflow to expose these as Apache Iceberg tables.
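To give a flavor of the "processing and enrichment with SQL" step that the post builds toward, here is a minimal, self-contained sketch. It uses Python's built-in sqlite3 purely as a stand-in for querying the Iceberg tables; the table and column names (`readings`, `stations`, `level_m`, etc.) are illustrative assumptions, not the actual Environment Agency or Tableflow schemas.

```python
import sqlite3

# Hypothetical, simplified stand-ins for the per-endpoint Iceberg tables:
# one table of raw river-level readings, one of station reference data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (station_id TEXT, measured_at TEXT, level_m REAL);
    CREATE TABLE stations (station_id TEXT, river_name TEXT, town TEXT);
    INSERT INTO readings VALUES ('E1234', '2024-05-01T09:00:00Z', 1.42);
    INSERT INTO stations VALUES ('E1234', 'River Aire', 'Leeds');
""")

# Enrichment: join each raw reading with station reference data so the
# measurement carries a human-readable river name and location.
rows = conn.execute("""
    SELECT r.measured_at, s.river_name, s.town, r.level_m
    FROM readings r
    JOIN stations s ON s.station_id = r.station_id
""").fetchall()

print(rows)  # [('2024-05-01T09:00:00Z', 'River Aire', 'Leeds', 1.42)]
```

The same join pattern applies whether the SQL runs against SQLite, Flink SQL over Kafka topics, or an Iceberg-aware engine; only the engine and table locations change.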