
Cloudera Account 360: New Self-Service Administrative Platform Demo

Cloudera Account 360 is designed to resolve this by providing a single pane of glass from which customers can manage their users and accounts. It offers robust, flexible, and secure account and user management capabilities, helping you avoid delays by eliminating the need to raise support cases with Cloudera for simple administrative tasks.

Foundational Features Available Now

Cloudera Account 360 includes two core feature sets.

How does Katalon help organizations start accelerating their testing?

Katalon helps organizations accelerate testing by removing complexity from automation. With an all-in-one platform, low-code options, and built-in best practices, teams can start fast, scale confidently, and deliver quality at speed without needing deep automation expertise. — Alex Martins, VP of Strategy at Katalon

Follow Katalon for more insights in our series!

See exactly why your Gradle Build Cache missed: new Task Inputs visibility feature

Every Android developer has been there: yesterday's build finished in 2 minutes, but today's identical build takes 8 minutes. You check your code - nothing major changed. You check your environment - everything looks the same. So why the massive difference? Without visibility into what actually changed between builds, debugging performance issues becomes guesswork. You're left wondering: Which tasks didn't come from cache? What inputs changed? Why did this specific compilation task take so long?

State Transition Testing: Diagrams, Tables & Examples

Ever seen a workflow pass QA, then fail the moment users retry, refresh, or hit a timeout? That gap usually isn’t about a “wrong input.” It’s often because the system is in a different state when the same input arrives. In state transition testing, the state decides what’s allowed, what must be blocked, and what should happen next. It is one of the simplest ways to make these workflows behave predictably in the real world.
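The idea that the state, not just the input, decides the outcome can be sketched in a few lines. This is a minimal illustration, not code from the article; the states and events (a hypothetical checkout session with a "retry" event) are assumptions for the example.

```python
# State transition sketch: the same "retry" input is legal in one state
# and blocked in another. All state/event names are illustrative.
VALID_TRANSITIONS = {
    ("idle", "submit"): "processing",
    ("processing", "timeout"): "failed",
    ("failed", "retry"): "processing",
    ("processing", "success"): "done",
}

def next_state(state, event):
    """Return the next state, or raise if this event is not allowed here."""
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# Retry after a timeout is a valid transition...
assert next_state("failed", "retry") == "processing"

# ...but retrying while a request is still in flight must be blocked.
blocked = False
try:
    next_state("processing", "retry")
except ValueError:
    blocked = True
assert blocked
```

Writing the transition table explicitly like this also gives you the test cases for free: every (state, event) pair is either a row in the table or an expected rejection.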

Silent Failures: Why AI Code Breaks in Production

You ship a small “safe” change on Friday. The diff is tiny, the tests are green, and the AI assistant was confident. An hour after deploy, your on-call channel lights up. A downstream service is rejecting responses that look fine in code review. Now you’re rolling back and rewriting a fix that should have been obvious if you had real traffic in the loop. This isn’t a hypothetical.
Sponsored Post

What AI Has Never Seen: The Context Gap in Code Generation

Your AI coding assistant has read the entire internet. It knows every programming language, every framework, every best practice documented in Stack Overflow answers and GitHub repositories. It can generate a REST API handler in seconds that looks perfect: clean code, proper error handling, all the right patterns. But here's what it's never seen: your production traffic. Data from a real API request. Someone filling out a form with messed up or incomplete data. AI is changing how we write and test code, but there's a fundamental gap between training data and production reality.
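As a toy illustration of that gap (my own sketch, not from the article): a field parser that reads cleanly in code review can still assume tidier input than real users ever send. The function and field names here are hypothetical.

```python
# Hypothetical example of the training-data vs. production gap.
def parse_age_naive(form):
    # Looks fine in review, but raises on "", "n/a", or a missing key.
    return int(form["age"])

def parse_age_robust(form, default=None):
    # Tolerates whitespace, junk values, and absent fields.
    raw = str(form.get("age", "")).strip()
    return int(raw) if raw.isdigit() else default

assert parse_age_robust({"age": " 42 "}) == 42   # real users paste whitespace
assert parse_age_robust({"age": "n/a"}) is None  # real users type junk
assert parse_age_robust({}) is None              # real forms omit fields
```

The point is not this particular parser; it is that the failure modes only show up once real traffic is in the loop.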

Katalon Product Roundup | January 2026

February brings a wave of upgrades across the Katalon platform to help you test smarter, not harder. From deeply customizable analytics dashboards and native release management in TestOps, to AI-driven API test generation, self-healing, and a modernized Studio 11 runtime, this month focuses on visibility, stability, and speed at scale.

Reusing test cases with Call to Test | Zephyr

SmartBear Zephyr is the Jira-native test management and automation platform that empowers your team to deliver better software, faster. By creating test cases, linking them to user stories and requirements, and monitoring progress all within Jira, you can unify your testing and development efforts. This short video demonstrates Zephyr's “Call to Test” capability, which lets you reference and reuse test cases across multiple Jira projects, no matter the test case type.

AI Analytics with Databox

You know the feeling. It’s Monday morning, and someone asks, “How are we doing?” Suddenly, you’re toggling between six tabs, exporting CSVs, and trying to remember which dashboard has the number they actually need. By the time you’ve pulled everything together, the meeting’s over. This was the problem we originally built Databox to solve: centralizing scattered data into dashboards that actually make sense. But dashboards were only the first step.