
Building Snowflake Intelligence

For Reza Akhavan, building Snowflake Intelligence was about more than just enabling AI agents. It was about empowering people. What started as an idea to help business users talk to their data became Snowflake Intelligence: a secure, scalable system for surfacing trusted insights from structured and unstructured data. Reza and his team didn't just build a feature. They reimagined how businesses turn data into action.

When AI writes code that humans wouldn't: Testing in the age of agentic coding tools

Agentic coding tools like Cursor, GitHub Copilot, and OpenAI's Codex are reshaping how software is developed. They enable developers to offload routine tasks and accelerate feature delivery. However, these tools also introduce new challenges, particularly in how we test and validate the code they produce.

What Is API Gateway Federation? A Guide to Centralized API Management

API gateway federation refers to the integration and management of multiple API gateways within a unified control plane. This approach allows organizations to use different API gateways, which may be from various vendors or tailored to specific environments (e.g., cloud-based, on-premises), while centrally managing their configurations, policies, and monitoring.

Figure 1: API gateway federation with a unified control plane.
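The idea can be sketched in a few lines of Python. This is a minimal illustration, not a real control-plane API: the Gateway and ControlPlane classes, and the rate-limit policy, are hypothetical names invented for the example. The point is that gateways from different vendors and environments register with one control plane, which then pushes a shared policy to all of them.

```python
from dataclasses import dataclass, field

@dataclass
class Gateway:
    """One federated gateway, possibly from a different vendor."""
    name: str
    vendor: str
    environment: str            # e.g. "cloud" or "on-premises"
    policies: dict = field(default_factory=dict)

class ControlPlane:
    """Central registry that pushes shared policies to every gateway."""
    def __init__(self):
        self.gateways = []

    def register(self, gateway: Gateway):
        self.gateways.append(gateway)

    def apply_policy(self, name: str, value):
        # Propagate one policy setting to every federated gateway,
        # regardless of vendor or environment.
        for gw in self.gateways:
            gw.policies[name] = value

plane = ControlPlane()
plane.register(Gateway("edge", "VendorA", "cloud"))
plane.register(Gateway("internal", "VendorB", "on-premises"))
plane.apply_policy("rate_limit_per_minute", 1000)
```

Monitoring works the same way in reverse: the control plane reads status from each registered gateway, giving one view across heterogeneous deployments.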

Compliance is Everyone's Job: How to Automate Your Headaches Away

Another day, another API. Fueled by AI-assisted coding and agile workflows, the speed of innovation has never been higher. But for the compliance team? It's panic mode. Every new API must navigate a minefield of internal rules: security protocols, naming conventions, reuse policies, documentation standards. And while the dev team is flying forward, compliance is stuck doing manual reviews, chasing specs, and untangling inconsistencies, often after the code is already written.
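Automating that review can start small. The sketch below, with a made-up spec and made-up rules (kebab-case paths, mandatory descriptions), lints an OpenAPI-style dictionary and reports violations before code review; a real setup would run something like this in CI against every spec change.

```python
import re

def lint_spec(spec: dict) -> list[str]:
    """Return a list of human-readable compliance violations."""
    violations = []
    # Rule 1 (illustrative): paths must be kebab-case segments.
    path_pattern = re.compile(r"^(/[a-z0-9]+(-[a-z0-9]+)*)+$")
    for path, operations in spec.get("paths", {}).items():
        if not path_pattern.match(path):
            violations.append(f"{path}: path is not kebab-case")
        # Rule 2 (illustrative): every operation needs a description.
        for method, op in operations.items():
            if not op.get("description"):
                violations.append(f"{method.upper()} {path}: missing description")
    return violations

spec = {
    "paths": {
        "/user-accounts": {"get": {"description": "List accounts"}},
        "/UserOrders": {"get": {}},  # breaks both rules
    }
}
print(lint_spec(spec))
```

Because the checks run against the spec rather than the code, they catch inconsistencies before implementation, which is exactly where manual review tends to arrive too late.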

Rethinking the Economics of Agentic AI: When 'Cheap' Gets Complicated

Everyone thinks AI is getting cheaper. But is it really? At first glance, the economics of AI seem to be improving for everyone. Thanks to continued model optimization and advances in hardware, the cost of running LLMs (also known as inference) is steadily decreasing. Developers today can access incredibly powerful models at a fraction of what it cost just a year ago. But there’s a catch.
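The catch is arithmetic. A back-of-envelope sketch with purely hypothetical prices and call counts shows how a 10x drop in per-token cost can still lose to an agentic workflow that multiplies calls and context size:

```python
# All numbers below are illustrative assumptions, not real pricing.
old_price_per_1k_tokens = 0.03   # hypothetical price a year ago
new_price_per_1k_tokens = 0.003  # hypothetical price today (10x cheaper)

# Single-shot chat: one call of ~1k tokens at today's price.
chat_cost = 1 * 1.0 * new_price_per_1k_tokens

# Agentic run: 25 calls (planning, tool use, retries) at ~4k tokens
# each, since context accumulates across steps.
agent_cost = 25 * 4.0 * new_price_per_1k_tokens

# The same single call at last year's price, for comparison.
last_year_chat_cost = 1 * 1.0 * old_price_per_1k_tokens

print(f"chat today: ${chat_cost:.4f}")
print(f"agent today: ${agent_cost:.4f}")
print(f"chat last year: ${last_year_chat_cost:.4f}")
```

Under these assumptions the agent run costs ten times what a single call did even at last year's price: per-call cost fell, but call volume and context growth rose faster.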