
Koyeb MCP Server: Interact with your Koyeb Resources in Natural Language

Today, we're announcing the Koyeb MCP Server in public beta to let you interact with your Koyeb resources in natural language. Using the Koyeb MCP Server, LLMs and agents can easily discover and leverage Koyeb primitives, all from your favorite AI assistants such as Claude, Cursor, Windsurf, or any other application that supports the Model Context Protocol.
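For assistants that read an MCP server configuration file (the `mcpServers` shape used by Claude Desktop and similar clients), wiring up a server typically looks like the sketch below. The `koyeb` entry name, the `npx`-based launch command, the `koyeb-mcp-server` package name, and the `KOYEB_API_TOKEN` variable are illustrative assumptions, not the documented setup; consult the Koyeb MCP Server documentation for the actual package and authentication details.

```json
{
  "mcpServers": {
    "koyeb": {
      "command": "npx",
      "args": ["-y", "koyeb-mcp-server"],
      "env": {
        "KOYEB_API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```

Once the client restarts, it can list the tools the server exposes and call them on your behalf when you ask about your Koyeb resources in plain language.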

Delivering scalable, serverless APIs with SmartBear and AWS

Amazon API Gateway and AWS Lambda are widely used for running scalable APIs and applications in the cloud. While they offer powerful deployment and scaling capabilities, designing the API and maintaining visibility into performance and reliability can be challenging without the right tools in place.

app.build by Neon: Spawning 1000s of AI-generated apps at scale with Koyeb

It's 2025: you can build a full-stack app by telling an AI what you want. Inspired by projects and tools like v0, Create, Replit, and Same, Neon built app.build: an open-source AI agent that builds and deploys full-stack web apps, straight from your terminal. Built for developers, app.build is local-first, extensible, and fully open source.

How eXalt Built a Secure and Scalable ChatGPT Alternative with Koyeb

eXalt is a French consulting firm with over 1200 consultants and offices in Paris, New York, London, Madrid, Lisbon, Brussels, and throughout France. They specialize in Finance and Tech, offering expertise in data science, cybersecurity, software development and IT infrastructure, project and product management, and more. When eXalt consultants are on assignment, they often need fast, reliable access to a ChatGPT-like tool to help with research and problem-solving.

Serverless Postgres GA: Production-Ready Databases for Large Scale and AI Apps

Today, we’re excited to announce the general availability of Serverless Postgres — a fully managed, fault-tolerant, and effortlessly scalable Postgres database service purpose-built for large-scale and AI applications. Since the public preview, over 50,000 databases have been created for use cases ranging from multi-tenant SaaS to AI agent memory, RAG pipelines, and ephemeral dev environments.

How Anyshift Scales Real-Time Queries Across Millions of Nodes with Koyeb

Anyshift provides AI context for your infrastructure, powered by Annie—an AI infrastructure assistant trained on your environment. From answering complex infrastructure questions to suggesting Terraform code and catching hidden issues, Annie helps teams manage, monitor, and optimize their infrastructure with ease and precision. Unlike generic AI copilots, Anyshift provides context-aware insights based on your actual infrastructure and codebase—not just LLM guesses.

Achieve 5x Faster Inference Speeds on Serverless GPUs with Pruna AI and Koyeb

Today, we are excited to announce our partnership with Pruna AI. Pruna AI is the optimization engine built to simplify and accelerate scalable inference. Koyeb offers a serverless cloud platform for teams to deploy ML and AI models on high-performance GPUs, CPUs, and accelerators - globally. By combining Pruna with Koyeb, you can optimize your models faster, achieve 5x faster inference speeds, and run them on scalable, high-performance serverless infrastructure.

Optimizing Serverless Stream Processing with Confluent Freight Clusters and AWS Lambda

Confluent has been instrumental in enabling customers from various industries to develop real-time stream processing solutions using Apache Kafka. While many of these use cases demand low-latency, real-time processing, stream processing is also increasingly used for ingesting logging and telemetry data. This type of data typically features a high ingest rate but tolerates longer end-to-end processing times.