Koyeb

By Yann Léger
Today, we are announcing AWS regions on Koyeb for businesses: the fastest way to build, run, and scale your apps on AWS infrastructure. Over the last few months, we've received more and more requests from businesses established on AWS for a way to deploy Koyeb services on AWS infrastructure. Our platform's core technology is cloud-agnostic and can be operated on top of anything, from high-performance bare metal servers to IaaS providers.
By Thomas Le Roux
Ready? Day three of Koyeb launch week is on! When you deploy your apps on Koyeb, your data lives on ephemeral disks. While this works great for stateless applications, it is challenging for stateful workloads like databases. Just in time to save the day, we are launching the technical preview of Volumes! You can now use Volumes to persist data between deployments, restarts, and even when services are paused. We're gradually onboarding users to ensure the best experience for everyone.
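
For a concrete sense of what Volumes change, here is a minimal sketch; the /data mount path and the visit-counter app are hypothetical, standing in for whatever mount point you configure for your service:

```python
import sqlite3

# Hypothetical paths: /tmp lives on the ephemeral disk and is wiped on
# redeploy; /data is assumed to be a Volume mount that survives
# deployments, restarts, and pauses.
EPHEMERAL_DB = "/tmp/app.db"    # lost on every new deployment
PERSISTENT_DB = "/data/app.db"  # persists across deployments with a Volume

def record_visit(db_path: str) -> int:
    """Increment and return a visit counter stored in SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS visits (n INTEGER)")
    n = conn.execute("SELECT COALESCE(MAX(n), 0) FROM visits").fetchone()[0] + 1
    conn.execute("INSERT INTO visits (n) VALUES (?)", (n,))
    conn.commit()
    conn.close()
    return n

# With the persistent path, the counter keeps growing across restarts;
# with the ephemeral path, it resets to 1 after each deployment.
print(record_visit(PERSISTENT_DB))
```
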
By Yann Léger
Welcome to day two of Koyeb launch week. Today we're announcing not one, but two major pieces of news. Our GPU lineup ranges from 20GB to 80GB of vRAM with A100 and H100 cards. You can now run high-precision calculations with FP64 instruction support and a gigantic 2TB/s of memory bandwidth on the H100. With prices ranging from $0.50/hr to $3.30/hr, always billed by the second, you can run training, fine-tuning, and inference workloads on a card adapted to your needs.
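
As a quick illustration of what per-second billing means, here is a small worked example using the announced rates; the 40-minute job duration is made up for the example:

```python
# Per-second billing from the stated hourly rates ($0.50/hr to $3.30/hr).
# Example: a 40-minute fine-tuning run at the top of the announced range.
hourly_rate = 3.30         # USD per hour
run_seconds = 40 * 60      # a hypothetical 40-minute job
cost = hourly_rate / 3600 * run_seconds
print(f"${cost:.2f}")      # -> $2.20, instead of two full billed hours
```
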
By Julien Castets
We are thrilled to kick off this first launch week with autoscaling, now generally available! Our goal is to offer a global and serverless experience for your deployments, and autoscaling makes this vision a reality. Say goodbye to overpaying for unused resources and to late-night alerts for unhealthy instances or underprovisioned resources! During the autoscaling public preview, we received key feedback around scaling factors.
By Alisdair Broshar
AI applications that produce human-like text, such as chatbots, virtual assistants, language translation, text generation, and more, are built on top of Large Language Models (LLMs). If you are deploying LLMs in production-grade applications, you might have faced performance challenges running these models. You might have also considered optimizing your deployment with an LLM inference engine or server.
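
As an example of the kind of setup meant here, a minimal sketch of querying an inference server follows. It assumes a vLLM-style server exposing the OpenAI-compatible API on localhost:8000; the host, port, model name, and prompt are placeholders for your own deployment:

```python
import requests

# Query an LLM inference server over its OpenAI-compatible completions
# endpoint. Server address and model name are assumptions for the sketch.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "prompt": "Explain LLM inference batching in one sentence.",
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```
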
By Julien Castets
Hey there! We're back for our third edition of Tips and Tricks, our new mini-series where we share helpful insights and cool tech that we've stumbled upon while working on technical stuff. Catch up on the previous posts if you missed them; all of our posts are super short reads, just a couple of minutes tops. If you don't like one of the posts, no problem! Just skip it and check out the next one. If you enjoy any of the topics, I encourage you to check out the "further reading" links.
By Julien Castets
There are several ways to handle load spikes on a service, such as permanently overprovisioning resources or manually adding instances when traffic grows. However, these methods are not cost-effective: you either pay for resources you don't use, or you risk not having enough resources to handle the load. Fortunately, there is a third way: horizontal autoscaling. Horizontal autoscaling is the process of dynamically adjusting the number of instances of a service based on the current load. This way, you only pay for the resources you use, and you can handle load spikes without any manual intervention.
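
To illustrate the idea (not Koyeb's actual algorithm), here is a minimal sketch of the decision a horizontal autoscaler makes on each evaluation tick; the per-instance capacity and the bounds are invented for the example:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 100.0,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Scale the instance count to the current load, within bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(20))   # quiet period  -> 1 (pay only for what you use)
print(desired_instances(750))  # traffic spike -> 8 (no manual intervention)
```
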
By Julien Castets
Hey there! We're back for our third edition of Tips and Tricks. As we said in our first posts on Drizzle ORM and Template Databases in PostgreSQL, our Tips and Tricks mini-series shares helpful insights and cool tech that we've stumbled upon while working on technical stuff. Today's topic is short and sweet: CPU utilization and what that metric actually indicates. If you enjoy it and want to learn more, I encourage you to check out the "further reading" links.
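
As a taste of the topic, here is one common way to sample the metric from Python; psutil is a third-party library and this snippet is our own illustration, not the post's example:

```python
import psutil  # third-party: pip install psutil

# CPU utilization is the percentage of time the CPU spent doing work
# over a sampling interval; a 1-second interval smooths instantaneous noise.
overall = psutil.cpu_percent(interval=1)
per_core = psutil.cpu_percent(interval=1, percpu=True)
print(f"overall: {overall}%  per-core: {per_core}")
```
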
By Yann Léger
Today, we're excited to share that Serverless GPUs are available for all your AI inference needs directly through the Koyeb platform! We're starting with GPU Instances designed to support AI inference workloads, including both heavy generative AI models and lighter computer vision models. These GPUs provide up to 48GB of vRAM, 733 TFLOPS, and 900GB/s of memory bandwidth to support large models, including LLMs and text-to-image models.
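
To make the vRAM figure concrete, here is a rough back-of-the-envelope sizing sketch; the 2-bytes-per-parameter rule of thumb for fp16/bf16 weights is a common approximation, not a Koyeb specification:

```python
# fp16/bf16 weights take ~2 bytes per parameter, so a model with N billion
# parameters needs roughly 2*N GB just for its weights (the KV cache and
# activations need extra headroom, ignored here).
BYTES_PER_PARAM_FP16 = 2

def weights_gb(params_billions: float) -> float:
    """Approximate vRAM needed to hold the model weights, in GB."""
    return params_billions * BYTES_PER_PARAM_FP16

for name, params_b in [("7B", 7), ("13B", 13), ("34B", 34)]:
    print(f"{name}: ~{weights_gb(params_b):.0f} GB for weights alone")
# -> 7B: ~14 GB, 13B: ~26 GB, 34B: ~68 GB
# A 7B or 13B model fits comfortably within 48GB of vRAM; a 34B model
# at fp16 would not, and needs quantization or a larger card.
```
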
By Julien Castets
Hey there! We're back for our second edition of Tips and Tricks. As we said in our first post on Drizzle ORM, our new Tips and Tricks mini-series shares helpful insights and cool tech that we've stumbled upon while working on technical stuff. Today, we're going to talk about template databases in PostgreSQL. Remember, these posts are super short reads. If you don't like the topic of one of the posts, no problem! Just skip it and check out the next one.
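
As a preview of the technique, here is a minimal sketch of cloning a template database from Python; the connection string and database names are placeholders for your own setup:

```python
import psycopg2  # third-party: pip install psycopg2-binary

# Template databases let you prepare a schema and fixtures once, then
# clone them cheaply instead of re-running migrations for each database.
conn = psycopg2.connect("dbname=postgres user=postgres")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    # Clone the prepared template into a fresh database.
    cur.execute("CREATE DATABASE test_run_1 TEMPLATE app_template")
conn.close()
```
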

Koyeb provides the fastest way to run web applications, APIs, and event-driven workloads across clouds with high performance and a developer-oriented experience. Koyeb dramatically reduces deployment time and operational complexity by removing server and infrastructure management for businesses and developers.

At Koyeb, we provide a unified experience to deploy, run, and scale your applications globally, with seamless support for Docker containers, native code, and functions. The platform provides:

  • An easy-to-use web interface to manage all your app deployments
  • Support for all kinds of services, including full web applications, APIs, event-driven serverless functions, background workers, and cron jobs
  • Full support for Docker containers
  • Git-driven deployment to build and deploy native code in Ruby, Node.js, Java, Python, Clojure, Scala, Go, Rust, PHP, or with a Dockerfile present in the repository
  • A high-performance edge network with a global CDN and powerful load balancing across zones with automatic traffic geo-steering
  • Full service mesh and discovery to deploy secure microservices and functions in seconds
  • Transparent deployment in fast, secure MicroVMs
  • The Koyeb CLI (Command Line Interface) to manage resources and automate directly from your terminal
  • An easy-to-use REST API to use Koyeb programmatically (see the sketch below)
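
As a taste of that API, here is a minimal sketch of listing your apps. It assumes the public endpoint at app.koyeb.com, a /v1/apps route, and a personal access token in a KOYEB_TOKEN environment variable; check the API reference for the exact paths and response shape.

```python
import os
import requests

# List apps via the Koyeb REST API. Endpoint path, response fields, and
# the KOYEB_TOKEN variable are assumptions for this sketch; see the docs.
resp = requests.get(
    "https://app.koyeb.com/v1/apps",
    headers={"Authorization": f"Bearer {os.environ['KOYEB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
for app in resp.json().get("apps", []):
    print(app.get("name"))
```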

Koyeb provides the fastest way to deploy apps globally with a developer-friendly serverless platform. No ops, servers, or infrastructure management.