Serverless GPUs in Private Preview: L4, L40S, V100, and more
Today, we’re excited to share that Serverless GPUs are available in private preview directly on the Koyeb platform for your AI inference needs! We’re starting with GPU Instances designed to support AI inference workloads, from heavy generative AI models to lighter computer vision models. These Instances provide up to 48GB of vRAM, 733 TFLOPS of compute, and 900GB/s of memory bandwidth to serve large models, including LLMs and text-to-image models.
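To give a sense of the workloads these Instances are built for, here is a minimal sketch of running a 7B-parameter LLM in half precision on a single GPU. The model name and generation settings are illustrative, not a Koyeb requirement; any framework that targets a CUDA device runs the same way on these Instances.

```python
# Illustrative LLM inference sketch; the model ID is an example, swap in your own.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # example model

# Use the GPU when one is available, fall back to CPU otherwise
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit comfortably in GPU vRAM
).to(device)

prompt = "Explain serverless GPUs in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```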