Cloud GPUs
Rent NVIDIA L40S GPUs from $0.86/hr
A universal data center GPU built on the Ada Lovelace architecture, with 48GB of GDDR6 memory and 18,176 CUDA cores for AI inference, generative AI, and professional visualization workloads.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ada Lovelace Architecture

Versatile architecture combining AI compute with advanced graphics acceleration for breakthrough multi-workload performance.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with FP8 precision support delivering up to 5X higher inference performance.

48GB GDDR6 Memory

Large memory capacity enables handling of multimodal generative AI models and complex 3D rendering tasks.

Third-Generation RT Cores

Advanced ray tracing acceleration with 2X faster real-time performance for professional visualization workflows.
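
To put the Tensor Core and 48GB memory figures above in context, here is a minimal PyTorch sketch of FP16 inference on a single L40S. The model ID is an illustrative placeholder, and because FP8 paths typically require an additional library such as NVIDIA Transformer Engine, the sketch sticks to FP16, which already routes matrix multiplies through the Tensor Cores.

```python
# Minimal FP16 inference sketch (assumes PyTorch and Hugging Face transformers
# are installed; the model ID below is an illustrative placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumption: swap in any causal LM that fits in 48GB

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights -> Tensor Core matmuls
).to("cuda")

prompt = "Generative AI on the L40S"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
# Report peak VRAM so you can see the headroom left out of the 48GB.
print(f"Peak VRAM used: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```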
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 864 GB/s
FP16 Tensor Performance: 362 TFLOPS
PCIe Gen4 ×16 Bandwidth: 63 GB/s
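
For autoregressive LLM decoding, the memory bandwidth figure is usually the binding constraint: each generated token streams the model weights out of GDDR6. A rough, hedged upper bound on single-stream decode throughput follows directly from the number above; real throughput also depends on batch size, KV-cache traffic, and kernel efficiency.

```python
# Back-of-the-envelope decode-throughput bound from memory bandwidth alone.
# Assumes each generated token reads every FP16 weight once; batching,
# KV-cache reads, and kernel overheads will move the real number.
bandwidth_gb_s = 864               # L40S GDDR6 bandwidth from the spec above
params_billion = 7                 # illustrative 7B-parameter model
bytes_per_param = 2                # FP16
weights_gb = params_billion * bytes_per_param        # ~14 GB of weights
tokens_per_s_bound = bandwidth_gb_s / weights_gb     # ~62 tokens/s, single stream
print(f"~{weights_gb} GB of weights -> upper bound ~{tokens_per_s_bound:.0f} tokens/s")
```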
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 864 GB/s
Great for feeding massive multimodal and image batches into VRAM without stalls for LLM inference and real-time rendering.

FP16 Tensor Performance: 362 TFLOPS
Great for accelerating mixed-precision transformer and convolution operations in generative AI and image-model workloads.

PCIe Gen4 ×16 Bandwidth: 63 GB/s
Great for high-speed GPU-to-host and GPU-to-GPU transfers in multi-card setups when NVLink isn't available.
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Community Cloud: $0.79/hr
Secure Cloud: $0.86/hr

Unique GPU Models: Community Cloud 25, Secure Cloud 19
Global Regions: Community Cloud 17, Secure Cloud 14
Network Storage: Community Cloud ✖️, Secure Cloud ✔️
Enterprise-Grade Reliability: Community Cloud ✖️, Secure Cloud ✔️
Savings Plans: Community Cloud ✖️, Secure Cloud ✔️
24/7 Support: Community Cloud ✔️, Secure Cloud ✔️
Delightful Dev Experience: Community Cloud ✔️, Secure Cloud ✔️

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.