Cloud GPUs
Rent NVIDIA L40S GPUs from $0.86/hr
Universal data center GPU based on the Ada Lovelace architecture, with 48GB of GDDR6 memory and 18,176 CUDA cores for AI inference, generative AI, and professional visualization workloads.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ada Lovelace Architecture

Versatile architecture combining AI compute with advanced graphics acceleration for breakthrough multi-workload performance.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with FP8 precision support delivering up to 5X higher inference performance.

48GB GDDR6 Memory

Large memory capacity enables handling of multimodal generative AI models and complex 3D rendering tasks (see the sizing sketch after the feature list).

Third-Generation RT Cores

Advanced ray tracing acceleration with 2X faster real-time performance for professional visualization workflows.
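
To make the 48GB capacity and FP8 support above concrete, here is a minimal sizing sketch. The model sizes and the 20% allowance for KV cache, activations, and runtime buffers are illustrative assumptions, so treat the results as rough headroom estimates rather than guarantees.

```python
# Quick check of which model sizes fit in the card's 48 GB of GDDR6 at different
# precisions. The 20% allowance for KV cache, activations, and runtime buffers is
# an illustrative assumption, not a measured figure.

VRAM_GB = 48
OVERHEAD = 1.20  # assumed headroom factor for KV cache / activations / CUDA context

def fits(params_billion: float, bytes_per_param: float) -> bool:
    """Return True if the weights (plus assumed overhead) fit in VRAM."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return weights_gb * OVERHEAD <= VRAM_GB

for size_b in (7, 13, 34, 70):
    print(f"{size_b:>3}B  FP16 fits: {fits(size_b, 2)}   FP8 fits: {fits(size_b, 1)}")
# FP16: 7B and 13B fit comfortably; 34B (~81.6 GB with overhead) does not.
# FP8 roughly doubles the headroom: 34B (~40.8 GB) fits, 70B still does not.
```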
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 864 GB/s
FP16 Tensor Performance: 362 TFLOPS
PCIe Gen4 ×16 Bandwidth: 63 GB/s
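
As a rough illustration of what 864 GB/s of memory bandwidth means in practice, the sketch below estimates an upper bound on single-stream LLM decode speed. It assumes a hypothetical 7B-parameter FP16 model whose weights are streamed once per generated token; real throughput will be lower once KV-cache reads, kernel overheads, and batching effects are included.

```python
# Rough ceiling on single-stream decode speed for a memory-bandwidth-bound LLM.

MEM_BANDWIDTH_GBPS = 864          # GB/s, from the spec card above

params_b = 7e9                    # assumed model size (7B parameters)
bytes_per_param = 2               # FP16
weight_gb = params_b * bytes_per_param / 1e9   # ~14 GB of weights

# Each decoded token streams (approximately) every weight once from GDDR6.
tokens_per_s = MEM_BANDWIDTH_GBPS / weight_gb
print(f"Upper bound: ~{tokens_per_s:.0f} tokens/s for a single request")  # ~62
```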
Use Cases

Popular use cases.

Designed for demanding workloads: learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 864 GB/s
Great for: Feeding massive multimodal and image batches into VRAM without stalls for LLM inference and real-time rendering.

FP16 Tensor Performance: 362 TFLOPS
Great for: Accelerating mixed-precision transformer and convolution operations in generative AI and image-model workloads.

PCIe Gen4 ×16 Bandwidth: 63 GB/s
Great for: High-speed GPU-to-host and GPU-to-GPU transfers in multi-card setups when NVLink isn’t available.
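
The bandwidth rows above also set expectations for data movement. The sketch below compares how long a hypothetical 4 GB batch takes to cross PCIe Gen4 ×16 at the quoted 63 GB/s versus being read from on-card GDDR6 at 864 GB/s; the batch size is an assumption for illustration only.

```python
# Back-of-envelope transfer-time estimate using the bandwidth figures quoted above.
# The 4 GB batch size is an arbitrary illustrative assumption, not a spec.

PCIE_GEN4_X16_GBPS = 63    # GB/s, host <-> GPU over PCIe Gen4 x16 (from the spec table)
GDDR6_GBPS = 864           # GB/s, on-card memory bandwidth (from the spec table)

batch_gb = 4.0             # hypothetical image/token batch staged from host RAM

pcie_ms = batch_gb / PCIE_GEN4_X16_GBPS * 1000
vram_ms = batch_gb / GDDR6_GBPS * 1000

print(f"Host -> GPU over PCIe: ~{pcie_ms:.1f} ms")   # ~63.5 ms
print(f"Read from GDDR6:       ~{vram_ms:.1f} ms")   # ~4.6 ms
```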
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

                      Community Cloud    Secure Cloud
Price                 $0.79/hr           $0.86/hr
Unique GPU Models     25                 19
Global Regions        17                 14

Also part of the comparison: Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, and Delightful Dev Experience.
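
For budgeting, the hourly rates above translate directly into job cost. The sketch below prices a hypothetical 8-GPU, 36-hour fine-tuning run on each tier; the GPU count and duration are made-up illustrative values, not benchmarks.

```python
# Simple cost estimate from the hourly L40S rates listed above.
# The 8-GPU, 36-hour fine-tuning run is an assumed workload for illustration.

SECURE_CLOUD_HR = 0.86      # $/hr per L40S, Secure Cloud
COMMUNITY_CLOUD_HR = 0.79   # $/hr per L40S, Community Cloud

gpus = 8          # assumed cluster size
hours = 36        # assumed wall-clock duration of the job

for name, rate in [("Secure Cloud", SECURE_CLOUD_HR), ("Community Cloud", COMMUNITY_CLOUD_HR)]:
    print(f"{name}: ${gpus * hours * rate:,.2f}")   # $247.68 / $227.52
```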

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
