Cloud GPUs
Rent NVIDIA L4 GPUs from $0.43/hr
Energy-efficient data center GPU based on Ada Lovelace architecture with 24GB GDDR6 memory and 7,424 CUDA cores for AI inference, video processing, and edge computing applications.

Powering the next generation of AI & high-performance computing.

Engineered for AI inference, deep learning, and high-performance workloads, delivering exceptional compute density and efficiency.

NVIDIA Ada Lovelace Architecture

Universal accelerator architecture delivering up to 120X better AI video performance than CPU-based solutions, with exceptional energy efficiency.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with FP8 precision support delivering up to 2.5X higher performance for generative AI inference.

24GB GDDR6 Memory

Large memory capacity enables handling of larger AI models and complex video processing tasks.
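As a rough sizing sketch (assumption: this counts model weights only, ignoring activations, KV cache, and framework overhead), you can estimate whether a model's parameters fit in the L4's 24GB:

```python
# Back-of-envelope check of whether a model's weights fit in 24 GB of GDDR6.
# Assumption: weights only -- activations, KV cache, and runtime overhead
# all add to the real footprint.

L4_MEMORY_GB = 24  # decimal gigabytes, as marketed

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model in FP16 (2 bytes per parameter):
fp16_7b = weights_gb(7, 2)   # 14.0 GB -> fits with headroom
# The same model in FP32 (4 bytes per parameter):
fp32_7b = weights_gb(7, 4)   # 28.0 GB -> does not fit

print(fp16_7b, fp16_7b <= L4_MEMORY_GB)  # 14.0 True
print(fp32_7b, fp32_7b <= L4_MEMORY_GB)  # 28.0 False
```

This is why FP16 (or smaller) precision is the usual deployment mode on cards in this memory class.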

Low-Profile Form Factor

Compact 72W, low-profile design fits in standard servers, from edge deployments to space-constrained data centers.
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 300 GB/s

FP16 Tensor Performance: 242 TFLOPS

PCIe Gen4 ×16 Bandwidth: 64 GB/s
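The 64 GB/s PCIe figure follows from the Gen4 link parameters (a sketch; real-world throughput is further reduced by protocol overhead):

```python
# Where the 64 GB/s PCIe Gen4 x16 figure comes from.

GT_PER_S = 16          # PCIe Gen4 transfer rate per lane (gigatransfers/s)
LANES = 16             # x16 slot
BITS_PER_BYTE = 8

raw_per_dir = GT_PER_S / BITS_PER_BYTE * LANES   # 32.0 GB/s each direction
raw_bidi = raw_per_dir * 2                       # 64.0 GB/s aggregate, both directions

# PCIe Gen4 uses 128b/130b line encoding, so usable bandwidth is slightly lower:
usable_bidi = raw_bidi * 128 / 130               # ~63.0 GB/s

print(raw_bidi, round(usable_bidi, 1))  # 64.0 63.0
```

So the marketed 64 GB/s is the raw bidirectional aggregate before encoding overhead.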
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 300 GB/s | Feeding large batches of data into GPU memory without stalls; ideal for inference-heavy AI workloads and real-time graphics.
FP16 Tensor Performance | 242 TFLOPS | Boosting mixed-precision AI inference and graphics computations, accelerating model throughput on edge and data-center servers.
PCIe Gen4 ×16 Bandwidth | 64 GB/s | Enabling high-speed GPU-to-host and GPU-to-GPU transfers when NVLink isn't available, ensuring smooth scaling in multi-card setups.
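Taken together, the compute and bandwidth peaks imply a roofline balance point (an idealized sketch using the peak figures quoted above; sustained rates are lower in practice):

```python
# Rough roofline balance point for the L4, from the peak figures above.
# Assumption: idealized peaks; real kernels sustain less.

PEAK_FP16_FLOPS = 242e12   # 242 TFLOPS FP16 tensor performance
PEAK_MEM_BPS = 300e9       # 300 GB/s GDDR6 memory bandwidth

balance = PEAK_FP16_FLOPS / PEAK_MEM_BPS  # FLOPs needed per byte moved
print(round(balance, 1))  # 806.7
```

Kernels doing fewer than roughly 800 FLOPs per byte of memory traffic (for example, small-batch LLM decoding) are bandwidth-bound on this card, which is why memory bandwidth often matters as much as TFLOPS for inference.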
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet; we've got you.

 | Community Cloud | Secure Cloud
Price (L4) | N/A | $0.43/hr
Unique GPU Models | 25 | 19
Global Regions | 17 | 14

Both clouds include Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, and a Delightful Dev Experience.
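To put the $0.43/hr Secure Cloud rate in context, simple arithmetic gives the cost of longer runs (assumption: continuous on-demand usage, no savings plan applied):

```python
# What $0.43/hr works out to over longer runs.
# Assumption: continuous on-demand usage at the listed rate.

RATE_PER_HR = 0.43

per_day = RATE_PER_HR * 24    # cost of a full day
per_month = per_day * 30      # cost of a 30-day month

print(round(per_day, 2), round(per_month, 2))  # 10.32 309.6
```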

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
