Cloud GPUs
Rent NVIDIA RTX 4090 GPUs from $0.69/hr
A high-end consumer GPU built on the Ada Lovelace architecture, with 24GB of GDDR6X memory and 16,384 CUDA cores for AI workloads, machine learning, and image generation.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ada Lovelace Architecture

Next-generation consumer architecture delivering exceptional AI performance with improved power efficiency and advanced compute capabilities.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with 512 Tensor Cores providing significant performance gains for machine learning workloads.

24GB GDDR6X Memory

Massive memory capacity with 1,008GB/s bandwidth enables training and inference on large AI models.
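A quick way to reason about that 24GB capacity is a back-of-envelope fit check. The sketch below is illustrative, not from the page: it assumes 2 bytes per parameter for FP16 weights and a hypothetical 20% allowance for activations, KV cache, and CUDA context overhead.

```python
# Rough sketch: does a model fit in the RTX 4090's 24 GB of VRAM at FP16?
# Assumptions (not from the page): 2 bytes/param for FP16 weights, plus a
# ~20% allowance for activations, KV cache, and CUDA context overhead.
def fits_in_vram(params_billions: float, vram_gb: float = 24.0,
                 bytes_per_param: int = 2, overhead: float = 0.20) -> bool:
    weights_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weights_gb * (1 + overhead) <= vram_gb

print(fits_in_vram(7))    # 7B model at FP16: ~16.8 GB needed -> True
print(fits_in_vram(13))   # 13B model at FP16: ~31.2 GB needed -> False
```

By this estimate a 7B-parameter model fits comfortably at FP16, while a 13B model would need quantization (e.g. 8-bit or 4-bit weights) to fit.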

Third-Generation RT Cores

Advanced ray tracing acceleration with 128 RT Cores ideal for AI rendering applications and computer vision tasks.
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 1,008 GB/s

FP16 Tensor Performance: 165.2 TFLOPS

PCIe Gen4 ×16 Bandwidth: 63 GB/s (bidirectional)
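The interconnect figure above translates directly into data-movement time. A minimal sketch, assuming the 63 GB/s is the aggregate of both directions of a PCIe Gen4 ×16 link, so a single host-to-device copy sees roughly half:

```python
# Back-of-envelope: time to move data between host and GPU.
# Assumption: 63 GB/s is the bidirectional aggregate, so a one-way
# copy sees roughly half, ~31.5 GB/s (real-world rates will be lower).
def transfer_seconds(gigabytes: float, gb_per_s: float = 31.5) -> float:
    return gigabytes / gb_per_s

# Copying 14 GB of FP16 weights for a 7B-parameter model:
print(round(transfer_seconds(14.0), 2))  # ~0.44 s one way
```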
Use Cases

Popular use cases.

Designed for demanding workloads: learn whether this GPU fits your needs.
Technical Specs

Ready for your most
demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 1,008 GB/s | Feeding large image batches and high-resolution textures into VRAM without stalls for rendering, LLM inference, and real-time simulations.
FP16 Tensor Performance | 165.2 TFLOPS | Speeding mixed-precision transformer training and inference, boosting token throughput in generative AI and deep learning workloads.
PCIe Gen4 ×16 Bandwidth | 63 GB/s (bidirectional) | Enabling high-speed GPU-to-GPU and host-to-device transfers when NVLink isn't available, ensuring smooth multi-GPU scaling for large models.
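The two headline numbers in the table can be combined into a simple roofline-style estimate of when a workload stops being memory-bound. This is a sketch using only the figures above; the ridge point is peak FLOPs divided by memory bandwidth.

```python
# Roofline sketch from the table's own numbers: the arithmetic intensity
# (FLOPs per byte moved) above which the GPU is compute-bound rather
# than memory-bandwidth-bound.
PEAK_FP16_TFLOPS = 165.2    # FP16 Tensor throughput from the table
MEM_BANDWIDTH_GBS = 1008    # GDDR6X bandwidth from the table

ridge = PEAK_FP16_TFLOPS * 1e12 / (MEM_BANDWIDTH_GBS * 1e9)
print(round(ridge, 1))  # ~163.9 FLOP/byte
```

Batch-1 LLM decoding sits far below ~164 FLOP/byte, which is why single-stream inference throughput tends to track memory bandwidth rather than peak TFLOPS.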
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Feature | Community Cloud | Secure Cloud
Price | $0.34/hr | $0.69/hr
Unique GPU Models | 25 | 19
Global Regions | 17 | 14

Also available: Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, and a Delightful Dev Experience.
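The listed hourly rates make job costing straightforward. A minimal sketch using the two prices above, with a hypothetical 48-hour fine-tuning run as the example:

```python
# Quick cost comparison using the listed rates (illustrative only).
COMMUNITY_HR = 0.34   # $/hr, Community Cloud
SECURE_HR = 0.69      # $/hr, Secure Cloud

def job_cost(hours: float, rate: float) -> float:
    """Total cost in dollars for a job of the given duration."""
    return round(hours * rate, 2)

# A 48-hour fine-tuning run:
print(job_cost(48, COMMUNITY_HR))  # 16.32
print(job_cost(48, SECURE_HR))     # 33.12
```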

7,035,265,000 requests since launch, and 400k developers worldwide.

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
