Cloud GPUs
Rent NVIDIA RTX 3090 GPUs from $0.46/hr
A high-end consumer GPU built on the Ampere architecture, with 24GB of GDDR6X memory and 10,496 CUDA cores for AI workloads, machine learning research, and model fine-tuning.

Powering the next generation of AI & high-performance computing.

Engineered for AI training, deep learning, and high-performance workloads, with strong compute performance and efficiency.

NVIDIA Ampere Architecture

Second-generation RTX architecture delivering significant performance improvements for AI compute and parallel processing workloads.

Third-Generation Tensor Cores

Enhanced AI acceleration from 328 Tensor Cores, which speed up the mixed-precision matrix math at the heart of machine learning training and inference.

24GB GDDR6X Memory

Generous capacity and 936 GB/s of bandwidth make it practical to work with medium to large AI models.
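
As a rough sizing sketch of what fits in 24GB (the 7B parameter count below is a hypothetical example, and the math covers weights only, ignoring activations, gradients, and optimizer state):

# Back-of-the-envelope VRAM math for model weights only.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1e9

params = 7e9                            # hypothetical 7B-parameter model
print(weight_memory_gb(params, 2))      # FP16/BF16 weights: ~14 GB, fits in 24 GB
print(weight_memory_gb(params, 4))      # FP32 weights: ~28 GB, exceeds 24 GB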

Second-Generation RT Cores

Advanced ray tracing acceleration with 82 RT Cores, ideal for AI-assisted rendering and computer vision applications.
Performance

Key specs at a glance.

The numbers behind AI, ML, and HPC performance.

Memory Bandwidth: 936 GB/s
FP16 Tensor Performance: 142 TFLOPS
NVLink Bandwidth: 112.5 GB/s
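
The FP16 tensor figure above refers to mixed-precision throughput. A minimal PyTorch sketch of a training step that uses the Tensor Cores via automatic mixed precision (the model, batch, and optimizer are placeholders, not part of any benchmark):

import torch

model = torch.nn.Linear(1024, 1024).cuda()               # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()                      # guards FP16 gradients against underflow

x = torch.randn(64, 1024, device="cuda")                  # placeholder batch
target = torch.randn(64, 1024, device="cuda")

with torch.cuda.amp.autocast():                           # matmuls run in FP16 on Tensor Cores
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
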
Use Cases

Popular use cases.

Designed for demanding workloads: learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 936 GB/s
Great for delivering high throughput to VRAM when feeding high-resolution image batches in real-time rendering, simulation, and AI inference.

FP16 Tensor Performance: 142 TFLOPS
Great for accelerating mixed-precision deep learning tasks like image generation, super-resolution, and large-scale model inference.

NVLink Bandwidth: 112.5 GB/s
Great for fast GPU-to-GPU data sharing across two cards, enabling seamless multi-GPU scaling in rendering and AI workloads.
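
The NVLink figure matters mainly when a job spans both cards of an NVLink pair, since gradient traffic then moves GPU-to-GPU on every training step. A minimal two-GPU DistributedDataParallel sketch, assuming a single node launched with torchrun --nproc_per_node=2 train.py (the model, batch, and script name are illustrative):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                        # torchrun supplies rank/world-size env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # placeholder model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

    x = torch.randn(64, 1024, device=f"cuda:{local_rank}") # placeholder batch
    loss = ddp_model(x).square().mean()
    loss.backward()                                        # gradients are all-reduced between the two GPUs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
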
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Community Cloud: $0.22/hr
Secure Cloud: $0.46/hr
Unique GPU Models: 25 (Community Cloud), 19 (Secure Cloud)
Global Regions: 17 (Community Cloud), 14 (Secure Cloud)

Also part of the comparison: Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, and Delightful Dev Experience.

7,035,265,000 requests since launch and 400k developers worldwide.

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
