Cloud GPUs
Rent NVIDIA RTX 2000 Ada GPUs from $0.24/hr
Compact professional GPU based on Ada Lovelace architecture with 16GB GDDR6 memory and 2,816 CUDA cores for AI workloads, machine learning, and professional applications in small form factor systems.

Powering the next generation of AI & high-performance computing.

Engineered for AI development, deep learning inference, and professional workloads, delivering efficient compute performance in a compact, power-efficient design.

NVIDIA Ada Lovelace Architecture

Latest-generation professional architecture delivering up to 1.5X the FP32 performance of the previous generation in a power-efficient design.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with 88 Tensor Cores delivering significant performance improvements for machine learning tasks.

16GB GDDR6 Memory

Large memory capacity with ECC support provides reliable workspace for AI models and professional datasets.

Dual-Slot Low Profile Design

Compact form factor with efficient cooling enables deployment in small form factor workstations.
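Once an instance is running, you can sanity-check that it exposes the card and memory you expect. A minimal sketch, assuming a Python environment with a CUDA-enabled PyTorch build (the printed values and formatting are illustrative, not a Runpod-specific API):

    import torch

    # Confirm the instance exposes a CUDA device and report its key properties.
    assert torch.cuda.is_available(), "No CUDA device visible on this instance"

    props = torch.cuda.get_device_properties(0)
    print(f"Device:             {props.name}")
    print(f"Total memory:       {props.total_memory / 1024**3:.1f} GiB")  # ~16 GiB expected
    print(f"Multiprocessors:    {props.multi_processor_count}")
    print(f"Compute capability: {props.major}.{props.minor}")             # 8.9 on Ada Lovelace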
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 256 GB/s
FP16 Tensor Performance: 12.00 TFLOPS
PCIe Gen4 ×8 Bandwidth: 32 GB/s
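As a rough sanity check on these figures, the PCIe number follows from the standard per-lane rate, and the memory number follows from bus width times effective data rate. A minimal back-of-the-envelope sketch, assuming PCIe 4.0's 16 GT/s per lane with 128b/130b encoding, treating the listed 32 GB/s as the combined bidirectional figure, and inferring the per-pin memory rate from the listed bandwidth and the card's 128-bit interface rather than from an official spec:

    # Back-of-the-envelope peak-bandwidth arithmetic (illustrative only;
    # real-world throughput is lower due to protocol and driver overhead).

    # PCIe Gen4: 16 GT/s per lane with 128b/130b encoding.
    lane_gbits = 16 * 128 / 130              # ~15.75 Gbit/s usable per lane, per direction
    x8_one_way = 8 * lane_gbits / 8          # 8 lanes, /8 bits per byte -> ~15.8 GB/s
    x8_bidir   = 2 * x8_one_way              # ~31.5 GB/s both directions (~32 GB/s listed)

    # GDDR6 bandwidth = bus width in bytes x effective data rate per pin.
    # The listed 256 GB/s on a 128-bit bus implies ~16 Gbps per pin (assumption).
    mem_bw = (128 / 8) * 16                  # 256 GB/s

    print(f"PCIe x8 per direction: {x8_one_way:.1f} GB/s")
    print(f"PCIe x8 bidirectional: {x8_bidir:.1f} GB/s")
    print(f"Memory bandwidth:      {mem_bw:.0f} GB/s")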
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 256 GB/s
Great for: Sustaining high-throughput feeding of large, high-resolution image batches to VRAM for compact workstation image-model inference and visualization.

FP16 Tensor Performance: 12.00 TFLOPS
Great for: Speeding mixed-precision inference and training of CNNs and vision transformers in space-constrained setups (see the code sketch after this table).

PCIe Gen4 ×8 Bandwidth: 32 GB/s
Great for: Fast host-to-GPU and GPU-to-GPU transfers in multi-card image-model training when NVLink isn't supported.
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Price: Secure Cloud $0.24/hr, Community Cloud N/A
Unique GPU Models: Secure Cloud 19, Community Cloud 25
Global Regions: Secure Cloud 14, Community Cloud 17
Network Storage: Secure Cloud ✔️, Community Cloud ✖️
Enterprise-Grade Reliability: Secure Cloud ✔️, Community Cloud ✖️
Savings Plans: Secure Cloud ✔️, Community Cloud ✖️
24/7 Support: Secure Cloud ✔️, Community Cloud ✔️
Delightful Dev Experience: Secure Cloud ✔️, Community Cloud ✔️

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
