We sponsored TreeHacks 2026 at Stanford, where teams built on Runpod across 36 hours, shipping projects ranging from GPU-accelerated cancer drug discovery to real-time brain-to-music generation. Our top prizes went to RepoRx, NeuroBlocks, HackOverflow, and ADapt.

Last weekend, we sponsored TreeHacks at Stanford, the world's largest collegiate hackathon. Over 1,000 hackers from 30+ universities and 12 countries descended on the Jen-Hsun Huang Engineering Center for 36 straight hours of building. A ton of teams built on Runpod, and we gave away over $20K in credits to fuel their projects.
TreeHacks isn't your average hackathon. Out of 15,000+ applicants, roughly 1,000 were selected based on their track record of actually building things. The organizers filter hard for people who ship, then give them everything they need: flights, food, lodging, massive prizes ($500K total prize pool), dev credits, and enough side events (llama petting, robot fights, lightsaber battles) to keep people from burning out. Sam Altman and Garry Tan gave keynotes. The whole thing is run by students. It's one of the best-organized builder events we've seen.
We were there to put GPU compute in the hands of people who'd actually push it.

The range of projects was wild. Here's a sample of what teams shipped in under 36 hours:
Cancer drug discovery in minutes. One team built RepoRx, an AI-powered drug repurposing pipeline that matched underutilized FDA-approved drugs to disease proteins. They ran DiffDock molecular docking simulations on Runpod Serverless GPUs, using a physics engine to compress what normally takes months of computational research into minutes (a rough sketch of a Serverless worker for this kind of workload follows the list).
A working Minecraft app from a single prompt. One team spun up a swarm of 200 AI agents and had a functional Minecraft application running in 45 minutes, all from one prompt. That's the kind of thing that makes you rethink what's possible with orchestrated inference at scale.
Brain-to-music in real time. A team 3D-printed an EEG headset that reads brain signals, categorizes emotions, and generates music from them. Hardware and AI inference, end to end, built from scratch during the event.
Self-distillation finetuning, live. Another team implemented SDFT (arxiv.org/abs/2601.19897), a finetuning method where a model uses itself as its own teacher to learn new facts without forgetting old ones. They updated actual model weights in under a minute. Watching it happen live was something else. (There's a rough sketch of the self-distillation idea after the list.)
Visual neural network builder. NeuroBlocks gave non-experts a drag-and-drop interface for building and training real neural networks with PyTorch on Runpod. No code required to architect, train, and evaluate a model.
AI agent knowledge commons. HackOverflow built a persistent knowledge system for AI agents, using Flash for high-performance inference triage. Agents that actually remember and share what they learn across sessions.
Video ad localization from a single upload. ADapt let you upload one video and generate targeted ad variants for every audience segment. Upload once, deploy everywhere.
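
For a rough idea of how a team wires a workload like that docking pipeline into Runpod Serverless, here's a minimal sketch using the runpod Python SDK's handler pattern. The run_docking function and the payload fields are hypothetical placeholders, not RepoRx's actual code:

```python
# Hedged sketch, not RepoRx's code: a minimal Runpod Serverless worker that
# wraps a GPU docking job. run_docking() and the payload fields below are
# hypothetical placeholders standing in for the real DiffDock invocation.
import runpod


def run_docking(protein_pdb_url: str, ligand_smiles: str) -> dict:
    # Placeholder for the actual GPU work (e.g. running DiffDock on the pair).
    return {"protein": protein_pdb_url, "ligand": ligand_smiles, "poses": []}


def handler(job):
    # Runpod delivers each request's payload under job["input"].
    params = job["input"]
    return run_docking(params["protein_pdb_url"], params["ligand_smiles"])


# Start the worker loop; Runpod Serverless calls handler() once per request.
runpod.serverless.start({"handler": handler})
```

The appeal for a 36-hour build is that the team only writes the handler; queueing, scaling, and GPU provisioning happen on the platform side.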
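And to make the self-distillation idea a bit more concrete, here's a hedged sketch of the general pattern: a frozen copy of the model acts as its own teacher, and a KL term keeps the updated weights close to the original on unrelated text so new facts don't stomp on old behavior. This is an illustration of the idea, not the exact SDFT recipe from the linked paper; the model, the example sentences, and the loss weight are placeholders:

```python
# Hedged sketch of self-distillation-style finetuning: a frozen copy of the
# model is its own teacher. Not the exact SDFT recipe; the model, data, and
# the 0.5 weight are illustrative placeholders.
import copy

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
student = AutoModelForCausalLM.from_pretrained(model_name)
teacher = copy.deepcopy(student).eval()  # the model teaching itself
for p in teacher.parameters():
    p.requires_grad_(False)

new_fact = tok("TreeHacks 2026 was held at Stanford University.", return_tensors="pt")
replay = tok("Photosynthesis converts light into chemical energy.", return_tensors="pt")

opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
for step in range(20):  # a handful of steps like this runs in seconds on a GPU
    # Cross-entropy on the new fact: learn the new information.
    ce = student(**new_fact, labels=new_fact["input_ids"]).loss
    # KL toward the frozen teacher on unrelated text: don't forget old behavior.
    with torch.no_grad():
        t_logits = teacher(**replay).logits
    s_logits = student(**replay).logits
    kl = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    )
    loss = ce + 0.5 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```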

We awarded prizes to the teams that best demonstrated what's possible with GPU-accelerated compute:
1st place: RepoRx. AI-powered drug repurposing for cancer research. DiffDock molecular docking simulations on Runpod Serverless, turning months of research into minutes of compute.
2nd place: NeuroBlocks. Visual drag-and-drop platform for building and training real neural networks with PyTorch on Runpod.
3rd place (tie): HackOverflow. Persistent knowledge commons for AI agents, powered by Flash for inference triage.
3rd place (tie): ADapt. AI-powered video ad localization. One upload, targeted variants for every audience.

We quietly tested some new tooling with hackers over the weekend. Most of the teams that tried it reported zero errors, and one team said Runpod was the easiest part of their entire project. That's the bar we're aiming for: infrastructure that disappears so builders can focus on what they're actually making.
Over 100 hackers also expressed interest in working at Runpod (we're following up with all of you).
We can't say more yet on what we're building. But we're cooking something.

TreeHacks attracts exactly the kind of builder we care about. These are people who don't just talk about what AI could do. They sit down, pick a hard problem, and ship a working solution in 36 hours. Drug discovery, neural interface hardware, novel finetuning methods, agentic systems. All built on GPUs, all needing real compute to run.
That's who Runpod is for. If you're building something that needs serious GPU infrastructure, we want to make that part effortless.
Check out all the TreeHacks 2026 projects on Devpost.