Brendan McKeag

How to Run a "Hello World" on RunPod Serverless

February 6, 2025

If you're new to serverless computing and Docker, this guide will walk you through creating your first RunPod serverless endpoint from scratch. We'll build a simple "Hello World" application that demonstrates the basic concepts of serverless deployment on RunPod's platform. You'll learn how to build a Docker image locally, push it to DockerHub, set up an endpoint on RunPod, and send your first request to it.

Understanding Serverless and Docker: The Basics

Before diving into the technical steps, let's understand what we're building and why. Imagine you're a chef who wants to serve food to customers. In a traditional setup (like a regular server), you'd need to rent a restaurant space full-time, pay for utilities, and maintain the kitchen even when no customers are present. This is like running a traditional server that's always on – essentially our Pod service. And while this is the right solution for some use cases, we believe in offering you multiple options to make the best use of our service.

Serverless computing is more like having a kitchen that magically appears only when customers order food and disappears when they're done eating. You only pay for the actual time spent cooking. Docker containers are like standardized, portable kitchen setups that ensure your recipes work the same way whether you're cooking in New York or Tokyo. We've discussed in the past how a serverless scaling strategy can help you make the most of your GPU spend.

In this tutorial, we're going to create a simple "kitchen" (Docker container) that knows how to say hello to people, package it up so it can be shared (push to DockerHub), and then set it up on RunPod's serverless platform where it will spring to life whenever someone wants a greeting.

Prerequisites

Before we begin, make sure you have the following installed on your macOS system:

  1. Docker Desktop for Mac - Download from Docker's official website
  2. Python 3.8 or later
  3. A free DockerHub account - Sign up at hub.docker.com
  4. A RunPod account - Sign up at runpod.io
  5. A text editor of your choice. VSCode is my personal recommendation as it's free and user-friendly.

Note that while this tutorial is tuned for macOS users, the terminal commands should work just fine on most flavors of Linux; Windows users can look into installing Windows Subsystem for Linux.

You'll also want to be sure Docker Desktop is running to prevent any potential Docker daemon errors.

Step 1: Setting Up Your Project

First, let's create a new directory for our project and set up the necessary files. Run this in the Terminal:
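A setup along these lines will do; the directory name runpod-hello-world is just an example:

```shell
# Make a project folder and move into it (the name is arbitrary)
mkdir runpod-hello-world
cd runpod-hello-world

# Create the empty files we'll fill in over the next steps
touch handler.py Dockerfile requirements.txt
```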

Step 2: Creating the Handler Function

Create a new file called handler.py with this simple "Hello World" handler:
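The sketch below follows the pattern from RunPod's Python SDK examples; the greeting text and the `name` field are our choices for this tutorial, not requirements of the SDK.

```python
# handler.py
import runpod  # RunPod's serverless SDK (installed via requirements.txt)

def handler(event):
    # RunPod delivers the request's JSON body under event["input"]
    name = event["input"].get("name", "World")
    return f"Hello, {name}!"

# Hand the function to the SDK; it runs the worker loop from here on
runpod.serverless.start({"handler": handler})
```

Whatever the handler returns is what the endpoint sends back as the request's output.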

Step 3: Creating the Dockerfile

Create a new file called Dockerfile that will define how to build your container:
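A minimal Dockerfile for this project might look like the following; the Python base image version here is an example, not a requirement:

```dockerfile
# Start from a slim official Python image
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the handler into the image
COPY handler.py .

# Launch the serverless worker when the container starts
CMD ["python", "-u", "handler.py"]
```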

Create a requirements.txt file with our dependencies:
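For this example the only dependency is RunPod's Python SDK:

```
runpod
```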

Step 4: Building and Testing Locally

Let's build the Docker image and test it locally. Be sure to replace the username with your DockerHub username, e.g. brendanmckeag/runpod-hello-world:latest.
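A build-and-test sequence might look like this (swap brendanmckeag for your own username; the tag name is arbitrary):

```shell
# Build for linux/amd64 -- RunPod's workers are x86-64, so Apple Silicon
# users must cross-build or they'll hit an 'exec format error' later
docker build --platform linux/amd64 -t brendanmckeag/runpod-hello-world:latest .

# Run the image locally to sanity-check it
docker run --rm brendanmckeag/runpod-hello-world:latest
```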

Note that you must build for linux/amd64, otherwise you'll run into an 'exec format error' down the line.

If you run the docker run command to test the image, you'll get an error about a missing test_input.json file.

This is normal. When we run a RunPod serverless worker locally, it looks for a file called test_input.json by default. This file simulates the input that would normally come from actual API requests in production. Since we haven't created this file yet, the worker exits immediately. This error/warning is not worrisome, but seeing it instead of some other response is a good sign that you're on the right track.
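If you'd like the local test to run end to end, you can create that file yourself. A minimal test_input.json for our handler might be (the name field is simply what our handler reads; any JSON under input would do):

```json
{
    "input": {
        "name": "World"
    }
}
```

The worker reads this file from its working directory, so to use it with docker run you'd either copy it into the image or mount it into the container. For this tutorial, though, the error alone is confirmation enough.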

Step 5: Pushing to DockerHub

Now that we've built and tested our image, let's push it to DockerHub:
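The push is two commands; again, substitute your own DockerHub username and tag:

```shell
# Log in to DockerHub, then upload the image
docker login
docker push brendanmckeag/runpod-hello-world:latest
```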

Step 6: Creating a RunPod Serverless Endpoint

Go to RunPod's Serverless dashboard, click "New Endpoint", and then select Docker Image.
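After pointing the endpoint at the image you pushed and deploying it, you can send your first request. Here's a sketch using curl against RunPod's endpoint API; the endpoint ID and API key placeholders below stand in for the values shown in your RunPod console:

```shell
curl -X POST "https://api.runpod.ai/v2/<endpoint-id>/runsync" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"input": {"name": "World"}}'
```

The /runsync route waits for the handler's result before responding; the /run route instead returns a job ID you can poll, which suits longer-running workloads.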
