You read the title! Whisper just got faster with Runpod's new Faster-Whisper serverless endpoint.
For those who haven't used it before, Whisper is an AI speech recognition model trained on hundreds of thousands of hours of multilingual human speech. It's great for audio captioning (things like podcasts, YouTube videos, TV shows, songs, etc.), and is capable of translating non-English audio to English as it goes, too.
We will be deprecating our existing Whisper serverless endpoint in favor of our new Faster-Whisper endpoint. This endpoint provides the same great service as the regular Whisper endpoint in a fraction of the compute time.
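If you'd like to try it from code, a request looks roughly like the sketch below. It assumes the standard Runpod serverless call pattern (a POST to the endpoint's runsync URL with your API key) and an input payload containing an audio URL and a model name; the endpoint ID and input field names here are placeholders, so check the endpoint's documentation for the exact schema.

```python
import requests

RUNPOD_API_KEY = "YOUR_API_KEY"   # placeholder: your Runpod API key
ENDPOINT_ID = "your-endpoint-id"  # placeholder: the Faster-Whisper endpoint ID from your console

# Submit a transcription job synchronously and wait for the result.
response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
    json={
        "input": {
            # Assumed input fields: a URL to the clip and the Whisper model size.
            "audio": "https://example.com/podcast-episode.mp3",
            "model": "large-v2",
        }
    },
    timeout=600,
)
response.raise_for_status()
print(response.json())  # transcription and job metadata returned by the endpoint
```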
You'll get your Whisper results 2-4x faster with Faster-Whisper! Here are some sample execution times across audio clips of varying lengths, all done with the large-v2 model using Runpod's endpoints:
Does faster mean pricier? Nope! In fact, since the new Faster-Whisper endpoint is 2-4x faster, it's also 2-4x cheaper!
Our serverless APIs charge only for the time it takes to execute a call, at $0.00025/s. With the Faster-Whisper endpoint, our pricing for Whisper API access is more competitive than ever. Most other providers, such as OpenAI ($0.0001/s), charge for API access based on the length of the audio clip being transcribed, which is dramatically longer than the time it takes to return that transcription. Check out the sample cost comparison below for each of the audio clips above:
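To make the difference concrete, here's the arithmetic for a hypothetical clip. The clip length and execution time below are illustrative assumptions, not measured benchmarks; only the two per-second rates come from the comparison above.

```python
# Illustrative cost math (assumed numbers, not benchmarks):
# a 10-minute clip that Faster-Whisper finishes in ~30 s of execution time.
audio_seconds = 10 * 60     # length of the audio clip
execution_seconds = 30      # assumed Faster-Whisper execution time

runpod_rate = 0.00025       # $/s of execution time (Runpod serverless billing)
openai_rate = 0.0001        # $/s of audio length (per-audio-second billing)

runpod_cost = execution_seconds * runpod_rate  # billed on compute time
openai_cost = audio_seconds * openai_rate      # billed on audio duration

print(f"Runpod: ${runpod_cost:.4f}")  # $0.0075
print(f"OpenAI: ${openai_cost:.4f}")  # $0.0600
```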
We'd be happy to answer your questions or concerns on our Discord server or via help@runpod.io!