We've cooked up a bunch of improvements designed to reduce friction and make fine-tuning your own models easier than ever.


At RunPod, we're constantly looking for ways to make AI development more accessible. Today, we're excited to announce our newest feature: a pre-configured Axolotl environment for LLM fine-tuning that dramatically simplifies the process of customizing models to your specific needs.
Fine-tuning large language models has traditionally been complex, requiring specialized knowledge, careful environment configuration, and significant computational resources. Yet it remains one of the most powerful techniques for adapting foundation models to specific domains, styles, or tasks.
With RunPod's new Axolotl environment, we've eliminated the technical hurdles, allowing you to focus on what matters most: creating models that work for your unique use cases.
Our pre-configured environment provides a streamlined, no-setup-required approach to fine-tuning with Axolotl, the popular open-source training framework trusted by AI researchers and practitioners. Here's how to get started:
Click the new Fine Tuning option on the left, then provide a base model from Hugging Face, your HF access token (required for gated models), and your dataset. You'll be shown a list of curated GPUs well-suited for fine-tuning, and you can deploy your choice like any other pod.
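If you're assembling a dataset for the first time, one common choice is the Alpaca-style instruction format, which Axolotl supports out of the box. The snippet below is a minimal sketch of writing such a dataset as JSON Lines; the field names follow the Alpaca convention, and the file name and example rows are purely illustrative.

```python
# Hypothetical sketch: writing a small Alpaca-style dataset as JSON Lines.
# Field names follow the Alpaca convention; the file path is illustrative.
import json

examples = [
    {
        "instruction": "Summarize the customer ticket in one sentence.",
        "input": "The user reports that exports to CSV fail for files over 10 MB.",
        "output": "CSV exports fail for files larger than 10 MB.",
    },
    {
        "instruction": "Classify the sentiment of the review.",
        "input": "Setup was painless and support answered within minutes.",
        "output": "positive",
    },
]

# JSON Lines: one JSON object per line, which Axolotl can ingest directly.
with open("my_dataset.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```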
You can then connect to the pod through Jupyter Notebook. If you need the password, you'll find the automatically generated one in the pod's environment variables.
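For example, from a notebook cell or terminal inside the pod you can read it straight from the environment. The variable name JUPYTER_PASSWORD below is an assumption; check your pod's environment variables panel for the exact key.

```python
# Minimal sketch: reading the auto-generated Jupyter password from inside
# the pod. The variable name JUPYTER_PASSWORD is an assumption; check the
# pod's environment variables for the exact key your template uses.
import os

password = os.environ.get("JUPYTER_PASSWORD")
if password:
    print("Jupyter password:", password)
else:
    # If the key differs, list candidate variable names (values omitted).
    candidates = [k for k in os.environ if "PASSWORD" in k.upper()]
    print("Password not found under JUPYTER_PASSWORD; try:", candidates)
```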
This new environment opens doors to a multitude of practical uses across industries and specialties. Organizations in healthcare can train models to understand medical terminology and assist with research, while legal firms might customize models to interpret complex legal documents and precedents. Customer service teams can develop assistants that not only understand product-specific inquiries but also communicate with the exact tone and values that reflect their brand identity.
Data scientists no longer need to work with generic models that lack context – now they can develop specialized models that deeply understand their organization's specific datasets and analytical frameworks. Content creators and marketing teams will find particular value in models fine-tuned to match their unique writing styles, helping to maintain consistent voice across all materials without sacrificing creative flexibility. Meanwhile, academic researchers gain the ability to rapidly experiment with different training methodologies, focusing on their hypotheses rather than wrestling with technical setup and environment configuration challenges.
Our Axolotl environment transforms theoretical possibilities into practical solutions across industries. By removing technical barriers, we've made fine-tuning accessible to organizations of all sizes:
Fine-tuning no longer requires a dedicated machine learning engineer: you can spin up a pod in moments, browse the entire Hugging Face LLM library for a base model, and start testing how your dataset improves a model's performance.
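As a quick sanity check once training finishes, you can load the resulting checkpoint in the same notebook and generate a sample completion with the Hugging Face transformers library. The checkpoint path below is hypothetical; use the output directory from your own Axolotl config.

```python
# Hedged sketch: loading a fine-tuned checkpoint and generating a sample
# completion. The path "./outputs/my-finetune" is illustrative; use the
# output directory from your own Axolotl config.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

checkpoint = "./outputs/my-finetune"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" places the model on available GPUs (requires accelerate).
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "Summarize the customer ticket in one sentence:\nExports to CSV fail for files over 10 MB.\n"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```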
The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.