Our good friend SECourses has made some amazing videos showcasing how to run various generative art projects on Runpod. His latest video, titled "Kohya LoRA on Runpod", is a great introduction to the powerful technique of LoRA (Low-Rank Adaptation). Here's the paper if you're into that kind of stuff: https://arxiv.org/abs/2106.09685.
LoRA is a valuable technique because it lets you create a relatively lightweight file that you apply on top of an existing model to augment its output. It's similar in spirit to textual inversion, but it can achieve some very impressive results without generating an entirely new model. A full model can be anywhere from 2-8 GB, while a LoRA file is usually less than 100 MB! The Kohya project has become the go-to toolkit for training LoRAs, but it can be fairly difficult to install correctly on some machines. This excellent SECourses video walks through the installation step by step.
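The core idea from the paper helps explain why the files are so small: instead of fine-tuning a full weight matrix W, LoRA trains two small matrices A and B whose product forms a low-rank update on top of the frozen W. Here's a minimal NumPy sketch of that idea (the names, shapes, and `alpha` scaling here are illustrative, not taken from the Kohya codebase):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Frozen pretrained weights W plus a low-rank LoRA update B @ A.

    x: input vector, shape (d_in,)
    W: frozen pretrained weights, shape (d_out, d_in)
    A: shape (r, d_in); B: shape (d_out, r), with rank r << min(d_in, d_out)
    alpha: scaling factor applied to the low-rank update
    """
    return W @ x + alpha * (B @ (A @ x))

# Why LoRA files are tiny: you only store A and B, not a new W.
d_in, d_out, r = 4096, 4096, 8
full_params = d_in * d_out        # parameters in one full weight matrix
lora_params = r * (d_in + d_out)  # parameters in its LoRA update
print(full_params, lora_params)   # the LoRA update is a tiny fraction
```

In the paper, B starts at zero, so a freshly initialized LoRA leaves the base model's output unchanged and the update is learned from there.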
We hope you enjoy it, and make sure to share anything cool you make in our Discord channel!

