Advanced · Updated Dec 16, 2025

Fine-tuning FLUX with LoRA

Deep dive into preparing datasets, hyperparameter tuning, and deploying your own custom LoRA models for unique styles.

Dr. Marcus Webb
ML Research Lead
20 min read

What is LoRA?

LoRA (Low-Rank Adaptation) is a technique for efficiently fine-tuning large models: instead of updating every weight of the base network, training learns a small set of additional low-rank parameters that are applied on top of the frozen weights. Key benefits:

  • Smaller file sizes - LoRAs are typically 10-200MB
  • Faster training - Hours instead of days
  • Composable - Combine multiple LoRAs
  • No base model changes - Use with any compatible model
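
To make "a small set of additional parameters" concrete, here is a minimal NumPy sketch of the idea: the base weight matrix W stays frozen, and training only learns two small matrices A and B whose product forms a low-rank update. The dimensions and rank below are illustrative, not FLUX's actual layer sizes; alpha and rank correspond to the network_alpha and network_rank settings used later in this guide.

python
import numpy as np

# Illustrative layer size and LoRA rank (not FLUX's real dimensions)
d_out, d_in, rank = 4096, 4096, 32
alpha = 16  # scaling factor, analogous to network_alpha

W = np.random.randn(d_out, d_in)         # frozen base weight
A = np.random.randn(rank, d_in) * 0.01   # trainable, rank x d_in
B = np.zeros((d_out, rank))              # trainable, starts at zero

# Effective weight applied at inference time
W_effective = W + (alpha / rank) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"Full fine-tune would touch {full_params:,} parameters")
print(f"LoRA trains only {lora_params:,} ({lora_params / full_params:.1%})")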

Preparing Your Dataset

Dataset Requirements

  • Minimum images: 10-20 (more is better; see the validation sketch after this list)
  • Recommended: 50-200 images
  • Resolution: At least 512x512, ideally 1024x1024
  • Consistency: Similar style/subject across images
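
To sanity-check a folder against these requirements, a quick script helps. This is a minimal sketch: it assumes your images live in a local training_images/ directory (a hypothetical path) and that Pillow is installed.

python
from pathlib import Path

from PIL import Image  # pip install pillow

DATASET_DIR = Path("training_images")  # hypothetical local folder
MIN_SIDE = 512     # requirement: at least 512x512
IDEAL_SIDE = 1024  # ideally 1024x1024

images = sorted(
    p for p in DATASET_DIR.iterdir()
    if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}
)
print(f"Found {len(images)} images (recommended: 50-200, minimum 10-20)")

for path in images:
    with Image.open(path) as img:
        w, h = img.size
    if min(w, h) < MIN_SIDE:
        print(f"  TOO SMALL: {path.name} is {w}x{h}")
    elif min(w, h) < IDEAL_SIDE:
        print(f"  below ideal: {path.name} is {w}x{h}")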

Image Selection Tips

  1. High quality, well-lit images
  2. Consistent subject/style
  3. Various angles and poses
  4. Clean backgrounds when possible
  5. No watermarks or text

Captioning Your Images

Each image needs a caption describing it:

text
photo_001.jpg -> "A portrait of sks person, professional lighting, studio background"
photo_002.jpg -> "A sks person smiling, outdoor natural lighting, park background"

Use a rare trigger word (like sks) in every caption so the model learns to associate it with your subject; including the same word in your prompts later is what invokes the trained concept.
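
A common convention is to keep each caption in a sidecar .txt file next to its image (photo_001.jpg alongside photo_001.txt). The sketch below is one way to collect those pairs into the {"url": ..., "caption": ...} entries the training call in the next section expects; the sidecar layout and the URL host are assumptions, so substitute wherever you actually store and serve the uploaded files.

python
from pathlib import Path

DATASET_DIR = Path("training_images")  # hypothetical local folder
TRIGGER_WORD = "sks"

training_images = []
for image_path in sorted(DATASET_DIR.glob("*.jpg")):
    caption = image_path.with_suffix(".txt").read_text().strip()
    if TRIGGER_WORD not in caption:
        print(f"Warning: {image_path.name} caption is missing the trigger word")
    training_images.append({
        # Placeholder host: replace with wherever your images are uploaded
        "url": f"https://your-bucket.example.com/{image_path.name}",
        "caption": caption,
    })

print(f"Prepared {len(training_images)} caption entries")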

Training Configuration

python
from abstrakt import AbstraktClient

client = AbstraktClient()

# Upload training images
training_job = client.training.create({
    "model": "flux-lora",
    "config": {
        "trigger_word": "sks",
        "steps": 1000,
        "learning_rate": 1e-4,
        "batch_size": 1,
        "resolution": 1024,
        "network_rank": 32,
        "network_alpha": 16
    },
    "images": [
        {"url": "https://...", "caption": "A sks person..."},
        {"url": "https://...", "caption": "A sks person..."},
        # ... more images
    ]
})
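
The create call returns a training_job object whose id you will need later; it is the handle passed to the status check in the Monitoring section below.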

Hyperparameter Guide

Learning Rate

  • 1e-4: Standard, good default
  • 5e-5: More conservative, better for faces
  • 2e-4: Faster training, risk of overfitting

Steps

  • 500-800: Quick training, basic style
  • 1000-1500: Balanced, recommended
  • 2000+: Deep training, risk of overfitting

Network Rank

  • 8-16: Smaller LoRA, subtle effects
  • 32: Balanced, recommended (see the example config after this list)
  • 64-128: Larger LoRA, stronger effects
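
Putting the three dials together, the example config mentioned above might look like this for a conservative, face-focused run. It reuses the same client.training.create call from the Training Configuration section; the training_images list is assumed to have been prepared as in the captioning sketch earlier.

python
from abstrakt import AbstraktClient

client = AbstraktClient()

# Conservative, face-focused variant of the earlier config
face_config = {
    "trigger_word": "sks",
    "steps": 1200,          # 1000-1500: balanced, recommended
    "learning_rate": 5e-5,  # more conservative, better for faces
    "batch_size": 1,
    "resolution": 1024,
    "network_rank": 32,     # balanced, recommended
    "network_alpha": 16,
}

training_job = client.training.create({
    "model": "flux-lora",
    "config": face_config,
    "images": training_images,  # prepared as in the captioning sketch above
})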

Monitoring Training

python
# Check training status
status = client.training.get(training_job.id)

print(f"Status: {status.state}")
print(f"Progress: {status.progress}%")
print(f"Current step: {status.current_step}")

# Training logs
for log in status.logs:
    print(f"[{log.step}] Loss: {log.loss}")

Using Your LoRA

Once training completes:

python
result = client.run("fal-ai/flux/dev", {
    "input": {
        "prompt": "A portrait of sks person as a medieval knight",
        "loras": [
            {
                "path": "your-lora-id",
                "scale": 0.8  # LoRA strength
            }
        ]
    }
})
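
A scale of 1.0 applies the LoRA at full strength; pulling it down toward 0.5-0.8 blends more of the base model back in, which is also the lever the Troubleshooting section below suggests for style bleeding.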

Combining Multiple LoRAs

python
result = client.run("fal-ai/flux/dev", {
    "input": {
        "prompt": "A sks person in a beautiful landscape",
        "loras": [
            {"path": "person-lora-id", "scale": 0.9},
            {"path": "landscape-style-lora", "scale": 0.5}
        ]
    }
})
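
Note that the secondary style LoRA runs at a lower scale (0.5) than the subject LoRA (0.9); lowering the weaker effect first is usually the easiest fix if stacked LoRAs start to show the style bleeding described under Troubleshooting.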

Troubleshooting

Issue          | Solution
---------------|-----------------------------------
Overfitting    | Reduce steps, lower learning rate
Underfitting   | More steps, higher learning rate
Style bleeding | Adjust LoRA scale (0.5-0.8)
Poor quality   | More/better training images

Best Practices

  1. Start simple - 500 steps, default settings
  2. Iterate - Train multiple versions with different settings
  3. Test thoroughly - Try various prompts
  4. Version control - Save different LoRA versions
  5. Document - Keep notes on training configs (a minimal sketch follows this list)
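
One lightweight way to cover points 4 and 5 is to write each run's settings to a timestamped JSON file next to the returned LoRA id. A minimal sketch, assuming the face_config dict and training_job from the earlier snippets and a hypothetical lora_runs/ folder:

python
import json
import time
from pathlib import Path

# Record the exact settings used for this run, plus the resulting LoRA id
record = {
    "lora_id": training_job.id,
    "config": face_config,
    "notes": "v2: lower learning rate, same dataset",
}
out_path = Path("lora_runs") / f"{time.strftime('%Y%m%d-%H%M%S')}.json"
out_path.parent.mkdir(exist_ok=True)
out_path.write_text(json.dumps(record, indent=2))
print(f"Saved run record to {out_path}")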

#fine-tuning #lora #flux #training