
Axolotl

Fine-tuning & Training


About Axolotl

Axolotl (OpenAccess-AI-Collective) is an open-source framework that streamlines fine-tuning of many open-source LLM families. It offers YAML-based configs; LoRA, QLoRA, ReLoRA, and full fine-tune support; multi-GPU training via FSDP or DeepSpeed; and integrations with experiment loggers.
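To illustrate what "YAML-based configs" look like in practice, here is a minimal sketch that writes a QLoRA-style config from Python. The key names and example values (base_model, adapter, lora_r, the sample dataset, and so on) follow commonly published Axolotl examples and are assumptions here rather than an authoritative schema; consult the Axolotl documentation for the current options.

```python
# Minimal sketch: generate an Axolotl-style QLoRA config as YAML.
# Key names mirror commonly seen Axolotl examples; treat them as assumptions
# and check the project's docs for the authoritative schema.
import yaml  # pip install pyyaml

config = {
    "base_model": "NousResearch/Llama-2-7b-hf",  # hypothetical base model choice
    "load_in_4bit": True,                        # 4-bit quantization for QLoRA
    "adapter": "qlora",                          # adapter type: lora / qlora
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "datasets": [{"path": "mhenrichsen/alpaca_2k_test", "type": "alpaca"}],
    "sequence_len": 2048,
    "micro_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "num_epochs": 1,
    "learning_rate": 2.0e-4,
    "output_dir": "./outputs/qlora-demo",
}

with open("qlora-demo.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

print("Wrote qlora-demo.yml")
```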

Key Features

  • Supports LoRA, QLoRA, full fine-tuning, and GPTQ workflows across many model families (LLaMA, Mistral, Falcon, Pythia, MPT, etc.)
  • YAML-config-driven experiments and a CLI for training and inference (see the launch sketch after this list)
  • Multi-GPU support via FSDP/DeepSpeed plus common optimizations (Flash Attention, xFormers)
  • Optional logging and checkpointing integrations (Weights & Biases, etc.)
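The CLI can also be driven from a script. The sketch below kicks off a training run using the config written in the earlier example; the `accelerate launch -m axolotl.cli.train` invocation follows older documented usage, and newer releases also expose an `axolotl train <config>` command, so check the docs for the installed version.

```python
# Sketch: launch an Axolotl training run from Python via its CLI.
# Assumes axolotl and accelerate are installed and qlora-demo.yml exists
# (see the config sketch above); the module path axolotl.cli.train matches
# older documented usage and may differ in newer releases.
import subprocess
import sys

CONFIG = "qlora-demo.yml"  # hypothetical config from the earlier sketch

cmd = [
    "accelerate", "launch",
    "-m", "axolotl.cli.train",
    CONFIG,
]

result = subprocess.run(cmd)
sys.exit(result.returncode)
```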

Use Cases & Best For

  • Practitioners who need a flexible, config-driven fine-tuning toolkit for open-source LLMs
  • Teams running multi-GPU/distributed LoRA or QLoRA experiments with reproducible configs

About Fine-tuning & Training

Customize and train models