
Glossary

Fine-tuning vs. RAG

Also known as: Fine-tuning, Instruction tuning, RAG vs. fine-tuning

Definition

Fine-tuning is the practice of further training a pre-trained language model on domain-specific data. It modifies the model's weights and is suitable when style, format, or behavior need to be learned reproducibly. Retrieval-Augmented Generation (RAG) leaves the model unchanged and supplies relevant documents as context at runtime — keeping knowledge current, auditable, and swappable without retraining. In practice, the two approaches are often combined: fine-tuning for behavior and format, RAG for facts and sources.
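The RAG flow described above can be sketched in a few lines: retrieve the most relevant documents at query time and supply them as context in the prompt, leaving the model's weights untouched. This is a minimal illustrative sketch, not Swiss Knowledge Hub's implementation; the word-overlap scorer is a stand-in for a real embedding-based retriever and vector index.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query (toy scorer;
    a production system would use embeddings and a vector index)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Supply retrieved documents as runtime context.
    The model itself is never retrained, so swapping or updating
    `docs` immediately changes what the model can cite."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical document snippets, for illustration only.
docs = [
    "Vacation policy: employees receive 25 days of paid leave.",
    "Expense policy: receipts must be submitted within 30 days.",
    "Office plan: the Zurich office moves in Q3.",
]
print(build_prompt("How many vacation days do employees get?", docs))
```

Because the sources are passed in verbatim, the answer is auditable: each claim can be traced back to a retrieved document, which is the traceability property the glossary entry highlights.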

How Swiss Knowledge Hub uses this term

Swiss Knowledge Hub deliberately bets on RAG rather than customer-specific fine-tuning, because the content in Swiss organizations (policies, contracts, plans) changes frequently and has to remain traceable at all times. Fine-tuning needs are evaluated separately in the Custom plan.


Sources

  1. Wikipedia: Fine-tuning (deep learning), https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)
  2. OpenAI: Fine-tuning Guide, https://platform.openai.com/docs/guides/fine-tuning

Last updated: April 22, 2026