AI ARCHITECTURE

Retrieval-Augmented Generation

A technique that grounds LLM responses in factual, enterprise-specific data by retrieving relevant documents before generating an answer.

RAG mitigates the hallucination problem that plagues vanilla LLMs. When a BasaltHQ agent is asked "What are the payment terms for Client X?", it does not guess. BASALTECHO first queries the enterprise vector database to retrieve the actual signed contract, the latest invoice history, and any relevant email threads. This retrieved context is injected into the LLM prompt, grounding the response in verified corporate data. BasaltHQ implements a multi-stage RAG pipeline (semantic search, re-ranking, and citation verification) so that every generated response includes traceable source references.
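The retrieve-then-generate flow described above can be sketched in miniature. The snippet below is an illustrative toy, not BasaltHQ's actual implementation: the document names, the bag-of-words "embedding", and the stopword list are all assumptions standing in for a real vector database with learned embeddings. It shows the core mechanic of the pipeline: score documents against the query, rank and keep the top hits, then inject them into the prompt with source names so the answer can cite them.

```python
import math
import re
from collections import Counter

# Toy document store; a real system would use a vector database
# with learned embeddings. Names and contents are hypothetical.
DOCS = {
    "contract_client_x.txt": "Client X signed contract: payment terms are net 30 days.",
    "invoice_history.txt": "Client X invoice history shows payments received within 30 days.",
    "office_party.txt": "The office party is scheduled for Friday afternoon.",
}

STOPWORDS = {"what", "are", "the", "for", "is", "a", "an", "of"}

def embed(text: str) -> Counter:
    """Bag-of-words term frequencies as a crude stand-in for embeddings."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, float]]:
    """Stage 1: semantic search -- score every document against the query.
    Stage 2: rank by score and keep the top-k hits."""
    q = embed(query)
    scored = [(name, cosine(q, embed(text))) for name, text in DOCS.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Stage 3: inject retrieved context, tagged with source names so the
    generated answer can include traceable citations."""
    hits = retrieve(query)
    context = "\n".join(f"[{name}] {DOCS[name]}" for name, _ in hits)
    return (
        "Answer using only the sources below; cite them by name.\n"
        f"{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("What are the payment terms for Client X?")
print(prompt)
```

Run against the query from the example, the signed contract and the invoice history are retrieved while the unrelated office-party note is filtered out, so the LLM answers from the relevant sources rather than guessing.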