DField Solutions · Engineering studio · Budapest

Quantization

Related service: AI solutions

DEFINITION

Reducing the bit-width of model weights (e.g. from 16-bit to 4-bit). Yields a 4-8× smaller memory footprint and 2-3× faster inference, typically at ~1-2% quality loss.
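The idea above can be sketched in a few lines: map floats onto a small integer range with a single scale factor, then multiply back out at inference time. This is a minimal illustrative sketch (symmetric 8-bit, per-tensor scale); real quantization schemes such as GPTQ or 4-bit NF4 are considerably more sophisticated.

```python
def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] with one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; rounding error is at most scale / 2."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, s = quantize_int8(weights)
restored = dequantize(q, s)
```

Each weight now costs 8 bits instead of 32 (plus one shared scale), which is where the memory saving comes from; the small rounding error per weight is the source of the quality loss.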

RELATED TERMS (6)
  • RAG (Retrieval-Augmented Generation)

    An AI architecture where the model retrieves relevant documents from your own data before answering, and reasons only over that context. Cuts hallucinations by roughly 80%.

  • LLM (Large Language Model)

    A neural model with billions of parameters (GPT-4, Claude, Mistral) that generates text. In production we never use one bare: it is always wrapped in retrieval and guardrails.

  • Embedding

    A vector representation of text (e.g. 1536 floats). If two embeddings are close, the meanings are close. In RAG we use this to pick relevant chunks.

  • Vector database

    A database specialised for fast approximate-nearest-neighbour search over embedding vectors (pgvector, Qdrant, Weaviate). The engineering base of RAG retrieval.

  • Eval (LLM evaluation)

    An automated test suite that runs ~50–200 'golden' questions against the model before every release and checks that quality metrics (accuracy, factuality, latency) clear the threshold.

  • Guardrail

    A layer at the model's input or output that filters the prompt or response (PII scrubbers, prompt-injection detectors, JSON-schema validation, topic blocks). Not before or after the model: around it.
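The chunk-picking step described under Embedding reduces to a nearest-neighbour search by cosine similarity. A minimal sketch, with toy 3-dimensional vectors standing in for real embeddings (e.g. 1536 floats):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical direction, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; in practice these come from an embedding model.
query = [0.9, 0.1, 0.0]
chunk_on_topic = [0.8, 0.2, 0.1]   # points roughly the same way as the query
chunk_off_topic = [0.0, 0.1, 0.9]  # points elsewhere

best = max([chunk_on_topic, chunk_off_topic], key=lambda c: cosine(query, c))
```

A vector database performs exactly this comparison, but approximately and over millions of vectors, which is why RAG retrieval stays fast at scale.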
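One of the guardrail filters named above, a PII scrubber, can be sketched as an output-layer pass that masks sensitive patterns before text leaves the system. The patterns and placeholder tokens here are illustrative assumptions; production guardrails chain many such filters together with schema validation and topic blocks.

```python
import re

# Hypothetical minimal patterns: email addresses and phone-like digit runs.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d \-]{7,}\d")

def scrub(text):
    """Mask PII in a model response before it is returned to the user."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

raw = "Contact anna@example.com or call +36 30 123 4567."
safe = scrub(raw)
```

The same wrapper shape works on the input side (prompt-injection detection) which is why a guardrail sits around the model rather than only before or after it.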

MENTIONED IN THE BLOG (8)