Quantization
DEFINITION
Reducing the bit-width of model weights (e.g. from 16-bit floats to 4-bit integers). Yields a 4-8× smaller memory footprint and 2-3× faster inference, typically at ~1-2% quality loss.
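The definition above can be sketched in a few lines. This is a toy symmetric 4-bit scheme with made-up weight values, not a production kernel; real quantizers work per-channel or per-group and use calibrated scales.

```python
# Toy symmetric 4-bit quantization: floats -> signed ints in [-7, 7] + a scale.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7  # 7 = max of signed 4-bit range
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.7, 0.33, 0.05]   # illustrative values
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
# restored is close to weights; the rounding error is where the
# "~1-2% quality loss" comes from.
```

Each weight now needs 4 bits plus a shared scale instead of 16 bits, which is where the 4-8× memory saving comes from.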
- RAG (Retrieval-Augmented Generation)
An AI architecture where the model first retrieves relevant documents from your own data, then reasons only over that context when answering. Kills ~80% of hallucinations.
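A minimal sketch of the retrieve-then-answer flow. The word-overlap scorer and the documents are toy stand-ins (real systems score with embedding similarity, covered below); only the shape of the pipeline is the point.

```python
# RAG sketch: retrieve top-k relevant docs, then constrain the prompt to them.
def score(query, doc):
    # Toy relevance: shared-word count. Real systems use embedding similarity.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "A refund is processed within 14 days.",
    "Our office is in Berlin.",
    "Refund requests go through the billing portal.",
]
context = retrieve("how do I get a refund", docs)
prompt = "Answer ONLY from this context:\n" + "\n".join(context)
# The irrelevant Berlin doc never reaches the model, which is how
# RAG cuts down hallucination surface.
```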
- LLM (Large Language Model)
A neural model with billions of parameters (GPT-4, Claude, Mistral) that generates text. In production we never use one bare · always wrapped in retrieval and guardrails.
- Embedding
A vector representation of text (e.g. 1536 floats). If two embeddings are close, the meanings are close. In RAG we use this to pick relevant chunks.
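"Close embeddings, close meanings" usually means cosine similarity. A sketch with made-up 3-dimensional vectors standing in for real 1536-dimensional ones:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, ~0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Toy vectors; a real embedding model would produce these from text.
cat     = [0.9, 0.1, 0.0]
kitten  = [0.85, 0.2, 0.05]
invoice = [0.0, 0.1, 0.95]

# cosine(cat, kitten) is high, cosine(cat, invoice) is near zero:
# nearby vectors mean related text, which is what RAG retrieval exploits.
```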
- Vector database
A database specialised for fast approximate-nearest-neighbour search over embedding vectors (pgvector, Qdrant, Weaviate). The engineering base of RAG retrieval.
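What the database does can be shown as brute-force nearest-neighbour search over (id, vector) rows; the data is made up. pgvector or Qdrant answer the same query shape, but with an approximate index so it stays fast at millions of rows.

```python
# Brute-force top-k by squared Euclidean distance; a stand-in for what a
# vector database does with an ANN index at scale.
def top_k(query_vec, rows, k=1):
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(query_vec, vec))
    return sorted(rows, key=lambda r: dist(r[1]))[:k]

rows = [
    ("doc-a", [0.9, 0.1]),
    ("doc-b", [0.1, 0.9]),
    ("doc-c", [0.8, 0.3]),
]
hits = top_k([1.0, 0.0], rows, k=2)  # the two rows closest to the query
```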
- Eval (LLM evaluation)
An automated test suite that runs ~50–200 'golden' questions against the model before every release and checks that quality metrics (accuracy, factuality, latency) clear the threshold.
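The gating logic is simple to sketch. `fake_model`, the golden questions, and the substring check are all illustrative stand-ins; a real suite would call the actual model and score factuality and latency too.

```python
# Eval-gate sketch: run golden questions, compute a pass rate, block the
# release if it falls under the threshold.
GOLDEN = [
    ("What is the refund window?", "14 days"),
    ("Where is the office?", "Berlin"),
]

def fake_model(question):
    # Stand-in for a real LLM call.
    canned = {
        "What is the refund window?": "Refunds close after 14 days.",
        "Where is the office?": "We are based in Berlin.",
    }
    return canned[question]

def run_evals(model, golden, threshold=0.9):
    passed = sum(expected in model(q) for q, expected in golden)
    accuracy = passed / len(golden)
    return accuracy >= threshold, accuracy

ok, acc = run_evals(fake_model, GOLDEN)  # release only if ok is True
```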
- Guardrail
An input or output layer that filters the model's prompt/response (PII scrubbers, prompt-injection detectors, JSON-schema validation, topic blocks). Not before/after the model · around it.
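Two of the listed checks, PII scrubbing and schema validation, sketched as one output guardrail. The email regex and the required "answer" field are illustrative assumptions, not a complete PII policy:

```python
import json
import re

# Illustrative email pattern; real PII scrubbers cover far more shapes.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def guard_output(raw):
    """Scrub email-shaped PII, then require valid JSON with an 'answer' key."""
    scrubbed = EMAIL.sub("[REDACTED]", raw)
    data = json.loads(scrubbed)            # raises on non-JSON responses
    if "answer" not in data:
        raise ValueError("response missing 'answer' field")
    return data

out = guard_output('{"answer": "Contact bob@example.com for billing."}')
# out["answer"] no longer contains the email address.
```

An input guardrail is the mirror image: the same kind of checks run on the prompt before it ever reaches the model.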
- 30 Sept 2026 · Field Q3 2026 roundup · what shifted, what we shipped, what is broken
- 01 Jul 2026 · Field Q2 2026 roundup · what shifted, what we shipped, what is broken
- 26 Apr 2026 · RAG's three failure modes · and the diagnostic table we use on every audit
- 26 Apr 2026 · Why your AI agent leaks money · 6 prompt-cache wins worth doing this week
- 23 Apr 2026 · On-device LLMs in 2026 · Gemini Nano vs Apple Intelligence for mobile builds
- 22 Apr 2026 · pgvector at 10M+ rows · index choice, query patterns, real performance numbers
- 22 Apr 2026 · LLM prompt caching in production · a 60-80% cost cut
- 22 Apr 2026 · Agentic AI · the safe tool-use pattern we ship by default