- Active Learning
- Supervised Fine-Tuning (SFT)
- Fine-Tuning
- Reinforcement Learning (RL)
- Reinforcement Learning from Human Feedback (RLHF)
- Zero-Shot / One-Shot / Few-Shot Learning
- Unsupervised Learning
- Data Augmentation
- Synthetic Data
- Distillation (Knowledge Distillation)
- LoRA (Low-Rank Adaptation)
- Mixture of Experts (MoE)
- Pruning
- Quantization
- Contextual Compression
- Frugal AI
- Tuning-free Control
- Chain-of-Thought (CoT)
- Memory (in Agents/LLMs)
- Long Short-Term Memory (LSTM)
- Self-Consistency
- Grounding
- Alignment
- Alignment Tax
- Anthropic Principle (AI)
- Prompt
- Prompt Engineering
- Prompt Chaining
- Prompt Injection
- Meta-prompting
- System Prompt / Hidden Prompt
- Temperature
- Steerability
- Agentic AI / Autonomous Agent
- Tool Use / API Calling
- Retrieval Plugin / Connector
- Inference Endpoint
- Webhook
- Humanizer
- Synthetic Persona
- Benchmark
- Evaluation Benchmark
- Inference
- Throughput
- Parameter
- Token
- Graphics Processing Unit (GPU)
- Vector Database
- AI Governance
- Zero-Trust AI
- Red Teaming
- Jailbreak (LLM)
- Guardrails-as-a-Service
- AI Detector
- AI Hall Monitor
- Watermarking (AI output)
- Overfitting
- Hallucination
- AI Slop
- Bias & Misalignment (see also Alignment and Alignment Tax)
- AI-as-a-Service
- Natural Language Processing (NLP)
- Natural Language Understanding (NLU)
- Natural Language Generation (NLG)
- Text-to-Image
- Stable Diffusion
- Speech-to-Text