Glossary
AI terms in plain English
31 terms you will run into when reading about AI for business. Short definition first, longer plain-language explanation second.
Agentic AI
AI that takes multi-step actions toward a goal, not just answers questions.
Agentic AI plans, calls tools, observes results, and adjusts on the fly. The difference from a chatbot is autonomy plus the ability to act on the outside world. Production agents have audit logging, clear scope limits, and human approval gates on consequential actions.
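A toy sketch of that plan-act-observe loop (the planner and tools here are hard-coded stand-ins, not a real model or API):

```python
# Minimal, illustrative agentic loop: plan, act, observe, adjust.
# plan() stands in for a model call; tool names and the approval
# gate are hypothetical.

def human_approves(action):
    # Stand-in for a human approval gate on consequential actions.
    return action["name"] != "send_refund"   # demo: block the risky tool

TOOLS = {
    "look_up_order": lambda order_id: {"status": "delayed"},
    "send_refund":   lambda order_id: {"refunded": True},
}

def plan(goal, observations):
    # Stand-in for the model deciding the next step from what it has seen.
    if not observations:
        return {"name": "look_up_order", "arg": goal, "consequential": False}
    return {"name": "send_refund", "arg": goal, "consequential": True}

def run_agent(goal, max_steps=3):
    audit_log, observations = [], []
    for _ in range(max_steps):
        action = plan(goal, observations)
        if action["consequential"] and not human_approves(action):
            audit_log.append(("blocked", action["name"]))
            break
        result = TOOLS[action["name"]](action["arg"])
        observations.append(result)
        audit_log.append(("ran", action["name"], result))
    return audit_log

log = run_agent("order-123")
# The lookup runs; the refund is consequential and gets blocked.
```

Everything the agent did, including the blocked step, lands in the audit log.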
AI Readiness
Whether your business is set up to adopt AI safely and usefully.
AI readiness covers data quality, security posture, process documentation, internal skill, and policy. A readiness review surfaces what blocks adoption before money gets spent on the wrong tool.
AI Search Optimization (AEO)
Making a site readable and citable by AI search engines.
AEO covers structured data, summary blocks, llms.txt, AI crawler permissions, and content patterns that LLMs can extract. Different from classic SEO. The audience is ChatGPT, Claude, Perplexity, Gemini, and similar.
AIO
Short for AI Optimization. Often used interchangeably with AEO.
AIO is industry shorthand for the cluster of practices that prepare a site to be read by AI rather than just indexed by classic search.
Audit Log
A tamper-evident record of what an automated system did and when.
Every automation we build emits an audit log. The log captures inputs, decisions, tool calls, outputs, and approvals. Required for compliance, useful for debugging, essential when an agent makes a decision someone questions later.
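One common way to make a log tamper-evident is a hash chain: each entry carries the hash of the entry before it, so editing any record breaks the chain. A sketch, not a production format:

```python
# Tamper-evident audit log via a hash chain.
import hashlib
import json

def append_entry(log, event):
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    prev = "genesis"
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"tool": "send_email", "approved_by": "j.doe"})
append_entry(log, {"tool": "update_crm", "approved_by": "auto"})
assert verify(log)
log[0]["event"]["approved_by"] = "nobody"   # tamper with an entry...
assert not verify(log)                      # ...and verification fails
```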
Context Window
How much input an AI model can consider at once.
Context window is measured in tokens. A larger window means a model can hold more of your document or conversation in mind. Costs scale with context, so longer is not always better.
Data Classification
A scheme that labels data by sensitivity so tools handle it correctly.
Public, internal, confidential, regulated. A classification scheme determines which AI tools can see which data. Without it, the safest assumption is "all data is sensitive," which kills adoption.
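A sketch of how a scheme like this can gate tool access (the four labels from above; the tool names are placeholders):

```python
# Map each sensitivity label to the AI tools allowed to see that data.
ALLOWED = {
    "public":       {"public_chatbot", "vendor_api", "local_model"},
    "internal":     {"vendor_api", "local_model"},
    "confidential": {"local_model"},
    "regulated":    set(),   # nothing without an explicit policy
}

def tool_may_see(tool, label):
    return tool in ALLOWED[label]

assert tool_may_see("vendor_api", "internal")
assert not tool_may_see("vendor_api", "confidential")
```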
Embedding
A numeric representation of text used for similarity search.
Embeddings turn text into vectors. Vectors that are close to each other in space represent text that is close in meaning. This is the building block of retrieval, recommendation, and semantic search.
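A toy illustration with made-up three-number vectors (real embeddings come from a model and have hundreds or thousands of dimensions):

```python
# Cosine similarity: vectors close in space = text close in meaning.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vec = {
    "invoice overdue": [0.9, 0.1, 0.0],   # made-up vectors for illustration
    "unpaid bill":     [0.8, 0.2, 0.1],
    "company picnic":  [0.0, 0.1, 0.9],
}

# "invoice overdue" is nearer to "unpaid bill" than to "company picnic",
# even though the two phrases share no words.
assert cosine(vec["invoice overdue"], vec["unpaid bill"]) > \
       cosine(vec["invoice overdue"], vec["company picnic"])
```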
Evaluation (Evals)
The discipline of measuring whether an AI system actually works.
Evaluation is what separates production AI from demos. A real eval set includes representative inputs, expected outputs, and a scoring method. We run evals on every agent before deployment and re-run them on every change.
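A minimal eval harness looks something like this (`classify` stands in for the system under test):

```python
# Eval set: representative inputs, expected outputs, a scoring method.

def classify(ticket):
    # Stand-in for the AI system being evaluated.
    return "billing" if "invoice" in ticket.lower() else "other"

EVAL_SET = [
    {"input": "Where is my invoice?",     "expected": "billing"},
    {"input": "Reset my password please", "expected": "other"},
    {"input": "Invoice 442 is wrong",     "expected": "billing"},
]

def run_evals(system, eval_set):
    correct = sum(system(case["input"]) == case["expected"] for case in eval_set)
    return correct / len(eval_set)

score = run_evals(classify, EVAL_SET)   # 1.0 on this toy set
```

The same harness runs before deployment and again after every change, so a regression shows up as a falling score rather than a surprised customer.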
Fine-Tuning
Training an existing model on additional examples to specialize it.
Fine-tuning produces a custom model that handles a specific domain better than a generic one. It is often the wrong first move. Most businesses get better results from good prompts plus retrieval. Fine-tuning earns its place when prompts plus retrieval have hit a ceiling.
Foundation Model
A large pretrained AI model like Claude, GPT, Gemini, Grok, or Llama.
Foundation models are the base layer. Production applications use them through vendor APIs (public) or run them locally (private). The model is a commodity. The system around it is where value is built or lost.
Guardrails
Controls that constrain what an AI system is allowed to do.
Guardrails include input validation, output filtering, tool access limits, rate caps, and human approval gates. Good guardrails feel invisible until they catch something.
Hallucination
When an AI confidently generates information that is not true.
Hallucination is the default failure mode of LLMs. Retrieval, citations, evaluation, and human review reduce hallucinations to acceptable levels for production work. They never go to zero.
Human In The Loop (HITL)
A workflow where humans approve, review, or correct AI output.
HITL is not a step backward. The right amount of human review is what makes AI production-grade. We design HITL into every consequential decision.
LLM (Large Language Model)
A model trained to generate text and reason over language.
LLMs power most of what people mean when they say AI. Claude, GPT, Gemini, Grok, and the local model families (Llama, Mistral, Qwen, others) are all LLMs.
llms.txt
A site-root file that describes your site for AI crawlers.
Modeled after robots.txt. Tells AI agents what your site is, what to read, and where the canonical documents are. Defined by the llms.txt proposal at llmstxt.org.
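A minimal llms.txt, following the format proposed at llmstxt.org (the company name, summary, and URLs below are placeholders):

```markdown
# Example Co

> Example Co builds workflow automation and AI systems for small professional firms.

## Docs

- [Services](https://example.com/services): what we build and for whom
- [Glossary](https://example.com/glossary): AI terms in plain English
```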
Local AI
AI that runs on your own hardware rather than a vendor API.
Local AI keeps data inside your environment. Useful for regulated data, intellectual property, or workflows where vendor outage is unacceptable. Often paired with smaller open-weight models like Llama, Qwen, or Mistral.
MCP (Model Context Protocol)
A standard for connecting AI agents to tools and data sources.
MCP is an open protocol that lets AI agents talk to filesystems, APIs, databases, and other tools in a consistent way. We use MCP to build agentic workflows that span multiple systems.
PII (Personally Identifiable Information)
Data that can identify a specific person.
Name, address, phone, SSN, financial accounts, health data, biometrics. PII handling rules vary by jurisdiction. We design AI workflows to keep PII out of public vendor pipelines unless an explicit policy permits it.
Prompt Engineering
The craft of writing inputs that get useful outputs from AI.
Prompt engineering is real and underrated. The same model produces dramatically different results with a thoughtful prompt than with a thrown-together one. We treat prompts like code: versioned, reviewed, and tested.
Prompt Injection
An attack that smuggles instructions into an AI through untrusted input.
A web page, an email, a document, anything an AI reads can carry adversarial instructions. We treat all third-party content as untrusted and design agents so they cannot execute consequential actions based on it without human approval.
RAG (Retrieval-Augmented Generation)
Letting an AI look up information from your data before answering.
RAG combines a search step with a generation step. The model answers using your data instead of relying on what it memorized during training. Most "AI that knows our business" systems are RAG systems.
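A stripped-down sketch of the two steps. The retrieval here is naive keyword overlap (real systems use embeddings), and `generate` stands in for the model call:

```python
# RAG in miniature: search your data, then answer from what was found.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Invoices are emailed on the first of each month.",
]

def retrieve(question, docs, k=1):
    # Toy retrieval: rank documents by shared words with the question.
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(question, context):
    # Stand-in for the model answering from the retrieved context.
    return f"Based on our records: {context[0]}"

question = "How long do refunds take?"
answer = generate(question, retrieve(question, DOCS))
```

The model never has to have memorized your refund policy; it reads it at answer time.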
Retrieval
Pulling relevant documents into an AI prompt at query time.
Retrieval is half of RAG. Good retrieval finds the right two paragraphs out of ten thousand documents. Bad retrieval drowns the model in noise.
Schema (JSON-LD)
Structured data that tells search engines and AI what a page is about.
JSON-LD is the format. Schema.org is the vocabulary. Organization, Service, LocalBusiness, Person, FAQPage, BlogPosting, Course. Pages with correct schema get richer citations from AI and richer results from search.
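An illustrative FAQPage block, embedded in a page inside a `<script type="application/ld+json">` tag (the question and answer are examples):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is retrieval-augmented generation?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Letting an AI look up information from your data before answering."
    }
  }]
}
```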
System Prompt
A persistent instruction set that shapes how a model behaves.
The system prompt sets the rules. A good system prompt is specific, includes examples, and explicitly handles failure cases. We version system prompts the same way we version code.
Tool Use
When an AI calls external functions or APIs to take action.
Tool use turns a chat into work. The AI decides which tool to call with which arguments. Production tool use needs scope limits, audit logging, and a clear story for when the tool fails.
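A sketch of a dispatcher with a scope limit and an explicit failure path (the tool names are illustrative):

```python
# Production-style tool dispatch: an allow-list registry plus a clear
# error path, so the agent gets a structured failure instead of a crash.

def get_order_status(order_id: str):
    if not order_id.startswith("order-"):
        raise ValueError("bad order id")
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"get_order_status": get_order_status}   # the registry IS the scope limit

def dispatch(tool_name, args):
    if tool_name not in TOOLS:
        return {"error": f"tool '{tool_name}' not permitted"}
    try:
        return TOOLS[tool_name](**args)
    except Exception as exc:
        return {"error": str(exc)}   # the story for when the tool fails

dispatch("get_order_status", {"order_id": "order-7"})   # normal call
dispatch("delete_database", {})                         # blocked by the allow-list
```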
Token
The unit AI models read, generate, and bill by.
A token is roughly four characters of English. Models charge per token in and per token out. Budget matters more than people expect. We design prompts and retrieval to keep token use predictable.
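Back-of-envelope math using the rough four-characters-per-token rule (the prices below are placeholders, not any vendor's real rates):

```python
# Rough token and cost estimation. Real tokenizers vary by model;
# four characters per token is a planning estimate, not an exact count.

def estimate_tokens(text):
    return max(1, len(text) // 4)

PRICE_IN_PER_M = 3.00     # hypothetical $ per million input tokens
PRICE_OUT_PER_M = 15.00   # hypothetical $ per million output tokens

prompt = "Summarize the attached meeting notes in three bullet points."
tokens_in = estimate_tokens(prompt)
input_cost = tokens_in / 1_000_000 * PRICE_IN_PER_M
```

Run the same arithmetic over expected daily volume and the budget conversation gets concrete fast.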
Vector Database
A database optimized to store and search embeddings.
Vector databases power retrieval. Pinecone, Weaviate, Postgres with pgvector, Chroma, and many others. The choice usually comes down to operational fit rather than feature differences.
Private AI
AI deployed inside your own environment.
Private AI means the model, the data, and the prompts stay inside your network. Useful for regulated industries, sensitive intellectual property, and workflows that cannot tolerate vendor outages.
Workflow Automation
Replacing repetitive manual steps with code or AI that does them reliably.
Workflow automation is not always AI. The right answer is sometimes a script, a webhook, or a no-code tool. We match each task to what it actually needs rather than reaching for AI by default.
Zero Trust
A security model that trusts nothing by default, including AI agents.
Zero trust extends to AI. An agent does not get to call a tool because it claims it needs to. It gets explicit permission, scoped to a specific task, with audit logging.
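A sketch of that idea: nothing is callable by default, grants are explicit and scoped to one task, and every check is logged (the agent and tool names are placeholders):

```python
# Zero trust for agents: deny by default, grant explicitly, log everything.
import time

GRANTS = []   # (agent, tool, task, expires_at)
AUDIT = []    # (agent, tool, task, allowed)

def grant(agent, tool, task, ttl_seconds=300):
    # Explicit, time-limited permission scoped to one task.
    GRANTS.append((agent, tool, task, time.time() + ttl_seconds))

def may_call(agent, tool, task):
    allowed = any(a == agent and t == tool and k == task and time.time() < exp
                  for a, t, k, exp in GRANTS)
    AUDIT.append((agent, tool, task, allowed))
    return allowed

assert not may_call("invoice-bot", "send_email", "task-9")  # nothing by default
grant("invoice-bot", "send_email", "task-9")
assert may_call("invoice-bot", "send_email", "task-9")      # explicit and scoped
assert not may_call("invoice-bot", "delete_file", "task-9") # outside the grant
```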
Want the working definition of something else?
Send us the term. If it is general enough, we will add it. If it is specific to your business, we will answer it directly.