SHIPPED WORK · 2026
MCP Security Layer
A security layer between an AI agent and its tools · checks every tool call at the intent level, blocks or approves, logs.
Before an AI agent calls a tool (send_email, execute_sql, transfer_funds), MCP Security intercepts. A secondary AI model classifies the intent of the call, matches it against a policy, allows or blocks · logs everything for audit. Drop-in in front of any MCP-compatible agent stack.
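The intercept → classify → decide → log flow can be sketched in a few lines. This is a minimal illustration, not the shipped implementation: the names (`ToolCall`, `classify_intent`, `check`) are hypothetical, and the keyword-based classifier stands in for the secondary AI model described above.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str        # e.g. "send_email", "execute_sql", "transfer_funds"
    arguments: dict  # arguments the agent supplied

@dataclass
class Decision:
    allowed: bool
    intent: str
    rationale: str   # preserved for the audit log

def classify_intent(call: ToolCall) -> str:
    # Stand-in for the secondary model; a real deployment would send the
    # call and its context to an LLM and parse a structured verdict.
    if call.tool in ("transfer_funds", "execute_sql"):
        return "high_risk_action"
    return "routine_action"

def check(call: ToolCall, policy: dict[str, bool]) -> Decision:
    # Classify first, then match against policy; unknown intents are denied.
    intent = classify_intent(call)
    allowed = policy.get(intent, False)  # default-deny
    verdict = "allow" if allowed else "deny"
    return Decision(allowed, intent, f"intent={intent!r}, policy={verdict}")
```

The default-deny on unknown intents is the point: a prompt-injected call that produces an unfamiliar intent label fails closed instead of slipping through.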
THE PROBLEM
- An agent with tools executes whatever the prompt says · prompt injection becomes expensive fast
- Writing custom auth per tool costs engineering-days per tool
- No central log of what the AI tried · audits aren't reproducible
- If one tool is compromised, the others inherit the blast radius
WHAT THE CLIENT GOT
- One layer that protects every tool · no per-tool auth logic needed
- Audit trail ready · fits MNB / NAIH / EU AI Act reporting
- Safe under prompt injection · the intent layer stops malicious calls
- Drop-in · 1 hour to slot into an existing MCP stack
WHAT WE DELIVERED
- Intent analyser · separate model decides what the call is trying to do
- Policy engine · YAML-based allow/deny rules
- Full audit log · every call, decision, rationale preserved
- Real-time alerts · Slack, PagerDuty, email on suspicious patterns
- Drop-in MCP-compatible · OpenAI Assistants API, Anthropic, LangChain
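A YAML allow/deny policy of the kind described above might look like the fragment in the comment below, evaluated first-match-wins with a catch-all deny at the end. This is an illustrative sketch, not the shipped rule schema: the field names (`match`, `action`) and the glob matching are assumptions.

```python
from fnmatch import fnmatch

# policies.yaml (illustrative):
#   rules:
#     - match: "send_email"
#       action: allow
#     - match: "transfer_funds"
#       action: deny
#     - match: "*"
#       action: deny        # catch-all: fail closed
RULES = [  # parsed form of the YAML above (e.g. via yaml.safe_load)
    {"match": "send_email", "action": "allow"},
    {"match": "transfer_funds", "action": "deny"},
    {"match": "*", "action": "deny"},
]

def evaluate(tool_name: str, rules: list[dict] = RULES) -> str:
    # First matching rule wins, in file order.
    for rule in rules:
        if fnmatch(tool_name, rule["match"]):
            return rule["action"]
    return "deny"  # nothing matched: still fail closed
```

Ordering the rules in the file is what makes the policy auditable: a reviewer reads top to bottom and knows exactly which rule fired, which is also what gets written to the audit log.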
STACK
- Python
- FastAPI
- OpenAI
- Anthropic
- LangChain
RELATED READING
- AI solutions · Website & online shop: pgvector at 10M+ rows · index choice, query patterns, real performance numbers. pgvector at 10M rows is not scary if you pick the right index: HNSW vs IVFFlat, filter patterns, real numbers.
- AI solutions: LLM prompt caching in production · a 60-80% cost cut. Prompt caching is the single biggest LLM cost lever in 2026: 4 patterns, real savings numbers, 2 gotchas worth knowing.
- AI solutions · Cybersecurity: Agentic AI · the safe tool-use pattern we ship by default. Agentic AI that can send email and move money is not just a chatbot; here's the safe tool-use pattern we ship.
- AI solutions: LLM evals-as-code · the CI gate we run on every RAG deploy. An eval that's not in CI is not an eval; here's the evals-as-code workflow we run on every RAG project.