I apply forensic investigation methodology to AI systems — identifying the control gaps that compliance reviews miss and that adversarial actors exploit first. Governing AI agents across regulated industries in Europe and beyond.
Fifteen years in fraud investigation taught me one discipline above all others: find the gap, not the actor. Every major control failure I investigated came down to a process that looked airtight on paper and had a four-lane bypass in practice.
When I turned my attention to how organisations were deploying AI, I recognised the pattern immediately. Not the technology — the governance posture. Strong on capability evaluation. Weak on boundary definition. Almost entirely absent on what happens after deployment.
AI Resilience Lab exists to close that gap. I work with regulated organisations — banks, insurers, and corporate technology functions — to design control frameworks, monitoring architectures, and resilience strategies for AI systems operating in real-world environments.
My focus is the execution layer: the moment the agent acts, the sequence of tool calls it makes, and the decisions it takes between the policy document and the outcome.
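To make "execution layer" concrete, here is a minimal sketch of that control point: a gate that checks each tool call against an explicit allowlist and writes the decision to an audit log before anything runs. Every name in it (ToolCall, ToolPolicy, the example tools) is hypothetical and chosen only for illustration; this is a toy under stated assumptions, not a client framework.

```python
# Minimal sketch of an execution-layer control: boundary check plus
# audit trail at the moment the agent acts. Illustrative only.
import json
import time
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    tool: str
    args: dict


@dataclass
class ToolPolicy:
    allowed_tools: set[str]                    # the boundary definition
    audit_log: list[dict] = field(default_factory=list)

    def authorise(self, call: ToolCall) -> bool:
        decision = call.tool in self.allowed_tools
        # Log the decision whether or not the call proceeds, so
        # post-deployment review sees attempts as well as actions.
        self.audit_log.append({
            "ts": time.time(),
            "tool": call.tool,
            "args": call.args,
            "allowed": decision,
        })
        return decision


policy = ToolPolicy(allowed_tools={"lookup_customer", "draft_reply"})

for call in [ToolCall("lookup_customer", {"id": "C-104"}),
             ToolCall("issue_refund", {"id": "C-104", "amount": 900})]:
    if policy.authorise(call):
        print(f"execute {call.tool}")
    else:
        print(f"blocked {call.tool}")

print(json.dumps(policy.audit_log, indent=2))
```

The point of the toy is the ordering: the boundary is evaluated and the record is written before execution, which is exactly the step most governance postures leave undefined.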
The major governance frameworks — NIST AI RMF, ISO 42001, the EU AI Act — provide necessary structure, but none of them yet prescribes how to govern the moment the agent acts. That is the gap my control frameworks are built to close.
If you are deploying agentic AI in a regulated environment and need a governance posture that matches the actual risk, not just the compliance checklist, I would like to hear from you.