AI Control Failure Specialist · Zurich, Switzerland

Finding the gaps compliance reviews miss.

I apply forensic investigation methodology to AI systems — identifying the control gaps that compliance reviews miss and that adversarial actors exploit first. Governing AI agents across regulated industries in Europe and beyond.

Start a conversation · View frameworks
Zurich · Switzerland · EU AI Act · FINMA

Forensic thinking applied to AI risk.

Fifteen years in fraud investigation taught me one discipline above all others: find the gap, not the actor. Every major control failure I investigated came down to a process that looked airtight on paper and had a four-lane bypass in practice.

When I turned my attention to how organisations were deploying AI, I recognised the pattern immediately. Not the technology — the governance posture. Strong on capability evaluation. Weak on boundary definition. Almost entirely absent on what happens after deployment.

AI Resilience Lab exists to close that gap. I work with regulated organisations — banks, insurers, and corporate technology functions — to design control frameworks, monitoring architectures, and resilience strategies for AI systems operating in real-world environments.

My focus is the execution layer: the moment the agent acts, the sequence of tool calls it makes, and the decisions it takes between the policy document and the outcome.
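To make the execution layer concrete, here is a minimal sketch of a boundary check that intercepts each tool call before it executes. The ToolCall and ExecutionPolicy types, the allowlist logic, and the payment example are illustrative assumptions rather than any specific agent framework's API; a real deployment would also inspect arguments, data scope, and rate.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """One action an agent attempts at the execution layer (illustrative type)."""
    tool: str
    args: dict

@dataclass
class ExecutionPolicy:
    """Allowlist-style boundary: which tools may run, and in what order."""
    allowed_tools: set
    requires_prior: dict = field(default_factory=dict)  # tool -> tool that must have run first

def authorise(call: ToolCall, history: list, policy: ExecutionPolicy) -> tuple:
    # Permission gap: the tool is not authorised for this agent at all.
    if call.tool not in policy.allowed_tools:
        return False, f"permission gap: {call.tool} not in allowlist"
    # Sequence gap: the tool runs without its required predecessor.
    prior = policy.requires_prior.get(call.tool)
    if prior and prior not in (c.tool for c in history):
        return False, f"sequence gap: {call.tool} requires {prior} first"
    return True, "authorised"

# Example: a payment may only follow a completed verification step.
policy = ExecutionPolicy(
    allowed_tools={"verify_identity", "initiate_payment"},
    requires_prior={"initiate_payment": "verify_identity"},
)
ok, reason = authorise(ToolCall("initiate_payment", {"amount": 900}), [], policy)
print(ok, reason)  # False sequence gap: initiate_payment requires verify_identity first
```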

15+ · Years in Fraud Investigation & Risk Management
EU · AI Act · FINMA · Swiss Regulatory Context
ZRH · Based in Zurich. Serving regulated industries globally.

What I do — and how I do it differently.

01
AI Control Architecture Review
Forensic review of agentic AI deployments. I map permission gaps, sequence gaps, and authorisation gaps that standard compliance reviews do not reach. Output: a prioritised control gap register with remediation architecture.
02
Behavioral Governance Design
Design of ongoing governance programs for AI agents operating in production. Covers intent alignment, action boundary enforcement, and behavioral monitoring — built into the system architecture, not the policy document.
03
AI Agent Risk Assessment
Pre-deployment and in-production risk assessments for agentic AI systems. Adversarial scenario modeling. Failure mode mapping. Human checkpoint design. Aligned to EU AI Act, FINMA guidance, ISO 42001, and NIST AI RMF.
04
Regulatory Alignment
Mapping AI deployments against EU AI Act obligations, FINMA model risk guidance, and emerging agentic AI governance frameworks. Practical, deployment-level — not theoretical compliance gap analysis.
05
Executive & Board Advisory
Translating AI agent risk into business language for boards, risk committees, and senior leadership. What to ask, what to own, and what governance structure is required before autonomous systems go live at scale.
06
Thought Leadership & Training
Speaking, workshops, and in-house training on AI control failure, agentic AI fraud, and behavioral governance. Designed for risk officers, compliance leaders, and CISOs in regulated industries.

Original frameworks built for the execution layer.

The major governance frameworks — NIST AI RMF, ISO 42001, the EU AI Act — provide necessary structure. None yet prescribes how to govern the moment the agent acts. The frameworks below address that gap directly.

IP 01
Behavioral Governance Framework
The discipline of ensuring AI agents cannot do what they should not — regardless of what they are capable of. Three pillars: Intent Alignment · Action Boundary Enforcement · Continuous Behavioral Monitoring. Not point-in-time compliance — ongoing assurance that an agent operates within authorized boundaries.
IP 02
The AI Agent Governance Stack
A five-layer model for governing agentic AI from business objective to human oversight: Business Objective → Agent Capability → Permission Boundary → Behavioral Audit → Human Oversight. Each layer is a governance decision, not a technical default; a short code sketch after these frameworks shows the idea.
IP 03
AI Agent Fraud — Named Category
The exploitation — intentional or emergent — of agentic AI systems to produce unauthorized outcomes. A category that existing fraud frameworks do not address because they were built for human actors. White paper in development.
IP 04
AI Control Failure Taxonomy
Five failure categories mapped from forensic investigation methodology: Scope Breach · Permission Drift · Objective Misalignment · Audit Evasion · Cascading Failure. A diagnostic tool for pre-deployment review and post-incident analysis.
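As a rough illustration of how the IP 04 taxonomy can work as a diagnostic tool, here is a sketch of the five categories as tags feeding a control gap register; the enum values, register schema, and sample entry are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class ControlFailure(Enum):
    """The five taxonomy categories as diagnostic tags."""
    SCOPE_BREACH = "agent acts outside its defined task"
    PERMISSION_DRIFT = "effective permissions widen over time"
    OBJECTIVE_MISALIGNMENT = "agent optimises for the wrong outcome"
    AUDIT_EVASION = "actions leave no reviewable trace"
    CASCADING_FAILURE = "one failure propagates across agents or systems"

@dataclass
class GapRegisterEntry:
    """One row in a prioritised control gap register (hypothetical schema)."""
    system: str
    category: ControlFailure
    severity: int       # 1 = critical ... 5 = low
    remediation: str

entry = GapRegisterEntry(
    system="payments-reconciliation-agent",
    category=ControlFailure.PERMISSION_DRIFT,
    severity=1,
    remediation="Re-baseline token scopes; alert on any scope added after review",
)
```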
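Similarly, the IP 02 Governance Stack can be read as five explicit, reviewable decisions. The sketch below is one assumed way to record them; the field names and the loan-screening example are invented for illustration. Note the deliberate gap between capability and permission: the agent can technically send email but is not authorised to.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceStack:
    """Five explicit decisions, from business objective down to human oversight."""
    business_objective: str    # what the agent exists to achieve
    agent_capability: list     # tools and data the agent can technically reach
    permission_boundary: list  # the authorised subset of that capability
    behavioral_audit: str      # how actions are logged and reviewed
    human_oversight: str       # who intervenes, and at which checkpoints

loan_agent = GovernanceStack(
    business_objective="Pre-screen retail loan applications",
    agent_capability=["read_crm", "read_credit_bureau", "draft_decision", "send_email"],
    permission_boundary=["read_crm", "read_credit_bureau", "draft_decision"],
    behavioral_audit="Every tool call logged with arguments and outcome",
    human_oversight="Credit officer approves each draft before release",
)
```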

The right conversation starts here.

I work with regulated organisations that are deploying agentic AI and need a governance posture that matches the actual risk — not just the compliance checklist. If that describes your situation, I would like to hear from you.

Location
Zurich, Switzerland
Focus markets
Banking · Insurance · Corporate Technology · Advisory