Every organisation that engages RT to address an AI governance challenge begins the conversation in the same place: the AI system. What model is it? How was it deployed? Who owns it technically? What does it output, and how is that output being used? These are reasonable questions, and they eventually need answers. They are not, however, the first governance question. Pursuing them first consistently leads organisations to the wrong diagnosis.
The first AI governance question is always about the decision the AI is participating in. Not the AI system itself — the decision. What category of decision is this? Who is currently accountable for decisions in this category? What does accountability mean in this context — who reviews, who approves, who is on record for the outcome? How is the decision currently being made without AI, and what changes when AI enters the decision chain?
The reason this matters is that AI governance challenges almost never originate in the AI. They originate in pre-existing ambiguities in decision accountability that the AI’s arrival makes visible. When an AI system is deployed into a decision category where accountability was already unclear, the AI doesn’t create a new governance problem — it reveals one that was already there.
The organisations that struggle most with AI governance are not the ones that deployed AI carelessly. They are the ones that deployed AI into decision environments where the governance architecture was already inadequate — and discovered, through the AI’s arrival, that it was.
This distinction matters practically. If AI governance is framed as a question about the AI (its safety, its bias, its outputs, its ownership), the response is to establish AI-specific governance controls: model review boards, algorithmic audit processes, AI ethics policies. These controls are not useless. They are, however, layered on top of an underlying governance architecture that remains whatever it was before. If that architecture was unclear about decision accountability, the AI-specific controls don’t resolve the ambiguity; they add a governance layer that treats a symptom without touching the structural condition producing it.
If AI governance is framed as a question about the decision the AI participates in, the response is different. It starts from the existing decision accountability architecture, maps where AI is entering the decision chain, identifies what changes when AI enters (who is accountable for what, under what conditions, with what oversight), and designs the governance architecture that makes accountability clear, explicit, and defensible. The AI-specific controls that follow from this are grounded in a genuine accountability architecture rather than layered on top of a missing one.
In practice, RT begins AI governance engagements by mapping the decision categories the AI is influencing — not the AI system — and asking a single question about each one: if this AI-influenced decision produces a bad outcome, who is accountable for it, what are they accountable for, and how would accountability be established after the fact? If the answer to that question is clear, the governance architecture is probably adequate. If the answer is unclear, ambiguous, or contested — if the honest answer is “it depends” or “probably several people” or “we’d have to work that out at the time” — that ambiguity is the governance challenge. The AI made it visible. The AI did not create it.
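To make that triage concrete, here is a minimal sketch in Python of how a decision-accountability map of this kind might be represented and checked. Every name in it (DecisionCategory, triage, the example roles) is a hypothetical illustration of the structure of the question, not an RT tool or template.

```python
from dataclasses import dataclass
from enum import Enum


class Accountability(Enum):
    """How clearly accountability is assigned for a decision category."""
    CLEAR = "clear"          # a named role is on record and a reviewer exists
    AMBIGUOUS = "ambiguous"  # partial answers: "it depends", "probably several people"
    CONTESTED = "contested"  # "we'd have to work that out at the time"


@dataclass
class DecisionCategory:
    """One category of decisions the AI influences (not the AI system itself)."""
    name: str
    accountable_role: str | None  # who is on record for the outcome, if anyone
    reviewed_by: str | None       # who reviews AI-influenced decisions, if anyone
    ai_entry_point: str           # where AI enters the decision chain

    def accountability(self) -> Accountability:
        # The triage question: if this AI-influenced decision produces a bad
        # outcome, who is accountable, and how would that be established?
        if self.accountable_role and self.reviewed_by:
            return Accountability.CLEAR
        if self.accountable_role or self.reviewed_by:
            return Accountability.AMBIGUOUS
        return Accountability.CONTESTED


def triage(categories: list[DecisionCategory]) -> list[DecisionCategory]:
    """Return the categories whose accountability is not clear; these,
    not the AI system, are the governance challenge."""
    return [c for c in categories if c.accountability() is not Accountability.CLEAR]


if __name__ == "__main__":
    # Hypothetical examples for illustration only.
    portfolio = [
        DecisionCategory("credit-limit adjustments", "Head of Retail Credit",
                         "Credit Risk Committee", "model recommends new limit"),
        DecisionCategory("vendor shortlisting", None, "Procurement lead",
                         "model ranks incoming bids"),
    ]
    for c in triage(portfolio):
        print(f"{c.name}: accountability is {c.accountability().value}")
```

The shape of the sketch mirrors the prose: each decision category, not the AI system, carries the accountability answer, and anything short of a clear answer is what gets flagged for structural work.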
This is a reframe that most organisations find clarifying and, initially, uncomfortable. Clarifying because it locates the actual problem. Uncomfortable because the actual problem is older and deeper than the AI deployment that surfaced it — and because addressing it requires the kind of structural governance work that is harder than establishing a model review board. But the organisations that navigate AI governance successfully are the ones that do that work, rather than the ones that manage the symptom.