The Control Was Never the Risk
AI, structural coherence, and the coming redesign of risk management
Author’s Note
I came across a Deloitte article in The Wall Street Journal asking whether it is time to reimagine risk management. The piece centers on a useful and increasingly urgent question: if risk management and internal audit were designed today, with AI, continuous monitoring, expanding regulatory pressure, and more complex operating environments already in view, would we build the function the same way? Deloitte’s article was published April 21, 2026, and features Geoffrey Kovesdy of Deloitte & Touche LLP discussing AI-enabled control testing, continuous controls monitoring, internal audit, and the limits of simply bolting new capabilities onto old models.
That question caught my attention because it points at something larger than AI adoption. Deloitte frames the opportunity around faster control testing, continuous controls monitoring, risk governance, and the need to rethink legacy models. All of that matters. A generative AI system that can reduce a control test from 20 hours to five is not a small improvement. Across thousands of controls, the math changes quickly.
But speed is not the part that interests me most.
What interests me is the deeper question underneath Deloitte’s article: whether risk management, as currently designed in many organizations, is close enough to operating reality to matter while action is still possible.
I am writing this because I think the AI conversation around risk is still too focused on efficiency. Faster testing. Faster evidence review. Faster reporting. Faster monitoring. Those gains are real, but they are not enough. A control tested faster is still only useful if the control corresponds to how work actually happens. A dashboard updated continuously is still only useful if it reveals reality rather than smoothing it over. A risk function made more efficient is still inadequate if it remains structurally distant from the first places risk becomes visible.
This is where my own lens enters: Structural Coherence.
I define Structural Coherence as the degree to which an institution’s stated goals, operating systems, incentives, information flows, and decision rights actually align with the reality it claims to manage. From that perspective, risk management is not merely about controls, testing, reporting, or governance. At its highest level, risk management is the discipline of keeping an organization attached to reality before consequence arrives.
Deloitte asked the right opening question.
This essay is my attempt to answer it.
The Control Was Never the Risk
Deloitte recently asked the right question about risk management: if organizations were designing the function from a clean sheet of paper today, would they design it the same way? The honest answer, for many organizations, is probably not. The reason is not simply that AI can now test controls faster, summarize evidence, monitor exceptions, or reduce manual audit labor. The reason is that AI is beginning to expose a deeper weakness in the way many institutions understand risk itself.
The traditional risk function was built around boundaries of accountability. First line, second line, third line. Ownership, oversight, assurance. The model has value because it gives organizations a language for responsibility and prevents certain conflicts from being buried inside operating teams. But the model can also create a dangerous illusion: that risk moves cleanly through the same lines by which accountability is assigned.
It does not. Risk forms inside the actual operating system of the enterprise, in the space between incentives, decision rights, information flows, resource constraints, informal workarounds, delayed maintenance, exception handling, and the thousand small translations that occur before bad news reaches the people with authority to act. The three lines model may clarify who is responsible for risk, but it does not automatically reveal where risk is forming, how quickly it is moving, or whether the control environment still corresponds to the work as actually performed.
That distinction is where the next generation of risk management begins.
Deloitte’s example is useful: a generative AI-powered solution reducing the time required to test a control from 20 hours to five. In organizations performing thousands of control tests a year, the math is enormous. Internal audit, SOX reporting, compliance, and control assurance could all gain capacity. But the larger implication is not labor efficiency. The larger implication is that AI forces a harder question: what exactly are we testing, and does the test still correspond to reality?
A control tested faster is useful. A control tested continuously is better. But a control that no longer matches how work actually happens remains theater, even if AI makes the theater more efficient. This is the problem with treating AI as a modernization layer instead of a coherence test. AI can accelerate control testing. It can summarize evidence, flag exceptions, monitor signatures, identify missing artifacts, review approvals, compare documents, and detect process deviations. But if the underlying operating model is structurally incoherent, the organization may simply get better at documenting its own misunderstanding.
The control was never the risk. The risk is the gap between what the organization says is happening and what is actually happening. When the documented process, actual behavior, incentives, information flow, and decision authority begin to separate, the organization has already entered structural incoherence. The incident, audit finding, regulatory breach, or operational failure is usually not the beginning of the problem. It is the receipt.
I define Structural Coherence as the degree to which an institution’s stated goals, operating systems, incentives, information flows, and decision rights actually align with the reality it claims to manage. Risk management, in its highest form, is not the administration of controls. It is the continuous measurement of that alignment.
That is why Deloitte’s own broader change-management numbers matter. Deloitte has cited research showing that nearly 70% of large-scale change initiatives fail to meet their long-term goals. In separate board-facing work on transformation initiatives, Deloitte also noted that 70% of companies in a margin-improvement survey did not meet their goals. Taken together, those figures create a problem Deloitte does not fully confront in the risk article: risk management is now being asked to transform itself inside organizations that are statistically poor at transformation.
The question, therefore, cannot be merely whether AI can help risk functions test controls faster. Of course it can. The harder question is whether an organization whose change initiatives routinely fail can redesign the very function responsible for seeing failure early.
That is not a technology problem. It is not a maturity-model problem. It is not even a governance problem in the usual committee-and-charter sense. It is an operating-system problem. Most organizations do not fail because nobody had a framework. They fail because the framework did not remain attached to reality. The deck said one thing. The incentives taught another. The system measured a third. The people doing the work adapted to a fourth. By the time leadership saw the problem, it had already passed through several layers of translation.
Risk management often arrives late because the organization was designed to make it late.
The hidden weakness in many control environments is that they are built around periodic confirmation rather than live correspondence. A quarterly review, a sample test, a control owner attestation, an audit finding, a remediation plan, and a closure date may all be necessary, but none of them guarantees that the organization still understands the risk while action can still change the outcome. In complex organizations, slow truth becomes false comfort.
Consider a simple operating example. A maintenance backlog grows for six weeks. The dashboard still shows green because response-time averages remain inside tolerance. Supervisors begin closing tickets administratively and reopening them under new categories to preserve SLA compliance. The required sign-offs exist. The control evidence is complete. The review passes. Yet the operating truth has already changed: the work is no longer being completed within the system the organization believes it is managing. The first signal was not the audit finding three months later. It was the workaround becoming normal.
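The arithmetic of that example is worth making concrete. A minimal sketch, with entirely hypothetical numbers, shows how an averaged SLA metric can stay green every week while unfinished work compounds, simply because the average is computed only over tickets that were closed:

```python
# Toy illustration of a "green" dashboard hiding a growing backlog.
# All numbers are hypothetical.

weekly_new_tickets = [100, 100, 100, 100, 100, 100]  # tickets opened per week
weekly_closed      = [100,  95,  90,  85,  80,  75]  # tickets actually completed
avg_response_hours = [ 20,  21,  22,  21,  23,  22]  # avg response time on CLOSED tickets
SLA_TOLERANCE = 24  # hours

backlog = 0
for week, (opened, closed, avg_hours) in enumerate(
        zip(weekly_new_tickets, weekly_closed, avg_response_hours), start=1):
    backlog += opened - closed
    status = "green" if avg_hours <= SLA_TOLERANCE else "red"
    print(f"week {week}: dashboard={status}, unfinished work={backlog}")

# Every week reports green, because the metric only sees closed tickets.
# The 75 tickets that quietly accumulated never enter the average.
```

The design flaw is not the tolerance threshold; it is that the metric's denominator excludes exactly the work that is drifting.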
In a structurally coherent risk function, that chain would become visible before failure. The issue would not wait for audit. It would be flagged when the workaround appeared, when the same ticket pattern repeated, when administrative closure began separating from physical completion, and when the metric stayed green only because the work had been reclassified. The question would not be, “Did the control owner provide evidence?” It would be, “Does the evidence still describe the work?”
This pattern is not hypothetical. In the Wells Fargo sales-practices scandal, the Office of the Comptroller of the Currency found unsafe or unsound practices that included opening unauthorized deposit and credit-card accounts and transferring funds from authorized accounts to unauthorized ones. The OCC also cited the bank’s failure to develop and implement an effective enterprise risk-management program to detect and prevent those practices.
That is truth-to-action latency in institutional form. The signal did not appear only when regulators announced penalties. It appeared earlier in complaints, employee behavior, internal pressure, account anomalies, and the widening distance between the official sales model and the lived operating reality. By the time the issue became a public scandal, the organization was no longer dealing with a control failure alone. It was dealing with structural incoherence that had matured into consequence.
That is the difference between control testing and coherence testing.
A control tested six months after the behavior changed is not a control. It is an autopsy. A control tested against evidence prepared by people who know what evidence is expected is not necessarily assurance. It may be documentation theater. A dashboard that monitors risk but does not test whether the control actually works is not a nervous system. It is a warning light wired to a committee.
AI’s real contribution should be to collapse the distance between operating reality and decision authority. Not merely to accelerate paperwork. Not merely to reduce audit hours. Not merely to create another executive dashboard with better colors and fewer humans in the loop. The breakthrough is shorter truth-to-action latency.
Truth-to-action latency is the time between the first observable signal and meaningful authorized response. It begins when reality first changes or first becomes visible. It continues through recording, interpretation, belief, escalation, decision-right engagement, action, and system correction. The longer that chain takes, the more exposed the organization becomes, even if every formal control appears documented.
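The chain above can be expressed as a simple measurement. The sketch below is illustrative only; the stage names and dates are hypothetical, not a standard taxonomy:

```python
# Sketch of a truth-to-action latency profile. Stage names and timestamps
# are hypothetical; the point is measuring the chain, not these labels.
from datetime import datetime

# One risk event, traced through the chain described above (insertion order
# matters: dicts preserve it in Python 3.7+).
events = {
    "signal_first_visible": datetime(2025, 3, 3),   # workaround first appears
    "recorded":             datetime(2025, 3, 17),  # logged as an exception
    "escalated":            datetime(2025, 4, 28),  # reaches someone with authority
    "action_taken":         datetime(2025, 6, 9),   # system actually corrected
}

def truth_to_action_latency(events: dict) -> int:
    """Days between first observable signal and meaningful authorized response."""
    return (events["action_taken"] - events["signal_first_visible"]).days

def stage_latencies(events: dict) -> dict:
    """Days spent in each hop, to show where the chain slows down."""
    stamps = list(events.items())
    return {f"{a} -> {b}": (tb - ta).days
            for (a, ta), (b, tb) in zip(stamps, stamps[1:])}

print(truth_to_action_latency(events))  # total exposure window, in days
print(stage_latencies(events))          # which hop added the most delay
```

In this toy profile the organization was exposed for 98 days, and most of the delay sits not in detection but in the recorded-to-escalated and escalated-to-action hops, which is exactly the kind of breakdown a latency review would surface.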
That may be the central metric of next-generation risk management. Not only how many controls were tested. Not only how many findings were closed. Not only how many risk dashboards were reviewed. But how long it took the organization to see reality, believe it, escalate it, decide on it, and act before consequence arrived.
The Coherent Risk Function
This is where Deloitte’s clean-sheet question becomes operational. If risk management were designed from scratch today, it would not begin with a control library, an audit universe, a risk taxonomy, or a three-lines ownership chart. Those things may still have a place, but they are not the beginning. A clean-sheet risk function would begin with what might be called The Coherent Risk Function: an operating model built around reality-point mapping, correspondence testing, signal-flow ownership, truth-to-action latency, and coherence operators.
The first discipline is reality-point mapping. In most organizations, reality does not first appear in the board deck. It appears in maintenance logs, exception reports, customer complaints, rework, quality escapes, field observations, missed handoffs, delayed approvals, unresolved tickets, procurement issues, safety near-misses, warranty claims, attrition patterns, budget variance, and informal workarounds. These artifacts are often treated as operational noise until they become large enough to be categorized as risk. That is backwards. They are the early risk system.
The job of reality-point mapping is to identify where the organization first learns that the work is not behaving as assumed. Where do people first see that the process has drifted? Where do exceptions first become normal? Where does the system first require a workaround to survive the day? Where does management first receive a signal that is specific enough to act on, but low enough in the hierarchy to be ignored? These questions move risk management upstream, closer to the place where reality is still raw enough to be useful.
The second discipline is correspondence testing. Traditional control testing often asks whether the required evidence exists and whether the documented step occurred. A structurally coherent risk function goes further. It compares the documented process to the actual workflow, the control evidence to the behavior it supposedly verifies, the incentive structure to the desired conduct, and the decision authority to the location of consequence.
This is where AI can be genuinely useful. It can scan approval histories, ticket logs, exception narratives, policy attestations, remediation notes, procurement delays, access records, incident reports, meeting minutes, quality data, maintenance trends, and customer complaints to identify places where the lived process has separated from the official one. The purpose is not to replace the auditor’s judgment. The purpose is to put the auditor closer to reality before the formal failure occurs.
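One simple form of correspondence testing is checking whether observed workflow runs still contain the documented control steps in the documented order. A minimal sketch, with hypothetical process names, of what such a check computes:

```python
# Toy correspondence check: do observed runs (reconstructed from logs) still
# follow the documented control sequence? Process names are hypothetical.

documented = ["request", "manager_approval", "security_review", "provision", "confirm"]

observed_runs = [
    ["request", "provision", "manager_approval", "confirm"],           # approval after the fact
    ["request", "manager_approval", "provision", "confirm"],           # review skipped
    ["request", "manager_approval", "security_review", "provision", "confirm"],
]

def follows(documented: list, run: list) -> bool:
    """True if the run contains every documented step, in the documented order."""
    it = iter(run)  # membership tests consume the iterator, enforcing order
    return all(step in it for step in documented)

conformant = sum(follows(documented, run) for run in observed_runs)
print(f"{conformant}/{len(observed_runs)} runs match the documented process")
```

A real implementation would work over messy event logs rather than clean lists, but the question it answers is the one that matters here: not "does the evidence exist?" but "does the lived sequence still match the official one?"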
The third discipline is signal-flow ownership. The three lines model should not be discarded casually; it exists for a reason. But it should be re-centered. In the old model, the dominant question is often, “Who owns the risk?” In the coherent model, the dominant question becomes, “Does the risk signal reach the right authority in time to matter?”
Under that model, the first line owns operational reality and signal generation. It is responsible not only for managing risk in the work but for making deviation visible without punishment or cosmetic translation. The second line owns coherence testing, pattern detection, risk translation, and challenge. It asks whether the signal system is showing reality or merely reporting compliance. The third line owns independent assurance over whether the signal system, controls, governance, and escalation pathways correspond to reality. The C-suite and board own truth-to-action latency, because only they can resolve the places where accountability, authority, incentives, and information flow do not line up.
That is a meaningful evolution of the three lines model. It keeps the useful boundaries but changes the center of gravity. Risk management becomes less about proving that the organization followed the documented process and more about proving that the documented process, actual work, control evidence, and decision authority still describe the same world.
The fourth discipline is latency review. Every significant risk should carry a latency profile. When did the first signal appear? Where did it appear? Who saw it? How was it recorded? Was it believed? Where did it slow down? Who had authority to act? What action was taken? Did the system change, or did the organization merely close the finding?
This would change the nature of executive and board reporting. Instead of presenting risk as a static heat map, the organization would present risk as a movement problem. Some risks are severe because of impact. Others are severe because the organization is slow to recognize them. Still others are severe because everyone can see them but no one with authority owns the correction. That last category is especially dangerous because it creates the illusion of transparency without the reality of control.
The fifth discipline is the creation of coherence operators. Deloitte’s article refers to “purple people,” risk professionals who combine business process expertise with data and technology capability. The concept is useful, but the role should be defined more sharply. The future risk function will need people who can read across process, data, controls, incentives, informal work, governance, AI outputs, and human behavior. They cannot be merely auditors with better software or technologists with a compliance vocabulary. They must be translators of reality.
A coherence operator determines whether the formal system is still attached to the real one. They ask whether a control still tests the actual risk, whether a workflow reflects how work is performed, whether a dashboard hides the signal through aggregation, whether an owner has accountability without authority, and whether a transformation plan will survive contact with the organization as it actually behaves.
This is the human role AI cannot replace. AI can identify patterns, compare documents, surface deviations, and reduce the labor required to inspect evidence. But it cannot, by itself, determine whether an organization has become structurally dishonest with itself. That requires judgment, courage, context, and the ability to notice when everyone is technically compliant and still wrong.
Many organizations will stop at surface change. They will buy tools, launch pilots, automate pieces of control testing, and celebrate hours saved. Then they will wonder why the risk profile did not materially improve, because the tool was installed into the same incoherent structure. Incentives still pointed in different directions. Decision rights remained unclear. Bad news still slowed on the way up. People still learned which truths were career-safe. Reports still smoothed over variance. Controls still tested the documented process rather than the lived one. The organization achieved adoption without transformation.
This is the recurring pattern in enterprise change: a structural problem is treated as a capability gap. The organization buys the software, the methodology, the dashboard, the consultant, the operating cadence, or the maturity model, but the actual system remains unchanged. When the initiative fails, leadership calls it adoption failure. More often, it is coherence failure.
That is why risk management may be the most important function to redesign in the age of AI. Not because risk teams are uniquely behind, but because they sit at the intersection of truth, authority, and consequence. If they become faster administrators of the old model, they will help organizations move more efficiently toward the same blind spots. If they become architects of structural coherence, they can become one of the most valuable functions in the enterprise.
A structurally coherent risk function would still care about controls, evidence, auditability, accountability, regulatory expectations, and independence. It would not abandon the discipline of risk management. It would deepen it by attaching that discipline to operating reality. The goal would be to know whether the enterprise is still aligned with the conditions it claims to control.
That is the difference between a compliance architecture and an institutional nervous system. A nervous system does not work by producing quarterly binders. It works by sensing, transmitting, interpreting, and triggering action. It does not merely record pain after the body is damaged. It allows the body to respond before the injury compounds.
AI makes that more possible than it has ever been. It can help risk functions see more, test more, compare more, and detect drift earlier. But possibility is not inevitability. AI will not automatically make risk management more truthful. In a structurally incoherent organization, AI may simply accelerate the production of confident nonsense.
The organizations that benefit will not be the ones that ask, “How do we automate our existing controls?” They will be the ones that ask, “What would our risk system look like if it were designed around reality rather than reporting?”
That question is more threatening because it does not stop at technology. It means some controls will be revealed as obsolete. Some dashboards will be exposed as ornamental. Some committees will be shown to create latency rather than clarity. Some leaders will discover they receive risk information too late to matter. Some functions will learn they own risks created by decisions made elsewhere.
Risk management in the age of AI should not be judged by how much labor it removes from the audit cycle. It should be judged by how much distance it removes between truth and action. Deloitte is right that the profession is about to change in a significant way. The only question is whether it changes at the surface or at the structure.
Surface change makes the old model faster. Structural change asks whether the old model was ever close enough to reality.
That is the clean sheet. And the first sentence on it should be simple:
The risk was whether the organization still knew the truth in time.
References
Deloitte / The Wall Street Journal. “Is It Time to Reimagine Risk Management?” Published April 21, 2026. Used as the primary article this essay responds to, including the AI-enabled control-testing example, the clean-sheet question, and Geoffrey Kovesdy’s comments on risk management and internal audit.
Deloitte Insights. “Developing more effective change management strategies.” Published July 14, 2016. Source for Deloitte’s statement that nearly 70% of large-scale change initiatives fail to meet their long-term goals.
Deloitte / The Wall Street Journal. “Companies Change Margin Improvement Strategies as Efforts Fall Short.” Published around early 2024. Source supporting Deloitte’s margin-improvement transformation discussion, including companies struggling to meet transformation and margin-improvement targets.
Office of the Comptroller of the Currency. “OCC Assesses Penalty Against Wells Fargo, Orders Restitution for Unsafe or Unsound Sales Practices.” Published September 8, 2016. Source for the Wells Fargo sales-practices example, including unauthorized accounts, transfers from existing accounts, and enterprise risk-management failures.
If this framing resonates, I write about Structural Coherence: the alignment between stated goals, operating systems, incentives, information flows, decision rights, and the reality an institution claims to manage. My work is for leaders, boards, operators, and advisors trying to understand why organizations lose contact with reality before they lose control.

