How Artificial Intelligence becomes a board-level accountability issue
EU AI Act governance – When decisions disappear from view
1. A decision without a decision-maker
A mid-sized European financial institution introduces an AI-driven credit assessment model. The stated objective is modest: faster processing, more consistent outcomes, fewer human errors. Within months, approval times drop sharply. Management is satisfied.
Then a pattern emerges. A specific group of applicants is rejected significantly more often than before. Customer complaints rise. A journalist starts asking questions. When the board asks management why this is happening, the answer is unsettlingly vague:
“The model optimises based on historical data. The exact weighting is not fully transparent.”
No individual credit officer made the decision. No explicit policy change was approved. No discriminatory intent can be demonstrated. And yet, a decision was made — repeatedly, systematically, and with real consequences.
This is the governance problem the EU AI Act is designed to address.
2. The relocation of decision-making authority
The issue is not that AI makes mistakes. Humans do as well. The issue is that AI relocates decision-making power to systems that sit outside traditional governance structures. The decision still exists, but the decision-maker has become invisible.
For decades, boards have been comfortable with complex systems influencing outcomes. Forecasting models, credit scoring, actuarial calculations and risk models are not new. What is new is the degree of autonomy and opacity modern AI systems possess.
The shift is subtle but fundamental.
A forecasting model once supported management judgement. An AI-driven demand model increasingly sets production volumes. A fraud detection tool once flagged anomalies for review. A machine-learning system now blocks transactions automatically. A recruitment algorithm once ranked candidates. It now filters them out before a human ever sees a CV.
In governance terms, AI systems have crossed a boundary:
they no longer merely inform decisions — they increasingly determine outcomes.
This is why the EU AI Act cannot be understood as an IT regulation. It is a response to a relocation of authority within organisations.
3. Why traditional governance failed to notice
One might reasonably ask why existing governance frameworks did not catch this earlier. After all, boards oversee strategy, risk and internal control. Audit committees scrutinise systems and data. Internal audit reviews automated controls.
The answer lies in a structural blind spot.
Governance frameworks traditionally assume that:
- decision-makers are identifiable,
- decision logic can be reconstructed,
- accountability flows through human actors.
AI disrupts all three assumptions.
A machine-learning model may evolve continuously. Its internal logic may not be interpretable in human terms. Responsibility becomes diffused between developers, data scientists, vendors, users and managers. As a result, AI systems often fall between governance domains: not quite strategy, not quite IT, not quite risk, not quite compliance.
The EU AI Act closes that gap deliberately.
Read more about machine learning in our blog: Dynamic Pricing & Corporate Governance: How Algorithms Became the Invisible Steering Wheel of Modern Markets.
Why Europe intervened: from ethics to systemic risk
4. Not an ethics debate, but a governance failure
Public debate often frames AI regulation as an ethical initiative — fairness, bias, human dignity. While these elements are real, they do not explain the regulatory architecture of the EU AI Act.
The Act is not built like an ethics code.
It is built like a risk regulation framework.
The EU’s concern was not isolated harm, but systemic effects:
- automated exclusion from essential services,
- erosion of due process,
- large-scale discrimination without intent,
- loss of institutional trust.
Crucially, these risks are ex ante risks. They materialise before anyone realises something is wrong, and they scale rapidly.
This mirrors earlier regulatory moments. Financial reporting standards were not introduced because every company committed fraud, but because markets cannot function if trust collapses. Prudential regulation emerged not because banks intended harm, but because systemic fragility required structural safeguards.
The EU AI Act follows the same logic.
5. Why ex post remedies are insufficient
Some argue that existing laws — discrimination law, consumer protection, tort liability — are sufficient. The EU explicitly rejected this view.
The reason is practical.
Imagine challenging an AI-driven decision in court. The affected individual must demonstrate:
- that a decision occurred,
- that it was unlawful,
- that harm resulted,
- and that the organisation is responsible.
When the decision logic is opaque, data-driven, and probabilistic, this burden becomes almost insurmountable. Ex post remedies fail precisely where automation is most powerful.
The EU AI Act therefore shifts the emphasis from after-the-fact correction to before-the-fact governance — a hallmark of mature risk regulation.
The risk-based architecture: understanding it by example
6. Why risk classification is the backbone of the Act
The most important conceptual element of the EU AI Act is its risk-based structure. Without understanding this, compliance efforts become superficial.
The Act distinguishes four categories of AI use. These are not abstract labels; they determine the governance burden imposed on organisations.
Let us examine each category through practical examples.
7. Unacceptable risk — prohibited by design
Certain AI practices are banned outright. A prominent example is social scoring by public authorities: systems that aggregate behaviour to assign trustworthiness scores affecting access to services.
The logic is clear. Such systems:
- undermine individual autonomy,
- create self-reinforcing exclusion,
- concentrate power without accountability.
For boards, the lesson is not limited to public-sector use. Any AI system that effectively classifies individuals into moral or behavioural categories without due process should trigger immediate governance alarm bells.
8. High-risk AI — governance becomes mandatory
High-risk AI systems are permitted, but only under strict conditions. This is where most corporate exposure lies.
Example: AI-based CV screening
A company uses AI to pre-select candidates. The system is trained on historical hiring data. On paper, it improves efficiency. In practice, it replicates historical biases: certain profiles rarely pass the initial screening.
Under the EU AI Act, this is high-risk AI. The organisation must demonstrate:
- that training data is relevant and representative,
- that risks of bias are identified and mitigated,
- that decisions can be traced and explained,
- that humans can meaningfully intervene.
Crucially, the company must document these elements before deploying the system.
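To make this concrete, the sketch below shows one way such a bias check might be operationalised: reconstructing selection rates per applicant group from the screening system's decision log and flagging groups that fall below an illustrative threshold. The group labels, log format and four-fifths-style threshold are assumptions made for illustration; the Act prescribes no specific statistical test.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Pass rate of the screening step per applicant group.
    `outcomes` is an iterable of (group_label, passed_screening) tuples,
    e.g. reconstructed from the system's decision log."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, passed_flag in outcomes:
        total[group] += 1
        passed[group] += int(passed_flag)
    return {g: passed[g] / total[g] for g in total}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group rate (an illustrative four-fifths-style screen, not a
    legal test prescribed by the EU AI Act)."""
    reference = max(rates.values())
    return {g: r / reference for g, r in rates.items() if r / reference < threshold}

# Example with fabricated log data:
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(log)
print(rates)                          # selection rate per group
print(disparate_impact_flags(rates))  # groups below the illustrative threshold
```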
9. Limited risk — transparency as a control
Limited-risk systems are subject primarily to transparency obligations.
Example: AI-driven customer chatbots
Users must be informed that they are interacting with an AI system. The objective is not to restrict use, but to preserve informed interaction.
From a governance perspective, this reinforces a recurring theme:
transparency is not a courtesy — it is a control mechanism.
10. Minimal risk — freedom with responsibility
Most AI applications fall into the minimal-risk category: recommendation engines, spam filters, predictive maintenance tools.
These systems are largely unregulated, but this does not mean governance is irrelevant. Over time, minimal-risk systems may migrate into higher-risk categories as their influence grows — a dynamic boards must actively monitor.
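A minimal sketch of how an organisation might keep this classification visible is shown below: an AI inventory that records each system's risk tier and the governance obligations that follow from it. The obligation lists, system names and owner roles are simplified illustrations, not the Act's full requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Simplified view of the governance burden per tier -- illustrative only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk assessment before deployment", "representative training data",
                    "traceability and documentation", "effective human oversight",
                    "ongoing monitoring"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["periodic review: has influence grown enough to change the tier?"],
}

# A minimal AI inventory, assuming the organisation maintains such a register.
inventory = [
    {"system": "CV pre-screening", "owner": "HR Director", "tier": RiskTier.HIGH},
    {"system": "Customer chatbot", "owner": "Head of Service", "tier": RiskTier.LIMITED},
    {"system": "Spam filter", "owner": "CISO", "tier": RiskTier.MINIMAL},
]

for entry in inventory:
    print(entry["system"], "->", entry["tier"].value, "|",
          "; ".join(OBLIGATIONS[entry["tier"]]))
```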
High-risk AI in corporate reality
11. What “compliance” actually looks like in practice
Consider again the AI-driven credit assessment model.
Under the EU AI Act, compliance is not achieved by a checklist. It requires an operational governance system:
- A documented risk assessment before deployment
- Clear ownership of the model
- Defined escalation procedures
- Ongoing monitoring for drift and bias (a minimal monitoring sketch follows after this list)
- Human decision-makers empowered to override outcomes
If a board cannot trace how these elements are organised, the organisation is not compliant — regardless of technical sophistication.
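As an illustration of the monitoring element, the sketch below computes a Population Stability Index between the score distribution recorded at model validation and the distribution observed in production, and escalates when drift exceeds a threshold. The bin count, the 0.25 alert level and the escalation wording are common conventions used here for illustration, not values mandated by the Act.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index between a baseline score distribution
    (e.g. at model validation) and the current production distribution.
    Values above roughly 0.25 are often treated as significant drift."""
    lo, hi = min(baseline), max(baseline)

    def share(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins)
            counts[min(max(idx, 0), bins - 1)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    b, c = share(baseline), share(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Example: scores at validation versus scores observed in production.
baseline_scores = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9]
current_scores  = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.25:  # illustrative threshold -- the escalation level is a governance choice
    print(f"PSI {psi:.2f}: escalate to the model owner per the defined procedure")
```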
12. The documentation problem
One of the most underestimated requirements of the EU AI Act is documentation.
For finance professionals, the analogy is obvious: undocumented controls do not exist. An AI system that cannot be explained, logged and reviewed is — from a governance perspective — indistinguishable from an uncontrolled process.
This has direct implications for:
- internal audit scope,
- management representations,
- and ultimately external assurance.
Another approach to documenting the interplay between AI and ERP systems is presented in our blog: Model Context Protocol (MCP): The Missing Governance Layer Between AI and ERP Systems.
Human oversight: where governance most often collapses
13. “Human-in-the-loop” as a comforting myth
Few concepts in the EU AI Act are cited as often — and understood as poorly — as human oversight. In many organisations, it is treated as a procedural checkbox: a human must be “involved somewhere”. In practice, this often means a manager approving AI-generated outputs without genuine insight into how those outputs were produced.
This creates what might be called ceremonial oversight.
Consider a pricing system used by a large retailer. The AI proposes dynamic price adjustments based on demand patterns, competitor behaviour and customer segmentation. A pricing manager receives a dashboard and formally approves the changes daily. When asked how the system arrives at its recommendations, the answer is vague: “It’s a complex model trained on multiple data sources.”
From a governance perspective, this is not oversight. It is delegation disguised as control.
The EU AI Act explicitly rejects this. Human oversight must be effective, not symbolic.
Read more in the source: the EU AI Act Key Issues page on Human Oversight.
14. What effective oversight actually requires
Effective human oversight has three indispensable components.
First, situational awareness. The human overseer must understand the purpose, limits and risk profile of the AI system. This does not mean understanding the mathematics, but it does require understanding what the system is optimising for, what data it uses, and where it is likely to fail.
Second, intervention capability. Oversight without the authority to intervene is meaningless. If business pressure, performance incentives or organisational hierarchy discourage overrides, oversight is structurally undermined.
Third, accountability clarity. Someone must be explicitly accountable for outcomes — not for “using the system correctly”, but for the decisions themselves. This mirrors the accountability of management for accounting estimates generated by complex valuation models.
Where these conditions are absent, the AI system effectively operates as an autonomous decision-maker — precisely the situation the EU AI Act seeks to prevent.
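The sketch below illustrates how intervention capability and accountability clarity can be made structural rather than ceremonial: every recommendation carries its rationale, a named reviewer takes the decision, and an override requires no deterrent workflow. The data structures and roles are hypothetical illustrations, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    subject: str
    proposed_action: str
    rationale: str           # what the system optimised for -- needed for situational awareness

@dataclass
class Decision:
    action: str
    accountable_person: str  # accountability attaches to a person, not to the model
    overridden: bool

def decide(rec: Recommendation, reviewer: str,
           override: Optional[str] = None) -> Decision:
    """The reviewer either accepts the recommendation or overrides it.
    Overriding triggers no extra justification workflow -- if it did,
    oversight would be structurally discouraged."""
    if override is not None:
        return Decision(action=override, accountable_person=reviewer, overridden=True)
    return Decision(action=rec.proposed_action, accountable_person=reviewer, overridden=False)

rec = Recommendation(subject="route plan 1142",
                     proposed_action="assign 11.5h shift",
                     rationale="minimises total delivery time")
print(decide(rec, reviewer="ops.supervisor@example.com", override="assign 9h shift"))
```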
15. A failure scenario: when oversight exists only on paper
A European logistics company deploys an AI system to optimise driver assignments and delivery routes. The system increases efficiency but gradually produces schedules that push legal working-hour limits. Drivers complain. Management insists that human supervisors can intervene at any time.
An investigation later reveals that:
- supervisors lack visibility into how schedules are generated,
- overriding the system requires time-consuming justification,
- performance metrics penalise manual intervention.
Formally, human oversight exists. Substantively, it does not.
Under the EU AI Act, this would constitute a governance failure — not because AI was used, but because oversight was structurally ineffective.
Read more in the article ‘Governance Matters: Don’t Overlook Board Oversight’ from the Harvard Law School Forum on Corporate Governance.
Accountability and liability: when something goes wrong
16. The accountability question AI cannot answer
When an AI-driven decision causes harm, a deceptively simple question arises: who is responsible?
The EU AI Act provides a clear answer, even if organisations are uncomfortable with it. Responsibility rests with human actors and legal entities, not with systems.
This has profound implications.
AI vendors may provide tools. Data scientists may design models. Consultants may advise. But the organisation deploying the AI — and ultimately its management and board — remains accountable.
This mirrors long-established principles in corporate governance. Management cannot deflect responsibility by pointing to complex financial models, external advisors or system limitations. AI changes the medium, not the principle.
17. A concrete liability scenario
Imagine an insurance company using AI to assess disability claims. The system systematically underestimates claims for a specific category of applicants due to biased training data. After regulatory scrutiny, the company argues that the model was supplied by a reputable vendor and validated internally.
This defence fails.
Under the EU AI Act, the deploying organisation must demonstrate:
- that it assessed risks prior to deployment,
- that it monitored outcomes,
- that it ensured meaningful human oversight,
- that it maintained proper documentation.
Absent this, liability attaches — regardless of vendor assurances.
For boards, the implication is direct: AI governance failures are governance failures, not technical mishaps.
AI as part of the internal control system
18. The uncomfortable truth for control frameworks
Many organisations already rely on AI to perform functions that, in substance, are controls:
- fraud detection systems that block transactions,
- credit models that determine approval thresholds,
- anomaly detection tools flagging revenue irregularities.
Yet these systems are often not recognised as such in internal control documentation. They sit in IT architectures, not in risk and control matrices.
The EU AI Act forces a conceptual correction. If an AI system influences whether a transaction occurs, a customer is accepted, or a financial outcome is realised, it is part of the control environment.
Ignoring this does not make the control disappear; it merely makes it uncontrolled.
19. Mapping AI to COSO: a practical illustration
Take a fraud detection system that uses machine learning to block suspicious payments.
From a COSO perspective:
- Control environment: Who owns the system? Who is accountable?
- Risk assessment: What fraud risks does the model address — and which does it ignore?
- Control activities: When does the system block automatically, and when does it escalate?
- Information & communication: Are decisions logged and explainable?
- Monitoring: Is performance reviewed and bias detected over time?
If these questions cannot be answered, the organisation’s internal control framework is incomplete — regardless of technical sophistication.
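One way to make these questions answerable is to express them as explicit, version-controlled settings rather than implicit model behaviour, as in the illustrative sketch below. The thresholds, owners and review frequencies are placeholders, not recommended values.

```python
# Illustrative control configuration for an ML fraud screen. Each COSO question
# maps to an explicit, reviewable setting rather than an implicit model behaviour.
FRAUD_SCREEN_CONTROL = {
    "control_environment": {"system_owner": "Head of Payments", "model_risk_owner": "CRO"},
    "risk_assessment": {"covered_risks": ["card-not-present fraud", "account takeover"],
                        "known_gaps": ["first-party fraud"]},
    "control_activities": {"auto_block_score": 0.95,   # block automatically above this
                           "escalate_score": 0.80},    # route to an analyst between the two
    "information_communication": {"log_every_decision": True, "explanation_required": True},
    "monitoring": {"performance_review": "monthly", "bias_review": "quarterly"},
}

def route(score: float, cfg=FRAUD_SCREEN_CONTROL["control_activities"]) -> str:
    """Control activity: automatic block, human escalation, or pass-through."""
    if score >= cfg["auto_block_score"]:
        return "block"
    if score >= cfg["escalate_score"]:
        return "escalate to fraud analyst"
    return "allow"

print(route(0.97), "|", route(0.85), "|", route(0.40))
```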
20. Documentation: the forgotten governance discipline
One of the most persistent misconceptions about AI governance is that explainability is optional or aspirational. Under the EU AI Act, it is neither.
Documentation serves three governance functions:
- enabling oversight,
- enabling challenge,
- enabling accountability.
Finance professionals instinctively understand this. An undocumented valuation is not reliable. An undocumented control cannot be tested. An undocumented AI decision cannot be defended.
The Act therefore elevates documentation from an administrative burden to a governance safeguard.
AI in the annual report: making the invisible visible
21. Why AI inevitably belongs in external reporting
For many organisations, AI is still treated as an internal operational matter — something for IT, data science or innovation teams. Annual reports mention “digitalisation” or “advanced analytics” in passing, if at all. This is no longer tenable.
The EU AI Act does not explicitly prescribe reporting templates. What it does instead is far more consequential: it redefines materiality. Once AI systems materially influence decisions, risks or outcomes, they become reportable — not because the law says so, but because governance logic demands it.
This mirrors the evolution of financial reporting. Complex financial instruments were once footnotes. Eventually, they reshaped the balance sheet, risk disclosures and management commentary. AI is following the same trajectory.
22. Where AI surfaces in the annual report
In practice, AI-related governance should surface in at least five sections of the annual report.
First, the business model. If AI systems shape how value is created — through pricing, customer selection, logistics optimisation or credit decisions — this must be explained. Stakeholders need to understand not just what the organisation does, but how decisions are made.
Second, strategy and outlook. AI is rarely neutral. It reflects strategic choices: efficiency versus inclusion, speed versus explainability, automation versus human judgement. These trade-offs belong in strategic narrative, not hidden in technical annexes.
Third, principal risks and uncertainties. AI introduces new risk categories: model risk, bias risk, regulatory risk, reputational risk. Boilerplate statements about “technology risk” are insufficient. Readers increasingly recognise when organisations are being evasive.
Fourth, governance disclosures. Boards that oversee AI meaningfully should say so — and be able to explain how. Silence increasingly suggests absence of oversight.
Fifth, internal control statements. Where AI influences financial reporting or operational outcomes materially, it cannot be excluded from management’s control narrative without creating inconsistency.
23. Weak versus strong disclosure — a practical contrast
A weak disclosure reads as follows:
“The company uses advanced data analytics and artificial intelligence to improve operational efficiency. Appropriate controls are in place to manage associated risks.”
This says nothing.
A stronger disclosure does not reveal trade secrets, but it does demonstrate governance maturity:
“The company uses AI-based models in selected operational and credit decision processes. These systems are classified as high-risk under emerging European regulation and are subject to enhanced governance, including documented risk assessments, human oversight mechanisms and periodic performance reviews. The board receives regular updates on their use and associated risks.”
The difference is not length. It is specificity with restraint — a hallmark of credible reporting.
24. Why AI is inherently an ESG issue
AI systems do not merely optimise processes; they shape outcomes for people. Who gets hired. Who gets credit. Who is flagged as risky. These are social outcomes, not technical ones.
This is where the EU AI Act intersects naturally with the Corporate Sustainability Reporting Directive (CSRD) and the ESRS framework, particularly governance (G1) and social standards.
An AI-driven recruitment tool that systematically disadvantages certain groups is not only a compliance problem. It is a social impact issue. An automated pricing system that exploits vulnerable consumers is not merely aggressive strategy. It raises ethical and reputational questions.
The EU AI Act and CSRD are therefore best understood not as parallel regimes, but as mutually reinforcing governance frameworks.
25. Workforce and society: a concrete example
Consider an organisation using AI to monitor employee performance and predict attrition. The system flags individuals as “high risk” for leaving and triggers managerial intervention.
From a CSRD perspective, this touches on:
- working conditions,
- dignity and privacy,
- fairness and transparency.
From an AI Act perspective, it raises questions about:
- data quality,
- bias,
- explainability,
- human oversight.
Reporting on such systems requires careful narrative balance. Organisations must neither conceal their existence nor trivialise their impact. Governance maturity lies in acknowledging complexity and demonstrating control.
Failure modes: how AI governance crises unfold
26. From technical issue to reputational crisis
AI governance failures rarely announce themselves as such. They often begin as technical anomalies: unexpected outcomes, statistical drift, edge cases. What turns them into crises is not the initial error, but the organisational response.
A familiar pattern emerges:
- management downplays concerns as “technical”,
- explanations are vague or inconsistent,
- accountability is unclear,
- external scrutiny intensifies.
At this point, the absence of prior governance becomes visible. The organisation cannot explain what it does not understand, and cannot defend what it did not oversee.
27. The board’s critical moment
When regulators, journalists or stakeholders ask questions, boards face a decisive moment. Either they can demonstrate that AI use was anticipated, governed and monitored — or they discover that oversight existed only in theory.
The EU AI Act raises the stakes of this moment. It transforms governance silence into regulatory exposure.
This is why boards must engage before problems arise. Once trust is lost, documentation created after the fact convinces no one.
From compliance to strategic maturity
28. AI governance as an institutional capability
The most forward-looking organisations treat the EU AI Act not as a constraint, but as a design framework. They integrate AI governance into:
- strategy formulation,
- risk management,
- internal control,
- reporting and assurance.
Over time, this produces an institutional capability: the ability to deploy AI responsibly, explainably and credibly.
This mirrors the evolution of financial reporting itself. What began as compliance became infrastructure. AI governance is following the same path.
29. The emerging maturity spectrum
At one end of the spectrum, organisations react defensively. AI governance is fragmented, documentation is minimal, boards are briefed late.
At the other end, AI governance is embedded. Systems are classified, risks are understood, oversight is real, and reporting is coherent.
The EU AI Act accelerates this differentiation. Over time, stakeholders will distinguish sharply between organisations that use AI and organisations that govern AI.
Conclusion: institutionalising the invisible executive
Artificial Intelligence has already entered the organisation’s decision-making core. What the EU AI Act does is strip away the illusion that this can happen without accountability.
The Act does not ask boards to become technologists. It asks them to do what governance has always required: understand where power resides, ensure it is exercised responsibly, and be prepared to explain outcomes.
AI is the invisible executive.
The EU AI Act insists that it finally be governed.
For organisations willing to engage seriously, this is not a burden. It is an opportunity to rebuild trust in automated decision-making — before trust is lost.