Artificial intelligence is rapidly becoming the new nervous system of the global economy. It processes information, drives decisions, and increasingly determines outcomes—whether in financial markets, healthcare, or strategic policymaking. Against that backdrop, regulation is no longer a peripheral issue; it is the backbone that determines whether trust is preserved or eroded.
At first glance, the global regulatory landscape seems clear. The European Union has taken a decisive step with the AI Act, introducing a structured, risk-based framework that classifies systems and imposes obligations before they are deployed. The United States, by contrast, appears hesitant—fragmented, even reluctant—to regulate. This perception, however, is misleading.
The United States is not unregulated. It is regulated differently.
Where Europe builds a formal architecture of rules, the US operates through a more diffuse system of enforcement, sectoral oversight, and political negotiation. It is less a cathedral of regulation and more a living organism—reactive, adaptive, and at times contradictory. This distinction is not merely academic; it has profound implications for boards, regulators, and investors navigating AI-driven risks.
Two governance philosophies: design versus discipline
The European model is rooted in predictability. By defining “high-risk” AI systems and imposing compliance requirements upfront, the EU attempts to shape outcomes before harm occurs. This ex-ante approach aligns with a broader tradition in European governance: codify risks, define boundaries, and enforce compliance systematically.
The US model takes a different route. It relies less on predefined categories and more on existing legal frameworks—securities law, consumer protection, competition law—to discipline behavior after the fact. Rather than asking whether an AI system is “high risk” in abstract terms, US regulators ask a more pragmatic question: has harm occurred, or has the market been misled?
This distinction can be understood as the difference between design control and behavioral accountability.
Europe regulates the system.
The US regulates the conduct.
That difference becomes particularly visible when examining how AI is addressed in practice. The absence of a single, overarching AI law in the United States does not imply a regulatory vacuum. Instead, it reflects a strategic choice: regulate AI where it manifests, not where it originates.
The myth of deregulation
The narrative that the US is “deregulating AI” is persistent, especially in political discourse. Policymakers emphasize innovation, competitiveness, and the need to avoid stifling a rapidly evolving technology. Yet beneath this rhetoric lies a more complex reality.
In practice, the US government is deeply involved in shaping the AI ecosystem—just not always at the application level.
Consider the infrastructure layer. AI systems depend on advanced semiconductors, cloud capacity, and global supply chains. Here, the US has been anything but passive. Export controls on high-end AI chips, strategic partnerships with allied nations, and restrictions on technology transfers to geopolitical rivals demonstrate a firm regulatory grip. These measures are not framed as “AI regulation,” but their impact on the development and deployment of AI is profound.
This is regulation by architecture.
At the same time, federal policy increasingly targets the economic and strategic implications of AI. Decisions about data flows, national security, and industrial policy shape the boundaries within which AI can operate. The result is a layered system: light-touch rhetoric at the surface, combined with targeted intervention at critical nodes.
As one analysis aptly notes, US AI policy is “light touch at the surface, iron grip at the core.”
For governance professionals, this is a crucial insight. The absence of a single framework does not reduce regulatory risk—it redistributes it.
Enforcement as a regulatory instrument
If the EU’s strength lies in codification, the US derives its regulatory power from enforcement. Agencies such as the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC), and the Consumer Financial Protection Bureau (CFPB) are not waiting for comprehensive AI legislation. They are applying existing laws to new technological realities.
This approach is particularly visible in financial markets.
The SEC, for example, has begun to position itself as a de facto AI regulator—not by issuing AI-specific rules, but by enforcing longstanding principles of disclosure, transparency, and anti-fraud. The concept of “AI-washing” has quickly emerged as a focal point. Companies that exaggerate or misrepresent their AI capabilities risk misleading investors, thereby violating securities law.
This is not a hypothetical concern. As highlighted in recent analysis, firms may project an image of technological sophistication that does not reflect reality, driven by competitive pressure and market expectations. When such misrepresentation occurs, the consequences extend beyond individual firms. Investor trust is undermined, and the integrity of markets is called into question.
The parallel with ESG reporting is striking. Just as “greenwashing” became a central enforcement theme in sustainability disclosures, AI-washing is emerging as a comparable risk in the digital domain. In both cases, the core issue is not the technology itself, but the credibility of the narrative surrounding it.
For boards and audit committees, this shifts the focus. AI is no longer merely an operational or strategic topic; it becomes a matter of financial reporting integrity and governance accountability.
Read more in the Guardian – Don’t be fooled. The US is regulating AI – just not the way you think.
From innovation to accountability
This enforcement-driven model has important advantages. It allows regulators to respond quickly to emerging risks without waiting for legislative consensus. It leverages existing legal frameworks, reducing the need for entirely new regulatory architectures. And it places responsibility squarely on organizations to ensure that their use of AI aligns with established standards of conduct.
However, it also introduces uncertainty.
Without clear ex-ante rules, companies must interpret how existing laws apply to new technologies. This creates a grey zone in which innovation and compliance are not always easily reconciled. What constitutes sufficient disclosure of AI capabilities? When does an algorithmic decision become materially relevant for investors? How should boards document oversight of systems that are inherently complex and often opaque?
These questions do not have uniform answers—and that is precisely the point.
The US model embraces ambiguity as part of its regulatory logic. It relies on case law, enforcement actions, and evolving guidance to gradually define the boundaries of acceptable behavior. In doing so, it transforms regulation into a dynamic process rather than a fixed framework.
Healthcare as a stress test for AI governance
If financial markets reveal how the United States regulates AI through disclosure and enforcement, the healthcare sector exposes something even more fundamental: what happens when algorithmic decision-making directly affects human lives.
Here, AI is no longer an abstract governance topic or a matter of investor communication. It becomes tangible, immediate, and politically charged.
The use of AI in health insurance—particularly in claims processing and prior authorization—has become one of the most controversial applications of the technology in the United States. Algorithms are increasingly used to assess whether treatments are medically necessary, to streamline administrative processes, and, crucially, to control costs.
In theory, this promises efficiency. In practice, it raises uncomfortable questions.
Patients, doctors, and policymakers are confronted with decisions that are often difficult to explain. Why was a treatment denied? On what basis did the algorithm reach its conclusion? And to what extent is a human actually involved in the decision-making process?
These concerns are not merely anecdotal. Public skepticism toward AI is significant across the political spectrum, and the use of algorithms in healthcare has amplified that distrust. Reports of automated or semi-automated claim denials have triggered scrutiny from lawmakers, regulators, and professional bodies alike.
What makes healthcare particularly relevant from a governance perspective is the asymmetry of impact. In financial markets, misrepresentation may distort investor decisions. In healthcare, algorithmic decisions can delay or deny treatment. The stakes are existential.
This transforms AI governance from a question of efficiency into a question of legitimacy.
Read more on KFF Health News – Red and Blue States Alike Want To Limit AI in Insurance. Trump Wants To Limit the States.
The rise of state-level intervention
In the absence of a comprehensive federal framework, US states have begun to step into the regulatory vacuum. Across the country, legislators are introducing and passing laws aimed at controlling the use of AI in healthcare and insurance.
These initiatives share a common theme: reintroducing human accountability into algorithmic processes.
Typical measures include:
- Prohibiting AI from being the sole basis for coverage decisions
- Requiring human review or sign-off
- Mandating transparency and auditability of algorithms
States such as Arizona, Maryland, Texas, and Nebraska have already enacted legislation, while others continue to develop similar frameworks.
At first glance, this appears to be a logical and necessary response. However, it introduces a new layer of complexity.
What does “human oversight” actually mean in practice? Is a cursory review sufficient, or must the human decision-maker fully understand and challenge the algorithmic output? And how does one ensure that human involvement does not become a mere formality—a rubber stamp that legitimizes automated decisions without truly scrutinizing them?
These questions highlight a deeper governance issue: the illusion of control.
Simply inserting a human into the process does not automatically resolve the risks associated with AI. Without clear standards, documentation, and accountability mechanisms, human oversight can become symbolic rather than substantive.
For boards and regulators, this is a critical lesson. Effective governance is not achieved by adding layers, but by ensuring that each layer functions meaningfully.
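What substantive oversight could look like in practice can be sketched in code. The example below is a minimal illustration only; the claims pipeline, field names, and decision flow are hypothetical assumptions, not a reference to any actual insurer's system. Its structural point mirrors the state-level measures above: a denial cannot rest on the algorithm's output alone, and the human reviewer must record a rationale, leaving an auditable trace rather than a rubber stamp.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AlgorithmicAssessment:
    claim_id: str
    model_version: str
    recommendation: str   # e.g. "approve" or "deny"
    confidence: float

@dataclass
class HumanReview:
    reviewer_id: str
    decision: str         # "approve" or "deny"
    rationale: str        # written justification for the decision

def finalize_claim(assessment: AlgorithmicAssessment,
                   review: HumanReview | None) -> dict:
    """Finalize a claim so that no denial rests on the model alone."""
    if assessment.recommendation == "approve" and review is None:
        # Favorable outcomes may follow the model directly.
        decision, basis = "approve", "model"
    elif review is not None and review.rationale.strip():
        # Any adverse or overridden outcome needs a documented human judgment.
        decision, basis = review.decision, "human"
    else:
        raise ValueError(
            f"Claim {assessment.claim_id}: an adverse decision requires "
            "human review with a written rationale"
        )
    # Persist both the model output and the human judgment for audit.
    return {
        "claim_id": assessment.claim_id,
        "decision": decision,
        "basis": basis,
        "model_version": assessment.model_version,
        "model_recommendation": assessment.recommendation,
        "reviewed_by": review.reviewer_id if review else None,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
```

Under a gate like this, "human oversight" stops being a checkbox: the rationale requirement forces the reviewer to engage with the model's output before a denial becomes final.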
Read more on the BBC – Trump signs order to block states from enforcing own AI rules.
The federal response: innovation first
While states move toward tighter control, the federal government has taken a markedly different stance—particularly under political leadership that emphasizes competitiveness and technological leadership.
The argument is straightforward: excessive regulation risks undermining the United States’ position in the global AI race. Innovation, speed, and scale are seen as strategic imperatives, especially in the context of geopolitical competition.
This perspective has led to attempts to limit or preempt state-level regulation. A recent executive order sought to restrict states from enforcing what it characterized as overly burdensome AI rules, arguing that fragmented regulation could hinder national competitiveness.
From a governance standpoint, this introduces a fundamental tension.
On the one hand, centralized coordination can reduce fragmentation and create a more predictable environment for businesses. On the other hand, limiting state-level initiatives risks leaving critical gaps in consumer protection—particularly in sectors like healthcare, where the impact of AI is immediate and deeply personal.
This tension is not easily resolved because it reflects two competing visions of governance:
- Innovation-driven governance, prioritizing growth and global leadership
- Protection-driven governance, prioritizing fairness, transparency, and accountability
A fragmented battlefield: federal versus state
The result is a regulatory landscape that is increasingly fragmented—and, in some respects, contradictory.
States continue to push forward with their own initiatives, often in direct opposition to federal preferences. Legislative activity at the state level is accelerating, driven by public concern, political incentives, and the perceived absence of federal leadership.
As one observer noted, AI policy in the US has become “hyperlocal, national and international in significance” at the same time.
This creates a multi-layered governance environment:
- Federal policy sets broad strategic direction
- States implement sector-specific protections
- Agencies enforce existing laws across both levels
For companies operating across multiple jurisdictions, this translates into a patchwork of obligations. Compliance is no longer a matter of adhering to a single framework, but of navigating overlapping and sometimes conflicting requirements.
The comparison with Europe becomes particularly stark at this point. While the EU’s AI Act may be complex, it offers a unified structure. The US system, by contrast, resembles a mosaic—dynamic, decentralized, and difficult to predict.
The governance implications of fragmentation
From a corporate governance perspective, fragmentation introduces several challenges.
First, it increases compliance complexity. Organizations must monitor developments at both federal and state levels, often in real time. This requires not only legal expertise but also a governance structure capable of integrating diverse regulatory signals.
Second, it creates strategic uncertainty. Decisions about where and how to deploy AI systems may depend on jurisdictional differences. What is permissible in one state may be restricted in another, complicating operational models and investment decisions.
Third, it amplifies reputational risk. In a fragmented environment, inconsistencies in how AI is used across jurisdictions can attract scrutiny from regulators, media, and stakeholders.
But perhaps the most important implication is more subtle.
Fragmentation shifts the burden of governance from regulators to organizations themselves.
In the absence of a clear, unified framework, companies must define their own standards for responsible AI use. They must decide how much transparency to provide, how to ensure fairness, and how to balance efficiency with accountability. In effect, they become not just subjects of regulation, but co-creators of the governance landscape.
This is both an opportunity and a risk.
It allows leading organizations to set benchmarks and build trust. But it also exposes laggards to enforcement actions, litigation, and reputational damage.
From policy debate to governance reality
The debate over AI regulation in the United States is often framed as a political contest—federal versus state, innovation versus control. While these dimensions are real, they can obscure a more fundamental point.
Regardless of where the regulatory lines are ultimately drawn, AI is already being governed.
It is governed through enforcement actions, through sector-specific rules, through state legislation, and through the internal policies of organizations themselves. The system may lack coherence, but it is far from inactive.
For governance professionals, the implication is clear.
The question is not whether regulation exists. The question is whether organizations understand how it operates—and whether their governance structures are equipped to respond.
The United States, then, does not lack regulation; it lacks a single narrative about where regulation resides. That ambiguity is not accidental. It is a structural feature of a system that prefers enforcement over codification and decentralization over uniformity.
To understand the implications, it is necessary to step back and compare two fundamentally different regulatory logics.
Enforcement versus legislation: two paths to control
The European Union and the United States are not merely applying different tools; they are pursuing different philosophies of control.
The EU’s approach is legislative. It defines categories, prescribes requirements, and creates a compliance framework that organizations must follow before deploying AI systems. The advantage is clarity. Companies know where they stand, even if compliance is burdensome.
The US approach is evolutionary. It allows practices to develop and then intervenes when risks materialize or when behavior crosses legal boundaries. Enforcement actions, litigation, and regulatory guidance gradually shape the contours of acceptable conduct.

This distinction can be framed as follows:
- EU: regulation precedes behavior
- US: behavior triggers regulation
From a governance perspective, each model carries trade-offs.
The European system reduces uncertainty but may struggle to keep pace with technological change. The American system is more flexible but shifts interpretative risk onto organizations. In the absence of predefined rules, companies must anticipate how regulators will apply existing laws to new contexts.
This is particularly evident in AI. There is no single US rulebook for algorithmic governance, yet there is a growing body of enforcement actions, speeches, and guidance that together form an implicit framework.
For boards, this means that compliance is no longer a checklist exercise. It becomes a matter of judgment.
Read more on EU Digital Operational Resilience in our blog: DORA and the Boardroom – Why Digital Operational Resilience Has Become a Core Governance Responsibility.
AI governance as a board-level responsibility
One of the most significant shifts driven by AI is the elevation of technology governance to the highest levels of the organization. AI is not just an IT issue, nor merely a strategic lever. It is a governance issue that cuts across risk management, reporting, ethics, and accountability.
In the US context, this is reinforced by the enforcement-driven model.
Because regulators focus on outcomes—misleading disclosures, discriminatory effects, inadequate controls—the responsibility for ensuring compliance rests squarely with the organization. Boards cannot rely on detailed regulatory prescriptions; they must actively define and oversee the governance of AI.
This has several concrete implications.
1. Oversight of AI strategy and risk appetite
Boards must understand how AI is used within the organization and what risks it introduces. This goes beyond high-level awareness. It requires a structured view of:
- where AI is deployed
- what decisions it influences
- what risks arise from its use
This is analogous to financial risk oversight. Just as boards define risk appetite for market, credit, or operational risk, they must articulate their tolerance for algorithmic risk.
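One way to make that structured view concrete is a simple AI inventory, analogous to a risk register. The sketch below is illustrative only; the record fields, risk tiers, and register contents are assumptions, not a recognized schema. It shows the kind of artifact a board could request: where each system is deployed, which decisions it influences, who is accountable, and which systems fall outside the stated risk appetite.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    business_unit: str               # where the system is deployed
    decisions_influenced: list[str]  # what it actually decides or shapes
    risk_tier: str                   # e.g. "low", "elevated", "critical"
    owner: str                       # accountable executive

# A hypothetical register; in practice this would live in a GRC platform.
REGISTER = [
    AISystemRecord("claims-triage", "Insurance Operations",
                   ["prior authorization routing"], "critical", "COO"),
    AISystemRecord("churn-model", "Marketing",
                   ["retention offers"], "low", "CMO"),
]

def outside_appetite(register: list[AISystemRecord],
                     tolerated: set[str]) -> list[AISystemRecord]:
    """Return systems whose risk tier exceeds the board's stated tolerance."""
    return [r for r in register if r.risk_tier not in tolerated]

# Example: the board has accepted only "low" and "elevated" algorithmic risk.
for record in outside_appetite(REGISTER, {"low", "elevated"}):
    print(f"Escalate: {record.name} ({record.risk_tier}), owner: {record.owner}")
```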
2. Internal control and COSO alignment
AI systems challenge traditional internal control frameworks, particularly in areas such as transparency and auditability. The COSO framework remains relevant, but its application must be adapted.
Key control considerations include:
- Control environment: tone at the top regarding responsible AI use
- Risk assessment: identification of model risk, bias, and data dependencies
- Control activities: validation, monitoring, and override mechanisms
- Information and communication: transparency of AI outputs and limitations
- Monitoring: continuous evaluation of model performance and outcomes
The difficulty lies in the nature of AI itself. Models can evolve, learn, and produce outputs that are not easily explainable. This complicates the traditional notion of control, which relies on traceability and predictability.
In this sense, AI introduces a shift from deterministic control to probabilistic oversight.
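The shift toward probabilistic oversight can be illustrated with a minimal monitoring control. The sketch below is a toy example; the metric, window, and tolerance threshold are all assumptions, and a real control would rely on proper statistical tests and segment-level analysis. What it captures is the pattern behind the COSO monitoring component applied to a model rather than a ledger: measure, compare against a validated baseline, and escalate to human review on a breach.

```python
from statistics import mean

def check_model_drift(recent_outcomes: list[int],
                      baseline_rate: float,
                      tolerance: float = 0.05) -> dict:
    """Flag drift in a binary decision model (1 = approve, 0 = deny).

    Measures the observed approval rate over a recent window, compares
    it to a validated baseline, and escalates when the gap exceeds the
    agreed tolerance. The pattern, not the statistics, is the point.
    """
    observed = mean(recent_outcomes)
    drift = abs(observed - baseline_rate)
    return {
        "observed_rate": round(observed, 3),
        "baseline_rate": baseline_rate,
        "drift": round(drift, 3),
        "action": "escalate_to_human_review" if drift > tolerance else "none",
    }

# Example: approval rate validated at 0.80; the recent window has dipped.
print(check_model_drift([1, 0, 1, 0, 0, 1, 0, 1, 0, 0], baseline_rate=0.80))
```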
3. Disclosure and reporting integrity
In the US, where the SEC plays a central role, disclosure becomes a critical focal point. Companies must ensure that their statements about AI—whether in annual reports, investor presentations, or public communications—are accurate, balanced, and not misleading.
This is where governance intersects directly with financial reporting.
AI capabilities are increasingly positioned as drivers of growth, efficiency, and competitive advantage. The temptation to overstate these capabilities is significant, particularly in a market environment that rewards technological narratives. Yet this is precisely where enforcement risk emerges.
Boards and audit committees must therefore ask:
- Are AI-related claims substantiated?
- Are risks adequately disclosed?
- Is there consistency between internal reality and external communication?
The parallels with IFRS and narrative reporting are evident. Just as management must avoid bias in presenting financial performance, it must avoid distortion in presenting technological capabilities.
4. Audit and assurance challenges
For auditors, AI introduces new complexities. Traditional audit approaches rely on evidence that is verifiable and reproducible. AI systems, particularly those based on machine learning, do not always conform to these characteristics.
This raises several questions:
- How can auditors assess the reliability of algorithmic outputs?
- What constitutes sufficient audit evidence in an AI-driven process?
- How should model risk be incorporated into audit planning?
These challenges are still evolving, but one conclusion is clear: documentation becomes critical. Organizations must be able to demonstrate how AI systems function, how they are governed, and how their outputs are validated.
Without this, assurance becomes difficult—if not impossible.
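What demonstrable documentation might look like can be sketched as a decision-level audit record. The example below is illustrative; the schema and field names are assumptions rather than any recognized standard. The idea is that each algorithmic output is logged with its model version, a hash of the inputs it saw, and its validation status, so an auditor can later reconstruct what the system did and under which controls it operated.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(inputs: dict, model_version: str,
                 output: str, validated: bool) -> dict:
    """Build an audit-trail record for one algorithmic decision.

    Hashing the inputs gives a tamper-evident reference to what the
    model actually saw, without copying sensitive data into the log.
    """
    inputs_digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_sha256": inputs_digest,
        "output": output,
        "model_validated": validated,  # ties the decision to a validation cycle
    }

record = log_decision(
    {"claim_id": "C-1042", "procedure": "MRI"}, "v2.3.1", "deny", True
)
print(json.dumps(record, indent=2))
```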
Read more in our blog on AI – Axelera AI and the Governance of European AI Ambition.
Is the US regulating AI or not?
After examining the various dimensions—enforcement, sectoral regulation, state initiatives, and corporate governance—the answer to the central question becomes clearer.
The United States is regulating AI.
But it is doing so in a way that defies traditional expectations.
There is no single law comparable to the EU AI Act. There is no unified classification system or centralized compliance regime. Instead, regulation emerges from a combination of:
- existing legal frameworks
- agency enforcement actions
- state-level legislation
- strategic policy interventions
This creates a system that is:
- decentralized rather than centralized
- reactive rather than proactive
- principles-based rather than rules-based
From one perspective, this can be seen as a weakness. The lack of clarity increases uncertainty and complicates compliance. From another perspective, it is a deliberate choice—one that prioritizes flexibility and adaptability in a rapidly changing technological landscape.
The key insight is this:
The US regulates AI functionally, not formally.
Toward convergence or divergence?
Looking ahead, the question is whether these different approaches will converge or continue to diverge.
Several forces point toward convergence:
- increasing public concern about AI risks
- pressure for greater transparency and accountability
- the global nature of AI markets, which favors some degree of harmonization
At the same time, structural differences remain. The EU’s preference for codification and the US’s reliance on enforcement are deeply rooted in their respective legal and political traditions.
For multinational organizations, this means that dual compliance is not optional. They must navigate:
- EU-style ex-ante requirements
- US-style ex-post enforcement risk
This is not merely a regulatory challenge. It is a governance challenge.
Organizations must build systems that are robust enough to meet formal requirements where they exist, while also being flexible enough to withstand scrutiny where rules are less explicit.
Final reflection: governance as the stabilizing force
Artificial intelligence is often compared to electricity—a general-purpose technology that transforms entire economies. If that analogy holds, then governance is the grid that ensures the system functions safely and reliably.
In the United States, that grid is less visible than in Europe. It is distributed across agencies, courts, and states. It operates through enforcement as much as through legislation. And it relies heavily on organizations themselves to interpret and implement its principles.
This places a significant responsibility on boards, executives, and governance professionals.
The real question is not whether AI is regulated.
The real question is whether organizations understand where regulation actually resides—and whether they have the governance structures in place to respond accordingly.
Because in a system defined by ambiguity, the absence of clarity is not the absence of accountability.
FAQs
FAQ 1 – Is artificial intelligence actually regulated in the United States?
Yes—artificial intelligence is regulated in the United States, but not through a single, comprehensive legal framework comparable to the EU AI Act. Instead, regulation is distributed across existing legal regimes, regulatory agencies, and sector-specific rules. This creates a system that is less visible but not less impactful.
Key regulators such as the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC), and the Consumer Financial Protection Bureau (CFPB) already apply existing laws to AI-related activities. For example, misleading claims about AI capabilities can fall under securities fraud or deceptive business practices. In addition, state governments are actively introducing legislation, particularly in sectors like healthcare and insurance, where algorithmic decisions have direct societal consequences.
This decentralized approach reflects a broader governance philosophy in the US: regulate outcomes rather than technologies themselves. Rather than defining what AI is allowed to do upfront, regulators intervene when harm occurs or when existing legal standards—such as transparency, fairness, or fiduciary duty—are violated.
For organizations, this means that regulatory risk is real but less predictable. AI governance in the US requires continuous interpretation of evolving enforcement practices rather than compliance with a fixed rulebook.
FAQ 2 – What is the main difference between AI regulation in the EU and the United States?
The fundamental difference lies in regulatory philosophy: the European Union regulates AI through predefined rules, while the United States regulates behavior through enforcement.
The EU AI Act adopts a risk-based, ex-ante approach. AI systems are categorized (e.g., high-risk), and strict requirements must be met before deployment. This provides clarity and legal certainty, but it can also slow innovation and create compliance burdens.
The US model is more adaptive and reactive. Instead of introducing a single AI law, regulators rely on existing frameworks—such as securities law, consumer protection, and anti-discrimination rules—to address risks as they arise. Enforcement actions, court decisions, and regulatory guidance gradually shape acceptable practices.
From a governance perspective, this creates different challenges. In the EU, compliance is structured and predictable. In the US, companies must interpret how general principles—such as transparency, fairness, and accountability—apply to AI use cases. This increases uncertainty but allows for flexibility.
Ultimately, the EU regulates the design of AI systems, while the US focuses on the consequences of their use. Organizations operating globally must navigate both models simultaneously.
FAQ 3 – Why is the SEC considered a key regulator of AI in the United States?
The SEC has emerged as a de facto AI regulator because of its authority over disclosure, market integrity, and investor protection. While it does not regulate AI technology directly, it regulates how companies represent and use AI in a financial context.
A central issue is “AI-washing”—the practice of overstating or misrepresenting AI capabilities to investors. In capital markets, where valuations can be influenced by technological narratives, this risk is significant. If companies exaggerate their AI sophistication or fail to disclose associated risks, they may violate securities laws.
The SEC’s enforcement approach focuses on core principles: accuracy, transparency, and materiality. Companies must ensure that statements about AI are supported by evidence and consistent with internal reality. This aligns closely with broader financial reporting standards, where misleading disclosures can undermine investor confidence.
For boards and audit committees, this elevates AI to a governance and reporting issue. Oversight must extend beyond operational performance to include how AI is communicated externally. The parallel with ESG reporting is clear: just as greenwashing became a major enforcement topic, AI-related disclosures are likely to receive increasing scrutiny.
FAQ 4 – Why is AI regulation in healthcare such a critical issue in the US?
Healthcare represents one of the most sensitive and high-impact domains for AI regulation because algorithmic decisions can directly affect patient outcomes. Unlike financial applications, where consequences are primarily economic, healthcare decisions can influence access to treatment, timing of care, and ultimately patient well-being.
AI is increasingly used by health insurers for processes such as prior authorization and claims assessment. While this can improve efficiency and reduce administrative costs, it also introduces risks related to transparency and fairness. Patients and healthcare providers often struggle to understand how decisions are made, particularly when algorithms operate as “black boxes.”
This has triggered growing concern among policymakers, regulators, and professional organizations. State-level legislation has begun to address these issues by requiring human oversight, limiting the use of AI as the sole basis for decisions, and mandating greater transparency.
From a governance perspective, healthcare highlights the ethical dimension of AI. It forces organizations to balance efficiency with accountability and cost control with patient care. The sector demonstrates that AI governance is not only a technical or legal issue but also a societal one, where legitimacy and trust are central.
FAQ 5 – What challenges does the fragmented US regulatory landscape create for companies?
The fragmented nature of AI regulation in the United States creates significant complexity for organizations, particularly those operating across multiple states or sectors. In the absence of a unified federal framework, companies must navigate a patchwork of state laws, regulatory guidance, and enforcement practices.
This fragmentation leads to several key challenges. First, compliance becomes more resource-intensive. Organizations must continuously monitor legal developments at both federal and state levels and adapt their policies accordingly. Second, it creates strategic uncertainty. Differences between jurisdictions may affect decisions about where and how to deploy AI systems.
Third, fragmentation increases reputational risk. Inconsistent practices across regions can attract scrutiny from regulators, stakeholders, and the public. What is acceptable in one state may be viewed as problematic in another.
Perhaps most importantly, fragmentation shifts responsibility onto companies themselves. Without clear, uniform rules, organizations must define their own governance standards for AI. This requires strong internal controls, clear accountability structures, and a proactive approach to risk management.
In this environment, leading companies distinguish themselves not by minimal compliance, but by establishing robust, transparent, and defensible AI governance frameworks.
FAQ 6 – What does AI regulation mean for boards, audit committees, and governance professionals?
AI regulation fundamentally reshapes the responsibilities of boards, audit committees, and governance professionals. It transforms AI from a technical or operational issue into a core element of enterprise governance.
Boards must ensure that AI risks are identified, assessed, and integrated into the organization’s overall risk management framework. This includes understanding how AI is used, what decisions it influences, and what potential risks—such as bias, lack of transparency, or model errors—may arise.
Audit committees play a critical role in overseeing disclosure and reporting. As AI becomes part of the narrative presented to investors, committees must ensure that claims are accurate, balanced, and supported by evidence. This aligns closely with existing responsibilities under financial reporting and internal control frameworks.
From a COSO perspective, AI introduces new dimensions to internal control, particularly around data governance, model validation, and monitoring. Traditional control mechanisms must be adapted to address systems that are dynamic and, in some cases, difficult to explain.
Ultimately, AI regulation reinforces a broader trend: governance is moving upstream. It is no longer sufficient to react to regulatory requirements. Organizations must proactively design governance structures that anticipate scrutiny and build trust.
