Agentic AI

1. What is it?

Agentic AI refers to AI systems that can pursue goals, take sequences of actions, and adapt their behaviour with a degree of autonomy, often interacting with tools, systems or other agents. The emphasis is on goal-directed behaviour over time, rather than single, isolated outputs.


2. What problem does it aim to address within governance and regulation?

The term is used to capture emerging behaviours that feel qualitatively different from traditional task-based AI, particularly where systems:

  • initiate actions,

  • chain decisions,

  • operate continuously rather than episodically.

It helps signal increased complexity and potential risk, but it does not itself define governance obligations.


3. Where does it typically appear in organisational practice?

Agentic AI is typically referenced:

  • in innovation and R&D discussions,

  • in vendor marketing and technical demos,

  • in strategic conversations about automation and orchestration,

  • in early-stage risk discussions about future AI capabilities.

It is rarely embedded in formal policies or compliance documentation.


4. What can go wrong if it is interpreted or applied incorrectly?

If agentic AI is treated as a formal risk category rather than a descriptive label, organisations may:

  • overestimate novelty while overlooking existing control requirements,

  • design ad hoc governance structures without regulatory basis,

  • conflate system behaviour with accountability allocation,

  • neglect that responsibility still lies with human decision-makers.

The main risk is terminology-driven governance instead of impact-driven governance.


5. Who is accountable, and what oversight is required?

Regardless of whether a system is described as agentic, accountability remains with the organisation deploying and using the AI system. Boards and oversight bodies should ensure that:

  • perceived autonomy does not obscure responsibility,

  • control mechanisms are aligned with actual system behaviour,

  • emerging capabilities are assessed within existing governance frameworks,

  • experimentation does not bypass established approval processes.