“EA in the Age of AI: From Static Models to Living Systems”
Enterprise Architecture (EA) has always been about structure and alignment. But with the rise of generative and agentic AI (Artificial Intelligence), we are entering a new era: one where systems learn, adapt, and interact in ways we did not explicitly design.
This shift challenges the very foundations of EA. It is no longer enough to model what is; we must also govern what emerges. Gartner estimates that by 2028, one in three enterprise software applications will incorporate agentic AI.
As Jesper Löwgren recently highlighted (see his post here), system design has moved through three major eras, each of which alters the logic of system design itself. This directly challenges how we should architect solutions, design organisations, and govern technology in the execution of strategy. In this post, we want to explore:
- How EA and AI intersect in two distinct but related ways
- Why autonomy and emergence demand new governance models
- How to think about AI agents in architectural terms
- Why business architecture and modern EA solutions such as Next-Insight matter more than ever.
Two Intersections: AI in EA vs EA for AI
If you follow the various fora, you will notice that two conversations are really happening at once:
AI inside EA systems
This is about using AI to make EA practice more efficient. Imagine automatic mapping of dependencies, faster scenario modelling, or instant insights into portfolio risks. EA platforms are experimenting with AI-powered assistants that crunch data, generate diagrams, and suggest roadmaps. This does not change the business or what EA is; it simply makes the work faster and smarter.
EA for AI in organisations
Here is where things get interesting. AI is no longer just another technology to integrate; it is a capability that can reshape business models, operations, and customer value. Enterprise Architecture can help organisations design where AI adds value, how it links to strategy, and what capabilities must evolve to leverage it. Without EA, AI risks becoming a set of scattered experiments that don’t scale. With EA, AI becomes a structured enabler of digital transformation.
Both conversations are valid. But the second one, EA for AI, is where the real stakes lie. You could even argue that EA should come first to guide the AI initiatives.
Autonomy and New Governance
Autonomy gave us scalability. An AI agent can act without waiting for human steps or input. But once multiple agents begin to interact, we enter the realm of “emergence”. Patterns appear that were never explicitly designed. Some are valuable; others may be harmful, biased, or unstable.
This is where governance must shift. Traditional governance is built around components such as processes, systems, and metadata. If each piece is controlled, the whole system is predictable. But in emergent systems, the new risks lie in the spaces “between” components:
- Signals moving between agents: the dynamic interactions or messages that propagate between AI agents. These signals are actionable and may influence behaviour, creating emergent patterns that were never explicitly designed.
- Dependencies that multiply risks.
- Feedback loops that can spiral out of control or stabilise.
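One way to make "the spaces between components" concrete is to treat agent-to-agent signal flows as a directed graph and check it for cycles: a cycle is a candidate feedback loop worth monitoring. The sketch below is illustrative only; the agent names and the simple depth-first search are assumptions, not part of any product.

```python
# Illustrative sketch: surface potential feedback loops between AI agents
# by finding a cycle in their signal-dependency graph.
# Agent names and edges are hypothetical examples, not a real landscape.

def find_feedback_loop(graph):
    """Return one cycle of agent names if the signal graph contains a
    feedback loop, otherwise None. `graph` maps agent -> list of agents
    it sends signals to."""
    def visit(node, path):
        if node in path:                      # revisited a node on this path
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = visit(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return None

signal_flows = {
    "PricingAgent": ["DemandForecastAgent"],
    "DemandForecastAgent": ["InventoryAgent"],
    "InventoryAgent": ["PricingAgent", "ReportingAgent"],  # closes a loop
    "ReportingAgent": [],
}
print(find_feedback_loop(signal_flows))
# → ['PricingAgent', 'DemandForecastAgent', 'InventoryAgent', 'PricingAgent']
```

A loop like this is not necessarily harmful (it may stabilise), but it is exactly the kind of structure that component-by-component governance would never see.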
Governance and EA are no substitute for operational control, but they must ensure that operations are observed at runtime under accountable management. Such real-time observability should be provided by the AI platforms themselves, while overall metrics and status are exposed to an EA management portal such as Next-Insight.
Real-time monitoring, embedded safeguards, and clear lines of responsibility must supplement classic component governance. It is no longer sufficient to assign responsibility after the fact.
The big question is where this observability takes place, and who is accountable for an autonomous agent’s decisions? Component-based governance must be supplemented with instrumentation that guides and ensures ethical, accountable behaviour. This is precisely where EA has a role: framing the design and oversight needed to govern autonomous systems effectively, with integrations to AI platforms.
Modelling AI Agents in EA
So, how do we architect for AI agents? A practical way forward is to model AI agents as applications with enhanced attributes. Each agent can be catalogued in the application portfolio, with properties indicating that it is an agent, its type of autonomy, its governance owner, and its risk/ethics classification. This ensures agents are not “floating around” but are embedded within the same governance model as other information assets.
From there, EA can link agents directly to capabilities and strategic goals. For example, an “AI Sales Assistant Agent” might support the “Customer Engagement” capability while being governed as an application with dependencies, data flows, and safeguards. This dual view helps business leaders understand where AI drives outcomes, and IT leaders see where agents sit within the landscape.
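The catalogue entry described above can be sketched as a simple data structure. This is a minimal illustration, assuming a handful of attribute names (autonomy level, governance owner, risk classification); they are examples, not a fixed metamodel.

```python
# Illustrative sketch: an AI agent catalogued as an application with
# enhanced attributes. Field names are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTED = "assisted"        # a human approves every action
    SUPERVISED = "supervised"    # a human monitors and can intervene
    AUTONOMOUS = "autonomous"    # acts without waiting for human steps

@dataclass
class AgentApplication:
    name: str
    is_agent: bool = True
    autonomy_level: AutonomyLevel = AutonomyLevel.SUPERVISED
    governance_owner: str = ""
    risk_classification: str = "unclassified"  # e.g. from an ethics review
    supported_capabilities: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)

# The "AI Sales Assistant Agent" example from the text, with hypothetical
# owner and dependency values:
sales_agent = AgentApplication(
    name="AI Sales Assistant Agent",
    autonomy_level=AutonomyLevel.SUPERVISED,
    governance_owner="Head of Sales Operations",
    risk_classification="medium",
    supported_capabilities=["Customer Engagement"],
    dependencies=["CRM System", "Customer Data Platform"],
)
```

The point of the structure is the dual view: the capability link answers the business question ("why do we have it?"), while the owner, risk class, and dependencies answer the governance question ("who owns it, and what does it touch?").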
As agents begin to interact, EA must also capture what needs to be modelled around them. As emergent patterns appear, the spaces between components gradually demand more attention. At least three distinct modelling lenses are commonly discussed:
- Applications lens: agents have owners, dependencies, and governance.
- Capabilities lens: agents are justified by enhanced business outcomes.
- Interaction lens: collective behaviours are monitored and guided.
However, agents are not merely static components; they exhibit changing behaviour, emit “signals”, may form dependencies, and can act as stabilisers that shape emergent behaviour. This is where the runtime observability typically offered by AI platforms becomes essential, and where considerations such as ethical checks and architectural safeguards come into play.
The EA challenge is that AI agents do not simply execute code; they learn, adapt, and occasionally misbehave. This means enterprise architecture cannot stop at documenting what an agent does; it must, to some extent, extend into runtime visibility, including triggers and alarms, often referred to as “observability integration”.
To evolve the EA practice, EA tools can integrate with AI platforms to enable observability and model agents properly, because both autonomy and risk may drift over time. Such drift can change the risk profile of data and components, pushing agents to the edge of the documented metadata landscape.
We are witnessing a shift towards EA modelling agents with increasing sophistication: tracking the decisions agents make, mapping dependencies on other agents, assessing risks in data descriptions, and surfacing feedback loops.
This is where AI platforms must prevent runaway effects. Stabilisers such as circuit breakers, escalation paths, and runtime guardrails must be in place, alongside dashboards that flag anomalies for overall impact modelling. In practice, AI platforms should provide the runtime observability layer, emitting signals via REST APIs that can be consumed by modern EA tools. These signals provide insight into agent behaviour, risk drift, dependencies, and anomalies, enabling EA to function as a living architecture: a near-real-time landscape that remains current and responsive.
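To make the circuit-breaker idea concrete, here is a minimal sketch: a breaker consumes observability signals, and once anomalies cross a threshold it blocks further agent actions and raises an escalation. The threshold and the signal shape (a dict with an `anomaly` flag) are assumptions for illustration, not the API of any specific AI platform.

```python
# Minimal sketch of a circuit breaker as a stabiliser for an AI agent.
# Signal shape and threshold are illustrative assumptions.

class AgentCircuitBreaker:
    def __init__(self, anomaly_threshold=3):
        self.anomaly_threshold = anomaly_threshold
        self.anomaly_count = 0
        self.open = False          # open circuit = agent actions blocked
        self.escalations = []      # escalation path: messages for a human owner

    def record_signal(self, signal):
        """Consume one observability signal (a dict with an 'anomaly' flag)."""
        if signal.get("anomaly"):
            self.anomaly_count += 1
            if self.anomaly_count >= self.anomaly_threshold and not self.open:
                self.open = True   # trip the breaker: stop the runaway effect
                self.escalations.append(
                    f"Escalate: {signal.get('agent', 'unknown')} tripped breaker"
                )

    def allow_action(self):
        """Guardrail check the agent must pass before acting."""
        return not self.open
```

In a real platform the breaker would live in the runtime layer, while the EA portal only needs to see its state and escalation history for portfolio-level analysis.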
With Next-Insight, it is easy to extend components using custom fields, allowing you to represent AI agent attributes such as Autonomy Level, Signal Type, and Risk Classification. By integrating these custom fields with observability data from AI platforms via REST APIs, the EA model is updated dynamically in real time. This enables living documentation, in which agent metrics flow directly into the enterprise architecture landscape.
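A sketch of this integration step follows: mapping raw observability metrics from an AI platform onto custom-field updates for an EA portal. The payload shape, field names, and the anomaly-rate threshold are all invented for illustration; they are not the real Next-Insight API, so consult your EA tool's actual REST documentation for real endpoints and schemas.

```python
# Hypothetical sketch: translate runtime agent metrics into a custom-field
# update payload for an EA portal. Payload shape and field names are
# illustrative assumptions, not a real API.

def metrics_to_custom_fields(component_id, metrics):
    """Map observability metrics onto EA custom-field values."""
    # Derive a risk classification from a (hypothetical) anomaly rate.
    risk = "high" if metrics.get("anomaly_rate", 0.0) > 0.05 else "normal"
    return {
        "componentId": component_id,
        "customFields": {
            "Autonomy Level": metrics.get("autonomy_level", "unknown"),
            "Signal Type": metrics.get("signal_type", "unknown"),
            "Risk Classification": risk,
        },
    }

# In practice this payload would be pushed to the portal's REST API
# (e.g. with urllib.request) on a schedule or via a webhook.
payload = metrics_to_custom_fields(
    "agent-42",
    {"autonomy_level": "supervised", "signal_type": "event", "anomaly_rate": 0.08},
)
print(payload["customFields"]["Risk Classification"])  # → high
```

The mapping is deliberately kept pure and separate from the HTTP call, so the drift logic (metrics in, classification out) can be tested and governed independently of the transport.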
While operational actions and escalation paths may still reside within the AI platform itself, the visibility of these mechanisms becomes part of the EA landscape for portfolio analysis.
This also introduces a clear division between governance and accountability in EA (“who owns it?”, and “why do we have it?”) and AI responsibility in runtime (“who monitors it live, and what tools intervene when it goes wrong?”).
Why Business Architecture and EA Portal Matter
This is where business architecture becomes critical. AI is powerful, but without a clear link to business strategy and capabilities, it risks becoming a distraction, or just another tech project. Business architecture provides the map; the EA Portal acts as the knowledge base for decision-makers:
- Which capabilities should be AI-enabled first?
- How do those capabilities connect to strategic goals?
- Which value streams and processes benefit most from AI intervention?
But business architecture also needs modern platforms to thrive. Static documents and disconnected models are no longer sufficient. What’s required is a dynamic, digital knowledge base, typically referred to as the EA Portal; this is why Next-Insight captures strategy, processes, capabilities, and, as an evolving need, AI agents.
Autonomous systems gave us scale. Emergent systems give us something altogether new. For Enterprise Architecture, the implication is clear:
- We must design for outcomes, but also for the conditions that shape emergent behaviour.
- We must govern not just components, but also observe the spaces between them so we can act in time.
- We must link AI explicitly to business goals and capabilities, so it serves strategy rather than distracts from it.
The paradox of EA has always been that “almost everything can be called architecture.” With AI, that paradox only deepens, but it also creates opportunity. If we can architect for autonomy and emergence, balancing scale with governance, we don’t just keep up with AI. We make AI accountable, strategic, and truly transformative.
Let’s connect if you require assistance getting started with EA for AI and for governing AI at scale. Book a demo here.


