After agents, part 2 – Agents and the Agora

LLMs · AI · agents

Why agents are the least interesting part of agentic AI.

Chris von Csefalvay
10 February 2025

If you spend any time on LinkedIn, it’s almost a certainty that you have come across a bevy of alleged ‘agentic AI architectures’. They all look something like this:

flowchart TD
    %% Define the main nodes
    n1([Start])
    n2[[Manager Agent]]
    n3((End))

    %% Subgraph 1: Data Ingestion
    subgraph s1["Data Ingestion"]
        n11["Ingestion Manager"]
        n12["SQL Interactor Agent"]
        n13["RAG Agent"]
        n11 --> n12
        n11 --> n13
    end

    %% Subgraph 2: Analysis
    subgraph s2["Analysis"]
        n21["Analysis Manager"]
        n22["Analyst Agent"]
        n21 --> n22
    end

    %% Subgraph 3: Reporting
    subgraph s3["Reporting"]
        n33["Reporting Manager"]
        n34["Report Writing Agent"]
        n33 --> n34
    end

    %% Subgraph 4: Some Other Stuff
    subgraph s4["Some Other Stuff"]
        n41["Foo Manager"]
        n42["Bar Agent"]
        n43["Baz Agent"]
        n41 --> n42 & n43
    end

    %% Main flow
    n1 --> n2
    n2 --> s1 & s2 & s3 & s4
    s3 --> n3

All very neat, but the audience might be forgiven for asking what exactly is agentic about this, beyond relabelling the subprocesses of a run-of-the-mill RPA workflow as ‘agents’. And the audience is, this once, perfectly right. This is agentic AI in its most limited sense – and that limited sense is in many ways a result of the semantics we chose to adopt to reason about agents. For while we call it agentic AI, it’s not, actually, the agents that matter: it’s how they are structured. It is this governed agentic connectome, which I have come to call the Agora, that holds the power of agentic AI – and which is almost universally neglected.

Beyond agents, to the Agora

This narrow perspective is essentially a rebranding of what any decently designed application does – self-contained pieces of code passing information to one another – embellished with the agentic buzzword du jour, rendering what we have all been doing for the last few decades ever so much more VC-friendly. It misses the true key to the power of agentic systems. In a previous post, I reflected on the need to adopt an ecosystem view of agents – to consider their strength in creating complexity through interconnectedness. At the moment, our epistemic perspective on agents is intrinsically tied to, and defined by, the what, i.e. the agents themselves. Much of the time, it fails to take account of what matters vastly more, namely how those agents relate to each other – the how of agents. Just as we understand that the power of the human brain derives not from a bunch of neurons in one place but from their interconnectedness, we need to come to understand that agents are the least interesting part of agentic AI. In the end, in any complex system, it’s the connections that matter more than what is doing the connecting.

What should be agentic AI’s focus, then, is the space in which those agents can interact. Few of these neat hierarchical frameworks that are now touted as ‘agentic’ on LinkedIn envisage any meaningful interaction beyond manager agents bossing around single-functionality executor agents. This fits very well with existing software development paradigms, but has a hard limitation: the complexity of the resulting system will reach just as far as the developer has had time and energy to wire up various components. Even if it’s a rat’s nest of agent spaghetti, this complexity will be limited in at least two ways. It will, for one, be limited by the static, pre-defined nature of the framework: what is once defined remains set in stone. If no connections are manually made a priori, processes and agents live separate lives. More concerning, however, is the epistemic limitation: if we have to a priori define the agentic structure, we are stuck with the known knowns and perhaps the known unknowns. We are trying to tie reality to Procrustes’ bed, except it’s us who will end up a foot short in the end.

The alternative focus, then, should be on creating agentic frameworks centred on a governed space where agents can engage with each other – the space I chose to call the Agora, in analogy with the city square of ancient Greece where merchants, artisans, philosophers, politicians and citizens got to interact and form connections. The Parthenon may have been the most glorious structure of Athens, the Pnyx might have been the seat of the Assembly’s power, the Areopagus might have been where matters of life and death were decided – but it was its agora that made Athens great.1 The Agora of agentic AI holds the same promise: to act as a place of free interaction, within governed bounds, for our agents, unlocking the true power of the agentic perspective: emergence.

1 To the point that people enjoyed the latter enough to forgo participation in the former. Eventually, a bunch of slaves would have to roam the agora every time the Assembly was in session, carrying ropes dipped in red dye: by staining the garments of those who preferred the agora to the Assembly, the dye marked them out and made them liable to punishment.

Constructing the Agora

There is, in fact, relatively little new in the concept of tackling complexity through self-organising emergence. Consider neural networks: what lends artificial neural networks their awesome power is that instead of having to manually code stacks of filter banks, we use – typically – backpropagation to condition a large number of highly connected filters to minimise a loss function (i.e. to make the resulting model more accurate). Nobody would propose to manually define each filter in a neural network a priori: why, then, are we still talking about deterministically defined hierarchies and flows of agents instead of allowing agents to organise themselves and control that process through some outcome metric?
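
To make the analogy concrete, here is a minimal sketch – in PyTorch, assumed here purely for illustration – of that contrast: the first filter is fixed a priori by the developer, while the second is conditioned by backpropagation against a loss.

import torch
import torch.nn as nn

# The 'manual' approach: a hand-coded Sobel edge filter, fixed a priori by
# the developer -- the moral equivalent of a hard-wired agent hierarchy.
sobel = torch.tensor([[[[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]]])

# The self-organising approach: a filter of the same shape whose weights are
# free parameters, shaped by the loss rather than by the developer.
learned = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
optimiser = torch.optim.SGD(learned.parameters(), lr=0.01)

x = torch.randn(8, 1, 28, 28)        # a toy batch of inputs
target = torch.zeros(8, 1, 26, 26)   # a toy objective

loss = nn.functional.mse_loss(learned(x), target)
loss.backward()
optimiser.step()                     # the filter organises itself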

One aspect of this is that the Agora is more than a collection of agents idly milling around. Crucially, we need to provide three key elements (a sketch of how they might fit together follows the list):

  • A discovery framework: agents must be able to discover other agents, and what they can do, so as to identify peers they may recruit to assist them in their goal – typically via a registry where agents ‘enroll’ their profiles and which other agents can then query.
  • An interaction framework: agents must be able to communicate with each other, which requires both a message-passing standard (i.e. a minimum interface of how one agent may programmatically call another), and a suitable implementation (i.e. the message broker service that implements this standard).
  • A governance framework: the governance of the Agora rests on the fact that not every agent may register itself in the agent registry. Who may, and who may not, participate in the Agora determines and governs the overall process. Equally, the fact that we do not want our agents’ interactions to be entirely deterministic does not mean we want them to be entirely ungoverned. Various policies can be used to condition where connections can, and cannot, be made: some agents may be barred from creating certain direct connections – for instance, it should be possible to specify that no agent may return data directly to the user without passing it through a guardrail agent. The agora was a place of free interaction, but not of lawlessness – the same goes for the Agora of our AI agents.
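
To make these three elements concrete, here is a minimal sketch of how they might fit together. Every name in it – AgentProfile, AgoraRegistry, MessageBroker, the trusted flag, the barred route – is hypothetical: an illustration of the shape such a framework might take, not any existing implementation.

from dataclasses import dataclass


@dataclass
class AgentProfile:
    """What an agent 'enrolls' in the registry: its name and capabilities."""
    name: str
    capabilities: set
    trusted: bool = False


class AgoraRegistry:
    """Discovery and governance: who may enroll, and who can be found."""

    def __init__(self):
        self._agents = {}

    def enroll(self, profile):
        # Governance at the gate: only trusted agents may enter the Agora.
        if not profile.trusted:
            raise PermissionError(f"{profile.name} may not enter the Agora")
        self._agents[profile.name] = profile

    def discover(self, capability):
        # Discovery: every enrolled agent offering the requested capability.
        return [p for p in self._agents.values()
                if capability in p.capabilities]


class MessageBroker:
    """Interaction under policy: a minimal message-passing interface."""

    # A connection policy: no returning data to the user directly --
    # it has to pass through the guardrail agent first.
    BARRED_ROUTES = {("analyst", "user")}

    def send(self, sender, recipient, payload):
        if (sender, recipient) in self.BARRED_ROUTES:
            raise PermissionError(
                f"route {sender} -> {recipient} is barred by policy")
        # A real broker would dispatch to the recipient's endpoint; here we
        # merely return the envelope to show the minimum interface.
        return {"from": sender, "to": recipient, "body": payload}


registry = AgoraRegistry()
registry.enroll(AgentProfile("analyst", {"analysis"}, trusted=True))
registry.enroll(AgentProfile("guardrail", {"moderation"}, trusted=True))

broker = MessageBroker()
print(registry.discover("analysis"))                           # discovery
print(broker.send("analyst", "guardrail", {"text": "draft"}))  # permitted
# broker.send("analyst", "user", {"text": "raw"})              # raises: barred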

The Agora is not an ‘enhancement’ of agentic AI – it is what agentic AI is, or at least ought to be. It is what allows the greatest strength, i.e. self-organised emergence, of AI agents to unfold in a governed, controlled domain. And perhaps quite perplexingly, it is probably going to be easier to implement than most deterministic agentic structures. Certainly it is going to be more economical to allow agents to reason through how to solve their problems and discover the resources they need within the Agora, recruiting them as needed and releasing them once done, than having to think through the process a priori. A solid agentic framework can accommodate the fact that the world is complex, and organise itself to cater for the unexpected (within, of course, its means – that is, within what agents are available to the Agora).
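
As a toy illustration of that recruit-and-release pattern – the pool, the capability key and the agent names below are all made up for the purpose:

from contextlib import contextmanager

# A hypothetical pool of idle agents in the Agora, keyed by capability.
available = {"analysis": ["analyst-1", "analyst-2"]}

@contextmanager
def recruit(capability):
    """Recruit an agent for a single task, releasing it back when done."""
    agent = available[capability].pop()       # claim an idle agent
    try:
        yield agent
    finally:
        available[capability].append(agent)   # release it back to the Agora

with recruit("analysis") as analyst:
    print(f"{analyst} is on the task")
# analyst-2 is back in the pool here; nothing was wired up a priori.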

The Agoric Shift

This, then, is where I personally see our next challenge – both from an epistemic-conceptual perspective, which will call for us to rethink agentic AI in a way that perhaps focuses less on agents and more on their interconnectedness (and ways to facilitate it), and from an engineering perspective, which will require us to implement the tools and structures it will take to make this interconnectedness happen. Neither challenge is trivial. There is a pervasive trend of simply taking deterministic RPA-like processes and workflows, renaming them agents and watching the money roll in. The conceptual challenge, then, is to illuminate what agentic AI properly so called brings to the table: the promise of emergence.

From a technical perspective, there is as yet no universal way for agents to interact. Anthropic’s Model Context Protocol is a valiant attempt at beginning to set some standards in that regard, but the reality on the ground is that most agent implementations have relied on exaptation – and for inter-process communication in the internet era, that means REST for the most part. This may support deterministic designs with modest needs for interaction, but the Agora has need of other structures, too, such as an agent registry. This raises a wealth of coordination problems that need to be tackled before we can let our agents go to (the) town (square).
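
For the sake of argument, here is one hypothetical shape such a minimum interface might take. Every field in the envelope below is an assumption made for illustration – it reproduces no existing standard, the Model Context Protocol included – and the endpoint URL is invented.

import json
import uuid
from datetime import datetime, timezone

def make_envelope(sender, recipient, intent, body):
    """One hypothetical shape a minimum message interface might take."""
    return {
        "id": str(uuid.uuid4()),                       # tracing, idempotency
        "ts": datetime.now(timezone.utc).isoformat(),  # timestamp
        "from": sender,
        "to": recipient,
        "intent": intent,                              # what is being asked
        "body": body,                                  # the payload proper
    }

envelope = make_envelope("analyst", "report-writer", "summarise",
                         {"text": "draft findings"})
print(json.dumps(envelope, indent=2))

# Exapted onto REST, this might simply be POSTed to the recipient's inbox:
#   requests.post("https://agora.example/agents/report-writer/inbox",
#                 json=envelope)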

Yet this shift is where we unlock the power of modern AI. In 2023, I predicted the rise of agentic systems as a way to unleash the potential of LLMs by making them interact with each other in various roles. The Agoric Shift is the consummation of this idea: agentic systems where such interactions arise not from predefined workflows and patterns but from self-organising assemblies of agents2 – the point where we finally stop trying to painstakingly orchestrate every step of our agents’ interactions and instead build a vibrant Agora for them to roam, collaborate and perhaps even surprise us.

2 Which, by the way, may include humans. There’s no reason why we shouldn’t conceive of ‘humans in the loop’ not as a superordinate stage that comes after agentic AI has done its part, but simply as another agent.

Note: These are my personal (and somewhat tongue-in-cheek) views, and may not reflect the views of any organisation, company or board I am associated with, in particular HCLTech or HCL America Inc. My day-to-day consulting practice is complex, tailored to client needs and informed by a range of viewpoints and contributors. Click here for a full disclaimer.

Citation

BibTeX citation:
@misc{csefalvay2025,
  author = {{Chris von Csefalvay}},
  title = {After Agents, Part 2 -\/- {Agents} and the {Agora}},
  date = {2025-02-10},
  url = {https://chrisvoncsefalvay.com/posts/agents-agora/},
  langid = {en-GB}
}
For attribution, please cite this work as:
Chris von Csefalvay. 2025. “After Agents, Part 2 -- Agents and the Agora.” https://chrisvoncsefalvay.com/posts/agents-agora/.