After agents

LLMs · AI · agents

2024 was the year of agents. 2025 will be about figuring out how we orchestrate their interactions. Welcome to the year of ecosystems.

Author: Chris von Csefalvay
Published: 4 January 2025

When he first began his excavations at what is today Hisarlik in modern-day Türkiye, Heinrich Schliemann set out to find a single city – the city of Homer’s Iliad, a city many felt lay in the realm of fiction rather than on any map he could lay his hands on. By the time the excavations were over, Schliemann had found not one but nine cities, all built on top of each other.1 In that, he found something relatively common – cities built on top of cities, each turning the last one into the foundation of the next.

1 Some of which he would blow up. Schliemann’s excavations were peak cowboy archaeology.

Technology is not much different. What Newton described as ‘standing on the shoulders of giants’ was an astute reflection of this fact. When I surveyed the landscape of LLMs in late 2023, I saw – and I was not alone in doing so! – that there was more to the practical application of LLMs than their most ubiquitous ‘low-hanging fruit’ use case at the time – that is, chatbots and conversational interfaces. That notion, of course, became the agentic revolution that emerged as the most talked-about topic of 2024.

If your hot take for 2025 is that ‘agentic AI is going to dominate’, however, you have missed the train. The agentic revolution is over, done and accomplished. In my wild-ass guessing of what 2025 may bring, I tried to reflect, as a leading theme, on what I believe comes after agents – namely, systems of interaction. In this post, I hope to expand on that notion a little.

Agents are over (long live agents!)

Looking back at 2024, I have to laugh at how quickly the agentic revolution went from being seen as wild-eyed speculation to something almost embarrassingly obvious. I spent a good chunk of late 2023 explaining to sceptical audiences why autonomous AI agents were not just chatbots that could call APIs. I would spend most of the next year fielding calls from the same audiences to help develop a strategy for agentic AI. The whiplash-inducing speed of this transformation was stunning even by AI’s standards, where we seem to have moved into a 24-minute, rather than 24-hour, news cycle. What started as hacky demos and GitHub repos with more stars than working features evolved into agents casually writing production code, running research pipelines and tying all of this together into operational workflows that actually made sense.

The real shift was not just in what these agents could do – it was in how they subtly changed our relationship with AI systems in the process. We went from playing 20 Questions with chatbots to having persistent virtual assistants that could actually maintain context, manage complex tasks and make reasonable decisions without needing to be guided through every minor choice. For someone who spent years working with the digital equivalent of goldfish, this was heady stuff.

The thing about agents is that their true power is not in what they can do alone – it is in how they work together. This is not just some hand-wavy My Little Pony-esque ‘collaboration is magic’ adage. It is fundamental to the nature of what an agent is, and why we even bother with them. An agent that can write code is useful, but an agent that can write code while collaborating with another that handles testing, while yet another manages deployment and a fourth monitors performance, is truly transformative. The industry’s current obsession with making individual agents (or, even worse, just individual foundation models) more powerful is like trying to make a better dish by adding more of a particular ingredient.2 The magic of a perfect meal lies not in how many spoonfuls of exotic ingredients like fennel pollen and saffron threads you can dump into your pan, but in the fine balance between the things that end up on the plate – even if it is just arugula, shaved parmesan and balsamic glaze (see recipe in sidebar). In short, the magic is in the complex web of interactions between the ingredients, the emergent phenomena that arise from their coexistence.

We see this pattern play out all the time in technology: the apex of maturity always involves interaction. Consider, most ubiquitously, the web: from individual sites, we evolved to a more semantic web and, eventually, a knowledge and information ecosystem driven through APIs. We are at the cusp of that third phase with AI agents, and anyone still fixated solely on individual agent capabilities is missing the plot entirely.3

2 More AI folks should spend more time in the kitchen. This is my hot take for 2025.

3 This is even more so as agents themselves are becoming fungible. An agent, in a properly designed ecosystem, is entirely replaceable: it implements a contract or protocol, and beyond that it does not much matter which agent does the work. And if that sounds a little like the Liskov substitution principle, that is not by accident.
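To make the point in footnote 3 concrete, here is a minimal sketch in Python of what ‘an agent implements a contract’ might look like. Every name here (ReviewAgent, HostedReviewAgent, LocalReviewAgent, review_diff) is a hypothetical illustration, not any particular framework’s API; the point is simply that the orchestrating code depends on the contract, not on the agent behind it.

from typing import Protocol

class ReviewAgent(Protocol):
    """The contract: anything that can review a diff and return findings."""
    def review_diff(self, diff: str) -> list[str]:
        ...

class HostedReviewAgent:
    """Hypothetical agent backed by a remote model endpoint (stubbed here)."""
    def review_diff(self, diff: str) -> list[str]:
        return [f"(remote) reviewed {len(diff.splitlines())} changed lines"]

class LocalReviewAgent:
    """Hypothetical agent backed by a small local model (stubbed here)."""
    def review_diff(self, diff: str) -> list[str]:
        return [f"(local) reviewed {len(diff.splitlines())} changed lines"]

def run_review(agent: ReviewAgent, diff: str) -> None:
    # The caller neither knows nor cares which implementation it was handed;
    # that is the Liskov-style substitutability the footnote alludes to.
    for finding in agent.review_diff(diff):
        print(finding)

if __name__ == "__main__":
    diff = "-old line\n+new line"
    run_review(HostedReviewAgent(), diff)
    run_review(LocalReviewAgent(), diff)

Swap either implementation out for a better one tomorrow and run_review does not change – which is exactly why the individual agent matters less than the contract it honours.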


Recipe: The simplest salad you’ll ever love

  • A bunch of baby arugula
  • 1/4 cup of your favourite nuts
  • freshly ground salt and pepper
  • Parmigiano Reggiano, lots
  • 1 clove garlic
  • 1/4 cup lemon juice, freshly squeezed
  • 1/2tsp honey
  • 1/4 cup extra virgin olive oil
  • 1tsp Dijon, the smoother the better
  • 1/2tsp fresh thyme
  • optional: a good balsamic reduction

Mix the lemon juice, minced garlic, Dijon and honey, and salt & pepper to your heart’s content. While whisking, drizzle in the olive oil. Add the thyme and taste – add some more olive oil if it’s too acidic for your taste. In a separate bowl, add the arugula, and toss it with the vinaigrette you just made. Mix, using your hands – plastic or wooden mixing tools break the arugula leaves, which renders the whole thing bitter. Plate, then cover with the nuts. Using a coarse Microplane grater, shave enough Parmigiano Reggiano to make the whole thing a happy mixture. Enjoy on its own or as a light side.


We are already seeing the first tentative steps from multi-agent systems towards ecosystems of agents, even if most have not recognised them as such. Enterprise agent marketplaces will be 2025’s hot commodities, drawing on past experience with data marketplaces and exchanges. Conceptually, however, most still treat agents as distinct pieces of software rather than as collaborators in an ecosystem.

The real pioneering work for the coming year(s) will be in developing the frameworks and protocols at the edges: agent orchestration systems that go beyond simple API calling, trust negotiation protocols that let agents establish their capabilities and limitations, and collaborative frameworks that enable genuine multi-agent workflows. Technical implementations of these remain scarce. Working out an interaction protocol is not glamorous (trust me – I speak from personal experience), and there will be few headlines and fewer medals in working out how this sudden flood of AI agents is going to interoperate. And yet, this is the manifest destiny of agentic AI. Of all targeted agentic AI spend in 2025, marketplaces and interoperability orchestrators will be the best dollars spent, bar none.
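To give a flavour of the unglamorous work involved, here is a minimal sketch of what the very first layer of such an interaction protocol might look like: a standardised message envelope that any two agents can exchange. All field names here are assumptions made for illustration, not a proposed or existing standard.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class AgentMessage:
    """A hypothetical interoperability envelope passed between agents."""
    sender: str                  # identifier of the originating agent
    recipient: str               # identifier of the target agent
    intent: str                  # what the sender wants done, e.g. "run_tests"
    payload: dict                # task-specific content
    constraints: dict = field(default_factory=dict)   # e.g. {"timeout_s": 600}
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_wire(self) -> str:
        """Serialise to a transport-agnostic JSON string."""
        return json.dumps(asdict(self))

# Example: a (hypothetical) coding agent asking a test agent to run a suite.
message = AgentMessage(
    sender="agent://coder-01",
    recipient="agent://tester-07",
    intent="run_tests",
    payload={"repo": "example/project", "ref": "main"},
    constraints={"timeout_s": 600, "no_network": True},
)
print(message.to_wire())

The envelope itself is trivial; the hard, headline-free work is getting a flood of heterogeneous agents to agree on what goes into the intent and constraints fields and what they are obliged to do with them.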

From agents to ecosystems

What is an ecosystem? In the natural world, we have relatively little trouble distinguishing between a species, an individual, a population and an ecosystem. In short, an ecosystem has three distinguishing features:

  1. Diversity: ecosystems consist of multiple species that all play their role, quite similarly to agents in a well-architected system.
  2. Interactions/rhizomality: ecosystems become what they are from the interactions between their participants, not the mere assemblage of those participants. In that sense, a good meal is an ‘ecosystem’ of sorts, where the acids balance out the fats and so on. In the more functional context we are dealing with here, however, what makes a bunch of agents an ecosystem is their ability to perform a higher function in complementarity.
  3. Interdependence: ecosystems produce their benefits through these interactions, which collectively amount to more than the sum of the parts.

In an agentic AI system, our definition can be largely similar: an agentic AI ecosystem is a bunch of agents with different functionalities that interact and thereby create value. An ecosystem implies more than just the ability to pass messages between agents or chain them together in sequence. It requires the emergence of specialisation, of niches, of ways to establish trust and capabilities, and – crucially – ways to negotiate the terms of interaction. Unlike today’s relatively deterministic structures, such an ecosystem could eventually be self-governing, more like a bustling market bazaar in which agents discover each other’s capabilities, negotiate terms of engagement, establish trust relationships and even form longer-term collaborative partnerships. This is a fundamental reimagining of how artificial intelligence systems interact with each other.
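As a toy illustration of that bazaar dynamic, the sketch below shows agents advertising capabilities to a shared registry and a requester discovering them and negotiating simple terms of engagement (here, a price ceiling and a latency bound). Every name and number is a hypothetical placeholder.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CapabilityOffer:
    agent_id: str
    capability: str         # e.g. "summarise", "translate"
    price_per_call: float   # hypothetical unit of negotiation
    max_latency_s: float

class Registry:
    """A toy capability registry: the 'bazaar' where agents find each other."""
    def __init__(self) -> None:
        self._offers: list[CapabilityOffer] = []

    def advertise(self, offer: CapabilityOffer) -> None:
        self._offers.append(offer)

    def discover(self, capability: str) -> list[CapabilityOffer]:
        return [o for o in self._offers if o.capability == capability]

def negotiate(registry: Registry, capability: str,
              max_price: float, max_latency_s: float) -> Optional[CapabilityOffer]:
    """Pick the cheapest offer that satisfies the requester's terms, if any."""
    acceptable = [
        o for o in registry.discover(capability)
        if o.price_per_call <= max_price and o.max_latency_s <= max_latency_s
    ]
    return min(acceptable, key=lambda o: o.price_per_call) if acceptable else None

registry = Registry()
registry.advertise(CapabilityOffer("agent://summariser-a", "summarise", 0.02, 5.0))
registry.advertise(CapabilityOffer("agent://summariser-b", "summarise", 0.01, 30.0))

# A latency-sensitive requester ends up with a different partner
# than a purely price-sensitive one would.
print(negotiate(registry, "summarise", max_price=0.05, max_latency_s=10.0))

A real ecosystem would layer trust establishment, verification and longer-term relationships on top of this, but even the toy version shows where the value sits: in the matchmaking, not in any individual agent.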

The emergence of enterprise agent marketplaces will be one of the defining developments of 2025, but most organisations are still thinking about them wrong. The knee-jerk reaction is to build something akin to an app store: a catalogue of pre-built agents with rating systems and standard pricing. That is a useful (and often indispensable) first step, but it could be so much more. The real value of these marketplaces will not be in the agents themselves, but in the curation and verification mechanisms they enable. Think less ‘app store’ and more ‘commodity futures exchange’: what matters is not just what is being traded, but the rules of engagement, the verification of capabilities, the establishment of trust, and the standardisation of interfaces. We will need ways to verify that agents can actually do what they claim, that they operate within defined constraints and that they can be trusted with sensitive data or critical operations. This is where enterprise agent marketplaces will differentiate themselves from consumer platforms – through robust governance frameworks that make agent deployment actually feasible in regulated environments.
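A crude sketch of what such a verification mechanism might look like before an agent is admitted to a marketplace: run its claimed capability against a small suite of cases with known properties, and only list it if every check passes. The suite, the agents and the admit_to_marketplace function are all hypothetical.

from typing import Callable

# Each check exercises the claimed capability on a known input and
# tests a property of the output, rather than trusting the claim itself.
VERIFICATION_SUITE = [
    ("What is 2 + 2?", lambda out: "4" in out),
    ("What is the capital of France?", lambda out: "paris" in out.lower()),
]

def verify_claim(agent_fn: Callable[[str], str]) -> bool:
    """Return True only if the agent passes every check in the suite."""
    return all(check(agent_fn(prompt)) for prompt, check in VERIFICATION_SUITE)

def admit_to_marketplace(agent_id: str, agent_fn: Callable[[str], str]) -> None:
    if verify_claim(agent_fn):
        print(f"{agent_id}: capability verified, listed.")
    else:
        print(f"{agent_id}: claim not verified, rejected.")

# A toy agent that happens to answer both checks correctly...
def careful_agent(prompt: str) -> str:
    canned = {"What is 2 + 2?": "The answer is 4.",
              "What is the capital of France?": "Paris."}
    return canned.get(prompt, "I don't know.")

# ...and one that claims everything and demonstrates nothing.
def overconfident_agent(prompt: str) -> str:
    return "Absolutely, consider it done."

admit_to_marketplace("agent://careful-01", careful_agent)
admit_to_marketplace("agent://overconfident-02", overconfident_agent)

In a regulated environment, the interesting part is everything around this gate: who maintains the suite, how constraint compliance is audited over time, and what happens when a listed agent drifts.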

The real challenge – and opportunity – in building these ecosystems lies in standardising the right things while leaving room for innovation. Over-standardisation kills ecosystems as surely as no standardisation at all. The emerging protocols for agent interaction will need to thread this needle. They will have to standardise the essential patterns of trust establishment, capability discovery and resource negotiation, while remaining flexible enough to accommodate new forms of agent collaboration we have not even imagined yet. This is where the enterprise agent marketplace builders of 2025 will either make their fortunes or waste their investors’ money. The winners will be those who create the right balance of structure and flexibility – the ones who understand that they are not building an app store so much as cultivating an ecosystem.

Trust falls (and rises)

The hardest part of building these agentic ecosystems is not the technical implementation (in fact, we arguably have most of that already in place, mutatis mutandis, for data and other asset marketplaces) but the trust architecture. In particular, where agents choose and commission other agents to perform tasks as delegates or helpers, we need protocols that spell out the powers of delegation and the flows of authority in such an architecture. The frameworks we are building now are laughably primitive compared to what we will need, mostly amounting to simple API keys and rate limits. It is like trying to build a modern financial system with nothing but paper IOUs and handshake agreements.

The chain of trust problem in agent delegation is fascinating precisely because it mirrors and yet fundamentally differs from how we handle human organisational hierarchies. When Agent A delegates a task to Agent B, which in turn needs to commission Agent C, we are not just passing around access tokens – we are creating a chain of responsibility that needs to be both traceable and constrained. Each link in this chain needs to carry not just the authority to act, but also the constraints and audit requirements of all previous links. An agent operating as a fourth-level delegate should still be bound by the original constraints set at the root of the delegation tree, even if it has no direct knowledge of them. This is not just about security – it is about maintaining coherent behaviour across increasingly complex chains of interaction. The financial sector learned this lesson the hard way with automated trading systems: without clear chains of responsibility and well-defined constraint propagation, you end up with cascading failures that no single participant can explain or control.
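One way to make constraint propagation concrete: each link in the delegation chain carries the intersection of its own constraints with everything inherited from above, so that a delegate several levels down is still bound by the root’s limits even without direct knowledge of them. A minimal sketch, with hypothetical constraint names and a deliberately simplistic merging rule:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Delegation:
    """One link in the chain of trust from a delegator to a delegate."""
    delegator: str
    delegate: str
    constraints: dict
    parent: Optional["Delegation"] = None

    def effective_constraints(self) -> dict:
        """Constraints accumulated from the root down; a child can only narrow them."""
        inherited = self.parent.effective_constraints() if self.parent else {}
        merged = dict(inherited)
        for key, value in self.constraints.items():
            if key in merged and isinstance(value, (int, float)):
                # Numeric limits can only be tightened, never loosened.
                merged[key] = min(merged[key], value)
            elif key not in merged:
                merged[key] = value
            # Otherwise, an inherited non-numeric constraint silently wins.
        return merged

# The root grants Agent A a budget and forbids external network calls.
root = Delegation("user://alice", "agent://a",
                  {"max_spend_usd": 10.0, "allow_network": False})
# A delegates to B with a tighter budget; B delegates to C without restating anything.
link_b = Delegation("agent://a", "agent://b", {"max_spend_usd": 2.0}, parent=root)
link_c = Delegation("agent://b", "agent://c", {}, parent=link_b)

# C has no direct knowledge of Alice's terms, yet remains bound by them.
print(link_c.effective_constraints())
# -> {'max_spend_usd': 2.0, 'allow_network': False}

A production-grade version would sign each link, attach audit metadata and resolve conflicts far more carefully; the sketch only illustrates the shape of the problem: authority flows down, constraints accumulate, and nothing downstream can widen them.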

The next year will be defined not by breakthroughs in individual agent capabilities, but by our success or failure in building these frameworks for trusted collaboration. The winners will not be those who build the most powerful agents, but those who crack the code of helping agents work together effectively and safely. This is not just about technology – it is about understanding how to create systems of trust that can scale with the complexity of agent interactions.

The agentic revolution of 2024 was just the overture. The real symphony begins when we figure out how to let the players actually work together in concert. Those who are still focused solely on building better individual agents are composing for soloists in an age that demands orchestras.

The future belongs to those who can conduct.


Note: These are my personal (and somewhat tongue-in-cheek) views, and may not reflect the views of any organisation, company or board I am associated with, in particular HCLTech or HCL America Inc. My day-to-day consulting practice is complex, tailored to client needs and informed by a range of viewpoints and contributors. Click here for a full disclaimer.

Citation

BibTeX citation:
@misc{csefalvay2025,
  author = {{Chris von Csefalvay}},
  title = {After Agents},
  date = {2025-01-04},
  url = {https://chrisvoncsefalvay.com/posts/after-agents/},
  langid = {en-GB}
}
For attribution, please cite this work as:
Chris von Csefalvay. 2025. “After Agents.” https://chrisvoncsefalvay.com/posts/after-agents/.