What Is Agentic Computing?

Agents are all the rage these days. Open-source projects like Auto-GPT, corporate products like Microsoft’s Copilot Agents, Sam Altman declaring agents the inevitable next step: the tech world is abuzz with the potential of autonomous agents.

Some people are quick to conjure up dreams of a future in which the majority of a company's employees are autonomous agents collaborating with humans. Others argue that this unduly anthropomorphizes agents, and that we're more likely to end up with a few “super agents” that are good at many things. Still others believe agents are just a stepping stone to AGI and will be forgotten once we get there.

What are agents, anyway?

So before we talk about agentic computing, let’s talk about what an agent actually is: 

An autonomous agent is any piece of software capable of taking a high-level task and autonomously accomplishing it by breaking it down into smaller steps, making decisions based on available data and context, and interacting with other computer systems to execute those steps.

This definition avoids any anthropomorphization and dramatic speculation about the future of AI. With this definition, it’s obvious that agents are here to stay. What’s slowly sinking in is the realization that this approach to getting tasks done is a radically new paradigm, and we have a lot of work to do to prepare existing software for it.
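
To make the definition concrete, the sketch below shows the loop most agents run: ask a model to plan the next step, execute that step against some external system, and feed the result back until the task is done. The `llm` and `tools` objects and their methods are hypothetical placeholders, not any particular framework.

```python
# Minimal agent loop sketch: plan a step, execute it, feed the outcome back.
# `llm.plan_next_step` and `tools` are hypothetical placeholders, not a real framework.

def run_agent(task: str, llm, tools, max_steps: int = 10) -> str:
    history = []  # observations gathered so far
    for _ in range(max_steps):
        # 1. Break the task down: ask the model for the next step given current context.
        step = llm.plan_next_step(task=task, history=history)
        if step.is_done:
            return step.final_answer
        # 2. Interact with another system: call the chosen tool or API.
        result = tools[step.tool_name](**step.arguments)
        # 3. Make the outcome available for the next decision.
        history.append({"step": step, "result": result})
    raise RuntimeError("Task not completed within the step budget")
```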

Towards agentic computing

In 1984, Apple introduced the Macintosh, the first commercially successful personal computer with a graphical user interface. That ushered in the personal computing revolution, and over the next two decades, software would be primarily built around a GUI that a single human could interact with.

Almost twenty years later, the internet was in full bloom. Salesforce launched in 1999 as one of the first successful SaaS products, and with the launch of S3 and EC2 in 2006, AWS made it possible to deploy and scale web applications without hosting your own infrastructure. Increasingly, these applications needed to communicate not just with the user but with each other. Software was written with APIs as its main mode of data input and output.

Fast forward to now: Large Language Models are rapidly improving and empowering software to make its own decisions about what other software to interact with and how. This is the dawn of agentic computing. Alas, we’re missing an important piece of the puzzle:

We’ve been so focused on building agents and ever more powerful foundation models that we haven’t put much thought into the environment agents need to be successful: the way they interact with existing software.

You might ask: isn’t it the job of agents to figure that out themselves? The core tenet of agentic computing is enabling software to autonomously execute complex tasks by breaking them into manageable steps and interacting with other systems. Agents focus on the reasoning and planning, yet the absence of an agent-native interface to existing software presents a huge hurdle for the interaction part. In particular, agents need:

  1. Contextual understanding. To be truly helpful, agents need to understand the world we live in: your world. Your work, who you work with, who’s working on what, and so on. The good news is that most of that data already exists and is just an API call away. The bad news is that APIs are a terrible interface for retrieving contextual data. Imagine an AI lawyer that needs to defend an IP case by finding all documentation related to an idea: it can’t simply iterate through tens of thousands of pages of email from everyone at the company until it finds a good match. Most companies build highly specific RAG (retrieval-augmented generation) pipelines for this purpose, but truly agent-friendly software would support this kind of use out of the box (a minimal retrieval sketch follows this list).
  2. Unified permission and identity management. As ever more agents and AI apps access this data, companies and end users alike need a trusted way to grant permissions on it, to know what’s being shared, and to see how it is being used. It’s already hard enough to keep track of which legacy apps can access your data; with agents making autonomous decisions about what to do with that data, we need understandable paper trails.
  3. Task execution protocols. Current RESTful APIs are stateless by nature and lack continuity, which makes it hard for agents to predict the outcome of their actions, a crucial building block for agentic reasoning. There is also typically no easy way to reverse operations or to “dry-run” them, which would let an agent check whether it is using an API correctly. The interface for agentic computing needs to be modular, explainable, chainable, and reversible; a rough sketch follows this list, and we’ll go deeper in another blog post.
  4. Agent-native data storage. Agents are beginning to “learn” and form memories, but a core challenge is the lack of a way to keep those memories grounded as the world around them changes and new information becomes available. We don’t want yet another legacy-style API for storing and retrieving agents’ memories (the last sketch after this list shows one way to keep a memory grounded).
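
Here is the retrieval sketch referenced in the first point. It assumes a hypothetical `embed` function that turns text into a vector; real RAG pipelines add chunking, access control, and re-ranking on top of this, but the core idea is to surface the few most relevant documents instead of iterating over everything.

```python
# Minimal contextual-retrieval sketch: score each document against the query by
# cosine similarity of their embeddings and return the top matches.
# `embed` is a hypothetical embedding function (text -> list of floats).

from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve_context(query: str, documents: list[str], embed, top_k: int = 5) -> list[str]:
    query_vec = embed(query)
    scored = [(cosine(query_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```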
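
The execution-protocol point is easier to see in code. Below is one hypothetical shape an agent-facing operation could take: every operation can describe itself, be dry-run, be executed, and be undone, so an agent can preview an outcome before committing and reverse a step that turned out to be wrong. This is a sketch of a possible interface, not an existing standard.

```python
# Hypothetical agent-facing operation: explainable, dry-runnable, and reversible.
# A sketch of a possible interface, not an existing protocol or library.

from typing import Any, Protocol

class AgentOperation(Protocol):
    def describe(self) -> str:
        """Explain, in plain language, what the operation will do."""

    def dry_run(self) -> dict[str, Any]:
        """Predict the outcome without changing any state."""

    def execute(self) -> dict[str, Any]:
        """Perform the operation and return the actual outcome."""

    def undo(self) -> None:
        """Reverse the operation, restoring the previous state."""

def run_safely(op: AgentOperation, looks_right) -> dict[str, Any] | None:
    # The agent previews the predicted outcome, checks it against its plan,
    # and only then commits; an unexpected result can later be undone.
    predicted = op.dry_run()
    if looks_right(predicted):
        return op.execute()
    return None
```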
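
Finally, on agent-native storage: one way to keep a memory grounded is to store, next to the memory itself, where it came from and when it was last verified, so it can be re-checked or expired as the world changes. The record below is a hypothetical illustration, not a proposed schema.

```python
# Hypothetical grounded memory record: each memory carries its source and a
# freshness timestamp so it can be re-verified or expired instead of drifting.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MemoryRecord:
    content: str                # what the agent remembers
    source: str                 # where it came from (document ID, URL, conversation)
    recorded_at: datetime       # when the memory was formed
    last_verified_at: datetime  # when it was last checked against the source
    max_age: timedelta = timedelta(days=30)

    def is_stale(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now - self.last_verified_at > self.max_age
```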

Together, these four building blocks — context, security, execution, and storage — will create the interface layer for agentic computing. At Hyperspell, we aptly call this the Agentic Computing Interface, or ACI.

We believe the open web will make great leaps toward a universal ACI in the next few years. Unfortunately, most legacy software, especially in enterprise companies, will be slow to adapt, and that is a severe impediment to the adoption of AI and agents. Hyperspell exists to bridge the gap between legacy software and an agent-native environment, and to let new software be built AI-ready from the ground up.

Build faster. Build better.