How AI Protocols Will Expand Enterprise Boundaries

From Internet Wild West to Agent Interoperability

Picture this: It’s 1981. In a university computer lab, a researcher sits before a glowing green terminal, attempting to access data from another institution. After hours of reconfiguring settings and coding custom gateways, the connection fails—again. Across town, a government laboratory operates on an entirely different network standard, its valuable research effectively invisible to the academic world. Meanwhile, early commercial networks like CompuServe and The Source operate as digital “walled gardens,” each requiring unique terminals, commands, and protocols.

This was the fragmented reality of the internet’s early days: a vast but deeply siloed digital landscape where brilliant islands of innovation remained isolated by incompatible communication standards, limiting the technology’s revolutionary potential.

Then came the TCP/IP protocol suite, creating a universal language for networked systems. This technical breakthrough transformed our world, and that is not an overstatement: it united disparate networks into the global internet and enabled unprecedented connectivity, innovation, commerce, and the real-time global collaboration we take for granted today.

Today, we stand at a similar inflection point with AI agents. The need for standardized agent communication protocols is becoming increasingly apparent. The recent launch of Agentforce 3 demonstrates how native support for open standards like Model Context Protocol (MCP) enables plug-and-play connectivity across diverse enterprise systems. As my colleagues from our product organization recently highlighted in When Agents Speak the Same Language: The Rise of Agentic Interoperability, “Without a common framework for agents to discover, authenticate, and communicate with one another, the AI agent ecosystem becomes fragmented and siloed—missing the opportunity for richer, end-to-end automation.”

While their analysis provides an excellent business perspective and outlines foundational building blocks for agent interoperability, I will take a slightly different approach—exploring the technical evolution of these protocols from primitive to sophisticated systems through the lens of programming language development.

At this dawn of the agentic AI era, digital labor is beginning to make an outsized impact on businesses, but these agents operate in siloed environments, each platform using proprietary approaches to identity, communication, and trust verification. Just as TCP/IP united the early internet, we urgently need agent interoperability standards: a common framework enabling collaboration across organizational boundaries.

Organizations that help shape these emerging standards will gain tremendous competitive advantage, establishing enduring leadership in the AI economy.

The Evolution of Agent Protocols

The path toward agent interoperability will likely mirror two parallel evolutionary histories: internet protocols and programming languages. Both provide valuable insights into how agent communication standards will develop—from rudimentary instructions to sophisticated semantic interactions.

Phase 1: Basic Building Blocks

The Assembly Language Phase

The early internet relied on basic protocols for simple file transfers and text-based communication. Similarly, early programming required assembly language—working directly with a computer’s most fundamental instructions. Today’s agent protocols begin at this same primitive level: basic authentication mechanisms, simple message formats, and rudimentary command structures.

We see this pattern in current implementations, which focus primarily on API integrations between systems and basic agent identity verification. While functional, these approaches require significant custom development for each new integration—much like how early networked systems needed specialized gateways to communicate across protocol boundaries.
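To make this concrete, here is a minimal sketch of a phase-one integration in TypeScript. Everything here is hypothetical, including the field names and endpoint, but the pattern is representative: a shared API key standing in for identity, and an ad hoc message format that only this one partnership understands.

```typescript
// A hypothetical phase-one agent message: a static API key stands in for
// identity, and a free-form command that only this integration understands.
interface PrimitiveAgentMessage {
  apiKey: string;    // shared secret; no real identity verification
  command: string;   // ad hoc vocabulary, unique to this system pairing
  payload: string;   // unstructured data; the receiver must know the format
}

// Every new partner system needs its own bespoke sender like this one.
async function sendToPartnerSystem(msg: PrimitiveAgentMessage): Promise<void> {
  await fetch("https://partner.example.com/agent-endpoint", { // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(msg),
  });
}
```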

Phase 2: Meta-Level Instructions

The Protocol Stack Phase

As the internet matured, it developed layered protocol stacks—TCP/IP handling basic connectivity, while higher-level protocols managed specific functions like email (SMTP) or web browsing (HTTP). Programming similarly evolved from assembly to languages like C++ and Java, which let developers express complex logic at higher levels of abstraction, manage memory more safely, and fail gracefully.

Agent protocols are now entering this second phase, incorporating meta-level instructions: the ability to communicate about goals, constraints, and domains of expertise. This parallels the growing role of ontology in AI systems, a topic you have likely encountered already or soon will. An ontology provides a map of metadata relationships (the data about the data) that systems can draw on to make decisions. As Madonnalisa Chan from our experience design team explains in her recent exploration, ontologies create “a common vocabulary and related terms to describe types of information, which helps with natural language processing” and enable AI to answer questions.

Just as ontologies structure concepts through classes, properties, attributes, and logical axioms, agent protocols need standardized ways to describe capabilities and relationship types.
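To illustrate, here is a hypothetical ontology fragment expressed in TypeScript. The class names, properties, and the dependency axiom are invented for this example rather than drawn from any published ontology, but they show how classes, properties, and axioms can make capability descriptions machine-readable.

```typescript
// Hypothetical ontology fragment: classes, properties, and a dependency
// axiom, expressed as TypeScript types so the structure is machine-checkable.
type CapabilityClass = "Negotiation" | "DataRetrieval" | "Scheduling";

interface CapabilityDescription {
  class: CapabilityClass;       // which class of capability this is
  actsOn: string[];             // entity types the capability applies to
  requires: CapabilityClass[];  // axiom-like prerequisite relationship
}

// Example: negotiating a contract presupposes the ability to retrieve data.
const contractNegotiation: CapabilityDescription = {
  class: "Negotiation",
  actsOn: ["PurchaseOrder", "PricingAgreement"],
  requires: ["DataRetrieval"],
};
```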

This ontological approach is evident in our own work at Salesforce with our Metadata Framework, which provides the technical infrastructure for our deeply unified platform. This framework allows users and developers at every skill level to customize and extend Salesforce functionality using structured metadata. Organizing information this way ensures that agents can make decisions based on accurately labeled and connected data in machine-readable formats. It also enables our bold vision for agents: like building with Lego blocks, Salesforce provides the base, and others can build apps and agents on top without rewriting software from scratch or compromising security.

This “protocol about protocols” approach enables more sophisticated collaboration without requiring agents to understand each other’s internal workings completely. Salesforce has emerged as an industry leader in Enterprise AI with the development of the “Agent Cards” concept: standardized metadata that describes an agent’s capabilities, limitations, and appropriate use cases. Google’s product team adopted this concept in their A2A (Agent-to-Agent) specification, citing Agent Cards as the keystone for capability discovery and version negotiation. (You can read more about Agent Cards in our recent blog, When Agents Speak the Same Language.)
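For a sense of what such metadata looks like, here is an illustrative Agent Card in TypeScript, loosely modeled on the capability-discovery metadata the A2A specification describes. The exact field names and values are indicative, not normative.

```typescript
// Illustrative Agent Card, loosely modeled on the capability-discovery
// metadata in the A2A specification. Fields are indicative, not normative.
const agentCard = {
  name: "Acme Procurement Agent",
  description: "Negotiates purchase orders with approved suppliers.",
  url: "https://agents.acme.example/procurement", // hypothetical endpoint
  version: "1.2.0",
  capabilities: { streaming: true, pushNotifications: false },
  skills: [
    {
      id: "negotiate-pricing",
      name: "Pricing negotiation",
      description: "Negotiates unit pricing within pre-approved thresholds.",
      tags: ["procurement", "pricing"],
    },
  ],
};
```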

These theoretical frameworks became enterprise reality with Agentforce 3. Agentforce now includes a native MCP client, enabling agents to connect to any MCP-compliant server without custom code; think of it as a ‘USB-C for AI’. MCP operates at a different layer than A2A, focusing primarily on the interface between language models and their underlying resources rather than agent-to-agent communication.
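As a rough sketch of what MCP-style plug-and-play connectivity looks like in code, here is a minimal client built with the open-source MCP TypeScript SDK. This is a generic illustration, not Agentforce’s native client, and the server command and tool name are placeholders.

```typescript
// Minimal sketch of a generic MCP client using the open-source TypeScript SDK
// (@modelcontextprotocol/sdk). This is not Agentforce's native client; the
// server command and tool name below are placeholders.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch and connect to a hypothetical local MCP-compliant server.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"],
});
const client = new Client({ name: "demo-agent", version: "1.0.0" });
await client.connect(transport);

// Discover what the server offers, then invoke one tool. No bespoke
// integration code is needed beyond this handshake.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "lookup_order",              // hypothetical tool on the server
  arguments: { orderId: "A-1001" },
});
console.log(result);
```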

In addition, MuleSoft converts any API or integration into an agent-ready asset, complete with security policies, activity tracing, and traffic controls, empowering teams to orchestrate and govern multi-agent workflows.

Phase 3: Semantic Interactions

The ‘World Wide Web’ Phase

Phase 3 represents a major shift from previous protocol development: for the first time, we’re designing communication standards for entities that can reason, plan, and adapt independently rather than simply execute programmed instructions.

This challenge isn’t entirely new. In the 1940s, science fiction writer Isaac Asimov grappled with similar questions in his Three Laws of Robotics, establishing early ethical guidelines for artificial beings. But Asimov’s robots were fictional constructs designed to serve humans unquestioningly. Today’s AI agents are reasoning systems that must negotiate, collaborate, and make autonomous decisions across organizational boundaries—a reality that extends far beyond technical considerations into ethics, trust, legal compliance, and security.

Agent protocols at this level enable true semantic understanding between systems. Agents negotiate complex tasks, adapt communication based on context and experience, and form dynamic collaborative relationships that evolve over time. This goes far beyond standardizing data exchange—it’s creating the foundation for distributed intelligence that operates across company boundaries while maintaining accountability and trust. Just as the World Wide Web turned isolated websites into an interconnected ecosystem of semantic relationships—where hyperlinks created meaningful connections between disparate content—agent protocols are now weaving isolated AI systems into a collaborative intelligence network where distributed minds can think, negotiate, and solve problems together at unprecedented scale.

Importantly, agents won’t communicate through natural language the way humans do; they will develop structured protocols optimized for machine reasoning. When two AI agents recognize they’re communicating directly, they can exchange vast amounts of structured information instantaneously, like two computers transferring entire databases rather than humans slowly describing the contents word by word.
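A hypothetical example makes the contrast clear. The request a human would phrase as a sentence becomes a typed, validated payload that can be exchanged and checked in one round trip (all field names below are invented for illustration):

```typescript
// Human phrasing: "Could you quote me 500 units of WIDGET-42, delivered by
// March 1st, with your volume discount?" The same intent as structured data:
interface QuoteRequest {
  sku: string;
  quantity: number;
  deliverBy: string; // ISO 8601 date
  requestedTerms: { volumeDiscount: boolean; currency: string };
}

const request: QuoteRequest = {
  sku: "WIDGET-42",
  quantity: 500,
  deliverBy: "2026-03-01",
  requestedTerms: { volumeDiscount: true, currency: "USD" },
};

// Thousands of such requests can be transmitted and validated in a single
// exchange, with no parsing ambiguity and no conversational back-and-forth.
console.log(JSON.stringify(request));
```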

Beyond Technical Protocols: The First Code of Conduct for Artificial Minds

A topic I’m particularly interested in exploring is what we might call a “code of conduct” for agents—the social protocols that will govern how agents from different organizations interact. Just as humans follow social norms when communicating with strangers—waiting for someone to finish speaking before responding, acknowledging and building on their ideas—agents will need established etiquette rules for productive cross-organizational collaboration.

This represents uncharted territory. For the first time in history, we must establish behavioral protocols between reasoning artificial entities. These rules extend far beyond politeness into critical areas of ethics, trust, legal compliance, and security. Salesforce’s research and product teams have been collaborating to pioneer these frameworks through Agentforce, developing some of the industry’s first comprehensive protocols for agentic interactions that prioritize enterprise business outcomes, accountability, and, most importantly, trust. Agents must know when to respect confidentiality, how to handle proprietary information, and when to escalate beyond their authority.

These protocols become even more critical when agents face the unknown: negotiating with entities from other organizations, even across geographical boundaries, whose objectives, capabilities, and communication styles are completely opaque. Without this framework, agents lack the contextual understanding to navigate complex inter-organizational dynamics effectively. The rules must cover finishing conversations within reasonable timeframes, negotiating objectives without exceeding defined thresholds, and maintaining professional discourse across organizational, national, and cultural boundaries; in essence, teaching machines the art of productive, globally integrated business collaboration that humans have developed over millennia.

The 4 Pillars of Agent Interoperability

As agent protocols evolve from basic building blocks through meta-level instructions toward sophisticated semantic interactions, a clear technical framework emerges. The practical foundation for agent interoperability ultimately rests on four critical technical components that must work in concert:

1. Agent Identity and Authentication

Before agents can meaningfully collaborate, they must verify each other’s identity, authority, and trustworthiness. Consider a procurement agent negotiating with a supplier’s pricing agent: it must confirm not just legitimacy, but specific authorization to commit to pricing agreements. Emerging Verifiable Credential frameworks provide cryptographically secure foundations for cross-organizational agent identities.
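As a sketch of what such a credential might look like, here is a hypothetical example following the W3C Verifiable Credentials data model. The envelope structure is standard; the agent-specific subject fields, DIDs, and signature value are invented for illustration.

```typescript
// Sketch of a W3C-style Verifiable Credential asserting an agent's authority.
// The envelope follows the W3C VC data model; the subject fields, DIDs, and
// signature value are hypothetical.
const agentAuthorityCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgentAuthorityCredential"],
  issuer: "did:example:acme-corp",            // the delegating organization
  issuanceDate: "2025-01-15T00:00:00Z",
  credentialSubject: {
    id: "did:example:acme-procurement-agent", // the agent being credentialed
    role: "procurement",
    maxCommitmentUSD: 50000,                  // ceiling on pricing authority
  },
  // In practice this proof is produced by the issuer's signing key,
  // not written by hand.
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:example:acme-corp#key-1",
    proofValue: "zExampleSignatureValue",
  },
};
```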

2. Capability Advertisement and Discovery

Agents must clearly communicate what they can and cannot do through standardized capability schemas. Beyond profiles like our “Agent Cards” concept, this requires dynamic negotiation: discovering real-time limitations like “I can handle bulk discounts, but orders above $50,000 require human escalation.” The technical challenge lies in creating machine-readable schemas that express conditional capabilities and contextual constraints.
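Here is one hypothetical way to express that kind of conditional capability as a machine-readable schema, with a small check a counterpart agent could run before making an offer. All names and thresholds are illustrative.

```typescript
// Hypothetical machine-readable schema for a conditional capability: the
// agent may approve bulk discounts, but only below an escalation threshold.
interface ConditionalCapability {
  action: string;
  constraint: { field: string; operator: "<" | "<="; value: number };
  onViolation: "escalate_to_human" | "reject";
}

const bulkDiscounting: ConditionalCapability = {
  action: "approve_bulk_discount",
  constraint: { field: "orderTotalUSD", operator: "<=", value: 50000 },
  onViolation: "escalate_to_human",
};

// A counterpart agent can evaluate the constraint before making an offer.
function withinCapability(cap: ConditionalCapability, orderTotalUSD: number): boolean {
  return cap.constraint.operator === "<"
    ? orderTotalUSD < cap.constraint.value
    : orderTotalUSD <= cap.constraint.value;
}

console.log(withinCapability(bulkDiscounting, 42000)); // true: proceed
console.log(withinCapability(bulkDiscounting, 75000)); // false: escalate
```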

3. Interaction Protocols and Conversation Management

Once agents can identify each other and understand their respective capabilities, they need structured ways to manage interactions, from simple requests to complex negotiations and collaborative problem-solving.

This includes standardized conversational frameworks, error handling, escalation procedures, and state management. The protocols must support not just linear exchanges but branching conversations, parallel subprocesses, and graceful handling of partial results.
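One common way to implement this kind of state management is an explicit state machine. The sketch below is hypothetical, with invented state names, but it captures the branching, escalation, and partial-result paths described above.

```typescript
// Hypothetical conversation states for a negotiation protocol, including the
// branching, escalation, and partial-result paths described above.
type ConversationState =
  | "initiated"
  | "capabilities_exchanged"
  | "negotiating"
  | "escalated_to_human"
  | "partial_result"
  | "completed"
  | "failed";

// Legal transitions; anything outside this table is a protocol error.
const transitions: Record<ConversationState, ConversationState[]> = {
  initiated: ["capabilities_exchanged", "failed"],
  capabilities_exchanged: ["negotiating", "failed"],
  negotiating: ["completed", "partial_result", "escalated_to_human", "failed"],
  escalated_to_human: ["negotiating", "completed", "failed"],
  partial_result: ["negotiating", "completed"],
  completed: [],
  failed: [],
};

function canTransition(from: ConversationState, to: ConversationState): boolean {
  return transitions[from].includes(to);
}
```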

4. Trust and Governance Frameworks

Perhaps most critically, interoperable agent ecosystems require standardized approaches to managing trust, security, and governance. This includes logging interactions for accountability, managing consent and permissions, detecting and preventing harmful behaviors, and ensuring compliance with relevant regulations and organizational policies.
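As a sketch of what accountability logging might capture, here is a hypothetical audit record tying an agent’s action back to its verified identity, the credential that authorized it, and the policy checks applied. The structure and names are illustrative.

```typescript
// Hypothetical audit record for a cross-organizational agent interaction:
// who acted, under what authority, and which policy checks were applied.
interface AgentAuditRecord {
  timestamp: string;     // ISO 8601
  agentId: string;       // acting agent's verified identity
  counterpartyId: string;// the other organization's agent
  action: string;
  credentialRef: string; // credential that authorized the action
  policyChecks: { policy: string; passed: boolean }[];
}

const record: AgentAuditRecord = {
  timestamp: new Date().toISOString(),
  agentId: "did:example:acme-procurement-agent",
  counterpartyId: "did:example:supplier-pricing-agent",
  action: "approve_bulk_discount",
  credentialRef: "urn:example:credential:1234",
  policyChecks: [
    { policy: "spend_threshold", passed: true },
    { policy: "data_residency", passed: true },
  ],
};
```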

These pillars are deeply interconnected. Without robust identity, capability discovery becomes meaningless; without governance, interaction protocols can be exploited. Only by addressing all components holistically can we create agent ecosystems that are both powerful and trustworthy.

Preparing for the Interoperable Future

My advice to enterprise leaders is twofold. First, pay attention to this space. Standardization will accelerate rapidly as organizations recognize that proprietary agent ecosystems limit their business potential. Develop awareness of how these protocols evolve, follow the key developments so you can make informed strategic decisions, and learn to distinguish short-lived approaches from the protocols that will become industry foundations. I foresee that the early adopters who influence these emerging standards will gain a lasting competitive advantage in the AI economy.

Second, examine your ontology. Forward-thinking organizations should invest now in mapping their work ontology—creating structured taxonomies of business tasks, processes, and relationships that will enable seamless agent interoperability when standards mature. Just as those early university researchers couldn’t envision streaming video or social networks when TCP/IP was being developed, we cannot fully imagine what becomes possible when intelligent agents collaborate across company boundaries at scale. Organizations with clean data repositories already enjoy competitive advantages today—those with well-defined work ontologies will rapidly deploy interoperable agents tomorrow. 
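What might such a work ontology look like in practice? Here is a deliberately tiny, hypothetical slice, with invented task and entity names, showing business tasks linked to their parent process, the data they touch, and their downstream handoffs:

```typescript
// A deliberately tiny, hypothetical slice of a work ontology: business tasks
// linked to their parent process, the data they touch, and their handoffs.
interface WorkTask {
  id: string;
  process: string;    // parent business process
  inputs: string[];   // entity types consumed
  outputs: string[];  // entity types produced
  handoffTo?: string; // downstream task, if any
}

const taxonomy: WorkTask[] = [
  {
    id: "qualify-lead",
    process: "sales",
    inputs: ["Lead"],
    outputs: ["Opportunity"],
    handoffTo: "draft-quote",
  },
  {
    id: "draft-quote",
    process: "sales",
    inputs: ["Opportunity", "PriceBook"],
    outputs: ["Quote"],
  },
];
```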

The nascent agent ecosystems of today will inevitably give way to an interconnected landscape of intelligent collaboration. Those who prepare thoughtfully won’t just witness this transformation—they’ll lead it.

I would like to thank Sam Sharaf and Karen Semone for their insights and contributions to this article.
