Part IV: Engineering AI Products
Chapter 19.5

Protocol-Driven System Design

"Protocols are the architecture of AI products. Just as HTTP defines how web components communicate, MCP and A2A define how AI components communicate. Build on protocols and your system becomes composable. Build on custom integrations and your system becomes technical debt."

Chief Architect, DataForge

Introduction

This chapter has covered MCP for tool integration and A2A for agent coordination. This section synthesizes these protocols into a coherent system design approach. Protocol-driven design means building AI systems where components communicate through standardized interfaces rather than bespoke integrations. The result is systems that are more maintainable, extensible, and interoperable.

The Protocol Stack for AI Products

Production AI products operate across multiple protocol layers, each addressing different communication needs.

A2A Layer - Agent-to-Agent Coordination
Agent Layer - Reasoning, Planning, Task Decomposition
MCP Layer - Model-to-Tool Integration
Tool Layer - External Services, Databases, APIs
Data Layer - Documents, Knowledge Bases, User Data

Why Layered Protocol Design Matters

Each layer addresses a different abstraction level. When you need to change how agents coordinate, you modify A2A implementations. When you need to add a new tool, you implement an MCP server. When you change data sources, you update the data layer. Layers enable isolated changes without system-wide rewrites.

Designing for Protocol Interoperability

Protocol-driven design requires designing components that speak protocols rather than custom integration code. This means defining clear interfaces at each layer.

Protocol Gateways

Not all components will natively support MCP or A2A. Protocol gateways translate between legacy interfaces and modern protocols.
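The gateway pattern described above can be sketched as a thin translation layer: protocol-style tool calls arrive on one side, and a bespoke legacy client is invoked on the other. The sketch below is illustrative only; `LegacyInventoryClient`, `ToolResult`, and the `get_stock` tool name are hypothetical stand-ins, not real MCP SDK types.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical legacy client with a bespoke, non-protocol interface.
class LegacyInventoryClient:
    def fetch_stock(self, sku: str) -> dict:
        return {"sku": sku, "qty": 42}

@dataclass
class ToolResult:
    """Minimal stand-in for a protocol-level tool response."""
    content: Any
    is_error: bool = False

class ProtocolGateway:
    """Translates protocol-style tool calls into legacy API calls."""
    def __init__(self) -> None:
        self._handlers: dict[str, Callable[..., Any]] = {}

    def register(self, tool_name: str, handler: Callable[..., Any]) -> None:
        self._handlers[tool_name] = handler

    def call_tool(self, tool_name: str, arguments: dict) -> ToolResult:
        handler = self._handlers.get(tool_name)
        if handler is None:
            return ToolResult(content=f"unknown tool: {tool_name}", is_error=True)
        try:
            # Errors in the legacy system surface as protocol-level errors,
            # not exceptions that crash the calling agent.
            return ToolResult(content=handler(**arguments))
        except Exception as exc:
            return ToolResult(content=str(exc), is_error=True)

legacy = LegacyInventoryClient()
gateway = ProtocolGateway()
gateway.register("get_stock", lambda sku: legacy.fetch_stock(sku))

result = gateway.call_tool("get_stock", {"sku": "A-100"})
print(result.content)  # {'sku': 'A-100', 'qty': 42}
```

The key design choice is that agents on the protocol side never see the legacy interface; retiring the legacy system later means replacing one registered handler, not touching any agent.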

+------------------------------------------------------------------+
|                  PROTOCOL GATEWAY ARCHITECTURE                   |
+------------------------------------------------------------------+
|                                                                  |
|        +----------------+                                        |
|        |   A2A Agent    |                                        |
|        |    Network     |                                        |
|        +-------+--------+                                        |
|                |                                                 |
|                | A2A                                             |
|                v                                                 |
|        +-------+--------+         +----------------+             |
|        |  A2A Gateway   |<------->| Legacy System  |             |
|        |                |   API   |    Adapter     |             |
|        +-------+--------+         +----------------+             |
|                |                                                 |
|                | MCP                                             |
|                v                                                 |
|        +-------+--------+                                        |
|        |   MCP Server   |                                        |
|        +----------------+                                        |
|                                                                  |
+------------------------------------------------------------------+

Service Discovery

Protocol-driven systems need ways to discover available services. MCP and A2A define discovery mechanisms, but your infrastructure must support them.

Discovery as a Service

Service discovery should be a dedicated service, not hard-coded configuration. As AI products scale, the number of available tools and agents grows beyond what static configuration can manage. Implement service registries that agents and models query at runtime to discover available capabilities.

Case Study: DataForge Protocol-Driven Architecture

DataForge Enterprise Data Pipeline Platform

Who: DataForge, building AI-powered data pipeline automation for enterprises

Challenge: Enterprises have diverse data sources (Snowflake, BigQuery, Redshift, S3, legacy databases) and want to use AI to automate data transformations without manual engineering.

Architecture Overview

+------------------------------------------------------------------+
|                 DATAFORGE PROTOCOL ARCHITECTURE                  |
+------------------------------------------------------------------+
|                                                                  |
|  User Interface Layer                                            |
| +--------------------------------------------------------------+ |
| | Natural Language Interface | Pipeline Dashboard | Monitoring | |
| +--------------------------------------------------------------+ |
|                                |                                 |
|  Agent Layer (A2A)                                               |
| +--------------------------------------------------------------+ |
| | Orchestrator Agent                                           | |
| |   +----------+  +----------+  +----------+  +----------+    | |
| |   |  Parser  |  |  Schema  |  |   Code   |  | Testing  |    | |
| |   |  Agent   |  |  Agent   |  |   Gen    |  |  Agent   |    | |
| |   +----------+  +----------+  +----------+  +----------+    | |
| +--------------------------------------------------------------+ |
|                                |                                 |
|  Protocol Layer (MCP)                                            |
| +--------------------------------------------------------------+ |
| | MCP Server: Snowflake | BigQuery | Redshift | S3 | Internal  | |
| +--------------------------------------------------------------+ |
|                                |                                 |
|  Data Layer                                                      |
| +--------------------------------------------------------------+ |
| | Enterprise Data Sources | Knowledge Base | Version Control   | |
| +--------------------------------------------------------------+ |
|                                                                  |
+------------------------------------------------------------------+

Protocol Benefits Realized

Adding new data sources: Previously required 2-4 weeks of engineering. With MCP, implementing a new data source MCP server takes 2-3 days and is reusable across all agents.

Agent specialization: New specialized agents (data quality agent, lineage tracking agent) can be added by registering with the A2A registry. Existing agents discover and delegate to them automatically.

Multi-tenant isolation: Protocol-level permission scopes ensure agents can only access data sources authorized for their tenant.
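A scope check of this kind can be sketched as a simple authorization gate that runs before any tool call proceeds. The `Scope` and `ScopeChecker` names and the tenant identifiers below are hypothetical; real deployments would load grants from a policy store rather than a hard-coded set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scope:
    tenant: str
    resource: str   # e.g. a data source name such as "snowflake"
    action: str     # e.g. "read" or "write"

class ScopeChecker:
    """Enforces tenant-level permission scopes before a tool call proceeds."""
    def __init__(self, granted: set[Scope]) -> None:
        self._granted = granted

    def authorize(self, tenant: str, resource: str, action: str) -> bool:
        # Deny-by-default: anything not explicitly granted is refused.
        return Scope(tenant, resource, action) in self._granted

granted = {
    Scope("acme", "snowflake", "read"),
    Scope("acme", "s3", "read"),
}
checker = ScopeChecker(granted)

assert checker.authorize("acme", "snowflake", "read")
assert not checker.authorize("acme", "snowflake", "write")   # no write scope granted
assert not checker.authorize("globex", "snowflake", "read")  # wrong tenant
print("scope checks passed")
```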

Designing Your Protocol Architecture

When building a new AI product, how do you decide which protocols to adopt and how to layer them?

Assessment Questions

Will you integrate external tools? If yes, MCP provides standardization benefits even if you only have one model provider today.

Will you have multiple agents? If yes, A2A or a similar coordination protocol prevents bespoke agent integration code.

Do you need to scale agents or tools independently? Protocol layers enable independent scaling by decoupling components.

Will you need to integrate with future AI capabilities? Protocol-based architectures adapt more easily to new model providers and agent frameworks.

Protocol Readiness Checklist

Single tool integration - Consider MCP if tool complexity warrants it
Multiple tools - Strong case for MCP with a centralized tool registry
Single agent - Protocols may add overhead; focus on tool integration first
Multiple agents - A2A or an orchestration protocol is needed to prevent integration chaos
Dynamic tool or agent discovery - A service registry is required at scale
Multi-tenant deployment - Protocol-level permission scopes are critical for security
Real-time requirements - Protocol overhead must be measured and acceptable

Migration Strategies

Most AI products do not start with perfect protocol architectures. They evolve from prototypes to production systems. Here is how to migrate toward protocol-driven design.

Start Simple, Add Protocols Incrementally

Begin with well-designed custom integrations that work. When you see patterns repeating across integrations, extract those patterns into protocols. The second time you build a tool integration, build it with MCP even if the first did not.
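One concrete way to extract a repeating pattern is to define a shared tool-server interface the second time the same integration shape appears. The sketch below uses a hypothetical `ToolServer` interface, not the real MCP SDK; the point is the shape, not the wire format.

```python
from abc import ABC, abstractmethod

class ToolServer(ABC):
    """Shared interface extracted once the same integration pattern repeats."""
    @abstractmethod
    def list_tools(self) -> list[str]: ...

    @abstractmethod
    def call(self, tool: str, arguments: dict) -> dict: ...

class WeatherToolServer(ToolServer):
    """Hypothetical second integration, built against the shared interface
    so every later integration reuses the same calling convention."""
    def list_tools(self) -> list[str]:
        return ["get_forecast"]

    def call(self, tool: str, arguments: dict) -> dict:
        if tool != "get_forecast":
            raise ValueError(f"unknown tool: {tool}")
        return {"city": arguments["city"], "forecast": "sunny"}

server = WeatherToolServer()
print(server.call("get_forecast", {"city": "Oslo"}))
```

Once two or three integrations share this interface, migrating them onto a standard protocol like MCP becomes a mechanical translation rather than a rewrite.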

The Rewrite Trap

Resist the temptation to rewrite everything with protocols before you understand where complexity actually lies. You may discover that your custom integration handles edge cases that a generic protocol approach would miss. Let protocol adoption be incremental and driven by demonstrated need.

Hybrid Approaches

Many production systems are hybrids. Core business logic uses custom integrations that are too specialized for protocols. Peripheral integrations use protocols for standardization benefits. This is acceptable as long as the boundaries are clear.

Future-Proofing Your Architecture

Protocols are still evolving. The MCP and A2A protocols of 2026 may look different from today's versions. Design for adaptation.

Abstraction Layers

Do not call protocols directly from your business logic. Create abstraction layers that translate between your internal representations and protocol messages. When protocols evolve, you update adapters, not business logic.
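The adapter idea can be sketched as follows. The internal `QueryRequest` type, the version string, and the simplified wire format (loosely modeled on JSON-RPC-style tool calls) are all assumptions for illustration; only this adapter would change when the protocol evolves.

```python
from dataclasses import dataclass

@dataclass
class QueryRequest:
    """Internal representation used by business logic; protocol-agnostic."""
    source: str
    sql: str

class ProtocolAdapter:
    """Translates between internal requests and wire-format messages.
    Business logic never constructs protocol messages directly."""
    PROTOCOL_VERSION = "1.0"  # hypothetical version string

    def to_wire(self, req: QueryRequest) -> dict:
        return {
            "version": self.PROTOCOL_VERSION,
            "method": "tools/call",
            "params": {"name": f"{req.source}.query", "arguments": {"sql": req.sql}},
        }

    def from_wire(self, message: dict) -> QueryRequest:
        params = message["params"]
        source, _, _ = params["name"].partition(".")
        return QueryRequest(source=source, sql=params["arguments"]["sql"])

adapter = ProtocolAdapter()
wire = adapter.to_wire(QueryRequest("snowflake", "SELECT 1"))
print(wire["method"])  # tools/call
```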

Versioning Strategy

Support multiple protocol versions simultaneously during transitions. Implement version negotiation so newer components can use newer protocol features while maintaining backward compatibility.

Monitoring Protocol Health

As with any network protocol, monitor latency, error rates, and version distribution. Protocol-level metrics help identify issues before they cascade into user-visible problems.
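A minimal metrics collector for protocol calls might look like the following sketch; the class and method names are illustrative, and a production system would export these to a metrics backend rather than hold them in memory.

```python
from collections import defaultdict
from statistics import quantiles

class ProtocolMetrics:
    """Tracks per-method call counts, error rates, and latency percentiles."""
    def __init__(self) -> None:
        self._latencies: dict[str, list[float]] = defaultdict(list)
        self._errors: dict[str, int] = defaultdict(int)
        self._calls: dict[str, int] = defaultdict(int)

    def record(self, method: str, latency_ms: float, ok: bool) -> None:
        self._calls[method] += 1
        self._latencies[method].append(latency_ms)
        if not ok:
            self._errors[method] += 1

    def error_rate(self, method: str) -> float:
        calls = self._calls[method]
        return self._errors[method] / calls if calls else 0.0

    def p95_latency(self, method: str) -> float:
        samples = self._latencies[method]
        if len(samples) < 2:
            return samples[0] if samples else 0.0
        return quantiles(samples, n=20)[-1]  # estimate of the 95th percentile

metrics = ProtocolMetrics()
for latency in (10.0, 12.0, 11.0, 300.0):
    metrics.record("tools/call", latency, ok=True)
metrics.record("tools/call", 900.0, ok=False)
print(round(metrics.error_rate("tools/call"), 2))  # 0.2
```

Tracking these per protocol method (rather than per endpoint) makes it easy to spot a single misbehaving tool or agent before it degrades the whole pipeline.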

Cross-References

For MCP fundamentals, see Section 19.1 MCP as a Standard. For A2A agent coordination, see Section 19.2 A2A for Agent Interoperability. For orchestration patterns enabled by protocols, see Chapter 13 Prompts and Agents.

Section Summary

Protocol-driven system design builds AI products on standardized communication layers. The protocol stack includes A2A for agent coordination, MCP for tool integration, and underlying data layers. Protocol benefits include component interchangeability, independent scaling, and easier debugging. The DataForge case study demonstrates these benefits in enterprise contexts. Protocol adoption should be incremental, starting with demonstrated needs rather than theoretical elegance. Future-proofing requires abstraction layers, versioning strategies, and protocol-level monitoring.