Akshay Parkhi's Weblog

HTTP vs MCP vs A2A vs AG-UI: The Four Protocols of AgentCore Runtime

4th April 2026

When you deploy an agent to AWS AgentCore Runtime, you pick a protocol: HTTP, MCP, A2A, or AGUI. This choice determines how your agent talks to the outside world — what it receives, what it sends back, and who it talks to. All four run on identical infrastructure. The differences live entirely in the framing and application layers.

This post breaks down every layer for every protocol, with real code from the official AWS AgentCore samples.

The One-Sentence Version

Protocol | Who talks to who | What for
-------- | ---------------- | --------
HTTP | Any client → Agent | Generic REST API. You define the contract.
MCP | AI system → Agent (as a tool server) | “Here are tools I provide. Call them.”
A2A | Agent → Agent | “I have a task for you. Here’s the context.”
AGUI | Human (browser) → Agent | “Show me what you’re doing. Let me interact.”

Layer 1 — Network Transport (Identical for All Four)

TCP → TLS 1.3 (AES_128_GCM) → Port 443
Remote: bedrock-agentcore.<region>.amazonaws.com
Certificate: Amazon RSA 2048 M03
Auth: IAM SigV4 or OAuth 2.0 Bearer tokens

AgentCore proxies to your container on port 8080

No difference at Layer 1. Same servers, same TLS, same TCP. The serverProtocol configuration only affects Layer 2 and Layer 3.

Layer 2 — Transport Framing

HTTP — raw HTTP request/response. You define the schema. AgentCore adds session management, auth, and observability. No prescribed event types, no streaming contract.

POST /invocations HTTP/2
Content-Type: application/json
Body: (anything — you define the schema)

Response: JSON, streaming, or any HTTP response

MCP — JSON-RPC 2.0 over HTTP. Every request has jsonrpc, method, id. The response mirrors the request id. Strict RPC, not an event stream.

Request:
  {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
   "params": {"name": "search_database", "arguments": {"query": "cloud security"}}}

Response:
  {"jsonrpc": "2.0", "id": 1,
   "result": {"content": [{"type": "text", "text": "results..."}]}}
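
The id-mirroring rule is easy to see in code. A minimal sketch in plain Python (not the official MCP client SDK) that builds a tools/call envelope and checks the echoed id:

```python
import itertools

# Monotonic counter so each request gets a unique id.
_ids = itertools.count(1)

def make_tools_call(name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def match_response(request: dict, response: dict) -> dict:
    """A JSON-RPC response must mirror the request id; raise otherwise."""
    if response.get("jsonrpc") != "2.0" or response.get("id") != request["id"]:
        raise ValueError("response does not match request")
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["result"]

req = make_tools_call("search_database", {"query": "cloud security"})
# A server reply mirroring the id, as in the capture above:
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"content": [{"type": "text", "text": "results..."}]}}
print(match_response(req, resp)["content"][0]["text"])  # → results...
```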

A2A — JSON-RPC 2.0 extended with a task lifecycle model. Tasks stream progress via SSE.

Request:
  {"jsonrpc": "2.0", "id": 1, "method": "tasks/sendSubscribe",
   "params": {"id": "task-123",
     "message": {"role": "user",
       "parts": [{"type": "text", "text": "Summarize this document"}]}}}

SSE stream:
  data: {"jsonrpc":"2.0","id":1,"result":{"id":"task-123",
         "status":{"state":"working","message":{...}}}}
  data: {"jsonrpc":"2.0","id":1,"result":{"id":"task-123",
         "status":{"state":"completed","message":{...}}}}
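
A client consuming this stream only has to split on `data:` lines and pull out the task state. A sketch, assuming each `data:` line carries one complete JSON-RPC result:

```python
import json

def parse_a2a_sse(stream: str) -> list[str]:
    """Extract task states from an A2A SSE stream."""
    states = []
    for line in stream.splitlines():
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comments
        payload = json.loads(line[len("data:"):].strip())
        states.append(payload["result"]["status"]["state"])
    return states

stream = (
    'data: {"jsonrpc":"2.0","id":1,"result":{"id":"task-123",'
    '"status":{"state":"working"}}}\n'
    'data: {"jsonrpc":"2.0","id":1,"result":{"id":"task-123",'
    '"status":{"state":"completed"}}}\n'
)
print(parse_a2a_sse(stream))  # → ['working', 'completed']
```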

AGUI — typed event stream. Not JSON-RPC. The request is a typed RunAgentInput, the response is a stream of 12 predefined event types. Supports both SSE and WebSocket.

Request (SSE or WebSocket):
  {"threadId": "t1", "runId": "r1",
   "state": {"title": "My Doc", "sections": [...]},
   "messages": [{"id": "m1", "role": "user", "content": "Add more detail"}],
   "tools": [...], "context": [], "forwardedProps": {}}

SSE response:
  data: {"type":"RUN_STARTED","threadId":"t1","runId":"r1"}
  data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"abc","delta":"Here's"}
  data: {"type":"TOOL_CALL_START","toolCallId":"tc1","toolCallName":"research"}
  data: {"type":"STATE_SNAPSHOT","snapshot":{"title":"My Doc","sections":[...]}}
  data: {"type":"RUN_FINISHED","threadId":"t1","runId":"r1"}

WebSocket (same events, raw frames — no "data:" prefix):
  → frame: {RunAgentInput JSON}
  ← frame: {"type":"RUN_STARTED",...}
  ← frame: {"type":"TEXT_MESSAGE_CONTENT","delta":"Here's",...}
  ← frame: {"type":"RUN_FINISHED",...}
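
The framing difference between the two transports is essentially a prefix. A sketch:

```python
import json

def encode_event(event: dict, transport: str) -> str:
    """Frame an AGUI event for SSE ('data: ...' plus blank line)
    or as a raw WebSocket text frame (no prefix)."""
    body = json.dumps(event, separators=(",", ":"))
    if transport == "sse":
        return f"data: {body}\n\n"
    if transport == "websocket":
        return body  # sent as a single text frame
    raise ValueError(f"unknown transport: {transport}")

ev = {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"}
print(encode_event(ev, "sse"))
print(encode_event(ev, "websocket"))
```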

Layer 3 — Application Protocol

This is where the four protocols are fundamentally different. They solve different problems for different audiences.

HTTP — you define everything. No shared state. No tool visualization. No standard events. A blank canvas for wrapping existing REST APIs, custom agent protocols, or simple request/response agents.

Request:  {"prompt": "hello"}              ← your schema
Response: {"response": "Hi there!"}        ← your schema

MCP — tool/resource discovery protocol. The agent isn’t having a conversation. It exposes tools, resources, and prompts that another AI system can use. The caller decides which tools to invoke and in what order.

Discovery:
  tools/list → [{"name": "search", "inputSchema": {...}},
                {"name": "calculate", "inputSchema": {...}}]

Invocation:
  tools/call("search", {"query": "X"}) → result

Also:
  resources/list → data sources available
  resources/read → read a specific resource
  prompts/list   → prompt templates available
  prompts/get    → get a prompt template

Who calls MCP: Claude Desktop, Cursor, LangGraph agents — any LLM orchestration system that needs to discover and use tools. Not for: direct human interaction, streaming text, or shared state.

A2A — task delegation protocol. Agent A says “here’s a task, do it” and Agent B processes it, reports progress, and returns results. Tasks can be long-running, cancellable, and include structured artifacts.

Discovery:
  GET /.well-known/agent.json
  ← AgentCard: name, description, skills, capabilities

Task lifecycle:
  submitted → working → completed
                     → failed
                     → canceled (via tasks/cancel)

Streaming progress:
  {state: "working", message: "Analyzing document..."}
  {state: "working", message: "Found 3 key themes..."}
  {state: "completed", message: "Summary: ..."}
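
The lifecycle can be treated as a small state machine. A sketch whose transition table is inferred from the diagram above (whether canceled is reachable before a task starts working is an assumption here):

```python
# Legal transitions in the A2A task lifecycle (assumed from the diagram).
TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working":   {"completed", "failed", "canceled"},
    "completed": set(),   # terminal
    "failed":    set(),   # terminal
    "canceled":  set(),   # terminal
}

def validate_lifecycle(states: list[str]) -> bool:
    """Return True if a sequence of task states follows the lifecycle."""
    for prev, nxt in zip(states, states[1:]):
        if nxt not in TRANSITIONS.get(prev, set()):
            return False
    return True

print(validate_lifecycle(["submitted", "working", "completed"]))  # → True
print(validate_lifecycle(["completed", "working"]))               # → False
```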

Who calls A2A: other agents, orchestration systems, workflow engines. Not for: direct human UI interaction, character-by-character streaming, or real-time state sync.

AGUI — human-agent interaction protocol. Every event type exists to create a rich interactive experience — the user sees the agent thinking, calling tools, updating documents, and asking for input. Only AGUI has shared state, tool visualization, and human-in-the-loop confirmation.

12 Event Types:
  Lifecycle: RUN_STARTED, RUN_FINISHED, RUN_ERROR
  Text:      TEXT_MESSAGE_START / CONTENT / END
  Tools:     TOOL_CALL_START / ARGS / END
  State:     STATE_SNAPSHOT, STATE_DELTA
  Messages:  MESSAGES_SNAPSHOT

Shared State (bidirectional):
  Request sends:   state: {title: "My Doc", sections: [...]}
  Agent modifies state via tools
  Response emits:  STATE_SNAPSHOT with updated state
  Next request sends the updated state back
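
A client can fold these state events into its local copy with a small reducer. A sketch that handles STATE_SNAPSHOT plus JSON-Patch-style "replace" deltas (a simplification of the full STATE_DELTA format):

```python
import copy

def apply_event(state: dict, event: dict) -> dict:
    """Fold one state event into client-side state.
    STATE_SNAPSHOT replaces everything; STATE_DELTA here applies only
    'replace' ops -- the real protocol supports more."""
    if event["type"] == "STATE_SNAPSHOT":
        return copy.deepcopy(event["snapshot"])
    if event["type"] == "STATE_DELTA":
        new = copy.deepcopy(state)
        for op in event["delta"]:
            assert op["op"] == "replace", "sketch handles only 'replace'"
            *parents, leaf = [p for p in op["path"].split("/") if p]
            target = new
            for key in parents:
                target = target[int(key)] if key.isdigit() else target[key]
            target[int(leaf) if leaf.isdigit() else leaf] = op["value"]
        return new
    return state

state = {}
state = apply_event(state, {"type": "STATE_SNAPSHOT",
                            "snapshot": {"title": "My Doc", "sections": []}})
state = apply_event(state, {"type": "STATE_DELTA",
                            "delta": [{"op": "replace", "path": "/title",
                                       "value": "My Doc v2"}]})
print(state["title"])  # → My Doc v2
```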

Client-side Tools (human-in-the-loop):
  Request declares: tools: [{name: "confirm_publish", ...}]
  Agent calls the tool → UI shows confirmation dialog
  User approves → tool result sent in next RunAgentInput

Who calls AGUI: browsers, mobile apps, any UI that a human looks at. Not for: agent-to-agent communication, tool servers, or batch processing.

Container Endpoints

AgentCore proxies to your container on port 8080. What endpoints each protocol expects:

HTTP:
  POST /invocations     → Your handler (any JSON in, any response out)
  GET  /ping            → Health check

MCP:
  POST /invocations     → JSON-RPC dispatcher (tools/list, tools/call, etc.)
  GET  /ping            → Health check

A2A:
  POST /invocations     → JSON-RPC dispatcher (tasks/send, tasks/get, etc.)
  GET  /ping            → Health check
  GET  /.well-known/agent.json → Agent Card (discovery)

AGUI:
  POST /invocations     → RunAgentInput → SSE event stream
  WS   /ws              → RunAgentInput → WebSocket event frames
  GET  /ping            → Health check

AGUI is the only protocol with a WebSocket endpoint. A2A is the only protocol with a discovery document.

Same Agent, Four Wrappers

The same Strands agent logic — same tools, same model, same system prompt — wrapped four different ways. Here is the shared core that is identical regardless of protocol:

from strands import Agent, tool
from strands.models.bedrock import BedrockModel

@tool
def research_topic(query: str) -> str:
    """Research a topic and return findings."""
    return f"Research results for: {query}"

@tool
def generate_outline(topic: str, num_sections: int) -> str:
    """Generate a document outline."""
    return f"Outline for {topic} with {num_sections} sections"

@tool
def update_document(title: str, sections: list, version: int = 1) -> str:
    """Update the shared document."""
    return f"Document '{title}' updated to v{version}"

model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

agent = Agent(
    model=model,
    system_prompt="You are a document author assistant...",
    tools=[research_topic, generate_outline, update_document],
)

The Strands Agent doesn’t know or care how it will be exposed. Now — what each protocol adds.

HTTP Wrapper (~10 lines)

from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
def strands_agent_bedrock(payload):
    """Receive raw JSON, return raw text."""
    user_input = payload.get("prompt")
    response = agent(user_input)
    return response.message['content'][0]['text']

if __name__ == "__main__":
    app.run()

# Deploy: agentcore configure -e agent.py -p HTTP

What the client sees: a single JSON blob. No streaming. No tool visibility. No shared state. Just input → output. Tools execute server-side, invisible to the caller.

With streaming (still custom format):

import json

@app.entrypoint
async def handler(payload):
    user_message = payload.get("prompt", "Hello")
    async for event in agent.stream_async(user_message):
        if "data" in event:
            yield f"data: {json.dumps(event['data'])}\n\n"

These are your custom events. Every HTTP agent invents its own streaming format. The client must know your specific schema.
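
The matching client is equally bespoke. A sketch, assuming each `data:` line carries one JSON-encoded text chunk from the handler above; nothing about this format is standardized, so the parser works only for this one agent:

```python
import json

def read_custom_stream(raw: str) -> str:
    """Decode the ad-hoc stream: one JSON string per 'data: ' line."""
    chunks = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            chunks.append(json.loads(line[len("data: "):]))
    return "".join(chunks)

raw = 'data: "Hello"\n\ndata: ", world"\n\n'
print(read_custom_stream(raw))  # → Hello, world
```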

MCP Wrapper (~20 lines)

from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="Stateless-MCP-Server",
              host="0.0.0.0",
              stateless_http=True)

@mcp.tool()
def add_expense(user_alias: str, amount: float,
                description: str, category: str = "other") -> str:
    """Add a new expense transaction."""
    # `db` is the sample's data-access layer, defined elsewhere in the repo.
    return db.add_transaction(user_alias, "expense", -abs(amount),
                              description, category)

@mcp.tool()
def get_balance(user_alias: str) -> str:
    """Get current account balance."""
    data = db.get_balance(user_alias)
    return f"Balance: ${data['balance']:.2f}"

@mcp.prompt()
def budget_analysis(user_alias: str, time_period: str = "current_month"):
    """Analyze spending patterns and budget performance."""
    ...

# Deploy: agentcore configure -e server.py -p MCP

The Strands Agent is not used in MCP. Instead, individual tools are exposed directly via @mcp.tool(). MCP doesn’t orchestrate — it lets the caller decide which tools to use and in what order. The caller (Claude Desktop, Cursor, another LLM) does:

1. tools/list → ["add_expense", "add_income", "get_balance"]
2. LLM decides: "I need get_balance"
3. tools/call("get_balance", {"user_alias": "alice"}) → "Balance: $1,234.56"
4. LLM decides: "Now add_expense"
5. tools/call("add_expense", {...}) → "Added"

The agent’s intelligence — system prompt, multi-step reasoning, tool orchestration — is not used. MCP exposes raw tools, not an agent. The @mcp.prompt() decorator also exposes prompt templates, another MCP-only concept. The stateless_http=True flag means each request is independent — no session state between calls.
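
The division of labor is easiest to see with a toy dispatcher. A sketch with hypothetical in-memory tools standing in for the FastMCP server; note that the caller, not the server, decides the sequence:

```python
# Hypothetical in-memory tools standing in for the FastMCP server above.
TOOLS = {
    "get_balance": lambda args: f"Balance for {args['user_alias']}: $1,234.56",
    "add_expense": lambda args: f"Added {args['description']}",
}

def handle(request: dict) -> dict:
    """Minimal JSON-RPC dispatcher covering tools/list and tools/call."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": [{"name": n} for n in TOOLS]}
    elif method == "tools/call":
        text = TOOLS[params["name"]](params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Caller-side sequence: discover first, then decide, then invoke.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_balance",
                          "arguments": {"user_alias": "alice"}}})
print([t["name"] for t in listing["result"]["tools"]])
print(call["result"]["content"][0]["text"])
```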

A2A Wrapper (~25 lines)

from strands import Agent, tool
from strands.multiagent.a2a import A2AServer
from fastapi import FastAPI

@tool
def greet_user(name: str) -> str:
    """Greet a user by name."""
    return f"Hello, {name}! Welcome to the A2A agent."

agent = Agent(
    system_prompt="You are a helpful A2A agent...",
    tools=[greet_user],
    name="A2A IAM Auth Agent",
    description="A simple A2A agent demonstrating IAM authentication",
)

# runtime_url is this agent's public AgentCore Runtime URL, set at deploy time
a2a_server = A2AServer(agent=agent, http_url=runtime_url, serve_at_root=True)

app = FastAPI()

@app.get("/ping")
def ping():
    return {"status": "healthy"}

app.mount("/", a2a_server.to_fastapi_app())

# Deploy: agentcore configure -e agent.py -p A2A

A2AServer takes the full Strands Agent (with tools and system prompt), creates FastAPI routes for the A2A JSON-RPC methods, auto-generates an Agent Card at /.well-known/agent.json, and handles tasks/send, tasks/sendSubscribe, tasks/get, and tasks/cancel. It converts Strands streaming events into A2A task status updates (working → completed).

The Strands Agent IS used — agent(message) runs the full reasoning chain with tools. But the output format is A2A task events, not AG-UI events. The caller sees task states, not individual tool calls or state snapshots.

GET /.well-known/agent.json
← {"name": "A2A IAM Auth Agent", "description": "...",
    "skills": [...], "capabilities": {"streaming": true}}
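
Before delegating, a caller can sanity-check the fetched card. A sketch whose required-field list is an illustration, not the full A2A Agent Card schema:

```python
def validate_agent_card(card: dict) -> list[str]:
    """Check an A2A Agent Card for the fields a caller relies on."""
    problems = []
    for field in ("name", "description", "skills", "capabilities"):
        if field not in card:
            problems.append(f"missing field: {field}")
    if not card.get("capabilities", {}).get("streaming", False):
        problems.append("agent does not advertise streaming")
    return problems

card = {"name": "A2A IAM Auth Agent", "description": "...",
        "skills": [], "capabilities": {"streaming": True}}
print(validate_agent_card(card))  # → []
```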

AGUI Wrapper (~50+ lines)

import json

from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Request
from fastapi.responses import StreamingResponse
from ag_ui.core import RunAgentInput
from ag_ui.encoder import EventEncoder
from ag_ui_strands import StrandsAgent, StrandsAgentConfig, ToolBehavior
from pydantic import BaseModel, Field

# ── Shared state model ────────────────────────
class DocumentSection(BaseModel):
    heading: str = Field(description="Section heading")
    body: str = Field(description="Section body content")

class DocumentState(BaseModel):
    title: str
    sections: list[DocumentSection] = []
    metadata: dict = {}

# ── AGUI-specific config ─────────────────────
shared_state_config = StrandsAgentConfig(
    state_context_builder=lambda input_data, msg:
        f"Current doc: {json.dumps(input_data.state)}\n\nUser: {msg}"
        if isinstance(input_data.state, dict) and "title" in input_data.state
        else msg,

    tool_behaviors={
        "update_document": ToolBehavior(
            skip_messages_snapshot=True,
            state_from_args=lambda ctx: ctx.tool_input.get("document",
                                                           ctx.tool_input),
        ),
    },
)

# ── Wrap the agent ────────────────────────────
agui_agent = StrandsAgent(
    agent=agent,  # the shared Strands agent defined earlier
    name="document_agent",
    description="A document co-authoring assistant",
    config=shared_state_config,
)

# ── FastAPI: SSE + WebSocket + ping ──────────
app = FastAPI()

@app.get("/ping")
async def ping():
    return {"status": "ok"}

@app.post("/invocations")
async def invocations(input_data: dict, request: Request):
    encoder = EventEncoder(accept=request.headers.get("accept"))
    async def event_generator():
        run_input = RunAgentInput(**input_data)
        async for event in agui_agent.run(run_input):
            yield encoder.encode(event)
    return StreamingResponse(event_generator(),
                             media_type=encoder.get_content_type())

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            data = await websocket.receive_json()
            input_data = RunAgentInput(**data)
            async for event in agui_agent.run(input_data):
                await websocket.send_json(event.model_dump())
    except WebSocketDisconnect:
        pass

# Deploy: agentcore configure -e agent.py -p AGUI

The extra 50 lines aren’t boilerplate. They define a rich interaction model: state_from_args means “when the agent calls update_document, extract the document state and emit a STATE_SNAPSHOT so the UI updates live.” state_context_builder means “inject the current document state into the agent’s prompt so it knows what the document looks like.” skip_messages_snapshot avoids echoing back message history. Two endpoints serve the same events over SSE and WebSocket.

What the browser sees:

data: {"type":"RUN_STARTED","threadId":"t1","runId":"r1"}
data: {"type":"TEXT_MESSAGE_CONTENT","delta":"I'll research..."}
data: {"type":"TOOL_CALL_START","toolCallName":"research_topic"}
data: {"type":"TOOL_CALL_ARGS","delta":"{\"query\":\"AI\"}"}
data: {"type":"TOOL_CALL_END","toolCallId":"tc1"}
data: {"type":"STATE_SNAPSHOT","snapshot":{"title":"AI Guide","sections":[...]}}
data: {"type":"TEXT_MESSAGE_CONTENT","delta":"Document ready!"}
data: {"type":"RUN_FINISHED","threadId":"t1","runId":"r1"}
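
A browser client renders this by folding events into view state. A minimal reducer sketch over the stream above:

```python
import json

def reduce_events(lines: list[str]) -> dict:
    """Fold an AGUI event stream into what the browser would render:
    accumulated text, tool-call names, latest document snapshot."""
    ui = {"text": "", "tool_calls": [], "document": None, "done": False}
    for line in lines:
        event = json.loads(line.removeprefix("data: "))
        kind = event["type"]
        if kind == "TEXT_MESSAGE_CONTENT":
            ui["text"] += event["delta"]
        elif kind == "TOOL_CALL_START":
            ui["tool_calls"].append(event["toolCallName"])
        elif kind == "STATE_SNAPSHOT":
            ui["document"] = event["snapshot"]
        elif kind == "RUN_FINISHED":
            ui["done"] = True
    return ui

stream = [
    'data: {"type":"RUN_STARTED","threadId":"t1","runId":"r1"}',
    'data: {"type":"TEXT_MESSAGE_CONTENT","delta":"I\'ll research..."}',
    'data: {"type":"TOOL_CALL_START","toolCallName":"research_topic"}',
    'data: {"type":"STATE_SNAPSHOT","snapshot":{"title":"AI Guide"}}',
    'data: {"type":"TEXT_MESSAGE_CONTENT","delta":" Document ready!"}',
    'data: {"type":"RUN_FINISHED","threadId":"t1","runId":"r1"}',
]
ui = reduce_events(stream)
print(ui["tool_calls"], ui["document"]["title"], ui["done"])
```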

Side-by-Side Feature Comparison

Feature | HTTP | MCP | A2A | AGUI
------- | ---- | --- | --- | ----
Uses Strands Agent? | Yes (whole agent) | No (tools only) | Yes (whole agent) | Yes (whole agent)
Wrapper class | BedrockAgentCoreApp | FastMCP | A2AServer | StrandsAgent + Config
Lines of wrapper | ~10 | ~20 | ~25 | ~50+
Streaming | Optional (custom) | No (request/response) | Yes (task status via SSE) | Yes (12 event types, SSE + WS)
Tool visibility | Hidden inside agent | Exposed via @mcp.tool() | Hidden inside agent | Visible as TOOL_CALL_* events
Shared state | No | No | No | Yes (STATE_SNAPSHOT)
Human-in-the-loop | No | No | No | Yes (client-side tools)
Discovery | No | tools/list, resources/list, prompts/list | Agent Card at /.well-known/agent.json | No
Task lifecycle | No | No | submitted → working → completed | No (runs are fire-and-stream)
WebSocket | Optional (custom) | No | No | Yes (/ws, bidirectional)

When to Use What

Use case | Protocol
-------- | --------
Wrap an existing REST API for AgentCore | HTTP
Simple request/response agent | HTTP
Expose tools for Claude Desktop, Cursor, or LLM apps | MCP
Build a tool server consumed by other AI systems | MCP
Have Agent A delegate work to Agent B | A2A
Build multi-agent workflows with task tracking | A2A
Chat UI with streaming text | AGUI
Show tool calls as interactive progress cards | AGUI
Share live state between agent and UI | AGUI
Get user confirmation before agent actions | AGUI
Voice agent with real-time audio | AGUI (WebSocket)
Collaborative editing experience | AGUI (STATE_SNAPSHOT)

Using All Four Together

In a production system, you might use all four protocols at different boundaries:

┌─────────────────────┐
│  Browser (Human)     │
│  AGUI protocol       │──── "Create a security report"
└────────┬────────────┘
         │
         ▼
┌─────────────────────┐
│  Orchestrator Agent  │
│  (AgentCore, AGUI)   │
│                      │──── MCP ────▶ ┌──────────────────┐
│  Talks to human      │              │ Tool Server       │
│  via AGUI events     │              │ (AgentCore, MCP)  │
│                      │              │ search_database() │
│                      │              │ scan_vulns()      │
│                      │              └──────────────────┘
│                      │
│                      │──── A2A ────▶ ┌──────────────────┐
│                      │              │ Specialist Agent   │
│                      │              │ (AgentCore, A2A)   │
│                      │              │ "Analyze these     │
│                      │              │  scan results"     │
│                      │              └──────────────────┘
│                      │
│                      │──── HTTP ───▶ ┌──────────────────┐
│                      │              │ Legacy API         │
│                      │              │ (AgentCore, HTTP)  │
│                      │              │ GET /reports/123   │
│                      │              └──────────────────┘
└─────────────────────┘
  • AGUI faces the human — streaming text, tool cards, shared state, confirmation dialogs
  • MCP connects to tool servers — “what tools do you have? Call this one.”
  • A2A delegates to specialist agents — “here’s a task, do it and report back”
  • HTTP wraps legacy services — plain REST with no protocol overhead

Each protocol is optimized for its audience. Using the right one at each boundary keeps the system clean and interoperable.

The Key Insight

The Strands Agent is the brain. The protocol wrapper is the mouth.

Same brain, different conversations:

HTTP:  Agent thinks → returns a blob          "Here's your answer."
MCP:   Agent's tools → exposed as services    "Here are my capabilities. Call them."
A2A:   Agent thinks → reports task progress   "Working on it... 50%... Done."
AGUI:  Agent thinks → narrates everything     "I'm researching... calling tool...
                                               here's the document... approve?"

The 50 lines of AGUI wrapper define concepts that don’t exist in the other three protocols: state_from_args (when the agent updates the doc, show it live in the UI), state_context_builder (tell the agent what the doc currently looks like), and client-side tools (let the human approve before publishing). These concepts don’t exist in HTTP, MCP, or A2A because those protocols aren’t designed for a human watching a screen.

Next: All Four AgentCore Protocols Are Just HTTP: What AG-UI, MCP, and A2A Actually Do

Previous: AG-UI Protocol: A Layer-by-Layer Deep Dive with Real Network Captures