Akshay Parkhi's Weblog


I Built an Agent in 5 Minutes: Anthropic Managed Agents vs AWS AgentCore + Strands

9th April 2026

A side-by-side look at two very different bets on what “agent infrastructure” should mean.

Disclosure: I work at AWS. I’ve tried to keep this honest — AgentCore is genuinely powerful, but the developer experience gap on day one is real, and pretending otherwise doesn’t help anyone choose the right tool.

The 5-minute agent

I just built a Competitor Analysis Agent in the Claude Console. Total time: under five minutes. Here’s the entire build:

  1. Click “New Agent”
  2. Name it: Competitor Analysis Agent
  3. Pick model: claude-opus-4-6
  4. Paste a system prompt describing the job (“research what competitors do better, identify gaps, deliver structured reports to ClickUp...”)
  5. Toggle on built-in tools (bash, read, write, web_search, web_fetch)
  6. Connect ClickUp MCP server
  7. Hit save → agent is Active

That’s it. No code. No container. No IAM role. No deployment. The agent has its own per-session sandbox, file system, internet access, the entire Claude Code-style toolset, and a third-party integration — all from a form.

Now let me show you what the same thing looks like in AWS Bedrock AgentCore Runtime + Strands.

The two philosophies

| | Anthropic Managed Agents | AWS AgentCore + Strands |
| --- | --- | --- |
| Mental model | “Here’s a hosted agent harness. Configure it.” | “Here’s a serverless runtime. Bring your agent.” |
| What you write | A system prompt | Python agent code + Dockerfile + IaC |
| Agent loop | Managed by Anthropic | You write it (or use Strands/LangGraph/CrewAI) |
| Sandbox | Per-session container, auto-provisioned | microVM (Firecracker), you configure |
| Model lock-in | Claude only | Any model (Bedrock, Anthropic, OpenAI, local) |
| Time to “hello world” | Minutes | Hours to days |

Anthropic decided agents should be a product. AWS decided agents should be a platform. Both bets are reasonable. They produce wildly different developer experiences.

Building the same agent on AgentCore + Strands

To recreate my Competitor Analysis Agent on AgentCore, here’s roughly what I’d do:

Step 1 — Write the agent code (Strands)

# competitor_agent.py
from strands import Agent, tool
from strands_tools import shell, file_read, file_write, http_request
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@tool
def clickup_create_task(list_id: str, name: str, description: str) -> dict:
    """Create a task in ClickUp."""
    # ... wire up ClickUp REST API with token from Secrets Manager
    ...

@tool
def web_search(query: str) -> str:
    """Search the web."""
    # ... wire up Tavily / Serper / Brave API
    ...

agent = Agent(
    model="us.anthropic.claude-opus-4-6-20260101-v1:0",
    system_prompt="You are a competitive intelligence analyst...",
    tools=[shell, file_read, file_write, http_request, web_search, clickup_create_task],
)

@app.entrypoint
def invoke(payload):
    return agent(payload["prompt"])

if __name__ == "__main__":
    app.run()

Step 2 — Containerize

FROM public.ecr.aws/docker/library/python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "competitor_agent.py"]

Step 3 — Build for ARM64 and push to ECR

aws ecr create-repository --repository-name competitor-agent
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <acct>.dkr.ecr.<region>.amazonaws.com
# buildx needs --push (or --load): with a non-default builder the image never
# lands in the local daemon, so a separate docker tag/push would fail
docker buildx build --platform linux/arm64 -t <acct>.dkr.ecr.<region>.amazonaws.com/competitor-agent:latest --push .

Step 4 — Deploy to AgentCore Runtime

agentcore configure --entrypoint competitor_agent.py
agentcore launch

…then wire up an IAM execution role, set up Secrets Manager for the ClickUp token, configure observability, decide on memory backend, and set up Identity if you want OAuth.
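For a sense of what “wire up the ClickUp REST API with a token from Secrets Manager” expands to, here’s a minimal sketch of the clickup_create_task stub from Step 1. The secret name (clickup/api-token) and the helper function are my own assumptions; the endpoint is ClickUp’s public v2 task-creation API.

```python
def build_clickup_request(list_id: str, name: str, description: str, token: str) -> dict:
    """Build the URL, headers, and payload for ClickUp's v2 create-task endpoint.

    Kept pure so it can be unit-tested without AWS or network access.
    """
    return {
        "url": f"https://api.clickup.com/api/v2/list/{list_id}/task",
        "headers": {"Authorization": token, "Content-Type": "application/json"},
        "payload": {"name": name, "description": description},
    }


def clickup_create_task(list_id: str, name: str, description: str) -> dict:
    """Create a ClickUp task, pulling the API token from Secrets Manager."""
    import boto3      # imported lazily so the pure helper above stays dependency-free
    import requests

    secret = boto3.client("secretsmanager").get_secret_value(SecretId="clickup/api-token")
    req = build_clickup_request(list_id, name, description, secret["SecretString"])
    resp = requests.post(req["url"], headers=req["headers"], json=req["payload"], timeout=30)
    resp.raise_for_status()
    return resp.json()
```

Note there’s no token refresh here: ClickUp personal tokens generally don’t expire, but an OAuth-based integration would need the refresh handling you’d otherwise get from vaults or AgentCore Identity.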

Time check: half a day if you’ve done it before. Two days if you haven’t.

What you actually get for that effort

This is the honest counterpoint. AgentCore isn’t slower because AWS is bad at developer experience — it’s slower because you’re getting a different product.

| Capability | Managed Agents | AgentCore |
| --- | --- | --- |
| Sandbox isolation | Per-session container | Per-session microVM (Firecracker): stronger isolation, up to 8-hour sessions |
| Memory size | 5 GiB RAM, 5 GiB disk | Configurable, up to several GB |
| Model choice | Claude only | Any: Bedrock, Anthropic, OpenAI, local Llama, fine-tuned |
| Agent framework | None (you use Anthropic’s loop) | Strands, LangGraph, CrewAI, LlamaIndex, Pydantic AI, your own |
| Identity & OAuth | Vaults (MCP credentials) | Full AgentCore Identity: OAuth providers, workload identity |
| Observability | Event stream + token usage | Full AgentCore Observability with OpenTelemetry, CloudWatch, traces |
| Memory service | Built-in auto-compaction | Standalone AgentCore Memory service with semantic recall |
| Browser automation | Not yet first-class | AgentCore Browser Tool (managed headless Chrome) |
| Code interpreter | Built-in via bash + Python | AgentCore Code Interpreter as a separate service |
| Gateway / tool catalog | MCP servers per agent | AgentCore Gateway: converts APIs/Lambdas to MCP, central tool registry |
| Multi-tenancy | Workspace-scoped | IAM-scoped, fits AWS org structures |
| Cloud lock-in | Anthropic 1P only | AWS native |

The pattern is clear: AgentCore is a Lego set. Managed Agents is a finished toy.

Pricing — the real divergence

This is where the philosophies show up in your bill.

Managed Agents

  • Tokens at standard Claude rates (Opus 4.6: $5/$25 per MTok)
  • $0.08 per session-hour, only while session is running
  • Idle time = free (huge for chat / long-lived sessions where users think)
  • File storage, vaults, environments, agents themselves: free
  • Container hours rolled into the session-hour fee — no double charge

1-hour Opus session, 50K in / 15K out = ~$0.70
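The arithmetic behind that ~$0.70, straight from the rates above:

```python
# 1-hour Opus 4.6 session at the rates quoted above:
# $5 / MTok input, $25 / MTok output, $0.08 per session-hour.
input_cost = 50_000 / 1_000_000 * 5.00     # $0.25
output_cost = 15_000 / 1_000_000 * 25.00   # $0.375
session_cost = 1 * 0.08                    # idle time would be free
total = input_cost + output_cost + session_cost
print(round(total, 3))  # → 0.705, i.e. the ~$0.70 above
```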

AgentCore

  • Tokens billed via your model provider (Bedrock or direct)
  • Runtime compute: CPU-second + memory-GB-second metering — accrues whenever the microVM is running
  • AgentCore Memory, Identity, Gateway, Browser, Code Interpreter are separate services with their own pricing
  • CloudWatch for logs/traces
  • ECR for container storage
  • Plus the AWS dependencies you wire in (Secrets Manager, IAM, VPC if used)

The runtime fee tends to be cheap per-hour, but you’re now reasoning about 5+ line items instead of 2, and idle compute often still bills (depending on how the microVM is configured).
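To make “5+ line items” concrete, here’s a back-of-the-envelope sketch of the same 1-hour session on AgentCore. Every infra rate below is a placeholder I invented for illustration; only the token math reuses the Opus rates quoted earlier. Check the current AWS pricing pages for real numbers.

```python
# Back-of-the-envelope AgentCore bill for one 1-hour session.
# All infra rates are MADE-UP placeholders, not actual AWS prices.
line_items = {
    "model tokens (via Bedrock)": 50_000 / 1e6 * 5.00 + 15_000 / 1e6 * 25.00,
    "runtime vCPU-seconds":       3600 * 1 * 0.00003,   # 1 vCPU for the hour, hypothetical rate
    "runtime GB-seconds":         3600 * 2 * 0.000003,  # 2 GB of memory, hypothetical rate
    "CloudWatch logs/traces":     0.02,                 # hypothetical flat estimate
    "Secrets Manager + ECR":      0.01,                 # hypothetical flat estimate
}
for name, cost in line_items.items():
    print(f"{name:28s} ${cost:.3f}")
print(f"{'total':28s} ${sum(line_items.values()):.3f}")
```

The point isn’t the total (it’s fabricated); it’s that you’re now watching five meters instead of two.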

Verdict: Managed Agents is more predictable and almost certainly cheaper for low-to-medium volume. AgentCore wins at scale when you can amortize infra investment across many agents and want fine-grained cost control.

Developer experience compared

Defining a tool

Managed Agents:

{ "type": "agent_toolset_20260401" }

Done. You get bash, read, write, edit, glob, grep, web_fetch, web_search.

Strands on AgentCore:

from strands_tools import shell, file_read, file_write, http_request
# ...and you wire each one into agent.tools=[...]
# Web search is BYO — pick a provider, get an API key, write a tool

Adding a third-party integration

Managed Agents: Connect MCP server in the UI. Drop OAuth credential in a vault. Anthropic auto-refreshes the token.

AgentCore: Either (a) write a Python tool that calls the API, store the secret in Secrets Manager, handle refresh yourself, or (b) use AgentCore Gateway to expose the API as MCP — which is great but is a separate service to learn.

Streaming events to a frontend

Managed Agents: SSE stream out of the box (/v1/sessions/{id}/events/stream). Event types are typed and documented (agent.message, agent.thinking, agent.tool_use, etc.).
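As a sketch of what consuming that stream looks like, here’s a minimal parser for standard text/event-stream framing, using the event names the post lists. The framing (event:/data: lines, blank-line-delimited frames) is the SSE standard; the payload shapes below are my own assumptions.

```python
import json

def parse_sse(stream_lines):
    """Parse text/event-stream framing into (event_type, payload) pairs.

    SSE frames are blank-line-delimited blocks of "event: ..." and "data: ..." lines.
    """
    event_type, data_lines, events = None, [], []
    for line in stream_lines:
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates the current frame
            if data_lines:
                events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = None, []
    return events

# Hypothetical frames shaped like the event types named above:
frames = [
    "event: agent.thinking",
    'data: {"text": "Comparing pricing pages..."}',
    "",
    "event: agent.tool_use",
    'data: {"tool": "web_search", "input": {"query": "competitor pricing"}}',
    "",
]
for etype, payload in parse_sse(frames):
    print(etype, payload)
```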

AgentCore: Streaming supported via the runtime, but the event shape is whatever your agent code emits. You design the protocol.

Long-running tasks

Managed Agents: Sessions persist; idle time is free; reconnect to the SSE stream from any client. Built-in compaction handles 200K+ context.

AgentCore: Up to 8-hour sessions in a single invocation, microVM stays alive. Memory service handles long-term recall across sessions. More powerful, more to wire up.

When to use which

Pick Managed Agents when:

  • You’re committed to Claude (the best frontier model + you don’t need multi-model)
  • You want to ship an agent this week, not next month
  • You’re building a chat UI, internal tool, or a customer-facing assistant where simplicity matters
  • Your team is small and doesn’t have AWS infra specialists
  • You want predictable per-hour pricing with idle = free
  • You like the MCP ecosystem and Anthropic-native skills (xlsx, docx, pptx, pdf)
  • The use case fits: code assistants, research agents, doc generators, support bots

Pick AgentCore + Strands when:

  • You’re already deep in AWS and need IAM/VPC/CloudWatch integration
  • You need multi-model flexibility (Claude + Llama + a fine-tuned in-house model)
  • You’re running thousands of concurrent agents and infra cost matters
  • You need 8-hour continuously-running sessions or unusual memory profiles
  • You want OpenTelemetry traces flowing into your existing observability stack
  • You need stronger sandbox isolation guarantees (Firecracker microVMs vs containers)
  • You’re building a multi-agent platform and need Gateway as a tool registry
  • You have a security/compliance team that wants everything in your AWS account

Pick both when:

  • You’re prototyping in Managed Agents and migrating to AgentCore for production scale
  • You’re A/B-testing the two stacks for the same use case
  • Different agents in your company have different requirements

A migration path that actually works

If you start in Managed Agents (you should), here’s how the migration to AgentCore looks if you outgrow it:

  1. Lift the system prompt — works as-is in any framework
  2. Replace built-in toolset with strands_tools equivalents (shell, file_read, file_write, http_request) or custom tools
  3. Replace MCP servers — Strands has MCP support; same MCP server URLs work
  4. Replace vaults with Secrets Manager + your own refresh logic (or AgentCore Identity)
  5. Replace SSE event handling with whatever streaming protocol your agent emits
  6. Replace the session model with AgentCore Runtime invocations
  7. Replace output capture from /mnt/session/outputs/ with S3 uploads from your agent code

Nothing in Managed Agents is a one-way door — but the leverage you get from not doing all of this on day one is enormous.

My take

The biggest mistake in agent development today is starting with the heavy framework. You spin up AgentCore, you write Strands code, you containerize, you deploy — and you discover three weeks later that what you actually needed was a different system prompt and one extra tool.

Anthropic Managed Agents is the closest thing to “prompt → agent” we have. The Competitor Analysis Agent I built in 5 minutes would have taken me a full day in AgentCore + Strands, and 80% of that day would have been infrastructure plumbing that doesn’t matter to the user.

Use Managed Agents to discover what your agent should be. Then if you outgrow it — different model, multi-cloud, custom isolation, multi-agent fleets — graduate to AgentCore. The lift isn’t that bad because the agent’s intent (system prompt + tool surface) is the part that survives the migration.

Most teams will never need to graduate. That’s the point — and as someone who works on the AWS side, I think that’s fine. The right tool depends on where you are, not which company you’re rooting for.

TL;DR

| | Managed Agents | AgentCore + Strands |
| --- | --- | --- |
| Build a useful agent | Minutes | Days |
| Lock-in | Claude/Anthropic | AWS |
| Code required | Zero | Python + Docker + IaC |
| Pricing | Tokens + $0.08/hr running | Tokens + compute + 5 services |
| Ceiling | High enough for 90% of use cases | Effectively unlimited |
| Best for | Shipping fast, Claude-native | Multi-model, AWS-native, scale |

This is I Built an Agent in 5 Minutes: Anthropic Managed Agents vs AWS AgentCore + Strands by Akshay Parkhi, posted on 9th April 2026.

Next: Beyond Tool Calling: A Practical Tour of Advanced MCP Concepts

Previous: AgentCore Auth from First Principles: How JWT Flows from Browser to Agent Container