I Built an Agent in 5 Minutes: Anthropic Managed Agents vs AWS AgentCore + Strands
9th April 2026
A side-by-side look at two very different bets on what “agent infrastructure” should mean.
Disclosure: I work at AWS. I’ve tried to keep this honest — AgentCore is genuinely powerful, but the developer experience gap on day one is real, and pretending otherwise doesn’t help anyone choose the right tool.
The 5-minute agent
I just built a Competitor Analysis Agent in the Claude Console. Total time: under five minutes. Here’s the entire build:
- Click “New Agent”
- Name it: Competitor Analysis Agent
- Pick model: claude-opus-4-6
- Paste a system prompt describing the job (“research what competitors do better, identify gaps, deliver structured reports to ClickUp...”)
- Toggle on built-in tools (bash, read, write, web_search, web_fetch)
- Connect ClickUp MCP server
- Hit save → agent is Active
That’s it. No code. No container. No IAM role. No deployment. The agent has its own per-session sandbox, file system, internet access, the entire Claude Code-style toolset, and a third-party integration — all from a form.
Now let me show you what the same thing looks like in AWS Bedrock AgentCore Runtime + Strands.
The two philosophies
| | Anthropic Managed Agents | AWS AgentCore + Strands |
|---|---|---|
| Mental model | “Here’s a hosted agent harness. Configure it.” | “Here’s a serverless runtime. Bring your agent.” |
| What you write | A system prompt | Python agent code + Dockerfile + IaC |
| Agent loop | Managed by Anthropic | You write it (or use Strands/LangGraph/CrewAI) |
| Sandbox | Per-session container, auto-provisioned | microVM (Firecracker), you configure |
| Model lock-in | Claude only | Any model (Bedrock, Anthropic, OpenAI, local) |
| Time to “hello world” | Minutes | Hours to days |
Anthropic decided agents should be a product. AWS decided agents should be a platform. Both bets are reasonable. They produce wildly different developer experiences.
Building the same agent on AgentCore + Strands
To recreate my Competitor Analysis Agent on AgentCore, here’s roughly what I’d do:
Step 1 — Write the agent code (Strands)
```python
# competitor_agent.py
from strands import Agent, tool
from strands_tools import shell, file_read, file_write, http_request
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@tool
def clickup_create_task(list_id: str, name: str, description: str) -> dict:
    """Create a task in ClickUp."""
    # ... wire up ClickUp REST API with token from Secrets Manager
    ...

@tool
def web_search(query: str) -> str:
    """Search the web."""
    # ... wire up Tavily / Serper / Brave API
    ...

agent = Agent(
    model="us.anthropic.claude-opus-4-6-20260101-v1:0",
    system_prompt="You are a competitive intelligence analyst...",
    tools=[shell, file_read, file_write, http_request, web_search, clickup_create_task],
)

@app.entrypoint
def invoke(payload):
    return agent(payload["prompt"])

if __name__ == "__main__":
    app.run()
```
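To make the first elided body concrete: a minimal sketch of the ClickUp half, assuming ClickUp's public "Create Task" endpoint (POST /list/{list_id}/task, personal token sent directly in the Authorization header) and a Secrets Manager secret named competitor-agent/clickup-token. The secret name and payload fields are illustrative, not prescribed by either product.

```python
import json
import urllib.request

CLICKUP_API = "https://api.clickup.com/api/v2"

def get_clickup_token() -> str:
    # Deferred import so the sketch loads without AWS credentials configured.
    import boto3
    sm = boto3.client("secretsmanager")
    # Secret name is an assumption for this sketch.
    return sm.get_secret_value(SecretId="competitor-agent/clickup-token")["SecretString"]

def build_create_task_request(list_id: str, name: str,
                              description: str, token: str) -> urllib.request.Request:
    # ClickUp's Create Task endpoint: POST /list/{list_id}/task.
    body = json.dumps({"name": name, "description": description}).encode()
    return urllib.request.Request(
        f"{CLICKUP_API}/list/{list_id}/task",
        data=body,
        method="POST",
        headers={"Authorization": token, "Content-Type": "application/json"},
    )

req = build_create_task_request("123", "Competitor gap report", "Q2 findings", "pk_demo")
```

The tool body would then `urllib.request.urlopen(req)` and return the parsed JSON; token refresh and error handling are still yours to write, which is exactly the gap the vault comparison below is about.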
Step 2 — Containerize
```dockerfile
FROM public.ecr.aws/docker/library/python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "competitor_agent.py"]
```
Step 3 — Build for ARM64 and push to ECR
```bash
aws ecr create-repository --repository-name competitor-agent

# Authenticate Docker to ECR before pushing
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <acct>.dkr.ecr.<region>.amazonaws.com

# --load puts the cross-built image in the local image store so tag/push can see it
docker buildx build --platform linux/arm64 -t competitor-agent --load .
docker tag competitor-agent:latest <acct>.dkr.ecr.<region>.amazonaws.com/competitor-agent:latest
docker push <acct>.dkr.ecr.<region>.amazonaws.com/competitor-agent:latest
```
Step 4 — Deploy to AgentCore Runtime
```bash
agentcore configure --entrypoint competitor_agent.py
agentcore launch
```
…then wire up an IAM execution role, set up Secrets Manager for the ClickUp token, configure observability, decide on memory backend, and set up Identity if you want OAuth.
Time check: half a day if you’ve done it before. Two days if you haven’t.
What you actually get for that effort
This is the honest counterpoint. AgentCore isn’t slower because AWS is bad at developer experience — it’s slower because you’re getting a different product.
| Capability | Managed Agents | AgentCore |
|---|---|---|
| Sandbox isolation | Per-session container | Per-session microVM (Firecracker) — stronger isolation, up to 8-hour sessions |
| Sandbox resources | 5 GiB RAM, 5 GiB disk | Configurable, up to several GB |
| Model choice | Claude only | Any: Bedrock, Anthropic, OpenAI, local Llama, fine-tuned |
| Agent framework | None — you use Anthropic’s loop | Strands, LangGraph, CrewAI, LlamaIndex, Pydantic AI, your own |
| Identity & OAuth | Vaults (MCP credentials) | Full AgentCore Identity — OAuth providers, workload identity |
| Observability | Event stream + token usage | Full AgentCore Observability with OpenTelemetry, CloudWatch, traces |
| Memory service | Built-in auto-compaction | Standalone AgentCore Memory service with semantic recall |
| Browser automation | Not yet first-class | AgentCore Browser Tool (managed headless Chrome) |
| Code interpreter | Built-in via bash + Python | AgentCore Code Interpreter as separate service |
| Gateway / tool catalog | MCP servers per agent | AgentCore Gateway — converts APIs/Lambdas to MCP, central tool registry |
| Multi-tenancy | Workspace-scoped | IAM-scoped, fits AWS org structures |
| Cloud lock-in | Anthropic 1P only | AWS native |
The pattern is clear: AgentCore is a Lego set. Managed Agents is a finished toy.
Pricing — the real divergence
This is where the philosophies show up in your bill.
Managed Agents
- Tokens at standard Claude rates (Opus 4.6: $5/$25 per MTok)
- $0.08 per session-hour, only while session is running
- Idle time = free (huge for chat / long-lived sessions where users think)
- File storage, vaults, environments, agents themselves: free
- Container hours rolled into the session-hour fee — no double charge
1-hour Opus session, 50K in / 15K out = ~$0.70
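That back-of-envelope figure checks out. The arithmetic, using the rates quoted above:

```python
# Managed Agents cost for a 1-hour Opus session with 50K input / 15K output tokens.
IN_RATE, OUT_RATE = 5.00, 25.00   # $ per million tokens (Opus 4.6 rates above)
SESSION_HOUR = 0.08               # $ per session-hour while running

cost = 50_000 / 1e6 * IN_RATE + 15_000 / 1e6 * OUT_RATE + 1 * SESSION_HOUR
print(f"${cost:.2f}")  # → $0.70  (0.25 input + 0.375 output + 0.08 session)
```

Note the shape of the bill: tokens dominate, and the infrastructure line item is eight cents.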
AgentCore
- Tokens billed via your model provider (Bedrock or direct)
- Runtime compute: CPU-second + memory-GB-second metering — accrues whenever the microVM is running
- AgentCore Memory, Identity, Gateway, Browser, Code Interpreter are separate services with their own pricing
- CloudWatch for logs/traces
- ECR for container storage
- Plus the AWS dependencies you wire in (Secrets Manager, IAM, VPC if used)
The runtime fee tends to be cheap per-hour, but you’re now reasoning about 5+ line items instead of 2, and idle compute often still bills (depending on how the microVM is configured).
Verdict: Managed Agents is more predictable and almost certainly cheaper for low-to-medium volume. AgentCore wins at scale when you can amortize infra investment across many agents and want fine-grained cost control.
Developer experience compared
Defining a tool
Managed Agents:
```json
{ "type": "agent_toolset_20260401" }
```
Done. You get bash, read, write, edit, glob, grep, web_fetch, web_search.
Strands on AgentCore:
```python
from strands_tools import shell, file_read, file_write, http_request
# ...and you wire each one into agent.tools=[...]
# Web search is BYO — pick a provider, get an API key, write a tool
```
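To make "write a tool" concrete, here is a hedged sketch of the response-shaping half of a BYO web_search tool. It assumes a Tavily-style JSON response ({"results": [{"title", "url", "content"}, ...]}); the provider call itself is elided, and the field names are from Tavily's public API rather than anything Strands mandates.

```python
import json

def format_search_results(raw_json: str, limit: int = 5) -> str:
    # Collapse the provider response into plain text, matching the
    # web_search stub's `-> str` signature from Step 1.
    results = json.loads(raw_json)["results"][:limit]
    return "\n\n".join(f"{r['title']} ({r['url']})\n{r['content']}" for r in results)

# Canned response standing in for the actual HTTP call:
demo = json.dumps({"results": [
    {"title": "Acme pricing", "url": "https://acme.example/pricing",
     "content": "Acme charges per seat..."},
]})
formatted = format_search_results(demo)
print(formatted)
```

This is the kind of glue Managed Agents ships for free and AgentCore asks you to own.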
Adding a third-party integration
Managed Agents: Connect MCP server in the UI. Drop OAuth credential in a vault. Anthropic auto-refreshes the token.
AgentCore: Either (a) write a Python tool that calls the API, store the secret in Secrets Manager, handle refresh yourself, or (b) use AgentCore Gateway to expose the API as MCP — which is great but is a separate service to learn.
Streaming events to a frontend
Managed Agents: SSE stream out of the box (/v1/sessions/{id}/events/stream). Event types are typed and documented (agent.message, agent.thinking, agent.tool_use, etc.).
AgentCore: Streaming supported via the runtime, but the event shape is whatever your agent code emits. You design the protocol.
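To ground the difference: a minimal parser for one frame of the Managed Agents stream. This assumes standard SSE framing (event:/data: lines terminated by a blank line); the event name used below is one the post lists, but the payload shape is illustrative.

```python
import json

def parse_sse_event(frame: str) -> tuple[str, dict]:
    """Parse a single 'event:'/'data:' SSE frame into (event_type, payload)."""
    event, data = "message", "{}"   # SSE defaults when fields are absent
    for line in frame.strip().splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = line[len("data:"):].strip()
    return event, json.loads(data)

evt, payload = parse_sse_event('event: agent.tool_use\ndata: {"tool": "web_search"}\n\n')
```

With Managed Agents this parser is the whole client; with AgentCore you first have to decide what your agent emits before you can write it.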
Long-running tasks
Managed Agents: Sessions persist; idle time is free; reconnect to the SSE stream from any client. Built-in compaction handles 200K+ context.
AgentCore: Up to 8-hour sessions in a single invocation, microVM stays alive. Memory service handles long-term recall across sessions. More powerful, more to wire up.
When to use which
Pick Managed Agents when:
- You’re committed to Claude (the best frontier model + you don’t need multi-model)
- You want to ship an agent this week, not next month
- You’re building a chat UI, internal tool, or a customer-facing assistant where simplicity matters
- Your team is small and doesn’t have AWS infra specialists
- You want predictable per-hour pricing with idle = free
- You like the MCP ecosystem and Anthropic-native skills (xlsx, docx, pptx, pdf)
- The use case fits: code assistants, research agents, doc generators, support bots
Pick AgentCore + Strands when:
- You’re already deep in AWS and need IAM/VPC/CloudWatch integration
- You need multi-model flexibility (Claude + Llama + a fine-tuned in-house model)
- You’re running thousands of concurrent agents and infra cost matters
- You need 8-hour continuously-running sessions or unusual memory profiles
- You want OpenTelemetry traces flowing into your existing observability stack
- You need stronger sandbox isolation guarantees (Firecracker microVMs vs containers)
- You’re building a multi-agent platform and need Gateway as a tool registry
- You have a security/compliance team that wants everything in your AWS account
Pick both when:
- You’re prototyping in Managed Agents and migrating to AgentCore for production scale
- You’re A/B-testing the two stacks for the same use case
- Different agents in your company have different requirements
A migration path that actually works
If you start in Managed Agents (you should), here’s how the migration to AgentCore looks if you outgrow it:
- Lift the system prompt — works as-is in any framework
- Replace the built-in toolset with `strands_tools` equivalents (`shell`, `file_read`, `file_write`, `http_request`) or custom tools
- Replace MCP servers — Strands has MCP support; the same MCP server URLs work
- Replace vaults with Secrets Manager + your own refresh logic (or AgentCore Identity)
- Replace SSE event handling with whatever streaming protocol your agent emits
- Replace the session model with AgentCore Runtime invocations
- Replace output capture from `/mnt/session/outputs/` with S3 uploads from your agent code
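The last item can be sketched in a few lines. Hedged heavily: the bucket, the per-session key scheme, and the session-ID parameter are all illustrative assumptions, not part of either product.

```python
from pathlib import Path

OUTPUT_DIR = Path("/mnt/session/outputs")  # keep the same layout inside your container

def s3_key_for(session_id: str, local_path: Path) -> str:
    # Mirror the session-relative file layout under a per-session prefix.
    return f"sessions/{session_id}/{local_path.relative_to(OUTPUT_DIR)}"

def upload_outputs(session_id: str, bucket: str) -> None:
    # Deferred import so the sketch loads without AWS credentials configured.
    import boto3
    s3 = boto3.client("s3")
    for path in OUTPUT_DIR.rglob("*"):
        if path.is_file():
            s3.upload_file(str(path), bucket, s3_key_for(session_id, path))
```

Calling `upload_outputs(session_id, bucket)` at the end of each invocation replaces what Managed Agents captures automatically.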
Nothing in Managed Agents is a one-way door — but the leverage you get from not doing all of this on day one is enormous.
My take
The biggest mistake in agent development today is starting with the heavy framework. You spin up AgentCore, you write Strands code, you containerize, you deploy — and you discover three weeks later that what you actually needed was a different system prompt and one extra tool.
Anthropic Managed Agents is the closest thing to “prompt → agent” we have. The Competitor Analysis Agent I built in 5 minutes would have taken me a full day in AgentCore + Strands, and 80% of that day would have been infrastructure plumbing that doesn’t matter to the user.
Use Managed Agents to discover what your agent should be. Then if you outgrow it — different model, multi-cloud, custom isolation, multi-agent fleets — graduate to AgentCore. The lift isn’t that bad because the agent’s intent (system prompt + tool surface) is the part that survives the migration.
Most teams will never need to graduate. That’s the point — and as someone who works on the AWS side, I think that’s fine. The right tool depends on where you are, not which company you’re rooting for.
TL;DR
| | Managed Agents | AgentCore + Strands |
|---|---|---|
| Build a useful agent | Minutes | Days |
| Lock-in | Claude/Anthropic | AWS |
| Code required | Zero | Python + Docker + IaC |
| Pricing | Tokens + $0.08/hr running | Tokens + compute + 5 services |
| Ceiling | High enough for 90% of use cases | Effectively unlimited |
| Best for | Shipping fast, Claude-native | Multi-model, AWS-native, scale |