Akshay Parkhi's Weblog

Why Every Coding Agent Is TypeScript (And Every ML Framework Is Python)

8th March 2026

Every major coding agent — Claude Code, OpenCode, Pi, Amp — is built in TypeScript. Not Python. This isn’t a coincidence. It’s architecture. And to understand why, you need to go back to the origin stories.


The Origin Stories — Why They’re Different by Design

Node.js — Born from a Frustration with Blocking

Creator: Ryan Dahl, 2009. He watched a Flickr upload progress bar and asked: “Why can’t the server tell me how much has been uploaded?” The answer: because Apache created one thread per connection and blocked on I/O. At 10,000 concurrent connections, it collapsed.

Core design purpose: Never block. Never wait. When you ask for something, move on and come back when it’s ready.

Node.js exists for one reason: multiplexing I/O operations on a single thread using an event loop (libuv). Everything else — npm, the ecosystem, TypeScript — is secondary to this.

Python — Born from a Love of Clarity

Creator: Guido van Rossum, 1991. Existing languages (C, Perl, shell scripts) were either too low-level or too cryptic. Guido wanted a language where you could read someone else’s code and immediately understand it.

Core design purpose (from the Zen of Python): “Readability counts. Simple is better than complex. There should be one — and preferably only one — obvious way to do it.”

Python exists for one reason: making computation human-readable and accessible. Everything else — NumPy, ML, data science — grew from this.


The Two Philosophies, in Code

Node.js thinks: “Start all 5, come back as each finishes”

// OpenCode batch.ts -- THE ENTIRE parallel execution:
const results = await Promise.all(toolCalls.map((call) => executeCall(call)))

Python thinks: “Build infrastructure to manage 5 concurrent tasks safely”

# Strands concurrent.py -- SAME problem, 30+ lines:
task_queue: asyncio.Queue[tuple[int, Any]] = asyncio.Queue()
task_events = [asyncio.Event() for _ in tool_uses]
stop_event = object()
tasks = []
for task_id, tool_use in enumerate(tool_uses):
    tasks.append(asyncio.create_task(self._task(
        agent, tool_use, tool_results, cycle_trace, cycle_span,
        invocation_state, task_id, task_queue, task_events[task_id],
        stop_event, structured_output_context,
    )))
task_count = len(tasks)
while task_count:
    task_id, event = await task_queue.get()
    if event is stop_event:
        task_count -= 1
        continue
    yield event
    task_events[task_id].set()

Neither is wrong. Node.js assumes I/O concurrency is the default. Python assumes you need to explicitly design concurrent systems because most computation is sequential.
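To be fair to Python, the queue machinery above exists because Strands streams events mid-flight. When you only need the collected results at the end, plain `asyncio` gets within one call of `Promise.all`. A minimal sketch (`execute_call` is a hypothetical stand-in for a real tool invocation):

```python
import asyncio

async def execute_call(call: str) -> str:
    # Hypothetical stand-in for a tool invocation.
    await asyncio.sleep(0)
    return f"result:{call}"

async def run_batch(calls: list[str]) -> list[str]:
    # asyncio.gather: start everything, resume as each finishes,
    # and return results in the original order -- Python's Promise.all.
    return await asyncio.gather(*(execute_call(c) for c in calls))

results = asyncio.run(run_batch(["read", "grep", "glob"]))
```

The gap only opens up once you want to *yield* events as each task produces them, which is exactly what an interactive agent needs.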


The Event Loop vs The GIL

Node.js: The Event Loop

+-------------------------------------------+
|           Node.js Event Loop              |
|                                           |
|   +-----------+  Your code runs here      |
|   | Call      |  (single thread)          |
|   | Stack     |                           |
|   +-----+-----+                           |
|         |                                 |
|   +-----v----------------------------+    |
|   |  Event Queue                     |    |
|   |  +------+ +------+ +------+     |    |
|   |  |file  | |http  | |stdin |     |    |
|   |  |ready | |resp  | |data  |     |    |
|   |  +------+ +------+ +------+     |    |
|   +----------------------------------+    |
|        ^         ^         ^              |
|   +----+---------+---------+----------+   |
|   |  libuv (epoll/kqueue/IOCP)        |   |
|   +-----------------------------------+   |
+-------------------------------------------+

In Node.js, the core networking and streaming APIs have no synchronous variants, and the sync escape hatches that do exist (fs.readFileSync, child_process.execSync) are the exception rather than the idiom. The platform defaults you into the event-driven model.

What this means in practice — from Pi’s agent-loop.ts:

// 1. Stream LLM response from Anthropic API
const message = await streamAssistantResponse(context, config, signal, stream);
// 2. While streaming, TUI renders each token (via EventStream push/consume)
// 3. While streaming, check if user typed something
const steering = await config.getSteeringMessages?.();
// 4. While waiting for tool results, file watchers detect changes
// 5. While waiting, AbortSignal listens for Ctrl+C

// NONE of these block each other. The event loop interleaves them.

Python: The GIL (A Deliberate Trade-off)

+-------------------------------------------+
|        Python Execution Model             |
|                                           |
|   +----------------------------------+    |
|   |  Interpreter (CPython)           |    |
|   |  +----------------------------+  |    |
|   |  |  GIL (Global Lock)         |  |    |
|   |  |  One thread computes       |  |    |
|   |  |  at a time                 |  |    |
|   |  +----------------------------+  |    |
|   +----------------------------------+    |
|                                           |
|   +----------------------------------+    |
|   |  The Power Layer:                |    |
|   |  +--------+ +--------+          |    |
|   |  | NumPy  | |PyTorch |          |    |
|   |  | (C)    | |(C++/   |          |    |
|   |  |        | | CUDA)  |          |    |
|   |  +--------+ +--------+          |    |
|   +----------------------------------+    |
|                                           |
|   +----------------------------------+    |
|   |  Metaprogramming Layer:          |    |
|   |  - Frame introspection           |    |
|   |  - Runtime class creation        |    |
|   |  - ast module (parse code)       |    |
|   +----------------------------------+    |
+-------------------------------------------+

The GIL is not a bug — it’s a design choice:

The GIL HURTS:
  x Threads can't run Python bytecode in parallel
  x High-concurrency I/O needs opt-in asyncio rather than being the default
  x asyncio is bolted-on, not native

The GIL HELPS:
  + C extensions (NumPy, PyTorch) are trivially safe
  + No torn reads/writes in Python code (logical races still need locks)
  + Simple mental model for computation
  + Reference counting works (deterministic cleanup, few GC pauses)

Python chose computational simplicity over I/O performance. Node.js chose the opposite.
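One nuance worth seeing in code: CPython releases the GIL during blocking calls, so I/O-bound threads do overlap in wall-clock time; it's Python bytecode execution that serializes. A sketch with `time.sleep` standing in for blocking I/O:

```python
import threading
import time

def wait_a_bit() -> None:
    # time.sleep releases the GIL, just as a blocking read() would.
    time.sleep(0.1)

start = time.perf_counter()
threads = [threading.Thread(target=wait_a_bit) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# Four 0.1s waits overlap rather than summing to 0.4s.
```

The serialization bites on CPU-bound work; for pure waiting, threads (and asyncio) hold their own.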


How This Maps to Agent Frameworks

Node.js Design → Coding Agents

CODING AGENT NEEDS:              NODE.JS PROVIDES:
-----------------------          -----------------------
Stream LLM tokens to TUI    -->  EventStream (push/async-iterate)
Run shell commands           -->  spawn() returns immediately
Read files while streaming   -->  Non-blocking fs
User types while agent works -->  stdin is just another event
Cancel mid-execution         -->  AbortSignal propagates
5 tools in parallel          -->  Promise.all()
Sub-agent in new session     -->  Just another Promise
Watch for file changes       -->  fs.watch (libuv)
100ms startup                -->  V8 JIT compiles fast

Pi’s EventStream is the purest expression of Node.js’s design purpose:

// PRODUCER (anywhere, anytime):
stream.push({ type: "text_delta", text: "Hello" });

// CONSUMER (in the TUI render loop):
for await (const event of stream) {
    renderToTerminal(event);  // Never blocks, never waits
}

This is literally what Node.js was built for — pushing data from I/O sources to consumers without blocking.

Python Design → ML / Research / Orchestration

ML AGENT NEEDS:                  PYTHON PROVIDES:
-----------------------          -----------------------
Optimize prompts             -->  NumPy + Optuna (C-speed math)
Evaluate across datasets     -->  Pandas DataFrames
Fine-tune models             -->  PyTorch / Transformers
Define agent roles           -->  Pydantic models + metaclasses
Create DSLs ("q -> a")      -->  Metaclass magic
Introspect agents            -->  Frame introspection
Notebook experimentation     -->  Jupyter (REPL-first design)
Parse and transform code     -->  ast module (built-in)
Readable team definitions    -->  "Readability counts"

DSPy’s Signature("question -> answer") is the purest expression of Python’s design purpose:

# This STRING becomes a CLASS at runtime via metaclass magic
qa = dspy.Signature("question -> answer", "Answer questions concisely")

# Python reads the string, parses it, walks the call stack to find types,
# creates a Pydantic BaseModel subclass with input/output fields,
# and returns a new class -- not an instance.

This is what Python was built for — making complex computation readable and manipulable as data.
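A toy version of that pattern, assuming nothing about DSPy's internals (`make_signature` and its field layout are invented for illustration): parse a spec string and mint a brand-new class at runtime with the three-argument form of `type()`.

```python
def make_signature(spec: str, doc: str = "") -> type:
    # Split "question -> answer" into input and output field names.
    inputs, outputs = (part.strip() for part in spec.split("->"))
    namespace = {
        "inputs": tuple(f.strip() for f in inputs.split(",")),
        "outputs": tuple(f.strip() for f in outputs.split(",")),
        "__doc__": doc,
    }
    # type(name, bases, namespace) returns a NEW class, not an instance.
    return type("Signature", (), namespace)

QA = make_signature("question -> answer", "Answer questions concisely.")
```

DSPy layers Pydantic fields, type resolution, and caller-frame inspection on top, but the kernel of the trick is this: in Python, classes are runtime values you can manufacture from strings.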


The Language Split Today

After digging through the source code of 13 agent frameworks, the split maps directly to the origin stories:

Framework           Language    Type                   Focus
------------------  ----------  ---------------------  -----------------------------
Pi (pi-mono)        TypeScript  Coding Agent CLI       Interactive coding agent
OpenCode            TypeScript  Coding Agent CLI       AI coding agent
OpenClaw            TypeScript  Personal AI Assistant  Multi-channel assistant
Mastra              TypeScript  Agent Framework        Full-stack agent toolkit
Amp (Sourcegraph)   TypeScript  Coding Agent           Autonomous engineering
CrewAI              Python      Multi-Agent Framework  Team orchestration
Strands             Python      Agent SDK              AWS-backed agent toolkit
LangGraph           Python      Agent Framework        Stateful agent graphs
AutoGen             Python      Multi-Agent            Distributed agents
AgentScope          Python      Multi-Agent            Agent collaboration
DSPy                Python      Prompt Optimization    Declarative LM programs
Agno                Python      Agent Framework        Full-stack agents
Goose               Rust        Coding Agent CLI       High-performance coding agent
The pattern: Every coding agent that edits files, runs shells, and streams to a TUI is TypeScript. Python dominates orchestration and ML pipelines.


Why TypeScript Wins for Coding Agents — 7 Structural Advantages

A coding agent does five things simultaneously: streams LLM responses, executes shell commands, watches files, accepts user input, and manages sub-agent sessions. This is I/O multiplexing — Node.js’s core design purpose.

1. Non-Blocking I/O by Default

The single biggest advantage. In Node.js, every file read, network call, and subprocess is non-blocking without any special syntax. Pi’s agent loop:

// Stream response while checking for user steering -- both non-blocking
const message = await streamAssistantResponse(currentContext, config, signal, stream, streamFn);
const steering = await getSteeringMessages();

In TypeScript, this is just Promise.all(). In Python, Strands needed explicit asyncio.Queue and asyncio.create_task() infrastructure for the same thing.

2. EventStream — Push-Based Real-Time Streaming

Pi’s EventStream<T, R> is the backbone of its agent loop — producers push events, consumers iterate with for-await:

export class EventStream<T, R = T> implements AsyncIterable<T> {
    push(event: T): void { ... }    // Producer pushes events
    async *[Symbol.asyncIterator]()  // Consumer iterates with for-await
    result(): Promise<R>            // Final result promise
}

This enables streaming LLM tokens to the TUI, tool execution events flowing in real-time, and user steering messages intercepted mid-flight. Python's async generators are pull-only; letting an outside producer push while a consumer iterates means wiring up an explicit asyncio.Queue.
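For contrast, a minimal Python analogue might look like this (a sketch, not Pi's actual API): the push side needs an explicit asyncio.Queue plus an end-of-stream sentinel, where Node's event loop gives you the producer/consumer decoupling for free.

```python
import asyncio

class EventStream:
    """Sketch of a push/iterate stream; None is the end-of-stream sentinel."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    def push(self, event: dict) -> None:
        # Producers push from anywhere; put_nowait never blocks.
        self._queue.put_nowait(event)

    def close(self) -> None:
        self._queue.put_nowait(None)

    async def __aiter__(self):
        # Consumers pull with `async for`, waking as events arrive.
        while (event := await self._queue.get()) is not None:
            yield event

async def demo() -> list[str]:
    stream = EventStream()
    for token in ("Hel", "lo"):
        stream.push({"type": "text_delta", "text": token})
    stream.close()
    return [e["text"] async for e in stream]

parts = asyncio.run(demo())
```

Workable, but it's infrastructure you write and maintain, not a pattern the runtime hands you.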

3. TUI Is a JS/TS Sweet Spot

Coding agents need rich terminal UIs. Node.js is unmatched: Pi uses a custom differential renderer, OpenCode uses ink (React for terminals). Python’s rich/textual libraries exist but have higher overhead for real-time streaming.

4. Single-Threaded = No Race Conditions

Node.js’s event loop means no mutexes, no locks. OpenCode’s event bus is thread-safe by construction:

export const GlobalBus = new EventEmitter<{ event: [{ directory?: string; payload: any }] }>()

Python requires threading.Lock, asyncio.Lock, or complex coordination when mixing sync/async.

5. Type Safety = Tool Schema

LLM tool calling requires JSON Schema. TypeScript’s Zod maps types directly to schemas — one definition does validation, schema generation, and IDE autocomplete:

const parameters = z.object({
  description: z.string().describe("A short (3-5 words) description"),
  prompt: z.string().describe("The task for the agent to perform"),
  subagent_type: z.string().describe("The type of specialized agent"),
})

Python has Pydantic (excellent), but its validation happens at runtime. TypeScript catches schema mismatches at build time.
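The idea in miniature, stdlib-only (this toy generator handles only str/int fields and is not Pydantic's machinery): one type definition drives both the field layout and the JSON Schema sent to the LLM.

```python
import typing
from dataclasses import dataclass

_JSON_TYPES = {str: "string", int: "integer"}  # toy mapping, str/int only

def to_json_schema(cls: type) -> dict:
    # Read the annotations off the class and emit a JSON Schema object.
    hints = typing.get_type_hints(cls)
    return {
        "type": "object",
        "properties": {n: {"type": _JSON_TYPES[t]} for n, t in hints.items()},
        "required": list(hints),
    }

@dataclass
class TaskParams:
    description: str    # a short (3-5 words) description
    prompt: str         # the task for the agent to perform
    subagent_type: str  # the type of specialized agent

schema = to_json_schema(TaskParams)
```

The difference is when mistakes surface: pass the wrong field to this dataclass and you find out when the code runs; pass the wrong field to a Zod-typed tool and the compiler refuses to build.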

6. Startup & Distribution

npm install -g opencode-ai → instant. Node.js starts in ~100ms. The Python interpreter itself launches quickly, but a typical agent plus its imports takes 1-2s, on top of pip and venv management. For a CLI you use 50 times a day, this matters.

7. Full-Stack Parity

OpenCode ships a TUI, desktop app, website, and VS Code extension — all TypeScript, all sharing types. Python agents need a separate JS frontend, creating a split-brain codebase.


The Sub-Agent Architecture

This is the most fascinating pattern across all these codebases. Here’s how it actually works.

The Key Insight: Sub-Agents Are Just Tools

There’s no special sub-agent spawner. The LLM calls a task tool, and the runtime starts a new session with its own agent loop:

Main Agent
  |
  +-- LLM: "I need to explore the codebase AND edit a file"
  |
  +-- tool_use: task(type="explore", prompt="find all API endpoints")
  |   +-- NEW SESSION (restricted: read-only tools)
  |   +-- Runs own agent loop: think -> search -> think -> grep -> done
  |   +-- Returns results to parent
  |
  +-- tool_use: task(type="general", prompt="refactor auth module")
  |   +-- NEW SESSION (full tools, but can't spawn more sub-agents)
  |   +-- Runs own agent loop: think -> read -> edit -> test -> done
  |   +-- Returns results to parent
  |
  +-- Main agent synthesizes both results

The Three-Layer Architecture

Every coding agent follows this structure:

+-------------------------------------------+
|  Layer 3: SESSION MANAGEMENT              |
|  - Creates/resumes sessions               |
|  - Parent-child relationships             |
|  - Permission inheritance                 |
+-----------------+-------------------------+
                  |
+-----------------v-------------------------+
|  Layer 2: AGENT LOOP                      |
|  - Stream LLM response                   |
|  - Execute tool calls                    |
|  - Handle user steering (interrupts)     |
|  - EventStream for real-time updates     |
+-----------------+-------------------------+
                  |
+-----------------v-------------------------+
|  Layer 1: TOOLS                           |
|  - File tools (read, write, edit)         |
|  - Shell tools (bash)                     |
|  - Search tools (grep, glob)             |
|  - task tool <-- spawns sub-agents       |
|  - batch tool <-- parallel execution     |
+-------------------------------------------+

Four Secrets That Make It Work

Secret 1: Parallel execution via Promise.all

OpenCode’s BatchTool executes up to 25 tool calls simultaneously:

const results = await Promise.all(toolCalls.map((call) => executeCall(call)))

One line. In Python, asyncio.gather() requires the entire call stack to be async.

Secret 2: Permission-scoped sub-agents

// "explore" agent: read-only, can't edit files
explore: {
  permission: { "*": "deny", grep: "allow", glob: "allow",
                list: "allow", bash: "allow", read: "allow" }
}

// "general" agent: full tools, but can't spawn MORE sub-agents
general: {
  permission: { todoread: "deny", todowrite: "deny" }
}

Safety mechanism: no infinite recursion, no accidental writes from read-only agents.
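A resolver for rules like these is only a few lines. This sketch is hypothetical (not OpenCode's code), and it assumes unlisted tools default to allow, as the "general" config implies by denying only two tools: an exact tool rule wins, then the "*" wildcard, then the default.

```python
def resolve(permission: dict[str, str], tool: str) -> str:
    # Exact tool rule beats the "*" wildcard; tools with no rule at all
    # are assumed allowed (inferred from the "general" example above).
    if tool in permission:
        return permission[tool]
    return permission.get("*", "allow")

explore = {"*": "deny", "grep": "allow", "glob": "allow",
           "list": "allow", "bash": "allow", "read": "allow"}
general = {"todoread": "deny", "todowrite": "deny"}
```

Under this scheme, "explore" can grep but never write, and "general" can do everything except spawn further task-tracking sub-work.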

Secret 3: Session continuity (resume)

task_id: z.string().describe(
  "This should only be set if you mean to resume a previous task"
).optional()

Sub-agent work persists. “Continue the refactoring from earlier” picks up the exact session.

Secret 4: Real-time user steering

Pi checks for user messages between every tool call:

if (getSteeringMessages) {
    const steering = await getSteeringMessages();
    if (steering.length > 0) {
        // SKIP remaining tools, inject user's message
        for (const skipped of remainingCalls) {
            results.push(skipToolCall(skipped, stream));
        }
        break;
    }
}

Type while the agent works, and it immediately redirects. Non-blocking input checking is natural in Node.js, awkward in Python.
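The control flow itself reduces to a few lines, shown here in Python with invented names (`run_tools`, `get_steering`, `run_one` are illustrative, not Pi's API): poll for user input before each tool call, and on a hit, skip the remainder.

```python
def run_tools(calls, get_steering, run_one):
    # Sketch of the steering check: poll for user input between tool
    # calls; a non-empty steering list skips all remaining tools.
    results, skipped = [], []
    for i, call in enumerate(calls):
        if get_steering():
            skipped = calls[i:]
            break
        results.append(run_one(call))
    return results, skipped

# Simulate the user typing after the first tool finishes.
steering_polls = iter([[], ["actually, stop and do X"]])
results, skipped = run_tools(["read", "edit", "test"],
                             lambda: next(steering_polls), str.upper)
```

The hard part isn't this loop; it's having a `get_steering` that can observe stdin without blocking the agent, which is where the event loop earns its keep.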


The Full Picture: How It All Connects

                      +---------------------+
                      |    USER (Terminal)   |
                      +----------+----------+
                                 |
                          types / steers
                                 |
                      +----------v----------+
                      |   SESSION MANAGER   |
                      |   (Layer 3)         |
                      +----------+----------+
                                 |
              +------------------+------------------+
              |                                     |
   +----------v----------+            +----------v----------+
   |   MAIN AGENT LOOP   |            |  SUB-AGENT LOOP(s)  |
   |   (Layer 2)         |            |  (Layer 2)          |
   |                     |            |  - Restricted perms |
   |  think -> tool ->   |   spawns   |  - Own context      |
   |  result -> think    +----------> |  - Returns result   |
   |                     |            |                     |
   +----------+----------+            +----------+----------+
              |                                  |
   +----------v-----------+          +-----------v----------+
   |   TOOLS (Layer 1)    |          |   TOOLS (subset)     |
   |                      |          |                      |
   | read, write, edit,   |          | read, grep, glob,    |
   | bash, grep, glob,    |          | bash, list           |
   | task, batch          |          | (no write, no task)  |
   +----------------------+          +----------------------+
              |                                  |
              +----------------------------------+
              |
   +----------v----------+
   |   LLM API (Claude)  |
   |   - Streaming        |
   |   - Tool use         |
   +----------------------+

The flow: LLM plans (“I need X, Y, Z”) → emits tool calls → runtime executes in parallel via Promise.all() → sub-agents run their own loops with restricted permissions → results flow back → parent synthesizes.


Where Python Wins

TypeScript owns interactive coding agents, but Python owns everything else:

Domain                   Python Advantage                     TypeScript Equivalent
-----------------------  -----------------------------------  ---------------------
Prompt optimization      DSPy (18 optimizers, Optuna-backed)  Nothing comparable
Distributed multi-agent  AutoGen, CrewAI, AgentScope          Same-process only
RAG pipelines            LangChain + vector DBs + chunkers    Basic/emerging
Fine-tuning              PyTorch, Transformers, LoRA          Nothing comparable
Evaluation               DSPy metrics, pandas, scikit-learn   Not a focus
Research                 Jupyter notebooks, REPL-first        No equivalent
Agent-to-agent (A2A)     Reference implementation             Consumers only

The Deep Comparison

Design Axis        Node.js                                Python
-----------------  -------------------------------------  --------------------------------------------
Core purpose       I/O multiplexing                       Readable computation
I/O model          Non-blocking by default                Blocking by default (opt-in async)
Concurrency        Event loop (libuv, single thread)      GIL (single thread, for simplicity)
Type system        TypeScript (structural, compile-time)  Duck typing (runtime, introspectable)
Metaprogramming    Limited (decorators only)              Deep (metaclasses, descriptors, frames, ast)
C interop          N-API/WASM (awkward)                   ctypes/Cython/pybind11 (seamless)
Ecosystem gravity  Web, CLI tools, real-time apps         ML, data science, scientific computing
REPL culture       Weak (node REPL rarely used)           Strong (Jupyter notebooks, IPython)
Memory model       V8 GC (generational)                   Reference counting + cycle GC

The Bottom Line

PYTHON = "Design, optimize, evaluate AI agents"
  - Prompt optimization (DSPy)
  - Distributed multi-agent teams
  - RAG, fine-tuning, evaluation
  - Research in notebooks

TYPESCRIPT = "Build, ship, interact with AI agents"
  - Interactive coding agents (CLI + TUI)
  - Real-time streaming + user steering
  - Parallel sub-agents via Promise.all()
  - Ship as CLI + Desktop + Web + Extension

RUST = "Performance-critical AI"
  - Goose: native speed, system-level control

Node.js was designed to wait for 10,000 things at once without blocking. Python was designed to make one thing at a time readable and powerful.

That’s why every coding agent (Pi, OpenCode, Claude Code) is Node.js — because a coding agent IS “waiting for 10,000 things” (LLM stream + tools + user input + file system + subprocesses). And every ML/optimization framework (DSPy, LangChain, CrewAI) is Python — because ML IS “computing one powerful thing readably” (optimize prompts, evaluate models, orchestrate teams).

They are two sides of the same brain. Node.js is the nervous system (fast signals, many channels). Python is the brain (deep computation, understanding, reasoning about code itself). They’re not competing — they solve different halves of the AI agent problem.
