Akshay Parkhi's Weblog

Friday, 13th March 2026

Mental Models in the AI Agent Age

Mental models are compressed knowledge of human experience — patterns discovered over centuries by many thinkers across physics, biology, economics, mathematics, and systems theory. In the age of AI agents, these same patterns don’t just help you think better. They help you build better systems, debug reality faster, and make decisions that compound over decades.

[... 1,757 words]

Coding in the AI Agent Age — Why Typing Code Is Dying But Engineering Is Thriving

If you think coding is just putting human-defined processes into structures, loops, functions, rules, packages, and web pages — you’re not wrong about the past. But that definition is dying. AI is automating the typing. What remains is the thinking.

[... 1,387 words]

How Skills Work in AI Agents — From Lazy-Loading Instructions to LLM Attention Weights

When you hear “skills” in AI agents, it sounds like a new concept. It’s not. Skills are a lazy-loading pattern for instructions — delivered through the same tool-calling mechanism the LLM already uses. But the details of how they load, where they land in the message hierarchy, and why they break at scale reveal deep truths about how LLMs actually work.
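The lazy-loading shape is easy to see in miniature. Here is a hedged sketch, not any specific framework's API: only skill names and one-line descriptions sit in the base prompt, and the full instructions arrive later as an ordinary tool result. All names (`SKILLS`, `skills_manifest`, `load_skill`) are illustrative.

```python
# Sketch: skills as lazy-loaded instructions delivered via tool calling.
# Hypothetical structure for illustration, not a real agent framework.

SKILLS = {
    # Only name + description are ever in the base prompt;
    # the full body stays on disk until the model asks for it.
    "pdf-report": {
        "description": "How to assemble a PDF report from raw data.",
        "body": "1. Normalize the data.\n2. Render with the template.",
    },
}

def skills_manifest() -> str:
    """Cheap summary injected into the system prompt up front."""
    return "\n".join(
        f"- {name}: {skill['description']}" for name, skill in SKILLS.items()
    )

def load_skill(name: str) -> str:
    """Tool the model calls on demand. Its return value lands in the
    transcript as a tool-result message, late in the message hierarchy,
    rather than in the system prompt."""
    skill = SKILLS.get(name)
    return skill["body"] if skill else f"unknown skill: {name}"
```

The key property is the asymmetry: the manifest is always paid for, the bodies only when requested, and the two land in very different places in the context.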

[... 2,801 words]

Autoresearch and Context Rot — How a Stateless Agent Loop Avoids Memory Problems (And Where It Breaks)

The autoresearch pattern — where a coding agent runs hundreds of autonomous experiments to optimize code — produced a 53% speedup on Shopify’s 20-year-old Liquid codebase and a 69x speedup on a demo text processor. But there’s a fundamental flaw nobody talks about: the agent has no memory of failed experiments. Here’s exactly how the pattern works, where it breaks, and how Tobi Lütke’s team quietly fixed it.
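The loop itself is tiny, and the flaw is visible in the code. A minimal sketch, with a stand-in scoring function and hypothetical names (`run_experiment`, `propose`, `autoresearch`):

```python
# Sketch of a stateless autoresearch loop. The scoring function is a
# deterministic stand-in for "run the benchmark"; everything here is
# illustrative, not Shopify's actual harness.
import random
import zlib

def run_experiment(candidate: str) -> float:
    random.seed(zlib.crc32(candidate.encode()))  # stable fake score
    return random.random()

def autoresearch(base: str, propose, n: int = 100):
    """Each iteration starts from a fresh context that sees only the
    current best candidate, never the history of failed experiments."""
    best, best_score = base, run_experiment(base)
    for _ in range(n):
        candidate = propose(best)        # fresh context every time
        score = run_experiment(candidate)
        if score > best_score:           # only improvements persist
            best, best_score = candidate, score
        # Failed candidates are discarded with no record, so the
        # agent is free to re-propose the same dead end later.
    return best, best_score
```

Statelessness is what keeps the loop immune to context rot, and it is also exactly why the loop forgets its failures.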

[... 2,392 words]
