Monday, 9th March 2026
Context Engineering for AI Agents: 6 Techniques from Claude Code, Manus, and Devin
After studying how production AI agents like Claude Code, Manus, and Devin actually work under the hood, I found that the single most important concept isn’t prompt engineering but context engineering: the art of controlling exactly what goes into the model’s context window, and what stays out.
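One concrete example of that control is enforcing a token budget: keep the system prompt and the most recent turns, and drop the oldest tool outputs first. The sketch below is illustrative only — the function names, message shape, and 4-chars-per-token heuristic are my assumptions, not code from any of the agents mentioned.

```python
# Minimal context-engineering sketch: fit a message history into a token
# budget by walking newest-to-oldest and dropping everything older than
# the first message that no longer fits. All names are illustrative.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def build_context(system_prompt: str, messages: list[dict], budget: int) -> list[dict]:
    """Return messages fitting the budget; the system prompt is always kept."""
    remaining = budget - estimate_tokens(system_prompt)
    kept: list[dict] = []
    for msg in reversed(messages):           # walk newest -> oldest
        cost = estimate_tokens(msg["content"])
        if cost > remaining:
            break                            # this and everything older is dropped
        kept.append(msg)
        remaining -= cost
    kept.reverse()                           # restore chronological order
    return [{"role": "system", "content": system_prompt}] + kept

history = [
    {"role": "tool", "content": "old verbose tool output " * 50},
    {"role": "user", "content": "What did the test suite say?"},
    {"role": "tool", "content": "2 tests failed: test_auth, test_login"},
]
ctx = build_context("You are a coding agent.", history, budget=60)
```

Here the 1,200-character stale tool output is evicted while the recent user question and test result survive — the window holds what the model needs now, not everything that ever happened.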
[... 2,082 words]

playwright-cli: How to Give Your AI Coding Assistant a Real Browser
If you use Claude Code (or any AI coding assistant), there’s a tool that makes browser automation trivially easy: playwright-cli. It’s a command-line wrapper around Microsoft’s Playwright that lets you control a real browser from your terminal — navigate pages, click buttons, fill forms, take screenshots, and scrape content. Here’s how to set it up and why it’s genuinely useful.
[... 1,104 words]

Build a Research Engine in Claude Code: YouTube Search → NotebookLM → Synthesized Insights in 5 Minutes
I built a research engine inside Claude Code that searches YouTube, feeds results into Google NotebookLM, and lets me query across all the sources — all without leaving the terminal. Here’s exactly how it works and how to set it up yourself.
[... 1,151 words]

Top Claude Code Skills: What 20 YouTube Videos and 2.3M Views Agree On
I researched 20 YouTube videos on Claude Code skills, fed them all into Google NotebookLM, and asked it to synthesize the top skills across every source. Here’s what came back — ranked by how often they were mentioned and how impactful creators found them.
[... 848 words]

10 Hidden Concepts in Strands SDK Hooks That You Won’t See by Reading the Code
The Strands SDK hook system looks simple on the surface — register a callback, receive an event. But there are 10 hidden concepts buried in the design that you’ll never see by just reading the code. Here’s what’s actually happening under the hood.
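The "register a callback, receive an event" surface the post describes can be sketched in a few lines. To be clear, this is NOT the Strands SDK's actual API — every class and method name below is made up for illustration.

```python
# Illustrative hook registry of the kind described above: callbacks are
# registered per event name and fired in registration order. Names are
# hypothetical, not from the Strands SDK.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    name: str
    payload: dict

class HookRegistry:
    def __init__(self) -> None:
        self._hooks = defaultdict(list)

    def register(self, event_name: str, callback: Callable[[Event], None]) -> None:
        # Registration order becomes firing order -- exactly the kind of
        # implicit guarantee that never shows up in a docstring.
        self._hooks[event_name].append(callback)

    def emit(self, event: Event) -> None:
        for callback in self._hooks[event.name]:
            callback(event)

registry = HookRegistry()
seen: list[str] = []
registry.register("tool_call", lambda e: seen.append(e.payload["tool"]))
registry.emit(Event("tool_call", {"tool": "calculator"}))
```

Even in this toy version, hidden behaviors lurk: firing order, what happens when a callback raises mid-loop, and whether callbacks can mutate the event — the kind of questions the full post digs into.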
[... 1,390 words]

From Webcam to Robot Brain: How Vision-Language Models and Vision-Language-Action Models Actually Work
I built a webcam app that sends live frames to Claude and GPT-4o for real-time scene understanding. Along the way, I discovered how fundamentally different this is from what robots like OpenVLA do with the same camera input. Here’s the full pipeline — from photons hitting your webcam sensor to tokens coming back from the cloud.
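The first hop in that pipeline — frame to API payload — can be sketched as follows. The capture step is faked with raw bytes (real code would grab a JPEG from the webcam, e.g. via OpenCV), and the message shape follows the common base64 image-content pattern rather than any one provider's exact schema.

```python
# Sketch of packaging a captured frame for a multimodal chat API:
# JPEG bytes -> base64 string -> image content part. The dict layout is
# an assumption modeled on typical vision-API request shapes.
import base64

def frame_to_image_part(jpeg_bytes: bytes) -> dict:
    """Wrap JPEG bytes as a base64 image part for a chat-style request."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "type": "image",
        "media_type": "image/jpeg",
        "data": b64,
    }

fake_frame = b"\xff\xd8\xff\xe0" + b"\x00" * 16  # JPEG magic bytes + padding
part = frame_to_image_part(fake_frame)
```

Everything after this — tokenizing the image into patches, attending over them alongside text — happens server-side; the client's whole job is getting clean, compact frames across the wire.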
[... 1,756 words]