We (Tech) spent the last three years obsessing over how we interact with language models - chatbots, agents, AI-powered workflows. Meanwhile, a more fundamental shift has taken place: prompting is giving way to context engineering, and the interface itself matters far less than what you feed into it. When you need a problem solved with AI, an out-of-the-blue question to a chatbot might yield the occasional good result. But for exceptional results, models need information about you, your problem space, and your expectations. Do it well and every thread feels like chatting with a partner who's just as deep in the weeds as you are.
Evolution Below the Surface
It's ironic how we obsessed over the interface while the real breakthrough was happening in the background. It wasn't about how we talk to AI. It was about what the AI knows before we even start talking.
Since ChatGPT's breakout in late 2022, the scope of context we can engineer into these systems has expanded in three distinct stages:
Stage 1: Prompt-only context. "Hey, create this file for me based on some examples. Here's what's good, here's what's bad." The AI only knows what you tell it in that moment. Success meant obsessing over every word in your prompt.
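Stage 1 amounts to packing everything the model should know into the prompt string itself, typically as a task description plus a few labeled examples. A minimal sketch of that pattern (the helper name and example content here are hypothetical, not from any particular tool):

```python
# Stage 1: prompt-only context. Everything the model knows
# must be stuffed into the prompt, usually as in-context
# examples of good and bad output.

def build_prompt(task: str, good: list[str], bad: list[str]) -> str:
    """Assemble a single prompt: task plus labeled examples."""
    lines = [f"Task: {task}", "", "Good examples:"]
    lines += [f"- {ex}" for ex in good]
    lines += ["", "Bad examples:"]
    lines += [f"- {ex}" for ex in bad]
    lines += ["", "Now produce the file."]
    return "\n".join(lines)

prompt = build_prompt(
    "Write a CHANGELOG entry for v1.2",
    good=["Added: retry logic for flaky uploads"],
    bad=["fixed stuff"],
)
print(prompt)
```

The weakness is obvious: the model's entire world is that one string, so quality hinges on how carefully you curate it each time.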