It Wasn’t a Prompting Technique

The skill that separates great AI users from average ones is the same skill that separates great leaders from average ones. It has nothing to do with technology.

I had an idea the other day I wanted to develop, and I went to Claude and said, essentially, "I think there's something here. Help me figure out what it is."

But I didn't just give it a task. I gave it context. I shared my why behind what I was trying to accomplish. I shared my uncertainty — that I wasn't sure exactly where it would land. And I asked it to help me get there rather than just telling it what to produce.

We had a genuinely good conversation. It pushed back. It asked clarifying questions. It helped me find the sharpest version of the idea. The output was better than anything I would have gotten if I had just said "write me a post about…"

What struck me afterward was that what I did with AI wasn't a prompting technique. It was a leadership technique.

MIT researchers recently confirmed what a lot of us are discovering the hard way — that half the performance disappointment with AI has nothing to do with the model. It comes from how users communicate with it. The people getting the best results weren't better technologists. They were better communicators.

It turns out a psychologist named Robert Rosenthal documented the exact same dynamic sixty years ago.

In 1965, Rosenthal ran an experiment at a California elementary school. He gave students an ordinary IQ test, presented it to their teachers as a special predictor of intellectual growth, and told them that certain students — in reality chosen at random — had scored exceptionally high and were poised to bloom academically.

A year later, the "bloomers" significantly outperformed their peers.

Nothing about the students had changed. What changed was how their teachers engaged with them. They gave those students more specific feedback. They were more patient with their mistakes. They communicated higher expectations, and the students rose to meet them.

Rosenthal's conclusion reached well beyond the classroom. The same factors, he said, operate with bosses and employees, with therapists and clients, with parents and children. The more clearly and warmly you communicate your expectations, the better the person receiving them will perform.

Same basic finding, sixty years apart. One with humans. One with AI.

We invest heavily in capability — we hire talented people, we adopt powerful AI — and then we underinvest in the communication that unlocks it. And when we don't get what we wanted, we look outward. We blame the hire. We blame the model. When the bottleneck was almost always the same place.

It's us. It's the clarity we didn't invest in. The “why” we assumed was obvious. The expected outcome we never actually defined.

The Fix:

So what does better look like? Three things, whether you're directing a person or prompting an AI:

  • Start with why. Context is the difference between a capable collaborator and a confused one. Tell them what you're trying to accomplish and why it matters.

  • Define success. Vague requests produce vague results. The clearer your picture of the destination, the better the path your collaborator will find to get there.

  • Invite, don't just instruct. The best output often comes when you share what you're uncertain about and let the other party contribute to shaping the answer.

The skill is the same. The discipline is the same. And the return — whether from your team or your AI — depends entirely on how much clarity you're willing to invest upfront.
