Is it a matter of taste?

Maybe it's not about prompting. Maybe it's about taste. I don't have an answer.

I've been watching people use AI for months now. Same tools, same access, wildly different results. The gap isn't in their prompting technique; it's in their ability to distinguish quality from mediocrity.

I think it might be about taste. But I'm not sure yet.

The Pattern I See

I've watched two people work with the same AI tool, on the same tasks. Both follow the "prompt engineering" rules: be specific, provide context, assign a role, and so on.

Pippo (Person A) gets generic, I dare say mediocre, outputs.
Accepts them. Doesn't even recognize them as mediocre. Moves on.

Saro (Person B) gets a very similar output.
Frowns and asks the AI to reformulate, adding more detail. Gets something better. Iterates again and eventually lands on something really good.

Same initial prompt. Same tool. Completely different results.

So the difference isn't in the initial prompt but in knowing when to keep going and when to stop. I don't even think it's a prompt engineering skill. I think it's... taste? The ability to look at something and say "no, this isn't right, it's not good enough."


What I Mean by Taste

Let me put it this way:

- knowing when the output is really good vs "seems okay"
- sensing in which direction to push when it's not right
- recognizing "AI flavor"
- knowing when to continue iterating and when to stop

It's not a checklist you can follow. It's more like... when you look at a design and just feel something's off. Or read code and feel it's fragile even though it works. The famous Vibe.

Three Levels of Taste

I’ve sketched out how this develops. I don't know if these are the right categories, but here's what I’m seeing:

Level 1: AI Blind

- Accepting output exactly as the AI spews it
- Unable to grasp what's wrong
- Either abandoning or randomly rephrasing
- Copy/pasting AI's output

Level 2: AI Aware

- Spotting obvious issues ("too formal," "missed the point")
- Iterating with specific corrections
- Mixing AI output with their own thoughts
- Knowing when to stop

Level 3: AI Native

- Spotting issues others don't see
- Guiding AI toward something better
- Knowing when AI is the wrong tool
- Creating things that exceed mediocrity

So What?

If taste is the real skill, then AI tools have a taste barrier to entry.

People without taste → get mediocre results → conclude "AI isn't useful" → give up

People with taste → get great results → improve their work, and their whole way of working.

The gap is huge. And it's not about technical competence; it's about judgment. Which means... what? How do you help people develop taste faster? Can you build tools that give people taste? Will this become a gatekeeper, like AI literacy?

Current State

I’m still trying to figure it out. I might be completely wrong about the "taste" framework.

But now, when people ask me "how do I get good with AI?", I no longer say "learn better prompts".

I say something like: "Look at a lot of outputs. Learn to see the difference. Practice identifying and writing what’s wrong. Build reference points. Iterate with intention."

It's judgment, not technique.

And judgment comes from repetition. Practice beats talent (?).


*I'm still processing. Probably missing something big. Meanwhile, I lose sleep over it.*
