What do you actually want from AI?

The biggest barrier to AI adoption isn't skill or access. It's the gap between knowing you should start and actually starting, and no framework will close it for you.
Designers aren't struggling with AI because the tools are hard. They're struggling because the ground shifted beneath something they spent years mastering, and nobody acknowledged how disorienting that feels. The barrier isn't technical. It's emotional.
In this article: we look at why comfort and fear keep designers stuck, what the uncomfortable space between "I know I should" and "I'm doing it" actually looks like, why targeted practice matters more than general exploration, and what it takes to move from learning into mastery.
Comfort is the real barrier
The most common reason designers don't engage with AI isn't that they lack access or information. It's that the way they work right now still feels fine. Deadlines get met. Stakeholders are satisfied. The tools they know produce reliable results.
Why risk disrupting something that works?
This is comfort as a barrier, not comfort as a reward. When your current skills still produce acceptable outcomes, there's no urgency to change. The problem is that "acceptable" quietly becomes the ceiling. And by the time the gap between what you can do and what the work demands becomes visible, you're playing catch-up instead of leading.
Comfort doesn't look like avoidance. It looks like competence. That's what makes it so hard to see in yourself.
What fear actually sounds like
Underneath comfort, there's often something less pleasant. The way design work used to function is shifting, and even positive change involves loss. Familiar routines feel less certain. Expertise that took years to develop feels suddenly fragile. The definition of "good work" is moving.
In coaching, once we get past the practical objections (accuracy concerns, workflow disruption, not enough time), what I hear more often is something closer to grief. And grief has recognizable patterns.
Some people hold tighter to how things were. Some deflect: "I'll get to it once the tools are more mature," or "We're waiting for clearer guidelines from leadership," deferring to a future condition that may never arrive. Some direct frustration at the tools themselves, as if the real problem is that AI isn't good enough yet.
Notice: these aren't strategic objections. They're emotional responses to change. The distinction matters because strategic objections respond to better information. Emotional resistance responds to safety, permission, and time.
The liminal space nobody talks about
There's a gap between comfort and real learning that most people try to skip. You've left the familiar, but you haven't arrived at competence yet. Everything feels awkward. You're slower than you used to be. Your output is worse than what you know you're capable of.
This is the liminal space: the disorienting middle ground where the old way no longer works but the new way hasn't clicked. It's where most designers quit, not because they failed, but because the discomfort felt like evidence that this wasn't for them.
Consider this scenario. A senior designer spends an afternoon prototyping a concept with an AI tool. The results are rough. Unimpressive. Not something she'd show a stakeholder. She closes the tool and doesn't open it again for three weeks.
What happened wasn't a tool failure. A professional accustomed to producing excellent work encountered a context where excellence wasn't immediately available, and the discomfort was enough to stop the process entirely.
The liminal space is supposed to feel like this. Confusion, frustration, and slow progress aren't signs that you're doing it wrong. They're signs that you're doing the part that actually changes you.
Permission to produce something rough. Permission to throw away your first three attempts. Permission to call a failed afternoon "learning" instead of "waste." These sound small, but they're the difference between people who push through the liminal space and people who retreat to comfort.
Nobody moves from zero to one without unglamorous, repeated commitment to the outcome. AI is no different.
Learning requires a target, not just exposure
Pushing through discomfort matters, but directionless exploration only gets you so far. The designers who build real fluency don't just "play around with AI." They practice with a specific goal.
Targeted practice means choosing a real task, defining what success looks like, and using the tool to get there. Not experimenting in the abstract. Not following someone else's tutorial. Solving your own problem, with your own constraints, and learning from what works and what doesn't.
The difference is significant. General exploration teaches you what buttons to press. Targeted practice teaches you judgment: when to use AI, when not to, what to trust, what to override. Judgment is what separates someone who uses AI from someone who is good with AI.
Next time you open an AI tool, try writing down three things: what you want to achieve, how you'll know it worked, and what you'll do with the output. Sixty seconds of clarity before you start saves hours of wandering.
If you can't answer clearly, that's useful information. Maybe you need more clarity on the problem before you need a tool.
Mastery is on the other side
Comfort, fear, the liminal space, learning with a target. These aren't stages you pass through once. But there is a progression, and on the far side of sustained practice is something most designers haven't considered: mastery.
Mastery with AI doesn't mean knowing every feature or writing perfect prompts. It means the tool becomes transparent. You stop thinking about the interface and start thinking about the outcome. You develop a feel for where AI accelerates your work, where it falls short, and how to bridge the gap with your own judgment. No course or article can deliver this. It only comes from repeated practice on real work, over weeks and months.
Designers operating at this level aren't waiting for AI strategies from leadership. They're building things that represent their point of view. They decide what AI does in their work. They choose the problems. They set the standards for where human judgment matters most, because they've done the practice to know.
The question underneath the question
"What do you actually want from AI?" is really asking something more personal: what do you want to build, and are you willing to feel uncomfortable long enough to get there?
AI amplifies whatever you point it at. Point it at vague exploration, and you'll produce impressive outputs with no through-line. Point it at a specific goal, and you'll build a body of work that demonstrates your thinking. That work is yours. Not your company's. Not the tool's.
The opportunity isn't faster workflows or cooler prototypes. It's the chance to own something. But that only happens on the other side of the liminal space, not before it. On the other side of targeted practice, not general reading. On the other side of permission you gave yourself, not permission someone else granted.
Start with a goal. Accept that the middle will be messy. Keep going anyway.