

The Prompt Isn't the Interface: Why Designers Are Rethinking AI UX

Natural language seemed like the future of interaction. Until we realized that asking users to write what they want isn't design.


When ChatGPT made natural language interaction mainstream, a lot of products assumed the interface of the future would be this: a text field and a send button. The user types what they want, the AI responds, problem solved.

That’s not interface design. That’s the absence of design.

The results show up in practice: users who don’t know what to ask, inconsistent results, frustration with responses that don’t do what they imagined. The prompt shifts cognitive load from the system to the person. And people have better things to do than learn how to “talk right” to a tool.

The problem isn’t AI. It’s the interaction model.

There’s a persistent confusion in the market: treating “conversational interface” as synonymous with “good AI UX”. They’re not the same thing.

A conversational interface is an interaction pattern. It can work well in some contexts (personal assistants, customer support, open-ended exploration). In others, it’s the worst possible choice.

Think about a scheduling system. The user wants to book a meeting for Tuesday at 2 PM with three people. In a traditional interface, they click the calendar, select the time, add attendees. In a prompt interface, they have to write: “Schedule a meeting for Tuesday, the 21st, at 2 PM with John, Maria, and Carlos.”

The second option sounds smarter. But it requires the user to remember exact names, specify the date unambiguously, and trust that the AI interprets everything correctly. If the AI gets it wrong, they have to rewrite. If the AI asks clarifying questions, they have to answer. That’s more steps, not fewer.

The illusion of simplicity

A blank text box looks simple because it has few visual elements. But interface simplicity isn’t the same as ease of use.

Prompt interface

  • No visual guidance
  • User has to know what to ask
  • Results depend on phrasing
  • Errors require manual rewrite
  • Learning curve is hidden

Structured interface

  • Visible, actionable options
  • System guides the next step
  • Results are predictable
  • Errors fixed with one click
  • Learning curve is transparent

Real simplicity reduces cognitive load on the user. False simplicity just hides complexity somewhere else — usually in the head of the person using it.

Where natural language actually works

I’m not saying prompts are always bad. I’m saying they’re a tool, not a universal strategy.

Natural language works well when:

  • The task is exploratory and scope isn’t predefined
  • The user has domain expertise and can articulate what they need
  • The interaction is iterative by nature (research, brainstorming, open analysis)
  • The cost of error is low and correction is easy

It fails when:

  • The task has known parameters that can be presented as options
  • The user isn’t an expert and needs guidance
  • Precision is critical (transactions, settings, sensitive data)
  • The feedback loop between intent and result needs to be immediate

What designers are doing differently

The pattern emerging in well-designed products is hybrid. It’s not “either prompt or traditional interface.” It’s using each pattern where it actually solves the problem better.

1. Assisted prompting

Instead of a blank box, the system offers contextual suggestions. Notion AI does this: when you open the assistant, it suggests actions based on the current content. “Summarize this page”, “Translate to English”, “Create a task list”. The user can ignore it and write what they want, but most don’t need to.
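A minimal sketch of the pattern in TypeScript, assuming a hypothetical page shape and helper names (this is not Notion’s actual API):

```typescript
// Assisted prompting: suggestions derived from the current context,
// with the free-form box kept as a secondary path.
// All names here are hypothetical.
type Suggestion = { label: string; prompt: string };

function suggestionsFor(page: { hasLongText: boolean; language: string }): Suggestion[] {
  const items: Suggestion[] = [];
  if (page.hasLongText) {
    items.push({ label: "Summarize this page", prompt: "Summarize the current page." });
  }
  if (page.language !== "en") {
    items.push({ label: "Translate to English", prompt: "Translate the current page into English." });
  }
  items.push({ label: "Create a task list", prompt: "Extract action items from the current page." });
  return items; // rendered as one-click buttons above the text box
}
```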

2. Structured controls with AI underneath

The user interacts with a traditional interface, and the AI processes in the background. Several of Figma’s AI features work this way: you select an element, click “remove background”, and the AI executes. No prompt. The verb is the click.
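Sketched in the same spirit, with both declared functions as hypothetical stubs (not Figma’s API):

```typescript
// Structured control with AI underneath: the click carries the intent,
// the model runs behind it. Both declared functions are hypothetical stubs.
declare function removeBackground(image: Blob): Promise<Blob>; // AI inference call
declare function applyEdit(elementId: string, image: Blob): void; // canvas update

async function onRemoveBackgroundClick(selected: { id: string; image: Blob }) {
  const edited = await removeBackground(selected.image); // no prompt involved
  applyEdit(selected.id, edited); // result lands where the user already is
}
```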

3. Confirmation before execution

For AI agents that execute actions, the safest pattern is showing what will happen before it happens. The user doesn’t write “delete old emails.” The system interprets, displays “I’ll delete 47 emails from before January 2024,” and waits for confirmation.
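A sketch of that plan-then-confirm loop, with hypothetical names:

```typescript
// Confirmation before execution: interpret the request, show the concrete
// effect, and act only after explicit approval. Names are hypothetical.
type Plan = {
  summary: string;              // e.g. "I'll delete 47 emails from before January 2024."
  execute: () => Promise<void>; // the actual, possibly destructive, action
};

async function runWithConfirmation(
  plan: Plan,
  confirm: (summary: string) => Promise<boolean>, // a UI dialog, never auto-approved
): Promise<void> {
  const approved = await confirm(plan.summary);
  if (!approved) return; // nothing executes without consent
  await plan.execute();
}
```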

4. Fallback to natural language

The text box exists, but as an escape hatch. The primary path is structured. The prompt handles edge cases the interface didn’t anticipate.
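One way to wire that up, as a hedged sketch (the router and all names are hypothetical):

```typescript
// Fallback to natural language: structured actions are matched first;
// only unmatched input falls through to the model.
declare function sendToModel(text: string): Promise<string>; // hypothetical AI call

type StructuredAction = { matches: (input: string) => boolean; run: () => void };

async function handleInput(input: string, actions: StructuredAction[]) {
  const action = actions.find((a) => a.matches(input));
  if (action) {
    action.run(); // primary path: predictable, guided
  } else {
    await sendToModel(input); // escape hatch for cases the UI didn't anticipate
  }
}
```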

The case of autonomous agents

With AI agents gaining traction, the interface problem gets more critical, not less.

An agent that executes tasks on behalf of the user needs clarity on scope, permissions, and limits. Delegating that to prompts is a recipe for disaster. The user types “organize my files”, the agent interprets one way, and when the user notices, half their documents have been moved to folders they don’t recognize.

The products getting this right treat the agent as a system that needs explicit configuration, not as an assistant that “understands” everything through natural language. A minimal checklist (with a configuration sketch after it):

  • Does the user know exactly what the agent can and can’t do?
  • Is there confirmation before destructive or irreversible actions?
  • Does the system show what it’s doing in real time?
  • Are there configurable scope limits (which folders, which data, which actions)?
  • Is the fallback to manual control accessible and obvious?
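One way to make those limits concrete is to encode scope as explicit configuration instead of leaving it to interpretation. A minimal sketch, with hypothetical field names (not any product’s real schema):

```typescript
// Explicit agent scope: the checklist above, encoded as configuration.
type AgentAction = "read" | "move" | "delete";

interface AgentScope {
  allowedFolders: string[];      // which folders the agent may touch
  allowedActions: AgentAction[]; // which verbs it may use
  confirmDestructive: boolean;   // require confirmation before irreversible actions
  showActivityLog: boolean;      // surface what it's doing in real time
}

const conservativeDefault: AgentScope = {
  allowedFolders: ["/inbox"],
  allowedActions: ["read", "move"], // no deletes until the user opts in
  confirmDestructive: true,
  showActivityLog: true,
};
```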

The cost of ignoring this

When the interface is just a prompt, the quality of experience depends entirely on the user’s ability to formulate requests. This creates two business problems.

First, the product’s perceived value becomes unstable. The same system seems “incredible” to people who know how to use it and “useless” to those who don’t. Adoption falls, churn rises, and the real cause stays hidden because it doesn’t show up in technical error metrics.

Second, support explodes. Users don’t understand why the AI “doesn’t work” when it actually works — just not the way they expected. Documentation doesn’t help because the problem isn’t lack of information. It’s lack of design.

What changes in practice

If you’re building or specifying a product with AI, the question isn’t “how will the user write the prompt?” It’s: what task does the user want to accomplish, and which interface pattern best reduces the friction between intent and result?

In many cases, the answer doesn’t involve a prompt at all. It means understanding context, offering relevant options, and letting the AI work in the background without the user having to think about it.

Natural language was an important innovation because it allowed interaction without learning specific commands. But treating it as the end state of interface design is confusing the means with the end. The end is solving the user’s problem with minimum effort. Sometimes that involves a prompt. Most of the time, it doesn’t.


Author

Raphael Pereira

Designer & strategist focused on performance-led digital experiences.
