How to design AI products

May 5, 2025


👋 Welcome to today’s edition of Build & lead (formerly the CTO blueprint), a newsletter by Appolica. Every two weeks, we dive deep into the tech, product, and leadership challenges that keep founders up at night.

When we talk about good products, we rarely talk about the technology behind them, or even the way they look. A great product is (almost) always one that’s easy, even obvious, to use. That’s even more important in the age of AI.

However, only a few AI products are actually intuitive. Most are smart, and some are even usable. But that’s not enough.

Most founders are still designing AI like regular software. They assume users will figure it out. That people will know what the AI can do, what it can’t, and how to talk to it. But they won’t (and they shouldn’t have to).

AI design is a new skill set. It’s about making chaotic systems feel predictable, and about meeting the expectations of users who are themselves still figuring things out. AI products need to build trust without overpromising, through interfaces that guide rather than overwhelm.

That being the case, designing AI products is not the same as designing SaaS. The interface isn’t just a wrapper around the model; it is the product. If people can’t figure out what the AI does, or how to use it, the tech doesn’t matter. Most teams overlook this. That’s why their products feel like black boxes.

Here’s how to do AI design right.

Why AI products need a different kind of UI

Product builders still design AI like features, not systems: set up a flow, add a button, drop in a model, ship. But AI isn’t deterministic. It’s unpredictable. It doesn’t behave the same way twice, even with the same input.

That’s why AI design requires embracing unpredictability. Through the UI, give users intuitive paths to adjust and correct the system’s (inevitable) missteps, so they feel more confident using the product.

You need to design for ambiguity. Guide users through open-ended interactions. Communicate uncertainty without eroding trust. Give people ways to correct the AI, not just react to it. If you design your product like it’s just triggering a function, you’re setting users up for confusion and failure.

And here’s the real problem: AI doesn’t fail like regular software. It fails in unexpected ways, often with a lot of misplaced confidence. Sometimes it’s wrong but sounds right. Sometimes it breaks in ways that feel careless. If your design doesn’t account for that, users will quickly lose trust (and they probably won’t come back).

Designing conversations

For decades, user interfaces have guided us along well-defined paths: familiar structures that gave users confidence in knowing exactly where they were and what they could do next.

This paradigm shaped not just how products were built, but how people interacted with them. The exact sequence of actions when someone clicked a button, how information appeared on each screen, and how the interface responded to common tasks all made interacting with software predictable and intuitive.

AI and large language models (LLMs) fundamentally changed this.

We’ve moved from static interfaces to dynamic conversations. Users now interact with AI through natural language. But conversational design is harder than traditional UI, and it requires an understanding of human psychology.

Prompting, for instance, is akin to writing a detailed specification. Clearly articulating exactly what you want and controlling the outcome can be difficult. Two users searching for the exact same outcome might end up with drastically different results, depending entirely on how they phrased their requests. This creates a new level of unpredictability in product experiences.

This shift from predictable, deterministic interfaces to something more freeform and unpredictable poses a new and significant challenge for designers: without clearly defined user journeys, how do we deliver products that are high-quality, intuitive, and satisfying?

There’s no clear answer yet, but there are principles worth holding onto: expect unpredictability, be transparent, and be honest. In design, these go a long way.

Be transparent in every interaction

AI is inherently opaque. Users can’t easily understand how it makes decisions. This creates frustration. To deal with that, you must actively design transparency into every interaction. You can’t afford to have users wonder if your AI just did something by itself or why it chose one option over another. If users have to guess what happened, you've already failed.

Transparency isn’t optional. It should be embedded into every interaction. Here’s what this looks like:

  • Labels and annotations: Clearly indicate when content was generated, suggested, or modified by AI (e.g., "AI suggested" tags next to recommendations).

  • Confidence scores: Visibly show how certain the AI is about its output (e.g., “80% confident this match is correct”).

  • Explanations: Offer short, human-readable reasons for actions (e.g., "We recommended this article because you frequently read about UX design").

  • Revision history: Let users see and revert AI-made changes easily, instead of hiding them.

  • User prompts and confirmations: If the AI is taking an action on the user’s behalf, always ask for review and confirmation first.

  • Process indicators: Show when the AI is "thinking" or analyzing data (progress bars, loading indicators, or visible steps in the workflow make a huge difference).

  • Error visibility: If the AI fails or is unsure, surface it openly (e.g., "I'm not confident about this result. Would you like to refine your input?").

Good AI interfaces never leave users wondering what just happened. They show it, explain it, and offer ways to act on it.
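To make this concrete, here is a minimal TypeScript sketch of how these cues might travel together in a product’s data model. Everything here is illustrative: the `AISuggestion` type and `describeSuggestion` helper are hypothetical names, not a real API.

```typescript
// Illustrative sketch only: names and shapes are hypothetical, not a real API.
type AISuggestion = {
  id: string;
  content: string;               // what the AI produced
  source: "ai-generated" | "ai-suggested" | "ai-modified"; // label/annotation
  confidence: number;            // 0..1, shown to the user as a percentage
  explanation: string;           // short, human-readable reason
  requiresConfirmation: boolean; // never auto-apply without review
};

// Render the transparency cues next to the suggestion itself.
function describeSuggestion(s: AISuggestion): string {
  const confidencePct = Math.round(s.confidence * 100);
  const lowConfidence = s.confidence < 0.5;
  return [
    `[${s.source}] ${s.content}`,
    `${confidencePct}% confident. ${s.explanation}`,
    lowConfidence
      ? "I'm not confident about this result. Would you like to refine your input?"
      : "",
  ].filter(Boolean).join("\n");
}
```

The point is structural: the label, confidence, and explanation travel with the output itself, so the UI can never show a result without its context.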

Always give users control

One of the easiest ways to make users lose trust in an AI product is to take away their sense of control.

A lot of teams do this without realizing it. They let the AI make changes without asking, rewrite things automatically, or act without user input. It might feel efficient, but it usually backfires.

Why? Because:

  1. Users don’t like surprises. If they don’t understand what just happened (or why), they’ll assume something’s broken.

  2. Control builds confidence. When users can accept, reject, or tweak what the AI suggests, they feel in charge. That makes them more likely to keep using it (see the sketch after this list).

  3. Transparency creates trust. When users can see how and why the AI made a decision, it builds confidence in the system. A clear explanation is often more valuable than a perfect answer.
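Here is what that review-before-apply pattern might look like in code. This is a sketch under assumptions, not a prescribed implementation; `ReviewableDocument` and `Revision` are invented names.

```typescript
// Hypothetical sketch of a review-before-apply flow with undo.
type Decision = "accepted" | "rejected" | "edited";

class Revision {
  constructor(
    readonly before: string,
    readonly after: string,
    readonly decision: Decision,
  ) {}
}

class ReviewableDocument {
  private history: Revision[] = [];
  constructor(private text: string) {}

  // The AI proposes; nothing changes until the user decides.
  applySuggestion(suggestion: string, decision: Decision, edited?: string) {
    if (decision === "rejected") return; // the user stays in charge
    const next = decision === "edited" ? (edited ?? suggestion) : suggestion;
    this.history.push(new Revision(this.text, next, decision));
    this.text = next;
  }

  // Every AI-made change is visible and reversible.
  undo() {
    const last = this.history.pop();
    if (last) this.text = last.before;
  }

  get value() { return this.text; }
}
```

Notice that rejection is the cheapest path: the AI’s suggestion simply never touches the document, and accepted changes are always one `undo()` away.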

Building AI products people actually care about

Users don’t stick with a product just because it works. They stick with it because it feels right. AI that feels generic or robotic is easy to forget. But AI that reflects a user’s voice, personality, or values? That’s something people keep coming back to.

A great example of this is Granola, a note-taking app. It stands out not through automation alone, but by preserving the user’s personal voice and intent. Instead of rewriting what you write, it helps you express yourself more clearly. Users feel respected and empowered rather than replaced, and the payoff is emotional resonance, loyalty, and word-of-mouth growth.

You don’t get that from slick UIs or fast performance alone. You get it from designing experiences that understand the human behind the user.

Personalization is non-negotiable

Users don’t want a smart product. They want a product that feels built for them.

Generic experiences frustrate users. But personalization is more than remembering a user’s name: it’s your product learning what users like, adapting to how they work, and getting better over time.

Great AI products don’t show off how smart they are; they make the user feel smart. Good AI design doesn't overwhelm with features and complexity. Instead, it clearly communicates its capabilities, limits, and reasoning, so users always feel in control.

Here are practical ways to achieve this across any AI product:

  • Transparency: Communicate the confidence level and certainty of your AI’s decisions clearly (e.g., “I’m 80% sure this is accurate.”).

  • Explainability: Provide meaningful explanations about AI-generated outputs (e.g., "We recommended this because of your previous choices.").

  • Control & feedback: Offer quick, easy ways to correct, override, or refine results (undo, retry, feedback buttons), so users never feel trapped or powerless.

  • Clear expectations: Make it obvious what kind of assistant your product is (e.g., a co-pilot offering suggestions, a tool that learns and adapts, or a fully autonomous system). When users know what the AI can (and can’t) do, they’re less likely to feel confused or misled.

These aren't minor features. They're foundational. Without them, users won't trust or rely on your product.
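One lightweight way to set those expectations is to treat them as data the product declares up front. The sketch below is hypothetical (`AssistantProfile` is an invented name), but it shows how capabilities and limits could be made explicit enough to surface during onboarding.

```typescript
// Hypothetical "expectations manifest" a product might surface at onboarding.
type AssistantMode = "co-pilot" | "adaptive-tool" | "autonomous-agent";

interface AssistantProfile {
  mode: AssistantMode;             // what kind of assistant this is
  can: string[];                   // capabilities, stated up front
  cannot: string[];                // limits, stated just as plainly
  actsWithoutConfirmation: boolean;
}

const writingAssistant: AssistantProfile = {
  mode: "co-pilot",
  can: ["suggest edits", "summarize drafts", "explain its suggestions"],
  cannot: ["publish on your behalf", "access files you haven't shared"],
  actsWithoutConfirmation: false, // a co-pilot suggests; it never auto-applies
};
```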

How we're designing AI products at Appolica

We often work with early-stage AI teams who have something powerful, but struggle to make it usable. The problem usually isn’t the model, it’s the experience around it. Users are confused, onboarding is clunky, and the product needs too much explanation.

We help fix that with short, focused design sprints that improve usability fast.

That might mean:

  • A fast UX audit to identify friction points.

  • A redesign of a key workflow that users keep getting stuck on.

  • Tightening up the product experience.

The goal is always the same: help people understand what the AI does, trust it, and make the product easy to use.

The bottom line

As AI agents become more capable, users won’t just click buttons. They’ll delegate tasks, monitor progress, and step in when things go off track. This new interaction model demands more than just good UX. It requires a new kind of interface: one that makes the AI’s goals clear, defines its boundaries, and makes it easy to review and validate outputs.

In this context, design becomes critical. We need to create ways for people to understand, supervise, and trust systems that are increasingly autonomous. Interfaces should help users visualize what the agent is doing, why it’s doing it, and what happens next, without overwhelming them.
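As a rough sketch of what that could mean in practice, an agent UI might be driven by a task model that carries the goal, the boundaries, and a per-step rationale, with a hook for human review. The names below (`AgentTask`, `AgentStep`) are illustrative assumptions, not an established pattern.

```typescript
// Sketch (all names illustrative) of the state an agent UI might expose.
type StepStatus = "planned" | "running" | "done" | "needs-review";

interface AgentStep {
  description: string; // what the agent is doing
  rationale: string;   // why it's doing it
  status: StepStatus;
}

interface AgentTask {
  goal: string;         // the delegated objective, stated plainly
  boundaries: string[]; // what the agent is not allowed to do
  steps: AgentStep[];   // visible progress, not a black box
}

// Surface anything that needs a human decision before the agent continues.
function pendingReviews(task: AgentTask): AgentStep[] {
  return task.steps.filter((s) => s.status === "needs-review");
}
```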

This requires a shift in how we work. We can’t always start with clear requirements, because we don’t fully know what AI is capable of yet. Instead of designing backwards from fixed user needs, we need to design through exploration. Build, test, learn, and iterate.

Let's build your product together.

Ready to start your project? We're here to help.