We changed the way we build AI products. You should too

August 1, 2025

AI isn’t a feature or a commodity. It’s a shift in how software is built, how products feel, and how you communicate with users. More than anything, it’s a mindset.

And yet, most teams don’t adjust for that. They treat AI like something you layer on top of what already exists, with no real change in how their product works or how users are supposed to interact with it.

This leads to products that feel confusing, hard to trust, and unlikely to improve over time.

We realized early on that this wouldn’t work. So we changed the way we build. Not just the tech stack, but the thinking behind it. Because once you build with AI at the core, the way you ship, test, and collaborate starts to look very different:

  • You no longer need to wait on a tech team to test an idea. With AI, a PM, designer, or marketer can generate a working version, try it out, and involve engineers only if it’s worth building properly.

  • You don’t need a senior designer to make polished UIs. With tools like shadcn and AI-based generators, even a non-designer can create usable interfaces fast.

  • You can understand unfamiliar code in minutes. Just select the function, ask AI to explain it, and you’ll get context that used to take hours to figure out.

  • You can spot bugs faster. AI doesn’t just speed up fixes; it surfaces edge cases and logic issues you might have missed.

  • For us, AI has replaced low-code tools: generating custom code is just as fast, and you avoid the limitations of prebuilt systems.

Here’s how that changed our processes across design, engineering, and go-to-market.

Don’t ship features, ship systems that evolve

Writing code hasn’t gone away. But the way we write it, and what we expect from engineers, is evolving fast.

Yes, AI helps. You can use tools like Cursor to write boilerplate faster, scaffold components, refactor legacy code, or generate tests. That part is real, and it saves time.

But if you stop there, you’ll fall behind. What AI still can’t do is make decisions for you. It doesn’t understand long-term impact. It doesn’t know which tradeoffs make sense for your system. It can’t think in layers, or weigh what’s fast versus what’s maintainable.

So we don’t treat AI as a shortcut. We treat it as leverage. It helps us move faster, but only when we stay in control.

Recently, we started building a new product called Clearcue. The idea came from our own pain. We were running campaigns across different channels, but we couldn’t tell which companies were genuinely interested. The signals were there. A few visits from one domain, someone liking a post, someone else reading a blog. But they were scattered and easy to miss.

We wanted to connect those signals automatically and surface intent early without relying on forms or attribution hacks. That meant working with real-time data, multiple channels, and behavior that doesn’t follow a fixed path.

To do that, the system needs to process large amounts of real-time data, analyze user behavior, assign intent scores, and turn that into clear insights for marketing teams. The output needs to be accurate, timely, and easy to act on.

It’s not something you can build by prompting and hoping for the best. So we broke the system down:

  • We isolated each layer so front-end, back-end, and data pipelines could evolve independently.

  • We separated model experiments from production logic to test new ideas without touching the interface (see the sketch after this list).

  • We used Cursor to generate design components directly and avoid unnecessary back-and-forth with the design team.

  • We took it even further by giving designers access to the codebase, so they could directly style the product using Cursor.

  • We also set things up so product managers can test directly with AI. They simulate different user scenarios and explore how the product behaves without needing help from the tech team.
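
To make that second point concrete, here is a minimal sketch of the kind of boundary we mean, in TypeScript. The names (IntentEvent, IntentScorer, the channel list, the 0.5 threshold) are illustrative placeholders, not Clearcue’s actual schema or code.

```typescript
// Production code depends only on this contract, never on a specific model.
interface IntentEvent {
  companyDomain: string;            // resolved from IP, email, or enrichment
  personId?: string;                // anonymous visitor or known contact
  channel: "web" | "linkedin" | "webinar" | "search";
  page?: string;                    // e.g. "/pricing", "/docs/getting-started"
  timestamp: Date;
}

interface IntentScore {
  companyDomain: string;
  score: number;                    // 0..1, higher means stronger buying intent
  reasons: string[];                // human-readable evidence behind the score
}

interface IntentScorer {
  score(events: IntentEvent[]): Promise<IntentScore[]>;
}

// The production pipeline: fetch events, score them, publish to the dashboard.
// Swapping in an experimental model means passing a different scorer;
// nothing downstream changes.
async function scoreAndPublish(
  scorer: IntentScorer,
  fetchEvents: () => Promise<IntentEvent[]>,
  publish: (scores: IntentScore[]) => Promise<void>
): Promise<void> {
  const events = await fetchEvents();
  const scores = await scorer.score(events);
  await publish(scores.filter((s) => s.score >= 0.5)); // surface only strong signals
}

// A deliberately simple baseline so the pipeline runs before any real model exists.
const baselineScorer: IntentScorer = {
  async score(events) {
    const companies = new Set(events.map((e) => e.companyDomain));
    return [...companies].map((companyDomain) => ({
      companyDomain,
      score: 0.5,
      reasons: ["baseline: activity observed, no model applied"],
    }));
  },
};
```

Because experiments only have to satisfy the IntentScorer contract, a new model can be benchmarked against the baseline offline and promoted without the pipeline or the UI changing.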

This didn’t just speed us up. It made the team more focused. Less time syncing between roles. More time solving real product problems.

But none of this works if you expect the tools to do the thinking for you. AI can only amplify your clarity. It can’t create it.

That’s why deep technical skill is more important than ever. Writing code isn’t enough. You need to understand how systems behave, how memory works, how to think about performance, and how to debug without guesswork. The framework might change, but the fundamentals don’t.

We’re not building apps with static screens anymore. We’re building systems that learn, adapt, and interact in unpredictable ways. And that requires engineers who can do more than just write code.

You need AI-first design

Our design process had to evolve too.

We no longer think of design as just creating clean UIs or arranging buttons on a screen. Now we ask a different question: how do we design a product that uses the full strengths of AI?

That means fewer dropdowns and endless configuration screens. Instead, the product should proactively suggest solutions, ask for quick confirmations, and let users take action through a conversational interface. The UI is no longer just something you click. It becomes something you talk to, correct, or guide.

Designing this way forces you to think differently. You do not build fixed flows. You build flexible systems that adapt to what the user wants. You do not assume users know where to click. You help them figure out what to do next.

Here’s what that looks like in practice:

  • We organize the design system with clear naming and logic. AI tools rely on patterns, and components with vague or one-off names break them (see the naming sketch after this list).

  • Designers define core primitives and system rules instead of full mockups. Engineers then use those to build features that deliver business value.

  • Designers work side by side with engineers. No long handoffs. Changes happen in real time using Cursor.

  • Designers often guide the AI prompts themselves. That makes it faster and more aligned than sending over static designs.

  • We focus on structure and clarity, not just visual polish. The goal is to make the UI easy to generate, adapt, and extend with AI.
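
Here is a minimal sketch of what “clear naming and logic” can look like in practice. The variant names and Tailwind classes are made up for illustration; the point is the small, predictable vocabulary, not the specific styles.

```typescript
// Design primitives as a small, predictable vocabulary: intent + size.
export const buttonVariants = {
  intent: {
    primary: "bg-blue-600 text-white hover:bg-blue-700",
    secondary: "bg-gray-100 text-gray-900 hover:bg-gray-200",
    destructive: "bg-red-600 text-white hover:bg-red-700",
  },
  size: {
    sm: "h-8 px-3 text-sm",
    md: "h-10 px-4 text-base",
    lg: "h-12 px-6 text-lg",
  },
} as const;

export type ButtonIntent = keyof typeof buttonVariants.intent;
export type ButtonSize = keyof typeof buttonVariants.size;

// One function composes the classes; every button in the product,
// including AI-generated screens, goes through it.
export function buttonClass(
  intent: ButtonIntent = "primary",
  size: ButtonSize = "md"
): string {
  return `inline-flex items-center rounded-md ${buttonVariants.intent[intent]} ${buttonVariants.size[size]}`;
}
```

Because every button resolves through one function with a few named options, an AI tool asked for “a small destructive button” maps the request onto existing variants instead of inventing one-off styles.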

Designing for AI-first products is not the same as designing for traditional SaaS. People need to understand what the AI is doing, how to correct it, and what to expect when it goes wrong. That requires a different mindset.

So we ask a few key questions every time the model makes a decision:

  • What context does the user need? We provide just enough detail to make the output understandable without overwhelming the user.

  • Can they verify or change the result? AI gets things wrong. That’s normal. But if users can’t intervene, trust fades. We make it easy to flag, correct, or remove what’s not useful.

  • What happens when the model fails? Failure is part of the process. We make sure users can always recover, with fallback paths that are as smooth as the ideal flow.

We don’t hide the AI behind static UI. We expose it. If something is uncertain, we say so. If something needs explanation, we offer it. If something fails, we give the user control.
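
One way to hold ourselves to those questions is to encode them in the contract every AI-generated suggestion has to satisfy before it reaches the screen. A minimal sketch, with illustrative names and a made-up confidence threshold:

```typescript
// Every AI-produced suggestion carries the context the user needs to judge it,
// plus the handles they need to correct or dismiss it.
type SuggestionStatus = "ok" | "uncertain" | "failed";

interface AiSuggestion<T> {
  status: SuggestionStatus;
  value?: T;                             // absent when the model failed
  confidence?: number;                   // 0..1, surfaced to the user when low
  explanation: string;                   // short "why you are seeing this" context
  actions: {
    accept: () => void;
    edit: (corrected: T) => void;        // the user overrides the model
    dismiss: (reason?: string) => void;  // feedback for later evaluation
  };
  fallback?: () => void;                 // manual path when the model fails
}

// The rendering rule stays explicit: say when we are uncertain,
// and always leave the user a way out.
function bannerFor(s: AiSuggestion<unknown>): string {
  if (s.status === "failed") return "We couldn't generate this. Continue manually?";
  if (s.status === "uncertain" || (s.confidence ?? 1) < 0.6) {
    return "We're not fully sure about this one. Please review before applying.";
  }
  return "Suggested for you. You can edit or dismiss it.";
}
```

Nothing reaches the screen without an explanation, a way to correct it, and a fallback when the model fails.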

Rethinking go-to-market for AI products

Our approach to go-to-market has always been fast. We’ve tested ideas quickly, launched MVPs, and iterated based on feedback. But building and growing AI products forced us to rethink even that.

Now we can launch a new landing page in a few hours. We generate the design, copy, and visuals using AI, adjust based on how people respond, and publish updates the same day.

We run early tests with simulated personas in ChatGPT. These personas act like our target buyers. We feed them landing pages, onboarding flows, or value props and see how they respond. This helps us catch confusion and spot objections early, without spending anything on ads or sales outreach.
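
A simulated persona can be as simple as a system prompt plus the asset you want to test. We do this in ChatGPT, but the same test can be scripted against the API. A rough sketch with the OpenAI Node SDK; the persona text, model choice, and questions are examples, not our actual prompts:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function simulatePersona(landingPageCopy: string): Promise<string> {
  const persona =
    "You are a VP of Marketing at a 200-person B2B SaaS company. " +
    "You care about pipeline quality over volume and have been burned " +
    "by intent-data vendors before. Stay skeptical and specific.";

  const response = await client.chat.completions.create({
    model: "gpt-4o", // any capable chat model works here
    messages: [
      { role: "system", content: persona },
      {
        role: "user",
        content:
          `Here is a landing page draft:\n\n${landingPageCopy}\n\n` +
          "React as this buyer: what is unclear, what would make you leave, " +
          "and what single change would make you book a demo?",
      },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```

Running a handful of personas over the same draft takes minutes and surfaces objections before any money goes into ads or outreach.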

Reddit is often where people talk openly about their frustrations, needs, and goals. We’re building a tool to analyze these discussions at scale. AI helps us extract patterns from thousands of posts and uncover insights that would take hours to find manually (this tool is nearly ready and we plan to release it soon on https://clearcue.ai).

We also needed a better way to track what people actually do, not just what they say. That’s how Clearcue started. We built it to see how buying intent forms across multiple channels and people, before anyone fills out a form or books a call.

Instead of focusing on clicks or open rates, we now watch for:

  • one person reads your blog, while another from the same company visits your pricing page;

  • a founder likes your LinkedIn post, and their teammate checks your website twice that week;

  • someone signs up for your webinar, but a colleague is already exploring your product docs;

  • a decision-maker searches for your brand, while someone else from the team spends time on your solutions page;

  • traffic from the same company builds across several days, with visits to your blog, homepage, and careers page.

These patterns don’t show up in a CRM, but they’re the early signals we care about. Clearcue makes them visible so we can act sooner.
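
The detection behind patterns like the ones above can start out surprisingly simple: group raw signals by company, then look at how many people, channels, and days are involved. A sketch with placeholder fields and thresholds, not Clearcue’s tuned logic:

```typescript
// Raw, scattered signals: a blog read here, a pricing visit there.
interface Signal {
  companyDomain: string;
  personId: string;        // anonymous visitor id or known contact
  channel: "blog" | "pricing" | "docs" | "linkedin" | "webinar" | "search";
  timestamp: Date;
}

interface AccountPattern {
  companyDomain: string;
  people: number;          // distinct people involved
  channels: number;        // distinct channels touched
  daysActive: number;      // span of activity in days
  surging: boolean;        // activity looks like intent forming
}

// Fold individual signals into one account-level view, then flag accounts
// where several people touch several channels within a week.
function detectPatterns(signals: Signal[]): AccountPattern[] {
  const byCompany = new Map<string, Signal[]>();
  for (const s of signals) {
    const bucket = byCompany.get(s.companyDomain);
    if (bucket) bucket.push(s);
    else byCompany.set(s.companyDomain, [s]);
  }

  return [...byCompany.entries()].map(([companyDomain, group]) => {
    const people = new Set(group.map((s) => s.personId)).size;
    const channels = new Set(group.map((s) => s.channel)).size;
    const times = group.map((s) => s.timestamp.getTime());
    const daysActive = (Math.max(...times) - Math.min(...times)) / 86_400_000;

    return {
      companyDomain,
      people,
      channels,
      daysActive,
      surging: people >= 2 && channels >= 2 && daysActive <= 7, // placeholder thresholds
    };
  });
}
```

Anything this aggregation flags is a candidate for earlier outreach, well before a form fill ever shows up in the CRM.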

We use the same approach for the AI products we build. Every part of the go-to-market process is shaped by simulation, feedback, and adaptation:

  • We use AI to analyze feedback and behavior in minutes.

  • We update onboarding and messaging based on what people actually do.

  • We rank feedback by how consistent it is across different sources. AI helps us find those patterns quickly.

  • We design experiments that would normally take a full team, but now take one person.

  • We use AI to find language gaps: what our users say vs. how we write.

  • We test channel–message fit before going broad.

  • We build SEO content with AI-assisted research and review it with GPT-based personas to check clarity and angle.

  • We also revisit old ideas often. Something that doesn’t perform today might work next week.

We’re not following traditional GTM playbooks. They assume clean funnels, fixed outcomes, and feature lists. AI changes how people interact with products, which means we’ve had to change how we reach them, how we test what works, and how we grow.
