
Human-centred design in the age of AI

AI is changing what it means to design products — but it has not changed what humans fundamentally need. If anything, it has made understanding human needs more important than ever.

25 November 2024 · 3 min read

When people talk about AI changing design, the conversation usually focuses on tools: generative interfaces, AI-assisted prototyping, personalised experiences, multimodal interaction. These are real and important shifts.

But there is a more fundamental question that gets less attention: does AI change what we are actually trying to design for?

The short answer is no. And understanding why is crucial for product teams trying to navigate this moment.

What does not change: human needs

Human needs are remarkably stable. People want to feel safe, connected, competent, and in control. They want to accomplish meaningful things without unnecessary friction. They want to be understood. They want to trust the systems they depend on.

These needs do not change because the technology changes. A customer trying to get a mortgage approved has the same underlying need whether they are dealing with a human advisor, a web form, or an AI assistant. They want clarity, confidence, and to feel that someone — or something — is genuinely working in their interest.

This is the insight at the heart of Jobs to Be Done: the jobs people hire products to do are stable even as the technology for doing those jobs changes dramatically.

What does change: the design surface

What AI changes is the surface on which you design — and the range of experiences that are now possible.

Previously, designing for human needs meant designing information architecture, interaction flows, visual hierarchy, and copy. The medium was the screen.

With AI, the design surface expands dramatically:

  • Conversations that adapt to individual context and history
  • Proactive assistance that anticipates needs before they are articulated
  • Personalisation that goes far beyond surface-level preferences
  • Interfaces that blur the line between tool and collaborator

This is exciting. It also creates significant new risks.

The new failure modes

AI-enabled products introduce failure modes that did not exist before, or that previously existed only at a much smaller scale:

Opacity. When an AI makes a recommendation or decision, users often cannot understand why. This undermines trust — especially in high-stakes domains like health, finance, or legal matters.

Over-reliance. Well-designed AI assistance can become a crutch. Users lose the ability to function without it, which is fine until the system fails or gives bad advice.

Misalignment. AI systems optimised for engagement, retention, or conversion can systematically work against user interests. This is not hypothetical — it is documented and widespread.

Exclusion. AI systems trained on historical data inherit historical biases. Populations underrepresented in training data get worse experiences.

Good design in the AI era requires active attention to all of these. Not as edge cases, but as central design concerns.

Human-centred AI design in practice

The practical implication is that human-centred design becomes more important, not less, as AI becomes more capable. The skills of deep empathy, ethnographic research, usability testing, and genuine curiosity about human behaviour are the skills that will prevent AI products from becoming extractive, opaque, or harmful.

The teams that will build the best AI-enabled products are not those with the most sophisticated models. They are the teams that most deeply understand what their customers are trying to accomplish, what they fear, what they trust, and what they need to feel confident.

The technology is the easy part. Understanding humans never was.

Want to talk through this?

Book a free discovery call with the Berst Consulting team.
