The Research That Talks Back.

Years of user knowledge. Finally close enough to use.

Built so the whole organisation can think with its users, not about them.

Client:

Danfoss

Role:

Product Design - Research Infrastructure & AI Tooling
Product Design - Visual Systems & AI Tooling

Date:

2025

Overview

Danfoss had years of user knowledge. Interviews, behavioural data, role segmentation across a global industrial organisation. It sat in documents. Product teams moved faster than research access allowed, and by the time someone needed a user perspective mid-sprint, the data was unfindable or the person who held it was unavailable. Decisions got made on assumptions. The brief was to change that - not by doing more research, but by pushing existing knowledge closer to the people building the product, so they could move and validate at pace.

The Problem

Danfoss had already invested in understanding their users. UX researchers had conducted interviews, mapped behaviours, and built role-based personas to represent the range of people their products were designed for. The work existed. But it lived in files that didn't travel well - PDFs shared over email, documents buried in folders, knowledge that degraded every time someone left the team or a project moved on without a handoff.

The deeper issue was access. Danfoss operates at industrial scale across global markets. The people who needed user insight most - product developers, product owners, designers mid-sprint - were rarely the same people who had generated it. Getting to the right knowledge meant finding the right person, booking time, waiting. Most of the time, teams didn't wait. They made calls based on experience and assumption and moved on.

The ask was specific: take what Danfoss already knew about their users and make it queryable - and make it fit. Danfoss already had a digital ecosystem in development, a direction for how internal tools should work and feel. The solution couldn't be a standalone tool bolted on from outside. It had to belong.

The System

A synthetic user is not a persona document given a voice. The difference matters. A traditional persona is a composite - averaged behaviours, assumed motivations, demographic brackets. Useful for alignment, unreliable for interrogation. Ask it a specific question and you get a generalisation back. What was needed was something grounded in actual people: how they speak, what they prioritise, where they hesitate.

Traditional Persona    Synthetic User
composite              transcript-grounded
assumed                specific
static                 updatable


The starting point was the transcript. Real users were interviewed, recorded, and transcribed. That raw material - not a summary, not a synthesis, the actual language and rhythm of how someone talks about their work - became the knowledge base fed into a custom GPT. The GPT architecture and the final interface were my work. The experiment and validation were run as a team. Building the architecture required repeated testing: too thin a prompt produced generic responses indistinguishable from a standard ChatGPT answer; too rigid a prompt made the synthetic user brittle, unable to handle questions outside the original interview scope. Finding the right calibration - specific enough to be useful, flexible enough to be queryable - was the core design problem at the system level.
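The calibration problem above can be sketched in code. This is an illustrative assumption of the shape of such a prompt, not the actual Danfoss GPT configuration: transcript excerpts provide the grounding, and explicit guardrails address the two failure modes - drifting into generic answers, or breaking on questions outside the interview scope.

```python
# Sketch of a transcript-grounded system prompt for one synthetic user.
# All names and wording are illustrative assumptions.

def build_synthetic_user_prompt(name: str, role: str, transcript_excerpts: list[str]) -> str:
    """Assemble a grounded-but-flexible system prompt for one synthetic user."""
    grounding = "\n".join(f"- {t}" for t in transcript_excerpts)
    return (
        f"You are a synthetic user representing {name}, a {role}.\n"
        # Guardrail against brittleness: extrapolate, but label it.
        "Answer in their voice, grounded in the interview excerpts below.\n"
        "If a question falls outside the excerpts, extrapolate cautiously\n"
        "from their stated priorities and say the answer is an inference.\n"
        # Guardrail against genericness: stay anchored to the transcript.
        "Never invent biographical facts that the excerpts contradict.\n\n"
        f"Interview excerpts:\n{grounding}"
    )

prompt = build_synthetic_user_prompt(
    "Levi",
    "IT Product Design student",
    ["I plan backwards from the deadline.", "I prefer pairing over solo work."],
)
```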

The Decision

The second layer was the Danfoss user base itself. The UX lead provided existing role documentation - behavioural profiles and segmentation data covering the range of user types across Danfoss products. These fed the personas in the gallery: not invented characters, but synthetic users grounded in real organisational research. Critically, the roles weren't flat - they carried the hierarchy of the actual organisation. A head chef and a kitchen worker aren't just different user types; they have a dependency relationship, different decision-making authority, different things they would and wouldn't say. The architecture was built to hold that - because Danfoss's own structures work the same way.
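The hierarchy described above is, at bottom, a data-modelling decision. A minimal sketch, using the head chef / kitchen worker pair from the case - the field names are assumptions for illustration, not the actual schema:

```python
# Roles carry authority and dependency, not just personality.
from dataclasses import dataclass, field

@dataclass
class SyntheticRole:
    name: str
    decision_authority: str  # e.g. "plans and delegates" vs "executes within constraints"
    reports_to: "SyntheticRole | None" = None
    wont_discuss: list[str] = field(default_factory=list)  # topics outside this role's remit

head_chef = SyntheticRole("Head chef", "plans and delegates")
kitchen_worker = SyntheticRole(
    "Kitchen worker",
    "executes within constraints",
    reports_to=head_chef,
    wont_discuss=["menu budgeting", "supplier contracts"],
)
```

Holding the dependency explicitly means a synthetic kitchen worker can decline questions that belong to the head chef's remit, the same way the real person would.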

The system was also designed to stay current. When new research comes in - a fresh round of engineer interviews, an updated field service study - it feeds directly into the relevant synthetic user. The knowledge base doesn't freeze at the point of creation. It updates.
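The update path can be sketched as an append with provenance - new studies accumulate rather than overwrite, so answers can later cite where they came from. Structure and names here are illustrative assumptions:

```python
# A knowledge base that grows with each research round instead of
# freezing at creation. Illustrative sketch, not the production store.
from datetime import date

class KnowledgeBase:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def add_study(self, source: str, transcript: str, collected: date) -> None:
        """Append a new study with provenance, so later answers can cite it."""
        self.entries.append({"source": source, "transcript": transcript, "collected": collected})

    def latest(self) -> dict:
        """Most recently collected study."""
        return max(self.entries, key=lambda e: e["collected"])

kb = KnowledgeBase()
kb.add_study("Engineer interviews, round 1", "transcript text", date(2024, 3, 1))
kb.add_study("Field service study", "transcript text", date(2025, 6, 12))
```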

Validation

Before anything could be designed for Danfoss employees to use, one question had to be answered: does it actually work? A synthetic user that drifts from the real person it represents isn't a research tool - it's a confidence problem. Teams would either over-trust it or dismiss it entirely. Accuracy was the condition everything else depended on.

The test was direct. A real user - Levi, an IT Product Design student - was interviewed, recorded, and transcribed. That transcript became the knowledge base for his synthetic counterpart. The synthetic Levi was then asked questions he hadn't been asked in the original interview. His real answers, given separately, were compared against what the synthetic version produced.

The results were instructive rather than perfect. On questions close to the interview material - working style, collaboration preferences, how he handles deadlines - the synthetic version tracked well. Tone, priorities, and characteristic hesitations matched. On questions further from the transcript, the model generalised. It remained coherent, but it became less distinctly Levi and more like a capable design student in general. The gap was predictable and useful: it defined exactly where the system could be trusted and where it needed more input.
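The comparison itself was qualitative - human judgement on tone, priorities, hesitations - but its shape can be sketched: ask both the real and synthetic user the same held-out question and score how closely the answers track. Word-overlap (Jaccard) here is a stand-in assumption, not the method actually used:

```python
# Rough proxy for "does the synthetic answer track the real one?"
# Jaccard word overlap is an illustrative stand-in for the qualitative
# comparison described in the case.

def jaccard(a: str, b: str) -> float:
    """Lexical overlap between two answers, 0.0 (disjoint) to 1.0 (identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

real = "I plan backwards from the deadline and check in with the team early"
synthetic = "I plan backwards from the deadline, then check in with the team"
score = jaccard(real, synthetic)  # high but not perfect, as in the Levi test
```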

The Outcome

The same method was applied to two canteen workers - a kitchen worker and a head chef - running a deliberate test of whether the system could hold organisational hierarchy, not just individual personality. The head chef planned, delegated, and spoke with ownership. The kitchen worker operated within constraints set by someone else. Both stayed true to their transcripts. The distinction between them held.

A parallel test compared a transcript-grounded synthetic user against a generic persona prompt given the same questions. The difference was immediate. The generic prompt produced structured, professional answers. The transcript-grounded version produced specific ones - the kind that contain the small resistances and qualifications that only come from a real person's experience.

That specificity is what makes the tool useful. Generic answers don't change decisions. Specific ones do.

The Interface

The system needed a surface. Not just a prompt box - a tool that Danfoss employees across product and design could open mid-sprint, understand immediately, and use without a briefing. Two constraints shaped every decision: the interface had to fit Danfoss's existing digital ecosystem, and it had to be unambiguous about what it was. A tool. Not a simulation of a person.

The tool opens to a personalised home screen with three intent modes - Check and Validate Input, Ideate and Brainstorm, Learn about the Users. This isn't navigation for its own sake. Selecting a mode before reaching a persona forces the user to frame what they're actually trying to find out. It shapes the quality of the prompt they write and positions the synthetic user as a focused research instrument rather than a general-purpose chatbot.
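Mechanically, a mode can work as a prompt scaffold applied before the question reaches the synthetic user. The mode names below are from the case; the scaffold wording is an assumption for illustration:

```python
# Intent modes as framing devices: each prefixes the user's question
# with a different instruction. Scaffold text is illustrative.
MODE_SCAFFOLDS = {
    "Check and Validate Input": "Critique the following against your real workflow and flag anything that would not survive your day-to-day:",
    "Ideate and Brainstorm": "Riff on the following from your perspective; prioritise what you would actually want:",
    "Learn about the Users": "Describe, in your own words and with concrete examples:",
}

def frame_prompt(mode: str, user_question: str) -> str:
    """Wrap the raw question in the selected mode's framing."""
    return f"{MODE_SCAFFOLDS[mode]}\n\n{user_question}"

framed = frame_prompt("Check and Validate Input", "Engineers will tolerate a two-step login.")
```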


The persona gallery surfaces suggested personas based on the selected mode, with the full library available below. The avatars were a considered decision - abstract and geometric rather than human faces. The earlier instinct to humanise them was tested and dropped. Making the personas look like people risks collapsing the distinction between talking to a synthetic user and talking to a real one - and that distinction is the whole argument for why real research still matters. The avatars signal clearly: this is a tool. The role descriptions and capability tags do the work of making each persona legible without the interface pretending to be something it isn't.


The chat interface carries that through. Responses surface sources at the bottom of every answer - internal Danfoss resources alongside external references, both visible, both attributed. A user interview with a field service technician sitting next to a published design framework. That pairing is the trust architecture made visible: the synthetic user isn't generating answers from nothing, and the user can see exactly what it's drawing on. A persistent disclaimer runs throughout - synthetic personas may display inaccurate information, double-check responses. Not a legal footnote. A design decision about how to hold the right relationship between the tool and the judgement of the person using it.
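The attribution shape is simple to state in code: every answer carries its sources, internal research alongside external references, plus the persistent disclaimer. Field names are illustrative assumptions:

```python
# An answer object that makes the trust architecture explicit:
# visible sources plus a disclaimer on every response. Illustrative sketch.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    internal: bool  # True for Danfoss research, False for external references

@dataclass
class SyntheticAnswer:
    text: str
    sources: list[Source]
    disclaimer: str = "Synthetic personas may display inaccurate information. Double-check responses."

answer = SyntheticAnswer(
    text="On site I log faults on paper first; the app comes later, if at all.",
    sources=[
        Source("Field service technician interview", internal=True),
        Source("Published design framework", internal=False),
    ],
)
```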

What Changes

Synthetic users don't replace research. They replace the gap - the weeks between studies, the assumption-driven decisions, the institutional knowledge that walks out when a researcher moves on.

They replace the version of the process where a product team needs a user perspective at 3pm on a Tuesday and has no way to get one. Where years of carefully collected research sits in a folder nobody opens.

What changes is the confidence. Not certainty - grounded thinking. Decisions made against something real instead of whoever has the loudest opinion in the room.

Let's build something
that lasts.

Turning hard problems into clear interfaces - analytics, enterprise SaaS, data-heavy products.

© 2026 Michal Jaworski
