Humans innately know how to communicate. So, when they participate in a conversation, they understand how to take turns talking. In that exchange, people expect a familiar and predictable flow. They don’t have to know anything about linguistics or conversation design to interact with other people.
As AI becomes the front door to product experiences across industries, from healthcare and financial services to sales and beyond, it’s not a given that people will know how to interact or collaborate with agents. They may not even have confidence that AI will deliver the outcomes they seek. To build AI experiences that earn the trust of users, it’s necessary to shape how AI behaves end-to-end.
This is where conversation design comes in. More than the transactional ability to prompt an AI assistant and receive a grounded response, conversation design empowers you to build holistic AI experiences that are clear, accurate, empathetic, and, importantly, trustworthy. Let’s talk about what this means.
Here’s what we’ll cover:
What is conversation design?
Build the infrastructure for trust
Learn four conversation design principles
Where conversation design shows up
Ask these questions to get started
What is conversation design?
To begin to understand conversation design and AI, it’s important to distinguish between a prompt and a conversation. A prompt is a single instruction. A conversation is a dynamic, goal-oriented exchange. Prompts are entry points to conversations.
Conversation design for AI is the craft of shaping how AI agents behave across an entire interaction — not just what they say, but how they reason, respond, recover, and reflect your brand’s voice. It moves beyond prompts to design agent behavior that’s clear, helpful, and human-centered.
Together, these enable AI to act less like a tool and more like a trusted teammate.
At Salesforce, I lead a team of conversation designers who have a deep understanding of human linguistics, how people communicate, and how they expect to be communicated to. We design agent behavior across four core dimensions:
- Persona and tone – to reflect a consistent, brand-aligned voice.
- Reasoning logic – to guide how agents think through problems.
- Recoverability and fallback strategies – to handle edge cases with grace.
- Prompt architecture – to ensure structured, reusable interaction patterns.
Why is conversation design crucial to building great AI experiences? If your AI nails a single question but falls apart in the flow of an interaction, it’s a brittle experience. And in high-trust contexts, like a patient exploring symptoms or a donor checking gift history, that failure costs more than just clicks — it damages credibility.
Build the infrastructure for trust
The real opportunity isn’t just smarter prompts. It’s building AI experiences that behave like capable, trustworthy teammates. That requires more than just language modeling. It demands designing clarification loops (to reduce ambiguity), trust signals (like summarizing what was heard or offering a transcript), and repair strategies (when AI gets it wrong). This is where empathy becomes functional.
An empathetic repair strategy doesn’t just state an error; it acknowledges the user’s potential frustration and clearly charts a path forward. This shows the user that the system understands their emotional state, not just their query.
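To make that concrete, here’s a hedged Python sketch of such a repair message. The function and its frustration flag are illustrative only; a production system would infer emotional state from the conversation rather than take a boolean:

```python
def repair_message(error: str, next_step: str, user_frustrated: bool = False) -> str:
    """Compose an empathetic repair: acknowledge, explain, chart a path forward.

    A sketch under simplifying assumptions; real systems would draw on
    detected sentiment and full conversation state.
    """
    # 1. Acknowledge the user's likely emotional state.
    acknowledgment = (
        "I'm sorry, I know this is frustrating. "
        if user_frustrated
        else "Thanks for your patience. "
    )
    # 2. State plainly what went wrong, without evasion.
    explanation = f"I wasn't able to {error}. "
    # 3. Offer a clear, concrete next step.
    path_forward = f"Here's what we can do next: {next_step}"
    return acknowledgment + explanation + path_forward
```

For example, `repair_message("confirm whether your pre-approval was submitted", "I can flag this for our human team.", user_frustrated=True)` yields an apology, an honest admission of the limit, and a specific next step, in that order.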
At Salesforce, our conversation design team partners with product, data teams, clients, and vertical experts to bring this thinking to every cloud. They help define tone, shape agent behaviors, and align responses to brand values. Rather than static scripts, we build prompt libraries and frameworks that evolve with agent usage and context.
More than design as a layer, it’s design as infrastructure.
Why trust is important
Our work involves exploring how conversation design principles apply across industries, from high-stakes patient care and financial fraud prevention to guided home buying journeys and sales lead qualification.
- Healthcare: When patients ask about symptoms, medications, or coverage, a vague or misleading answer is inconvenient and can be dangerous. Trust is built through transparency, clear limits on what the AI can do, and graceful handoffs when it can’t help.
- Financial services: Users checking account activity or reporting fraud expect precision. A casual tone or incorrect assumption can erode trust fast. Conversation design helps ensure the AI aligns with user intent, risk sensitivity, and regulatory boundaries.
- Retail and nonprofit: Whether helping a shopper compare products or guiding a donor through giving history, AI should feel like a reliable guide. Structured, step-by-step conversations help users ask better questions, get clear answers, and take confident action.
Learn four conversation design principles
In our work with customers, we’ve found that straightforward fixes can solve many of the hiccups they encounter.
To make AI trustworthy and useful, we must design for the whole conversation. At Salesforce, we break this down into four core principles to create meaningful, trust-building conversations:
1. Turn-taking
A natural back-and-forth rhythm: knowing when to speak, when to listen, when to pause, and when to offer silence for reflection or action. Missteps here can make even accurate responses feel awkward.
2. Predictable responses
Users come to every exchange with expectations. Responses that follow a consistent logic and align with user intent build trust. Inconsistent or tone-deaf replies, even when factually correct, degrade the experience. And trust evaporates.
3. Defined role and goal
Drawing on Dell Hymes’s SPEAKING model, the AI, like any good service provider, needs to know: What is the user trying to do? What role should the AI play? Is it an advisor, a concierge, or a helper? An essential part of defining the AI’s role is defining its capacity for empathy. Is the user frustrated, curious, or in a hurry? Understanding the user’s emotional state is critical to fulfilling their goal. An AI that can’t distinguish between a casual question and an urgent complaint will fail, even if its answer is technically correct. This clarity shapes tone, language, and behavior. Confused roles create confused users.
4. Persona
People don’t trust faceless systems. By giving an AI a recognizable voice, tone, and pattern of behavior, a persona transforms it into something users can understand and rely on. More than just adding a layer of personality, a well-defined persona sets clear expectations. This is especially critical in complex journeys like financial planning, health, or service recovery, where ambiguity can create user anxiety.
The most effective personas are built on empathy, designed to adapt their tone based on the user’s situation. An AI might use reassuring language when a customer reports a service issue, or shift to an upbeat, encouraging tone when someone is exploring a new feature. This adaptability is what makes the interaction feel more trustworthy and human.
Let’s look at a couple of examples:
- During onboarding, an AI that says, “Great choice! Let’s customize this to fit how you work best,” invites engagement far more effectively than a generic walkthrough.
- As a patient awaits test results, an agent that begins with, “Here’s what your results mean, and what to expect next,” immediately eases uncertainty compared to one that solely surfaces clinical data.
Where conversation design shows up
These principles aren’t abstract theories. They’re the blueprint for how AI behaves in the real world. Rather than looking at them in isolation, let’s trace a “steel thread” — a single, coherent user journey — to see how they weave together to create a trustworthy experience.
For our steel thread, let’s follow 37-year-old Maya, who was recently referred for a minor surgical procedure. She visits her insurer’s website to check if it’s covered and what her out-of-pocket costs might be. It’s late, and she’s tired and anxious about what comes next.
1. Starting with support and structure
Coverage Agent AI: “Hi, Maya. I can help check coverage, costs, and what to expect. Do you have your member ID handy, or would you rather start by walking through the basics?”
Persona + defined role: the AI immediately sets a calm, helpful tone and offers control.
Maya: “Let’s just start.”
Coverage Agent AI: “Sounds good. First, is the procedure called a hysteroscopy?”
Turn-taking: instead of guessing, the AI confirms details step-by-step.
2. Setting expectations and breaking things down
Coverage Agent AI: “Based on your plan, diagnostic hysteroscopies are usually covered. I’ll walk you through three things:
- Whether your provider is in-network
- Your estimated cost
- Whether pre-approval is needed
Ready to check those?”
Predictability: clear structure makes the process feel manageable.
Maya: “Yes.”
3. Smooth specialist handoff
The system hands off to a benefits specialist AI.
Coverage Agent AI: “I’m bringing in a benefits expert now. You won’t need to repeat anything.”
Defined role + persona continuity: the handoff is seamless, respectful, and maintains tone.
Benefits Agent AI: “Hi, Maya. Your provider is in-network. The estimated cost is $380, including your deductible and co-pay. Want a copy emailed to you?”
Predictability: estimates are clear. Turn-taking: the AI invites follow-through, not assumptions.
Maya: “Yes, thanks.”
4. Handling uncertainty with grace
Maya: “Do I need pre-approval?”
Benefits Agent AI: “Your plan requires it, but I can’t confirm if it’s been submitted. Want me to flag this for our human team to follow up?”
Recoverability: the AI admits limits without evasion and offers a clear next step.
Maya: “Yes.”
Benefits Agent AI: “Done. You’ll hear back by text within 24 hours. I’ve also sent a checklist of questions for your provider.”
Persona + empathy: supportive without overstepping.
5. Closing with confidence
Benefits Agent AI: “Want a summary of today’s answers in a secure PDF?”
Predictability + trust signal: confirms understanding and reinforces transparency.
Maya: “Yes.”
Benefits Agent AI: “Sent. Look for an email from us shortly. And just a reminder: we’ll text you with your pre-approval status. I’m here if anything changes.”
Defined role + persona: reassuring, warm, clear, and dependable.
What this demonstrates
Maya didn’t just interact with two bots. She had a conversation with two AI agents that:
- Acknowledged her needs and emotional state
- Adapted across steps and agents
- Took responsibility when answers weren’t immediate
- Closed the loop with clarity and empathy
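The seamless handoff in step 3 (“You won’t need to repeat anything”) depends on conversation state traveling with the user from agent to agent. Here’s a minimal Python sketch of that idea, using hypothetical `ConversationState` and `hand_off` helpers rather than any real platform API:

```python
class ConversationState:
    """Shared state that travels with the conversation, so the user
    never has to repeat confirmed details to the next agent."""

    def __init__(self):
        self.facts = {}      # confirmed details, e.g. procedure name
        self.history = []    # (agent_name, utterance) pairs for continuity

    def remember(self, key, value):
        self.facts[key] = value

def hand_off(state, from_agent, to_agent):
    # The receiving agent gets the full state; nothing is asked twice.
    state.history.append((from_agent, f"Transferring to {to_agent}."))
    return state

# Maya's coverage conversation, carried into the benefits conversation:
state = ConversationState()
state.remember("procedure", "hysteroscopy")
state.remember("member_verified", True)
state = hand_off(state, "Coverage Agent", "Benefits Agent")
# The benefits agent can now read state.facts["procedure"] without re-asking.
```

The design choice worth noting: continuity is a property of the shared state object, not of any single agent, which is what lets persona and context survive the transfer.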
Ask these questions to get started
You don’t need to be a conversation designer to apply these principles. Start with these questions:
- What’s the AI’s job in this experience? Can your team clearly define its role and value? Think of it like hiring someone — if you can’t describe what the role is, how can they succeed?
- Can users trust what it says and what it doesn’t? Are failure modes handled with care? Imagine a co-worker who makes things up when they don’t know the answer. Would you keep going to them for help?
- Is your AI responsive across steps? Or does it reset after every prompt? You wouldn’t introduce yourself to someone you’ve met three times in a week, would you?
- Can users trust it to behave like a responsible representative of your brand? Does it reflect your brand’s values in tone, clarity, and accountability? If your brand were a person, would this be how they’d talk to your customers?
AI is no longer a back-end tool — it’s becoming your front line. If it feels robotic, evasive, or inconsistent, it erodes brand equity. But when it behaves with clarity, care, and humility, it builds something rare: trust at scale.
And getting started is simpler than you might think. Even small things — like clarifying an agent’s role, or tightening up prompt structure — can lead to better outcomes.