
Design Trusted Behavior

Learning Objectives

After completing this unit, you’ll be able to:

  • Identify key factors that impact your users’ trust in agents.
  • Design for preserving trust and promoting smooth user experiences.

In the world of AI, few things matter as much as trust. An agent that’s fast but hallucinating is a liability; an agent that’s smart but hard to talk to is frustrating. To bridge the gap, designers need to be architects of trust, building it into the logic layer that defines agent interactions. That’s because designers define what correct behavior looks like for an agent—what makes a response good or bad. And, in turn, designers rely on a combination of insight from evaluations and proven principles of trusted design for guidance.

[Image: Two park rangers examine a holographic bridge blueprint. AI-generated using Google Docs Gemini.]

Successful agent design anticipates friction before it becomes failure. It prevents both technical breakdowns and negative user perception. The goal is not just task completion, but trust. Strong agents guide users toward success and surface solutions before problems escalate.

Keep It Clear

Focus: Transparency and Predictability

Users should never have to guess what an agent is doing or why. Generally, the idea is to create as much transparency as possible when interacting with an agent. Its logic should be visible, and its limitations should be known.

What It Looks Like

  • Explains Reasoning: Instead of just giving an answer, the agent briefly cites its source. For example, “Based on our Warranty Policy…”
  • Sets Expectations: The agent is honest about what it can and can’t do. For example, “I can check your order status, but I’ll need a human colleague to process a full refund.”
  • Communicates Steps Taken: It provides a play-by-play of its actions. For example, “I’m searching our inventory now… OK, I’ve found the item.”
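To make these behaviors concrete, here’s a minimal illustrative sketch of a structured reply that keeps an agent’s reasoning visible. The `AgentReply` class and its fields are hypothetical, not part of any agent platform; the point is that citation, steps, and limitations are first-class parts of the response rather than afterthoughts.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AgentReply:
    """Hypothetical structured reply that keeps the agent's reasoning visible."""
    answer: str
    source: Optional[str] = None                      # cited source, e.g. a policy document
    steps: List[str] = field(default_factory=list)    # play-by-play of actions taken
    limits: Optional[str] = None                      # what the agent can't do here

    def render(self) -> str:
        # Lead with the citation, then the steps, then the answer and any limits.
        parts = []
        if self.source:
            parts.append(f"Based on {self.source}...")
        parts.extend(self.steps)
        parts.append(self.answer)
        if self.limits:
            parts.append(self.limits)
        return " ".join(parts)

reply = AgentReply(
    answer="Your order qualifies for a replacement.",
    source="our Warranty Policy",
    steps=["I'm searching our inventory now...", "OK, I've found the item."],
    limits="I'll need a human colleague to process a full refund.",
)
print(reply.render())
```

Because transparency lives in the data structure, every response carries its reasoning by default instead of relying on each answer to remember to include it.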

Why It Matters

Clear behavior reduces user anxiety and increases confidence. When an agent is transparent, a minor error, like a momentary failure to retrieve a record, is seen as a technical glitch. When it isn’t, and the user isn’t even sure the agent recognizes the problem, an error is seen as a failure of the brand.

Be Context Aware

Focus: Relevance and Continuity

Nothing makes a conversation more frustrating than an agent that forgets what you said 2 minutes ago. Context awareness turns a series of transactions into a continuous conversation.

What It Looks Like

  • Uses Prior Interactions: The agent remembers that the user mentioned a leaky faucet earlier in the chat and doesn't ask, “How can I help you today?” halfway through the resolution.
  • Adapts to Role and Channel: An agent knows it should talk differently to a field service technician on a mobile app than it does to a CEO on a desktop browser.
  • Responds with Appropriate Depth: It doesn’t provide a 500-word essay for a simple Yes/No question.
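The behaviors above could be sketched, purely for illustration, as a small session-context object. The class, roles, and channel names here are assumptions invented for the example; real agent platforms manage context differently.

```python
class SessionContext:
    """Hypothetical session memory so the agent doesn't 'forget' mid-conversation."""

    def __init__(self, role: str, channel: str):
        self.role = role          # e.g. "technician" or "executive"
        self.channel = channel    # e.g. "mobile" or "desktop"
        self.topics = []          # issues the user has already raised

    def remember(self, topic: str) -> None:
        if topic not in self.topics:
            self.topics.append(topic)

def greet(ctx: SessionContext) -> str:
    # A returning topic means we skip the generic opener.
    if ctx.topics:
        return f"Let's continue with your {ctx.topics[-1]} issue."
    return "How can I help you today?"

def answer_depth(ctx: SessionContext, is_yes_no: bool) -> str:
    # Yes/No questions and mobile channels get concise replies.
    if is_yes_no or ctx.channel == "mobile":
        return "concise"
    return "detailed"
```

For example, once the user mentions a leaky faucet, `greet` resumes that thread instead of restarting the conversation, and `answer_depth` keeps replies short on mobile.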

Why It Matters

Context awareness prevents the robotic feeling of generic responses. It makes the interaction feel natural and efficient, saving the user time and effort and surfacing the most relevant solutions for their needs.

Build Trust with Respect

Focus: Safety, Ethics, and User Control

Respectful behavior is about boundaries. A respectful agent understands the weight of the data it handles and the value of the user’s time and trust, and acts accordingly.

What It Looks Like

  • Knows When to Escalate: The agent identifies sensitive and highly complex scenarios, then offers a warm hand-off to a human when the user’s sentiment becomes highly negative or the task requires human-level permissions.
  • Respects Privacy and Permissions: The agent uses only the data it’s authorized to access, and never shares PII inappropriately.
  • Avoids Hallucination and Overreach: If it doesn’t know the answer, it says so. It doesn't make up a creative solution to a technical problem.
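These boundaries can be sketched as a simple routing rule. Everything here is illustrative: the sentiment score, the -0.6 threshold, and the function itself are hypothetical stand-ins, not settings from any real product.

```python
def route(sentiment: float, needs_permission: bool, confident: bool) -> str:
    """Decide whether to answer, admit uncertainty, or hand off to a human.

    `sentiment` is a hypothetical score in [-1, 1]; the -0.6 threshold
    is illustrative, not a product setting.
    """
    # Highly negative sentiment or human-level permissions: warm hand-off.
    if sentiment < -0.6 or needs_permission:
        return "escalate: warm hand-off to a human colleague"
    # Admitting uncertainty beats inventing a creative answer.
    if not confident:
        return "clarify: I'm not sure about that; let me connect you with someone who is."
    return "answer"
```

The design choice worth noting is the order of the checks: safety and permissions come before confidence, so a frustrated user is never held up by the agent trying to answer anyway.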

Why It Matters

Trust drives adoption. If users don’t feel respected or safe, they won’t use the tool. Protecting the user also protects the business from reputational and legal risks.

Build Behavior that Builds Trust

Designing trusted behavior is an iterative process. By keeping your agents clear, context aware, and respectful, you create a foundation of trust that allows AI to flourish within your organization. The same practices that create trusted interactions also make them effective. When you can rely on your AI to perform as designed, and your customers can rely on it too, you’re ready to unlock richer experiences powered by a growing team of humans and agents working together. Equipped for today’s challenges and adaptable for what comes next, that collaboration sets the foundation for new possibilities. Learn more about specific ways to diagnose and triage potential agent failures by taking the Agent Behavior Evaluation Trailhead badge next.
