
Explore Agent Behavior and Conversation Design

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain the history and importance of Conversation Design in AI.
  • Differentiate between system and intent failures.

Designing for Humans + Agents

In the early days of the web, designers focused on clicks when evaluating user experiences. Today, the introduction of agents means looking at more than which buttons users press. Success in the agentic era depends on the ability to design for how people naturally interact with agents, and how agents respond to the people who use them.

As we move into this new era, the role of the designer is even more important. No longer just architects of on-screen functions and flows, designers now have to anticipate and account for a dynamic behavioral ecosystem that aligns user intent, the system specifications, grounding data, and the model’s reasoning capabilities. This unit explores the evolution of Conversation Design (CxD) and guides you on how to transition from the logic-flow mindset of the past to an agentic mindset that sets users up for continued success.

Although it’s not required, we recommend that you take the Conversation Design Trailhead badge for useful background info for this badge.

The Evolution of Conversation Design

To understand where we’re going, let’s look at where we’ve been. The journey from simple if-then logic to autonomous reasoning has fundamentally changed the relationship between humans and machines. As a result, designers face a wide range of new and evolving challenges that come with architecting intent-first systems that can reason and act autonomously. Rather than scripting every response, designers now focus on shaping an agent’s reasoning logic and personality, which broadens what designers must know, anticipate, and react to in order to drive success.

While this is a major shift, it’s part of a longer-running trend toward digital assistance in everyday tasks. To understand how things are changing, it’s helpful to look at three distinct eras in the evolution of automation—past, present, and future—and how they shape what designers need to focus on as they build these experiences.

The Past: Scripted Interactions (Rule-Based)

Think of the traditional phone tree or basic chatbot. These systems were deterministic. They followed a rigid, predefined path. If a customer veered off script, the system reached a dead-end. Design focused on mapping every possible turn in a flow.

The Present: AI + Humans (Assistive)

With the rise of large language models (LLMs), we entered the era of the copilot. These systems are collaborative, working alongside humans: summarizing transcripts, drafting emails, and suggesting next best actions. The human is still the pilot, but the AI is the navigator and assistant, helping humans act efficiently at scale through natural language. The AI executes only after being told which steps to take and which tools to use.

The Future: Autonomous Agents (Goal-Driven)

Autonomous agents select actions and tools based on a defined goal and context. They don’t wait for step-by-step instructions.

Instead of following a fixed workflow, they determine the next step needed to move toward the goal. For example, if you give a goal like “Resolve this billing dispute,” the agent identifies the relevant tools, retrieves the right information, and executes the necessary steps.

This shift, from scripted flows to goal-driven execution, changes how we design. We are no longer designing screens or linear paths. We are designing the behavior that guides how the agent acts to achieve outcomes.

[Image: A winding path with five mile markers, illustrating the evolution of customer service technology from simple, confusing menus to trusted companies that design safe AI behavior for a high-tech future. AI-generated using Google Docs Gemini.]

What Is Agent Behavior Design?

In traditional systems, designers controlled the output directly. In agentic systems, designers shape the conditions under which agents generate outputs of their own.

Agent behavior is defined as the patterns of responses and actions produced by an agent based on these key components:

  • Its defined role.
  • Its goals and guardrails.
  • The instructions it receives.
  • The data it can access.
  • A user’s input.
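As a sketch, those components can be imagined as a single declarative spec that a designer fills in before the agent ever talks to a user. The class and field names below are hypothetical illustrations, not an actual Agentforce API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the key components of agent behavior captured as a
# declarative spec. Names are illustrative, not a real Salesforce API.
@dataclass
class AgentSpec:
    role: str                     # the agent's defined role
    goals: list[str]              # outcomes the agent works toward
    guardrails: list[str]         # boundaries it must not cross
    instructions: str             # the system prompt it receives
    data_sources: list[str] = field(default_factory=list)  # data it can access

support_agent = AgentSpec(
    role="Billing support agent",
    goals=["Resolve billing disputes"],
    guardrails=["Never reveal another customer's data"],
    instructions="Be concise, stay on-brand, and escalate when unsure.",
    data_sources=["crm_orders", "billing_faq"],
)
```

User input, the fifth component, arrives at runtime; everything else is the designer’s responsibility before launch.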

Agent behavior design is the practice of deliberately defining those conditions so that the agent responds predictably, safely, and in alignment with user expectations. Conversation design and agent behavior design are not separate disciplines. In agentic systems, they are intertwined. The words you choose, the boundaries you set, and the data you provide shape how the agent behaves.

In the past, designers built maps. They created fixed paths (buttons, menus, and screens) and hoped users would follow them. But in the era of AI agents, designing interfaces is only part of building for success. Designers also need to understand how the character and content of a response can contribute to the value users find in an interaction.

Designing an agent is like onboarding a digital employee. You wouldn’t just give a new hire a script and walk away. To ensure they perform well and provide a great user experience, you must provide them with a mission, access to company knowledge, and clear boundaries.

The Four-Way Contract: Your New Design Canvas

To map agent behavior effectively, designers must manage the contract between four critical components—user intent, the spec, the data, and the model. When these four align well, the agent is high-performing. When they are out of sync, the user experience breaks.

User Intent (The North Star)

  • The Design Shift: It’s time to move from “What button did they click?” to “What outcome is the user trying to achieve?”
  • Performance Metric: How accurately does the agent decode human language and subtext to identify the true goal?

The Spec (The Mission Control)

  • The Design Shift: Your design workspace is now the system prompt. You define the persona, the tone, and the boundary-setting rules.
  • Performance Metric: Does the agent stay on-brand and follow company policy even when the conversation gets complex?

The Data (The Source of Truth)

  • The Design Shift: Designers now curate the knowledge base. For an agent to be useful, it must be grounded in real-time CRM data or documentation.
  • Performance Metric: Is the agent providing factual, high-utility answers, or is it making things up?

The Model (The Cognitive Engine)

  • The Design Shift: Understanding how LLMs function is crucial for today’s agentic experiences. Designers must know the reasoning limits of the model to set realistic expectations for the user.
  • Performance Metric: Does the agent have the reasoning power to solve the user’s problem without escalating to a human too early?

Core Principles for the Agentic Mindset

Because agents need to react to an infinite number of inputs safely and effectively, designing autonomous behavior requires a shift in strategy. Instead of designing for perfection, design for adaptation. Agents may not always respond as expected, but thoughtful design can guide users to better outcomes. Let’s look at some core principles that help designers tackle this challenge.

Understanding what’s expected from a good-quality agent experience—based on the user’s perceived success—helps you analyze problems and ensure that you’re getting the most value out of each user interaction.

Design from Intent First

In a scripted world, you might design a path for “I lost my credit card.” Instead, in an agentic world, you design for the intent of Security & Asset Management. By focusing on the high-level intent, you allow the agent to handle variations in human language without needing a unique flow for every sentence. While there may still be specific responses for specific issues, the important thing to understand is that an agent needs to be able to translate the language of the user into an actionable goal.
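To make intent-first design concrete, here is a toy stand-in for that language-to-goal translation step. In practice an LLM performs this mapping; the keyword rules and intent names below are purely illustrative assumptions:

```python
# Toy stand-in for an LLM intent step (illustrative only): varied user
# language resolves to one actionable, high-level intent.
def resolve_intent(utterance: str) -> str:
    text = utterance.lower()
    if any(k in text for k in ("lost", "stolen", "fraud", "card")):
        return "security_asset_management"
    if any(k in text for k in ("refund", "charge", "bill")):
        return "billing_dispute"
    return "general_inquiry"

# "I lost my credit card" and "My card was stolen" land on the same intent,
# so one behavior design covers many phrasings.
```

The point is not the keyword matching, which a real agent replaces with model reasoning, but that many sentences collapse into one designed-for goal.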

Make Behavior Predictable and Explainable

Trust is central to AI. If an agent makes a decision, such as denying a refund, it must explain why and offer clear next steps to resolve the issue. An effective agent explains important decisions clearly and helps users move forward: when it cannot fulfill a request, it tells the user why, offers the next best option, and guides them to alternatives when their initial intent can’t be met.

Optimize for Reasoning, Not Just Response

A strong agent does more than match keywords to a canned answer. It evaluates the user’s request in context and chooses the right action based on real-time data. For example, if a user asks, “Where’s my order?” the response should change based on the order status. “Shipped” requires tracking details. “Delivered” might require proof of delivery. “Delayed” might require an apology and next steps.

Reasoning depends on context. Context depends on data. To support reasoning, your data and tools must be AI-ready. This means:

  • Unify data so the agent can access it in one place.
  • Structure it in consistent formats.
  • Resolve duplicate customer records by using identity resolution.
  • Apply clear security and governance policies.
  • Test to ensure data supports real-time decisions.
  • Establish human-review loops to monitor and improve outputs.

Agents can’t reason beyond the quality of the data and tools they’re given.
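As one illustration of the identity-resolution step in the list above, here is a minimal sketch that merges duplicate customer records sharing a normalized email. The record fields and matching rule are assumptions for illustration, not a real schema or Salesforce identity-resolution logic:

```python
# Minimal sketch of identity resolution: merge duplicate customer records
# that share a normalized email. Field names are illustrative only.
def resolve_identities(records: list[dict]) -> list[dict]:
    merged: dict[str, dict] = {}
    for rec in records:
        key = rec["email"].strip().lower()  # normalize before matching
        if key in merged:
            # Later non-empty values fill gaps in the unified record.
            merged[key].update({k: v for k, v in rec.items() if v})
        else:
            merged[key] = dict(rec)
    return list(merged.values())

customers = [
    {"email": "Ada@Example.com", "name": "Ada", "phone": ""},
    {"email": "ada@example.com", "name": "Ada Lovelace", "phone": "555-0100"},
]
unified = resolve_identities(customers)  # two raw records become one profile
```

Without this step, an agent asked “Where’s my order?” might reason over only half of a customer’s history and give a confidently wrong answer.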

Success and Failure in Agent Behavior

Agent experiences can fail in two ways. One is a system failure, such as broken code or UI elements that don’t work. The other is an intent failure, where an agent technically follows its instructions but still doesn’t solve the user’s problem. For designers, understanding this second type of failure is crucial: it means recognizing when an agent is working as designed but failing to deliver the outcome a user actually needs.
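A hedged sketch of how those two failure modes might be labeled during evaluation. The enum and the signals it checks are illustrative assumptions, not a Salesforce taxonomy:

```python
from enum import Enum

# Illustrative labels for the two failure modes described above.
class FailureType(Enum):
    SYSTEM = "system"  # code breaks, tool errors, broken UI
    INTENT = "intent"  # agent followed instructions but the goal went unmet
    NONE = "none"      # the user's outcome was delivered

def classify_turn(tool_error: bool, goal_met: bool) -> FailureType:
    if tool_error:
        return FailureType.SYSTEM
    if not goal_met:
        # Everything "worked", yet the user didn't get what they needed.
        return FailureType.INTENT
    return FailureType.NONE
```

The key design insight is the middle branch: no exception was thrown and every instruction was followed, yet the interaction still failed.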

For this reason, Salesforce categorizes behavior-design failures into three severity tiers: P0 (most severe), P1, and P2.

By defining clear goals, following proven principles, and evaluating agents based on how they serve users, you can often prevent the most severe P0 failures before they damage trust. In the era of agents, designers are more critical than ever to delivering value in user experiences. Now, the job has an even broader scope. Armed with strategies for understanding what users want, what they actually experience, and how to bridge the gaps, you can help build an effective, ever-evolving digital workforce.
