
Design Accessible AI Products and Experiences

Learning Objectives

After completing this unit, you’ll be able to:

  • Identify design considerations for accessible generative AI experiences.
  • Explain the importance of inclusive user testing and diverse partnerships for data gathering.
  • Implement solutions to accessibility challenges within AI and Agentforce.
  • Apply development best practices for accessible AI agents including keyboard operability, screen reader support, and semantic markup.

Innovation in AI and Accessibility

Generative AI, when developed with accessibility (a11y) as a core principle, has the potential to serve everyone, dismantle existing barriers, and empower people with disabilities.

When designed accessibly, generative AI tools can offer significant benefits.

  • For users with no or low vision: Generative AI provides rich, contextual scene descriptions of images.
  • For neurodivergent users: It supports and fills in gaps in executive functioning, such as breaking down complex tasks into smaller, more manageable steps, or discerning the tone of a Slack or email message.
  • For deaf or hard-of-hearing users: It delivers dynamic meeting summaries that highlight the key takeaways in real time.

These powerful tools benefit everyone. However, they can create barriers if you don’t integrate a11y practices from the start. Always design your products with a11y in mind.

Design Tips for Accessible AI Experiences

Salesforce encourages employees and customers to become Trailblazers with our suite of generative AI tools. These tools include external AI partner tools, Google Workspace tools such as Gemini, and internal tools such as Slackbot. Generative AI tools are designed to boost efficiency and effectiveness by generating content, summarizing critical information, and accelerating research. While these tools enhance the productivity and creativity of all users, it’s crucial to build a11y into your designs from the start. Consider the a11y of both the tool’s interface (the input) and the content it generates (the output).

Accessibility Considerations for Generative AI Tools

Generative AI tools primarily produce outputs in four main categories: text, image, audio, and video. Consider the following examples of generative AI tools for each output type. For each format, reflect on what barriers or challenges might occur.

Generative AI Output

Accessibility Considerations

Text-based tools generate textual information such as code or summaries based on text prompts.

  • Structure: Do the tool and its output use semantic HTML, such as headings and lists? This helps screen readers navigate and consume the content.
  • Readability: Can you prompt the AI to output plain language to improve cognitive a11y?
  • Real-time status updates: Does the app communicate progress to screen readers during long generations? Users need to know the process is active so they don’t assume the tool has failed.
  • Forms: Do form controls include visible, persistent labels so users always know the purpose of an input field?
  • Prompting by switch control or voice: How does the tool ensure that AI interaction is accessible for users with motor disabilities and those who prefer not to type? How does it address potential bias in speech recognition so that diverse speech patterns are accurately interpreted?
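
The real-time status consideration above can be sketched as pure logic. This is a minimal, illustrative example — the function name, the 25% step, and the message wording are assumptions, not part of any Salesforce API. It throttles progress updates so a screen reader live region isn’t flooded during a long generation.

```javascript
// Sketch: throttle progress announcements for a long-running generation so a
// screen reader live region isn't flooded. buildAnnouncements and its step
// parameter are illustrative names, not a real API.
function buildAnnouncements(progressEvents, step = 25) {
  const messages = [];
  let nextThreshold = step;
  for (const pct of progressEvents) {
    // Announce only when progress crosses the next threshold.
    while (pct >= nextThreshold && nextThreshold < 100) {
      messages.push(`Generating response: ${nextThreshold}% complete`);
      nextThreshold += step;
    }
    if (pct >= 100 && !messages.includes("Response ready")) {
      messages.push("Response ready");
    }
  }
  return messages;
}

// In a real app, each message would be written into an element with
// aria-live="polite" so assistive technology announces it without moving focus.
```

The point of the throttle is that announcing every raw progress event would interrupt screen reader users constantly; announcing at coarse milestones keeps them informed that the tool hasn’t failed.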

Image-based tools create or edit visual art, photos, and data visualization such as diagrams and charts.

  • Automatic alt text: Does your image generation tool automatically include an alt text description for the images that it creates? Is it accurate and meaningful? Is there a way for admins to edit AI-generated alt text?
  • Color contrast: For AI-generated UI or charts, does the tool check whether the colors used comply with WCAG contrast ratios requirements?

Audio-based tools generate human-like voices, music, or fix recorded audio.

  • Transcripts: Ensure that generated audio files include a synchronized, accurate text transcript for deaf and hard-of-hearing users.
  • Visualizations: Does your audio or video player include keyboard-accessible, high-contrast play and pause buttons? Are the buttons properly labeled and coded for assistive technologies?
  • Personalization: Can users personalize the vocal characteristics of the audio, such as pitch, style, pace, and language?

Video-based tools generate moving images with synchronized audio.

  • Audio description (AD): Does the tool provide secondary audio tracks that describe the visual action for users with no or low vision?
  • Seizure safety: How do you ensure that generated video content omits flashing or harmful patterns that could trigger photosensitive epilepsy? Content that flashes more than three times in any one-second period is considered harmful and potentially dangerous to users.
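
The three-flashes rule above can be checked mechanically. Below is a simplified sketch, assuming flash events have already been extracted from the video as a sorted list of timestamps (real analysis also weighs flash area and luminance change, which this omits).

```javascript
// Sketch: flag content that flashes more than three times in any one-second
// window. flashTimes is a sorted array of flash timestamps in seconds; the
// function name is illustrative.
function exceedsFlashThreshold(flashTimes, maxFlashes = 3) {
  for (let i = 0; i < flashTimes.length; i++) {
    // Count flashes in the 1-second window starting at flashTimes[i].
    let count = 0;
    for (let j = i; j < flashTimes.length; j++) {
      if (flashTimes[j] - flashTimes[i] < 1) count++;
    }
    if (count > maxFlashes) return true;
  }
  return false;
}
```

Because the windows are anchored at each flash, any burst of four or more flashes inside one second is caught regardless of where it starts in the video.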

Consider these questions when you create any kind of digital content. Fortunately, resources such as the following checklist can help you prioritize a11y. While a checklist doesn’t guarantee your designs are accessible, it ensures you cover the basics for both inputs and outputs. Use these considerations to design accessible, digital experiences.

Color and Contrast

  • Ensure all text meets minimum color contrast requirements. Text must have at least a 4.5:1 contrast ratio with its background. Large text must have at least a 3:1 contrast ratio with its background.
  • Ensure essential non-text UI elements (the “functional” elements) adhere to a 3:1 color contrast ratio. This includes visible focus indicators.
  • Avoid color as the sole means of conveying information. Supplement color with icons, text, or other indicators.
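
The ratios in this checklist come from the WCAG formula for relative luminance. As a concrete reference, here is a small sketch that computes the contrast ratio between two sRGB colors and can be compared against the 4.5:1 and 3:1 thresholds above.

```javascript
// Compute WCAG relative luminance for an sRGB color given as [r, g, b]
// with channels in 0–255.
function relativeLuminance([r, g, b]) {
  const channel = (c) => {
    const s = c / 255;
    // Linearize the gamma-encoded channel per the WCAG definition.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), from 1 to 21.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort(
    (a, b) => b - a
  );
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // black on white: 21
```

Black on white is the maximum possible ratio (21:1); the common gray #767676 on white sits just above the 4.5:1 body-text minimum, which is why slightly lighter grays fail.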

Interaction and Input

  • Ensure every interactive element (buttons, text fields, links) can be reached and operated with a keyboard, in a predictable, logical order.
  • Ensure every interactive element includes an accessible visible focus indicator. Use distinct styles to differentiate between various states.
  • Ensure form fields include a visible label clearly describing their purpose. Placeholder text is insufficient.
  • Avoid content that appears only on hover.
  • When users interact with multiple areas simultaneously, such as using an agent in one panel while working in the builder UI, maintain a consistent and predictable focus. Ensure users remain aware of their current location within the interface.
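
Keeping focus predictable in a panel (such as an agent chat opened over a builder UI) usually means trapping Tab and Shift+Tab inside it until the user closes it. The wrap-around logic reduces to a small pure function — the name and shape here are illustrative, with the DOM wiring (collecting focusable elements, listening for keydown) left out.

```javascript
// Sketch: compute where keyboard focus should move inside a panel that traps
// Tab/Shift+Tab, wrapping at either end. `current` is the index of the
// currently focused element among `total` focusable elements in the panel.
function nextFocusIndex(current, total, shiftKey) {
  if (total === 0) return -1; // nothing focusable in the panel
  const delta = shiftKey ? -1 : 1;
  // Adding `total` before the modulo keeps the result non-negative.
  return (current + delta + total) % total;
}
```

Tabbing past the last element wraps to the first, and Shift+Tab from the first wraps to the last, so focus never escapes to the underlying page until the panel is dismissed.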

Responsive Design and Common Components

  • Use existing accessible design patterns and component libraries such as Salesforce Lightning Design System (SLDS) blueprints and Lightning Web Components (LWC) rather than creating custom components.
  • Prioritize responsive design—layouts must reflow at smaller viewports and when users resize text up to 200%.
  • For mobile designs, create tap targets that are 44 by 44 pixel tap targets. The minimum allowable size is 24 by 24 pixels.

Screen Reader Considerations

  • Implement specific ARIA attributes or live regions so screen readers can properly announce content that appears dynamically, such as an agent's response in a chat.
  • Use the correct HTML and ARIA roles so assistive technologies understand what each component is.
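
As one way to apply these two points to a chat surface, the sketch below renders an agent conversation as a live region. The markup and function name are illustrative, not a prescribed Salesforce pattern.

```javascript
// Sketch: render an agent chat log as a polite live region so screen readers
// announce new responses without stealing focus from the user's current task.
function renderChatLog(messages) {
  const items = messages
    .map((m) => `<li><span class="author">${m.author}:</span> ${m.text}</li>`)
    .join("");
  // role="log" implies aria-live="polite"; stating aria-live explicitly also
  // covers older assistive technology.
  return `<ul role="log" aria-live="polite" aria-label="Agent conversation">${items}</ul>`;
}
```

With `aria-live="polite"`, the screen reader queues the announcement until the user pauses, rather than interrupting mid-sentence as `assertive` would.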

The Role of Inclusive User Testing

Salesforce is dedicated to championing inclusive design and ensuring its products are accessible. To support this commitment, Salesforce partners with Fable to engage people with disabilities in research and user testing. Accessibility and inclusive design are core to the development process. By integrating a11y checks and practices from the earliest stages of design, Salesforce practices “shifting left” so that a11y is a primary design decision from the start of the project.

Through our partnership with Fable, Salesforce is committed to an inclusive user experience for our customers. Fable provides comprehensive a11y research and testing services that gather valuable feedback from a large and diverse participant pool, which includes users of various assistive technologies and configurations. Feedback is collected through methods such as user interviews, prototype reviews, and dedicated research sessions. The participants cover several key groups, including screen reader users—with common tools such as Job Access With Speech (JAWS), Non-Visual Desktop Access (NVDA), and VoiceOver—screen magnification users, and alternative navigation users (including individuals who rely on voice control and switch devices). Testing is also performed across a variety of browser and device combinations.

Key Findings from Fable Studies

Our research, particularly from Agentforce studies, shows that people with disabilities are eager to use AI tools, but are rightfully cautious (as emphasized in Unit 2). However, with thoughtful designs, these tools can be powerful. Review these takeaways to learn more about how you can address specific user challenges identified in this research.

Challenge

What You Can Do

Vague AI responses: Users with disabilities expect AI to make their tasks more efficient. When an AI agent provides a vague or general response to a difficult question, it’s often unhelpful.

Provide clear guidance: If the tool can’t generate a response, give users clear guidance to prevent them from making repeated, unsuccessful attempts. Include suggested keywords, tips on how to rephrase the prompt, or links to related topics.

Difficulties with typing and switch devices: For desktop users who rely on switch devices, typing is difficult and time-consuming.

Provide text suggestions: To minimize user effort, particularly during multiquery or continuous interactions with the agent, offer pregenerated word or phrase suggestions as the user types. Providing these text prompts serves as a helpful alternative for users who aren’t using a voice input.

Unpredictable focus in complex builder flows: In complex builder flows with agent integration, the keyboard focus often shifts to unexpected places. This creates a confusing and frustrating experience for screen reader and keyboard-only users.

Manage keyboard focus programmatically: When parts of the flow load dynamically, such as a new agent panel, move focus to the updated heading or the first meaningful control so that users understand what has changed. Prevent the user from tabbing out to the underlying page until they choose to close it. Then return focus to the component they originally used to launch it. Because builder flows have so many moving parts, users must know exactly where they are and what they’re interacting with.

Inefficient navigation to chat: Users who use screen readers or alternative navigation methods often have to tab through an entire page before they can reach the chat feature. This makes simple interactions slow and frustrating.

Make chat easy to reach: Ensure the chat interface is consistently reachable via standard element types such as buttons, headings, edit fields, or regions. Include a “Skip to Agentforce chat” link at the beginning of the page so users can jump directly to the agent.

Focus trapped in chat after navigating: When users select a link to an external resource, the keyboard focus often unexpectedly stays inside the Agentforce chat panel. This forces users to rely on trial and error to find the new content, which is frustrating and time-consuming.

Move focus to the new content: Move the focus directly to the heading of the new article or resource. Provide clear, accessible feedback about leaving the chat panel and moving to a linked resource.

Combine these takeaways with the best practices and checklist in this badge to intentionally design and develop Agentforce products. Because user preferences are highly individualized, what works for one user might not work for another. To meet these diverse needs, Agentforce constantly evolves based on the latest user testing and research.
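
The “Skip to Agentforce chat” link described in the findings above is a small amount of markup. Here is an illustrative sketch — the `agentforce-chat` id and class name are assumptions, not a documented pattern.

```javascript
// Sketch: a skip link placed first in the tab order, so keyboard and screen
// reader users can jump straight to the chat instead of tabbing through the
// whole page. The id "agentforce-chat" is illustrative.
const skipLink =
  '<a class="skip-link" href="#agentforce-chat">Skip to Agentforce chat</a>';

// Elsewhere on the page, the chat region carries the matching id and an
// accessible name so it is also discoverable via landmark navigation.
const chatRegion =
  '<section id="agentforce-chat" role="region" aria-label="Agentforce chat"></section>';
```

Skip links are typically visually hidden until they receive keyboard focus, so they add no clutter for mouse users while remaining the very first tab stop.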
