
Address AI Bias and Ethical Challenges

Learning Objectives

After completing this unit, you’ll be able to:

  • Identify challenges and potential biases of AI that affect people with disabilities.
  • Explain how biased or unrepresentative training data results in inaccessible AI.
  • Summarize the legal framework for accessibility in AI, such as the Americans with Disabilities Act and Section 508.

What Are the Challenges and Limitations of AI?

AI is a powerful, rapidly evolving tool that we must use responsibly. Salesforce is committed to innovation, improving our knowledge, and making sure the AI we build and use is accessible, inclusive, and trustworthy. Everyone is responsible for using and creating AI that supports our core values of trust, customer success, sustainability, innovation, and equality. Simply put, using AI responsibly means viewing it as a partner and a brainstorming tool, not a quick fix or a replacement for human decision-making.

Key Challenges and Limitations of AI

  • Bias and fairness issues: AI models can reflect and amplify biases present in training data because training data might not represent diverse populations adequately. AI outputs can perpetuate or magnify existing societal biases.
  • Accuracy and reliability problems: AI can generate inaccurate or fabricated information, known as hallucinations. Because models present these errors confidently, they're harder to detect.
  • Human oversight gaps: There’s a risk of overreliance on AI without adequate human validation. AI also lacks clear guidelines for human supervision, so teams might not know how to validate AI outputs.
  • Organizational and adoption barriers: Teams might experience friction and uncertainty as they integrate AI into their workflows. This results in uneven adoption and enablement across teams.
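One concrete way to see the bias problem in the list above: an overall accuracy number can hide large gaps between user groups when training data underrepresents some of them. The sketch below uses entirely hypothetical speech-recognition results to show why per-group measurement matters.

```python
# Minimal sketch (hypothetical data) of how unrepresentative training data
# shows up as uneven model performance: measure accuracy per user group
# rather than reporting one overall number.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical results: a model trained mostly on "typical" speech samples
# transcribes other speech patterns far less reliably.
results = [
    ("typical_speech", "yes", "yes"),
    ("typical_speech", "no", "no"),
    ("typical_speech", "stop", "stop"),
    ("typical_speech", "go", "go"),
    ("atypical_speech", "yes", "yes"),
    ("atypical_speech", "no", "stop"),
    ("atypical_speech", "slow", "go"),
    ("atypical_speech", "oh", "no"),
]

print(accuracy_by_group(results))
# Overall accuracy (5 of 8) hides that one group gets 100% and the other 25%.
```

The group names, words, and numbers here are invented for illustration; the point is that evaluating AI systems only in aggregate can mask exactly the kind of barrier described above.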

The Intersection of AI, Bias, and Accessibility

The failure of AI systems to account for diverse needs, particularly those of individuals with disabilities, poses a significant risk. Societal biases embedded in AI training data can be harmful and create accessibility (a11y) barriers. It is crucial to ensure that the design, development, and application of AI tools do not inadvertently lead to inaccessible systems and outputs.

Examples of Accessibility Barriers Due to AI Bias

  • Speech recognition: If the training data for speech recognition models lacks samples from users with diverse speech patterns or accents, the models perform poorly for these individuals, which creates a barrier.
  • Alternative text (alt text) generation: Hallucinations can impact alt text—an essential a11y requirement that enables users with vision disabilities to understand the meaning of images and complex data visualizations via screen readers—by generating incorrect or meaningless descriptions. AI models learn from vast amounts of internet data, which contains societal biases and inaccurate information. Since these models repeat the patterns in their training data, they might produce biased, harmful, or false content. Incorrect alt text creates a barrier that can have a profound personal impact, especially in sensitive areas such as healthcare and finance.
  • Code generation for a11y: When generative AI writes code, it must be trained on accurate a11y standards such as proper usage of ARIA attributes. If not, the resulting code might create inaccessible digital experiences.
  • Automated AI testing: AI-powered a11y testing tools are a valuable asset for development teams because they speed up the discovery of potential issues. Finding issues early means users never encounter those barriers in production. However, relying solely on automation can inadvertently create new a11y barriers, so it’s crucial to complement these tools with human validation and manual testing. Include a “human in the loop” (HITL) to validate automated a11y bugs and to address complex or contextual concerns, so you can ensure the final product is accessible to users with disabilities.
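To make the automated-testing-plus-HITL idea above concrete, here is a minimal, illustrative sketch: an automated scan that flags missing or generic alt text on images, but queues findings for human review rather than auto-fixing them. The heuristics and class names are assumptions for illustration; real a11y tools cover far more WCAG criteria than this.

```python
# Minimal sketch of an automated a11y check with a human-in-the-loop step:
# flag <img> tags with missing or likely-unhelpful alt text for manual
# review instead of trusting the automated pass alone. Illustrative only.
from html.parser import HTMLParser

# Generic words that usually don't describe the image's meaning (assumption).
SUSPICIOUS = {"", "image", "photo", "picture", "graphic"}

class AltTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.needs_review = []  # findings a human should confirm

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        if alt is None:
            self.needs_review.append(("missing alt", attrs.get("src")))
        elif alt.strip().lower() in SUSPICIOUS:
            self.needs_review.append(("unhelpful alt", attrs.get("src")))

auditor = AltTextAuditor()
auditor.feed('<img src="chart.png"><img src="logo.png" alt="image">'
             '<img src="team.jpg" alt="Team at the 2024 offsite">')
for issue, src in auditor.needs_review:
    print(f"Review needed: {issue} on {src}")
```

The key design choice is that the tool surfaces candidates but a person makes the call: whether “Team at the 2024 offsite” actually describes the image is a contextual judgment that automation, including AI-generated alt text, can get confidently wrong.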

Priority Considerations for AI and Accessibility

Get familiar with key a11y regulations and standards, which provide the blueprints for accessible product development. Apply these established standards when you develop AI experiences to ensure that your products meet a11y requirements.

  • Americans with Disabilities Act (ADA): A US civil rights law that prohibits discrimination based on disability.
  • Section 508: A US federal law that requires federal agencies to ensure their electronic and information technology is accessible to people with disabilities.
  • European Accessibility Act (EAA): An EU law that mandates specific a11y standards for a wide range of digital and physical products.

Policies like these are grounded in the Web Content Accessibility Guidelines (WCAG), the foundational standard for accessible digital content.

While AI provides powerful tools for maximizing efficiency, innovation, and content creation, it's your responsibility to ensure your AI products remain accessible. The next two units equip you with guiding principles and resources for designing and developing accessible AI experiences, helping you build inclusive products from the start.
