Learn the Basics of Artificial Intelligence
Learning Objectives
After completing this unit, you’ll be able to:
- Define machine learning and artificial intelligence (AI).
- Understand how bias affects AI.
- Give a real-life example of how AI can be biased.
Introduction to AI
Artificial intelligence can augment human intelligence, amplify human capabilities, and provide actionable insights that drive better outcomes for our employees, customers, partners, and communities.
We believe that the benefits of AI should be accessible to everyone, not just its creators. It’s not enough to deliver only the technological capability of AI. We also have an important responsibility to ensure that our customers can use our AI in a safe and inclusive manner for all. We take that responsibility seriously and are committed to providing our employees, customers, partners, and communities with the tools they need to develop and use AI safely, accurately, and ethically.
How Is AI Different from Machine Learning?
Not familiar with AI? Before completing this module, check out the Artificial Intelligence for Business module (part of the Get Smart with Salesforce Einstein trail) to learn what it is and how it can transform your relationship with your customers.
The terms machine learning and artificial intelligence are often used interchangeably, but they don’t mean the same thing. Before we get into the nitty-gritty of creating AI responsibly, here is a reminder of what these terms mean.
Machine Learning (ML)
When we talk about machine learning, we’re referring to a specific technique that allows a computer to “learn” from examples without having been explicitly programmed with step-by-step instructions. Currently, machine learning algorithms are geared toward answering a single type of question well. For that reason, machine learning algorithms are at the forefront of efforts to diagnose diseases, predict stock market trends, and recommend music.
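To make the idea of learning from examples concrete, here is a minimal sketch of that workflow: instead of hand-coding rules for what makes an email spam, we give a classifier a handful of labeled examples and let it infer the pattern. The tiny dataset and the scikit-learn classifier are illustrative assumptions, not part of this module or any Salesforce product.

```python
# A minimal sketch: learning to flag spam from labeled examples,
# rather than being explicitly programmed with step-by-step rules.
# The toy emails below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",          # spam
    "claim your free reward today",  # spam
    "meeting agenda for tomorrow",   # not spam
    "lunch with the project team",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn each email into word counts the algorithm can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# The classifier infers which words are associated with spam.
model = MultinomialNB().fit(features, labels)

# The learned model can then label an email it has never seen.
new_email = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new_email))  # likely [1], that is, flagged as spam
```

Notice that nowhere did we write a rule like “if the email contains ‘free’, flag it”; the association was learned from the examples themselves.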
Artificial Intelligence (AI)
Artificial intelligence is an umbrella term that refers to efforts to teach computers to perform complex tasks and behave in ways that give the appearance of human agency. These systems often do this work by taking cues from the environment they’re embedded in. AI includes everything from robots that play chess to chatbots that respond to customer support questions to self-driving cars that intelligently navigate real-world traffic.
AI can be composed of algorithms. An algorithm is a process or set of rules that a computer can execute. AI algorithms can learn from data: they recognize patterns in the data they’re given and generate rules or guidelines to follow. Examples of data include historical inputs and outputs (for example, input: all email; output: which emails are spam) or mappings of A to B (for example, a word in English mapped to its equivalent in Spanish). When you have trained an algorithm with training data, you have a model. The data used to train a model is called the training dataset, and the data used to test how well the model performs is called the test dataset. Both consist of inputs paired with expected outputs. Evaluating the model on a separate but comparable test dataset tells you whether it actually does what you intended.
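As a rough illustration of the training dataset versus test dataset distinction, the sketch below trains a model on one slice of labeled data and evaluates it on a held-out slice. The choice of scikit-learn’s built-in iris dataset and a logistic regression model is an assumption for teaching purposes only.

```python
# A minimal sketch of training vs. test datasets, using scikit-learn's
# bundled iris data purely as a stand-in for any labeled dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

inputs, expected_outputs = load_iris(return_X_y=True)

# Hold out part of the data so the model is tested on examples it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    inputs, expected_outputs, test_size=0.25, random_state=0
)

# Training: the algorithm plus the training dataset produces a model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluation: the test dataset checks whether the model does what we intended.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Evaluating on data the model has never seen is what tells you whether it generalizes, rather than simply memorizing its training examples.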
Bias Challenges in AI
So far, we've discussed the broad ethical implications of developing technology. Now, let's turn our attention to AI. AI poses unique challenges when it comes to bias and making fair decisions.
Opacity
We don’t always know why a model is making a specific prediction. Frank Pasquale, author of The Black Box Society, describes this lack of transparency as the black box phenomenon. While companies that create AI can explain the processes behind their systems, it’s harder for them to say what’s happening inside a model in real time and in what order, including where bias may be present.
In one effort to promote greater transparency and understand how a deep learning-based image recognition algorithm recognized objects, Google reverse engineered it so that instead of spotting objects in photos, it generated them.
In one case, when the AI was asked to generate an image of a dumbbell, it created an image of a dumbbell with a hand and an arm attached, because it had learned to treat those objects as one. Most of the training data it was given showed someone holding a dumbbell, not a dumbbell on its own. Based on the image output, the engineers realized that the algorithm needed additional training data that included dumbbells on their own.
Speed, Scale, and Access to Large Datasets
AI systems are trained to optimize for particular outcomes. AI picks up bias in the training data and uses it to model for future predictions. Because it’s difficult if not impossible to know why a model makes the prediction that it does, it’s hard to pinpoint how the model is biased. When models make predictions based on biased data, there can be major, damaging consequences.
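To see how bias in training data can surface in predictions, here is a small synthetic sketch. The data is entirely invented: the historical decisions were constructed to disadvantage one group, and a model trained on that history reproduces the disadvantage for new applicants with identical qualifications.

```python
# A synthetic sketch of bias propagation: the historical outcomes below were
# (by construction) less favorable to group B, and the model learns to
# reproduce that pattern. All data here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
score = rng.uniform(0, 100, n)    # an applicant's qualification score
group_b = rng.integers(0, 2, n)   # 1 = group B, 0 = group A

# Biased historical outcome: group B applicants needed a much higher score.
hired = ((score - 20 * group_b) > 50).astype(int)

# Train on the biased history, with group membership available as a feature.
X = np.column_stack([score, group_b])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two applicants with the same score, differing only by group.
same_score = np.array([[60, 0], [60, 1]])
print(model.predict_proba(same_score)[:, 1])  # group B gets a lower probability
```

Nothing in the code says “treat group B differently”; the model simply optimizes for the biased outcomes it was shown, which is how historical bias gets carried into future predictions.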
Let’s take one example highlighted by Oscar Schwartz in his series for the Institute of Electrical and Electronics Engineers on the Untold History of AI. In 1979, St. George’s Medical School in London began using an algorithm to complete the first-round screening of applicants to its program. This algorithm, developed by a dean of the school, was meant not only to streamline this time-consuming process by mimicking human assessors, but also to improve on it by applying the same evaluation process to every applicant. The system matched the human assessors’ choices 90–95 percent of the time, but in doing so it codified and entrenched their biases: it grouped applicants as “Caucasian” and “non-Caucasian” based on their names and places of birth, and assigned significantly lower scores to people with non-European names. By the time the system was comprehensively audited, hundreds of applicants had been denied interviews.
Machine learning techniques have improved since 1979. But it’s even more important now, as techniques become more opaque, that these tools are created inclusively and transparently. Otherwise, entrenched biases can unintentionally restrict access to educational and economic opportunities for certain people. AI is not magic; it learns based on the data you give it. If your dataset is biased, your models will amplify that bias.
Developers, designers, researchers, product managers, writers—everyone involved in the creation of AI systems—should make sure not to perpetuate harmful societal biases. (As we see in the next module, not every bias is necessarily harmful.) Teams need to work together from the beginning to build ethics into their AI products, and conduct research to understand the social context of their product. This can involve interviewing not only potential users of the system, but people whose lives are impacted by the decisions the system makes. We discuss what that looks like later in this module.
Resources
- Trailhead: Artificial Intelligence Basics
- Trailhead: Get Smart with Salesforce Einstein
- Salesforce Acceptable Use Policy