Skip to main content

Create Responsible Generative AI

Learning Objectives

After completing this unit, you’ll be able to:

  • List the five principles for the responsible development of generative AI.
  • Identify trusted generative AI features in the Salesforce product.
  • Describe best practices for identifying ethical risks and creating safeguards.

Generative AI, a New Type of Artificial Intelligence

Until recently, most people who discussed AI were talking about predictive AI. This type of artificial intelligence focuses on looking at an existing set of data and making limited predictions about what should be true given the information at hand. Now there’s a new player on the field—an emerging type of AI that’s generative, not predictive. The key difference? Where predictive AI analyzes trends, generative AI creates new content.

Generative AI (gen AI) boasts an impressive array of capabilities—from real-time conversations with bots that effectively simulate talking to a live support agent to applications for marketers, programmers, and creative pioneers. In addition, gen AI’s cultural moment has users flocking to see what it can do. That means that most of us will probably encounter these algorithms in our daily lives, where they may play an increasingly significant role.

With all emergent technology comes unknowns. Whether it’s intentional abuse or accidental bias, gen AI poses risks that must be understood and addressed in order to get the most out of this technology.

Know the Risks

At Salesforce, we focus on designing, developing, and distributing technologies in a responsible and trusted way. To do that, we anticipate the intended and unintended consequences of what we build.

Let’s review some potential risks to gen AI.

Accuracy

Gen AI models are great at making predictions. Gen AI models make new content by gathering tons of examples of things that fit the same categories. But while a model might be able to create a new sentence in the style of a famous writer, there isn’t any way to know if the same sentence is factually true. And that can be a problem when users assume that an AI's predictions are verified facts. This is both a feature and a bug. It gives the models the creative capabilities that captured imaginations in its earliest days. But it’s easy to mistake something that looks correct for something that’s accurate to the real world. 

Bias and Toxicity

Because human interactions can involve a degree of toxicity—that is, harmful behavior like using slurs or espousing bigotry—AI replicates that toxicity when not tuned to recognize and filter it. In fact, it can even amplify the bias it finds, because making predictions often involves dismissing outlying data. To an AI, that might include underrepresented communities. 

Privacy and Safety

Gen AI’s two most compelling features are its ability to replicate human behavior and the speed to do so at a massive scale. These features offer amazing possibilities. And there’s a downside: It’s easy to exploit the technology to do huge amounts of damage very quickly. The models have a tendency to “leak” their training data, exposing private info about the people represented in it. And gen AI can even be used to create believable phishing emails or replicate a voice to bypass security. 

Disruption

Because of how much AI can do, it poses a risk to society even when working as intended. Economic disruption, job and responsibility changes, and sustainability concerns from the intense computing power required for the models to operate all have implications for the spaces we share. 

Trust: The Bottom Line

Trust is the #1 value at Salesforce, and it’s our North Star as we build and deploy gen AI applications. To guide this work, we’ve created a set of principles for developing generative AI responsibly and help others leverage the tech’s potential while guarding against its pitfalls.

Accuracy: Gen AI, like other models, makes predictions based on the data it is trained on. That means that it needs good data to deliver accurate results. And it means that people need to be aware of the chance for inaccuracy or uncertainty in an AI’s output.

Safety: Bias, explainability, and robustness assessments, along with deliberate stress testing for negative outcomes, help us keep customers safe from dangers like toxicity and misleading data. We also protect the privacy of any personally identifying information (PII) present in the data used for training. And we create guardrails to prevent additional harm (such as publishing code to a sandbox rather than automatically pushing to production).

Honesty: Your data is not our product. When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (for example, open-source, user-provided). It’s also important to notify people when they’re using or talking to an AI, with a watermark or disclaimer, so that they don’t mistake a well-tuned chatbot for a human agent.

Empowerment: There are some cases where it’s best to fully automate processes. But there are other cases where AI should play a supporting role to the human—or where human judgment is required. We aim to supercharge what humans can do by developing AI that enhances or simplifies their work and gives customers tools and resources to understand the veracity of content they create. 

Sustainability: When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, more sparsely trained models. Finding the right balance between algorithmic power and long-term sustainability is a key part of bringing gen AI into our shared future.

Guidelines Govern AI Action

So, what does it look like to deliver on those commitments? Here are a few actions Salesforce is taking.

The Einstein Trust LayerWe integrated the Einstein Trust Layer into the Salesforce platform to help elevate the security of gen AI at Salesforce through data and privacy controls that are seamlessly integrated into the end-user experience. You can learn more by checking out Einstein Trust Layer in Help.

Product design decisions: Users should be able to trust that when they use AI, they get reliable insights and assistance that empowers them to meet their needs without exposing them to the risk of sharing something inaccurate or misleading. 

We build responsibility into our products. We examine everything from the color of buttons to limitations on outputs themselves to ensure that we’re doing everything possible to protect customers from risk without sacrificing the capabilities they rely on to stay competitive. 

Mindful friction: Users should always have the information they need to make the best decision for their use case. We help our users stay ahead of the curve with unintrusive, but mindfully-applied friction. In this case, “friction” means interrupting the usual process of completing a task to encourage reflection. For example, in-app guidance popups to educate users on bias, or flag detected toxicity and ask customer service agents to review the answer carefully before sending.

Red teaming: We employ red teaming, a process that involves intentionally trying to find vulnerabilities in a system by anticipating and testing how users might use it and misuse it, to make sure that our gen AI products hold up under pressure. Learn more about how Salesforce builds trust into our products with The Einstein Trust Layer in Trailhead.

One way we test our products is by performing precautionary “prompt injection attacks,” by crafting prompts specifically designed to make an AI model ignore previously established instructions or boundaries. Anticipating actual cybersecurity threats like these is essential to refining the model to resist actual attacks.

Acceptable Use Policy: Because AI touches so many different applications, we have specific policies for our AI products. This allows us to transparently set acceptable use guidelines that ensure trust for our customers and end users. That approach is nothing new: Salesforce already had AI policies designed to protect users, including a prohibition on facial recognition and bots masquerading as humans. 

We’re currently refreshing our existing AI guidelines to account for gen AI, so that customers can keep trusting our tech. With our updated rules, anyone can see whether their use case is supported as we offer even more advanced AI products and features. You can learn more by checking out our Acceptable Use Policy

Gen AI changes the game for how people and businesses can work together. While we don’t have all the answers, we do suggest a couple of best practices.

Collaborate

Cross-functional partnerships–within companies and between public and private institutions–are essential to drive responsible progress. Our teams are actively participating on external committees and initiatives like the National AI Advisory Committee (NAIAC) and the NIST risk-management framework to contribute to the industry-wide push to create more trusted gen AI.

Include Diverse Perspectives

Throughout the product lifecycle, diverse perspectives deliver the wide-ranging insights needed to effectively anticipate risks and develop solutions to them. Exercises like consequence scanning can help you ensure that your products incorporate essential voices in the conversation about where gen AI is today and where to take it tomorrow.

Even the most advanced AI can’t predict how this technology will shape the future of work, commerce, and seemingly everything else. But by working together, we can ensure that human-centric values create a bedrock of trust from which to build a more efficient, scalable future.

Resources

Share your Trailhead feedback over on Salesforce Help.

We'd love to hear about your experience with Trailhead - you can now access the new feedback form anytime from the Salesforce Help site.

Learn More Continue to Share Feedback