Protect Artificial Intelligence Technologies

Learning Objectives

After completing this unit, you’ll be able to:

  • Define artificial intelligence (AI) and its applications.
  • Explain how AI is changing the cyber landscape.
  • Outline ways to address the challenges associated with protecting AI technologies.

Defining Artificial Intelligence

Artificial intelligence, or AI, simulates human intelligence in machines so that they can think like humans and mimic human interactions. Some applications of AI include:

  • Natural language processing: Organizations use AI for real-time dialogue and translation, for example, the virtual assistant technology in smart speakers.

[Image: A person asks a virtual assistant a question about the weather, and the assistant responds.]

  • Computer vision: Sports organizations use AI to improve player performance by extracting, analyzing, and tagging game-day video segments to automatically generate performance statistics for a match or training session.
  • Pattern recognition: Banks use AI for financial fraud detection; models trained on historical fraud patterns complete data analysis within milliseconds and detect complex patterns more efficiently than manual review (see the sketch after this list).
  • Reasoning and optimization: Wind farms use AI for diagnostic maintenance, identifying mechanical faults from images of wind turbines to improve the accuracy and efficiency of inspections.
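
To make the pattern recognition example concrete, here’s a minimal sketch of anomaly-based fraud detection. It assumes scikit-learn is available and that each transaction is summarized by two hypothetical features: the amount and the seconds since the account’s previous transaction. A real system would use far richer features and labeled fraud history.

```python
# Minimal sketch of AI-based fraud pattern detection, assuming scikit-learn
# and two hypothetical transaction features: amount (USD) and seconds since
# the account's previous transaction.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "historical" transactions: mostly routine activity.
normal = rng.normal(loc=[50.0, 3600.0], scale=[20.0, 600.0], size=(500, 2))
training_data = np.abs(normal)  # amounts and intervals are non-negative

# Fit an anomaly detector on historical patterns.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(training_data)

# Score new transactions in milliseconds; a prediction of -1 flags an anomaly.
new_transactions = np.array([
    [45.0, 3500.0],    # routine purchase
    [9800.0, 2.0],     # huge amount, seconds after the previous transaction
])
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    status = "FLAGGED" if label == -1 else "ok"
    print(f"amount=${tx[0]:.2f} gap={tx[1]:.0f}s -> {status}")
```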

Responsible Use of AI

With the increased pervasiveness of AI across a range of critical business processes and functions, your organization likely relies on AI algorithms. However, there is little assurance regarding how these algorithms are designed, developed, and used. No cohesive legal framework yet governs AI systems, which leaves the door open for anyone to use the technology as they wish. While your network defense team employs the technology, so might an attacker.

Your organization may need new tools to protect AI-based processes and to help your defenders collaborate against AI-enabled threats. Your organization also likely needs security principles that cover secure design, lifecycle management, and incident management. By implementing such principles, you form the basis of a more robust assurance regime governing AI-associated cybersecurity risks.

Race for AI Dominance

Organizations across the globe are racing to develop AI technologies and apply them to greater swathes of the global economy. These organizations use AI to build reasoning systems: technologies that can perform tasks that normally require human intelligence (such as decision-making, visual perception, and speech recognition), and that can adapt to changing circumstances. Organizations are also striving to achieve high-level machine intelligence (“strong AI”), where unaided machines can “think” exactly like humans, creating advantages across reasoning tasks in general. Although this is the end goal, full implementation is unlikely to be achieved in the near future.

Worldwide, organizations are investing substantially in AI research and development, using machine learning techniques in particular. According to Technologies.org, global spending on AI was estimated at $37.5 billion in 2019 and is forecast to reach $97.9 billion by 2023, with China and the US dominating global AI funding.

Organizations are designing emerging technologies capable of faster, more precise analytics and decision-making. These technologies derive information from big data, which allows them to outperform traditional digital approaches and some aspects of human capabilities in diverse fields such as transport, manufacturing, finance, commerce, and healthcare.

The Shifting Attacker-Defender Balance

Dangerous Attackers: Speed and Scale, Precision and Stealth

While organizations are developing the first generation of AI-enabled tools to test and secure their networks and applications, attackers are already using similar tools to circumvent security controls. As the technology matures and becomes more widely accessible in the coming years, malicious actors will accelerate their use of AI and become increasingly sophisticated. Here are some of the ways that adversaries will take advantage of these enhanced capabilities throughout the stages of a cyberattack.

  • Speed and scale: By automating attacks, attackers will speed up and scale up their operations, while reducing the expertise needed to mount an attack.
  • Precision: Attackers will craft more precise attacks, using deep-learning analytics to predict victims’ attack surfaces and defense methods.
  • Stealth: Attackers will exploit AI to bypass detection through a range of evasion attacks, such as malware crafted to slip past security controls (a minimal evasion sketch follows this list).
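
To illustrate the stealth point, here’s a minimal sketch of one classic evasion technique, the fast gradient sign method (FGSM), applied to a toy logistic-regression “detector”. The weights, feature values, and epsilon are all invented for illustration; real evasion attacks target far more complex models.

```python
# A minimal sketch of an evasion attack: the fast gradient sign method (FGSM)
# applied to a toy logistic-regression "malware detector". All weights and
# feature values here are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained detector: score > 0.5 means "malicious".
w = np.array([2.0, -1.0, 3.0])   # feature weights
b = -0.5                         # bias

x = np.array([0.9, 0.1, 0.8])    # features of a genuinely malicious sample
y = 1.0                          # true label: malicious

p = sigmoid(w @ x + b)
print(f"original score: {p:.3f}")   # ~0.973, correctly detected

# For the logistic loss, the gradient of the loss w.r.t. x is (p - y) * w.
grad_x = (p - y) * w

# FGSM: nudge every feature by epsilon in the direction that increases the
# loss, pushing the detector toward the wrong answer.
epsilon = 0.7
x_adv = x + epsilon * np.sign(grad_x)

print(f"evasion score:  {sigmoid(w @ x_adv + b):.3f}")   # ~0.354, evades
```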

Opportunities for Defenders

AI isn’t for attackers only. Defenders can use AI to improve their analytical ability to predict threat actors and their attack strategies, thereby better orchestrating defensive moves. The balance of the advantage between attackers and defenders may come down to whose AI is more mature. Organizations can also use AI to enhance the speed, precision, and impact of their network defenses, and to support operational resilience. While attackers can use AI to predict defenders’ moves and circumvent AI-based defenses, defenders can use AI to augment and automate tasks usually performed by analysts, for example, threat triage.  
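
As one concrete illustration of automated threat triage, here’s a minimal sketch, assuming scikit-learn and three hypothetical alert features (failed logins in the past hour, distinct source IPs, and whether the targeted account is privileged). A real triage pipeline would draw on much richer telemetry and ongoing analyst feedback.

```python
# A minimal sketch of AI-assisted threat triage: rank incoming alerts by a
# classifier's predicted risk so analysts see the riskiest first. Features
# and labels are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical alerts labeled by analysts (1 = confirmed incident).
# Features: [failed logins/hour, distinct source IPs, privileged account?]
X_history = np.array([
    [2, 1, 0], [3, 1, 0], [1, 1, 0], [4, 2, 0],      # benign noise
    [40, 12, 1], [55, 20, 1], [35, 9, 0], [60, 25, 1],  # incidents
])
y_history = np.array([0, 0, 0, 0, 1, 1, 1, 1])

triage_model = LogisticRegression().fit(X_history, y_history)

# New alerts arrive; sort them by predicted risk, highest first.
new_alerts = np.array([[3, 1, 0], [50, 15, 1], [20, 6, 0]])
risk = triage_model.predict_proba(new_alerts)[:, 1]
for i in np.argsort(risk)[::-1]:
    print(f"alert {i}: risk={risk[i]:.2f} features={new_alerts[i].tolist()}")
```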

Expanded Attack Surface and Manipulation of the Algorithm

AI-driven systems and processes are quickly becoming part of organizations’ vital assets, performing critical functions with decreasing human oversight. As a result, the attack surface that adversaries could exploit with AI-based attacks is growing in both scale and criticality. Adversaries could manipulate or disrupt your organizational processes, and the infrastructure you rely on, by corrupting the integrity of algorithms and the data that feeds them.

Additionally, if your organization uses these algorithms in critical functions, a successful attack could have grave consequences, including physical harm as autonomous cyber-physical systems, such as autonomous vehicles, emerge. Your organization may also have trouble explaining data-driven risk decisions, because complex probabilistic algorithms draw on huge quantities of information, leaving leaders accountable for decisions they are unable to verify or justify.

What Is Truth?

Malicious actors take advantage of AI technology by spreading disinformation. They may create digitally manipulated videos, images, and audio (“deepfakes”) that are sophisticated and difficult to distinguish from reality. This makes it more difficult for individuals like you to establish “the truth.” 

These actors may also generate realistic and finely targeted fake news and manipulated messaging, distorting public perception of the truth and altering political or economic outcomes by spreading disinformation. They may also use deepfakes as an extortion tool, for example, automatically generating a fake video that shows an individual using offensive language and then demanding a ransom to withhold it.

Challenges and Suggested Actions

Ongoing Evolution of AI Defensive Tools

It is not yet clear where the balance between AI-enabled attackers and defenders will lie. You can mitigate the risk of AI-based attacks against your organization by continuously evolving your defense capabilities. 

This allows you to keep your organization’s technologies and operational capabilities in step with the pace, dynamism, and sharpened predictive capabilities of AI-based attacks, and to build defenses against new threats. While it’s a good idea to implement traditional (non-AI) risk-based controls as a baseline, it’s critical that your organization invests in and implements faster, more dynamic AI-enabled defenses.

Systemic Risk: Collaborative Operational Security

Your organization faces risks on two fronts: a direct security compromise or failure in the autonomous decision-making component of AI, or a supply chain compromise affecting another organization in the same industry or digital ecosystem. For this reason, it’s crucial you develop collaborative operational security approaches to ensure the resilience of the digital ecosystem as a whole to the advancing threat from AI. For example, your organization may need to continue evolving its information-sharing approaches in order to remain effective against emerging algorithmic threats.
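
A common building block of such information sharing is the exchange of machine-readable indicators of compromise. The sketch below assembles an indicator as JSON loosely following the STIX 2.1 format; the hash value and names are illustrative, and a production exchange would typically use a dedicated library and a transport protocol such as TAXII.

```python
# A minimal sketch of machine-readable threat-intelligence sharing: packaging
# an indicator of compromise as JSON loosely following the STIX 2.1 format.
# The hash value and naming are made up for illustration.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected AI-generated malware dropper (illustrative)",
    "pattern": "[file:hashes.'SHA-256' = "
               "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialize for submission to a sharing community (for example, an ISAC feed).
print(json.dumps(indicator, indent=2))
```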

Defensive Capacity

AI-enabled attackers will target a wide range of assets. As a result, a number of organizations may lag behind in responding to attacks due to their inability to procure new, sophisticated, and costly AI-enabled defenses. It’s wise to think about how to build the capacity you need to support and defend your part of the digital ecosystem, as well as how to contribute to collaborative defense efforts. 

Secure, Defensible Algorithms

Verifying the integrity of your organization’s AI algorithms is imperative when leveraging AI technology. You may need to confirm that algorithms are sound and unbiased, in addition to ensuring that they haven’t been subverted by attackers. Here are some security principles for AI that your organization can implement to support the development of secure, defensible algorithms:

  • Secure design: You may need to harden AI systems against adversarial manipulation and disruption techniques.
  • Leadership awareness: It’s wise to empower your organization’s leadership with the necessary knowledge about your AI systems so they may make effective risk-based decisions.
  • Lifecycle management: You may need to vet algorithms rigorously and dynamically, and implement version control for your AI models (a minimal integrity-check sketch follows this list).
  • Incident management: Those responsible for the outputs of AI algorithms will need to detect when algorithms have been manipulated, mitigate their impacts through incident response, and recover to a state of algorithmic integrity.
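
As a concrete example spanning lifecycle and incident management, here’s a minimal sketch that records a SHA-256 hash when a vetted model version is released and verifies the artifact before each deployment. The file names and manifest format are illustrative assumptions, not a standard.

```python
# A minimal sketch of lifecycle integrity checking: hash a model artifact at
# release time, then verify it before deployment. File names and the manifest
# format are illustrative, not any specific product's convention.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_release(model_path: Path, manifest_path: Path, version: str) -> None:
    """Store the artifact's hash when a vetted model version is released."""
    manifest = {"version": version, "sha256": sha256_of(model_path)}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_before_deploy(model_path: Path, manifest_path: Path) -> bool:
    """Fail closed if the artifact no longer matches the vetted release."""
    manifest = json.loads(manifest_path.read_text())
    return sha256_of(model_path) == manifest["sha256"]

if __name__ == "__main__":
    model, manifest = Path("model.bin"), Path("model.manifest.json")
    model.write_bytes(b"example model weights")   # stand-in artifact
    record_release(model, manifest, version="1.0.0")
    print("integrity ok:", verify_before_deploy(model, manifest))
```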

Sum It Up

In this unit, you’ve been introduced to the growing intelligence of autonomous machines. You’ve learned more about what AI is, and its applications. You’ve also discovered some of the challenges associated with the shifting attacker-defender balance, and suggested actions to address them. 

In the next unit, you learn about the development of quantum computers and the impact this could have on security. You also learn actions to address the challenges associated with quantum computing. Let’s go!
