
AI Expected to Magnify the Complexity of Cybersecurity Challenges, Experts Say

By: Daniel Cohen

In just the past several years, nearly every industry has endeavored to harness the power of artificial intelligence. But while the field offers tremendous promise for society, it is still in its infancy, and researchers are striving to manage the risks the new technology poses.

Keeping up with the rapid evolution of AI technology, for example, remains an overriding challenge for scientists. Overcoming that hurdle and learning to trust the output of fast-advancing AI models requires developers to demonstrate that their systems are transparent, reliable, and unbiased, among other attributes. When AI is applied to cybersecurity, another rapidly evolving field, to automate decision-making in defense of critical systems, it raises concerns about the reliability and transparency of the deployed models and about the precautions taken to protect the personal data used to train them.

At a panel recently hosted by Northeastern University’s Khoury College of Computer Sciences at its Arlington campus, experts discussed emerging issues related to AI and its use in cybersecurity. The conversation covered a range of topics, including how to build trust in AI systems, how to secure AI models against adversarial attacks, the benefits and risks of using AI to enhance cybersecurity, and the threats AI poses in cybersecurity applications and how to mitigate them.

Even at this juncture, experts don’t fully comprehend AI, said Dr. Elizabeth Hawthorne, professor and director of cybersecurity programs for Northeastern’s Arlington campus, who moderated the session. Researchers continue to grapple with a host of issues raised when applying this nascent technology to the cat-and-mouse dynamic associated with defensive and offensive cybersecurity, she said.

“It’s a constant learning game,” Hawthorne said.

To date, AI systems have typically been black boxes, making it difficult for the public to trust the technology’s outputs. Establishing trust will require people to understand the rationale behind models’ algorithms and to gain confidence that the systems are robust and reliable, safe for public use, secure from tampering, able to protect personal data, accountable, fair, and free from bias.

Achieving some of these characteristics of trustworthiness involves tradeoffs with others, however. As a result, it may not be possible to develop a model that satisfies all of the attributes, said Dr. Apostol Vassilev, research manager for the Computer Security Division at the National Institute of Standards and Technology (NIST).

Applying AI to the realm of cybersecurity offers an instant advantage to potential attackers, Vassilev said. Armed with the offensive capabilities of AI tools, hackers no longer need to be experts in detecting and exploiting vulnerabilities in computer systems to carry out a successful attack, he explained. By lowering the cost of launching an attack, AI “changes the economics of the fight for cybersecurity,” Vassilev said. “[It] is going to revolutionize the cybersecurity space,” he added.

Dr. Alina Oprea, a professor at Northeastern’s Khoury College in Boston, highlighted AI’s ability to detect cybersecurity attacks but also emphasized the risks of relying on the technology to defend against them. “There are many false positives. That is the number one challenge,” she said.

“If you’re going to use AI as a defensive component, it could become the weakest point in the system,” Oprea said.

Oprea and Vassilev are co-authors of a new NIST report intended to give developers and others who work with AI systems a better understanding of the types of adversarial attacks those systems are vulnerable to, along with associated mitigation strategies. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, which covers both predictive and generative AI systems, describes the primary threats these models face, including:

  • “evasion” attacks, which occur after a system is deployed and attempt to alter its inputs to adversely affect its performance (a toy example follows this list);
  • “poisoning” attacks, which manipulate the training data or underlying algorithms used by a model;
  • “privacy” attacks, which prompt a deployed system to “leak” sensitive information about the AI or its training data; and
  • “abuse” attacks, which attempt to corrupt legitimate sources, such as a webpage, with incorrect information that the model will learn from.
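
To make the taxonomy concrete, the sketch below illustrates the evasion case. It is a hypothetical illustration, not an excerpt from the NIST report or from any real system: the linear classifier, its made-up weights, and the gradient-sign perturbation (in the style of the well-known fast gradient sign method) are all assumptions chosen for clarity.

```python
# Illustrative sketch only, not code from the NIST report: a gradient-sign
# "evasion" attack against a toy linear (logistic-regression) classifier.
# The attacker nudges a deployed model's input just enough to flip its
# decision, which is the post-deployment scenario the taxonomy describes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained model: fixed weights and bias for a binary classifier.
w = rng.normal(size=20)
b = 0.1

def score(x):
    """Probability the model assigns to the 'malicious' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the deployed model confidently flags as malicious.
x = 0.3 * np.sign(w) + 0.05 * rng.normal(size=20)

# Evasion step: shift every feature by a small amount epsilon in the
# direction that lowers the model's score (the sign of the gradient of the
# score with respect to the input, which for a linear model is sign(w)).
epsilon = 0.5
x_adversarial = x - epsilon * np.sign(w)

print(f"score before evasion: {score(x):.3f}")             # close to 1.0
print(f"score after evasion:  {score(x_adversarial):.3f}")  # much lower
```

The same idea scales, at least in principle, to real models: an attacker uses a model’s gradients, or estimates of them, to find small input changes that flip its output, which is one reason Oprea cautioned that an AI-based defensive component can itself become the weakest point in a system.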

“Such attacks have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report states.

By providing a framework to evaluate the security of AI systems, the authors hope the report allows stakeholders to assess whether the risks of relying on such systems are outweighed by the benefits, Vassilev said.

Dr. Alvaro Velasquez, program manager for the Information Innovation Office at the Defense Advanced Research Projects Agency, warned that AI developers should not become overconfident in their work simply because a model performs well. He stressed that “neural networks are very black boxy [so] it’s difficult to explain what is going on inside.” The models are also vulnerable to a variety of attacks. When ChatGPT was first released, for example, researchers were able to bypass the large language model’s guardrails and prompt it to reveal the steps for constructing a bomb. It took its developers a year to fix the problem.

“When we adopt tools with the empirical confidence that they are trustworthy, that does not mean that they are,” Velasquez said.

He advised the students attending the discussion not to become complacent. “There will always be more sophisticated attacks, there will always be more sophisticated defenses. The silver lining — it keeps the field exciting.”

