How AI helps—and hurts—cybersecurity

When it comes to cybersecurity, AI is a double-edged sword.

On one hand, it can be used to find weak points and identify bugs, especially if it is well-trained. On the other, not only can it miss errors, it can be weaponized or tricked by bad actors.

Research led by Lan Zhang, an assistant professor in the School of Informatics, Computing, and Cyber Systems at Northern Arizona University, is examining the various roles AI plays in cybersecurity. Her team’s work ranges from how to leverage AI to enhance cybersecurity, AI’s impact on everyday users and the security risks that come with using AI, and she recently received a grant from the National Science Foundation to study adversarial malware can trick artificial intelligence.

“AI shows promise in enhancing cybersecurity by modeling complex threats and identifying vulnerabilities, but real-world effectiveness depends on precise problem definitions, quality data, human oversight and addressing security risks like adversarial attacks, membership inference attacks and poisoning attacks,” Zhang said.

Enhancing cybersecurity through AI

Researchers have had success adopting mathematical formulas of cybersecurity challenging and then using AI to solve those formulas. Zhang said that in lateral movement attacks, which enable attackers to move through a compromised network after gaining initial access, researchers can model a network as a graph. AI has shown strong potential for detection and defense in these instances.

That’s research, though, not a real attack. Large language models (LLMs) can successfully identify and fix bugs in programs with fewer than 100 lines of code, but as the database gets larger, the LLM has to be taught with much greater specificity. Human oversight is essential.

“Real-world environments are far more complex than controlled scenarios,” Zhang said. “Effective deployment of AI in practice requires more precise problem formulations and algorithms tailored to specific challenges. AI is not a magic solution—it depends on the availability of high-quality training data, well-scoped problem definitions and effective learning methodologies.”

AI in your daily life

Long before ChatGPT, most of us interacted with AI somewhat regularly—talking to Siri and Alexa, using our faces to unlock our smartphones, autopilot in cars. They’re valuable but also pose security risks for users—facial recognition can be fooled, Tesla’s AI can misread altered traffic signals and Siri can be trigged by hidden commands in audio.

“As reliance on AI grows, so do the risks, making security and robustness critical for protecting everyday users,” Zhang said.

Her research is looking into security gaps coming up as people increasingly rely on LLMs for decision-making and tech support.

Security risks in AI systems

There are established algorithmic biases in AI, such as racism or sexism in responses, but security challenges extend beyond that.

“Blind spots can be exploited by adversaries and pose real risks to the general public,” Zhang said. “Our current research aims to identify and understand these AI blind spots, with the long-term goal of building more secure, resilient and robust models that can withstand adversarial manipulations.”

There’s also a risk of jailbreaking LLMs, which refers to manipulating the system into bypassing their built-in safety constraints. Zhang said a model like ChatGPT can be “tricked”—it’s prohibited from offering instructions on building explosives, but attackers can embed malicious prompts within innocuous-looking text to trick the LLM into generating prohibited content. Hackers also use this method in conducting cyberattacks.

Northern Arizona University Logo

Heidi Toth | NAU Communications
(928) 523-8737 | heidi.toth@nau.edu

NAU Communications