Threats, Hopes, & Theories

A.I. and Cybersecurity

The promises and perils of A.I.’s fastest-emerging trends in cybersecurity

 

Eric Wall

WHEN CHATGPT LAUNCHED on November 30, 2022, it became the fastest-growing consumer application in history, reaching one million users in just five days and 100 million in about two months. For perspective, the public Internet took roughly a decade to hit one million users and more than two decades to reach 100 million. Today, ChatGPT is the most widely used A.I. chatbot in the world, woven into everything from personal productivity tools to enterprise workflows. That explosive adoption has been a force multiplier in cybersecurity, both empowering defenders with faster analysis, automation, and intelligence, and arming attackers with equally powerful tools to craft sophisticated threats at unprecedented speed and scale. Here are a few areas where the technology’s trajectory could define the future of both innovation and risk.

THE AUTONOMOUS SOC

A Security Operations Center (SOC) is like the digital version of a 24/7 security control room. Instead of looking at security cameras and building doors, the SOC team monitors an organization’s computer systems. Think of them as the cybersecurity first responders.

Scary prediction: A misconfigured A.I. agent escalates privileges and quarantines production systems in response to a false positive, causing an unintended denial of service across critical infrastructure.

Hopeful prediction: Within five years, most low-to-mid level SOC tasks will be fully automated by A.I., reducing burnout and improving threat response speed.

Eric’s insight:  A.I. is already changing the game in security operations, taking over repetitive SOC tasks like log analysis, alert triage, and even some elements of incident response. That kind of automation can seriously cut down on analyst fatigue and speed up how fast threats get caught. But there’s a flip side: If an A.I. system is misconfigured, it could easily misinterpret normal activity as malicious and shut down critical systems. Imagine that happening in a hospital. As we move closer to fully autonomous defense, it’s worth asking whether we’re building smarter protection or introducing new kinds of risk we won’t fully understand until it’s too late.
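To make that concrete, here is a rough sketch of what a human-gated triage step might look like. The alert fields, thresholds, and the `triage_with_llm()` helper are invented for illustration and not any specific product's API; the point is that containment of high-criticality systems still requires an analyst's approval.

```python
# A minimal sketch of AI-assisted alert triage with a human gate.
# Alert fields, thresholds, and triage_with_llm() are hypothetical
# illustrations, not any specific product's API.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g., "EDR", "firewall", "auth logs"
    description: str         # raw alert text to be triaged
    asset_criticality: int   # 1 (lab VM) ... 5 (hospital EHR server)

def triage_with_llm(alert: Alert) -> float:
    """Stand-in for an LLM call that scores how likely the alert is a
    true positive (0.0 to 1.0). Stubbed with a fixed value here."""
    return 0.42

def handle(alert: Alert) -> str:
    score = triage_with_llm(alert)
    if score < 0.2:
        return "auto-close: likely benign"
    # Never quarantine high-criticality systems automatically;
    # containment there always goes to a human analyst.
    if score > 0.8 and alert.asset_criticality <= 2:
        return "auto-contain: isolate low-value host"
    return "escalate: analyst review required"

print(handle(Alert("EDR", "unusual PowerShell spawn from winword.exe", 5)))
```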

SYNTHETIC IDENTITIES

Identities in cybersecurity refer to the unique digital representations of people, devices, applications, or services within a network. Just like a passport proves who you are in the real world, digital identities help systems verify who or what is trying to access data or perform actions.

Scary prediction: Synthetic, A.I.-generated identities become indistinguishable from real ones, used by nation-states and threat actors to infiltrate organizations, spread disinformation, and undermine trust on a massive scale.

Hopeful prediction: A.I.-driven identity verification becomes nearly foolproof, ending phishing as we know it.

Eric’s insight: I’ve been saying for a while now that identity is the new network edge, and with the rise of generative A.I., that edge is under siege from both sides. In an era where seeing is no longer believing, identity becomes both the first line of defense—and the most dangerous point of attack.

WEAPONIZED LLMS

Large Language Models (LLMs) are advanced A.I. systems trained on vast amounts of text to understand and generate human-like language. LLMs power tools like ChatGPT and Copilot and can assist with everything from writing and coding to analyzing data and automating workflows.

Scary prediction: A rogue open-source LLM is fine-tuned for cybercrime, offering nearly anyone step-by-step guidance to bypass a target’s defenses and deploy ransomware in minutes.

Hopeful prediction: Security researchers harness LLMs to rapidly reverse-engineer malware and auto-generate mitigation code before exploit code is released.

Eric’s insight:  LLMs are quickly becoming powerful tools in cybersecurity—capable of helping defenders or empowering attackers. Researchers are already leveraging them to analyze malware faster, generate defensive code, and anticipate threats before they strike. But a publicly available LLM, fine-tuned for malicious purposes, could easily walk someone with little to no technical background through bypassing security controls and deploying ransomware. As these tools become more accessible, the bigger question isn’t just how they’ll be used, but who decides what knowledge they contain and how far it should go.

DATA AS A BATTLEFIELD

Data isn’t just an asset; it’s a target. As A.I. systems and LLMs increasingly rely on massive datasets to learn and make decisions, the integrity of that data becomes a critical security concern.

Scary prediction: Nation-states begin injecting poisoned training data into public A.I. models, subtly influencing global algorithmic decision making.

Hopeful prediction: A.I. allows organizations to identify and eliminate toxic data sets, enabling privacy-by-design at scale.

Eric’s insight:  It’s not hard to imagine a scenario where nation-states quietly poison public datasets, inserting subtle errors or biases that gradually influence how global models behave. The effects might not be visible right away, but over time, they could impact everything from medical diagnoses to financial forecasting. On the flip side, A.I. is helping us spot and clean up bad data more effectively, paving the way for stronger privacy practices and more reliable systems. Going forward, securing data may be just as important as securing networks—because what A.I. learns is only as trustworthy as the data it’s fed.
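As a toy illustration of treating data integrity as a security control, here is a sketch that flags records sitting far from the rest of a dataset before it is used for training. Real poisoning defenses are considerably more involved; the filter, threshold, and sample values below are invented.

```python
# A toy filter that flags records far from the median of a dataset
# before it is used for training. The sample values and threshold are
# invented; real poisoning defenses are considerably more involved.

import statistics

def filter_outliers(values, threshold=3.5):
    """Split values into (kept, flagged) based on distance from the
    median, measured in units of the median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    kept, flagged = [], []
    for v in values:
        (kept if abs(v - med) / mad <= threshold else flagged).append(v)
    return kept, flagged

# A couple of poisoned readings hiding in otherwise normal data.
readings = [98.6, 98.7, 99.1, 98.4, 98.9, 185.0, 97.9, -40.0]
clean, suspect = filter_outliers(readings)
print("kept:", clean)
print("flagged for review:", suspect)
```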

ZERO TRUST = ZERO PEOPLE?

If you haven’t finished your “Buzzword Bingo” card by now, here’s another one to mark off—Zero Trust. While Zero Trust has been a goal for cybersecurity practitioners for a long time, it’s pretty hard to achieve in the real world. The most secure network is the one that’s worst for business, and the network that’s best for business is the least secure one. A.I. may be able to help make Zero Trust work better by continuously analyzing behavior and adjusting access in real time, reducing the risk of breaches. But relying on it too heavily might increase the risk rather than reduce it.

Scary prediction: Over-reliance on A.I. policy enforcement leads to black-box security decisions that no one understands—or is able to override during an emergency.

Hopeful prediction: A.I.-enforced Zero Trust architectures adapt in real time to user behavior, making breaches far less likely.

Eric’s insight: A.I. has the potential to take Zero Trust to the next level by enabling systems that adjust access permissions in real time based on user behavior, context, and risk signals. That kind of dynamic control could make it much harder for attackers to move laterally or exploit compromised credentials. But as these systems become more complex and automated, there’s a growing concern about transparency. If access decisions are made by A.I. models that even security teams can’t fully interpret or override when something goes wrong, we may be trading one risk for another. The tech is promising, but it needs to be paired with clear guardrails and human oversight to ensure it remains a help, not a hazard.
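Here is a simplified sketch of what risk-adaptive access with an explicit human override path might look like. The signals, weights, and thresholds are invented; a real Zero Trust policy engine would draw them from identity, device, and behavioral telemetry.

```python
# A simplified sketch of risk-adaptive access control with an explicit
# override path. Signals, weights, and thresholds are invented; a real
# Zero Trust policy engine would pull them from live telemetry.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    new_device: bool
    unusual_location: bool
    off_hours: bool
    resource_sensitivity: int  # 1 (public wiki) ... 5 (payroll database)

def risk_score(req: AccessRequest) -> int:
    score = req.resource_sensitivity
    score += 3 if req.new_device else 0
    score += 2 if req.unusual_location else 0
    score += 1 if req.off_hours else 0
    return score

def decide(req: AccessRequest) -> dict:
    score = risk_score(req)
    if score <= 4:
        decision = "allow"
    elif score <= 7:
        decision = "allow_with_mfa"
    else:
        decision = "deny_pending_review"  # a human can still override
    # Return the signals alongside the decision so security teams can
    # explain, audit, and overrule what the policy engine did.
    return {"decision": decision, "score": score, "signals": vars(req)}

print(decide(AccessRequest("jdoe", True, True, False, 5)))
```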

A.I.-AUGMENTED THREAT ACTORS

Just like there are no bullets that only work in good guys’ guns, the A.I. tools that defenders use to detect threats, analyze malware, and automate response can also be used by attackers to scale their operations, craft smarter attacks, and evade detection.

Scary prediction: Nation-state Advanced Persistent Threats (APTs) adopt A.I. co-pilots that evolve malware autonomously during an attack, outpacing even the best defense teams.

Hopeful prediction: Cyber defense becomes predictive, not reactive, thanks to A.I.’s ability to model attacker behavior before it happens.

Eric’s insight: APT groups could soon use A.I. co-pilots to tweak malware on the fly, respond to defenses in real time, and overwhelm even the most experienced security teams. At the same time, defenders are using A.I. to model attacker behavior, map out likely breach paths, and spot unusual activity before an attack even starts. It’s no longer just people battling it out; it’s machine against machine. In that kind of fight, the edge will go to whoever has better data, smarter integration, and more adaptable models.

CYBERSECURITY TALENT SHIFT

The landscape of how we build and support the cyber workforce is changing. A.I. is helping close the skills gap by enabling junior analysts to take on tasks that once required more experience, but we can already see that this is a slippery slope.

Scary prediction: Entry-level roles vanish, leaving no pathway for the next generation of defenders to develop hands-on experience.

Hopeful prediction: A.I. bridges the cybersecurity skills gap by enabling junior analysts to operate at senior levels through intelligent assistants.

Eric’s insight: A.I. could help level the playing field in cybersecurity by equipping junior analysts with smart tools that let them punch above their weight. Microsoft, for example, tackles this by having senior techs handle unique or complex issues and document their solutions in a helpdesk LLM. The next time that problem pops up, a junior analyst can step in, follow the guidance, and fix it—effectively closing the skills gap on the fly. But as A.I. takes over more entry-level tasks, we risk eliminating the very roles where people gain hands-on experience. It’s not just about boosting capabilities—it’s about making sure there’s still a path for the next generation to get into the field in the first place.

THE “DEEP FAKE” INSIDER THREAT

We’ve all heard about the email scams where someone (usually new to the organization and eager to please the boss) receives an urgent email that purports to be from their CEO requesting the code for an Apple Gift Card for some special function. Now, imagine it’s not an email but a phone or video call. It’s not that far-fetched…

Scary prediction: A.I.-generated deepfakes of C-level executives are used to socially engineer high-stakes wire transfers or data disclosures—and they work.

Hopeful prediction: A.I. tools become reliable at detecting manipulated audio/video and authenticating communications.

Eric’s insight: With A.I.-generated audio and video becoming more convincing by the day, deepfake-driven scams are shifting from hypothetical to highly plausible. Imagine a CFO getting what looks like a real-time video call from their CEO, urgently asking for a wire transfer—only it’s not real, and now the money’s gone. On the upside, detection tools that can flag manipulated media are improving fast, giving organizations a better shot at catching these attacks before it’s too late. It also helps to put strong processes in place, like using a vendor management platform that sits between your AP system and the bank. Ideally, it should include fraud indemnification and require that any changes to payment info go through verified channels. When you can’t trust what you see or hear, the best defense is controlling how and where decisions get made.
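As a rough illustration of that "verified channels" idea, here is a sketch of a banking-change workflow that only accepts a callback to a number already on file and requires a second approver. The vendor names, numbers, and fields are hypothetical.

```python
# A sketch of the "verified channels" control: payment-info changes are
# applied only after a callback to a number already on file and a second
# approval. Vendor names, numbers, and fields are hypothetical.

KNOWN_VENDOR_PHONES = {"acme-corp": "+1-555-0100"}  # captured at onboarding

def apply_banking_change(vendor_id: str, new_account: str,
                         callback_number_used: str, second_approver: str) -> str:
    on_file = KNOWN_VENDOR_PHONES.get(vendor_id)
    # Never trust contact details supplied in the request itself.
    if on_file is None or callback_number_used != on_file:
        return "rejected: callback must use the number already on file"
    if not second_approver:
        return "rejected: dual approval required for banking changes"
    return f"approved: {vendor_id} payment account updated to {new_account}"

print(apply_banking_change("acme-corp", "****6789", "+1-555-0100", "controller"))
```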

SECURITY AS CODE—AND AS LIABILITY

A.I. is getting better at helping teams build security into their code, reducing the number of vulnerabilities in the software they ship. But it’s not perfect, so what happens if it introduces a zero-day vulnerability (a security flaw that’s unknown to the vendor or developer, meaning there are zero days of warning or preparation before it can be exploited) into code that is reused across thousands of apps?

Scary prediction: A single A.I.-generated code snippet reused across thousands of systems introduces a zero-day vulnerability at unprecedented scale.

Hopeful prediction: A.I.-integrated DevSecOps (development, security, operations) pipelines enforce security from day one, identifying vulnerabilities before code is ever deployed.

Eric’s insight:  A.I. is reshaping how software gets built, making it possible to catch security flaws earlier by baking protections right into the DevSecOps process. But the speed and efficiency A.I. provides also come with risks. As more teams lean into A.I.-assisted development, it’s not enough to focus on automation alone; we also need to know where code comes from, how it’s validated, and who’s responsible when something goes wrong. At this point, securing your applications might start with securing your A.I. tools.
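One way to picture that guardrail is a simple pre-deployment gate: block a release when the scanner reports high-severity findings, or when A.I.-generated code lacks a human review sign-off. The findings format and provenance fields below are invented for illustration; a real pipeline would integrate an actual scanning tool.

```python
# A sketch of a pre-deployment gate: block the release if the scanner
# reports high-severity findings, or if A.I.-generated code has no human
# review sign-off. The findings format and provenance fields are invented;
# a real pipeline would integrate an actual SAST/SCA tool.

def gate(findings: list, provenance: dict) -> tuple:
    high = [f for f in findings if f.get("severity") == "high"]
    if high:
        return False, f"blocked: {len(high)} high-severity finding(s)"
    if provenance.get("ai_generated") and not provenance.get("reviewed_by"):
        return False, "blocked: A.I.-generated code needs a human review sign-off"
    return True, "ok to deploy"

findings = [{"id": "SQLI-1", "severity": "high", "file": "billing/query.py"}]
provenance = {"ai_generated": True, "reviewed_by": None}
print(gate(findings, provenance))
```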
____________________________________
Eric Wall is Chief Information Security Officer for the University of Arkansas System.
