IN TODAY’S WORLD, there’s no more valuable aspect of technology than cybersecurity. Whether you’re running a business or running for office, the sanctity of your innermost secrets—whether data, documents, or internal communications—is of supreme importance. Protecting your information from hackers of all stripes and purposes was hard enough—and now A.I. has crashed the party. We asked Philip Huff, the cybersecurity guru at U.A. Little Rock, to tell us what this means in the eternal battle between the good guys (us) and the bad guys (them).
———————————————————
How is A.I. going to affect cybersecurity?
There are two pathways to consider. The first is automation. Cybersecurity is an attack-and-defense game, and both the adversary—the attacker—and the defender are going to have a lot more tools available. I think A.I. benefits the defender more than the adversary, because the defender's already in a position of extreme overload. They have too many systems and too much data to really know how best to defend their systems day to day. So just the advancement in automation—being able to use A.I. to sort through the millions of combinations of threats and vulnerabilities in how someone might attack your system, and either autonomously configure the system securely (which I think is a ways off) or work with human operators to make decisions in real time that would optimally defend their systems—that's what I think is most important for the defenders.
Now look at it from the adversary's point of view. You can pull off an attack if you have the skills to identify novel methods of attack. But again, I think defenders are given a more significant advantage, because adversaries have been very creative in their attacks for a long time. Now, though, defenders have a tool that will help them process all of the data that's currently overwhelming them.
Well, the defenders at least have an idea of what they have to defend, whereas the attackers are trying to find their opening wherever they can.
Yes, the defenders always have a lot of weaknesses in their systems, and new weaknesses will be introduced pretty continuously. So it's really a game of: how do I fix those weaknesses, or mitigate them, in a way that's responsive to the adversaries and their evolving skills, resources, and techniques? That's what's currently an unknown. A.I. can help with that—it can process all that information and help come up with those decisions.
I’m assuming you’re bringing A.I. into your curriculum a lot more?
Yes. There's an international movement in that direction, and there's definitely a national movement with the NSA and DHS to look at university curricula and how we incorporate A.I. into cybersecurity. In our curriculum, A.I. is now being introduced much earlier.
The other big way that we teach students about A.I. and cybersecurity is that this is all part of a bigger system. Organizations operate their own systems. They may operate a manufacturing system, or a financial system, or a retail system—whatever it is, we teach that systems are a combination of people, processes, and technology. You have whatever technology you have, and the way the people interact with that technology, through defined processes, is what makes the system perform its function.
What's changed is that A.I. doesn't really belong in the category of "people, processes, and technology." A.I. is an agent that isn't going to act deterministically the way a computer would. By that, I mean it's non-deterministic, unlike what we're used to, where we always get the same output from the same input. A.I. is different—it's going to make decisions of its own, and people are now building those A.I. decisions into their system processes, maybe with humans interacting with them, maybe not. So how do we understand when we can or can't trust what the A.I. is doing?
Security is about trust—but not just about trusting, because you can trust anything, rightly or wrongly. The question now is, how can we ensure that the A.I. agent is trustworthy, so that we can understand what attacks could be mounted against the A.I. that would impact the system? I mean, cybersecurity is still about system impact—how does a cyberattack impact the function that a system is performing? And now you add this A.I. component, which means we have to redefine a lot of our terminology.
Are you saying that A.I. and technology are two different things?
A.I. is technology, of course—it's built on the technology we've known. But technology as we have known it always produces the same output from the same input. So when we think of technology "pre-LLM," if you will, you have servers and workstations and routers and networking equipment, and they all operate deterministically—their code is going to behave consistently. But then you have these A.I. agents that are intended to replace and improve on human performance. That's not really technology in the sense of what we've known for the past several decades. And it's not human, in the sense that it's not consciously operating the process—but it is involved in the process, and it's something that can be attacked and that can impact the system function.
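The deterministic-versus-agent distinction above can be sketched in a few lines of Python. This is an illustrative toy, not anything from the interview: a conventional lookup always maps the same input to the same output, while a sampling step (standing in, very loosely, for an A.I. agent's probabilistic output) can return different answers to identical input.

```python
import random

def deterministic_lookup(query: str) -> str:
    # Conventional software: the same input always yields the same output.
    table = {"port 22 open": "flag for review"}
    return table.get(query, "no rule")

def sampled_response(query: str, rng: random.Random) -> str:
    # Toy stand-in for an A.I. agent: the output is drawn from a
    # distribution, so identical inputs can produce different answers.
    options = ["flag for review", "ignore", "escalate to analyst"]
    return rng.choice(options)

# The lookup is reproducible; the sampled response, in general, is not.
assert deterministic_lookup("port 22 open") == deterministic_lookup("port 22 open")
```

The point of the toy is the contract, not the code: you can audit the lookup table once and trust it forever, whereas the sampled component has to be evaluated statistically, which is the shift in trust the interview describes.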
You mentioned the NSA and DHS and the movement afoot to teach A.I. within cybersecurity. Would you say that’s an urgent movement?
A.I. has been a factor in cybersecurity since the emergence of commercial Large Language Models in late 2022, and I would say that the technology that wins today is the technology that gets A.I. embedded in its tools. So from our students’ standpoint, when they come out of a college program learning cybersecurity, they’re going to go into an organization that’s using tools augmented with A.I.
The speed at which our students have to adapt and understand all this is just incredible. Students who entered a cybersecurity program in 2021 are graduating in 2025, and the world has shifted for the organizations that they’re going to be entering. These organizations have A.I. in their business processes too, analyzing finances, performing legal analysis, and on and on. So it’s important for our students—these organizations’ future employees—to understand what’s trustworthy and what’s not.
It’s almost like they need an extra year in school.
Well, fortunately they're on the same playing field as everybody in the business. Everybody's struggling to catch up with this and figure it out. There are a lot of interesting policies out there on A.I. right now, and we're definitely trying to change our education rapidly to keep up with the innovation.