Nitin Agarwal, Ph.D.
University of Arkansas at Little Rock
AS THE FOUNDING DIRECTOR of UA Little Rock's Collaboratorium for Social Media and Online Behavioral Studies (COSMOS), Prof. Nitin Agarwal spearheads a team funded by such entities as the U.S. Department of Defense and the National Science Foundation to keep tabs on all things social media, especially when social media turns anti-social. In one of COSMOS' latest reports, we learned that the military is now categorizing certain A.I.-enhanced social media tactics as the "newest war-fighting domain." It's called "cognitive warfare," and we asked Prof. Agarwal to tell us all about it.
_____________________________________
Tell me when, why, and how what you call “weaponized media” became known as “cognitive warfare.”
Cognitive warfare took its current form over the last few years, but its evolution spans more than a decade. Initially, attempting to convert someone to your way of thinking via social media was lumped under such terms as "information warfare," "info ops," or "influence ops," and, because cyber or digital platforms were the medium for conducting such tactics, they were treated as cyber security issues.
Over time, however, it became apparent that this wasn't a suitable label, because cyber security issues typically involve things like viruses, malware, attacks on computers or servers, and disrupted operations. These "info ops" are instead about occupying your cognitive space and influencing beliefs and behaviors, without touching computers and without infecting anything with viruses or malware. It's more about hacking brains than hacking computers.
So it’s going straight through the computer to the operator of the computer.
Right, and these tactics aren't even new. When you look at military doctrines, you find a rich body of literature on information operations. But when such operations are conducted beyond military confines, in public spaces like social media platforms or general digital communication platforms, they clearly deserve a new categorization. So that's when the U.S. Department of Defense, as well as NATO, began recognizing cognitive warfare as the newest war-fighting domain, after land, sea, air, space, and cyber. It deserved this new categorization because the existing domains came with preset notions of how they were handled and how adversaries operated within them, and this is a completely new game. We needed to think in new ways in order to tackle adversaries, their tactics, and their operations in the cognitive domain.
I’m a little confused—are we talking about just a military designation or could two groups of civilians be waging cognitive warfare?
Absolutely, they can. The recent India-Pakistan conflict provides an example of cognitive warfare in the military domain. Several false narratives were spread, trying to persuade populations on both sides that the other side was the aggressor, the one about to take everyone to the brink of nuclear war, et cetera. Although the conflict itself was kinetic, all sorts of public social media platforms and digital communication tools were used to persuade individuals to think a certain way. Such tactics were considered narrative ops.
But this kind of operation can also be waged outside of military confines, in public spaces, in non-kinetic, non-combative settings. Advertising agencies have been doing this very thing all along: influencing our buying behaviors through persuasive messaging.
So, in that sense, this isn't all that remote from what we're already familiar with. But it's becoming more dangerous, especially when multi-domain operations are conducted around divisive topics in a society. An adversary who wants to wage non-combative, non-kinetic operations (e.g., narrative ops) will favor this type of threat tactic, especially given its low (to zero) cost and high impact. Our research in the Indo-Pacific region provides several examples of cognitive threats, where anti-American narratives are spread to advance Chinese strategic interests and coerce nations into debt traps under the guise of economic advancement and infrastructure development projects.
A year or so ago we talked about A.I. and deepfakes, but this seems to be beyond that. How do they compare?
A.I. acts as a force multiplier for any cognitive war tactic, including bots and deepfakes. A.I.-driven analytics can be used to understand and target a particular demographic, because different messages resonate differently with different demographics. We're producing tons of data: videos, images, text, audio, et cetera. With advanced A.I. capabilities, you can now process all of that data to understand what issues a community or a society cares about, what that community's fault lines are, and how to exploit those fault lines.
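To make the "understand what a community cares about" step concrete, here is a minimal sketch of automated theme discovery over a handful of posts. It assumes the scikit-learn library; the corpus, parameters, and themes are invented for illustration and are not the analytics pipeline Prof. Agarwal describes.

```python
# Minimal sketch: surfacing the themes a community talks about from its posts.
# Toy corpus; not the demographic-targeting analytics described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

posts = [
    "Fuel prices keep climbing while wages stay flat",
    "The new highway project will cut through our neighborhood",
    "Cost of living is out of control, groceries have doubled",
    "The highway plan ignores local residents entirely",
    "Clinic closures leave the county without urgent care",
    "No doctors left in town since the hospital shut down",
]

# Weight words by how distinctive they are within the corpus (TF-IDF)...
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# ...then factor that matrix into a few latent "themes" (NMF).
model = NMF(n_components=3, random_state=0)
model.fit(X)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(model.components_):
    top = [terms[j] for j in component.argsort()[-4:][::-1]]
    print(f"theme {i}: {', '.join(top)}")
```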
So besides just understanding how it’s done, how can you combat this?
That involves a multi-part solution. The first part is certainly recognizing the types of cognitive threats that are out there and assessing their impact. The second part is finding out who is doing it; attribution is the second issue, because without attribution, it's very difficult to understand the intent behind such acts. The third part, and I think the key to the solution, is community resiliency. These types of threats or attacks aren't going to go away, just as the other five war-fighting domains aren't going away. We have to shore up our defenses, which means the community should be more vigilant and more aware of the common tactics being used, whether it's deepfakes or bots or some other carrier of divisive, toxic, or hateful speech. Finally, we need to understand which populations are more vulnerable, so that we can provide targeted interventions and training for our communities, the folks who are subject to all this information from social media and other digital platforms.
At COSMOS, we are developing A.I.-inspired technologies for all of these parts. I call it fighting fire with fire. Our solutions help (a) identify cognitive threats and measure how strongly their impact would be felt in the community, (b) leverage social cyber forensic methodologies for probabilistic attribution, and (c) provide computational assessments of vulnerable populations and design targeted interventions. These solutions require an interdisciplinary blend of approaches, including A.I., machine learning, social science, cognitive science, cyber forensics, and contagion modeling, among others. We publish findings from our studies extensively; they are available through my website, https://agarwalnitin.com/.
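As a rough illustration of what "probabilistic attribution" can mean, the sketch below runs a simple Bayesian update over hypothetical actors and forensic cues. The actors, cues, and likelihood numbers are all invented; this is not COSMOS's social cyber forensics methodology, just the underlying probabilistic idea.

```python
# Minimal sketch of probabilistic attribution as a Bayesian update.
# All actors, cues, and numbers are invented for illustration only.

priors = {"actor_A": 0.3, "actor_B": 0.3, "unknown": 0.4}

# P(observed forensic cue | actor): e.g., a posting-time fingerprint or
# shared URL-shortener infrastructure. Hypothetical values throughout.
likelihoods = {
    "off_hours_posting": {"actor_A": 0.8, "actor_B": 0.2, "unknown": 0.5},
    "shared_infrastructure": {"actor_A": 0.7, "actor_B": 0.1, "unknown": 0.2},
}

def update(posterior, cue):
    """Bayes rule: scale each hypothesis by the cue's likelihood, renormalize."""
    scaled = {actor: p * likelihoods[cue][actor] for actor, p in posterior.items()}
    total = sum(scaled.values())
    return {actor: v / total for actor, v in scaled.items()}

posterior = dict(priors)
for cue in ["off_hours_posting", "shared_infrastructure"]:
    posterior = update(posterior, cue)

print(posterior)  # probability mass shifts toward the best-supported actor
```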
You say that one of the goals is to get the communities to be aware and to be vigilant. Unfortunately, as divided as we are today, everybody might be vigilant but pulling in opposite directions. What do we do about that?
A.I. also has a role to play there, because most of our content is curated for us by the platforms and the A.I. that underlies them. These programs can be manipulated by adversaries to show certain types of content to different people, groups, or even whole societies. That "algorithmic manipulation" is well studied in our research and is also known as "algorithmic warfare." Awareness, A.I. literacy, media literacy, and Internet literacy in general are much needed for people of all ages in our society.
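One simplified way to see the mechanism: a feed that ranks posts by predicted engagement will keep surfacing whatever reliably provokes clicks, which is exactly the lever an adversary pushes on. The sketch below uses an invented stand-in for an engagement model; real recommender systems are vastly more complex.

```python
# Minimal sketch of why engagement-optimized ranking can be gamed:
# if outrage reliably earns clicks, a feed sorted by predicted
# engagement keeps surfacing it. All scores here are invented.

posts = [
    {"text": "Local library extends weekend hours", "outrage": 0.1},
    {"text": "THEY are coming for your way of life", "outrage": 0.9},
    {"text": "City council passes budget 7-2",       "outrage": 0.2},
]

def predicted_engagement(post):
    # Stand-in for a learned model; assumes outrage drives clicks.
    return 0.2 + 0.8 * post["outrage"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post['text']}")
```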
Well, you and your group have been doing all kinds of research for a long time now, but this seems to be getting harder and harder to deal with.
There are several pain points for us. One is access to the data, because much of this data lies behind the walled gardens of social media platforms, and within those confines we have very limited visibility. I believe we all need to come together so the platforms can see how researchers like us are devising solutions to these known problems; not as a critical analysis of the platforms, but as an effort to reach out and help them solve the problem. Making the data more accessible to researchers would certainly help ease that pain point.
The second pain point, which is compounded by the first, is the sheer amount of data out there these days. Even with that limited access, we are drinking from a fire hose. We therefore need faster, more advanced information processing to digest all this multimedia data: images, videos, short videos, reels, memes, networks, text, audio files, et cetera. With smart, scalable information processing techniques, we can quickly sift the signal from the noise and provide meaningful insights to policymakers.
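As a toy example of sifting signal from noise at stream speed, the sketch below flags a hashtag whose hourly volume bursts far above its running average, one crude marker of a possible coordinated push. The counts and thresholds are invented, and real pipelines are far more elaborate than this.

```python
# Minimal sketch of signal-from-noise on a stream: flag an hour whose
# mention count jumps well above an exponential moving average.
# Counts and thresholds are invented for illustration.

def burst_detector(counts, alpha=0.3, threshold=3.0):
    """Yield (hour, count) when count exceeds threshold x the moving average."""
    avg = None
    for hour, count in enumerate(counts):
        if avg is not None and count > threshold * avg:
            yield hour, count
        avg = count if avg is None else alpha * count + (1 - alpha) * avg

hourly_mentions = [12, 15, 11, 14, 13, 140, 160, 18]  # sudden spike
for hour, count in burst_detector(hourly_mentions):
    print(f"hour {hour}: {count} mentions (possible coordinated burst)")
```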
Another pain point is that very few researchers are currently working in this extremely critical domain, so finding trained, skilled personnel who can work on this problem is also one of the key challenges we face.
I can only imagine. A.I. is really moving fast. If it’s creating these problems now, what’s it going to be like in two years?
You’re absolutely right—A.I. is evolving at an incredible pace, and with that comes both promise and risk. While technology itself is neutral, its impact depends on how we, as a society, choose to use it. Unfortunately, harmful or exploitative uses often emerge quickly because they can be driven by immediate incentives or unchecked behavior. That’s why it’s crucial to stay vigilant, build ethical safeguards early, and invest just as much—if not more—in promoting responsible and beneficial applications of A.I., such that America leads the A.I. race both militarily and otherwise. The next few years will be pivotal in shaping that balance. I have the privilege and honor to serve on the Governor’s A.I. task force for Arkansas, where our lessons learned and research findings will help establish guardrails around safe and productive adoption of A.I.
You’ve always defended social media in that it democratizes speech. But this isn’t exactly democratizing, is it? This is a bad turn, it seems to me.
You’re right—it is a troubling turn, and it highlights the dual nature of social media. When people ask me whether it’s a net positive or net negative, I often say it depends on when and where you look. On one hand, social media has undeniably democratized speech, amplified marginalized voices, and connected people in powerful ways. But on the other hand, we also see manipulation, bias, and abuse thriving on the same platforms. These contradictions make it difficult to give a definitive answer. It’s not just a matter of the technology itself, but how it’s governed, used, and responded to by society.
A.I.—like nuclear energy or the Internet—was developed with tremendous potential for good. But history shows us that once a powerful tool is created, people often find both constructive and destructive ways to use it. The technology amplifies human intent—it doesn’t create it. So, when we talk about things like cognitive warfare, we’re really confronting how human choices shape the trajectory of these tools. A.I. reflects us more than it defines us.
_______________________________
Dr. Nitin Agarwal is the Maulden-Entergy Chair and Donaghey Distinguished Professor of Information Science, as well as the Director of the Collaboratorium for Social Media and Online Behavioral Studies (COSMOS), University of Arkansas at Little Rock. Visit https://cosmos.ualr.edu/ for more details.