
Q & A with Lee Watson: July 2023

A headshot of Lee Watson appears beside an announcement for his Q&A.

ARTIFICIAL INTELLIGENCE PROMISES to be a new perennial conversation, much like the weather: Will our AI days be sunny, or are we doomed to experience the dark, cloudy reality of increased record-breaking turbulence? This month we posed some AI questions to Lee Watson, whose Forge Institute focuses on training and policies in the realm of cybersecurity. In a wide-ranging conversation, he offers considerable encouragement—even as he compares AI to the atomic bomb.

_______________________________

Many people are saying that AI is going to bring extinction to the human race. Is that thought keeping you up at night?

I don’t know if things are quite as dire as that, but I do think we need to be really smart about how we use AI, how we govern AI, how we audit AI’s capabilities. The technology is incredible, and what it’s going to do to improve efficiencies, to cure disease, and to strengthen cybersecurity capabilities is phenomenal and completely transformative to society. I think as we begin to implement these technologies, we need to have serious discussions with industry and government together to be a little bit more thoughtful from a regulatory standpoint than how we’ve approached things like cryptocurrencies. AI is not inherently bad, but it certainly will be used by bad actors.

Just like the Internet.

Well, yes–just like the Internet. Or if we want to go a little darker, we can talk about nuclear energy and the Manhattan Project and building nuclear bombs. Once we did that, we kind of knew what that destructive capability would look like. And then some guy in the military said, “Hey, it’d be really smart if we made a small one of these that you could launch from your shoulder.” I think we need to be really thoughtful about this. We should think about setting up something like the National Nuclear Security Administration and really look at the non-proliferation of AI technologies. It’s one thing for AI to be built and deployed here in the United States. But what happens when our adversaries get hold of this technology? They have different constructs.

We have to be thinking about that level of regulation. But it’s an incredibly exciting technology, completely transformative to all sectors.

How will it transform your field of cybersecurity?

We’re going to see a lot of earlier detection of what the adversary might be trying to do, meaning we can be quicker to mitigate certain vulnerabilities. Having an AI agent running inside your network is going to become your full-time automated defense team, freeing up a lot of personnel to do higher-level risk management than the sort of blocking and tackling they’re doing today. I think that’s really exciting.

The scary question to me is, What are we going to do with our workforce? Because we still have about 3.4 million open cybersecurity jobs in the U.S., and I don’t think anybody’s necessarily put a number together for AI jobs, but I’m going to say it’s a lot higher than 3.4 million. The other way to think about this is, What percent of existing jobs are going to be changed by AI? Are we considering the level of upskilling and reskilling that’ll be needed for such a disruptive transformation?

Broadly speaking, I would say there are three categories of AI jobs and skills needed. One is the user, so that’s the person at the computer asking the AI for better ad copy for marketing campaigns or building efficiency in the news production cycle. Then there’s the integrator, that middle piece, and I’ll come back to that. Finally, there’s the technical person who does the neural network design, building and testing of the models. This is serious computer science, requiring a very, very technical education.

But right now it’s this middle piece that’s really interesting. Now that these AI tools are being deployed, companies need to be able to train these models on their data sets. They need to be able to optimize the AI for specific functions. So this new job is called a Prompt Engineer, and there’s not many of them out there right now. Their average pay is around $350,000 a year.

So this is the in-demand job. These people are the integrators, the fine-tuners, the ones optimizing the AI for a particular business function. At Forge Institute, we’re putting together a Prompt Engineer bootcamp that we’re going to launch this fall.

I take that to mean you’re already thinking very seriously about AI.

Yeah, there are three categories that we’re looking at right now around AI. One is being in discussions with policy makers and industry around how we start to think about the compliance and the regulatory side of this. Everything from mitigating biased results from AI and understanding how the model works to understanding the implication of deep fakes, enhanced phishing campaigns, and election interference—all that kind of stuff. The second category is definitely training. And then the third component is developing AI and large language models and testing them for specific use cases, like automated cyber defenses and threat intelligence sharing and analysis.

In May, Forge announced that we’re partnering with UA Little Rock, UA Fayetteville, the Department of Energy, and Idaho National Lab to research the development of AI capability to enable non-attributable information sharing between different organizations in the electric grid sector. The reason this is important is that in order for us to better defend against the sophisticated adversary, we have to do a better job of sharing information. So it’s a rising-tide kind of defense, and industry has a lot of liability and legal concerns to deal with. But if we can build a capability to do that in a non-attributable way, then that’s a significant capability from a cyber defensive standpoint.

Keeping this stuff out of the hands of bad actors—isn’t that going to be one of the hardest things to do?

A couple of thoughts about that. AI actually predates the Internet by a few decades—I’d have to do a little Google searching to give you the actual date. But AI has been around almost since the first computers were built. Of course, those early systems couldn’t do what today’s can, so it’s only because of chips and efficiency and really the complexity of the neural networks that we’re able to get the interesting results of today.

The problem is that the rate of innovation is changing much faster because we can now deploy AI agents to build better AI agents—and build more connections within that neural network. The limiting factor is really energy consumption at this point, so I actually think that’s a bit of a good thing. I think it’ll take a little while for us to figure out these efficiencies to mitigate the energy consumption problem. And hopefully that buys us some additional time to think about the risks and the regulatory framework for AI.

When you think about non-proliferation, you can create all sorts of stories around how the People’s Republic of China has a fighter jet that looks an awful lot like our F-35—and how they got that information. But the point is, we need to protect the IP behind the AIs much better than what most companies are doing to protect their stuff today. And that’s why I think it really calls for something similar to the National Nuclear Security Administration to ensure the non-proliferation of the technology.

Another way to think about it is, What if Nazi Germany had come out with the atomic bomb first? What would the world look like today?

That’s a frightening thought. I know you spend a lot of time in Washington. Are you currently talking with people there about these kinds of issues?

Yeah, we’re having some discussions there. I think Forge’s role is really more to help facilitate dialogue between the policymakers and the industry leaders that we work with. They’re the ones building and deploying the technology. Different industries have different perspectives, and so I think we need all of that to influence smart regulation at both the state and the federal level.

What do you think your work will be like in five years?

I think we’re going to see a lot of efficiency, an incredible efficiency. An easy example is in marketing. Already, our team uses AI to improve copy for our website or to refine language in a blog post. That plus the visual aspect of generative AI where we can generate photographs and imagery—that’s going to give a lot of power to a lot of people who don’t have those creative or technical skillsets. So that’s going to be pretty interesting.

Education is going to be completely disrupted by AI from a K-12 standpoint. Think about kids having individualized AI tutors that are able to supplement the teacher in the classroom—but do it at the pace of the individual child. That will completely change education, and colleges will have to change and adapt to this too.

I think it’s important, though, that we don’t create a society of just users of AI. We need to be able to audit AI; we need to understand how the model came to the conclusion that it did. An example of this is in determining mortgages. Right now, the CFPB won’t allow you to let an AI make the mortgage decision on its own. It can help loan officers improve efficiencies in the process, but if the AI were to reject a loan application, why did it do that? Did its rejection violate federal statute? So we need to make sure that we have people who can build AI, manage AI, and not just consume AI. It really comes back to a good quality STEM and basic computer science education.

You mentioned earlier something about how AI is going to affect people and jobs. Any more to add on that?

Yeah, I think AI is definitely going to create enough efficiency that some of today’s jobs will be replaced with other jobs. I don’t know that AI means everybody gets laid off, but things are going to change. There are going to be lots of new jobs created. That said, I think our goal should be to use AI as an enabler of the human, increasing the human’s capability, rather than thinking of it as a replacement for the human.