
Q & A with Jennifer Barrett Glasgow

[Headshot of Jennifer Barrett Glasgow]

**EVP Policy/Chief Privacy & Security Officer, First Orion**

AS THE FIRST “Chief Privacy Officer” in the U.S., at Acxiom, Jennifer Barrett Glasgow has been wrestling with questions of data and privacy and innovation longer than most of us have known there were even questions. Now at First Orion, she says that the reason she keeps doing this work is, “I get bored easily—I like a new challenge every week.” We recently sat down with her and asked about the challenges we’re all facing in this brave new tech world we live in.

If you were invited to deliver a “state of privacy” speech, what would be your message?

There are a couple of key messages. Privacy and security go together, and the way we think about them is that security is about who can access the information, and privacy is about how the data is used. Obviously we expect it to be used for a beneficial purpose or in an ethical way. The problem is, the information may be accessed by a legitimate person who may be doing things with it that aren’t particularly ethical. We’ve seen lots of examples of that in the news.

So with that kind of definition in mind, I think we can say a couple of things. First of all, privacy issues are not new. They go all the way back to the Fair Credit Reporting Act from the 1970s. It was in the early ‘90s, and then into the 2000s, that things started really heating up. But what’s happened in the last decade, the 2010s, is that the proliferation of data has just been on an astronomical rise. And a lot of it is data that’s being collected through observation. For example, when you go to a website and order something, you know you’re putting in your name, your address, your phone number, your credit card—you’re in control of that. But what you don’t know are the things that websites or search engines are doing in the background when you use these digital services—things like identifying what pages you look at on a website, how long you stay on, and even things like your location at a particular point in time. All of which become very revealing in ways that you probably wouldn’t expect.

One of the most sensitive pieces of data right now is location data, because as we move into 5G in the telecom space, it’s going to be much more revealing. I’m not talking about which cell tower you’re close to, the way it used to be—I’m talking about knowing that you’re sitting out in one of these parking spots in front of this building. If somebody is collecting location data on you, what are they able to learn? Think about it. They can figure out where your home is because it’s where you are at night or over the evening hours. They can figure out where you work. They can figure out where you shop. They can figure out where you go to the doctor—and from that they can extrapolate if that happens to be a cancer treatment center or an Alzheimer’s specialist. They can then deduce whether you or somebody in your close family has certain medical conditions.

And where might that lead?

We’re still very worried about the discriminatory effect of information, and so we have laws that prohibit using race, gender, marital status, and so on in determining credit eligibility for somebody. But if I know you live in a 95 percent Black neighborhood, I don’t need to know for certain the color of your skin. The point is that data can be surrogates for information that we think we’re protecting by saying you can’t use this or that. In reality, we’re just shifting the method of identification of certain characteristics.

The other kind of micro thing is that data collection is moving at an unbelievable rate, and it’s being done by everybody—your grocery store, your retail outlets, our government. We have video cameras everywhere recording our movements, so almost no place outside of your home is private. And if you have Alexa or another one of those devices, it has a ton of data about what you’re doing when you’re home. And it can be hacked.

So what’s the answer?

We have not resolved this problem anywhere in the world. There’s no country that I could hold up as a model, because what we haven’t figured out is how to identify a limited number of things that are absolutely out of bounds—things whose risks far outweigh any benefits that may come from them. On the other hand, we’ve got to identify all of the things that we should allow because they almost always create overwhelming value. And then, finally, we need to identify this bucket of things in the middle that become somewhat personal choices, like having an Alexa in your home, or not.

Figuring out what goes into each of these three buckets is critical to writing rules that consumers would have to follow. For example, if something is in the “no” bucket, you can’t let consumers say, “Well, I’ll allow that anyway.” And if something is in the “yes” bucket, consumers can’t say, “Well, I’m going to opt out of that.” That would be like saying, “I want to opt out of showing that bankruptcy on my credit report.” Another critical area is medical research—the more people who allow their data to be used, the more valuable that medical research is to all of us. The converse is also true—the fewer people who allow their data, the less valuable that research becomes, to all of us.

Good luck figuring out a plan for the “common good” in today’s world.

I know, but wait—it gets even more complicated. You’ve also got that middle bucket where you ought to give consumers a choice, to opt in or opt out. But what kind of choice? Do you say, “I’m going to gather certain kinds of your data but I’m going to give you the right to tell me to stop”? There are examples of laws that say you can’t ever do this, or you can always do this, but they’re few and far between. And there’s a growing move in the middle by companies and politicians to think—wrongly, in my opinion—that if consumers are given a choice in this middle bucket, they should be in complete control.

It’s a fact that we consumers don’t always make smart decisions for ourselves. There’s a book called *Misbehaving: The Making of Behavioral Economics* by Richard Thaler, about how we make economic decisions and how many of the economic decisions we make are bad for us, but we don’t know it. It’s a whole field of behavioral study. And yet laws around the world have recently been moving toward giving consumers more and more control.

But I’ll tell you, I’ve lived in this data privacy world for 40 years now, and I know enough to know that I don’t want control. First of all, it’s horribly cumbersome. And second, I’m not sure I would fully understand risks versus benefits without so much research that it’s just not worth it. So we’re at a point where in order to take advantage of artificial intelligence, machine learning, all this stuff that’s wildly exciting and can bring huge benefits, we’ve got to figure out how to manage that in a way that minimizes the risk while not completely eliminating it. This is really key, because the politicians would like to eliminate the risk. A lot of people don’t want to recognize that no one is 100 percent safe. If somebody can use my information to defraud me or embarrass me, you know what? We’re never going to stop that kind of thing completely. Can we get it to a manageable level? Yes. Can we give people some kind of recourse if it does happen? Yes. But the only way to eliminate the risk is to lock up all the data, which eliminates the value.

One of the theories is to begin to separate AI, machine learning, and analytics into a “discovery phase,” when data is being used to discover something, and a “deploy phase,” when what’s been learned is actually being applied. An example of this is medical research: In research mode you’re pretty free to use just about anything so long as you’ve gone through an IRB review—that’s the institutional review board that oversees medical research. But once you’ve completed your research, you have to check it for discriminatory effects and do some other things before you can deploy. If it’s a drug, for example, you’ve got to go through the FDA.

I sometimes wonder if we’ve created something that’s too big, that nobody can control anymore…

I don’t think we’ve created anything that’s too big. Any new industry has to figure out how the parties can act in a responsible and ethical way. And to some degree it requires experience and practice to learn what’s a problem and what’s not. We didn’t know when we started collecting location data that it could be as revealing as it can be. We thought we’d use it because you were parked in front of Starbucks and we’d give you a coupon to go in and get half off on a cup of coffee. No one had any idea that location data can help someone deduce that you have cancer. But if you believe it’s gone too far, then we might as well shut down innovation.

Let me throw out another what-if: If you were suddenly to become Chief Policy Officer at Facebook, what would you change?

Actually, I know the woman who held that position—until shortly after the Cambridge Analytica episode. But what would I change? I think it’s time for regulation in social media. Those big companies did put profits in front of ethical practices. It gets hard when you have so much data, and you’re so big, that you’re liable to push the envelope a little too far. With these kinds of macro lessons that we all need to learn, the first people in the ring are typically the ones that are going to get hurt first. To some degree, it’s good, because they paved the way for a little reasonableness to now be injected into that medium.

They’re already starting to talk about that a bit, especially some of the other players. Google and Microsoft were always a little behind Facebook in terms of excess. They were out there, but they weren’t on the bleeding edge. So now even Google executives are calling for some kind of reasonable legislation.

But what this industry hasn’t done is to band together and try to create self-regulation, the way the marketing industry did. I think that’s a mistake. Because if you agree that there’s some risk and danger in something that your industry is doing, you’re far better off getting your competitors in a room and at least acknowledging it and saying, what can we do either individually or privately, or even publicly, as a self-regulation guideline? Because eventually somebody—either a state or federal government—is going to come in and regulate you. So you’re better off trying to police yourself.

And you look a whole lot better, too.

Oh, yeah. But with social media, the gain was too big and too fast. And so now we need to think about taking laws against monopolies and applying them not just to companies, as in, “Can Sprint and T-Mobile merge without creating an unfair advantage?” We really should be applying the rules against monopolies to data as well.

You have to do it sector by sector, because while a lot of the data may not be different, the way it’s collected and how it’s used is. So don’t write a social media law and then make hotels and airlines and everybody else outside the social media space comply with it, because that way you’re going to screw it up. It’s an area that I think needs some sector-specific regulation.

What else would you want to change?

Well, some interesting ideas have surfaced in the social media world in Europe, because they’ve been ahead in regulating that industry. One is “the right to be forgotten,” which means you have the right to have negative information about you removed from Internet searches and some directories, under certain circumstances. It’s not a carte blanche, but it’s a really good right in its limited form. There’s another one called data portability. Let’s say I’ve put all my family photos on Facebook for the last 10 years, and now I want off and I want my photos back. I should have a right to be able to download all that data and take it away. I can’t do that here, but in Europe I would be able to. And companies are now extending that right to users in other countries because they already built it for Europe, though it took a lot of effort in Europe before they started doing it.

In 1991, you became the first Chief Privacy Officer in the U.S., and one of the first in the world. Has the definition of privacy changed since you started out?

Not really. We have two flavors of it, which fundamentally are very similar. The Europeans call it data protection. Most in the U.S. call it privacy. They are slightly different on the fringes, but otherwise there’s a lot of overlap. The Europeans are more heavily focused on the security piece of it. I need to set the stage by saying a lot of what privacy means to a country is cultural. The Germans don’t like lists, which goes back to the Nazis. If you were Jewish you were on a list, so it’s easy to understand that.

I’ll give you a real good example about the attitude in China. When I was at Acxiom, I had a privacy officer in China working for me—brilliant girl, Ph.D., spoke a number of languages. Anyway, they were talking about passing a new law in China giving everyone a social identity card that they could use for everything. They already had identity cards, but this was a new one that would create a “social index” for every citizen. And based on your social index, you would have certain privileges—or not.

When I read about this, I thought, “Oh my god, how could you even consider something like this?” So I called my privacy officer over there. She actually lived in Taiwan, but she went back and forth between there and the mainland a lot. “So,” I said, “what do you think about this? Are people upset about it?” “Oh, no,” she said, “we’re real happy. We think it’s going to provide us benefits.” So the attitude of what is personal is very cultural, very subjective, and that’s why it’s been difficult to have worldwide solutions that everybody can agree on.

As for Americans, I don’t think we’re all that panicked about it, and different age groups and different people have different attitudes—for example, younger people are far less worried about it.

Maybe they just like the gadgets.

But that’s the point. They love them. They love the convenience. And if there aren’t significant risks, why not? The question becomes, Have we done as much as is reasonably possible, prior to deploying, to identify the inherent risks? And then over time, as we learn about new risks, how do we react and respond? I mean, it’s like water in Flint, Michigan: Water regulations have been around for decades, but obviously they didn’t cover the lead pipes that were 50 years old. So it takes some maturity to identify some of the problems. Then the key becomes how do you deal with them, and that’s where your point about people and government being so polarized makes it really, really hard to get the right laws in place.

And it’s vital that we do get those laws right. The European model, which gives much more control to consumers, creates a lot of confusion and a lot of hesitancy among the little companies with a great idea, which typically lean toward taking more risk than a big company that’s got more to lose. So the European model has a dampening effect on innovation, and I’m a big proponent that we cannot stifle innovation as we move forward with whatever legislation we’re going to end up with here in the U.S. Because China doesn’t have any qualms about doing anything, and they could run over us in the AI and analytics world, so that 10 years down the road we might be looking back going, “What happened?” So when you understand the potential dynamics, I hope that everyone in the profession, or even just at a company dealing with a lot of data, realizes that we’ve got to fight for this right to discover.

As you said earlier, you’ve lived in this “data privacy world” for a long time. In closing, tell me how your work has changed from those early days to today.

Well, of course it’s become more complicated, because of all these things we’re talking about. On the other hand, a big part of my role has always been to make corporate entities sensitive to these complex issues, and people are much more aware today. So in that sense, my job has probably gotten a little easier.

We need to innovate, but we have to do it ethically. Back at Acxiom I had some mugs made, and when someone did something really smart, really enlightened, I gave them a mug that said, “Just because you can doesn’t mean you should.” That’s a message I still preach.

I can’t police 200 different people’s creativity here at First Orion, so I see my job as sensitizing them so that they’ll either think about that question themselves—”Should I do it?”—or they’ll come to me and ask, “Should I do this?” We have policies that we do training on, and we talk about these issues in our Monday morning meetings. What I want to do is make it personal. Would you want your information used that way? Some people go, “Oh, I hadn’t thought of it like that.” And I tell them, “Look, guys, we’re all in that database. We’re all being studied everywhere we go.”

When you make it personal, at least you get them to start questioning themselves about what’s right and wrong. Just because they can doesn’t mean they should.