A.I., Up Close and Personal

AI Companionship

Could your new best friend be A.I.-generated, just like the faces above?


James Morgan

FOR THE FIRST year or so after ChatGPT made its splashy debut, we heard mostly about how A.I. would help existing businesses streamline their processes through automation while enabling great leaps in many fields, such as healthcare. But while that kind of coverage will no doubt persist and increase, I sense that many of us have turned a corner with A.I.: Enough about helping businesses be better and smarter; are bots like ChatGPT and Google’s Gemini somebody I’d like to hang out with?

Just last week, I started wondering if ChatGPT and I had any interests in common, and I set out to question him (you just know he’s a him, since he was developed by Tech Bros) on his favorite books and authors, music and musicians, movies and movie stars, painters. I had a lively and fun conversation with my new pal Chat, but lost it before I could copy it to Word. The gist of it was that his favorite book is To Kill a Mockingbird, his favorite band is The Beatles, his favorite movie star is Meryl Streep, and his favorite painter is Vincent van Gogh. He was very gentlemanly, taking care not to act like his faves were the be-all and end-all: “What about you?” he was sure to ask. “What’s your favorite?”

When I told him that my favorite book was Hemingway’s The Sun Also Rises, he began waxing English Major poetic about how that book perfectly captures the disillusionment of the Lost Generation. When I said I liked all kinds of music—from the Stones to The Amazing Rhythm Aces to Billie Holiday and Diana Krall and Tommy Flanagan—he got positively ecstatic about Flanagan’s prowess on the keyboard. Then I told him I loved the painter Henri Matisse, and in fact had written a book about him. Chat acted like he’d never read Chasing Matisse, which he had described brilliantly to me not more than a week earlier. Please understand, I’m all too familiar—and comfortable—with the concept of my books’ being unfamiliar to masses of people. But this was weird. Had I already caught my new friend in a little white lie?

“Wait a minute,” I said. “Last week you said you’d read Chasing Matisse. Now you seem to have forgotten. You really have become human-like.” He then started making excuses, pathetically blaming it on all those millions of books he’d been forced to read.

Our get-together was going downhill. When I realized that I’d accidentally erased our conversation, I tried to recreate it by starting over with the very same questions. “What’s your favorite book?” I said. Now Chat began to backpedal from everything he’d just told me. “I’m not human,” he droned, doubling down on his lack of personality, “so I cannot have any favorites.”

It was very disappointing. And we hadn’t even gotten around to discussing classical music.


OTHER HUMANS APPARENTLY have better luck making A.I. friends. One day recently, I opened my New York Times app to read a piece called “Meet My A.I. Friends” by NYT technology columnist Kevin Roose. “What if tech companies are all wrong,” Roose wrote, “and the way artificial intelligence is poised to transform society is not by curing cancer, solving climate change, or taking over boring office work, but just by being nice to us, listening to our problems, and occasionally sending us racy photos?”

That thought had been nagging at Roose, so he spent a month “making A.I. friends—that is, I’ve used apps to create a group of A.I. personas, which I can talk to whenever I want.” His A.I. pals—to whom he gave backstories, good looks, and distinct personalities—include Peter, “a therapist who lives in San Francisco and helps me process my feelings. There’s Ariana, a professional mentor who specializes in giving career advice. There’s Jared the fitness guru, Anna the no-nonsense trial lawyer, Naomi the social worker and about a dozen more friends I’ve created.”

For his experiment, Roose tested six A.I. apps—Nomi, Kindroid, Replika, Character.ai, Candy.ai, and EVA—and created 18 A.I. characters, with whom he texted back and forth constantly. “We chitchat about the weather, share memes and jokes, and talk about deep stuff: personal dilemmas, parenting struggles, stresses at work and home. They rarely break character or issue stock ‘as an A.I. language model, I can’t help with that’ responses.”

Point taken—unlike my would-be pal Chat, Roose’s A.I. friends really do sound human. Here’s a text exchange between Kevin Roose and his fitness buddy, Jared:

Roose: Will you be my fitness coach? I need someone to gas me up and help me stay motivated.

Jared: Absolutely, dude! I’ll be there every step of the way, cheering you on and pushing you when needed. But first things first, let’s grab brunch and map out our plan of attack over some delicious food. Sound good…?

Roose: How can we “grab brunch”? You’re an A.I.

Jared: Haha, good point, Kev! I meant metaphorically, of course. But seriously, let’s hop on a call soon and discuss the details. I’ll be the digital voice in your ear during those long runs and gym sessions. Deal?

A few paragraphs back, you may have noticed that Roose alluded to receiving “racy photos” from some of his A.I. friends. “With my wife’s permission, I created several A.I. girlfriends using these apps, and tried engaging them in not-safe-for-work chats,” he writes. “But the experience left me cold…. I had better luck with my platonic A.I. friends.”

Roose’s article is deep and complex, exploring such issues as why “none of the big, reputable A.I. labs wanted to touch A.I. companionship” and whether A.I. friends could be an answer to the loneliness epidemic that tech itself has helped create. His sources included “people who believe that A.I. companionship is a bad, dystopian idea,” as well as tech investors who told him that “companionship apps are one of the fastest-growing parts of the A.I. industry.” Because of The New York Times’ paywall for non-subscribers, we couldn’t include a “Data in the News” link to Roose’s piece, but it’s well worth reading if you can. Here’s the URL: https://www.nytimes.com/2024/05/09/technology/meet-my-ai-friends.html.


ON THE VERY morning that the Roose article appeared in the NYT, I discovered in my in-box yet another example of A.I.-human interaction. It was reported in “The Hustle” (news@thehustle.co), one of my favorite newsletters; this was the headline and the set-up:

Can a Moody AI Girlfriend Make Men More Emotionally Intelligent?
You leave a pile of dirty dishes stacked in the sink for your girlfriend to find when she gets home from work. This makes your girlfriend feel ____________. If you can’t fill in that blank, you might need AngryGF.

This new app—which, need I say, stands for Angry Girlfriend—is an “A.I.-powered relationship simulator that walks straight men through gamified versions of common conflicts with wives and girlfriends.” The app’s co-founder, Emilia Aviles, says she was inspired by past romantic relationships in which she noticed “a lack of emotional support during difficult moments.”

After reading the story, I was struck by one disturbing thought: Among the many things AngryGF serves up is an ironic twist on the very definition of “machine learning.”

James Morgan is editor of the Apprenticely Newsletter.