Is Pluribus actually about AI? We asked Vince Gilligan

[Ed. note: Spoilers ahead for Pluribus episode 3.]

A lot happens in episode 3 of Vince Gilligan’s science fiction series Pluribus, but the plot could be summed up simply as: “Carol (Rhea Seehorn) attempts to find the limits of what the hivemind is willing to do for her, and discovers there are none.” No matter what she asks for, humanity’s collective consciousness says yes — even when it really shouldn’t. The hivemind is also sycophantic, constantly telling Carol how great she is, and how much it loves her.

Watching the latest episode of Pluribus felt weirdly familiar. Then I realized: The way Carol interacts with the hivemind is almost exactly what it’s like to use ChatGPT. Constant positive reinforcement and the innate desire to always say yes are both traits that define OpenAI’s popular generative AI chatbot.

Was this intentional, or just a weird coincidence? Was Pluribus inspired by Gilligan’s own interactions with AI? I went straight to the source to find out, and got a blunt answer.

“I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” Gilligan tells Polygon. “I will never use it. No offense to anyone who does.”

Image: Ursula Coyote/AMC/Everett Collection

That said, Gilligan and Seehorn both have some thoughts on why audiences might see Pluribus as a metaphor for AI, even if that was never the goal. But before I get to that, let me unpack my argument a bit more thoroughly.

Three particular moments from Pluribus episode 3 made me think of ChatGPT, which I’ve experimented with both for personal use and very lightly in my professional work. (I used to ask it for help coming up with synonyms for overused words, but I’ve reverted to checking the actual thesaurus.)

First, there’s the scene where Carol asks the hivemind for a hand grenade, assuming it will refuse to give her such a dangerous weapon. She’s wrong, and humanity’s collective consciousness races to procure the deadly device. This backfires when Carol uses the grenade, blowing up a part of her own home and seriously injuring her “chaperone” Zosia (Karolina Wydra). An agreeable, non-human intelligence saying yes to an irresponsible request that endangers the user… sound familiar?

Rhea Seehorn in Pluribus. Image: Apple

Second, there’s the conversation between Carol and Zosia that takes place between the grenade’s arrival and the explosion. Carol appears to finally open up to her hivemind chaperone, and invites Zosia into her home for a drink. Their conversation seems more like a human talking to ChatGPT than two people actually conversing. Carol asks, “How do you say cheers in Sanskrit?” Zosia answers immediately. As they continue to drink, Zosia happily explains the etymology of the word “vodka” — exactly the kind of random factoid an AI agent might spout off.

And third: Later, after the grenade goes off and Zosia is still in recovery, a random character we’ve never seen before, wearing a DHL delivery uniform, approaches Carol in the hospital waiting room. Speaking for the hivemind, he explains that Zosia will survive, despite some blood loss. Carol asks, “Why would you give me a hand grenade?” and he answers, “You asked for one.”

“Why not give me a fake one?” she replies.

The man looks confused. “Sorry if we got that wrong, Carol,” he says.

“If I asked right now, would you give me another hand grenade?”

“Yes.”

The conversation continues from here as Carol attempts to come up with a weapon so recklessly dangerous the hivemind would refuse to obtain one for her. A bazooka? A tank? A nuclear bomb?

The man bristles at that last one, but when he’s forced to answer whether he’d obtain a nuke for Carol, he replies, “Ultimately, yes.”

Image: Apple

This is a maddeningly accurate example of what talking to ChatGPT can often feel like. These tools are designed not to be accurate or ethical, but to give the user a satisfactory answer. As a result, they frequently come across as sycophantic and cloying (or even harmful in some cases). And if ChatGPT does make a mistake or hallucinate a fact and you catch it, it will happily apologize and attempt to move forward — as if it hadn’t just given you good reason to distrust whatever it says next.

Carol’s experience with the hivemind feels eerily similar. This is an intelligence that wants to make her happy above all else, even if that means doing something extremely dumb, like giving her access to a grenade or a nuclear bomb.

But Vince Gilligan says that wasn’t what he was thinking of when he wrote Pluribus. In fact, when he first came up with the idea for the series, ChatGPT didn’t even exist.

“I wasn’t really thinking of AI,” he says, “because this was about eight or 10 years ago. Of course, the phrase ‘artificial intelligence’ certainly predated ChatGPT, but it wasn’t in the news like it is now.”

However, Gilligan says that doesn’t invalidate my theory.

“I’m not saying you’re wrong,” he continues. “A lot of people are making that connection. I don’t want to tell people what this show is about. If it’s about AI for a particular viewer, or COVID-19 — it’s actually not about that, either — more power to anyone who sees some ripped-from-the-headlines type thing.”

Seehorn takes it one step further, suggesting that the beauty of Gilligan’s work is how well its relatable storytelling maps onto whatever subject the viewer might be grappling with at the moment.

“One of the great things about his shows is that, at their base, they are about human nature,” she says. “He’s not writing to themes, he’s not writing to specific topics or specific politics or religions or anything. But you are going to bring to it where you’re at when you’re watching.”


Pluribus airs weekly on Apple TV. Episodes 1-3 are streaming now.
