Imagine GTA with AI-generated characters “just going along with whatever insane thing you say,” muses Valve writer, absent-mindedly spotlighting what terrifies me about genAI

Like many people at companies preoccupied with discovering the next “goose that lays the golden egg”, Half-Life 2 and Portal writer Erik Wolpaw has been “poking around” with generative AI. He and a small team at Valve have been testing out different applications, in what Wolpaw assures us isn’t a “concerted” effort at implementing the soul-regurgitating, workforce-abrading gadgetry in any particular new game.

Wolpaw’s current feeling is that generative AI isn’t very good at anything “creative”, like cracking jokes. But he does think Large Language Models could make for entertaining NPC voice reactions in games such as Grand Theft Auto and, indeed, Wolpaw’s own Left 4 Dead, because AI is marvellous at being a fawning little gopher. It is fantastic at “going along with whatever insane thing you say and kind of adjusting to the flow of that”.

Which I think is a bit of an indictment of generative AI at large, but I will cease all this obnoxious editorialising for the moment and let Old Man Murray talk. Quotes have been edited a bit to get rid of conversational ums and ers and the like.

Wolpaw’s reflections on genAI come from the latest MinnMax podcast. “One thing we’ve been doing, and when I say we, I don’t mean Valve – I mean a small group of people at Valve,” he began, around an hour in. “You’re always getting to poke around at new stuff. And so, we’ve been looking at some AI stuff.”

Again, Wolpaw doesn’t think generative AIs are good at “creative” work, such as novel-writing. “Like, I’m currently not worried about AI taking over creative writing because it is pretty bad at it,” he went on. “And I’m not just saying that defensively. Like, we’ve really been messing around with it. And like [with] art, there’s a lot of questions about that, but I don’t think it’s going to anytime soon be writing novels that are better than human. Well, I mean, better than a lot of human novels.”

Where Wolpaw does see potential in generated writing is in characters who can react spontaneously to the vast range of inventive, stupid or otherwise unpredictable things players do in games. It's an idea recently put to the test by Where Winds Meet players, who managed to convince one LLM-voiced character he was going to be a dad.

“[If] you throw enough artists at a game, enough humans can create the art for a game, or almost any of the disciplines,” Wolpaw continued. “The thing with game writing, and game writing specifically, is that we have always had to simulate characters in the game reacting to whatever you do in real time. And we make these matrices – you know, Left 4 Dead’s a good example of ‘if this happens and this happens, we’ll play this line’. It’s the one place where I feel like AI is worth investigating.”

He stressed that none of this experimentation is “Valve-endorsed”, other than “in the sense that we are working for Valve”, and is not attached to any particular project.

“This is nothing other than just some people sitting around being like, this is a crazy technology, it would be kind of silly for us not to look into it, at least,” Wolpaw went on. “And the things that I’ve found, that we’ve found as a group, is that it’s not good at being especially creative. It’s not good at being funny.

“But it’s kind of interesting,” he said. “Imagine Grand Theft Auto where you’re going around creating a lot of physical chaos. There’s a certain amount of social chaos where you have the AI play the straight man as much as it can, and it’s just reacting to whatever insanity [you come up with]. One thing it’s very good at is just going along with whatever insane thing you say and kind of adjusting to the flow of that.”

Wolpaw elaborated that “we’ve tried to simulate it, all these years, of having characters in games just react to you, right? To whatever action you’re taking. And typically the verbs in games are pretty limited in the sense that it’s jump, it’s punch, it’s something physical. But we’ve actually reached the point now where the verbs can be actual verbs – just saying things and seeing what happens. Now, this is all very sketchy, but there’s something there, again specifically for games, specifically in character interactions. And I don’t know what it is, and it’s too expensive now to ship at scale, I think, but something’s there.”
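Wolpaw's “verbs are actual verbs” idea amounts to a loop where the player's free-text line, plus some game state, is handed to a language model told to play the straight man. A hedged sketch of what that loop might look like – `query_model` is a stand-in placeholder, not any real LLM API, and the prompt wording is invented for illustration:

```python
# Hypothetical sketch: an NPC whose reaction "verb" is whatever the
# player types. query_model is a placeholder for a real LLM call.

def query_model(system_prompt: str, user_text: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"[NPC reacts to: {user_text!r}]"

def npc_react(player_line: str, game_state: dict) -> str:
    """Turn arbitrary player speech into an in-character NPC reaction."""
    system_prompt = (
        "You are a bystander NPC. Play the straight man: react plausibly "
        "to whatever the player says, staying in character. "
        f"Current scene: {game_state}"
    )
    return query_model(system_prompt, player_line)
```

The hand-written matrix enumerates reactions in advance; here the reaction is generated per utterance, which is exactly why Wolpaw flags it as “too expensive now to ship at scale”.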

Myself, I love the handwritten bot chatter of Left 4 Dead. I don’t think that game would have been materially improved by giving me the ability to persuade Francis that I am pregnant with his kid, or bully Zoey into eating bullets, or otherwise nudge these entertaining B-movie personas off the rails. I think that developers who are interested in generative AI perhaps overestimate the degree to which players dream of endless spontaneity. I am absolutely happy with anything ‘finite’ and ‘fixed’, providing it’s made in a witty way.

I do find text generators, specifically ‘old school’ procedural generators of the kind found in Dwarf Fortress, very intriguing when I have the ability to open the black box and actually see how they work, which isn’t really possible with many genAIs due to the extent of the number-crunching involved. I also think generative AI tools can be a social good when actual care is shown with regard to acquisition of ‘training’ data, consumption of resources, or restrictions on the model giving, say, advice about mental health. I’d like to know if Wolpaw or Valve have any plans for a proprietary LLM, where they retain tight control over the execution.

Still, as I’ve argued elsewhere, there’s a wide gap between these more contained ideas about ‘generative’ software and the all-purpose, all-absorbing, trillion-dollar CompanionBots created by the most powerful technology corporations.

The point about generative AIs “going along with whatever insane thing you say” has been tested out in actual wars, where military planners use generative AI supplied by the likes of Google, Microsoft and OpenAI to speedily comb intelligence data for potential targets, at the behest of governments dreaming of that one, perfect war that produces a rapid win with minimal casualties. As Jonathan Kwik writes over at Global Policy, “an emerging risk identified with respect to these models is ‘sycophancy’: the tendency of AI to align their outputs with their user’s views or preferences, even if this view is incorrect.”

These aren’t the exact same models that are being offered around for use in videogames, of course. There are internal firewalls of all kinds at companies like OpenAI, but the fact that genAI models are designed to be ‘agentic’ and ‘autonomous’ doesn’t fill me with confidence about these internal protections, and well, wouldn’t it be a turn-up for the books if Wolpaw’s hypothetical pliant GTA pedestrian LLM wiggled its way into the mechanism for aiming airstrikes.
