The first game conference I ever attended was at MIT in the late 90s. It’s where I met people who actually worked in the game industry for the first time. Some were my heroes. Some I’d never heard of. I was just a student, with dreams of someday doing what they did, and I remember the conversations vividly.
These were the early days of 3D gaming, after the CD storage boom had made cutscenes a big part of video games. There was a sense that the industry was experimenting, trying to “crack the code” of video game storytelling, and a lot of the talks, panels, and just general chatter were about this in one sense or another. What was the “right” way to tell a game story, so that it wasn’t “just a movie”? All these people seemed to hate cutscenes, or even just general cinematic presentation, as well as the games famous for them.
I remember talking to one of these developers about Final Fantasy VII and how it compared to Xenogears, which Squaresoft had just published. These games felt almost identical to me in terms of how interactive they were. If anything FF7 felt more interactive, since Xenogears was infamous for just becoming a barrage of cutscenes in its latter half. But this guy adamantly felt Xenogears was more “interactive” than FF7.
“Why?” I asked, bewildered.
“Because you can move the camera,” he replied. “That’s a kind of interactivity, isn’t it?”
It boggled my mind that someone could think that a game where you can’t date anyone, can’t perform CPR, can’t snowboard, can’t order a drink, and can’t do a host of other eccentric little things FF7 let you do was somehow “more interactive” just because you can swing the camera left and right while walking around, but this says a lot about the mindset of Western–or maybe particularly North American–game developers at the time. While there were plenty of deep, richly interactive games being made, where you did have tons of such choices—from Baldur’s Gate to Fallout to Ultima to System Shock to many others—there was also this obsession with “eliminating cutscenes”, to the point that any new technique that eschewed traditional cinematic language was seen as inherently a step in the right direction, towards games “being free of the shackles of cinema”, regardless of what that meant materially in terms of the choices available to the player.

For an industry with these obsessions, the release of Half-Life was an instant revelation, like it was the Bell X-1 and Gabe Newell was gaming’s Chuck Yeager, the duo that broke the sound barrier. Valve had “cracked the code”, had finally shown that a game could tell a story without a single cutscene, without ever “taking control away from the player”. This is when Half-Life’s legendary status was solidified, when its list of design choices commonly cited as groundbreaking—the “cutscene-less” narrative design, the coherent sense of spatial exploration, the use of “realistic” locations, the lack of inventory management to slow you down, the crisp strategy offered by its nail-biting close-quarters combat—was first articulated. It was a towering achievement.
It also fucked everything up.