Video games as a storytelling medium have often drawn inspiration from movies, and the clearest example of this is the cutscene. It is often said that Pac-Man was the first game to use cutscenes rather than moving straight from one level to the next without interruption: after the player clears certain stages, the game plays a short vignette of Pac-Man and the ghosts chasing each other.

While these little vignettes are obviously a far cry from the way modern games use cutscenes, the core concept is the same.

The game takes control of the character away from the player during a sequence in order to present some kind of new information. The length of these sequences varies widely: Konami’s Metal Gear Solid series is famous for its long cutscenes, with Metal Gear Solid 4 clocking in at over eight hours of them.

These sequences serve a wide variety of purposes: introducing characters, developing established ones, providing backstory, building atmosphere, delivering dialogue, and more.

However, despite their ubiquity in modern big-budget games, cutscenes aren’t necessarily the best way to tell a story in a game. Many highly acclaimed games use few cutscenes, preferring to let the player remain in control of the character throughout.

Valve Software’s Half-Life 2 is currently the highest-scoring PC game of all time on the review aggregation site Metacritic, yet it has only one brief scene at either end. Control is rarely taken away from the player for more than a few moments, except for one on-rails sequence towards the end, and much of the background information that would be delivered through a cutscene elsewhere is instead conveyed through scripted events or details in the environment.

But are Half-Life 2’s unskippable scripted sequences really that different from cutscenes? After all, the player often can’t progress until other characters finish their assigned actions and dialogue, so why not use traditional cutscenes and be done with it? To find truly unique experiences, we must first look at what sets video games apart as a storytelling medium. Unlike movies, where the viewer has no control over the action, or traditional board games, where player actions have little visual payoff, video games offer a unique opportunity to fuse interactivity and storytelling. Gone Home, Dear Esther, and other games in the so-called “walking simulator” genre have been praised as excellent examples of the kind of storytelling that only games can do.

For some players, however, these games present an entirely different problem: while they rarely take control away from the player, they also offer very little in the way of gameplay. Dear Esther, in fact, gives the player no way to affect the world around them: the only available action is to walk a predetermined path to the end of the game. There is no way to ‘lose’ and no interaction with the environment, just what amounts to a panoramic tour with overlaid narration. So even though the game has no cutscenes, the near-total absence of player control and interaction in the first place leaves little to differentiate it from one long, drawn-out cutscene.

As video games exist today, there seems to be a dichotomy between traditional storytelling and gameplay. For a game to tell the player a story, what the player can do must be limited to some degree, either temporarily, in the form of a scripted cutscene or sequence, or permanently, by restricting the player’s possible actions over the course of the game. Perhaps future games will find ways to integrate deep player interaction with compelling storytelling. But that won’t be achieved by taking control away from players and forcing them to watch a short movie instead of letting them play.
