Effects Of Audio Technology On Video Game Music Media Essay
Paper Type: Free Essay | Subject: Media
Wordcount: 2620 words | Published: 1st Jan 2015
How have advancements in digital and computer technology allowed video game music to become less technical and more musical?
Attention to game audio has increased dramatically over the last thirty years, as the game playing populace demand a more fulfilling and immersive sensory experience.
This paper examines the developing relationship between the music and games industries and the effect of new technologies on video game audio and its creative process.
Despite limited empirical data, the research identified that the various emerging technologies of the 1980s and 1990s have dramatically enhanced video game audio to the high-fidelity levels of cinematic music. However, there has been a recent trend to return to old technologies, in an attempt to become more creative by using technological constraint as a tool.
Video game music is a unique type of background music that has been written exclusively for a video game – its main aim is to enhance the enjoyment and enrich the gaming experience.
Compared to the film industry, the video game industry is relatively new, dating back to the early 1970s with the arcade machine. However, music in video games has only come to prominence in the last decade with the rapid advancements in sound technology. Prior to this, game music played a secondary role: an afterthought to the visuals and graphics of the game.
Video game music has also become more accessible to musicians over the last decade. A combination of hardware and software innovations, and their increasing ease of use, has attracted traditional composers into the “emerging” games music industry. This was a role traditionally confined to specialist programmers who often had little or no musical skill.
This body of research will look into the trends of video game and computer technology, and whether it has affected the video game music industry in a positive or negative way.
Video game music is a relatively new and emerging field which has come to the forefront of game design in the last decade.
Despite being in its infancy, author and academic Karen Collins, who holds a Canada Research Chair (Communication and Technology) at the University of Waterloo, notably points out that game sound has been much neglected in the growing literature on game studies, though it is an integral part of the gaming experience.
Collins documents that “The development of game audio can be seen as a series of pressures of a technological, economic, ideological, social, and cultural nature.” (Collins, 2008, p. 6)
Collins cites technological constraints for the limited sounds of early games, due to programming difficulties and insufficient memory, where an introductory and ‘game over’ music theme sufficed, along with minimal sound effects. Game audio has come a long way since the low-fidelity bleeps of Pong, as Collins rightly points out: “…games audio has reached a ‘cinematic’ quality that it is gaining some recognition.” (Collins, 2007, p. 1)
She writes: “The efforts of industry groups such as the Interactive Audio Special Interest Group (IAsig), Project Bar-B-Q, and the Game Audio Network Guild (GANG) have in recent years been advancing the technology and tools, along with the rights and recognition, of composers, sound designers, voice actors, and audio programmers.” (Collins, 2008, p. 10)
New streams of interest have opened up, including classical rearrangements and performances of video game music, such as Video Games Live. Using game music as a marketing ploy and income source has attracted ‘big’ artists such as Metallica to become part of the process. David Thorn, the Senior Vice President of New Media Strategy for Rhino Records, states that “music is an essential part of the gaming experience and gaming is an essential vehicle today for music discovery.” (Gaming Trend, 2004, par. 8)
The importance of sound in games is reinforced by a study on “Images vs. Sound” in video games carried out by Paul Skalski and Robert Whitbred of Cleveland State University, which found that “sound quality almost universally impacted outcomes of interest, including several dimensions of presence and enjoyment.” (Skalski & Whitbred, 2010, p. 1)
With advancing computer technology, music for video games finally began to resemble real music, according to video game composer Tommy Tallarico. He states that the catchy “bleeps and bloops” of the early arcade games have been replaced, with the “complexity of music improved to the point where the score of a video game becomes almost indistinguishable from the music played in the finest concert halls.” (NPR Music, 2008, par. 10)
The general consensus of the literature and research to date reveals a positive impact of new technologies on video game audio. Despite its slow progression from the 70s, as Collins points out, game audio today, appears to be getting the recognition it deserves.
Skalski and Whitbred’s study examined the impact of image quality and sound quality on game enjoyment and presence using 74 undergraduate communication students ranging in age from 18 to 26 (48% male) playing Tom Clancy’s Ghost Recon Advanced Warfighter on the Xbox 360.
A series of two-way analyses of variance with image quality and sound quality as independent variables were used to test the first four hypotheses and research questions. The control variables were included in the analyses as covariates.
Karen Collins states that “limited empirical research has been carried out to date, due to the relatively new field of game audio.” Collins has obtained the majority of her research from primarily North American and British sources, admitting a certain bias in her analysis – these include interviews with composers, sound designers, voice-over actors, programmers, middleware developers, engineers and publishers of games (Collins, 2008).
The first and most important development for video game music was the creation of Programmable Sound Generators (PSGs) and digital synthesis techniques such as Frequency Modulation (FM) Synthesis. The PSG was first implemented in the late-first-generation consoles, alongside popular games such as “Pong” in 1972 (Collins, 2008). The second-generation video game consoles such as the Atari 2600 featured more advanced PSGs. Video game sound in this era was very simple, with generated tones playing one at a time (i.e. monophonic). According to “Pong” designer Al Alcorn, the paddle “hit” sound was made by accident:
“The truth is, I was running out of parts on the board. Nolan [Bushnell] wanted the roar of a crowd of thousands – the approving roar of cheering people when you made a point. Ted Dabney told me to make a boo and a hiss when you lost a point, because for every winner there’s a loser. I said ‘Screw it, I don’t know how to make any one of those sounds. I don’t have enough parts anyhow’. Since I had the wire wrapped on the scope, I poked around the sync generator to find an appropriate frequency or a tone. So those sounds were done in half a day. They were the sounds that were already in the machine” (Kent, 2001, pp. 41–42)
The first console to feature multiple PSGs with polyphonic technology was the Nintendo Entertainment System (NES) in 1985, which used “one sine, one noise, and two pulse-wave voices, with one voice channel of 7-bit delta-modulated sample playback” for both music and sound effects (Deenen, 1992, p. 51). This allowed in-house composers to finally create simple melodies in their games. The most well-known example is Koji Kondo’s score for the hit video game “Super Mario Bros.”.
However, most music from this era of video games had to be written carefully and systematically, as composers had to trick the listener into hearing various effects and noises. According to Keyboard magazine, “Spare voices were used to double melody lines, playing a millisecond off to create a primitive echo or chorused effect. Percussion sounds were created with carefully timed bursts of static and distortion on the noise track, but often had to be omitted entirely in favor of sound effects.” (Deenen, 1992, p. 48)
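The “primitive echo” trick described in the Keyboard quote can be sketched in a few lines. This is a hypothetical illustration, not actual NES code: the function names, note lengths and volumes are invented, and a real PSG would generate its pulse waves in hardware rather than with NumPy.

```python
import numpy as np

SAMPLE_RATE = 44100

def square_voice(freq_hz, duration_s, volume=0.5):
    # A simple pulse-wave voice, loosely in the spirit of one PSG channel
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return volume * np.sign(np.sin(2 * np.pi * freq_hz * t))

def melody_with_echo(freqs, note_s=0.25, delay_s=0.03):
    """Double the melody on a spare voice, offset by a few milliseconds
    and played quieter, to fake an echo/chorus effect."""
    lead = np.concatenate([square_voice(f, note_s) for f in freqs])
    echo = np.concatenate([square_voice(f, note_s, volume=0.25) for f in freqs])
    delay = np.zeros(int(delay_s * SAMPLE_RATE))
    # Pad both voices to the same length, then mix
    lead = np.concatenate([lead, np.zeros_like(delay)])
    echo = np.concatenate([delay, echo])
    return lead + echo

mix = melody_with_echo([440.0, 660.0])  # two notes, each doubled 30 ms late
```

Because the second voice is quieter and only milliseconds behind, the ear hears one thickened note rather than two, which is exactly why the trick spent a scarce voice so cheaply.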
FM Synthesis was also being developed and utilised in the late 1970s and early 1980s. FM chips were often found in the larger arcade machines of the time, as well as add-on sound cards for the IBM PC – which replaced the single inbuilt PSG. The FM Synthesis technique, which involves combining two waveforms (a “carrier” wave and a “modulating” wave) to create a new waveform, allowed for a more complex range of timbres for both sound and music (Goehler, 2011).
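The carrier/modulator relationship described above can be shown with a minimal sketch. This is an illustration of the general FM technique, not the code of any particular sound chip; the function name and parameter choices are assumptions made for the example.

```python
import numpy as np

def fm_tone(carrier_hz, modulator_hz, mod_index, duration_s=1.0, sample_rate=44100):
    """Generate an FM-synthesized tone: the modulating wave varies the
    carrier's phase, adding sidebands that enrich the timbre."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    modulator = np.sin(2 * np.pi * modulator_hz * t)
    # With mod_index = 0 this collapses to a plain sine wave;
    # raising it makes the tone progressively brighter and more complex
    return np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)

tone = fm_tone(carrier_hz=440.0, modulator_hz=220.0, mod_index=3.0)
```

The ratio of carrier to modulator frequency sets where the sidebands fall (harmonic ratios sound pitched, inharmonic ones sound bell- or metal-like), which is why a single FM operator pair could cover such a wide range of timbres on the arcade chips of the era.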
The 1990s provided MIDI playback and more advanced digital signal processing effects for video game consoles and PC soundcards, which introduced a new era of video games containing more advanced music and sound effects (Collins, 2008, p. 63).
The birth of the CD-ROM and Dynamic Audio:
The other technology which aided game audio development was the creation of the CD-ROM. Originally, the development and release of the Compact Disc by Sony and Philips in 1982 allowed audio to be stored and played back through digital means by CD players. However, it was the CD-ROM or “Compact Disc Read-Only Memory” that would allow consoles and PCs with sound cards to experience better music and sound (identical to normal CD audio) in video games. This technology quickly spread to video game consoles, notably Sony’s PlayStation console in 1994. The PlayStation was also one of the first consoles to allow 24 voices to be used at the same time, giving composers greater freedom in their writing (Belinkie, 1999).
Video game composer Tommy Tallarico, interviewed on National Public Radio Music, remembers how “video game music was limited by its hardware”, citing the primitive nature of early video game music such as Space Invaders. NPR host Andrea Seabrook elaborates, stating that it was not until the release of the CD-ROM and the power of the personal computer/console that “suddenly the music track could handle something resembling real music”. (NPR, 2008)
Seabrook uses the music from Myst by renowned video game composer Jack Wall to demonstrate her point, saying “The complexity of the music improved to the point where the score of a video game became almost indistinguishable from the music played in the finest concert halls.” (NPR, 2008)
The use of CD-ROMs for video games allowed popular artists to take interest in composing for video games – in a similar way to composing film music. One of the more famous examples is Trent Reznor’s score for Quake in 1996, which could also be played in a normal CD player (Collins, 2008, p. 114).
Surround sound technologies from film and TV played an important role during the 1990s in immersing the player inside video game worlds, with recent consoles including the Xbox 360 and PlayStation 3 incorporating well-known surround sound codecs including Dolby Digital 5.1 and DTS (Collins, 2008, p. 71).
Dynamic audio is considered to be the technology which separates video game music from other forms of music. According to Collins, “The unique relationship in games posed by the fact that the audience is engaging directly in the sound playback process on-screen… requires a new type of categorization of the sound-image relationship.” (Collins, 2008, p. 125)
One of the early examples of this technology is Lucas Arts’ “iMUSE” system, released in 1992. “iMUSE” featured heavily in their early adventure games, and would change the music in game depending on the player’s actions. This allowed composers to write more intricate and advanced scores, according to composer Michael Land:
”The thing that’s hard about music for games is imagining how it’s going to work in the game. The iMUSE system was really good at letting the composer constantly test out the various interactive responses of the music: how transitions worked between pieces, how different mixes sounded when they changed based on game parameters, etc. Without a system like that, it’s much harder to conceive of the score as a coherent overall work” (Mendez, 2005).
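The core idea Land describes, music that responds to game parameters, can be sketched as a tiny state machine. Everything here is hypothetical: the cue names, thresholds, and class are invented for illustration and are not part of iMUSE, which additionally scheduled its transitions on musical boundaries rather than switching instantly.

```python
class DynamicScore:
    """Minimal sketch of adaptive music: pick a cue from game state."""

    def __init__(self):
        self.current_cue = "explore"

    def update(self, enemies_nearby, player_health):
        # Derive the target cue from game parameters
        if player_health < 20:
            target = "danger"
        elif enemies_nearby > 0:
            target = "combat"
        else:
            target = "explore"
        if target != self.current_cue:
            # A real system (like iMUSE) would wait for a bar or beat
            # boundary and crossfade; here we just switch cues
            self.current_cue = target
        return self.current_cue
```

Even this toy version shows why Land valued being able to test transitions constantly: the score's coherence lives in the rules that move between cues, not in any single piece of music.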
A return to 8-bit:
Despite new technologies leading the way in games music, Collins also offers evidence of a return to the classical style of 8-bit video game music – using constraint as a composition tool (Collins, 2008). Rob Hubbard, an in-house composer for the Commodore 64, found the limitation in sound useful:
“Well, you know, part of that [sound aesthetic] is dictated by the fact that you have such limited resources. The way that you have to write, in order to create rich textures, you have to write a lot of rhythmic kinds of stuff… it’s easier to try to make it sound a lot fuller and like you’re doing a lot more if you use much shorter, more rhythmic sounds.” (Collins, 2008, p. 224)
Final Fantasy composer Nobuo Uematsu believes that the limited technology of the time gave composers their unique style of composing:
“The NES only had three tracks, and each of their sounds were very unique. I had to focus on the melody itself and think about how each chord will move the audience. I struggled to produce originality in the same three tones, just like any composer from that period. It’s amazing to listen to how each of us: Konami composers, Koichi Sugiyama, and Namco composers each had totally different creations by using the same three instruments. There was originality in ‘Game Music’ back then.” (Belinkie, 1999)
There has also been a stylistic return to 8-bit music, without the constraints of the technology from the 8-bit era. The BBC’s Matt Danzico reports from New York on an emerging gamer-centric sub-culture using old video game technology to create music. Known as chip music, it essentially uses the 1980s technology of the Commodore 64, Atari 2600 and the NES to create a new musical style and aesthetic (Danzico, 2011).
Undoubtedly new technologies have transformed video game audio. A combination of hardware and software capabilities developed over the last decade has seen game music reach the high fidelity levels of concert hall performances and film music.
Early video game music was limited by technological constraints, as academic and author Karen Collins points out in her research: “the sounds (of early games music) were not an aesthetic decision, but were a direct result of the limited capabilities of the technology” (Collins, 2008).
Advancements in storage capacity and synthesis technologies – FM synthesis, the CD-ROM, the sound card – and the combination of modern programming techniques with dynamic video game soundtracks have enabled traditional musicians to enter the games music industry as composers, a role previously held by the games programmers themselves, who often had little or no musical background.