Playing With Theory: Boundary Objects and (In)visible Work

Susan Leigh Star’s claim that “infrastructure becomes visible upon breakdown” strikes me as deeply relevant to translation whenever I hear it (Star 2010: 611). Yes: translation, particularly that of games, is marked by its invisibility. One recent poster to the International Game Developers Association Localization Special Interest Group message board wrote, “you either get no qualitative feedback when a job is done well or you get negative feedback” (IGDA LocSIG Sep 24, 2012). A second stated, “As for localization effectiveness, I think it’s similar to good film soundtracks: if you don’t notice it, it’s great” (IGDA LocSIG Sep 24, 2012). For both of these localization specialists, the experience is that a good translation is unremarked upon and invisible.

Such a beginning leads me to two general theories that Star worked on: the first is boundary objects (Star and Griesemer 1989), and the second is the (in)visibility of work (Star and Strauss 1999).

While I tend to think of translation as an interface that translators actively manipulate in order to alter a text so that it works between people and cultures, it would not be too much of a stretch to think about the practice of localization as a “boundary object” (Star and Griesemer 1989). Everybody — programmers, translators and players included — works and interacts on the same text even as it is manipulated and used differently. The problem I find with the boundary object methodology (metaphor?) is highlighted in Star’s 2010 reiteration of her and Griesemer’s three-part definition. She begins by noting that, “The object… resides between social worlds (or communities of practice) where it is ill structured” (Star 2010: 604). Games, as locally created texts (never universal despite claims to the contrary), are necessarily between worlds/communities of practice, be they national or linguistic. So far, so good.

Star continues: “When necessary, the object is worked on by local groups who maintain its vaguer identity as a common object, while making it more specific, more tailored to local use within a social world, and therefore useful for work that is NOT interdisciplinary” (Star 2010: 604-5). Here is where things get very similar, but at the same time very off, when thinking about game translation. The idea that “local groups” work on a game doesn’t hold, as it is a third party, the translator/localizer, and not the participant/player, who does the work. Although the best localizers are also players, they are forcibly removed from their sociality through the secrecy of NDA restrictions. There is a break in the social community: essentially, while a game is under translation there is no community, particularly for freelancers who are forbidden to talk to others. However, somebody does work to tailor a game to local use — the whole definition of localization is exactly that: to render local for consumption. Finally, we have an incredibly interesting bit: the result of such boundary work is that the object becomes “useful for work that is NOT interdisciplinary.” If we follow the localization practice of games to its conclusion, then local play of games is NOT global play. Said another way: translators alter the text so that you do not have to deal with the messiness of inter[action] with an Other.

Finally, Star and Griesemer’s third definitional clause is that “Groups that are cooperating without consensus tack back-and-forth between both forms of the object” (Star 2010: 605). And here is where the relationship between localization and boundary objects seems to fall apart. The nature of games as global texts is that they are simultaneously consumed, in different forms, by people in different locations as the same text. Furthermore, access to “both forms of the object” (e.g., both translations/localizations) is generally rendered impossible (PS2), expensive (DS), or difficult (iPhone). Granted, sometimes and in particular places, these different versions are rendered visible. The most obvious example is European developers making European versions. Because of the reality of living between languages, the typical European method of coding different versions is structural (you choose the language when you start the game) rather than infrastructural (the game matches the system language, or only one language ships per disk). In this European situation the translation is still a boundary object, as it is visible. However, in both the current trend and (almost) all games between Japan and the United States there is no cooperation, consensus, or back-and-forth. Is this a good or bad “standardization” according to Star? Of that I’m not quite sure, but given the lack of inter[action], my own feeling is that it is ethically problematic.
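To make the structural/infrastructural distinction concrete, here is a minimal sketch in Python (the function names and language list are my own illustration, not any engine’s actual API): the structural build keeps other versions visible by asking the player, while the infrastructural build silently resolves the language from the system locale.

```python
import locale

# Hypothetical EFIGS-style language list for a European release.
AVAILABLE_LANGUAGES = ["en", "fr", "de", "it", "es"]

def choose_language_structural(ask_player):
    """Structural: the game asks at startup, so the existence of
    other language versions remains visible to the player."""
    return ask_player(AVAILABLE_LANGUAGES)

def choose_language_infrastructural():
    """Infrastructural: the game silently matches the system language;
    the player may never learn that other versions exist."""
    system_lang = (locale.getlocale()[0] or "en")[:2].lower()
    return system_lang if system_lang in AVAILABLE_LANGUAGES else "en"
```

In the structural case the boundary object stays visible; in the infrastructural case the translation disappears into the infrastructure.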

Yes, certain aspects of the boundary object met[hod/aphor] resonate with a study of game translation. However, there are simply too many nitty-gritty details that don’t work, particularly the invisibility enforced by the NDA. And here is where I can transition from the front-running popular theory to the secondary one that seems more viable given the translation/localization context: visibility and work.

While unrelated to her work on boundary objects except through the conceptualization of infrastructure and its invisibility, Star’s writings on the (in)visibility of work are quite helpful for understanding game localization. Star and Strauss (1999) argue that work invisibility takes two forms: the invisible worker and the invisible work. The first is painfully exemplified in Rollins’ (1985) work on African-American housekeepers, where the workers are reduced to the status of invisibility even while their work is valued. A less racially painful but similar example from early video game history is how programmers’ names (and work) were struck from the games they created. In an attempt to keep programmers from gaining status, and therefore the ability to demand greater pay, publishers kept programmers anonymous, and games were published under only the publishing company’s name. One of the first programmers to gain visibility was Warren Robinett, who inserted his own name into Adventure, the Atari game he was programming. While (many) programmers have since gained authorial/visible status and are now a part of ‘history,’ other workers remain invisible. Translators are my key example: their work is visible to publishers, but their names are not granted visibility. According to one interviewee, translation is a ‘service economy’ — just like housekeeping. Because of translation’s ‘service’ status, the translators themselves become invisible ‘service workers.’ The second form of invisibility is when the work itself becomes invisible due to its taken-for-granted status. Star and Strauss point toward parents, secretaries and call-service workers, but their primary example is nurses: “If one looked, one could literally see the work being done – but the taken for granted status means that it is functionally invisible” (Star and Strauss 1999: 20). This sort of work is absolutely necessary for standard practices to continue, but its importance and prevalence are generally overlooked. For global media this describes translation to a T: media is global thanks to translation, but nobody ever bothers to think about it because it is part of the infrastructure.

So translation fits both types of invisibility. However, as with Suchman’s (1995) discussion of making work visible, translators’ desire for visibility is not a simple issue. Writings on translation theory have discussed the issue of trust between the speaker/writer and the translator, and between the speaker/writer and the audience (as mediated by the translator). Located in a double bind, the translator is ethically required to faithfully translate the words and intentions of the speaker. That words and intentions cannot both be translated is the first quandary: the translator must often pick one or the other and commit to it. At the same time, the translator is ethically tasked with not making the speaker look like a fool in front of his or her audience. Thus, the translator often smooths the exchange by making the speaker look better, striking closer to what the speaker meant, and could have said, given a better understanding of the contextual audience.

These two binds — faithfulness to the speaker’s words on the one hand (regardless of whether we’re talking about sense/word battles), and attentiveness to the relationship between speaker and listeners on the other — directly complicate any discussion of the possibility of visibility. For translators (and the bind is most visible with simultaneous interpretation), being visible is a problem. Naomi Seidman writes explicitly about this, using her grandfather as an example of the translator as “double agent.” As a means of getting things done properly, Seidman’s grandfather intentionally altered his translation when telling French gendarmes what he had said to a group of Yiddish-speaking Jewish refugees: to the Jews he said (in Yiddish) that the local Jews would find them and help them, and that they should not be afraid, as the French were not Nazis; in response to the gendarmes’ query about what he had told the refugees, he falsely responded, “I quoted to them the words of a great Frenchman: ‘Every free man has two homelands — his own, and France’” (Seidman 2006: 2). He abandoned faithfulness to both the words and their content in the hope of helpfully interfacing between speaker and listeners. According to Seidman, the sole reason her grandfather was able to help the situation (by translating unfaithfully, as a double agent) was that nobody could check him. He gained autonomy through partial invisibility. Invisibility is a key desire of many translators, both so that they can unfaithfully translate words (as with Seidman’s grandfather) and so that they can faithfully translate problematic ones. An example often used to illustrate the dangers of translation is Igarashi Hitoshi, the Japanese translator of Salman Rushdie’s Satanic Verses, who was murdered following Khomeini’s fatwa against the book and all involved with it. In the desire to be able to faithfully translate an unwanted message, the translator often wishes to become invisible so as to avoid being shot as the messenger. While in far less of a life-and-death situation, certain freelance translators I have interviewed have elaborated on their happiness when invisible, as it allows them greater freedom not to care about the work they produce. Knowing that they are invisible, these translators are able to work faster, and to earn enough money to spend more time on the ‘better’ jobs where they are given credit for their work (and can put it on their resume/CV). Given these situations, it makes sense that translators do not always want their work to become visible. Invisibility, and the trust it presumes, enables them to go about their work in a more satisfactory way.

Unfortunately, the translator’s invisibility is only one half of the issue; putting translators’ desires and well-being aside for the moment, the translation’s invisibility is also a problem. Venuti (2008) has discussed the discursive invisibility of translation within the United States at length. He argues that in addition to the above invisibility of the translator, there is a discursive invisibility of the fact of translation among American readers (and by extension, viewers of movies/television and players of games). To Venuti, this is a problem given the United States’ socio-cultural dominance in the late 20th century: the invisibility of translation simply supports an ethnic/cultural chauvinism. For Venuti the answer is to give the translator a partial form of authorship: to acknowledge that the original and the translation are (necessarily) different, and to understand the translator as having an equally authorial role. Ironically, one part of Venuti’s claim has come to fruition through the concept of “transcreation.” This is ironic largely because the results of transcreation, as an industry tactic, are often far from ethically motivated.

Transcreation is about manipulating a campaign to best sell a product in a local market. The authors of The Little Book of Transcreation compare translation and transcreation by saying, “Translation is about the ability to understand someone else’s language. Transcreation is about the ability to write in your own.” The authors then conclude their small book (about 3 inches tall) by writing, “With literary works, the freer approach of transcreation may not be suitable, out of respect for the original. But where the message is more important than the medium – as in marketing – transcreation ensures that far less is lost along the way. So travel transcreation class, to make sure your message gets there with you.” We can take two important points from transcreation: a) it is not about understanding an other, and b) it is not interested in the original message, but in the possibility of making money. So, despite granting authorial authority to the translator (they change the text as they please), both translators and translations remain invisible.

End comments:
  • Translation and (in)visibility are crucially tied.
  • Industry secrecy makes this even more prevalent.
  • Making money and understanding culture are not the same.

References:

  • Humphrey, Louise, Amy Somers, James Bradley, and Guy Gilpin. The Little Book of Transcreation. London: Mother Tongue, 2011.
  • International Game Developers Association Localization Special Interest Group. Mailing List. September 24, 2012.
  • Rollins, Judith. Between Women. Boston: Beacon Press, 1985.
  • Seidman, Naomi. Faithful Renderings: Jewish-Christian Difference and the Politics of Translation. Chicago: University of Chicago Press, 2006.
  • Star, Susan Leigh. “This Is Not a Boundary Object: Reflections on the Origin of a Concept.” Science, Technology & Human Values 35, no. 5 (2010): 601-17.
  • Star, Susan Leigh, and James R. Griesemer. “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39.” Social Studies of Science 19 (1989): 387-420.
  • Star, Susan Leigh, and Anselm Strauss. “Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work.” Computer Supported Cooperative Work 8 (1999): 9-30.
  • Suchman, Lucy. “Making Work Visible.” Communications of the ACM 38, no. 9 (1995): 56-64.
  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1995].

Connect-the-dots: Lippit, Bellah, Pollack and Bogost (or, an Ethics of the Trace in Game Translation)

Akira Lippit has written extensively on the trace in Japanese cultural texts, particularly in its relationship to a post-atomic residue [1]. Lippit uses Derridean post-structuralist critical theory to unpack the ethics of this layering, or avisuality. While his analysis of the texts is wonderful, it attends primarily to the originals. Since the films he analyzes are global, like many 21st-century texts, the question of an ethics of transmission and translation becomes key. Is it the responsibility of the translator to transfer this particular level of reading? Said a slightly different way, would it be necessary for an American remake of any of the films he discusses to include the same avisuality?

From a completely different disciplinary area, yet with a similar focus on Japan, Robert Bellah has noted the Japanese tendency to incorporate oppositional ideologies insofar as they do not upset the overarching Japanese familial structure [2]. It is when the overarching structure is upset (as with the popularization of Christianity and late Meiji westernization) that larger disruptions occur (leading to purges and the twentieth-century buildup to WWII). What Bellah describes as the partial incorporation of the other, which often takes the form of a mixing of English with Japanese in both daily life and popular cultural texts, can be seen as an important historical and cultural part of video games. David Pollack’s study of Japan’s synthesis of Chinese culture and linguistics, as it transitions to a synthesis of Western culture, offers a theoretical link to this claim [3].

Finally, we can mash in Ian Bogost’s claim that games are a mess, and that we need to study them as a mess. They are not simply the code, or the graphics, or the play, or the box, or the advertising, or any other clean ontological state, but everything together in a slutty ontology [4]. The mixture in Bellah and Pollack, which Lippit sees in films that have not been subjected to localization, is an important part of the ontological orgy that is video games. On this thinking, games necessarily include their cultural, connotative meanings as well, and while it may not be among the goals of the industry for localizers to transfer the theoretical and ethical mash that is Japan, it is the ethical responsibility of the translator to do so.

References:
[1] Lippit, Akira Mizuta. Atomic Light (Shadow Optics). Minneapolis: University of Minnesota Press, 2005.
[2] Bellah, Robert. Imagining Japan: The Japanese Tradition and Its Modern Interpretation. Berkeley: University of California Press, 2003.
[3] Pollack, David. The Fracture of Meaning: Japan’s Synthesis of China From the Eighth Through Eighteenth Centuries. Princeton: Princeton University Press, 1986.
[4] Bogost, Ian. “Video Games Are a Mess.” Keynote at DiGRA 2009: Breaking New Ground: Innovation in Games, Play, Practice and Theory, 2009.

Minor Scandal vs. Systemic Sexism

Minor scandal is great for blog entries. It provides a nice easy topic to talk about at a moment when it seems relevant. One interesting scandal of the moment concerns Dead Island. A mistakenly released build revealed the original name of a talent that had been renamed, without comment, to “Gender Wars.” The talent gives the player character Purna a damage increase against male zombies; it was originally called “Feminist Whore.” There was no change in how the skill worked, just in the skill’s name. This means that somewhere along the production cycle the name was changed to a linguistically more “appropriate” term, but the concept remained in the final build.
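A minimal sketch in Python of why the rename is purely cosmetic (the data fields and function are my own illustration, not Dead Island’s actual code): the mechanic lives in the numeric fields, while the name is only a display string that never enters play.

```python
# Hypothetical skill data; only "display_name" changed between builds.
skill = {
    "id": "purna_talent_gender_wars",
    "display_name": "Gender Wars",   # leaked build: "Feminist Whore"
    "damage_bonus": 0.15,            # +15% damage
    "target_filter": "male",         # applies only against male enemies
}

def effective_damage(base_damage, target_sex):
    """The effect reads only the data fields; renaming the display
    string leaves the mechanic itself untouched."""
    if target_sex == skill["target_filter"]:
        return base_damage * (1 + skill["damage_bonus"])
    return base_damage
```

The point of the sketch is the post’s point: changing the label is trivial, while the concept ships unchanged.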

The minor-scandal element of this is hardly unpredictable: a fan delves into the code, finds something, and posts it online; gamers generally laugh it off, or get angry that their somehow post-gendered turf is being invaded with pc crap; news sources pick it up and spread it around; companies apologize and bad apples get blamed; there is more fan anger that games must remain outside the realm of politics, as well as angry responses against the sexist act; and then two final things happen:

1) Scholars jump on the bandwagon and point it out again and again as evidence of the sexist state of games.
2) The rest of the world forgets about it.

This blog post is an attempt to follow Suzanne de Castell and Jennifer Jenson’s recent keynote at DiGRA 2011. In their keynote they mentioned the Dead Island issue along with a number of other scandals, but they argued that scholars have simply been pointing out sexism for the past 20 years without going further. The typical “feminist” response to the Dead Island scandal has been a reiteration of the same actions that have been taken for the past 20 years. Instead, Jenson and de Castell call for action. They call for people to get to the source, and change it.

The source here is not the game; the source is not the developer; the source is not the people who vociferously protest. Rather, the source is the system. The Dead Island scandal (“Feminist Whore”) is not the problem; it is simply the crack that has accidentally rendered visible the giant realm of sexism (where sexism sits alongside racism, homophobia, nationalism, and many other forms of unacceptable alterity in which the Other is rejected deeply and quietly within the system). The true scandal is that “Gender Wars” and 15% extra damage against men go unquestioned, and that people react in ways that miss the culture that allows the sexist event to happen.

It follows that the solution is not an apology, nor is it only to scream that games or the industry are sexist. There is no easy solution to a pervasive (systemic) element of a culture other than to work against it by changing general attitudes, by pushing women toward STEM jobs and game jobs, by supporting alternate types of games, and by changing culture. The best way I can think of doing this is teaching against it, so that is what I will do. However, I will not use this to teach about how certain video games are sexist, nor how there is sexism within the industry. These are both true, obvious, and useless to simply point out. Rather, I will use this to teach how the real issues go unremarked upon even when they are rendered visible: how they are systemic.

Games, Skopos, and (Functional) Limitations

As I’ve been recovering from qualifying in June, I’ve been playing a few games that I’ve been meaning to play (Assassin’s Creed 2 — the translation/localization issues are ever present in interesting ways; Avadon — a fan translation of an indie RPG like this might be very interesting) and reading some things that I’ve been meaning to read (Wendy Chun’s Control and Freedom, Jeremy Munday’s Introducing Translation Studies, and various others). Here I’d like to think through a few ideas I had while reading Munday’s intro textbook. It’s quite good in its breadth and inclusivity, so there have been a few areas where it ties in for me, particularly the idea of skopos.

Skopos theory, attributed to Hans J. Vermeer and Katharina Reiss, focuses particularly “on the purpose of the translation, which determines the translation methods and strategies that are to be employed in order to produce a functionally adequate result” (Munday 79). Its advantage, according to Munday, “is that it allows the possibility of the same text being translated in different ways according to the purpose of the TT [target text] and the commission which is given to the translator” (Munday 80). This is interesting for two reasons. First, it implies different translational possibilities, and the likelihood of those possibilities being realized, which also suggests that these different versions might coexist or be equally obtainable. Second, it is the commission that determines the direction the TT must go. A popular novel must be translated according to the publishing industry’s whims; a government or legal document according to a different ‘commission’ that is more ethically or politically oriented; a game to a different orientation still.

As Munday notes, various theorists (particularly Christiane Nord) critique skopos theory on grounds including the lack of a single purpose or meaning in certain texts. On the surface, multiple skopoi go some way toward solving this difficulty (different versions do different things, if not all at once), but the problem still exists in terms of publication and visualization. How do you convince the money behind publication to issue multiple versions (here I largely refer to literature and other popular forms that are translated, like movies and games) according to different skopoi, or aims, when the publisher/commissioner’s aim is profit? And how do you visualize these different forms? The latter I’ve discussed elsewhere; the former is a very thorny question.

For now I wish to briefly look at how skopos theory has been taken up by Carmen Mangiron and Minako O’Hagan in their work on game localization. According to Mangiron and O’Hagan, “[t]he skopos of game localization is to produce a target version that keeps the ‘look and feel’ of the original… the feeling of the original ‘gameplay experience’ needs to be preserved in the localized version so that all players share the same enjoyment regardless of their language of choice.” The authors identify a single skopos for game localization and ignore the commission element. They identify the look, feel and experience as legitimate elements to be translated, but ignore the contextual elements that cause these particular elements to be the focus. Could there not be a different skopos for different games, depending on whether the commissioner is a publisher or whether the game is ‘abandonware’ with a rabid fanbase?

If games have distributed authorship (Huber) and fans help author them (Jenkins), then why do the publisher’s wishes get privileged in the skopos commission? The ‘source’ considered by skopos theory is not simply the publisher’s wishes, but a range of things seemingly unconsidered by standard localization discourse.

References:

  • Huber, William. Soft Authorship. Dissertation.
  • Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006.
  • Mangiron, Carmen, and Minako O’Hagan. “Game Localization: Unleashing Imagination with ‘Restricted’ Translation.” The Journal of Specialised Translation, no. 6 (2006): 10-21.
  • Munday, Jeremy. Introducing Translation Studies: Theories and Applications. 2nd ed. Milton Park, Abingdon, Oxon; New York, NY: Routledge, 2008.


Culture, Genre and Localization

I’ve been thinking about the boundaries of translation and localization lately.

Translation is a specific linguistic alteration, but when it is paired with localization in a way that extends beyond linguistic alterations, it re-grasps some of its pre-modern sensibilities. Here I refer to the difference between modern traduction and premodern translation (Berman 1988): the latter is the vague movement of a text, the former the modern authorial/translatorial alteration of a text from one language to another.

Localization (and particularly video game localization) includes graphical and ludic alterations, and even censorship and culturalization alterations (Chandler 2005; Edwards 2006). This change hearkens back to translatio and the pre-modern sensibilities that at one point included commentary on texts (in the margins, or critical exegeses) and adaptations (Dryden 2004 [1680]). It also leads toward generic manipulations.

Essentially, what I’ve been thinking about is that the localization of a game from Japanese to English involves (practically) the alteration of coded assets to make the game salable within a new linguistic, socio-political and geographical context. Localization helps games move to a new ‘culture.’ However, generic alterations, modding, and reskinning also help games move to new cultures; in this case the cultures are subcultures, or non-national contexts.

The example I am working with is Glu Games’ Gun Bros, which was generically modified into Men vs. Machines. Gun Bros (released on October 28, 2010) and Men vs. Machines (released on April 14, 2011) are almost identical: the assets used for Gun Bros were modified, and Men vs. Machines is the result. Glu Games altered the game from a top-down twin-thumbs shooter (syntactic genre) with a machismo/sexist aesthetic (semantic genre) to a top-down twin-thumbs shooter (same syntactic genre) with a steampunk aesthetic (different semantic genre). Furthermore, Glu Games released a third iteration, Star Blitz, on May 26, 2011. The third game again alters the semantic genre, this time to a science-fiction space shooter with ships instead of people.
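A minimal sketch in Python of the reskinning logic at issue (the asset names and engine entry point are my own illustration, not Glu’s actual build system): the engine and the play rules stay constant while the semantic ‘skin’ is a replaceable bundle.

```python
# Hypothetical asset bundles; one per semantic genre.
SKINS = {
    "gun_bros":        {"hero": "brawny_bro.png", "enemy": "alien.png", "music": "machismo.ogg"},
    "men_vs_machines": {"hero": "gentleman.png",  "enemy": "robot.png", "music": "steampunk.ogg"},
    "star_blitz":      {"hero": "starship.png",   "enemy": "drone.png", "music": "space.ogg"},
}

def run_top_down_shooter(assets):
    """Stand-in for the shared engine: identical twin-thumbs mechanics for every skin."""
    print(f"Running the same shooter with assets: {assets}")

def launch(skin_name):
    # Swapping the bundle is the whole 'generic alteration.'
    run_top_down_shooter(SKINS[skin_name])
```

Under this description, moving from Gun Bros to Men vs. Machines is a pure module swap, which is exactly the claim about localization made below.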

[Paired screenshots, Gun Bros on the left and Men vs. Machines on the right: title screen, welcome back, shopping for new items, world select, gameplay, and results of play.]

The above shots show that the two games are almost identical; only the semantic skin has changed.

Generic alteration alone is not particularly strange, as such alterations occur quite often. It is essentially what makes a syntactic genre (the alteration of semantic elements to fit different groups of people). First person shooters set in westerns, space, comic books, modern war, old war, and so on are all examples of this standard generic manipulation. What is interesting is that the alterations were done by a single company, and that the way the games play is identical. Usually the requirement of reusing an engine is that it must be changed somewhat, or perhaps that is simply what happens to make it a meaningful game to the gaming populace. Ironically, outrage toward Men vs. Machines is widespread on the Glu forums, from players angered that the time they spent in Gun Bros is for naught now that the company will switch to updating Men vs. Machines.

What is interesting is that the limited alterations elevate the core gameplay to an essential level. Because the only things that change are the assets that relocate the engine within a new, steampunk subculture, the process of altering Gun Bros into Men vs. Machines and then Star Blitz is the same process as localizing a game.

Where I am now in my thinking can be summed up with the following thoughts: according to LISA (the Localization Industry Standards Association), localization is the process that renders a game appropriate for a new cultural context, but that makes adaptation and reskinning the exact same process and a form of localization. If the above is true, then what is the culture to which localization companies seek to render games accessible? Are such cultures as malleable and perfunctorily determined as generic alterations? Are socio-political realities less important than target market desires (which have themselves been created by the marketers)?

Is the culture to which localization seeks to render games accessible not itself created by the process of localization?

References:

  • Berman, Antoine. “From Translation to Traduction.” Unpublished translation by Richard Sieburth. TTR: Traduction, Terminologie, Rédaction 1, no. 1 (1988): 23-40.
  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.
  • Dryden, John. “From the Preface to Ovid’s Epistles.” In The Translation Studies Reader, edited by Lawrence Venuti. New York: Routledge, 2004 [1680].
  • Edwards, Kate. “Fun vs. Offensive: Balancing the ‘Cultural Edge’ of Content for Global Games.” In Game Developers Conference: What’s Next. San Jose, CA, 2006.

Ways of Studying Games from a Communication Perspective (Qualifying Paper #2)

The field now known as game studies gradually formed around individuals writing about digital video games. These individuals, some writing since the 1990s, came together with the creation of journals (Game Studies in 2001; Games and Culture in 2006), edited volumes (Cassell and Jenkins 1998; Wolf and Perron 2003; Wardrip-Fruin and Harrigan 2004), and organizations with conferences (DiGRA in 2003; FDG in 2009). To create a history, the field pulled certain writings on digital software from the 1990s, a host of Internet and computer culture works from the 1970s to 1990s, select portions of the study of play and leisure, and the history of material video games (1950s-present) and board games (~3000BC-present). As a formation consisting of individuals from different disciplinary backgrounds, game studies is interdisciplinary. This paper questions the extent of that interdisciplinarity by looking at how the current moment of game studies is still tied to disciplinary subfields.

Introduction


In an introductory article to their edited issue of the journal Games and Culture, Thomas Malaby and Timothy Burke (2009) argue that game studies is at a crossroads: it has been interdisciplinary, but it is also on the cusp of cementation and sedimentation into a discipline. The field is full of voices from different disciplinary backgrounds, but the conversations move forward due to a mutual understanding of the game artifact. This mutual understanding across different backgrounds is the main reason the field claims interdisciplinarity. And yet, Malaby and Burke write both that certain groups within the field have already sought to lay down disciplinary borders and fences, and that the creation of stable disciplines is historically normal and professionally safe. Other fields have sought (or fought over) the disciplinary route (Communication among them), and younger academics tend to seek some sort of disciplinary home for career safety. In contrast to this drive toward disciplinarity, Malaby and Burke highlight the maintained interdisciplinarity of the authors within their edited journal issue. However, their excitement about the various methods they consider legitimate for the study of virtual worlds misses the very disciplinary outlook of the other subfields within game studies, and the lack of discussion between the subfields. Where they see interdisciplinarity, I see enclaves of disciplinarity.

As I see them, these enclaves of disciplinarity exist as topics of research: ontological studies of play through philosophy, where play is linked to biologically universal theories of development, and universality through studies of games as code; art, rhetoric, persuasion and the question of what games are and what they do through art history, design, and critical studies of communication; media effects such as violence and addiction through psychology, cognitive science and traditional communications; gaming cultures, virtual worlds and the collapse of real and virtual boundaries through anthropology, economics, and sociology; and political and cultural issues as translated between real and virtual worlds through cultural studies and critical studies of communication. Each topical enclave is dominated by disciplinary methods; each tends to signal the importance of a particular moment in the life of games over the other moments; and each has a different focus. The different foci form the subsections of this paper: what games are, what games do, what games do to players, what we do in games, and how games and the world interact.

In this paper I will outline game studies as it is now, how these enclaves stem from certain disciplinary origins, and how there are built-in universalities that I argue are a problem. However, I also write of how the field might be heading in certain new directions away from reductive universalities. These new directions imply a resurgence of particularity, location and interdisciplinarity, all of which lead to a better understanding of how and why games matter to people and the world.


Ontology – What Games Are

The subfield that makes categorical definitions of play and games is one of the most active and important. Four seminal theorists are Johan Huizinga, a Dutch cultural historian; Roger Caillois, a philosophically and literarily oriented sociologist; Brian Sutton-Smith, who studies play in a general sense from a more interdisciplinary perspective; and Jesper Juul, who works specifically on digital games. The four span from the early twentieth century to the early twenty-first century, and they show the general movement of the theory over the duration of the formal study of games.

Writing Homo Ludens in the late 1930s as the world once more approached war, Johan Huizinga attempted to see just “how far culture itself bears the character of play” (Huizinga 1955, ix). Not simply innocent or educative, play is a key element of culture that changes depending on the situation, but is always tied to competition and even war. While Huizinga’s link between play and the ‘natural’ progression of civilization is problematic, his overall claim about the rules of play is often a starting point for later conceptualizations of both play and games. According to Huizinga:

play is a voluntary activity or occupation executed within certain fixed limits of time and place, according to rules freely accepted but absolutely binding, having its aim in itself and accompanied by a feeling of tension, joy and the consciousness that it is “different” from “ordinary life.” (Huizinga 1955, 28)

Game studies takes two important points from Huizinga’s definition: play as free, and the magic circle.

The first point is that play is free, though this is not in opposition to payment and work. Seriousness, which Huizinga links in a limited sense to ‘work,’ “seeks to exclude play, whereas play can very well include seriousness” (Huizinga 1955, 45). For Huizinga, “Play is a thing by itself. The play-concept as such is of a higher order than is seriousness” (Huizinga 1955, 45). The second concept taken up by later scholars is the ‘magic circle,’ the arena in which play takes place. The actions of play take place within the circle, and outside it play cannot exist. It is Roger Caillois who moves the discussion from play to games, and then makes a hard distinction between games and work. Caillois and later theorists harden the line of Huizinga’s magic circle, denoting an inside and an outside, and through further discussions around the topic we arrive at the current generation of scholars’ arguments about playbor (Dyer-Witheford and De Peuter 2009; Dibbell 2006), xreality (Coleman 2011), and the porousness of MMORPGs[1] (Taylor 2006; Boellstorff 2008). These questions of what play is, what purpose it serves, how it relates to work, and whether all of these are contingent on a particular world system (be it the neoliberal capitalist model or otherwise) are of increasing importance within a certain subset of game studies.

In the second seminal text of game studies, Man, Play and Games (2001 [1958]), Roger Caillois criticizes Huizinga for focusing particularly on the idea of play as having value, which led Huizinga to miss alternate forms of play and games. In contrast, Caillois creates a classification of games that, while problematic in how it subsumes certain generic differences under a “fundamental kinship” (Caillois 2001, 13), certainly opens up the discussion to a greater number of types of games. Caillois initially outlines four rubrics, classes or categories (he uses all three terms at points, which indicates a certain fluidity in the classificatory system) of play: agon (competition), alea (chance), mimicry (simulation) and ilinx (vertigo). These four classes are then subject to a sliding ratio between paidia (free play) and ludus (codified games). While Caillois’ structuralist classificatory scheme is an interesting expansion of Huizinga’s initial interest in play, Caillois’ location of mimicry and ilinx with primitive civilizations, agon and alea with advanced civilizations, and certain categories as essential for the development of culture is problematically teleological in the same way Huizinga was decades earlier. He also notes, in a partially essentialist mode, that certain categories have been key to the development of certain cultures. Caillois defines play with six qualities: free (voluntary), separate (circumscribed within certain space-time limits, akin to the magic circle), uncertain (the conclusion is unknown), unproductive (creating neither goods nor wealth), governed by rules (breaking with real-life rules), and make-believe (accompanied by an awareness that it is not real life) (Caillois 2001, 10). Key for later game studies scholars are both Caillois’ particular classification scheme and his general definition of play.

A prolific writer on the concept of play, Brian Sutton-Smith is often invoked as a third seminal author in a genealogy of game ontology. In his book The Ambiguity of Play, he understands play as an ambiguous, diverse act whose meaning depends on the rhetoric being used in the discursive context. In the core of the book, which combines decades of the author’s previous studies, Sutton-Smith elaborates that play is discussed in seven types of rhetoric: progress, fate, power, identity, imaginary, self, and frivolity. Some of these categories overlap with Caillois’ (power is agon, fate is alea, identity and imaginary contain parts of mimicry, and self is in part linked with ilinx), but progress and frivolity are distinct in their interaction with work and use. In contrast to Huizinga and Caillois, Sutton-Smith does not relegate play to a lack of purpose. Rather, the ambiguity of its purpose is itself a purpose. Sutton-Smith invokes Gregory Bateson’s oft-quoted observation about play, dogs and nipping when he notes early on that “Animals at play bite each other playfully knowing that the playful nip connotes a bite, but not what a bite connotes.” On this account, the meaning is clear to the dogs: the playful act is not the painful act. However, Sutton-Smith points toward the ambiguity of play by invoking performance studies scholar Richard Schechner’s remark that “a playful nip is not only not a bite, it is also not not a bite” (Sutton-Smith 1997, 1). Despite its playfulness, the nip cannot be reduced to simply ‘not a bite’; the act of nipping in play carries ambiguous meaning. Sutton-Smith concludes his meta-analysis of the various overlapping and interconnected rhetorics of play by writing that “variability is the key to play, and that structurally, play is characterized by quirkiness, redundancy, and flexibility” (Sutton-Smith 1997, 229). The variability and quirkiness of play are linked to the biological adaptability of human evolution: “play’s variability acts as feedback reinforcement of organismic adaptive variability in the real world” (Sutton-Smith 1997, 230). Unlike in the video game industry, where play and games are understood as relating solely to entertainment and fun, and the job is to code a fun escape from life’s problems (Koster 2005), Sutton-Smith links play to the world, people and biology. Play is not simply fun; play is not ‘not work;’ play is something with purpose.[2]

As the most recent theorist of play and games, Jesper Juul is also the first generally quoted theorist to come from a purely digital game perspective, with a doctorate in video game studies. In contrast to the above theorists, Juul attempts to understand what a video game is, not simply what a game is, nor the nature of play. Despite the different goal, Juul’s contribution similarly begins with a definition of what is and is not a game. He takes the definitions of seven previous theorists (including Huizinga, Caillois, and Sutton-Smith) and distills them into what he considers their core features. This meta-definition is his “classic game model,” which includes six features: rules; variable, quantifiable outcome; valorization of outcome; player effort; player attachment to outcome; and negotiable consequences:

A game is a rule-based system with a variable and quantifiable outcome, where different outcomes are assigned different values, the player exerts effort in order to influence the outcome, the player feels emotionally attached to the outcome, and the consequences of the activity are negotiable. (Juul 2005, 36)

Juul’s definition clearly indicates how ‘play’ has been translated fully into ‘games’ within much of game studies. Noble war, free-form play, and ring-a-ring o’ roses are all excluded from his categorization of games because they fail to include certain of the six key features. Even chance-based gambling, an obvious form of Caillois’ alea, is relegated to a borderline game for Juul. And while Juul acknowledges the problems of this very tight ‘classic game model,’ in that it ignores certain games and types of play, he does not acknowledge that his definition leaves no room for the world and context. Juul does not account for Salen and Zimmerman’s (2003) three-part rubric of game, design and context, where context is culture, or Raessens and Goldstein’s (2005) separation that similarly includes context/culture. In Juul’s final analysis “the rest of the world” has almost entirely been removed as an “optional” element (Juul 2005, 41). That Juul pushes context, culture and the world out of his essential ontology of games is unfortunate, but it is quite common in much of the work in game studies; there is a visible distinction between work in the field that seeks universalities and essential perspectives on the one hand, and work that looks at particular locations and iterations on the other. It is my contention that we need to pay more attention to the latter despite the dominance of the former over the past decade of research.

The main problem with such ontologies of play is that they seek to render play universal across cultures and locations. While Huizinga has a vaguely located understanding of play, in that he is discussing a Western evolutionary theory, Caillois makes a much more structuralist argument that borders on universality across cultural particulars. This difference between located particularity and general universality is reproduced in later theorists, but the ontologies tend toward the side of universality for greater emphasis. There are two anchors of play to universality: the first between play and biology, and the second between digital games and software code.

The first anchor of play to universality is biological. Sutton-Smith’s ambiguity is a built-in element that should allow a more particular understanding of any type of play, but his conclusion links play to biological evolution and a naturalized universality. Sutton-Smith’s rhetoric of play as progress is absolutely essential to many subfields of game studies. The rhetoric of play as progress links play to learning how to do things that will be ‘not play’ later: dolls become babies, play becomes housekeeping, practice fighting becomes real fighting, play becomes work, et cetera. Many media effects studies, which I will discuss below, depend on an understanding of play as biologically universal. The active media approach to media effects holds that technological mediation necessarily affects all (passive) players in the same way. The location of play, the particulars of the players, and the details of the game are all subsumed under a reductive understanding that people play, and are affected by, games in the same way.[3] Play, understood through this rhetoric, is universal; we all develop, we all learn to play in the same progressive way. Therefore there must be a universal meaning of games.

The second anchor to universality runs through the mutable nature of games as coded software.[4] For example, Juul’s idea of rules and code is linked to an understanding of software and the machine as universal. The machine is ordered. This concept comes from two of Lev Manovich’s five principles of new media: modularity and variability.[5] Modularity holds that new media texts, such as games, are comprised of a host of smaller elements. These elements are combined into the larger text, but exist as themselves. This can easily be seen in the way software is coded: not as a continuous file, but as a file that calls smaller files: subroutines, functions, procedures, or scripts (Manovich 2001, 30-1). Individual modules can easily be replaced, which brings up interesting possibilities regarding versions, originals and derivatives. Video game localizations serve as an interesting example, as the national linguistic assets of a game are modular and can be swapped out for another set of national linguistic assets. The program itself, the game’s ‘essence,’ does not change, so the game is considered the same thing.
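A minimal sketch in Python of this modularity as it appears in localization (the file layout and function names are my own illustration, not any particular engine’s format): the executable calls whichever linguistic module matches the target locale, and nothing in the program itself changes.

```python
import json
from pathlib import Path

# Hypothetical layout: assets/strings.en.json, assets/strings.ja.json, ...
# The engine and mechanics (the 'kernel') are identical across versions.

def load_strings(locale_code, asset_dir="assets"):
    """Swap in the national linguistic module for this run."""
    path = Path(asset_dir) / f"strings.{locale_code}.json"
    return json.loads(path.read_text(encoding="utf-8"))

def start_game(locale_code):
    strings = load_strings(locale_code)  # the only module that varies
    print(strings["title"])              # same program, new language
```

On this logic the Japanese and English versions are ‘the same game’ because only a module, never the kernel, is replaced.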

The principle of modularity links up with Manovich’s fourth principle, variability, to create what I understand as a mutable universality through digital manipulation. Manovich lists numerous examples of variability, but scalability, which he calls the most basic, is also the most understandable (Manovich 2001, 37-9). A digital image can be seen in all of its pixelated glory at full resolution, or reduced to lesser resolutions all the way down to a miniature desktop icon. This is enabled by the scalability of the digital image, which comes from the necessity that new media be variable. Again, in terms of localization, variability appears in the use of the modularity of digital video games to enable variant versions of the application. Modules are replaced to create the idea of a mutable video game. However, key to the difference between old and new media is not that this variability happens, but that “there exists some kernel, some structure, some prototype that remains unchanged throughout the interaction” that can be considered the essence of the new media text (Manovich 2001, 40). While everything else may be manipulated at no loss, the kernel is considered the essential core of the text/game, and is understood as universal.
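A minimal sketch in Python of scalability as variability (using the Pillow imaging library, an assumption on my part; Manovich names no implementation): many variants are derived while the source file, the ‘kernel,’ remains unchanged.

```python
from PIL import Image  # Pillow, assumed available

def make_variants(source_path, widths=(1024, 512, 64)):
    """Derive full-size, preview, and icon variants from one unchanged kernel."""
    kernel = Image.open(source_path)  # the essential core; never modified
    for w in widths:
        h = round(kernel.height * w / kernel.width)
        kernel.resize((w, h)).save(f"{source_path}.{w}px.png")
```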

In contrast to these overarching, universal conceptualizations of play and games, there exist recent studies that delve into the spatially located and contextually specific nature of play and games. Three of these are Mary Flanagan’s work on critical play, Thomas Malaby’s study of gambling in Greece and his contention that games must not be reduced to play, and Alex Galloway’s concept of social realism in games. Each of the following studies problematizes a universal understanding of play, and the ontologies discussed above, but they are not a unified front. Malaby does not follow Galloway’s focus on play and action; Flanagan focuses on play rather than on the video games of Galloway’s work; and Flanagan works on critical design interventions into games, in contrast to Malaby’s anthropological studies of how games change and are changed by groups of people.

As a game designer and theorist, Mary Flanagan looks at located practices of play. As Flanagan notes, “while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture” (Flanagan 2007, 3). Games are played in particular spatial and cultural contexts, and this cannot be divorced from analysis (or from an ontology of play and games). Her work then moves toward how to design games that deal with meaningful social issues (games for particular places; games that deal with the issues of particular places). In her book Critical Play: Radical Game Design, Flanagan argues that game design practice must start and end with values and goals. These values must come from social contexts, and the act of playing must support the values the designer sets out to interact with. Flanagan looks at a very wide spectrum of cultural ‘games,’ which she defines incredibly broadly as “situations with guidelines and procedures” (Flanagan 2009, 7). This allows her to consider artwork, playing house, board games, alternate-reality games, and finally computer-based games. It is important to note that Flanagan, like Huizinga, Caillois, and Sutton-Smith, but unlike Juul and many of the other people I will discuss later, does not limit her study to digital games, despite that being the focus of parts of the game studies field at present. Additionally, Flanagan’s tight focus running from design to situated play leaves little question of the importance of context, but it also means that games, for her, do not necessarily travel between contexts. Translation involves redesign, which leads to a new game.

In Gaming: Essays on Algorithmic Culture, Alex Galloway defines video games as essentially related to action. “If photographs are images, and films are moving images, then video games are actions. Let this be word one for video game theory” (Galloway 2006, 2). Video games run through the actions of a computer, and must be actively played by a user. These two sides can be doubled into diegetic and nondiegetic (acts within and acts outside the game world) to form a four-quadrant theory: ‘diegetic machine acts’ such as running background noise at certain points, ‘nondiegetic operator acts’ like configuring options and settings, ‘diegetic operator acts’ of playing the game itself, and ‘nondiegetic machine acts’ such as game-over or loading screens. So far, this bears much similarity to the universal ontological definitions above, but at the heart of Galloway’s book is a discussion of social realism in games. In that chapter he moves from an argument about visual representation toward including embodied, ludic action. Opposed to the impossible task of ‘realisticness,’ which is the task of mimetically representing the world visually, Galloway understands realism as a form of social critique linked to both form and context.[6] Galloway suggests “there must be… some type of fidelity of context that transliterates itself from the social reality of the gamer, through one’s thumbs, into the game environment and back again. This is what I call the ‘congruence requirement,’ and it is necessary for achieving realism in gaming” (Galloway 2006, 78). His point is brought home when he compares two FPS (first-person shooter) games of great ‘realisticness’ whose ‘realism’ is relative: America’s Army, created in conjunction with the United States Army, and Special Force, created by the Hezbollah Central Internet Bureau. Galloway argues:

video games absolutely cannot be excised from the social contexts in which they are played. To put it bluntly, a typical American youth playing Special Force is most likely not experiencing realism, whereas realism is indeed possible for a young Palestinian gamer playing Special Force in the occupied territories. (Galloway 2006, 84)

In games, realism is inextricable from action in context. In terms of ontology, no meaningful theory of games can be taken away from this contextual action and experience. Games never simply ‘are’; games are always in context.

Thomas Malaby argues that games should be understood not in relation to play or rules, but as contingent cultural practices in the process of becoming. He finds the link of games to play unhelpful, because play is always assumed to be its own activity. In contrast, Malaby’s dissertation research on gambling in a small Greek town leads him to argue that games and life are not separable: games like poker inform real-life actions like politics. In a short essay following this research on gambling, Malaby defines a game as a “semibounded and socially legitimate domain of contrived contingency that generates interpretable outcomes” (Malaby 2007, 96). In opposition to Huizinga’s magic circle and Caillois’ typifications, Malaby argues that games are always integrated with life (there is no magic circle), but in contextual ways (the type of game is related to context, not to an essence of the game itself). Additionally, in opposition to Juul’s focus on set rules, which Malaby calls a “misplaced formalism” (Malaby 2007, 103), he argues that “games are grounded in (and constituted by) human practice and are therefore always in the process of becoming” (Malaby 2007, 103). Their rules are never set, as they depend on context. One major problem with Malaby’s argument is that he analyzes analog games (gambling) in contrast to Juul’s digital games. While I do not wish to argue that the two are essentially different or essentially similar, I do wish to point out that Juul’s rule sets within digital games must be programmed, and are therefore set (formal, universal), even if they are eternally mutable through patching (still universal through a non-particular quality). While there are local rules for analog games (house rules for gambling), there are no local rule sets for digital games (there are no house rules for StarCraft). Despite this discrepancy between the formal qualities of analog and digital games, Malaby interjects into game studies discourse a much-needed focus on both particularity and process.


Art, Rhetoric and Persuasion – What Games Are to What Games Could Do

A second subfield looks at what games are aesthetically. The next section of this paper traces the discussion of games and/as art, and its current trajectory into questions of what games do rhetorically.

In the introduction to their edited volume, Art and Videogames, Andy Clarke and Grethe Mitchell (2007) connect art and games in a series of stages. The first stage involves utilizing, or repurposing, game iconography as art. One example is the street artist Invader, who makes tile mosaics in the shape of characters from the classic 1970s arcade game Space Invaders. He places these mosaics around the world and documents them as ‘invasions.’ In major cities within a world of increasing migration, these invasions serve to propagate alternate ideas of citizenship and ‘alien’ movement. However, iconography does not need to be visual. There are many examples of music groups that play covers of video game music, and even performances of Uematsu Nobuo’s Final Fantasy music in concert halls could be considered iconographic art.

The second stage involves game art that utilizes the technology of the game system. These are usually internal or external game modifications.[7] Three examples are Brody Condon’s Adam Killer (1999), Anne-Marie Schleiner’s Velvet-Strike (2002), and Cory Arcangel’s Super Mario Clouds (2002). With his Half Life game modification, Adam Killer, Condon plays with the idea of representation, killing and the FPS by putting the player in a room with innumerable, passive, identical characters where the only thing to do is kill them. Schleiner’s Velvet-Strike is a spray paint modification for the popular FPS game Counter-Strike that allows the player to tag walls with user-generated, anti-war and anti-violence graffiti instead of running around killing other players. Like Adam Killer, Velvet-Strike plays with the FPS genre’s ‘essential’ shooting element. For Super Mario Clouds, Cory Arcangel removed all of the code from the game Super Mario Bros other than the clouds in the background drifting by to the left, so that when played the game of running, jumping, collecting coins and rescuing a princess does not exist. All that remains is the peaceful experience of watching 8bit clouds drift by.

A third type repurposes gameplay as art. This can be seen in machinima that create movies from game play footage, speedruns as perfect playthroughs, and performances within game worlds. Machinima (from Machine Animation) use video from game play, or video using game engines, in order to make movies. Machinima range from the original United Ranger Films’ “Diary of a Camper” (1996), which uses the Quake engine to create the most basic of narratives, to Jake Hughes’ “Anachronox: the Movie” (2002), which combined the game Anachronox’s (2001) various cutscenes to create a full-length movie, to the long-running comedy series Red vs. Blue that uses various Halo engines,[8] to fan-made music videos such as Oxhorn’s “ROFLMAO!” (which uses the World of Warcraft engine to adapt the skit “Mahna Mahna” from The Muppet Show).[9] Speedruns attempt to link the playing of games to art, so that Andrew Gardikis’ five-minute speedrun of Super Mario Bros. is elevated to a feat of artisanal or athletic skill.[10] Finally, there are performances within game environments such as Joseph DeLappe’s dead-in-iraq (2006) or his The Salt Satyagraha Online (2008). For dead-in-iraq, which is still in progress and will continue as long as the United States military is still in Iraq, DeLappe logs into America’s Army, finds a quiet corner, and types out the newly released names of the war dead. He does not take part in the simulated violence. Instead, he types and continues to type until he invariably gets booted from the server for not ‘playing’ the game. He then logs onto a different server and continues until he finishes the list. For his reenactment of Gandhi’s “Salt March” in Second Life, DeLappe rigged up a treadmill to operate as an input device for the game; when DeLappe walked on the treadmill, his character, MGandhi Chakrabarti, moved in the game. Over twenty-six days in 2008, DeLappe walked 240 miles on the treadmill, thereby covering the same distance Gandhi walked during his protest of the British salt tax in 1930. In both instances, the game world is the site of an artistic performance, but the game itself is only minimally used.

While the first three types are mainly alterations of games done primarily by artists and coders, the fourth type moves toward games as art on their own grounds, produced mainly by or with game designers. Art games follow the rules of games, but distance themselves from ‘normal’ games through self-identification as art. Examples include: Eddo Stern’s Waco Resurrection (2004), which plays with embodiment by putting players into the role of David Koresh at the 1993 Branch Davidian standoff in Waco, Texas; Jason Rohrer’s Passage (2007), which uses simple, 8bit graphics and sounds to play with memory, identity and life as the player moves forward with life, gets married or doesn’t, scores points, or doesn’t, but always dies in the end; and Tale of Tales’ The Path (2009), which plays with gender and the Little Red Riding Hood story to create an experience of introspection instead of action. All three games utilize the game form to enact art; the player’s experience is a central part of the art.

Unfortunately, one of the problems with Clarke and Mitchell’s declaration of ‘art games’ is that it necessitates a category of games that are not art, and this brings the conversation back to what makes certain games ‘art’ and what makes other games ‘not art.’ Is it a matter of high and low culture, or good and bad art? Is it a matter of subjective judgment or professional training? Roger Ebert’s (2010) gatekeeping of modern technology’s interaction with art is one of many places that show this sort of reductive yes/no is not particularly productive. Instead, we might say that the questions “what is art?” and “are video games art?” are themselves bad questions. Art has shifted in cultural status, material form, and actual purpose in different movements, cultures and time periods. Instead, Ian Bogost asks what art and video games do, and what connecting movements can be seen in games. Bogost understands video games not as “art,” an amorphously defined battleground of different movements (Bogost 2009), but as a “new domain for persuasion… that uses procedural rhetoric, the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures” (Bogost 2007, ix). “Videogames service representational goals akin to literature, art, and film” (Bogost 2007, 45), but the means of attaining this representational goal cannot be the same as other media. Art for Bogost is tied to various movements, and linked to the gallery and museum; in contrast, video games are tied to their own movements and linked to the arcade and the home. While video games might do similar things as art, they cannot be equated, just compared.

Through Kenneth Burke, Bogost argues that rhetoric is essentially linked to persuasion and meaning: “Wherever there is persuasion… there is rhetoric. And wherever there is ‘meaning,’ there is ‘persuasion’” (Bogost 2007, 21). The particular form this rhetoric takes is procedural, which is to say in and through the game code (Bogost 2007, 14). “Procedural rhetoric is a technique for making arguments with computational systems and for unpacking computational arguments others have created” (Bogost 2007, 3). Bogost initially uses Molleindustria’s The McDonald’s Videogame (2006) as an example of procedural rhetoric at work. As a simulation of a fast food company, the player controls four areas: farmland, slaughterhouse, restaurant, and corporate headquarters. In order to win, the player must understand how to make money, and the only way to succeed is to cut corners on environmental and health-related issues: cutting down rainforests, feeding the cows contaminated beef, bribing politicians, and of course using various forms of advertising and propaganda. The game persuades the player, through gameplay, that the only way for this type of business to succeed is by making unethical choices, and the player succeeds by playing in such an unethical manner. The game’s meaning, then, is that fast food companies are bad. However, Bogost is clear that The Grocery Game, a website for saving money through coupons and stockpiling, uses procedural rhetoric equally well. The code finds the best deals and tells the user which coupons to use and which items to stockpile, and by following this logic the user saves money (Bogost 2007, 37-9).

Part of Bogost’s desire to understand how procedural rhetoric works is to make better games himself. His Cow Clicker (2010) game for Facebook is an attempt to mix irony and annoyance at the current breed of click games that place a façade of content over simplistic, repetitive, meaningless, but seemingly rewarded clicking. Cow Clicker allows the player to click a cow once every six hours, spend ‘mooney’ (the in-game currency, which can be bought with real money) to buy different cows, and compete for numbers of clicks with friends and strangers. Ironically, the game was a hit despite its attempt at sarcasm. Bogost’s Guru Meditation (2009) iPhone game (it also has a version on the Atari VCS using the Joyboard) similarly uses procedural rhetoric to argue about the meaning of action. The goal of the game is to keep the iPhone as steady as possible so that the built-in accelerometer registers as little movement as possible. Only by keeping the mobile phone immobile can the player succeed in the game. To play the game the player cannot do the ordinary, required activities of life such as walking to work, taking the bus, or moving in any way: the act of playing the game opposes being active within culture.

The other part of Bogost’s goal is to understand how others make games, which is to say, how to analyze the meaning of a game. This is particularly important in terms of cultural and political discourse, but also in its relationship to art.

September 12th: A Toy World (2003) is a game by Gonzalo Frasca that makes claims about America’s strategies in its never-ending “war on terror.” The player sees an isometric ‘Middle Eastern’ village with many ‘citizens’ and a few ‘terrorists.’ The player has a targeting reticule that is aimed through mouse movement. Upon clicking the mouse, a missile is launched; it reaches its destination after a few moments. The missile may hit the target, may kill the ‘terrorist,’ may destroy nearby buildings, and/or may kill ‘civilians.’ In any case, the result is that nearby ‘civilians’ gather around the destruction, mourn, and become ‘terrorists’ themselves. As with the 1983 film WarGames, where an AI learns that in nuclear war “the only winning move is not to play,” September 12th uses procedural rhetoric to persuade the player that, like tic-tac-toe and thermonuclear war, the only way to win the war on terror is not to play. Of note is that this rhetoric is not simply in games that wear their ideological stripes in their titles; it is also in games that hide their meaning.

Brenda Brathwaite has recently switched from making digital games professionally to creating a series of award-winning board games called “Mechanics is the Message.” The game in the series that has garnered the most attention is called Train. In it, the player tries to get people to a terminus. There is rolling of dice, blocking of other players’ trains, and the usual board game fun. However, when the first player gets his or her first train to the terminus, its name is revealed to be Auschwitz. At this point various details of the game come into a new light, but the game itself does not stop. Rather, the player is asked to get more passengers, to continue the ‘game.’ Whether the game actually continues or not is up to the players themselves, just as continuing or not was a choice in many practices within twentieth-century Europe, including the extermination camps. The mechanics of a game persuade the player of a message, and in the case of Train, the persuasion has been incredibly powerful. Many people do not want to continue and are appalled by their previous excitement; some people try to ‘save’ the passengers through creative play (Brophy-Warren 2009). In a Brainy Gamer podcast appearance on the topic of games and art, art historian John Sharp indicates that Train is art because its mechanics allow a complex emotional experience (Abbot 2009). Whereas Sharp understands post-Renaissance art and culture of the past 500 years to be dominated by the visual, he sees games as a key element of the current change in world culture. The so-called “Ludic Age” is dominated by systems and action, and Sharp sees games as a way to interact with these systems. The way that a user enters the aesthetic experience of understanding the system at work is through playing the game, but when a game is in a museum as art it is unplayable and untouchable. Thus, games are contextually at odds with our current view of art.[11] To Sharp, it is only through playing games that we attain something that is similar, or could be similar, to art. Unfortunately, the ability to see games as things of artistic expression is hindered by the discursive understanding of games as fun, or a waste of time; by people refusing the possibility of games as art; and by the places in which people play games: entertainment rooms, lounges and bedrooms instead of museums, galleries, or even places of worship.

If we put Sharp and Bogost together we might claim that certain games are now able to approach the claim of being ‘art’ because they successfully mount claims and are able to persuade their players of these claims. While procedural rhetoric is how games work, not all games are effectively persuasive. Similarly, not all ‘art’ is ‘good art.’ However, persuasion through rhetoric is something games can do to provide a parallel modality to the expressive rhetoric of art.[12] Quake is not particularly artistic or rhetorical, despite its historical importance as the first polygon-based FPS game and the first engine to encourage machinima. In contrast, The McDonald’s Videogame, Train, and September 12th all mount arguments and could be claimed to be ludically aesthetic. One might further claim that America’s Army is just as rhetorically successful, if not more successful, due to its overarching integration into American culture. Such a claim is problematic because of the question of where art, aesthetics and persuasion end and propaganda begins, but this is a key area of research in game studies due to the long relationship between games and commercial industry.[13] I will return to what we do with games in the closing section of this paper, but now I turn to questions of what games do to us via the subfield of video game effects.

 

Media Effects – What Games Do to Players

Another large subfield of game studies is media effects research designed to understand the way that video games affect players. This subfield should be meaningful and important, as it seeks to understand the physical, mental and social ways that video games act on players, but it is problematic due to methodological issues and the baggage it carries from past incarnations involving previous media. I include the subfield while pointing out the numerous problems within it, some of which are ignored by the practitioners. I will focus on two research areas, violence and addiction.[14]

While the question of effects is broad, and in certain ways goes back to Plato’s fears that the written word would destroy people’s ability to remember (Phaedrus), it has taken 20th century form in the fears of how film and television affect their viewers, and 21st century form in similar fears regarding video games.[15] Examples of 20th century fears include the Payne Fund studies around 1930 that sought to link movie viewing and youth delinquency, and studies in the 1960s and 1970s that tried to link a rise of documented violence in America to television viewing. Screen and violence studies have continued into the present, particularly in relationship to children. A recent example compared kindergartener aggression levels after watching the television shows Mighty Morphin Power Rangers (a martial arts action show about ‘good’ and ‘evil’) and Barney (about a singing, purple dinosaur) (Singer and Singer 2005). Such studies claim that watching violent television makes children violent. However, the extent and duration of the effect remain unproven. In part because of decades of an inability to prove a causal relationship between screen viewing and violence, and in part because of the belief that ‘actively’ playing games might be different from ‘passively’ watching television, media effects research has moved into the realm of video game research.

The problem with video game effects research is the problem with traditional media effects research. It starts from the assumption that all people are equally affected through playing. It ignores the context of play, and any particulars of the players. Despite claims that players, being active in their playing, are more inclined to be affected by video games, video game “effects” research still works from an ‘active media’ perspective. Active media research assumes a passive user, who is affected by the medium in a universal way. Research methods tend to be quantitative, and are designed to find a statistical correlation between playing games and an effect. While there is some ‘active user’ research being conducted to understand the practices of interacting with media from an active perspective, such studies are in part meant as countermeasures to the effects research area itself.

In part because they have no voice to say one way or another, in part because of modern beliefs in what it means to be a child (Buckingham 2000), and in part because moral panic is easily mobilized in order to protect them, children are a primary population studied in relation to media effects. Are children becoming violent by playing games (Anderson)? Are they being indoctrinated into the military through playing propagandistic first person shooters, or murder simulators (per the disbarred lawyer Jack Thompson)? Are they becoming addicted to games (Chou and Ting 2003)? Are they becoming obese through playing games (Stettler et al. 2004)? There is even the rare question of positive effects, such as whether children gain better visuospatial cognition [the ability to understand spaces through vision] through playing first person games (Spence and Feng 2010). A second common subject population is the military in the context of training, especially for understanding how youth might be indoctrinated into the military through certain games such as America’s Army in the United States.[16] However, because children are biologically developing (as framed through the very culturally based concept of childhood), and moral panic is easily mobilized around them, the focus tends to be on children instead of the military, or on the moment when children enter the military. The correlation between effects research and children is particularly visible in the studies done on games and violence.

Using a General Affective Aggression Model [GAM], Craig Anderson and Karen Dill study the link between playing games, entering a generally more aroused state, and being primed for aggressive actions (Anderson and Dill 2000). In an initial questionnaire they found a correlation between violent video games and aggressive personality, and between playing violent video games and having a delinquent personality. A laboratory study further correlated playing violent video games with a desire to harm somebody else (represented by holding down a buzzer longer). Finally, in his meta-analysis a year later with Brad Bushman, Anderson argues:

results clearly support the hypothesis that exposure to violent video games poses a public-health threat to children and youths, including college-age individuals. Exposure is positively associated with heightened levels of aggression in young adults and children, in experimental and nonexperimental designs, and in males and females… In brief, every theoretical prediction derived from prior research and from GAM was supported by the meta-analysis of currently available research on violent video games. (Anderson and Bushman 2001)

Anderson is the main voice on the active media side of video games research that tries to show a causal relationship between playing video games and violence, which, as the above research and quotation clearly indicate, is treated as a foregone conclusion.

Anderson’s work is extensive and forcefully written, but has been shown by numerous researchers to be biased, and to stem from faulty research. Opposing studies have shown that longer exposure to violent games correlates to reduced aggression (Sherry 2001, 425), and that the correlation between violence and video games disappears when gender is controlled for (Ferguson 2008; Gentile et al. 2004). Additionally, Anderson and Dill’s study has been attacked because it ignores three out of four violence indicators, none of which points to heightened aggression (Ferguson 2008, 6). In general, the various studies have been criticized because the authors have played up an incredibly weak correlation between violence and games, calling it a causal relationship. Finally, it is telling that Anderson, Douglas Gentile, and Katherine Buckley announce that, “the scientific debate about whether exposure to media violence causes increases in aggressive behavior is over… and should have been over 30 years ago” (Anderson et al. 2007, 4), but they do so while referencing none of the oppositional studies. While one side of the active media research ignores alternate studies in order to produce a damning correlation (that they claim is a cause and effect relationship) between playing violent video games and violence, the other side points out the problems of the research, and the weak correlation, or even the lack of correlation, between games and violence (Egenfeldt-Nielson et al. 2008; Ferguson 2010; Gauntlett 2005).

Media effects research is tied to what Christopher Ferguson calls the “Moral Panic Wheel” (Ferguson 2008).[17] The wheel turns as follows: (1) Most of the impetus for effects research begins with general societal beliefs that may be informed by cultural, religious, political, scientific or activist elements. (2) These general societal beliefs lead to media reports on potential adverse effects. (3) The possibility of violence turns into a likelihood or certainty of violence, which is implied in the broadcasts of the mass media. (4) This dissemination of false information results in a call for research in order to support the original beliefs. (5) The research promotes fear, and is uncritically supported by the media that called for it in the first place. (6) Politicians then promote the panic and fear in order to promote their own political careers, which loops back around to more media reports on potential fears. Through this cycle parental and mass media reactionary responses get coupled with advocacy groups and easily funded scientific research, resulting in a majority of scientific publications indicating some sort of correlation. Meanwhile, because of the publication bias toward reporting positive effects, null effects go unpublished and therefore uncited (Ferguson 2010, 6-7).

In the recently released and relatively well-publicized, general-audience nonfiction book Grand Theft Childhood, Lawrence Kutner and Cheryl Olson argue that there is a correlation between violence and playing games, but that it is weakly established at present, and far from the sole cause of violence. The authors argue that it would be far more productive to locate the other reasons for violent actions outside of media, and that it is not incidents of extreme violence that have increased (they have decreased), but bullying. However, what is most interesting about Kutner and Olson’s book is its popular orientation. Their book is published through a popular press, and targeted not at academic or scientific audiences, but at the same audience that is otherwise subject to the wheel of media panic. The book’s second chapter is a cultural history of media panics over the past century in the United States. By showing the link between the current panic and previous ones, they highlight the parallels between what was being hidden and focused upon then, and what is being hidden and focused upon now. Their book thus works to counter the efforts of game naysayers, media voices, and politicians in ways that the academic oppositional studies and meta-analyses have so far failed to do. Essentially, Kutner and Olson work within Ferguson/Gauntlett’s Moral Panic Wheel to slow it down.

While violence has been the most visible topic of video game effects research, it is not the only one. A second controversial effect is addiction. The questions of whether games are addictive, whether this addiction (if it is indeed an addiction) needs to be regulated, and for whom it needs to be regulated are all elements of this branch of effects research.

Like the research on violence, addiction research stems (at least in part) from media panics: the suicide of Shawn Woolley while playing Everquest in 2002; the 2004 suicide of Xiao Yi, whose accompanying note discusses his addiction and desire to be reunited with other players (Guttridge 2005); the 28-year-old South Korean gamer who collapsed after a 50-hour session in 2005 (BBC News 2005); and the 2010 starvation of a South Korean couple’s real-life baby, supposedly caused by the parents’ game addiction (Tran 2010). All of these incidents have been highly represented in the mass media (Reverend_Danger 2009), and following Ferguson and Gauntlett’s concept of moral panic, studies have begun to research addiction, but are as yet inconclusive. While the American Psychiatric Association (APA) has discussed the possibility of listing video game addiction as an official addiction, it has not yet done so for a number of reasons (ScienceDaily 2007; Hartney 2011). One of the main conundrums around the addiction issue involves the difference in bodily reaction when becoming addicted to games as opposed to drinking or gambling. While the societal reaction toward ‘game addiction’ is similar to that of ‘drug addiction’ (withdrawal from society and an inability to act ‘normally’), the physical manifestation of addiction is quite different in terms of dependence, withdrawal and relapse. For the APA these are serious differences, and part of the reason that game addiction is not considered an official addiction.

A second difference involves the relationship between games, addiction and design practices. If gamers are becoming addicted to playing, this is in part because of particular game design that draws the players in and encourages them to keep playing. If this is the case, then it is not games that are addictive, but particular design practices. In a 2003 study on game addiction, Ting-Jui Chou and Chih-Chen Ting argue that ‘flow experience’ might be a causal link between playing as habit and playing as addiction. The concept of “flow” comes from Mihaly Csikszentmihalyi’s (1990) study on optimal experience, or the rare, intense, but happy and pleasurable moments where the body and mind are completely consumed in something. Chou and Ting note that Becker and Murphy’s economic theory of ‘rational’ addiction, as a type of logical habit, is at odds with the psychological or sociological theory that understands addiction as abnormal excess. However, it is within playing as habit that Csikszentmihalyi’s (1990) flow state is more likely to occur for players, thus pushing them over into the “quasi-lunatic” type of addiction (Chou and Ting 2003, 674). While the habit style of rational addiction is good for business, an enjoyable experience, and good game design, too much of it might push players over into the more dangerous area of addiction.

Despite this close link between flow and addiction, many games are designed with a type of flow experience in mind. The most obvious examples (and the usual scapegoats) are MMOs like World of Warcraft, which require the player to grind for hours in order to increase his or her level or to gain higher levels of gear. However, clearer and more self-aware examples are Jenova Chen’s games flOw and Flower, which follow through from his MFA thesis on “Flow in Games” (2006). Chen sought to adapt Csikszentmihalyi’s flow experience into a workable theory of game design. Chen designs to allow the user to play in the ‘zone of flow,’ which exists between excessive challenge and boredom depending on personal skill. The user is never at a loss for what to do, and never in a state that would cause him or her to stop playing from vexation. Flow in these games is about designing for user enjoyment, not long-lasting play through repeated quests.

The other confusing element involving addiction and MMOs is why players continue playing. Do people play MMOs due to a simple, unhealthy addiction, incredibly good game design, or because interaction with an alternate, online culture is different and/or better than the offline culture for one reason or another? Unlike the biologically oriented effects research, certain studies on addiction seek to understand the societal reasons that people turn to games.

Economist and theorist of virtual worlds Edward Castronova has used the economic “utility function” to understand the playing of MMOs, and to explain what he foresees as a mass migration from the real to the virtual world (Castronova 2007, 65-70). According to economics, the rational action is the one that produces the most value; people are rational; people do the rational thing that leads to the most value. Thus, MMOs must have some sort of value that is causing people to rationally play to the point of addiction. While Castronova agrees that there is value that these people seek in the game (freedom, fun, and an interesting experience), part of their decision to flee to the virtual worlds resides in the problems of the real world. These problems are lack of equality in employment, access, outcomes, wealth, et cetera. Addiction, then, is a silly concept, as of course people are addicted to something that is fun and good. For Castronova, the answer is simply in changing the real world to take advantage of the utopian, virtual world benefits. He predicts that the exodus to virtual worlds, which stems from problems in the real world, will eventually reverberate back as people change the real world. Whether this will happen through making the real world game-like, or through changing the bad parts of the real world, is as yet unknown.

Flowing directly from the studies of addiction and MMOs are more general studies of virtual worlds. The following section is about gaming cultures studied primarily from an anthropological perspective.

 

Gaming Cultures – What We Do In Games

One of the first discussions of gaming cultures is Julian Dibbell’s 1993 Village Voice article, “A Rape in Cyberspace,” which is about how rape can exist in a virtual form, how a digital place can be social, and how emotions and affect work between real and virtual environments. Dibbell’s article describes a series of events in the online multi-user domain LambdaMOO, which began with a virtual ‘rape,’ followed by a general meeting and user-encouraged ‘execution’ (a ‘toading’ that corresponded to the erasure of a character), continued with a ‘resurrection’ (a new character with similar actions), and ended with a character eternally ‘sleeping’ (offline, and away from game). The two key points of “A Rape in Cyberspace” are the emotional reaction to the rape, and the social formation surrounding the decision to toad the rapist. The emotional reaction to the virtual rape is important because it indicated the emotional porousness of the border between virtual and actual. Despite its virtuality, the rape was real. It had real effects on both the virtual avatars and on their actual players. The second key point was the social formation that mimicked real culture. MUDs were of course social situations, but the virtual execution of the rapist took place not by a developer’s authoritarian decision, but through the deep deliberation of a newly created social group. This type of cultural formation, deliberation, and action is surprising precisely because it is exactly what happens in the actual world: what happens in the real world happens in virtual worlds. The magic circle is no longer a hard line between play and life (if it ever was), but a porous boundary between alternate places. These two points have been reiterated in much of the later work on gaming cultures: the things that happen in games matter; there is no hard line between the online, virtual game and offline, actual life; and cultures are developed between the two worlds.

A later, and much-quoted, study in and of a modern MMORPG is T.L. Taylor’s Play Between Worlds (2006).[18] In her ethnographic research on the then-dominant MMORPG Everquest, she makes strong conclusions aligned with Dibbell’s statements of a decade earlier, against the hard boundaries of Huizinga’s magic circle, and for the sociality of games. Opposed to the ‘common sense’ belief that video games are antisocial, solitary endeavors and that video game players are loners, Taylor’s study crosses between the virtual and real world, showing how communities were built between the two worlds. Unlike the belief that the game was a place apart, an encircled, magic world of play, she writes of many real things that happened online; and conversely, of virtual things that happened in the game that had an effect on the real lives of the players. Taylor concludes, “to imagine we can segregate these things—game and nongame, social and game, on- and offline, virtual and real—not only misunderstands our relationship with technology, but our relationship with culture… [Her] call then is for nondichotomous models” (Taylor 2006, 153). By nondichotomous she means models that do not push one theory or another, real or virtual, game or not game, play or work, as these either/or framings falsely simplify the way we as humans interact. While Taylor does not directly reference Sutton-Smith’s ideas of the ambiguity of play here, she is in ways extending this ambiguity to the entire register of game interactions.

Taylor’s study and conclusions support many of the other research agendas, from the argument that games affect gamers (discussed above), to the methodological expansion of the ethnographic method to alternate sites (Marcus 1998), to the fact that these synthetic worlds matter socially and economically (Castronova 2005; Dibbell 2006). Within game studies, Taylor’s study is indicative of a massive widening of research to a new area of game cultures and gamer cultures: studies of the social cultures of gamers in and out of the games. These have included studies of clans in MMOs (Pearce and Artemesia 2009; Nardi and Harris 2006), children and learning (Nardi et al. 2007), and even case modification culture (Simon 2007).

A similar, almost derivative work is Tom Boellstorff’s Coming of Age in Second Life (2008). Boellstorff’s ethnography of the MMO world Second Life is a detailed sketch of the culture. He reiterates some of the claims Dibbell made 15 years earlier involving the creation of social groups, and he follows Taylor by pointing toward alternate ethnographic field sites. However, the key problems with Coming of Age in Second Life are its complete disregard of any element of the MMO as related to games, and its treatment of Second Life as a self-contained culture. Regarding Second Life as a world (or not even a world but, as its title claims, a ‘life’), Boellstorff pushes away from the play, or gameness, of the application. Second Life is a place where life happens (Boellstorff 2008, 91). This claim is not wrong, as that is how players use the application. However, it is problematic as the application is fighting for market share with World of Warcraft, Everquest, Ultima Online and all of the other MMO games. While Second Life is not a game, it is still fighting against other games as an alternative, and as such should be considered as a comparative artifact of industry and culture. However, because of his focus on the self-contained culture, Boellstorff ignores the industrial, cultural and real-world particulars of how Second Life functions. Essentially, Boellstorff creates a magic circle of cultural containment around Second Life players. There are those who play it, and they are in one culture even if their culture goes between their two worlds of real and virtual.

Mia Consalvo’s work on cheating (2007) is a second way to study gaming culture. Consalvo focuses on the very particular, and yet very different, ways that one may cheat within games. Unlike Boellstorff’s overly focused online study, Consalvo looks at the multiple places within the register of production and consumption where one might cheat. She spans from programming easter eggs to cheat the producer or distributor, and the creation of strategy guides by the industry or by the community, to cheating within gameplay through codes or hacking, and even to the effort to police cheating in order to create a fair system. As somebody who understands the range of ways digital gaming happens, Consalvo explores the culture of game cheating in a much less reductive way than Boellstorff’s study of Second Life culture. Her study spans from top-down industry, to resistant fan practices, to mainstream player practices that are allowed or illegal depending on the game. However, there is a familiar problem in her work: she fails to relate or explore the culture of cheating within different national and regional cultures. Consalvo presents cheating as universal.

Consalvo’s main focus of study (Final Fantasy XI) exists in Japanese, United States and European locales, but she ignores the differences beyond brief mention. For Consalvo, too, gaming culture exists apart from real world cultural matrices, which are somehow assumed to be equally self-contained. In contrast, I would argue that located (often national) cultural understandings of cheating are essential to understand the notion of cheating in games. The status of hacking, FAQ writing, and gold farming (all forms of cheating discussed by Consalvo) changes depending on the perspective of the subjects involved. Those who buy gold do so because they do not have enough time and consider the game to be a matter of fun and play value. In contrast, those who oppose the culture of farming and buying gold often consider it cheating as it ruins their way of playing. While these are conclusions that Consalvo derives, she fails to consider the plethora of freemium games—free to play, but with micro-payments for add-ons—that are currently thriving. Originally prevalent with MMOs in Korea and China, the freemium model opposes the monthly payment styles dominant in the West that support Consalvo’s conclusions about ‘fairness’ and ‘cheating.’ The freemium model is almost identical to gold farming, except it is the developers and producers who are ‘ruining’ the game by ‘cheating.’ Is gold farming as cheating somehow integrated into national or regional understandings of cheating? Even though she deals with gold farming, Consalvo fails to look at the different models, or situated understandings of cheating. She sees a universal gaming culture with a corresponding understanding of cheating, instead of understandings integrated into regional and national cultures.

While studies of game cultures and gamer cultures break with earlier separations of real and virtual, they also assume an essentialized alternate culture that somehow remains unlinked to cultures that players might otherwise be a part of (such as national, racial, or religious cultures). While these studies have facilitated the break from the logic of games as separate, pure fun, they have reinforced the ontological universality of games. In the next section I discuss the way that a different subfield of game studies deals with cultural and political issues between the real world and games. In a way, these last two subfields overlap, and it is only the researchers’ agendas and methodologies that determine the placement in one or the other. If walls and fences separate the other disciplinary enclaves, these two are separated by train tracks. One is a bit more run-down, a bit dirtier, and a bit more tangled; however, my alliance is with the dirtier enclave in the next section, which more critically goes back and forth between real and virtual worlds studying their tangled interactions.

 

Political and Cultural Issues – How Games and the World Interact

The final area that I will discuss here involves the intersection of (supposedly) contained games with (supposedly) external variables, issues and problems in the world. As with the anthropological studies of game cultures, these studies cross Huizinga’s magic circle, but they work toward how games problematically reproduce, reiterate and facilitate real world issues. In this final subfield of game studies are issues of empire, economics, race, gender, and my own focus, translation. While this subfield holds promise for future study from a Communication perspective, the studies as they stand are relatively limited at present. This looseness exists in part because different authors do not necessarily draw from each other, or from the core of game studies. Rather, they often draw from other interdisciplinary origins and discussions. However, if conversations continue to happen, this subfield could turn into a very powerful and productive area in contrast to the bickering of other areas where either/or arguments tend to dominate.

In their 2003 book Digital Play, authors Stephen Kline, Nick Dyer-Witheford, and Greig De Peuter provide a more “multidimensional approach” to studying games that mixes media studies, political economy, and cultural studies perspectives. From media studies the authors draw on Harold Innis’ analysis of the printing press and technology’s relationship to bias, empire and knowledge, and Marshall McLuhan’s extension of Innis’ theories toward electronic media. From political economy, the authors create a genealogy from Karl Marx’s critique of capitalist accumulation, through the Frankfurt School’s critique of mass culture, to Herbert Schiller’s link of capitalism and communication technologies to American Empire. However, they focus on Nicholas Garnham’s “circuit of capital” where both selling and advertising feed off of each other (Kline et al. 2003, 39-49). Using cultural studies approaches, the authors problematize the reductive, top-down understanding of the political economic approach, which does not consider agency or the user. This they pull from Stuart Hall’s call for both encoding and decoding of messages, and reception studies’ focus on the viewer/user. However, they also critique cultural studies for how it ignores embodied play, underplays the commercial structure of the industry, and misses the fact that audiences do not simply exist, but rather, are constructed in the commercial marketplace (Kline et al. 2003, 45-6). Finally, the authors take inspiration from Raymond Williams’ complex understanding of television as a continuously shaped and shaping technology that was far from inevitable or predetermined (as media determinists would hold). Through Williams, and more directly Garnham’s “circuit of capital,” the authors combine the three disparate approaches with their Three Circuit model of interactivity (Kline et al. 2003, 58). The three approaches form separate circuits of technology, marketing, and culture respectively. The center where they overlap is the realm of games, and as the circuits are constantly working, the approach integrates a non-universal, non-cemented view of games that incorporates time and change. While their approach has much promise, primarily due to its complexity, it has been little used. In part this is due to the very complexity that they espouse, but in part it is due to their avoidance of the seminal figures of game studies and the digital/coded aspect. While they discuss designers and programmers, they do so from a historical approach that avoids the more technical aspects of computer games as code. The unfortunate result is that they do not speak to the more technical areas of game studies, which are dominant at present, and those technical areas do not pull from Kline et al.’s complex, interdisciplinary approach. This is further visible in that Dyer-Witheford and De Peuter have co-written a second book, Games of Empire (2009), which goes similarly unmentioned in most of the other literature in game studies. While cited in bibliographies, the two books are little discussed in other studies, and their overarching method is so far not often followed. This lack of crossover has little to do with the usefulness of their work and everything to do with the disciplinary and methodological biases of the individual people in the field.

There are two places where Kline et al.’s interdisciplinary methodological intervention can be seen to be bearing fruit so far. The first is Aphra Kerr’s (2006) recent textbook-like overview of games. In The Business and Culture of Digital Games: Gamework/Gameplay, Kerr argues “that digital games are socially constructed artefacts that emerge from a complex process of negotiation between various human and non-human actors within the context of a particular historic formation” (Kerr 2006, 4). Therefore, it is necessary to study the span of contexts, actors and artifacts in all of their manifestations: these moments include studying the game as a text to be played, the industry as an object, the global networks of production that make games, and players in their contexts and how they particularize play, which includes counter-gaming strategies. Unfortunately, her study is a broad overview aimed at introductory students in game studies, and it does not follow any particular game through these different moments. As a result, the interdisciplinary methodology remains an ideal or goal.

A second connection is Matthew Payne’s yet-to-be-finished dissertation work, where he follows Kline et al.’s circuits of capital methodology to study FPS games.[19] Payne argues that FPS games represent a new form of Ludic Capitalism.[20] The connection between world and game, Capitalism and commodity, is key to critical communication studies of games and is one of the key benefits of this area of research. Payne’s extension from these larger connections to a particular genre is a second important point. By focusing on a genre, Payne is (likely) able to get closer to making particular arguments that studies of games in general (such as those of Kline et al. and Kerr) do not approach. Unfortunately, other than Kerr and Payne few studies so far take up the challenge of moving between the various fields within game studies. As it stands, despite its claims toward interdisciplinarity, game studies is still focused on separate areas, or to continue with Kline et al.’s terminology, separate circuits.

Connected to both Kline et al.’s discussion of culture and circuits, and Kerr’s analysis of business economics, is a subset of research on game economies. Most striking are Edward Castronova’s early study of Everquest’s GDP, Julian Dibbell’s effort to make a living buying and selling in Ultima Online, and Ge Jin’s documentary work on Chinese gold farmers.

The first of these three examples is Castronova’s study of the economy of Norrath, the world in which the MMORPG Everquest takes place. In his study, Castronova calculates that “the gross national product of Norrath [is] about $135 million… [making Norrath] the 77th richest country in the world, roughly equal to Russia” (Castronova 2001, 33). While the per capita income is above the poverty line, “inequality is significant” both between levels (as designed) and within equal-level characters (signaling poverty where there ought to be equality) (Castronova 2001, 33). Castronova’s study was groundbreaking as it showed a system of value within Everquest’s diegetic world (and by extension in any MMO), but it was also groundbreaking for showing that the economies of game worlds are tied to national economies. There is no magic circle to separate work and play, real and virtual lives; rather, they are completely tangled together.

Two years after Castronova’s study, Julian Dibbell published an article (2003) about the sale of real estate in the MMORPG Ultima Online, and a year after that Dibbell conducted a yearlong attempt to make the buying and selling of goods in Ultima Online his primary source of income, which he documented in a blog and later compiled in a book. In Play Money (2006) he documents the banal process of the various ways he learned to buy and sell in Ultima Online, from farming and selling, to buying and selling items, to simply buying and selling gold. Dibbell’s analysis is interesting in that he proves he can make a reasonable living. However, it also shows dystopian sides: he does not make as much as he did as a writer (itself not the highest-paid job), and he learns of the exploitation at “gold farms” in Tijuana, Mexico (Dibbell 2006, 9-29; 88-134). While Dibbell works independently as a buyer and seller, the Tijuana gold farm (which the author hears about, but does not see) is a place of unskilled, exploited labor. Gold farms are a place of incredible importance for the back and forth between world and game; the issues of exploited labor, gold farming, and racism have become more important as such gold farms have become more common in places with available and exploitable labor.

A third influential work is Ge Jin’s documentary work on Chinese gold farms (2006, 2007). He has studied the living, working and discursive conditions of gold farmers in and around MMOs. While some MMO players are against gold farmers who make in-game currency in order to sell it for real world currency, other MMO players claim gold farmers are simply supplying a demand. Regardless of the official status of their act,[21] gold farmers’ conditions bring up critical issues regarding work and play in rich and poor environments, and the visible material connections between virtual and actual worlds. Castronova treats in-game economic disparity as an afterthought at the very end of his early work (2001), but five years later it has become a crucial area of study. While the economics of online worlds is acknowledged, it remains a highly contentious area of game culture: Who is right and who is wrong? How can one play properly? Is a gold farmer’s play work? These and many more problematic areas are touched on through the topic of game economies. However, key to my point is that real world economics is not simply being reproduced in the game. Rather, the game affects the real world and the real world affects the game, and to study one it is necessary to study the other.

An area that is heavily discussed, but little advanced, is racial and gendered representation within games, which can produce further racist and sexist societal attitudes.[22] Typically, characters in games have been white and male. This reproduces certain industry structures, as the International Game Developers Association reports its demographics to be 83% white and 88% male (IGDA 2005). However, the character bias (white and male) has also been ‘justified’ by a (mistaken) belief that game players are white and male. Certain studies indicate the male/female ratio of players is 60% male and 40% female (Hewitt 2008), but this ratio has been reported to be as divergent as 93% female for certain genres and modes of play (Juul 2009, 80). While the 93% is specifically for casual, downloaded games, the generally accepted 60% male to 40% female ratio is quite different from the mistaken belief that games are solely played by teen boys. Racial and national player base percentages are unclear, but studies note that game expenditure is quite high for African Americans (Nielson 2010), and the Japanese, Chinese and Korean game industries are all large. Because of the relatively multinational and multiracial player bases, the unbalanced ratios of represented characters are incredibly problematic.

There have been numerous studies on the representational bias, sexism or racism in games (Everett 2005; Jenkins 2006; Ludica 2004; Nakamura 2002, 2009; Richard and Zaremba 2005). Later studies indicate that along with an increased awareness of the issue of representation there have been more neutral, and less debasing, inclusions of race (Higgin 2009) and gender (Alexander 2010; Kim 2010) in games in recent years as opposed to previous decades. Representation in games has gotten visibly ‘better.’ However, it is far from ‘good,’ and there are still untouched areas that bring out what Ludica in their mission statement call the “re-active” as opposed to “pro-active” way these changes have become manifest.[23] Race and gender have been re-actively attacked because they have been rendered visible in more mainstream discourse (Cassell and Jenkins 1998), but national and alternative groups are still unrepresented and unrepresentable in mainstream game discourse. It is necessary to pro-actively approach the topic of representation as a general study.

James Paul Gee’s work on learning and literacy is an interesting alternative, pro-active theoretical approach. Gee notes that games often simply reinforce typical cultural models (Gee 2007, 145-6). These include the racial and gendered ones I discussed above, but range to concepts such as good and evil, that working hard makes everything possible, and that ‘aliens’ are necessarily evil and must be exterminated. He also argues that games can challenge cultural models (for good or ill). One such game, Under Ash, challenges the standard FPS model of shooting Arabs/aliens by putting the player into the role of a Palestinian of the intifada fighting Israel (Gee 2007, 155-60). A more childish, but equally interesting game is Sonic Adventure 2 Battle, which allows the user to play as both Sonic the Hedgehog, a good character, and Shadow the Hedgehog, a bad character; as Sonic, the player saves the world, and as Shadow, the player seeks to destroy the world. On one side, Gee’s six-year-old informant learns the ideological concept of ‘good,’ and on the other the child learns to fight for the group despite its ‘evil’ nature. Both are models that can be learned.

There is a paucity of games that allow the user to experience and learn alternate cultural models. In part this is an industrial/economic issue: the familiar sells; genres sell; companies design what the audience wants. But in part it can be seen as a conservative cultural trend. Oppositional models, which come in many forms, are unacceptable in mainstream culture. That kids could learn useful things from violent video games, the simulation of both sides of an American military conflict, and the existence of games about ‘defending’ the border between the United States and Mexico are all oppositional cultural models.[24] However, these alternate models are key to learning for both children and adults. It is unfortunate that they are so rare. To extend Ludica’s comment regarding pro-active and re-active practices beyond gender, pro-active game design practices are the active inclusion of alternate models. The problem with re-active studies of representations (be they of gender, race, sexuality, or nation) is that they are not productive avenues. However, representation is not the only place where models disappear.

One of the places where cultural models tend to disappear is in the practice of localization, which renders foreign games culturally palatable through a thoroughly domesticating[25] form of translation. The game industry understands good games as those that sell well and are entertaining. As such, the practice of translating a game includes making it easy to consume, and this involves changing alternate cultural elements. Cultural models that are risqué, or assumed to be difficult to understand, are altered or deleted (Chandler 2005; Mangiron 2006; O’Hagan and Mangiron 2004). These changes are justified through the new media principles of modularity and variability discussed above (Manovich 2001). An example is the Japanese Ryu ga gotoku 3 (2009), which was localized in the United States as Yakuza 3 (2010) with numerous Japan-specific elements removed, including hostess bars and pachinko parlors (Ashcraft 2010; DJ Fob Fresh 2010). An alternate, but similarly domesticating, strategy of localization would have been to change hostess bars to strip clubs, and pachinko parlors to slot machines. However, this would still result in altered cultural models. A way to allow (or force) the user to engage with the alternate cultural models would be to leave the Japanese game assets in place. This could be considered a foreignizing, non-localizing strategy of translation. This alternate strategy would also lead to the player confronting real world cultural models that he or she might otherwise not see. This interaction involves learning, but it is also the way one may ethically witness and confront a foreign culture.

Real-world culture is inseparable from games, and these studies approach the topic of games in this very tangled way. Unfortunately, this subfield is quite limited at present: it is the ‘wrong side of the tracks,’ but its problems and disorders are much more promising than the ordered existence of the gaming cultures subfield.

 

Conclusion

In this paper I have discussed the various subfields currently active within the field of game studies. The subfields give the field at large an interdisciplinary appearance, but they tend to have disciplinary underpinnings themselves. As the outline of a class taught in a department of Communication, this paper is intended to highlight the interdisciplinary field, the disciplinary subfields, and the foci of those subfields; it is not just the topic, or the area of the game, that changes between subfields, but the means of approaching that topic.

 

Bibliography:

Abbott, Michael, Brenda Brathwaite, and John Sharp. Brainy Gamer Podcast 26, November 9, 2009.

Alexander, Leigh. “Bayonetta: Empowering or Exploitative?” GamePro, January 6, 2010.

Anderson, Craig Alan, and Brad J. Bushman. “Effects of Violent Video Games on Aggressive Behavior, Aggressive Cognition, Aggressive Affect, Physiological Arousal, and Prosocial Behavior: A Meta-Analytic Review of the Scientific Literature.” Psychological Science 12, no. 5 (2001): 353-59.

Anderson, Craig Alan, and Karen E. Dill. “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and in Life.” Journal of Personality and Social Psychology 78, no. 4 (2000): 772-90.

Anderson, Craig Alan, Douglas A. Gentile, and Katherine E. Buckley. Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy. Oxford; New York: Oxford University Press, 2007.

Ashcraft, Brian. “Sega, You Are Once Again Making a Giant Mistake.” Kotaku, February 24, 2010.

Bartle, Richard. “Hearts, Clubs, Diamonds, Spades: Players Who Suit Muds.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. Cambridge: MIT Press, 2006 [1996].

BBC News. “S Korean dies after games session.” BBC News. Posted: August 10, 2005. Accessed: April 7, 2011. http://news.bbc.co.uk/2/hi/technology/4137782.stm

Boellstorff, Tom. Coming of Age in Second Life: An Anthropologist Explores the Virtually Human. Princeton: Princeton University Press, 2008.

Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge: MIT Press, 2007.

———. “Persuasive Games: The Proceduralist Style.” Gamasutra, January 21, 2009.

Boomen, Marianne van den. Digital Material: Tracing New Media in Everyday Life and Technology. Amsterdam: Amsterdam University Press, 2009.

Brophy-Warren, Jamin. “Speakeasy: The Board Game No One Wants to Play More Than Once.” The Wall Street Journal. Posted: June 24, 2009. Accessed: April 2, 2011. http://blogs.wsj.com/speakeasy/2009/06/24/can-you-make-a-board-game-about-the-holocaust-meet-train/

Brown, Harry J. Videogames and Education. History, Humanities, and New Technology. Armonk, N.Y.: M.E. Sharpe, 2008.

Bryce, Jo, and Jason Rutter. “Gendered Gaming in Gendered Space.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge: MIT Press, 2005.

Buckingham, David. After the Death of Childhood: Growing up in the Age of Electronic Media. Cambridge, UK ; Malden, MA: Polity Press, 2000.

Caillois, Roger, and Meyer Barash. Man, Play, and Games. Urbana: University of Illinois Press, 2001.

Cassell, Justine, and Henry Jenkins. From Barbie to Mortal Kombat: Gender and Computer Games. Cambridge, Mass.: MIT Press, 1998.

Castronova, Edward. “Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier.” CESifo Working Paper Series, no. 618 (2001).

———. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press, 2005.

———. Exodus to the Virtual World: How Online Fun Is Changing Reality. New York: Palgrave Macmillan, 2007.

Chan, Dean. “Negotiating Intra-Asian Games Networks: On Cultural Proximity, East Asian Games Design, and Chinese Farmers.” The Fibreculture Journal 8 (2006): 1-14.

Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.

Chen, Jenova. “Flow in Games.” University of Southern California, 2006.

———. “Flow in Games (and Everything Else).” Communications of the ACM 50, no. 4 (2007): 31-34.

Chou, Ting-Jui, and Chih-Chen Ting. “The Role of Flow Experience in Cyber-Game Addiction.” CyberPsychology & Behavior 6, no. 6 (2003): 663-75.

Clarke, Andy, and Grethe Mitchell. Videogames and Art. Bristol, UK; Chicago: Intellect, 2007.

Coleman, Beth. Hello Avatar. Cambridge: MIT Press, 2011.

Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge, Mass.: MIT Press, 2007.

Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. 1st ed. New York: Harper & Row, 1990.

Dibbell, Julian. “A Rape in Cyberspace.” The Village Voice, December 23, 1993.

———. “The Unreal Estate Boom.” Wired 11, no. 1 (2003).

———. Play Money: Or, How I Quit My Day Job and Made Millions Trading Virtual Loot. New York: Basic Books, 2006.

Dyer-Witheford, Nick, and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009.

Ebert, Roger. “Video Games Can Never Be Art.” Chicago Sun-Times. April 16, 2010. http://blogs.suntimes.com/ebert/2010/04/video_games_can_never_be_art.html

Egenfeldt-Nielsen, Simon, Jonas Heide Smith, and Susana Pajares Tosca. Understanding Video Games: The Essential Introduction. New York: Routledge, 2008.

Elias, Norbert, and Eric Dunning. “Leisure in the Spare-Time Spectrum.” In Quest for Excitement: Sport and Leisure in the Civilizing Process. Oxford; New York: Basil Blackwell, 1986.

Everett, Anna. “Serious Play: Playing with Race in Contemporary Gaming Culture.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge, Mass.: MIT Press, 2005.

Ferguson, Christopher J. “The School Shooting/Violent Video Game Link: Causal Relationship or Moral Panic?” Journal of Investigative Psychology and Offender Profiling 5 (2008): 25-37.

———. “Blazing Angels or Resident Evil? Can Violent Video Games Be a Force for Good?” Review of General Psychology 14, no. 2 (2010): 68-81.

Flanagan, Mary. “Locating Play and Politics: Real World Games & Activism.” Paper presented at Digital Arts and Culture, Perth, Australia, 2007.

———. Critical Play: Radical Game Design. Cambridge, Mass.: MIT Press, 2009.

Fresh, DJ Fob. “A Yakuza 3 Guide to Edits and Cuts.” Segashiro, 2010.

Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006.

Gauntlett, David. Moving Experiences: Media Effects and Beyond. 2nd ed. Eastleigh: John Libbey Publishing, 2005.

Ge, Jin. “Gold Farmers.” 2006.

Gee, James Paul. Good Video Games + Good Learning: Collected Essays on Video Games, Learning, and Literacy. New York: P. Lang, 2007.

———. What Video Games Have to Teach Us About Learning and Literacy. Rev. and updated ed. New York: Palgrave Macmillan, 2007.

Gentile, Douglas A., Paul J. Lynch, Jennifer Ruh Linder, and David A. Walsh. “The Effects of Violent Video Game Habits on Adolescent Hostility, Aggressive Behaviors, and School Performance.” Journal of Adolescence 27 (2004): 5-22.

Guttridge, Luke. “Chinese Suicide Shows Addiction Dangers.” Play.tm. Posted: June 3, 2005. Accessed: April 7, 2011. http://www.play.tm/news/5928/chinese-suicide-shows-addiction-dangers/

Harrigan, Pat, and Noah Wardrip-Fruin. Third Person: Authoring and Exploring Vast Narratives. Cambridge: MIT Press, 2009.

Hartney, Elizabeth. “Is Video Game Addiction Really an Addiction?” About.com, Addictions. Updated: January 8, 2011. Accessed: April 7, 2011.

Hewitt, Dan. “Women Comprise 40 Percent of U.S. Gamers.” The Entertainment Software Association, 2008.

Higgin, Tanner. “Blackless Fantasy: The Disappearance of Race in Massively Multiplayer Online Role-Playing Games.” Games and Culture 4, no. 1 (2009): 3-26.

Huizinga, Johan. Homo Ludens: A Study of the Play-Element in Culture. Boston: Beacon Press, 1955.

International Game Developers Association. “Game Developer Demographics: An Exploration of Workforce Diversity.” October 2005. http://www.igda.org/game-developer-demographics-report

Jenkins, Henry. “‘Complete Freedom of Movement’: Video Games as Gendered Play Spaces.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. Cambridge: MIT Press, 2006.

Juul, Jesper. “The Game, the Player, the World: Looking for a Heart of Gameness.” In Level Up: Digital Games Research Conference Proceedings, edited by Marinka Copier and Joost Raessens, 30-45. Utrecht: Utrecht University, 2003.

———. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge, Mass.: MIT Press, 2005.

———. A Casual Revolution: Reinventing Video Games and Their Players. Cambridge, MA: MIT Press, 2010.

Kerr, Aphra. The Business and Culture of Digital Games: Gamework/Gameplay. London; Thousand Oaks: SAGE, 2006.

Kim, Tae K. “Bayonetta: More Substance Than Virtually Any Female Protagonist before Her.” GamePro, January 8, 2010.

Kline, Stephen, Nick Dyer-Witheford, and Greig De Peuter. Digital Play: The Interaction of Technology, Culture, and Marketing. Montréal; London: McGill-Queen’s University Press, 2003.

Koster, Raph. A Theory of Fun for Game Design. Scottsdale: Paraglyph Press, 2005.

Ludica, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. “A Game of One’s Own: Towards a New Gendered Poetics of Digital Space.” In Digital Arts and Culture, 2004.

Malaby, Thomas M. Gambling Life: Dealing in Contingency in a Greek City. Urbana: University of Illinois Press, 2003.

———. “Beyond Play: A New Approach to Games.” Games and Culture 2, no. 2 (2007): 95-113.

Malaby, Thomas M., and Timothy Burke. “The Short and Happy Life of Interdisciplinarity in Game Studies.” Games and Culture 4, no. 4 (2009): 323-30.

Mangiron, Carmen. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23.

Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001.

Marcus, George E. Ethnography through Thick and Thin. Princeton, N.J.: Princeton University Press, 1998.

Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge, Mass.: MIT Press, 2009.

Murray, Janet Horowitz. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, Mass.: MIT Press, 1998.

Nakamura, Lisa. Cybertypes: Race, Ethnicity, and Identity on the Internet. New York: Routledge, 2002.

———. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2 (2009): 128-44.

Nardi, Bonnie A., and Justin Harris. “Strangers and Friends: Collaborative Play in World of Warcraft.” In Conference on Computer Supported Cooperative Work. Banff, Alberta, Canada, 2006.

Nardi, Bonnie A., Stella Ly, and Justin Harris. “Learning Conversations in World of Warcraft.” In 40th Hawaii International Conference on System Sciences, 2007.

Niedenthal, Simon. “What We Talk About When We Talk About Game Aesthetics.” In DiGRA 2009: Breaking New Ground: Innovation in Games, Play, Practice and Theory, 2009.

Nielsen. “State of the Media: U.S. TV Trends by Ethnicity.” 2010. http://blog.nielsen.com/nielsenwire/consumer/who-watches-what-and-how-much-u-s-tv-trends-by-ethnicity/

Pearce, Celia, and Artemesia. Communities of Play: Emergent Cultures in Multiplayer Games and Virtual Worlds. Cambridge: MIT Press, 2009.

Poole, Steven. Trigger Happy: Videogames and the Entertainment Revolution. 1st U.S. ed. New York: Arcade Pub., 2000.

Pym, Anthony. “Redefining Translation Competence in an Electronic Age: In Defence of a Minimalist Approach.” Meta 48, no. 4 (2003): 481-97.

Raessens, Joost, and Jeffrey H. Goldstein. Handbook of Computer Game Studies. Cambridge, Mass.: MIT Press, 2005.

Reverend_Danger. “The Top 10 Deaths Caused By Video Games.” Spike.com. Posted: February 24, 2009. Accessed: April 7, 2011. http://www.spike.com/articles/id98jf/the-top-10-deaths-caused-by-video-games

Richard, Birgit, and Jutta Zaremba. “Gaming with Grrls: Looking for Sheroes in Computer Games.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge: MIT Press, 2005.

Salen, Katie, and Eric Zimmerman. The Game Design Reader: A Rules of Play Anthology. Cambridge: MIT Press, 2006.

Schleiermacher, Friedrich. “On the Different Methods of Translating.” In The Translation Studies Reader, edited by Lawrence Venuti. New York: Routledge, 2004 [1823].

ScienceDaily. “American Psychiatric Association Considers ‘Video Game Addiction.’” ScienceDaily. Posted: June 26, 2007. Accessed: April 7, 2011. http://www.sciencedaily.com/releases/2007/06/070625133354.htm

Sherry, John L. “The Effects of Violent Video Games on Aggression: A Meta-Analysis.” Human Communication Research 27, no. 3 (2001): 409-31.

Simon, Bart. “Geek Chic: Machine Aesthetics, Digital Gaming, and the Cultural Politics of the Case Mod.” Games and Culture 2, no. 3 (2007): 175-93.

Singer, Dorothy G., and Jerome L. Singer. Imagination and Play in the Electronic Age. Cambridge, Mass.: Harvard University Press, 2005.

Sotamaa, Olli. “When the Game Is Not Enough: Motivations and Practices among Computer Game Modding Culture.” Games and Culture 5, no. 3 (2010): 239-55.

Spence, Ian, and Jing Feng. “Video Games and Spatial Cognition.” Review of General Psychology 14, no. 2 (2010): 92-104.

Stettler, Nicolas, Theo M. Signer, and Paolo M. Suter. “Electronic Game and Environmental Factors Associated with Childhood Obesity in Switzerland.” Obesity Research 12 (2004): 896-903.

Sutton-Smith, Brian. The Ambiguity of Play. Cambridge: Harvard University Press, 1997.

Taylor, T. L. Play between Worlds: Exploring Online Game Culture. Cambridge: MIT Press, 2006.

Tran, Mark. “Girl Starved to Death While Parents Raised Virtual Girl in Online Game.” Guardian.co.uk. Posted: March 5, 2010. Accessed: April 7, 2011. http://www.guardian.co.uk/world/2010/mar/05/korean-girl-starved-online-game

Wardrip-Fruin, Noah, and Pat Harrigan. First Person: New Media as Story, Performance, and Game. Cambridge: MIT Press, 2004.

———. Second Person: Role-Playing and Story in Games and Playable Media. Cambridge, Mass.: MIT Press, 2007.

Wolf, Mark J. P., and Bernard Perron. The Video Game Theory Reader. New York; London: Routledge, 2003.

 

Art, Films, and Games:

Afkar Media. Under Ash. Dar al-Fikr. 2001.

Arcangel, Cory. Super Mario Clouds. 2002. http://www.coryarcangel.com/things-i-made/supermarioclouds/

Badham, John. “WarGames.” MGM/UA. 1983.

Blizzard Entertainment. Starcraft. Blizzard Entertainment. 1998.

———. World of Warcraft. Blizzard Entertainment. 2004.

Bogost, Ian. Cow Clicker. 2010.

———. Guru Meditation. 2009.

Brathwaite, Brenda. Train. 2009.

Brown, Heath (writer), et al. Diary of a Camper. United Rangers Films. 1996.

Condon, Brody. Adam Killer. 1999. http://www.tmpspace.com/ak_1.html

Curtis, Pavel. LambdaMOO. 1990.

Delappe, Joseph. dead-in-iraq. 2006 http://www.unr.edu/art/delappe/gaming/dead_in_iraq/dead_in_iraq%20jpegs.html

———. The Salt Satyagraha Online. 2008. http://www.unr.edu/art/DELAPPE/Gaming/Salt_March_Second_Life/Salt_March_Second_Life_%20JPEGS.html and http://saltmarchsecondlife.wordpress.com/

Electronic Arts. Ultima Online. Electronic Arts. 1997.

Gault, Teri. The Grocery Game. http://www.thegrocerygame.com/

Hezbollah Central Internet Bureau. Special Force. Hezbollah Central Internet Bureau. 2003.

Hughes, Jake. Anachronox: The Movie. 2002. http://www.archive.org/details/JakeHughesAnachronoxTheMovie

id Software. Quake. id Software. 1996.

Ion Storm. Anachronox. Eidos Interactive. 2001.

Linden Lab. Second Life. Linden Lab. 2003.

Molleindustria. The McDonald’s Videogame. 2006.

Nintendo. Super Mario Bros. Nintendo. 1985.

Powerful Robot Games. September 12th: A Toy World. Newsgaming.com. 2003. http://www.newsgaming.com/games/index12.htm

Rohrer, Jason. Passage. 2007.

Schleiner, Anne-Marie, Joan Leandre, and Brody Condon. Velvet-Strike. 2002. http://www.opensorcery.net/velvet-strike/

Sega. Yakuza 3. Sega. 2010 [2009].

Sonic Team. Sonic Adventure 2 Battle. Sega. 2001.

Sony Online Entertainment. Everquest. Sony Online Entertainment. 1999.

SquareSoft. Final Fantasy XI. SquareSoft. 2002.

Stern, Eddo, Peter Brinson, Brody Condon, Michael Wilson, Mark Allen, and Jessica Hutchins. Waco Resurrection. 2004. http://www.eddostern.com/waco_resurrection.html

Tale of Tales. The Path. Tale of Tales. 2010.

ThatGameCompany. flOw. ThatGameCompany. 2006.

———. Flower. SCEA. 2009.

U.S. Army. America’s Army. U.S. Army. 2002.

Valve Software. Half-Life. Sierra Entertainment. 1998.

Zynga. Farmville. Zynga. 2009.


[1] MMORPG is a common acronym for Massively Multiplayer Online Role-Playing Game. A similar, but more general acronym that I use in this paper is MMO (Massively Multiplayer Online), which can have either a game-like (World of Warcraft) or world-like (Second Life) atmosphere. There is also the predecessor term MUD (multi-user dungeon, or multi-user domain) and its successor term MOO (MUD, Object-Oriented), which stresses the more advanced ‘object-oriented’ coding of certain MUDs.

[2] This is similar to Norbert Elias and Eric Dunning’s (1986) work on leisure. They write that it is only in the “spare-time spectrum” that play approaches an oppositional value from work. They are, however, coming from a very different disciplinary perspective than Sutton-Smith.

[3] We might also link play to education and Piaget’s stages of childhood play where children necessarily progress through certain stages regardless of culture.

[4] Code is unimportant for analog games including board games and outdoor play, but it is key for digital games, which are the focus of my own work.

[5] Manovich’s other three principles are numerical representation, automation, and transcoding.

[6] Galloway draws this from critical theory, particularly the work of Fredric Jameson, and from Italian neo-realist cinema such as Vittorio de Sica’s Ladri di biciclette (1948).

[7] While there is a history of fan modifications (Sotamaa 2010), I am focusing on ‘art’ mods.

[8] Rooster Teeth Productions. Red vs. Blue. http://redvsblue.com/home.php

[9] Oxhorn. “ROFLMAO!” Uploaded: April 1, 2007. http://www.youtube.com/watch?v=iEWgs6YQR9A&feature=related

[10] Andrew Gardikis. “Super Mario Bros.” Speed Demos Archive. http://speeddemosarchive.com/Mario1.html

[11] What Sharp implies, but does not fully mention, is that other contextual understandings of art, or externally imposed understandings of art, also have this spatial problem. Japanese ukiyo-e were popular posters that became art upon Western instigation; ritual objects lost their purpose when placed in museums; happenings, Fluxus events, and Dada poetry cannot reside within a museum’s permanent collection.

[12] It is possible that there are other things that games can do, or other styles/movements that games will be a part of, but so far it is either pure entertainment or proceduralist.

[13] This is not to indicate that ‘art’ is not commercial. The ties between art and commerce are inextricable. However, visual art at present has been naturalized such that ‘good’ equals ‘expensive.’ Video games at present have been naturalized as ‘entertaining,’ but such entertainment value is quite subjective (the latest FPS blockbuster is only entertaining to an audience that appreciates the genre). This is quite unlike the supposedly objective, monetarily equated ‘art value.’

[14] Despite the existence of numerous other possible effects, violence and addiction effects research dominates the field. In part, this is due to political and media visibility, and how this visibility funnels funding into further research on violence and addiction.

[15] The history of moral panic extends to novels, plays, poetry and pretty much every other medium; people like to place blame.

[16] The long history of technology and games within military research facilities is one reason this subject population exists; however, I do not delve too deeply into it.

[17] Ferguson adapts his moral panic wheel from David Gauntlett’s diagram entitled “The spiral of panic about screen violence” (Gauntlett 2005, 127).

[18] There are various studies of games and online games between Dibbell’s earlier work and Taylor’s later work. An earlier example is Richard Bartle’s “Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs” (1996), in which he creates a typology of player types for online, multi-user domain/dungeon environments/games. He argues that there are four player-types (explorer, adventurer, killer and socializer), and that a multi-user game must be designed for optimum balance (not equality) between the four types. What differentiates Taylor’s study from Bartle’s and others’ studies is its focus on player culture and the porousness between worlds instead of the players themselves and the game as a set environment.

[19] Matthew Payne. “Selling Ludic War: Marketing Military Realism in Call of Duty 4: Modern Warfare.” UCSD Department of Communication Job Talk. February 2, 2011.

[20] It should be noted that Julian Dibbell’s use of ‘ludocapitalism’ (2006) predates Payne’s usage of ‘ludic capitalism.’

[21] This ontological status changes depending on the game: World of Warcraft bans gold farmers, Ultima Online turns a blind eye, and certain games (like Farmville) incorporate buying in-game currency into the design. This was previously mentioned above regarding ‘freemium’ models.

[22] While race and gender should be split to highlight the particulars of how they function differently in games, they are typically treated as simple representation in the discourse and field. I leave them combined for the sake of space in this paper, but they need to be separated.

[23] Ludica. “Ludica Mission.” http://www.ludica.org.uk/Mission.htm

[24] The first is shown in the effects research above, the second is from a cancelled game called Six Days in Fallujah, and the third is in various games including Smuggle Truck: Operation Immigration (released in early May 2011 as Snuggle Truck due to the media backlash) and Call of Juarez: The Cartel.

[25] ‘Domestication’ is a style of translation in which the translator takes the foreign text and transforms it into the idiom of the domestic audience. Friedrich Schleiermacher (2004 [1823]) coined the term domestication as opposed to ‘foreignization’ — where the text stays in its foreign idiom and the domestic audience is forced to work harder to understand the original text and audience — as the principal choice facing the translator.

Toward a Multi-Layered Digital Translation Methodology (Qualifying Paper #1)


Introduction: ‘From Translation to Traduction’ to Localization

In this paper, I approach new ways of translating digital media texts — from digital books to software applications, and particularly my own focus, video games — by mixing traditional translation theory and new media theory. There are similarities between these two fields, but they do not refer to each other. Translation theory rarely looks to films and television, let alone websites, software, and games; new media theory fetishizes the ‘new’ and rarely considers that it’s all been done before.[1] I cross the fields because there are mutual benefits to be had by doing so: translation can gain new material practices; new media can gain more history. I also cross the fields because that is what I see as the work of Communication. Finally, I cross the fields because my own work on video game translation can emerge from their crossing.

While I have already started this paper with confusion (complexity and a fusing together), the word ‘translation’ itself has a confused (or perhaps defused) past. As Antoine Berman notes, it is only in the modern period (post-1500) that the word (renamed ‘traduction’ in Romance languages other than English) has taken on its present meaning.[2] Previously, the word (‘translation’) had an unstable meaning because writing itself was never considered the originary act of an author. Instead, all writing, from musing, to marginal notations, to transcriptions, to commentary, to linguistic alteration, was considered translation. We are in the process of discursively moving back to the earlier understanding of the word.

The earlier understanding, ‘translation,’ comes from the Latin translatio, which can include the transportation of objects or people between places, the transfer of jurisdiction, idea transfer, and linguistic alteration.[3] As Berman stresses, the premodern understanding of translation is as an “anonymous vectorial movement.”[4] In contrast, the post-1500 term, ‘traduction,’ signifies the “active energy that superintends this transport – precisely because the term etymologically reaches back to ductio and ducere. Traduction is an activity governed by an agent.”[5] For Modernity and its lauded author this “active movement” through a subjective traducer makes sense, as it distances the iterations by emphasizing a particular hierarchy of original over derivative. However, in a Postmodern culture where global flows and exchanges have moved well away from the author function and the primacy of the work, it is helpful to understand the elements of translation that became lost “vectors” in the move to traduction.[6]

For Romance languages where ‘translation’ became ‘traduction,’ certain formal and temporal vectors have been lost and taken up by other concepts such as adaptation, repetition, convergence, and intertextuality. While all of these terms have their particulars, intertextuality is a useful example due to its link with postmodernity and the move away from grand theories.[7] With postmodern intertextuality there is no singularity of a work. Rather, there are only texts with borrowed themes, images, and sections. Intertextuality follows the formal vector of transformation, which has left translation, but it does not consider power and difference. In the early 21st-century United States context, both power and difference are increasingly important and yet elided.

Some vectors were never actually lost in English, as it never switched over to the word traduction. As Berman notes, “English does not ‘traduce,’ it ‘translates,’ that is, it sets into motion the circulation of ‘contents’ which are, by their very nature, translinguistic.”[8] As the problematically designated world language, English sets itself up as a translinguistic universal, but it does so in opposition to a host of other languages that have switched over to thinking about translation as the necessary and active linguistic alteration that moves a text from one place to another. Similarly, while there is an underlying energy that fuels the translational movement of a modern video game over space, there is a simultaneous understanding that nothing the game translator does can change the game, as he or she is not changing the level of play. Just like English, play is translinguistic and universal. Current forms of game translation, then, have retained a link to some of the anonymous vectors of translation.

I define translation as the ‘carrying over’ of a text from one context to another, where context can be understood as spatial, formal, or temporal. This broad definition begins to reclaim previously lost vectors, particularly a criticality necessary for the analysis of video games, which are currently exempt as they reside in an area of pure entertainment. This broad definition allows me to consider other forms of textual manipulation including video game localization — the process of translating games for new cultural contexts, which includes linguistic, audio, visual, and ludic [play/action] alterations — that has theoretically and practically separated itself from simultaneous interpretation and literary translation. By doing so I wish to force open the definition to include what is already happening, localization, where much of the text is changed for the purpose of a “better” user experience. However, this move also opens a space for what might happen, such as new forms of translation that use unofficial production to destabilize the meaning of the text by building it up.

I link traditional foci of literary translation theory with some of Jacques Derrida’s theories of deconstruction (particularly of ‘trace,’ ‘living-on,’ and ‘relevance’), and Jay David Bolter and Richard Grusin’s concept of remediation, in order to reconnect ‘translation’ with its (not quite) lost vectors.[9] I begin with the standard tropes of translation theory — sense and word, source and target, domestication and foreignization — as they do well to show the different possibilities at play with translation. However, discipline-bound theories are never complete, as they ignore extradisciplinary connections. One such connection is remediation. While the concept comes from a literary origin, remediation exists between literary and new media theories; I believe it can help connect translation across the two areas and move understandings of translation toward new alternatives.

I argue that current practices of translation focus on only one side of the literary theories, thereby turning them into mutually exclusive binaries (sense or word, foreignization or domestication, immediacy or hypermediacy). However, Bolter and Grusin show that remediation is not a binary between hypermediacy and immediacy; rather, remediation utilizes both sides of the equation. Essential to new media is the simultaneous existence of both hypermediacy and immediacy. Current translations espouse only one of these sides and ignore the benefits of the other. Translation can learn from this simultaneity in new media theory. This paper argues toward a material instantiation of new media translation that takes both sides of these pairings into consideration.

In the second section I show how the dominant practice of translation at present utilizes a domesticating, immediate strategy that overwrites (and thereby renders falsely singular) texts, whether they are literary, filmic, or ludic. In contrast, I argue that a foreignizing, hypermediate strategy that layers texts, which has always existed despite its current lack of presence, can facilitate an alternate, much needed ethics of both translation and cultural interaction. I am not arguing for a simplistic multiculturalism where difference can be subsumed under mere celebration, but for a difficult, abusive, and often painful form of interaction with difference that can reveal the actual ways in which culture functions. As Derrida argues, there is violence and pain that comes with eating the other, but there is also a necessity to eat. One must thus eat [ethically] (bien manger).[10] The same holds for translating.

 

Tenets of Translation

In the following sections I will review the key principles that have been the focus of translators throughout Western translation history. These examples are primarily from a European/English perspective, although I try to use alternative examples where available, applicable, and known. I will begin with the impossibility of a perfect translation. Second, I will elaborate on the ways of escaping this core dilemma, beginning with the argument between sense-for-sense and word-for-word, and ending with the concept of equivalence. Third, I will review the opposing tendencies of domestication and foreignization as an alternate focus on the author and user instead of equivalence’s focus on the text itself. Finally, I will bring up remediation as a concept that helps bridge literary translation with new media and video game translation and transformation. By linking translation with remediation I can, in the latter half of the paper, re-approach Berman’s ‘lost vectors’ of translation, recombine translation and localization, and point out alternate possibilities that are currently unconsidered due to the discursive dominance of fluent translations.

 

(Im)possibility of Translation

In an almost fetishistic move, translation is known for its parts in lieu of its whole. The whole in this case is a holistic notion of perfect translation that completely reproduces a text in a secondary context. As George Steiner notes:

A ‘perfect’ act of translation would be one of total synonymity. It would presume an interpretation so precisely exhaustive as to leave no single unit in the source text — phonetic, grammatical, semantic, contextual — out of complete account, and yet so calibrated as to have added nothing in the way of paraphrase, explication or variant.[11]

Steiner rightly notes such a task is impossible for both an original interpretation and a translational restatement. In fact, the sole example ever given of a perfect translation is the mythical Biblical Septuagint translation, where 72 individually cloistered translators made 72 simultaneous translations of the Torah from old Hebrew to Greek over 72 days. As the story goes, their translations were exactly the same, indicating divine intervention. However, if one considers the logic of the translation, it was the absence of any particular tenet, or focus, that enabled the translation to be considered perfect. God’s weight, on some tenet or another, was imperceptible; it is thus the absence of a particular reference that marks the example of perfection. It is the unmarked translation that can be considered perfect, but this does not help with real translations. The practical lesson from the Septuagint is thus that perfect translation is impossible.

The impossibility of a perfect translation has forced all practical translation to focus on certain elements. These elements — sense, rhythm, original meaning, feel, length, and experience — are routinely marked as essential and elevated to primacy. The elements that are considered non-essential are then justifiably negated. One is hard-pressed to find a moment, including the present, where this fetishization of certain tenets does not happen.

In contrast to such a partial focus within translation, I hope to encourage a use of materiality, which can lead to a fragmented, built translation: imperfect and incomplete, but hopefully offering a partial picture of what could be. Such a postmodern translation is hardly ‘perfect,’ but in contrast to other forms of translation it does not assume the justifiable negligibility of unconsidered elements.

I argue that digital new media in particular can enable this form of translation. However, this new method is anything but new, just as new media is anything but new. Rather, it borrows from, and builds upon, both Jacques Derrida’s and Walter Benjamin’s theorizations of translation. Derrida, in strict opposition to the dream of perfect translation and meaning, argues for the slippery sliding of signifiers as a way to point back, but never get back, to an originary moment, text, or meaning. In contrast, Benjamin understands the failures of translation as a necessary part of the dream of messianic return, in that they build up toward perfection. These two provide the theoretical groundwork for what can be made possible by the impossibility of translation.

Derrida’s concept of deconstruction is based in Ferdinand de Saussure’s semiotics, taken up into postmodern instability instead of the Formalist dream of an ultimately stable meaning. In the Course in General Linguistics Saussure argues that the linguistic sign is arbitrary in that there is no natural relationship between signifier and signified;[12] it is both variable and invariable in that it changes, but nobody controls the change;[13] and it exists as a system (la langue) and as individual instances (parole), a duality that makes it both synchronic in its permanence related to langue and diachronic in its relation to parole.[14] As Jonathan Culler argues, what is interesting in Saussure’s linguistics is the relational nature of signs, and therefore how “[l]anguage is a network of traces, with signifiers spilling over into one another.”[15] Words do not equal each other. Rather, they stand in positions of relationality that depend on time and space.

While Saussure focused on both the synchronic and the diachronic, stable and unstable, system and individual ways that language exists, the Russian Formalists after him dreamed of a study of stable signs, a Science. Formalists such as Shklovsky and Jakobson (against whom Mikhail Bakhtin later wrote) dreamed of an ultimate equality between signified and signifier, of a way that language made Scientific sense. This impetus toward stability and reason drives a great deal of language usage, and it informs practical translation. However, Derrida takes the instability of language, the ‘traces’ that Culler mentions, and runs with it.[16] There is no formal structure to language, there is no deep structure; there is simply the sliding of signifieds on signifiers as words change meaning over time and between utterances. Derrida represents this by the trace, the word under erasure (‘sous rature’). The word is unstable, but this does not indicate that it is free; rather, the word is loaded down with all of its past meanings, the traces of history (whether we recognize those past meanings or not). For Derrida, as with Saussure, meaning can never be pinned down, which means that words are never singular and always slide back along different signifiers; however, for Derrida, this instability means that a translation is twice as meaningful as the original text itself. It is an added sense above; a meaning after erasure, a meaning after the original. In light of such polysemy, translation ultimately does something different than simply move a text between form, time, and space: it helps the text “live on.”

In “Des Tours de Babel” Derrida argues that the proper name (Babel, but all names) is the ultimate example of translation’s impossibility. Coming from the Biblical story, Babel is the tower; it is ‘chaos’ (the multiplicity of tongues); but it is also God, the Father.[17] Names remain as they are in translations; they are untranslatable. This is all the more the case with God’s name, and with the tower itself, both of which cannot be translated/written/completed. Ultimately, Derrida argues that translation is the ‘survie,’ the ‘living on’ and ‘afterlife’ of the original text through the translation, but not of the dead, original author, whose sole means of immortality is through ever-transforming literary texts.[18] As he summarizes in his discussion of a ‘relevant’ [meaningful and raising] translation of The Merchant of Venice’s Shylock:

It would thus guarantee the survival of the body of the original… Isn’t that what a translation does? Doesn’t it guarantee these two survivals by losing the flesh during a process of conversion [change]? By elevating the signifier to its meaning or value, all the while preserving the mournful and debt-laden memory of the singular body, the first body, the unique body that the translation thus elevates, preserves, and negates [relève]?[19]

Translation allows a text/body/father to live on, to survive, but in so doing the original is necessarily changed.

The lesson from Derrida in regards to translation is that it is impossible. This much is obvious. However, impossibility does not mean that it should not be done. Translation is a necessary act despite its flaws: a text would not ‘live on’ without translation, just as we cannot ‘live on’ without eating, consuming, translating the other into sustenance.[20] We can learn two things from Derrida. The first is that deconstruction is about the psychoanalytic working through of trauma, the historical weight embedded in the word due to the impossible overload of meanings. The second, the lesson that I take, is that the failure of translation must be flaunted, highlighted. The Derridian methodology (not deconstruction, per se, but the productive theory we may take from deconstruction) is about showing how language and texts have multiple meanings and in fact can never be pinned down to any single meaning. Translation, just like language and original texts, must show this built-in instability. As all language is sliding along unstable signifiers, and all texts float along the backs of others, translation too must show its layeredness, its historicity. However, this instability is not flexibility and freedom, but a painfully historical burden (a ‘haunting,’ even[21]), and Derrida shows this uncomfortable instability by writing with asides, marginal notes, and what Philip Lewis has called abusive translation.[22] Because this abusive, Derridian style of translation is painful and difficult to read, it is not often considered useful to translation practice, which focuses on clarity, consumption, and entertainment.[23] However, the build-up of meaning through layering is a key method for bringing together the various modes of translation, and I will return to it throughout this paper.

Like Derrida, Benjamin argues that perfect translation is impossible, but he does so toward a completely different end. In “The Task of the Translator,” Benjamin argues that the ‘Aufgabe’ [task, but also giving up, failure] of the translator is impossible, but such failures add up to something more.[24] A translation must not reproduce the original, but must be combined with the original to approach something more. His master metaphor is of an amphora, representing language, which has been shattered into innumerable pieces:

[A] translation, instead of resembling the meaning of the original, must lovingly and in detail incorporate the original’s mode of signification, thus making both the original and the translation recognizable as fragments of a greater language, just as fragments are part of a vessel.[25]

The amphora is language, and in order to restore it, individual, failed translations (and the original) must be fitted together fragment by fragment to piece the ‘reine Sprache’ [pure language] back together. Finally, translations are not necessarily possible at any given time; there is a timeliness, or “translatability,” that allows or prevents certain translations.[26] For Benjamin, no translation is necessarily possible and no translation does everything, but translations must be undertaken for both Messianic (they facilitate the return to a pure language) and logistic (they enable the spread of ideas and texts) reasons.

Individual translations do not do everything, but as particular translations in particular contexts they give a glimpse of the pure language. From Benjamin I take the notion of seeing something more even if the singular translation is not perfect, and I take the idea that particular translations are better in particular contexts. Both of these oppose the idea of a singular, perfect translation, which, like Derrida’s insistence on abuse, is little desired by practitioners of popular translation. However, it is something that has great importance in a world where the difference between believing in a perfect translation and understanding the problems of translation can be the difference between fun and boredom, but also between death and life.[27]

While I do not believe in a Messianic return of an Adamic language, I do agree with Benjamin’s insistence on the unequal benefit of different translations. Certain languages at certain times translate better than others due to contextual issues. This is not to say that translation at any given point is fundamentally impossible, but rather that translations are unequal. While Benjamin might hold that this renders useless certain translations at certain times, I believe that it is possible to use the materiality of new media to combine Derrida’s abusive slipperiness of language with Benjamin’s build-up of languages to create a more complete translation. Such a new form is where this paper will ultimately conclude.

 

Word, Sense, and Equivalence(s)

While Benjamin, Derrida, and a large number of other theoreticians of translation confront (and embrace) the impossibility of translation, practitioners of translation routinely deny that impossibility by necessity. Translation must (and does) happen, so instead of a holistic notion of perfection, individual elements are highlighted. Historically, the two primary tenets of translation have been the oppositional mandates of translating word-for-word and translating sense-for-sense. However, theorists in the 20th century expanded the either/or of word vs. sense to include a host of other correspondences and equivalences. In the following section I will go over these different forms of practical translation, but I will conclude by pointing out that at issue with all of them is that they naturalize a single element, which blocks off the possibility of any other options.

The oppositional mandate between word and sense has been a major focus in Western translation since the Greeks, in part because of the importance of the Bible in Western translatology. The conundrum posed within the oppositional mandate is simple: does the translator translate the words in front of him or her [word-for-word], or the meaning of those words as a larger whole [sense-for-sense]? However, because this debate has been contextualized historically within the realm of Bible translations, it has never been a simple question between sense and word, but between worldly sense and divine word.[28]

The ‘first’ Bible translation was the previously discussed Septuagint translation from Hebrew to Greek, which was done ‘by the hand of God,’ but manifested through the separate acts of 72 individual translators. In this instance, the translators create what is known thereafter as a perfect translation. The words are God’s words and can neither be altered nor denied. It is the perfect translation, as there was unified meaning between original and translation in word and sense. Such claims for perfect word-for-word and sense-for-sense translation are quite problematic, but they go unquestioned until St. Jerome again translates the Bible, from Greek to Latin. The problem (or so it is claimed) is that he refers back to the old, pre-Septuagint Hebrew version of the Torah, and in so doing denies the primacy of God’s perfectly translated words. How can the Greek version be perfect, with all of the sense of the original in the new words, if Jerome must go back to the Hebrew?

While St. Jerome argues for sense-for-sense translation, he does so in an interesting bind, having translated the Septuagint Bible while referring back to the older version and highlighting the importance of particular words. He thus pays very close attention to word-for-word ideals, noting the importance of word order with mysteries, but ultimately argues that “in Scripture one must consider not the words, but the sense.”[29]

Word-for-word translation schemes never work, as there are never equivalent words. To show how this works I’ll take the word ‘wine’ between English and Japanese: wine is not blood; wine is not saké; saké is definitely not blood. Wine rhymes with dine and whine, but it is also either white or red and can be related to both debauchery and blood, and even metonymically to Christ’s blood. Of course, wine is the fermented liquid from grapes, but also just the general fermentation process itself, so that “rice wine” is fermented rice starch, and “plum wine” is fermented plum liquid, but “grape wine” would be considered redundant. On the other side, saké, the Japanese word that “rice wine” often translates, stands as a general word for all alcohol, while nihonshu, or Japanese alcohol, the more explanative Japanese word for saké, is unused in English. Finally, there is no link between saké and blood in color, rhyme, or any other mode of meaning. If one single word can cause this much (and more) trouble, it should come as no surprise that a word-for-word translational scheme must fail.

From Jerome through to the modern period there is a fixation upon sense-for-sense translation, and by the time of John Dryden sense-for-sense translation (except when dealing with mysteries of the divine word) is cemented. While metaphrase, word-for-word, is one of Dryden’s three types of translation, it is only used in extreme cases. The main debate is between paraphrase, sense-for-sense translation with fidelity to the author, and imitation, a type of adaptation that partially betrays the original author.[30] Dryden’s third form, imitation, is the point where translation diverges toward adaptation: a carrying over of form where the translator hints at the style, form, or sense of an author, but not the content. Between Dryden and the present this form has completely diverged into adaptation and intertextuality, which are considered entirely separate from translation. This is the final splitting point between translation’s original vectors and traduction’s linguistic and authorial focus in the modern period. Finally, Dryden’s second form of translation, paraphrase, is the most general concept of sense-for-sense translation, as it is about saying in one language what the author said in another.

Paraphrase translation has enjoyed the primary role in translation from the time of Dryden to the present, and has only faced significant opposition during the 20th century from semiotics, formalism, and postmodern ideas of language. All three of these provided different oppositions, but all significantly affected the word/sense divide.

While Jakobson is mainly known within translation studies for his three types of translation (intralingual, interlingual, and intersemiotic), as a formalist he is best understood as one looking at the formal qualities of language, and therefore at what happens to those essential elements in the process of translation within and between languages and forms. Moving from a semiotic understanding of language where “the meaning of any linguistic sign is its translation into some further, alternative sign,” Jakobson argues that there is never complete synonymy as “synonymy, as a rule, is not complete equivalence.”[31] A translation, regardless of word or sense, cannot fully encapsulate the source text. As Jakobson claims, “only creative transposition is possible,” where this creative transposition focuses on something but loses some other specificity.[32] While two possibilities coming from this failure of translation are Derrida and Benjamin, a more common one is to focus on the creative transposition of one particular element of the text while ignoring the rest. This is most visible in Nida’s ideas of correspondence with Bible translations, Popovič’s four equivalences in literary translation, and finally the current style of game localization.

Eugene Nida is best known for his principles of correspondence, formal and dynamic (or functional) equivalence, which he has primarily enacted with Bible translations. As a translator closely linked to the American Bible Society, most of Nida’s work is also linked to principles of missionary work and the spread of Christianity through rendering the Bible understandable and close to a target audience. His two sides of translational equivalence, formal and dynamic/functional, are quite similar to Dryden’s metaphrase and paraphrase. In particular, formal equivalence focuses on fidelity to the source text’s grammar and formal structure, while dynamic equivalence seeks to make the text more readable to a target audience by adapting it to a target context. Nida’s scale of equivalence is similar to both the word and sense debate and the domestication and foreignization debate, which I will elaborate below. However, what is important for the current discussion is that he uses the idea of equivalence in the singular and deliberately notes that one must sacrifice one side or the other.

In a slightly more expanded sense, Anton Popovič writes of four types of equivalence within a text: Linguistic, Paradigmatic, Stylistic (Translational), and Textual (Syntagmatic).[33] The first, linguistic equivalence, is the goal of replacing a word in the source language with another, equivalent word in the target language; it is different from the word and sense debate in that it simply indicates that the translator must pay attention to the phonetic, morphological, and syntactic levels of the text, which is to say the words that are written. The following three expand on the idea of equivalence in that a translation may focus on the grammar, the style, or the expressive feeling of the text.

Popovič’s focus is on a very literary understanding of the text. These four methods are for understanding the formal qualities of the written word, and therefore how to translate literary texts. Obviously, these four equivalences do not cover the entire realm of human experience. Other media involve different essential qualities, which have been the focus of those types of translation.

While any medium can offer an example of a different essence, I draw from my own focus on game translation. Game translation highlights experience. Games, as mass-produced commodities, are considered interactive entertainment, and the core of the game is the active, fun experience.[34] In light of this gaming essence, the equivalence sought by game translators is the experience of the player in the source culture. As Minako O’Hagan and Carmen Mangiron, two of the few theorists of game translation, write:

[T]he skopos of game localization is to produce a target version that keeps the ‘look and feel’ of the original… the feeling of the original ‘gameplay experience’ needs to be preserved in the localized version so that all players share the same enjoyment regardless of their language of choice.[35]

Because the optimal experience when playing a game is entertainment, a good game translation is one that entertains and nothing more.

While Popovič believes there is an “invariant core [meaning]”[36] that remains regardless of any translational variations, one may translate with the goal of rendering equivalent only one of the elements, and in so doing the other three are sacrificed. Such a sacrifice works directly off of the understanding that perfect translation is impossible. Choosing one equivalence over another should not elevate it in importance over the others. However, in the practical integration of translation and reception only one rendering of one equivalence is ever seen, and it is thus retrospectively elevated to the status of the true equivalence. The equivalence highlighted becomes the essence of the text, regardless of its being only one of many, and any other types of translation that highlight other elements of the text are rendered useless. In the case of video games the fetishistic focus on the experience of the player renders invisible and invalid all other levels of the game. As a result, games become pure entertainment and all artistic, political, or cultural levels are ignored.

A text does not have a single essence; it has many different sites of differing importance to different people. The author might intend to highlight one thing; the reading takes another; one cultural context focuses on one element, but another focuses on a different one. While the essence of a text is spread across innumerable sites (rhyme, look, site, context, etc.), equivalence seeks to focus on one and sacrifices the rest. This sacrifice is naturalized, and the equivalent element is constructed (after the fact) as the ultimate/important thing to be translated. As Lawrence Venuti notes regarding Jerome’s Bible translation, “Jerome’s examples from the gospels include renderings of the Old Testament that do not merely express the ‘sense’ but rather fix it by imposing a Christian interpretation.”[37] Translation does not just move a text from one language, time, or place to another; rather, it imposes particular meanings on that text and, through the text, on both the source and target cultures. Translational regimes and translations themselves exist within a political world. Translation is inseparable from power.

 

Domestication and Foreignization

While equivalence flows logically from the debates of sense-for-sense vs. word-for-word, it also grows out of the other primary concern in translation: the choice between domestication and foreignization.

In an attempt to move beyond the debate between paraphrase (sense) and imitation (adaptation),[38] Friedrich Schleiermacher argued that there were two main ways of translating: either the translator makes the text in the style of the foreign original and forces the reader to move toward that source text and context [foreignization, or Source Text orientation (ST)], or the translator relocates the text into the target culture, pushing the text into the local context and making it easier for a reader to understand [domestication, or Target Text orientation (TT)].[39] Schleiermacher argued that the debate between sense and word was defunct, as both fail to bring together the writer and reader. Instead, he contended that the translator needed to decide between foreignization and domestication, as the act of translation was necessarily related not to texts, but to cultures.

Schleiermacher argues that different types of translation are necessary to provoke different reactions in different audiences. Imitation and paraphrase must come first to prepare readers for the higher phases of true translational style: foreignization and domestication. He then argues that writers would be different people were they to write in, or be positioned as if they were writing in, foreign languages, as domestication claims to do, and that such a repositioning would strip the best elements from their writing.[40] Thus, his argument ultimately supports foreignizing translation.

Antoine Berman understands Schleiermacher’s call for foreignization as a particular moment where an ethics of translation is visible. This ethics relates to the formation of a German language and culture. To Berman, domestication denies the importance of the mother tongue itself, while foreignization carries the possibility that the mother tongue will be “broadened, fertilized, transformed by the ‘foreign.’”[41] However, he also notes there are extreme risks to such nation building:

inauthentic translation [domestication] does not carry any risk for the national language and culture, except that of missing any relation with the foreign. But it only reflects or infinitely repeats the bad relation with the foreign that already exists. Authentic translation [foreignization], on the other hand, obviously carries risks. The confrontation of these risks presupposes a culture that is already confident of itself and of its capacity of assimilation.[42]

The prime assumption here is that Germany exists on the cusp of the ability to incorporate the foreign tongue in order to grow, but more importantly it also exists in a situation of being dominated by the French. In order to negate the French dominance over the German culture and tongue (that is extended through domesticating translations and bilingualism) it becomes necessary to take the dangerous plunge and move toward a foreignizing form of translation.

Texts do not exist outside of contexts, so any choice is necessarily related to political interests. In the case of Germany in the 19th century, it was the relationship of a developing Germany to a dominant France. As Lawrence Venuti notes about Berman and Schleiermacher, “The ‘foreign’ in foreignizing translation is not a transparent representation of an essence that resides in the foreign text and is valuable in itself, but a strategic construction whose value is contingent on the current situation in the receiving culture.”[43] In the case of 19th century Germany, Venuti argues that “Schleiermacher was enlisting his privileged translation practice in a cultural political agenda: an educated elite controls the formation of national culture by refining its language through foreignizing translations.”[44] Venuti’s argument requires jettisoning the nationally chauvinistic quality of Schleiermacher’s call for foreignization while maintaining foreignization’s oppositional quality. To Venuti such a foreignization is necessary to oppose the discursive regime of transparency that is dominant within the 20th and 21st century United States.

Venuti argues that the dominant discourse of translation within the United States is transparency: the translation must read as if it were written in the local language. This is a modern rendition of Schleiermacher’s domesticating translation that has been normalized to the extent that foreignization as a method is not an alternative, or different choice, but an awkward oddity.[45] As his subtitle “A History of Translation” indicates, Venuti lays out a genealogy that shows the rise of fluent translations in Europe between the early modern period and the late 19th century, and how during this period the translator’s status dropped. By pointing out the constructed nature of the ‘fluency is good’ discourse, Venuti argues for a move away from such fluency. He does so both to raise the status of the translator in relation to the author and originality, and to problematize the relationship of the United States and the English language to other countries and languages. As he writes in his conclusion:

A change in contemporary thinking about translation finally requires a change in the practice of reading, reviewing, and teaching translations. Because translation is a double writing, a rewriting of the foreign text according to values in the receiving culture, any translation requires a double reading… Reading a translation as a translation means not just processing its meaning but reflecting on its conditions – formal features like the dialects and registers, styles and discourse in which it is written, but also seemingly external factors like the cultural situation in which it is read but which had a decisive (even if unwitting) influence on the translator’s choices. This reading is historicizing: it draws a distinction between the (foreign) past and the (receiving) present. Evaluating a translation as a translation means assessing it as an intervention into a present situation.[46]

Writing, translating and reading are contextually contingent acts, and one must be aware of the contexts from which and to which such texts move. Crucially, the discursive regime of domesticating/fluent translation does not allow such historicizing or cultural understanding, as the foreign is simply rendered invisible.

The current regime of translation is one in which the translator has become invisible, and this has negative effects on the translator’s status, but it also couches the United States’ translational imperialism. Venuti argues, “Schleiermacher’s theory anticipates these observations. He was keenly aware that translation strategies are situated in specific cultural formations where discourses are canonized or marginalized, circulating in relations of domination and exclusion.”[47] The results of this naturalized, extreme form of domestication are transparent cultural ethnocentrism and domination. These are, as Venuti argues, “scandals” of translation.[48] In opposition to these scandals, a foreignizing translational regime can link up to an “ethics of difference” that “deviate[s] from domestic norms to signal the foreignness of the foreign text and create a readership that is more open to linguistic and cultural differences.”[49] It is Venuti’s argument that acknowledgement and accommodation of difference are sorely lacking in the late 20th and early 21st century United States context, thus requiring the switch to foreignizing translation. However, as previously stated, such a foreignizing method runs completely counter to the dominant trend of the present.

Venuti argues for a switch to foreignization and away from the domestication that has been naturalized. He argues that “invisibility” refers both to the status of the translator as negated under the writer economically and functionally, and to the requirement that translations be presented so fluently, as if they were made in the local language and culture, that the translator is rendered invisible. The invisibility of domestication overlaps in instructive ways with Bolter and Grusin’s concept of immediacy, the transparent side of remediation. Ultimately, remediation offers a way out of the problematic discursive regime of translation that Venuti locates.

 

Remediation

In their seminal new media text J. David Bolter and Richard Grusin coined the term remediation in response to what they saw happening with new media at the time, but also to how all media had been changing over the twentieth century.[50] For Bolter and Grusin all media are remediated: a medium remediates other media. Web pages have text, icons that tell people to ‘turn to the next page,’ and embedded movies with standard filmic controls; Microsoft Word has a ‘page’ as it remediates writing on paper. This remediation has two qualities, or sides. The first, immediacy, is where the fact of remediation is cut away, or rendered invisible. The HUD (heads up display) of a game is lessened, removed, or rendered diegetically relevant. From a literary standpoint the content and diegesis are all that matter, and the user need not leave this place of immediate access to the text. As Bolter and SIGGRAPH director Diane Gromala write a few years later, “we…have lost our imagination and insist on treating the book as transparent….  We have learned to look through the text rather than at it. We have learned to regard the page as a window, presenting us with the content, the story (if it’s a novel), or the argument (if it’s nonfiction).”[51] The second, hypermediation, can be seen in TV phenomena such as a miniature window in one corner of the screen and the scrolling information bar at the bottom of the screen, but it is also footnotes, side notes and commentary in books. For Bolter and Grusin remediation is simply something that happens with all media and has happened since writing remediated speech, much to Plato’s chagrin. However, it has interesting links with translation, particularly in how immediacy can link up with Venuti’s fluency, and how hypermediacy can link up with the possibilities of layered translation, which come from Derrida and Benjamin.

Venuti claims that the current regime of domesticating translation within the United States leads toward a fluency that renders invisible both the translator and the fact of translation. According to the majority of American readers, who enjoy this type of translation and experience, such a goal is admirable. According to Venuti, fluency is quite problematic due to the translational ethics of difference involved. Within the logics of remediation, rendering the translation invisible makes the text an immediate fact for the reader even though what is read is not the original text, but the translated version. This type of immediacy materializes in particular ways with particular media: for books it is a one-to-one fluent translational strategy, with film it is dubbing and remaking, and with video games it is localization. While these fluent/immediate strategies are dominant at present, there are alternatives.

For Venuti, the opposite of translational fluency is a foreignization that highlights the ethics of difference. As cited above, most important in this is creating a new style of “double reading” that requires the reader read the text as a translation. However, if we take Bolter and Grusin’s oppositional strategy of remediation, hypermediation, we can see alternative methods of highlighting an ethics of difference. Translational hypermediation would entail highlighting the fact of translation; it could be abusive, Derridian translation; it could be Jerome McGann’s hypermedia work; it could be cinematic subtitles and metatitles; it could be game mods.[52] All of these interact with the medium in a way that utilizes its particular form.

Hypermediated translations of new media could easily exist because of the particularities of digital alterability, but they do not. In the following section I will elaborate on the particular ways that translation happens materially with books, film and games. These current ways are primarily domesticating, fluent and immediate. Then, I will explain how translation could instead bring out a foreignizing, layered and hypermediated relationship with the text.

 

Specific Iterations in Media

While the above section has summarized tenets of translation primarily coming from literary studies, the following will elaborate how these different trends intersect with three particular media: books, film and games. These three media are chosen very deliberately. Gaming is my main focus in part because of industry and theoretical denial of its translated nature, and in part due to its ability to lead to new translational possibilities. However, books and film are necessary predecessor forms on the route to games. Books are important as the primary textual form in current Western literary culture. While poetry, newspapers, magazines and other printed forms are also relevant, I limit my analysis to the Modern novel both for reasons of space and for the novel’s focus on, and obsession with, the author. Secondly, film is important as games have been created in the wake of the 20th century’s cinematic revolution, where the language of games comes in part from the language of cinema: cut scenes, 1st person perspective, and an increasing obsession with realism.[53] While the link between gaming and cinema has been critiqued on the grounds of gaming’s material and experiential differences from cinema, such critiques do not erase the historical and stylistic links, however unwieldy their application in games.

 

Books, Supplementarity, and Digital Culture

Books in the modern period are singular objects created by singular authors. An author has an idea, struggles to bring this (original) idea to paper, and over time eventually uses his or her singular language to write the work. While books are made at one point in time, there is a belief in their timelessness: they are able to stand up to decades, centuries, and millennia (although such durability is also a test of worth) due to their original language (or rather, despite their original language, as it is translation that allows the text to ‘live on’). There is an essential link between author, nation and language, which is brought out in the book, and readers partake in this art when they read the book.

A translation is something that comes chronologically after the book. It is the result of taking the words and sentences (the content) and changing them into another language in order to facilitate the book’s movement over spatial-linguistic borders. The translation’s hierarchical relationship to the original book is derivative, but its material relationship has changed over time. While translations are now a material replacement that comes chronologically after the original, they were at times both simultaneous and supplementary to the original work.

Certain texts needed to be written in certain languages (Latin for religious, philosophic and scientific texts; literary genres in Galician or Arabo-Hebrew; and travel accounts such as Marco Polo’s and Christopher Columbus’ in a hodgepodge), and the idea of deliberately altering a text from one language to another was not a high priority, or even acceptable in some instances.[54] At one point, in lieu of translation there was commentary, or Midrash in the case of the Torah. Such commentary was necessarily displayed alongside the original as a supplement. It complicated, but did not replace, the original.

This older form of supplementarity can be linked to the current, but uncommon, practice of side-by-side translations where the original resides on one page and the translation on the other. The original and translation face each other to enable comparison. Biblical and philosophical material is often granted side-by-side translation due to the importance of both individual words and overall sense, or because the question of just what is important is either undecided or unknowable. In the case of popular (low) cultural novels there is less reason to consider the original, and so there is little reason to print it. Cost and size are further reasons why side-by-side translations of important biblical, philosophical, and literary texts still exist while popular novels are almost never given such a translational method. Halving the pages printed significantly reduces the cost and size of the book. Only important texts, or political and religious ones where price is not an issue, can justify the additional cost of the doubled pages, and popular, semi-disposable entertainment texts are less entertaining as enormous, bulky tomes. What was a complementary relation between original and translation becomes a matter of replacing one with the other.

The shift from supplementary translation to replacement translation, where the translation stands on its own as a complete text, happens at the same time within modernity as the rise of translational equivalences. However, as discussed previously, it is impossible to conduct a perfect translation that conveys word, sense and all equivalences, so one element becomes the focus and under that equivalence the translation replaces the original book. In the case of the 20th to 21st century United States this equivalence is roughly what the author would have written had he or she been from the United States and writing in English. Because the industry follows a replacement strategy that supports fluency and immediacy, books can only follow a single equivalence. However, the materiality can support multiple equivalences through a translational supplementarity that supports an ethics of difference and hypermediacy.

Obviously, page-to-page translation and the works of Derrida are examples of how books can support this form of hypermediated translation.[55] The reader can be shown the different words that could have been used throughout the translation. While there are many possibilities for hypermediated translation, there have been few opportunities throughout Western translation history. However, this hypermediated style might be coming back into fashion with the advent of new technologies, including the digital book. Digital books also solve the cost and size issues that were partial reasons against side-by-side complementary translations.

While the digital book holds much potential, proprietary design, nation-based sales of content, and Digital Rights Management (DRM) issues plague current eReaders. They are simply an alternate way to read a book, which one must buy from a massive chain store in one language, and nothing more; they are monolingual devices that bring out the same trend of immediacy that I described above. However, the digital book could be programmed to show a multiplicity of versions, iterations, and translations. It could be programmed to be a truly hypermediating experience if only by linking different translations of a text. I will return to this in the final section of my paper, but a hint at this possibility is in Bible applications. YouVersion’s digital Bible application[56] has 49 translations in 21 languages, and this number increases as new versions are added. The Bible is not in copyright, but it would be possible to use a micropayment system that would allow interested patrons to buy linked versions of different book translations in a similar manner. By integrating the different variations a hypermediated experience would be created.
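As a minimal sketch of what such linking might look like in practice, consider the following Python fragment. The passage key, version codes, and ownership check are my own illustrative assumptions, not YouVersion’s actual data model or interface:

# A hypermediated digital book: one passage keyed once, with parallel
# translations linked to it. All names and data here are hypothetical.
parallel_text = {
    "Genesis 1:1": {
        "KJV": "In the beginning God created the heaven and the earth.",
        "LUT": "Am Anfang schuf Gott Himmel und Erde.",
    },
}

# Translations this reader has unlocked, e.g. through micropayments.
owned = {"KJV", "LUT"}

def read(passage, versions):
    # Display every owned translation of the passage, aligned for comparison.
    for version in versions:
        if version in owned:
            print(f"[{version}] {parallel_text[passage][version]}")

read("Genesis 1:1", ["KJV", "LUT"])

Even this trivial structure inverts the replacement model: no single translation stands in for the text, and each version is one layer among several that the reader can summon.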

 

Film, Dubs, Subs, Remakes and Metatitles

The contentious relationship between immediacy and hypermediacy is highly visible in film translation.[57] On the one hand there is a long history of replacement/transparency with multi-language versions (MLV), dubbing and remaking, but on the other hand there is an equally long history of subtitles. While the debate between subtitles and dubbing is really only solvable by referring to local preference, I argue that the rise of remakes of foreign films, especially in the United States, is a sign of the dominance of replacement and immediacy strategies. In the following section I will outline the history of language in film, then how it intersects with remediation, and finally ways that the lesser-used hypermediacy might bring out alternate forms of film translation.

When cinema was first exhibited there was no call for translation. There was no attached sound and there was no dialogue. The original ‘films,’ like the Lumière Brothers’ La Sortie des Usines Lumière (1895), which depicts the workers leaving the Lumière factory, and L’Arrivée d’un train à La Ciotat (1896), which shows the train arriving at the station and people beginning to get off, are good examples of the limited structure and general ‘universality’ of the earliest films. Because there were no complicated plots or multiple scenes, it was believed at the turn of the twentieth century that cinema, like photography, was merely the “reproduc[tion of] external reality.”[58] At the beginning of the 20th century, cinema was considered outside of language and universal.[59] This understanding was first troubled with the inclusion of intertitles, as they required translation to move the film from one place to another, and from one language to another. However, the rest remained ‘universal.’

The late 1920s brought embedded sound to cinema, and with it came talkies. These talkies necessitated a new level of translation, and both immediate and hypermediate translation styles were available: dubbing and subtitling, respectively. Subtitling is both hypermediating and foreignizing. It is hypermediating in that it accentuates the fact of translation by putting the translated dialogue on top of the film. It is foreignizing because of the constant, visible disjoint between the words of the actors and the subtitles at the bottom of the screen.[60] The viewer constantly hears the foreign other, and this brings to the forefront the issue of trusting a translator to have translated properly.

In contrast, dubbing is immediate in that it erases the voices of the visible actors and replaces them with other voices in the target language. However, dubbing is not perfectly domesticating as there is a discrepancy between the bodies on screen and the dialogue. This discrepancy is partially the result of lip-syncing issues, and partially the result of differently signified bodies and voices. One of the tasks of dubbers is to forcefully make the dialogue match the lips by altering the linguistic utterances, often quite significantly.

While dubbing can alter the words and voice coming out of the body, it cannot change the bodies themselves. In a realm of racialized nationalism, or as Appadurai writes, when the hyphen between the nation and state is strong,[61] this discrepancy between racially different body and local language is a problem. Because it is assumed that only those with specific bodies speak specific languages, such discrepancy is highlighted.[62] Dubbing thus still has a hypermediated quality to it. A further step toward immediacy is changing the body. There have been two different methods used to make films more immediate by changing the bodies. The first was the early 20th century multi and foreign language versions, and the second was the much longer lasting remake.

The understanding of film as universal was challenged more deeply in the 1929-33 period, which saw the creation of multi and foreign language versions. Foreign language versions (FLV) were recreations of a film made after the fact in a different studio, and multi-language versions (MLV) were recreations made in the same studio on the same set with different actors, but later in the same day.[63] The M/FLV highlights that there were people who understood that culturally specific elements are writ large on the body. National culture was inscribed not only in language, but in bodies, clothing, and even story. It was believed that by replacing the body, remaking the film into both the ‘local’ language and body, the film would be less foreign. This effort reveals the dominant trends of immediacy and domestication. By replacing both the language and the body the text is made even more transparent for the audience. However, the M/FLV did not last long, largely due to the high costs involved. Then as now a high priority was given to business and the bottom line, and the cost of making multiple movies simultaneously was not economically justifiable, especially when the movie could flop.

While intertitles and the MLV incorporated linguistic and human alteration, what they did not consider were the cultural specifics. The content level was not translated or adapted; the stories were not altered. Incredible numbers of stories were adapted and remade again and again, but not because of cultural relativity. This oversight was rectified three decades later with films like Gojira (1954), which was reconceptualized away from the original’s atomic bomb logics. The remake, Godzilla, King of the Monsters! (1956), was reshot and reedited to feature an American journalist narrator and to highlight the monster genre.[64] Following Godzilla, but primarily at the end of the 20th century, there was a resurgence of remakes that link with cultural translation.[65]

With remaking not only do the bodies in the film change to locally recognizable ones with their own voices, but the context of the film can be changed from foreign lands to local ones. An example of this is Shall We ダンス (1996), a Japanese movie about a salaryman going through a midlife crisis and learning to dance in an anti-dancing Japanese society, which was remade as Shall We Dance? (2004) with Richard Gere, Susan Sarandon and Jennifer Lopez in a Chicago context.

In one of the most important scenes in the original, Mai is lectured by a possible new dance partner, Kimoto. He proposes they give a demonstration at a local dance hall (nightclub), but she refuses to dance with “hosts and hostesses,” claiming it isn’t dancing, but cabaret.[66] Mai is obsessed with the foreign, European Blackpool competition and dance floor, which is opposed to the native dance hall with less history and lower culture. Kimoto claims not only that enjoying dance is of primary importance, but that the lowly Japanese dance hall has a history just as important as Blackpool’s. The opposition of high to low (hierarchical) and native to foreign (spatial) is stressed in this interchange. When Mai finally holds a party that signals the restart of her career, it is on the lowly dance hall’s floor, indicating the primacy (or at least equality, as she plans on returning to Europe) of the native over the foreign, and it stresses the equality of high and low. In contrast, the remake opposes Miss Mitzi’s relatively unpopular dance studio with the hip Doctor Dance studio and club. The opposition is both temporal and hierarchical: Miss Mitzi is middle-aged and teaches various forms of professional dance, compared to the scenes in Doctor Dance that are almost all depicted as club/entertainment moments. And when Paulina, Lopez’s adaptation of the Mai character, decides to go study in England (a rather meaningless decision in the context of the remake), her going-away party takes place in an unrecognizable locale. In the original, the Japanese spirit and history are implied to be just as important and meaningful as the European ones. The film is highly nationalist in its context. The remake works to erase such nationalism by placing the theme of global/universal work and the international family man/nuclear family over that of foreign and native. Such movement complies with a universalization of remaking as domestication. The foreignness of the Japanese original is rendered domestic and immediate in the remake.

A domesticating translation takes the foreign text and moves it into the native context, making the reader’s job easier by forcing the text to speak in a manner the reader is used to. In Hollywood’s domesticating remake of Shall We ダンス, Japan’s troubled interaction with modernity and globalization is removed. The local socio-political particulars of the original film are erased in the service of “universal” generic narratives that satisfy an American audience that rarely interacts with foreign others. Hollywood’s remake process is a systematic erasure of difference and the foreign other that has been naturalized under the theory of the remake as cinematic translation, which only needs to render equivalent one essential element at the expense of all others.

So far I have discussed the current domesticating and immediate strategies of film translation. Even though I have claimed that subtitling is both foreignizing and hypermediating, it does not use the materiality of the filmic medium to really bring out the possibilities of hypermediation. So far there have been no such creations, but it is not hard to imagine a type of “metatitles” that uses the capacities of the digital cinematic medium to layer translations on the screen in a hypermediating translational style.

In the last few pages of “For An Abusive Subtitling,” Nornes refers to the fan subtitling of Japanese animation that took place largely between the late 1980s and early 1990s in the United States.[67] For difficult-to-translate terms the fan subtitlers gave extended definitions that covered the screen with words. The translation effort goes well beyond the standard translation in that it starts with a foreignizing pidgin, but also provides an incredible amount of information that works to bridge the viewer and source. While this abusive subtitling is hypermediating in that it layers the text, it could be extended to use the medium more fully by layering the text using DVD layers. These layers could move from the main textual layer (the visual film) and the verbal audible signs (dialogue and its subtitles), to the hypermediated translational layers: the verbal visual signs (text on screen), the non-verbal audible signs (background noises that need explanation), the non-verbal visual signs (culturally derived, metaphoric camera usage), and any other semiotic layer possible.

Through such a layered commentary on the different signs the screen would quickly fill and overwhelm the viewer as a form of abusive translation, and while there is something admirable in completely disrupting visual pleasure, such disruption would never be taken up by the industry: the layers must instead be visible either alternately or simultaneously, at the control of the viewer. As home video watching is generally at the command of a single user or a small number of viewers, the DVD format is uniquely suited to enact metatitles. Due to the increased storage capacity of DVD, Blu-Ray and future technologies, there is no limit to the possibilities of layering.
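A minimal sketch of how such viewer-controlled layers might be organized follows; the layer names, timings, and annotations are hypothetical, invented for illustration rather than drawn from any existing disc format:

# Metatitle layers as timed cue lists; the viewer toggles layers on and off.
# All cues below are invented examples.
metatitle_layers = {
    "dialogue":  [(12.0, 16.0, "Shall we dance?")],
    "on-screen": [(12.0, 16.0, "The poster behind her advertises Blackpool")],
    "camera":    [(12.0, 16.0, "The low angle marks the dance floor as aspirational")],
}

active = {"dialogue", "on-screen"}  # chosen by the viewer, not the studio

def titles_at(seconds):
    # Collect every annotation from an active layer visible at this moment.
    shown = []
    for layer, cues in metatitle_layers.items():
        if layer in active:
            shown += [text for start, end, text in cues if start <= seconds <= end]
    return shown

print(titles_at(13.0))

The design choice matters: because the layers are data rather than burned-in subtitles, the viewer decides how much of the translational apparatus to see at any moment.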

A layered translation uses the capacities of current technology by hovering over the text, but just as a translation can never fully encapsulate the original, metatitling would never fully acknowledge every aspect of the original text. It is a failed translation, just as all translation is failure due to being incomplete, but it fails in a foreignizing and hypermediating style that acknowledges its failings and builds toward some ethical ‘more.’

 

Games and L10n[68]

While film translation retained a complex but present relationship to translation theories and literary translation, the move to new media forms has created a chasm between theories and practice, which has resulted in new methods and industries of translating. Both translation theory and localization practice could benefit from cross-pollination, and that is the heart of my work. The shift to digital software has been accompanied by the rise of a software localization industry (of which gaming localization is an independent but related industry) with its own tools, standards committees and rhetoric. The following section begins by looking at how language intersects with games. I then consider what game localization is and how it succeeds in translating games, but also how it fails to address certain possibilities. One major element is how localization fails to utilize the possibilities of the digital medium to bring about a hypermediated translation despite the immense amount of hypermediation within the medium itself.

Like films, games have an interesting relationship with the idea of universality. The first computer/digital games such as Tennis For Two (1958) and Spacewar (1962), and even early arcade cabinet games like Pong (1972), Space Invaders (1978) and Donkey Kong (1981), were ‘language’ free. Just as the early films were largely visual amazements, games were computer-programming amazements meant to show off the technology.[69] However, the programming was difficult and took up all or most of the available processing power and programming energy. This meant that early games had little processing power or programming time to spare for story. Many held (and still hold) to a universal accessibility and understanding of these games due to the technological and programming limitations coupled with a belief in the universality of play as a social phenomenon. Even now the belief in ludic universality holds despite theorists problematizing that fact in a similar way to how a previous generation of visual culture theorists problematized the universality of vision.[70] For instance, Mary Flanagan has argued, “while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture.”[71] While she is largely discussing the spatial politics of games existing in certain spaces, the theory can be expanded to indicate that any game, or instance of play, is tied to a cultural context, be it Tennis for Two and the atomic-age weapons research lab in which it was created, Spacewar and masculine science fiction fantasies, Donkey Kong and the origins of the side-scroller as linked to a Japanese aesthetic, or any other game and context. Games are developed, produced and distributed in specific socio-political, temporal and spatial locations and are thus not universal.

However, this believed universality is only now coming into question; it was completely unquestioned from the 1960s to the early 1980s, during the 1st and 2nd generations of computer games. There were no ‘words’ in the early computer games, just crude iconic representations. This meant that within the games themselves there was no ‘language’ needing ‘translation.’ What did need translation were the external titles and instructions. Titles were kept or changed at the desire of the producers and distributors. Pakkuman (1980) turned into Pacman instead of Puckman for fear of malicious pranksters changing the P to an F, but other titles were kept as is or were programmed in roman characters. Instructions for arcades and manuals for home consoles needed more extensive translation, but it was a very limited, technical form of translation. The first generation of computer game translation was thus both limited and little different from the roughest of technical translations, neither ‘literary’ nor ‘political.’

The second generation of game translation came about when games utilized greater processing power and storage capabilities to tell extensive stories. These were early adventure games like Colossal Cave Adventure (1976) and Zork (1977-80), which told 2nd person adventure narratives, and the more graphical adventure descendants of the 1980s such as Final Fantasy (1987) and King’s Quest (1987). These broke ground in games by normalizing narrative along with play. They also necessitated a new type of game translation that could address more than just the paratextual elements of title and manual.[72] This generation of game translation led to the creation of an industry for game translation.

The rise of linguistic material (stories in and surrounding the games) led to an acknowledged need for translation and the beginnings of the localization industry. Originally, the primary method was what is now called partial localization, where certain things were localized, but most others were not. Thus, the manual, title, dialogue, and menus might be translated, but the HUD might remain in the original language due to the difficulty of graphical alterations. The localization industry evolved in the 1990s to match the growing game industry, and localized elements were expanded from menus and manuals to graphics, voices and eventually even story and play elements.

While the current form of game localization is much expanded from early game translation the basics are the same. According to the Localization Industry Standards Association (LISA[73]), “Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country/region and language) where it will be used and sold.”[74] Localization is like translation in that it facilitates the movement of software between places, but it is different in that it also allows significant changes in the visual, iconographic and audio registers in addition to the linguistic alteration.

Regardless of how much is translated, game translation involves the replacement of certain strings of code with other strings of code. These strings are usually linguistic: the title The Hyrule Fantasy: Zeruda no densetsu (The Hyrule Fantasy: ゼルダの伝説) becomes ‘The Legend of Zelda,’ and within the game the line “ヒトリデハキケンジャ コレヲ サズケヨウ” [it’s dangerous by yourself, receive this] becomes the meme-worthy “It’s dangerous to go alone. Take this!” But alterations are also graphical: a Nazi swastika is changed into a blank armband for games in Germany. The first is a title, the second is a linguistic asset, and the third is a graphical asset. All assets exist as strings in the application code, and by altering the programmed code, each can be changed in the effort to move the game from one context to another. The ability to alter assets is an essential quality of new media.
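A minimal sketch of this string replacement might look as follows; the asset identifiers and lookup function are illustrative assumptions, not the format of any actual game engine:

# Localization as string replacement: each asset ID maps to per-locale
# strings, and the build (or runtime) picks one. The IDs are hypothetical.
string_assets = {
    "TITLE": {
        "ja": "The Hyrule Fantasy: ゼルダの伝説",
        "en": "The Legend of Zelda",
    },
    "INTRO_LINE": {
        "ja": "ヒトリデハキケンジャ コレヲ サズケヨウ",
        "en": "It's dangerous to go alone. Take this!",
    },
}

def get_string(asset_id, locale):
    # Swap in the target locale's string, falling back to the source language.
    return string_assets[asset_id].get(locale, string_assets[asset_id]["ja"])

print(get_string("INTRO_LINE", "en"))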

Along with numerical representation, modularity, automation and transcoding, Lev Manovich argues that one of the primary elements of new media is their variability.[75] This variability exists because new media are tied to digital code, which is adaptable, translatable and transmediatable through the alteration of specific strings. Because the strings, especially linguistic strings, are modular, there is no specificity to games. With digital games this variability is combined with the discourse of play as universally understandable. Because play is considered universal, the trappings of games (form, content and culture) are considered inconsequential, variable, and localizable to fit a target context in a way that does not change the game’s ludic [play] essence. Thus, any level of alteration in the localization process is fully sanctioned in order to provide the equivalent “experience” to the user.[76]

While asset alteration is possible as an essential quality of digital media, it is not simple: a hard-coded application can only be changed by painstakingly altering countless strings throughout the program. In contrast, an application that calls up assets can change the individual assets into multiple variations and then choose which assets to call. This practice has been enabled in part by the game production industry embracing Internationalization (i18n) as a necessary and regular practice.

Internationalization is the practice of keeping as many game assets as possible untied to and unmarked by cultural elements. In his guide to localization Bert Esselink provides an example of an image with a baby covered in blankets and a separate layer of undefined, localizable text.[77] Unlike pre-internationalization methods, the image and text are not compressed together, which makes it possible and easy to switch the text. While the words are changeable the images remain the same, as there is an assumption that a smiling child is universal. The non-universality of these supposedly universal elements is an issue. Games move beyond this by retaining almost all elements as changeable assets, whether they are dialogue, images, Nazi armbands, or realistic representations of military flight simulators, but this changeability brings out other problems.[78] It does not address the elements that are assumed universal but are not, and it positions internationalization as a lead-in to domestication. Within the ideal of internationalization the practice becomes domesticating translation by material and practical necessity. No matter what happens there will be an immediate, replacing, domesticating translation.
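A sketch in the spirit of Esselink’s example might separate the layers as follows; the file name, strings, and compose function are hypothetical illustrations rather than his actual example or any real tool:

# An internationalized asset: the image is a fixed layer assumed to be
# universal, while the text sits in a separate, swappable layer.
image_asset = {
    "base_layer": "smiling_baby.png",  # assumed universal; never swapped
    "text_layer": {"en": "Sleep tight!", "de": "Schlaf gut!"},
}

def compose(locale):
    # Pair the fixed image layer with the locale's text layer.
    return image_asset["base_layer"], image_asset["text_layer"][locale]

print(compose("de"))

The sketch also makes the critique visible: only the text layer is ever marked as cultural, while the image layer’s universality is simply assumed.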

If expansive narratives opened games up to larger amounts of translation, a confluence of factors led to the third generation of game translation and the eventual rise of the game localization industry: the rise of the software localization industry with i18n standards, the understanding of variability and the ability to change games, the creation of CD technology with larger storage capacity, and finally the use of that storage capacity to enable voice acting that highlights the narratives.

While compact disk technology was created in the 1970s and has been a means of distributing music since the early 1980s, it took until the 1990s for games to be distributed on CDs. Beginning in the early 1990s CD-ROM drives were attached to computers and the PlayStation gaming device, and games began to be distributed on CDs. This move from floppy disks to CDs greatly expanded the size of games, and with it came the inclusion of both cinematics and digitized voices. One famous early example is Myst (1993). Both cinematics and recorded vocals take a large amount of storage capacity, which the CD provides. However, the CD does not provide enough space for multiple languages of vocal dialogue. There was thus a justified necessity to limit the languages included with a game because of the limited space available. Even when games moved to multiple disks, providing multiple audio tracks would have significantly increased the disks required.

The lack of space for multiple languages forced game translators to decide between subtitling the audio and dubbing over it. While this might have led to an equal debate between dubbing and subtitling (as with film translation), the dominance of computer generated (CG) video over live-action, full-motion video within games actually led to the naturalized dominance of dubbing and replacing.[79]

As CG requires that voices be added, there is little sense that localization replaces anything. There is no ‘natural’ link between the visible body and the audible voice in CG, so dubbing causes fewer problems in gaming than it does in cinema.[80] However, storage constraints meant there was not enough space to provide multiple languages on a single CD, so the majority of games had only one language on them. Certain European regions provide multiple languages by necessity, but this is far from the norm. Even when the storage and distribution method changed from CD to DVD there was little movement toward the inclusion of multiple languages. This lack of included languages is also partially due to the region encoding business practice.

Linguistic multiplicity within games has also been stymied by the practices of video encoding for TV and the different region encodings for DVD disks. CDs and DVDs are region encoded in order to protect business interests by opposing ‘piracy,’ defined here as the unsanctioned copying, spread and use of software applications.[81] There are two general eras of this encoding. The first was the separation between NTSC (National Television System Committee) and PAL (Phase Alternate Line). These two methods were linked to the televisions distributed in different regions; the gaming systems and disks needed to operate in the same encoded manner as the televisions. This made it impossible to play European games (PAL) on an American system (NTSC), but it did not necessarily block out Japanese games (NTSC). This initial form of encoding has less to do with piracy protection than with policing national airwaves. DVDs use a slightly different method in that they are divided between 8 regional encodings, roughly as follows: US/Canada (1), Europe/Middle East/Japan (2), Southeast Asia (3), Central/South America/Oceania (4), Russia/Africa (5), China (6), undefined (7), and international venues such as airports (8). For video games these region encodings work with and against the standard PAL/NTSC distinction: while Europe and Japan are both region 2, the difference between PAL and NTSC still limits play between them; conversely, while NTSC disks work easily in both Japan and the United States, the region encoding limits the ability to play each other’s disks. Both the PAL/NTSC distinction and region encoding have multiple purposes, including software piracy prevention, but in terms of translation they legitimize not translating for multiple regions.

As piracy is a problem for the game industry[82] and large amounts of piracy happen in certain regions (Asian regions especially, due to economic disparity, gray markets and governmental bans on consoles), there is a general belief that not supporting multiple languages will block game piracy: if the gray market version is unintelligible because it is in another language, a user may still buy the version in their own language. In other words, limiting the number of languages available limits the geographical range of a particular version of a game, which works against the black markets and for the game industry. Thus, there is an interesting convergence between business interests, the technology available, the developing techniques in programming games, and the general trend toward translational domestication and immediacy. The storage capacity limitations coupled with the use of cinematics and voices and the standardized practice of dubbing and replacing dovetail perfectly with the industry practice of localization as domesticating, immediate translation.

The goal of localization is to make the game ‘appropriate,’ and this goal is heavily influenced by the business elements of the localization industry. Localization is about profit, the bottom line, so the goal is to fit with user desires. Game localizers identify user desire solely with entertainment.[83] Entertainment and appropriate translation are here identified with helping the target player have the same experience that the source player had in the source context. Such a singular drive is quite different from literary translations that aim to abuse the user, or from linguistic interpretation and political translations that deal with the problems of modern political interaction. However, at base localization is still a matter of equivalence: the equivalent experience/feeling/affect.[84]

Insofar as the localization industry is a business, there is little one can say negatively about the practices enacted. Only popular games are localized, so translating them with the same money-making “experience” is better business practice. However, when one attempts to move beyond such market logics it is hard not to see the problems. Just as translation needs to be understood as important, powerful and dangerous, so too must localization be understood as a weighty practice. An industry that has globalization (g11n) as one of its prime terms must be aware that there is more to globalization than “the business issues associated with taking a product global.”[85] Just as globalization is a fraught term in the world, it must be problematized beyond its purely business meaning in localization.[86] Said simply, there is more to a game than the immediate localization of the foreign user’s experience.

One way in which localization has recently pointed toward both hypermediation and alternate forms of translation is the creation of multilingual editions of games. With the switch from CDs to DVDs and the move to downloadable software there has been a move to include multiple languages. DVDs have enough storage capacity to house multiple language audio tracks, and downloadable software is unlimited (if time consuming) as it relates solely to the system’s hard drive capacity. Because of this there has been some movement toward including multiple languages. One particularly interesting case is Square-Enix’s “international editions.” Particularly interesting about the international editions is that they started with only one language, Japanese, but included a few additional features (Final Fantasy VII: International Edition). They then turned into games that mixed English and Japanese, but were released solely in Japan. The audio tracks were English and there were Japanese subtitles, but the rest of the game was in Japanese (Final Fantasy X: International Edition, Kingdom Hearts: Final Mix). Part of the difference between the early and later international editions is the move from CD to DVD: there was little dialogue in the early version, but even in the DVD versions there was only a replaced audio track (the Japanese was replaced with ‘international’ English). A third movement made both English and Japanese audio tracks available, but only after finishing the game once: the initial playthrough necessitated a mixed English/Japanese experience with Japanese menus, written dialogue and subtitles, but English audio (Kingdom Hearts II: Final Mix+). Finally, a fourth movement is the full choice between English and Japanese with various subtitled languages (Star Ocean: Last Hope International). This progression of different styles of international edition implies that what was originally a gimmick has changed into a marketing decision based on the knowledge that there is an audience, and that this audience has spread outside of Japan.

These international editions have a tangled relationship to the concept of kokusaika [internationalization, or ‘international-transformation’] within Japan. Kokusaika itself is tied to ideas of westernization in the late Tokugawa and Meiji periods, and Americanization in the post World War II period. Kokusaika was seen as an important step of modernization in much of the discourse of the 19th and 20th centuries, but it is troubled in nationalist and essentialist discourses in particular.[87] One might argue that the Square-Enix games both support and trouble this kokusaika discourse: they embrace internationalization, but they maintain the importance of Japanese within the games. While the international edition allows multiple languages, it does so from a Japanese expansionist perspective. Language is never neutral, and by putting the lingua franca and Japanese forward as the only choices (with the other standard gaming languages such as French, German, Spanish and Italian as subtitle options) there is a definite movement to raise the importance and reach of Japanese as a language. Kokusaika is thus maintained, but with the exception of a continued presence (and even dominance) of Japanese. While I believe the international editions are on the right track toward a layered, foreignizing style of translation, they still exist in the context of Japanese politics.[88] This is similar to Venuti’s claim that Schleiermacher’s work offers a helpful corrective despite the German author’s 19th century chauvinism.

While the past thirty years have led to increased immediacy and region protections, new forms such as DRM routines and online portals such as Steam indicate a general belief that such region separations have ultimately failed to protect against piracy. Because the region encoding tactics to prevent piracy have failed, it is possible that a new era of localization is coming, but so far it has been relatively limited. Hopefully this is only momentary, and the hypermediacy that has been blocked out since the beginning of gaming will become visible, along with the existence of difference that translations and layers make visible. I will discuss some of these possibilities in the final section of this paper.

 

Possible Futures

I would like to conclude this paper with a discussion of two new trends in translation. Both are postmodern, intentionally unstable, and utilize the digital materiality. One trend destabilizes the translator, and the other destabilizes the translation. However, both trends can heighten the feeling of hypermediation and foreignization, which (according to Venuti) is helpful in the current translational climate.[89]

 

Destabilization of the Translator

The destabilization of the translator involves multiple translators, but a single translation. It has its history in the Septuagint, but its present locus is around dividing tasks and the post-Fordist, assembly-line form of production. Like the Septuagint, where 72 imprisoned scholar translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that, arguably, combines the knowledge of all translators involved. This trend of translation can be seen in various new media forms and translation schemes such as Wikis, the Lolcat Bible, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist due to the effort of their user base, which adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It is a democratic form of knowledge production and authorship that certainly has issues (among them the question of whether wikis are actually democratic and neutral), but for translation it enables new possibilities.[90] While wikis are generally produced in a certain language and rarely translated (as a translation would not be able to keep pace with the constant changes), the chunk-by-chunk form of translation has been used in various places.

One form of wiki translation is the Lolcat Bible translation project, a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” meme itself is a form of pidgin English where present tense and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can haz cheeseburger?”[91] The Lolcat Bible project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth

LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem.[92]

While the effort to render the Bible in this way is either amusing or appalling depending on your personal outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could also be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. According to the homepage, the Lolcat Bible project began online in July of 2007, and a paper version was published through Ulysses Press in 2010. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bible.[93]

FLOSS (Free Libre Open Source Software) Manuals and translations are a slightly more organized version of this distributed trend.[94] FLOSS is theoretically linked to Yochai Benkler’s “peer production,” where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and translations capitalize on this distribution of personal drives.[95] Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to the open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution, but also important is the initial belief that machine translations are currently unusable. It is the problems of machine translation that cause the need for human intervention in translation, be it professional or open.
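A minimal sketch of the kind of translation memory tool described here might work as follows; the stored segments and matching threshold are invented for illustration and are not FLOSS Manuals’ actual tooling:

# A toy translation memory: exact matches are reused directly, and near
# matches are suggested to the human translator. All entries are invented.
from difflib import SequenceMatcher

memory = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "Open the File menu.": "Ouvrez le menu Fichier.",
}

def suggest(segment, threshold=0.8):
    # Try an exact match first, then the closest fuzzy match above threshold.
    if segment in memory:
        return memory[segment]
    best, score = None, 0.0
    for source, target in memory.items():
        ratio = SequenceMatcher(None, segment, source).ratio()
        if ratio > score:
            best, score = target, ratio
    return best if score >= threshold else None

print(suggest("Click the Save button"))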

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate, string by string, the interface text that they used on a daily basis in English. Each phrase, such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?”, was translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into other languages. In Japanese, <LIKE> would literally have become <好き>, but was transformed to <いいね!> [good!]. Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate) with plenty of “arrrs” and “mateys.” The open process created usable material, such as Facebook in Japanese, but also things that would never happen due to bottom-line considerations, such as pirate, Indian, UK, and upside down ‘translations’ of English.
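
The aggregation logic at the heart of this process is simple enough to sketch. The following is a hypothetical reconstruction, not Facebook’s actual applet code: the string identifier, the submissions, and the bare majority-vote rule are all invented for illustration.

```python
from collections import Counter

# Crowd submissions for one interface string (all entries invented).
submissions = {
    "friend_request_accepted": [
        "{user}さんが友達リクエストを承認しました",
        "{user}さんが友達リクエストを承認しました",
        "{user}さんがあなたの友達リクエストを承認しました",
    ],
}

def winning_translation(string_id):
    """The crowd rule described above: the most recurring variant wins."""
    votes = Counter(submissions[string_id])
    translation, _count = votes.most_common(1)[0]
    return translation

# The winner is implemented; later, "native" users can keep revising it.
print(winning_translation("friend_request_accepted"))
```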

Wikis, FLOSS, and Facebook host translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook, functionality and user empowerment are highlighted, but profitability is always in the background; for FLOSS, user empowerment through translation and publishing is one focus, but a second is the movement away from machine translation; in all cases, but wikis particularly, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator.

 

Destabilization of the Translation

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up, the mass turned back into a literal tower of Babel: it is footnotes, marginal writing and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass implies the whole. The translation is destabilized by using new media’s digital essence to bring out a hypermediating translational style.

This style of translation is not new; it consists of the hypermediated translations that I discussed previously. It is side-by-side pages with marginal notes; it is Derridian translations; it is NINES and other multilayered digital scholarship; it is fan translations and metatitles; it is multilingual editions of games; it is modding. All of these exist, but not as a new methodology. The destabilization of the translation is a term for grounding these different styles as a methodology that utilizes forms of peer production (similar to the destabilization of the translator), but fully layers the results so that what is visible to the user is not the average but a mountain of possibilities to delve into or climb up. All of these types of translation exist, and the willing translators mentioned above are available, so the difficulty is not in making the many translations happen. Rather, the difficult task is rendering the multiplicity visible.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read, watch, or play two things at once? Books, films, and games provide multiple examples of how to deal with such an attention issue, but in a limited way. Footnotes, side-by-side pages, and subtitles are all hypermediating layers. However, the digital form presents new possibilities in that there is no space constraint and things may be revealed and hidden at the user’s command. There are interesting possibilities in how games can use their digital, programmed form and user/peer production to bring out new levels of the application and the experience. I will review the digital book and metatitle here, but I will focus on what I see as a new form of game translation that not only uses, but truly thrives off of, fan production.

Books are rather conservative. While many are in some ways open due to lapsed copyright, there is little invention happening to bridge different versions. While resources such as Project Gutenberg have opened thousands of texts to digital reader devices, they exist as simple text forms, just as other purchasable books exist as simple, immediate remediations of the original book form. However, a hypermediating variation would link these different versions and translations. At a click the reader could switch between Homer’s Odyssey in Greek and every single translation into English made in the 20th century. Of course, French, Japanese, German and various other translations would also be available, and the screen could be split to compare any of the above. With a slightly different (slightly less academic) mentality, the reader could peruse Jane Austen’s Pride and Prejudice on the left hand side of the screen and the recent zombie rewrite Pride and Prejudice and Zombies on the right hand side. This does not particularly advance the technology; it simply has a different relationship with the text, the author, and the translator. The key is to link the texts and make them available, even if it is through small micropayments for each edition.
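
One way to imagine the plumbing of such a reader is an edition registry aligned segment by segment, so that switching or splitting the view is a lookup rather than a new product. This is a speculative sketch; the work IDs, edition labels, and opening lines are placeholders.

```python
# Editions of one work aligned segment by segment, so the reader can flip
# between them, or view two side by side, at a click.
editions = {
    "odyssey": {
        "grc":       ["ἄνδρα μοι ἔννεπε, μοῦσα, πολύτροπον ..."],
        "en-butler": ["Tell me, O muse, of that ingenious hero ..."],
    }
}

def view(work, segment, left, right=None):
    """Return one segment in one edition, or two editions side by side."""
    texts = editions[work]
    if right is None:
        return texts[left][segment]
    return texts[left][segment], texts[right][segment]

# Split screen: Greek on the left, an English translation on the right.
print(view("odyssey", 0, "grc", "en-butler"))
```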

Films are interesting as there are already possibilities in play: multiple subtitle and audio tracks, and commentary tracks by stars, directors and others. Subtitles are a simple layer that has existed for almost a century. However, with the advent of digital disks the subtitle has been separated from the print itself, allowing the user to hide the subtitles or to choose which subtitles to view. Shortly after the introduction of DVD technology, better compression algorithms enabled multiple audio tracks, including commentary tracks. We are now in an era of Blu-Ray disks with more storage capacity and downloadable movie sites that allow the user to access material as desired. These already exist. What would be a step forward is the linking of fan translation and commentary tracks to the digital artifact itself. Files that play in sync with the film but must be started independently exist now. Three examples are the abusive subtitling that I discussed earlier through Nornes; RiffTrax, from the creators of Mystery Science Theater 3000,[96] which overdubs commentary onto various films, creating a sort of meta-humor; and fan commentary from the Leaky Cauldron,[97] one of many prolific Harry Potter fan sites on the Internet. All three of these are independent fan productions that are partially sanctioned by business. It would be highly beneficial to producers, prosumers and consumers to enable the direct inclusion of these modifications into the DVDs themselves. It would also enable a new understanding of the film where the meaning is not the surface, but the build-up of meaning provided by both the original creators and all others who play with and add to it.
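
Technically, such integration amounts to little more than treating fan tracks as peers of official ones on a shared timeline. Here is a hedged sketch: the cue format is a simplification of subtitle-style timing, and all cue text is invented.

```python
# Overlay tracks keyed by start time in seconds: an official subtitle
# track plus a fan commentary track merged onto one playback timeline.
official_subs = {12.0: "Where are we?", 15.5: "Follow me."}
fan_commentary = {13.0: "[fan] Note the mistranslated sign here."}

def cues_between(t0, t1, *tracks):
    """Collect every cue from every enabled track inside a time window,
    in playback order, so layers accumulate rather than replace."""
    merged = []
    for track in tracks:
        for start, text in track.items():
            if t0 <= start < t1:
                merged.append((start, text))
    return sorted(merged)

# A player polling the window 12-16s renders both layers at once.
print(cues_between(12.0, 16.0, official_subs, fan_commentary))
```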

Finally, we arrive at digital games, where some of the most interesting fan work has been done and partially integrated. This means that the way has been opened for a hypermediated translation, but it has, so far, remained unpaved. The destabilization of the video game translation would combine the burgeoning practice of multilingual editions, where there is a visible choice for the user between one language version and another, with the practice of allowing and integrating fan mods. Mods are game modifications, which could be additional maps, different physics protocols, alternate graphics, or a host of other alterations. Some of these, such as Team Fortress, have been wildly popular. However, ‘mods’ could be expanded to include alternate translations and dialogue tracks. The workers are there and available,[98] but so far these fan productions have faced nothing but cease and desist letters, virtual takedowns, and lawsuits.

With digital games the localization process has traditionally replaced one language, with its library of accompanying files, with another. However, as computer memory increases, the choice of one language or another becomes less of an issue, and certain platforms, such as the Xbox and the online portal Steam, provide multiple languages with the core software. This gives rise to the language option, where the game can be flipped from one language to another through an options menu. Some games put this choice in the options menu at the title screen. Examples[99] of this are Gameloft’s iPhone games (almost all of them, but including Block Breaker Deluxe, Hero of Sparta, and Dungeon Hunter) and Ubisoft’s Nintendo DS game Might and Magic: Clash of Heroes. Others have a hard switch that makes the natural language of the game correspond to the language of the computer system software, so that a computer running in English would have only English visible in the game, but if that computer’s OS switched to Japanese the game would boot with the Japanese language enabled. Square-Enix’s Song Summoner: Encore, Final Fantasy, and Final Fantasy II iPhone releases automatically switch between English and Japanese depending on which language the iPhone is set to. The Xbox 360 has a similar switch mechanism that requires the system to be switched to the desired language.[100] Between these two types are games played on the Steam system, such as Valve’s Portal and Half-Life 2, which allow the user to launch the game in a chosen language but do not require a system-wide switch. Finally, a few games allow the user to switch back and forth between languages. Square-Enix’s iPhone game Chaos Rings lets the user switch between English and Japanese from the in-game menu, allowing a rapid change of language at any time outside of conversation or battle. This last is the closest to a destabilization of the translation, as it allows the near simultaneous visibility of multiple languages.
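
These patterns reduce to a question of where the language decision lives: in an in-game menu, at launch, or in the operating system. A minimal sketch of that priority chain follows; the supported list and function names are illustrative, not any platform’s actual API.

```python
import locale

SUPPORTED = ["en", "ja", "fr"]

def resolve_language(menu_choice=None, launch_flag=None, fallback="en"):
    """Priority chain mirroring the patterns above: an in-game switch
    (Chaos Rings style) beats a per-launch choice (Steam style), which
    beats the hard switch to the system locale (iPhone/Xbox 360 style)."""
    if menu_choice in SUPPORTED:       # flipped in the in-game menu
        return menu_choice
    if launch_flag in SUPPORTED:       # chosen when launching the game
        return launch_flag
    system = (locale.getdefaultlocale()[0] or "")[:2]  # e.g. "ja" from "ja_JP"
    return system if system in SUPPORTED else fallback

print(resolve_language())                  # hard switch: follows the OS
print(resolve_language(menu_choice="ja"))  # soft switch: in-game override
```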

Integrating fan-created translational mods into the software itself would further destabilize the already unstable base of multiple visible languages. This integrated form would allow the user to switch among official localization, fan translation, and fan mod at their whim. The official version ceases to exist, and the user is allowed both to interact with other types of users and to create fully sanctioned alternative semiotic domains. The eventual ability to mix and match a HUD in English, subtitles in Japanese, and a fan translation in Polish would be a true destabilization.[101]
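
Structurally this requires little more than giving each channel its own language slot and treating fan packs as peers of official ones. A speculative sketch, with invented pack names and string keys:

```python
# One language slot per channel, each independently switchable.
channels = {
    "hud":       "en-official",
    "subtitles": "ja-official",
    "dialogue":  "pl-fan",      # a fan translation alongside official packs
}

packs = {
    "en-official": {"attack": "Attack"},
    "ja-official": {"attack": "こうげき"},
    "pl-fan":      {"attack": "Atak"},
}

def render(channel, key, default_pack="en-official"):
    """Look a string up in whichever pack the channel is set to, falling
    back to a default pack when a key is untranslated."""
    pack = packs[channels[channel]]
    return pack.get(key, packs[default_pack].get(key, key))

channels["hud"] = "pl-fan"      # the user remixes the layers at whim
print(render("hud", "attack"))
```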

Both the destabilization of the translator and the destabilization of the translation use new forms of fan and peer production to create a foreignizing, hypermediated translation. All of these things could be good in the current political moment, which equates difference with terrorism and thereby necessitates the translational replacement of all forms of difference with local variations. However, key to both destabilizations is that they are not simply utopian fantasies, but legitimately productive and ready to enact. It is my intent to build, and build upon, these possibilities for opening up new forms of translation in digital media in my dissertation project on games and localization.


[1] For an example of the lack of integration of alternate media in translation studies, see: Lawrence Venuti. The Translation Studies Reader. 2nd ed. New York: Routledge, 2004. On a particular attempt to integrate it, see: Anthony Pym. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004. On the distinct effort to consider ‘old’ media as ‘new’ see: Lisa Gitelman and Geoffrey B. Pingree, eds. New Media, 1740-1915. Cambridge: MIT Press, 2003.

[2] Antoine Berman. “From Translation to Traduction.” Richard Sieburth trans. (unpublished): p. 11.

[3] Serge Lusignan. Parler Vulgairement. Paris/Montreal: Vrin-Presses de l’Université de Montréal, 1986: pp. 158-9. Quoted in Berman. “From Translation,” p. 9.

[4] Berman, “From Translation,” p. 11.

[5] Berman, “From Translation,” p. 11.

[6] Roland Barthes. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007. Rosemary J. Coombe. The Cultural Life of Intellectual Properties: Authorship, Appropriation, and the Law. Durham: Duke University Press, 1998. Néstor García Canclini. Hybrid Cultures: Strategies for Entering and Leaving Modernity. Minneapolis: University of Minnesota Press, 2005. Koichi Iwabuchi. Recentering Globalization: Popular Culture and Japanese Transnationalism. Durham: Duke University Press, 2002. Koichi Iwabuchi, Stephen Muecke, and Mandy Thomas. Rogue Flows: Trans-Asian Cultural Traffic. Aberdeen, Hong Kong: Hong Kong University Press, 2004.

[7] See: Barthes, “From Work to Text.” Michel Foucault. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003. Lesley Stern. The Scorsese Connection. Bloomington; London: Indiana University Press; British Film Institute, 1995. Mikhail Iampolski. The Memory of Tiresias: Intertextuality and Film. Berkeley: University of California Press, 1998.

[8] Berman, “From Translation,” p. 14

[9] I use literary theories due to their prevalence within academia, but also because of their political nature. While other conceptualizations of translation avoid politics and ethics (particularly practical understandings of translation) comparative literary theories of translation highlight them: my underlying belief is that translation is both politically and culturally important.

[10] Jacques Derrida. “‘Eating Well,’ or the Calculation of the Subject: An Interview with Jacques Derrida.” In Who Comes after the Subject?, edited by Eduardo Cadava, Peter Connor and Jean-Luc Nancy, 96-119. New York: Routledge, 1991.

[11] George Steiner. After Babel: Aspects of Language and Translation. 3rd ed. Oxford; New York: Oxford University Press, 1998: p. 428.

[12] Ferdinand de Saussure, Charles Bally, Albert Sechehaye, and Albert Riedlinger. Course in General Linguistics. Translated by Roy Harris. LaSalle: Open Court, 1983 [1972]: p. 67.

[13] Saussure, Course, pp. 71-78.

[14] Saussure, Course, pp. 79-98.

[15] Jonathan D. Culler. Ferdinand De Saussure. Rev. ed. Ithaca, N.Y.: Cornell University Press, 1986: p. 132.

[16] Jacques Derrida. Of Grammatology. 1st American ed. Baltimore: Johns Hopkins University Press, 1976.

[17] Jacques Derrida. “Des Tours De Babel.” In Difference in Translation, edited by Joseph F. Graham. Ithaca: Cornell University Press, 1985: pp. 165-7.

[18] Jacques Derrida. “Living On. Border Lines.” In Deconstruction and Criticism, edited by Harold Bloom, Paul De Man, Jacques Derrida, Geoffrey H. Hartman and J. Hillis Miller. New York: Seabury Press, 1979.

[19] Jacques Derrida. “What Is a ‘Relevant’ Translation?” In The Translation Studies Reader: p. 443. (italics and brackets in text)

[20] Derrida, “‘Eating Well.’”

[21] Jacques Derrida. Specters of Marx: The State of the Debt, the Work of Mourning, and the New International. New York: Routledge, 1994.

[22] Philip E. Lewis. “The Measure of Translation Effects.” In Difference in Translation.

[23] Ironically, Spivak’s Derridian translation of Derrida’s Of Grammatology was successful in its abuse, but unsuccessful in getting her further translation jobs of Derrida’s works. Derridian translations are successful when they are unsuccessful.

[24] On the relationship between task, giving up and failure see: Paul De Man. “Conclusions: Walter Benjamin’s ‘the Task of the Translator’.” In The Resistance to Theory. Minneapolis: University of Minnesota Press, 1986: p. 80. For more on Derrida, Benjamin and De Man see: Tejaswini Niranjana. Siting Translation: History, Post-Structuralism, and the Colonial Context. Berkeley: University of California Press, 1992.

[25] Walter Benjamin. “The Task of the Translator: An Introduction to the Translation of Baudelaire’s Tableaux Parisiens.” In The Translation Studies Reader: p. 81.

[26] Benjamin. “The Task of the Translator,” p. 76.

[27] Emily Apter brings this out well in her work on translation and politics. Emily S. Apter. The Translation Zone: A New Comparative Literature. Princeton: Princeton University Press, 2006.

[28] Specifically, Robinson argues for the long lasting presence of Christian asceticism (both eremitic and cenobitic) coming from religious dogma, but leading into the word/sense debate. See: Douglas Robinson. “The Ascetic Foundations of Western Translatology: Jerome and Augustine.” Translation and Literature 1 (1992): 3-25.

[29] Jerome. “Letter to Pammachius.” Kathleen Davis trans. In The Translation Studies Reader: p. 28.

[30] John Dryden. “From the Preface to Ovid’s Epistles.” In The Translation Studies Reader, pp. 38-42.

[31] Roman Jakobson, Krystyna Pomorska, and Stephen Rudy, Language in Literature. Cambridge: Belknap Press, 1987: p. 429.

[32] Jakobson, Language in Literature, p. 434. There are interesting connections between formalism and Laura Marks’ work on digital translation. Marks argues that digitization necessarily robs things of certain qualities and this means they can be translated in interesting, new ways, but that they are forever robbed of originary elements. The digital becomes a universal language. See: Laura U. Marks. “The Task of the Digital Translator.” Journal of Neuro-Aesthetic Theory 2 (2000-02).

[33] Anton Popovič. Dictionary for the Analysis of Literary Translation. Edmonton: Department of Comparative Literature, University of Alberta, 1975: p. 6. Also see Niranjana’s discussion in Siting Translation, p. 57.

[34] I am skipping over large debates within game studies involving the question of the core of gaming: ludology and narratology. Roughly, whether the core of gaming is the ‘play’ or the ‘story.’ I skip this to save space, because it is a dead end that has been generally concluded with the answer of ‘both,’ because ludologists and narratologists are academics, but finally because ‘experience’ encapsulates both play and story.

[35] Carmen Mangiron and Minako O’Hagan. “Game Localization: Unleashing Imagination with ‘Restricted’ Translation.” Journal of Specialized Translation, no. 6 (2006): 10-21. Also see, Minako O’Hagan and Carmen Mangiron. “Games Localization: When Arigato Gets Lost in Translation.” Paper presented at the New Zealand Game Developers Conference, Otago 2004.

[36] Popovič, Dictionary, p. 11.

[37] Lawrence Venuti. “Foundational Statements.” In The Translation Studies Reader: p. 15.

[38] Schleiermacher is working with Dryden’s tripartite: metaphrase, paraphrase and imitation. In his understanding, then, word-for-word has been subsumed (since Jerome) for sense-for-sense, but imitation has been opened up as a larger (maligned) possibility.

[39] Friedrich Schleiermacher. “On the Different Methods of Translating.” In The Translation Studies Reader: p. 49.

[40] Schleiermacher. “On the Different Methods of Translating,” pp. 60-61.

[41] Antoine Berman. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992: p. 150.

[42] Berman, The Experience of the Foreign, p. 149.

[43] Lawrence Venuti. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994]: p. 15.

[44] Venuti, Translator’s Invisibility, p. 86.

[45] Venuti, Translator’s Invisibility, p. 98.

[46] Venuti, Translator’s Invisibility, p. 276.

[47] Venuti, Translator’s Invisibility, p. 85.

[48] Lawrence Venuti. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.

[49] Venuti, Scandals of Translation, p. 87.

[50] J. David Bolter and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.

[51] In this later work the metaphor has shifted to interfaces being both windows with immediacy and mirrors with reflection, but it is still connected to remediation with both immediacy and hypermediacy. Jay David Bolter and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003: p. 82.

[52] Metatitles are an extended form of subtitles that I first discussed in my Master’s thesis; Jerome McGann’s work, including IVANHOE and his Rossetti work, can be found through his website <http://www2.iath.virginia.edu/jjm2f/online.html>; Mods are fan/user created game modifications.

[53] Alexander R. Galloway. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006: pp. 70-84.

[54] Berman, “From Translation,” p. 6.

[55] Jacques Derrida. Glas. Lincoln: University of Nebraska Press, 1986 [1974].

[56] This application is for various ‘smart’ phones and the iPad, but the technology is still not utilized for eReaders. My point is that this lack exists not for technological reasons, but because of the ways the eReader is both imagined and actualized.

[57] For a general, early look at film translation see: Dirk Delabastita. “Translation and the Mass Media.” in Susan Bassnett and Andre Lefevere eds. Translation, History and Culture. London: Pinter Publishers, 1990.

[58] Lawrence W. Levine. Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America. Cambridge: Harvard UP, 1988. Referenced in Jennifer Forrest “The ‘Personal’ Touch: The Original, the Remake, and the Dupe in Early Cinema,” In Jennifer Forrest and Leonard R. Koos eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002: p. 102.

[59] As has been stated by many people in the 20th century, there is nothing objective, or reflective, about representation, and there never was for early cinema; however, this belief has never really gone away. See: Ella Shohat and Robert Stam. “The Cinema after Babel: Language, Difference, Power.” Screen 26.3-4, 1985: 35-58.

[60] This is regardless of corruption of subtitles per Abé Mark Nornes. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

[61] Arjun Appadurai. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press, 1996: particularly p. 39.

[62] For Japanese this is particularly a problem; for English this is less of a problem, especially for Americans, due to the assumption that English is a global language.

[63] On MLV see: Ginette Vincendeau. “Hollywood Babel: The Coming of Sound and the Multiple Language Version.” Screen 29.2 (1988): 24-39. On FLV see: Natasa Durovicová. “Translating America: The Hollywood Multilinguals 1929-1933.” In Sound Theory: Sound Practice, edited by Rick Altman, 138-53. New York: Routledge, 1992. Also, see: Nornes, Cinema Babel.

[64] See: Chon Noriega. “Godzilla and the Japanese Nightmare: When “Them!” is U.S.” Cinema Journal 27.1 (Autumn 1987): 63-77.

[65] These are visible in the United States, to which I largely refer, but there is another history within India’s Bollywood (often illegal/unofficial) remake practices.

[66] Ironically, the actual words she uses, ホスト, ホステス and キャバレー, are all foreign loan words in katakana. Thus, even her word choice is based in an awkward schizophrenia between local and foreign.

[67] Abé Mark Nornes. “For an Abusive Subtitling.” Film Quarterly 52, no. 3 (1999): 17-34.

[68] L10n is the industry shorthand for localization. There are 10 letters between the L and the n. In addition to localization, the industry uses i18n as shorthand for internationalization and g11n for globalization.

[69] For a discussion on the demonstration and visibility of these early games, see: Van Burnham. Supercade: A Visual History of the Videogame Age 1971-1984. Cambridge: MIT Press, 2003.

[70] In particular see Michel Foucault on the new regime of power/knowledge through a new way of seeing, and Lisa Cartwright on the problems of medical imaging technologies and truth. See: Lisa Cartwright. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press, 1995. Michel Foucault. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books, 1975. Marita Sturken and Lisa Cartwright. Practices of Looking: An Introduction to Visual Culture. Oxford; New York: Oxford University Press, 2001.

[71] Mary Flanagan. “Locating Play and Politics: Real World Games & Activism.” Paper presented at the Digital Arts and Culture, Perth, Australia 2007: p. 3.

[72] See: Gérard Genette. Palimpsests: Literature in the Second Degree. Lincoln: University of Nebraska Press, 1997; Gérard Genette. Paratexts: Thresholds of Interpretation, Literature, Culture, Theory. Cambridge; New York, NY: Cambridge University Press, 1997.

[73] LISA is “An organization which was founded in 1990 and is made up mostly software publishers and localization service providers. LISA organizes forums, publishes a newsletter, conducts surveys, and has initiated several special-interest groups focusing on specific issues in localization.” Bert Esselink. A Practical Guide to Localization. Rev. ed. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2000: p. 471.

[74] LISA quoted in Esselink, A Practical Guide to Localization, p. 3.

[75] Lev Manovich. The Language of New Media. Cambridge: MIT Press, 2001.

[76] On experience as the core equivalence see the work of Carmen Mangiron and Minako O’Hagan: Carmen Mangiron. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23; Mangiron and O’Hagan, “Game Localization;” O’Hagan, Minako. “Conceptualizing the Future of Translation with Localization.” The International Journal of Localization (2004): 15-22; Minako O’Hagan. “Towards a Cross-Cultural Game Design: An Explorative Study in Understanding the Player Experience of a Localised Japanese Video Game.” The Journal of Specialized Translation, no. 11 (2009): 211-33; O’Hagan and Mangiron, “Games Localization.”

[77] Esselink, A Practical Guide to Localization, p. 46.

[78] Frank Dietz. “Issues in Localizing Computer Games.” In Perspectives on Localization, edited by Kieran Dunne. Amsterdam; Philadelphia: John Benjamins Publishing, 2006. Also, Mangiron and O’Hagan, “Game Localization.”

[79] The move to CG from live action might also be a contributing factor to the rise of domesticating, replacement localization. Technically, gaming started with live action cut-scenes with big budgets and famous actors in the 1990s (Wing Commander III (1994); Star Wars: Jedi Knight: Dark Forces II (1997)), but it moved to CG cut-scenes using the game engine by the late 1990s and early 2000s (Half-Life (1998), Star Wars: Jedi Knight II: Jedi Outcast (2002)). In part this could be seen as a budget issue, but in part it is an immersion issue, as live action cut-scenes could be considered more jarring due to their difference from the regular game.

[80] This is, of course, ironic as cinema often overdubs the dialogue into the film due to the difficulties of recording clear dialogue when filming.

[81] This is an incredibly rough definition especially due to how ‘piracy’ relates to fan production, modding and copyright.

[82] Piracy is rampant with PC games, due to the ease of duplicating CDs and DVDs, and only slightly better with console games where cartridges are harder to duplicate. For various views on game piracy see: Ernesto. “Modern Warfare 2 Most Pirated Game of 2009.” TorrentFreak. Posted: December 27, 2009. Accessed: June 6, 2010. <http://torrentfreak.com/the-most-pirated-games-of-2009-091227/>. David Rosen. “Another View of Video Game Piracy.” Kotaku. Posted: May 7, 2010. Accessed: June 6, 2010. <http://kotaku.com/5533615/another-view-of-video-game-piracy>. In general, also see the blog Play No Evil: Game Security, IT Security, and Secure Game Design Services, particularly the “DRM, Game Piracy & Used Games” category: <http://playnoevil.com/serendipity/index.php?/categories/7-DRM,-Game-Piracy-Used-Games>.

[83] Mangiron and O’Hagan, “Game Localization.”

[84] That the equivalent experience comes from, and aims toward, generic cultural attributes of a presumed group, and not a complex, real group, is another problem entirely.

[85] Esselink, A Practical Guide to Localization, p. 4.

[86] Appadurai, Modernity at Large. Toby Miller, Nitin Govil, John McMurria, Richard Maxwell, and Ting Wang. Global Hollywood 2. London: BFI Publishing, 2005. John Tomlinson. Cultural Imperialism: A Critical Introduction. Baltimore: Johns Hopkins University Press, 1991.

[87] Harumi Befu. Hegemony of Homogeneity: An Anthropological Analysis Of “Nihonjinron. Melbourne: Trans Pacific Press, 2001. Stephen Vlastos. Mirror of Modernity: Invented Traditions of Modern Japan. Berkeley: University of California Press, 1998. Tomiko Yoda and Harry D. Harootunian. Japan after Japan: Social and Cultural Life from the Recessionary 1990s to the Present. Durham: Duke University Press, 2006.

[88] I have written about both the politics of Square-Enix as a Japanese company and the International Edition as a political force elsewhere. See: William Huber and Stephen Mandiberg. “Kingdom Hearts, Territoriality and Flow.” Paper presentation at the 4th Digital Games Research Association Conference. Breaking New Ground: Innovation in Games, Play, Practice and Theory. Brunel University, West London, United Kingdom. September, 2009; Stephen Mandiberg. “The International Edition and National Exoticism.” Paper presentation at Meaningful Play. Michigan State University, East Lansing. October, 2008.

[89] There are serious issues regarding labor and these two trends of translation. One is the labor of fans to create translations. This is alleviated through micro-payments for the additional localization packages. They must receive some amount of compensation for their labor, as this situation is dangerously close to exploitation. The second issue is related to the de-skilling of professional translators and localizers due to the possible disappearance of their work to the fans. This is an issue, but micro-payments and the necessity of companies to pay localizers for the primary localizations should alleviate this possible de-skilling somewhat. These are matters that demand more attention than I give them in the present paper.

[90] See: Joseph Reagle. Good Faith Collaboration: The Culture of Wikipedia. Cambridge: MIT Press, 2010.

[91] Rocketboom Know Your Meme. <http://knowyourmeme.com/memes/lolcats>; I Can Has Cheezburger. <http://icanhascheezburger.com/>. Hobotopia. <http://apelad.blogspot.com/>.

[92] LOLCat Bible Translation Project. <http://www.lolcatbible.com/index.php?title=Genesis_1>.

[93] A slightly different translation project that utilized the masses is Fred Benenson’s Kickstarter project Emoji Dick. Benenson used Kickstarter, an online funding platform, to fund a translation of Moby Dick into emoticons using Amazon’s Mechanical Turk. Thousands of individual Mechanical Turk users were paid pennies to translate individual sentences into emoticons, and the results were published. See: <http://www.kickstarter.com/projects/fred/emoji-dick>.

[94] FLOSS Manuals. <http://en.flossmanuals.net/>.

[95] Yochai Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press, 2006.

[96] RiffTrax. <http://www.rifftrax.com/>.

[97] The Leaky Cauldron. <http://www.the-leaky-cauldron.org/features/dvdcommentaries>.

[98] Fan translations and retranslations have both existed over the past decades. For instance, see the Chrono Trigger retranslation <http://www.chronocompendium.com/Term/Retranslation.html>, the Mother 3 fan translation <http://mother3.fobby.net/>, and the Seiken Densetsu 3 fan translation <http://www.neillcorlett.com/sd3/>.

[99] There are innumerable examples of each type. I am simply listing ones that come to mind.

[100] The Xbox 360 information comes from Rolf Klischewski. IGDA LocSIG mailing list. May 31, 2010.

[101] While Dyer-Witheford and De Peuter would likely dismiss this industry-integrated solution as a form of apology for Empire, I prefer to think of it as a dialogic solution. See: Nick Dyer-Witheford and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009. Mikhail Bakhtin, The Dialogic Imagination: Four Essays. Austin: University of Texas Press, 1981.

On Translation and/as Interface

I. Windows, Mirrors and Translations

Within their book Windows and Mirrors, J. David Bolter and Diane Gromala discuss the interface of digital artifacts (primarily artistic ones, but that is in large part due to their SIGGRAPH 2000 sample set) as having two trends. The first is the invisible window, where we see through the interface to the content; the other is the seemingly maligned (at least recently, and in much of the design literature) reflective mirror, which reflects how the interface works with/on us as users.

This is similar in ways to Ian Bogost and Nick Montfort’s Platform Studies initiative where the interface exists between the object form/function and its reception/operation, and this interface can do many things depending on its contextual and material particulars. We need only look at the difference between Myst with its clear screen and Quake with its HUD, or between Halo and its standard gamepad and Wario Ware: Smooth Moves with its wiimote utilization to see the range.

However, another thing that the discussion of windows and mirrors, immediacy and hypermediacy, seeing through and looking at brings up when paired with the interface is translation. A translation is also an interface. It can be a window or a mirror, transparent or layered: you can see through it to some content, or you can be forced to look at it, at the form and the translation itself.

But thinking of translation as an interface in the Bolter and Gromala sense, or as Bogost and Montfort’s interface layer, is unusual. The usual move is to place translation outside of the game as a post-production necessity that enables the global spread of the product or, at best, as an integrated element of the production side that minimally alters the text so that it can be accepted in the target locale. Even researchers within the field of game studies generally ignore the language of the game: nobody asks what version the researcher played because we all recognize that we play different versions; more important is that the researcher played at all.

So translation’s place is in question. Is it production? Post-production? Important? Negligible? And how does one study it? We can barely agree upon how to study play and games themselves, so surely this is putting the carriage before the horse (or maybe some nails on the carriage before both). But, no, I still wish to follow through with this discussion, as I believe it can be productive. My question is how translation relates to games, and hopefully I can come up with a few thoughts/answers if not a single ‘truth.’

II. Translation and Localization

As Heather Chandler has so wonderfully documented, the translation of games has a variable relationship to the production cycle. At one point it was completely post-productive and barely involved the original production and development teams. At its earliest it was simply the inclusion of a translated sheet of instructions to aid the user in deciphering what was a game in a completely foreign language. This still exists in certain locations, especially those with weaker linguistic and monetary Empires (obviously not English, but ironically this includes China, where games are often gray or black market Japanese imports). This type of translation, called a non-localization, has slowly given way to more complete localizations, including “partial” and “full” localizations. Partial localizations maintain many in-game features: menus and titles switch language, and audio may remain as is, but subtitles will be included. In contrast, a full localization tends toward altering everything to a target preference, including voices, images, dialogue, background music, and even game elements such as diegetic locations. As the extent of localization increased, the position of translation in the production cycle (temporally and in importance) changed. It moved forward and needed pre-planning for nested file structures. It also grew in importance, so that more money might be spent to ensure a better product.

However, other than a few gaffes like “all your base” and other poor translations from the early years, game translation has increasingly become invisible. This invisibility, or transparency, has been written about extensively by Lawrence Venuti regarding literary translation, the status of the translator, and the relationship of global to national cultural production. For my purposes here I will simply say that he finds fluent translations to be a problem (in the context of American Empire) and that current game localization practices (which are multi/international, but in many ways American-centric) do exactly what he claims is bad. We don’t need to accept his arguments regarding empire and discursive regimes of translation (although I do), but we should be aware of the parallels between what he finds through literary analysis and translation reviews, and the way that nobody even talks about a game’s translation.

So the industry hides translation. But why does the academic community ignore it? Is it not a part of games? Maybe. But is it a part of play?

III. Ontology

Ontologies of play typically exclude translation. This is most obviously demonstrated in Jesper Juul’s summary of common game definitions, which he uses to form his own classic game model. Rules are all well and good, but all games have a context, and it is this context that Juul misses when he dismisses the idea of “social groupings” (Juul 34). Juul pulls this from Huizinga, and it is key that it relates to Huizinga’s primary contribution: the magic circle and the “ins” and “ofs” of play and culture.

I would argue that games promote social groups, but they also form in social groups, and language is crucial to this as an important (perhaps primary) marker of a social group. However, in Juul’s final analysis “the rest of the world” has almost entirely been removed as an “optional” element (41). It is one thing to say that the outcome might affect the world, but it is another to say the game can only be created through that world and that its mere playing affects the world. Juul even acknowledges this in the conclusion to the chapter, where he notes that pervasive and locative games break the rule. However, I would still argue that even the classic model does not obey the “bounded in space and time” principle.

The former can be demonstrated through Scrabble: a game created in English with strict rules, negotiable outcomes, player effort, attachment, valorization of winning, and many ways to win. But the game is completely attached to English. The letters have point values based on ease of use, and the scarcity of each letter is based on its common usage. The game is designed around English, and one cannot play it with other languages. Take Japanese: even if one were to Romanize the characters one wouldn’t have nearly enough vowels, and if one replaced all of the characters with hiragana there are still far too many homonyms to make a meaningful/difficult game. Japanese Scrabble might be possible, but it would need to be created by changing a great deal of the game. It is bounded in space and time, but contextually so.

For the latter we can return to Huizinga and Caillois, who both locate play/games within a relationship to culture. Their teleological and Structuralist issues aside, it is important not to simply separate games (the text) from culture, time, and place (the context) in a reductively formal analysis. Huizinga links play to culture as a functional element: play’s rules have a purpose even if that purpose has changed. Caillois notes a key association between types of play and particular societies. Games may be a separate place, but they affect the real world and vice-versa.

IV. Platform Studies

So context is important. Essential even. Let’s tack it on and see what happens. Or better yet, let’s say it’s pervasive and inseparable, but also difficult to distinguish. This is much like Bogost and Montfort’s Platform Studies model, so let’s see how translation could be integrated into that model.

Here I will primarily use Montfort’s earlier conceptualization of platform studies from his essay “Combat in Context.” Montfort moves toward a slightly simplified five-layer model from Lars Konzack’s seven-layer model by moving cultural and social context from a layer to a surrounding element. However, it is interesting that while he moves context to a surrounding element, it is the platform that is key for him and Bogost. Everything in the model is reliant on the platform.

As the base level, the platform enables what can be created upon it. It is the question of whether the game appears on a screen; whether it plays DVDs or cartridges or downloads files; how big those are; and what size of game is allowed on them. It is the capabilities of the system and what these enable. However, the platform layer exists in a context both technological and socio-cultural. The processor chip of the platform is in a particular context and limits the platform, but the existence of a living room with enough space to move can also limit the platform.

Second is the game code. The switch from assembly to higher-level programming was enabled by platform advancements, but this also enabled great differences in the further layers. The way the code exists is also integrally related to linguistics/language. Translating assembly code is painstaking and almost always avoided. The era of assembly code was also the era of in-house translations and non- or partial localizations. In contrast, C and its derivatives enable greater linguistic integration, and as long as programs in higher-level code are written intelligibly, translating them is possible. Context within the game code involves language. This much is obvious, as code is language. But I mean something further. I mean that there is a shift in allowances along the way that reveals how real world “natural/national” languages become integrated, but always subsumed under machine languages.
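
The difference is easiest to see as code. In the hard-coded (assembly-era) style the text lives inside the program, and translation means editing the program; in the externalized style the program references keys, and each natural language is just another data file. The layout and keys below are my illustration, not any particular engine’s format.

```python
import json

# Hard-coded style: translating this string means changing the code itself.
def greet_hardcoded():
    return "Welcome, hero!"

# Externalized style: the code asks for a key; languages are data.
STRINGS = json.loads(
    '{"en": {"greet": "Welcome, hero!"}, "ja": {"greet": "ようこそ、勇者よ！"}}'
)

def greet(lang="en"):
    return STRINGS[lang]["greet"]

print(greet("ja"))  # translated without touching a line of game logic
```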

Third is the game form: the narrative and rules. What we see, hear and play (if not ‘how’ we see, hear and play). This is the non-phenomenological game. The text, as it is. Of course, if it is the text then what is the surrounding context other than everything?

As we’ve seen from Juul, the rules claim a kind of languagelessness. We enter a world that has a set of rules that are separate from life, and this prevents one from linking the game to life. But the narrative, if one does not think it an inconsequential thing tacked onto the essential rules, is related to contextually relevant things and presented in linguistically particular ways. Language, then, is here as well, and translation plays an important role. In many ways this is the main place in which one might locate translation, but only if one is a narratologist. If the story is of prime importance, form is where translation exists.

The fourth level is the interface. Not the interface that I began with, at least not quite, or not yet, but the link between the player and the game: the “how” one sees, hears and plays the game. To Bogost and Montfort this is the control scheme, the wiimote and its phenomenological appeal compared to the gamepad or joystick, but it is also the way the game has layers of information that it must communicate to the user. The form of the game leads toward certain options of interface: a PVP FPS must have easily read information that allows quick decisions and a full game-time experience, while a slow RPG can have dense, opaque layers of interface that force the user to spend hours making decisions in non-game time.

The interface also enables certain things. A complicated interface is hard to pick up and understand, but a simple one is easy. This is a design principle that Bolter and Gromala contest, but it has levels of truth in it. A new audience is not likely to pick up the obscenely difficult interface layering of an RPG or turn-based strategy game, but a casual point-and-click may be easily picked up and learned (if just as easily put down and forgotten).

In some ways this is also where translation exists, and in some ways it isn’t. Certainly the GUI’s linguistic elements can be translated, but more often they are programmed in a supposedly non-linguistic and universal manner. [heart symbol] stands for life and [lightning bolt] stands for magic or energy, or life is red and energy/magic is blue. Similarly, the audio cues are often untranslated. And controls mainly stay the same. Perhaps one of the few control changes at the interface level is the PlayStation’s alteration of O, or ‘maru,’ for ‘yes’ and X, or ‘batsu,’ for ‘no’ in Japanese to X, or check, for ‘yes’ and O for ‘no’ in English.
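
Expressed as data, the swap is trivial, which is perhaps why it is one of the few interface elements that does get localized. The mapping table below is my own illustration of the convention, not Sony’s implementation.

```python
# The O/X swap as a per-locale mapping (only the button names are PlayStation's).
CONFIRM_CANCEL = {
    "ja": {"confirm": "circle", "cancel": "cross"},  # maru = yes, batsu = no
    "en": {"confirm": "cross",  "cancel": "circle"},
}

def action_for(button, lang):
    """Translate a physical button press into confirm/cancel per locale."""
    for action, assigned in CONFIRM_CANCEL[lang].items():
        if assigned == button:
            return action
    return None

print(action_for("circle", "ja"))  # 'confirm'
print(action_for("circle", "en"))  # 'cancel'
```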

The fifth level is reception and operation: how the user and society receive the game, how it has come from prequels and gone to sequels, its transmedial or generic reverberations, and even the lawsuits and news surrounding it. All of these point outside of the game, but how does one then separate context? Is the nation the receiver or the context? Is the national language or dominant dialect part of the level or part of the surrounding context? Is it affected by the game, or can it then affect the game? And even if it affects the game by being on the top layer, is it negligible in its importance? Is this another material vs. ideological Marxist fight for a new generation?

A short answer is that Bogost and Montfort answer all of this by putting context as a surrounding element, but they also fail to highlight its importance. By pushing context out to the surrounding bits, the model essentializes the core and approves of an analysis that does not include the periphery. The core can be enumerated; the periphery can never be fully labeled or contained.

Elements of importance are too destabilized to be meaningful when analyzed according to the platform studies model. Translation is a prime example, but race and sexuality are equally problematic. Their agenda is not contextual, but formal. Mine is contextual and cultural.

V. Translation as Interface

The goal of localization is to translate a game so that a user in the target locale can have the same experience as a user in the source locale. For localization, then, translation is about providing a similar fifth-level reception and operation experience. However, to provide this experience the localizers must alter the game form level by physically manipulating the game code level. The interface, beyond minor linguistic alteration, is not physically altered, and yet it is the metaphor for what is being done to the game itself. The translation of a game, like Bolter and Gromala’s critique of the interface as window, attempts to transparently allow the user to look into a presumed originary text or, in the case of games, into the originary experience. It reduces the text to a singular experience/text. However, the experience and text were never singular to begin with. In translations, too, we need mirrors as well as windows, so how can we make a translation that reads like a mirror by reflecting the user and his or her own experience?

First, all of Bolter and Gromala’s claims against design’s obsession with windows and transparency are completely transferable to games as digital artifacts and to the localization industry’s professed agendas. Thus, the primary necessity is to acknowledge the benefit of a non-window translation. Second, the translation must be put in as a visible, reflective interface that shows the user’s playing particulars, the original’s playing particulars, and the way that the game form and code have been changed in the process. This could be enabled by a more layered, visible, foreignizing translational style. Instead of automatically loading one version of the game, the user should be required to pick a translation and be notified that they can pick another. Different localizations should be visibly provided on a single medium. Alternate, fan-produced modification translations should be enabled. If an uncomplicated translation-interface is an invisible and unproductive interface, then a complicated translation-interface is a visible and productive one. Make the translational interface visible.

VI. References

  • Bolter, J. David, and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003.
  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.
  • Chandler, Heather Maxwell. The Game Production Handbook. 2nd ed. Hingham: Infinity Science Press, 2009.
  • Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge: MIT Press, 2005.
  • Montfort, Nick. “Combat in Context.” Game Studies 6, no. 1 (2006).
  • Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge: MIT Press, 2009.
  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994].

Localizing Visibly Ideological Material

Is it possible to localize America’s Army? How about Under Ash? Finally, what about Kingdom Hearts? The initial answer for both America’s Army and Under Ash is generally ‘no.’ It is not considered possible to localize such strongly ideological games because their ideological elements are a central feature, the content itself, and yet to localize a game is to take out such particulars and make the game legible to an alternate audience. In order to localize America’s Army it would be necessary to take out the America element. Similarly, to localize Under Ash it would be necessary to remove the Hezbollah part. Subsequently it would be necessary to insert similarly understandable, equal yet different, elements in their place. Such a task is generally considered, if not impossible, incredibly difficult.

However, I want to answer that, yes, it would be possible to localize either game using the standard process of localization, but that the results would be meaningless. Both an America’s Army that did not help recruit cadets for the Army and an Under Ash that did not demonstrate a way to fight against incursions in Palestine would be so far divorced from their original texts that calling them translations, or in any meaningful way related to the originals, would be false. And yet that is largely what the localization of Kingdom Hearts, a story created within the Japanese cultural context but localized and transferred to America, does.

This statement builds off of arguments I have made previously with William Huber at the blog Gummi Ship, so I will skip going over those arguments extensively. The gist is that the allegorithmic (Galloway 2006) logic of Kingdom Hearts reproduces American Imperialism within the 20th century. Your main task within the game is to enter, and control the entry into, other worlds [countries] in order to aid/redirect their cultural politics in a manner highly reminiscent of developmental theory (Rostow 1960, Schramm 1964). The point for Kingdom Hearts is that barging into these countries is problematized within the games, especially by having the Japanese player act the role of the American side, and through the mixing of Japanese and English in the so-called International Final Mix, thereby highlighting the problems of American exceptionalism. The localization removes these elements, places the American players within their own standard role, and eliminates any element of internationalism that was otherwise visible through the mixture of languages.

The point here is that Kingdom Hearts is just as ideologically charged as America’s Army and Under Ash, even if this ideology is slightly submerged below the surface. However, even so it is translated/localized without consideration. Importantly, such ideological changes happen with the localization, but they are not considered as really being changes at all.

So, I suppose my point is that translating ideologically prone games is impossible, but localizing them is certainly possible and done where you least expect it. But again, is that a good thing or a bad thing?

References:

  • Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006.
  • Rostow, W. W. The Stages of Economic Growth, a Non-Communist Manifesto. Cambridge [Eng.]: University Press, 1960.
  • Schramm, Wilbur. Mass Media and National Development: The Role of Information in the Developing Countries. Stanford: Stanford University Press, 1964.

On Localization

After reading Heather Chandler’s Game Localization Handbook I’ve come to realize that what I am suggesting is not impossible, and despite the LocSIG response it is not particularly problematic. It is, however, an as-yet unset standard, especially in the US, but also in other smaller linguistic locales and at smaller companies. However, I also cannot emphasize enough that it is not economic suicide.

Essentially, the suggestion is to enable multilingual applications in an open way. Such multilingual versions are becoming more reasonable as the international market is further acknowledged. It is not unreasonably expensive for the large American/English-based developers, where i18n/L10n is a viable/necessary strategy. It simply requires an extra step of planning, not only for L10n-friendliness but for integration. As the companies controlling releases, Sony, Nintendo and Microsoft can set standards in certain ways. One would be to require i18n as a standard. Such a standard would be beneficial for larger companies as it would entail the greater possibility of foreign releases, even as gray market releases.

Further, if localization is integrated in a patchable model, the gray market becomes less sensible, as games can be sold ‘language-bare’ and localized assets can then be purchased through micro-payments. This allows the fanatics to get what they want and the companies to monitor things.
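
A hedged sketch of what that distribution model might look like: the store contents, prices, and payment hook are all invented, but the flow (ship language-bare, charge a micro-payment, patch the pack in) is the one proposed above.

```python
installed = {}  # language code -> patched-in asset pack

STORE = {
    "de": {"price_cents": 199, "assets": {"start": "Spiel starten"}},
    "ko": {"price_cents": 199, "assets": {"start": "게임 시작"}},
}

def purchase_pack(lang, pay):
    """Charge a micro-payment, then register the localized assets."""
    pack = STORE[lang]
    pay(pack["price_cents"])          # payment handler supplied by the platform
    installed[lang] = pack["assets"]  # patched in like any official language

purchase_pack("de", pay=lambda cents: print(f"charged {cents} cents"))
print(installed["de"]["start"])
```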

In the case of smaller companies this could be seen as problematic, as they must also do more work, but as things become more international, fan-based L10n might happen more. An example of this is Basilisk Games’ ‘language packs’ for Eschalon Book II. Such language packs are partial localizations (if that), but they might be extended to fuller localizations by changing non-linguistic elements in the future. For postcolonial/minority languages, forcing internationalization is a problem in that it forces less defensible positions. However, in order to force the dominant sides to be slightly more international, the international standard must be made on all sides.

The trick is in asset integration. As long as there are unlimited slots for languages within a sensibly named schema, there should be no problem. Additional languages simply extend the list, in the same way that OS language integration keeps the installed options visible. Other, uninstalled languages are a grayed-out option: neither out of sight, nor out of mind.
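
The schema itself can stay as simple as a manifest with open-ended slots, as in this sketch (all names illustrative): installed packs are active, while known-but-uninstalled packs remain listed, grayed out like an OS language menu.

```python
# A language manifest with open-ended slots.
manifest = {
    "en": {"installed": True},
    "ja": {"installed": True},
    "pl-fan": {"installed": False},  # visible and purchasable, not yet present
}

def language_menu(manifest):
    """Render every known slot; uninstalled slots show but are disabled."""
    for code, meta in sorted(manifest.items()):
        state = "available" if meta["installed"] else "grayed out"
        print(f"{code:8} [{state}]")

manifest["eo"] = {"installed": False}  # new slots simply extend the list
language_menu(manifest)
```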

The available spread of Loc Kits would also allow further translations for political and/or alternate linguistic efforts.

The fact of play is universal, but different people get their jollies in different places. As I said a few months ago, some people like masocore. Well, some people like Polish audio with German subtitles, or Korean audio and English subtitles, or English subtitles and no audio. Having the option is beneficial for making money in international markets. Who knows what people really want, what they’ll use if they have it, and what is best?

And of course, also important is the belief that there are long-term benefits to players being acculturated to non-local content. That is not happening for some (the US), but it is for others. Such an imbalance has global/political ramifications beyond fun.

If global culture is really supposed to bring us together, it should do so in a way that is not determined by businesses deciding what becomes a locale and forever separating groups based on those locales. Industry determinations are not simply natural: they affect the groups as well.

A lot of this is discussed in Anthony Pym’s Moving Text, but it isn’t much of a presence in other translation or localization writings. It is important to discuss this sort of thing, especially before things are standardized.

Referenced Books:

  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham, Mass.: Charles River Media, 2005.
  • Pym, Anthony. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004.