Toward a Multi-Layered Digital Translation Methodology (Qualifying Paper #1)

Introduction: ‘From Translation to Traduction’ to Localization

In this paper, I approach new ways of translating digital media texts — from digital books to software applications, but particularly my own focus on video games — by mixing traditional translation theory and new media theory. There are similarities between these two fields, but they do not refer to each other. Translation theory rarely looks to films and television, let alone websites, software and games; new media theory fetishizes the ‘new’ and rarely considers that it’s all been done before.[1] I cross the fields because there are mutual benefits to be had by doing so: translation can get new material practices; new media can get more history. I also cross the fields because that is what I see as the work of Communication. Finally, I cross the fields because my own work on video game translation can emerge from their crossing.

While I have already started this paper with confusion (complexity and fusing togetherness), the word ‘translation’ itself has a confused (or perhaps, defused) past. As Antoine Berman notes, it is only in the modern period (post-1500) that the word (which became ‘traduction’ in the Romance languages, though not in English) has taken on its present meaning.[2] Previously, the word (‘translation’) had an unstable meaning because writing itself was never considered the originary act of an author. Instead, all writing, from musing, to marginal notations, to transcriptions, to commentary, to linguistic alteration was considered translation. We are in the process of discursively moving back to the earlier understanding of the word.

The earlier understanding, ‘translation,’ comes from the Latin translatio, which can include the transportation of objects or people between places, transfer of jurisdiction, idea transfer, and linguistic alteration.[3] As Berman stresses, the premodern understanding of translation is as an “anonymous vectorial movement.”[4] In contrast, the post-1500 term, ‘traduction,’ signifies the “active energy that superintends this transport – precisely because the term etymologically reaches back to ductio and ducere. Traduction is an activity governed by an agent.”[5] For Modernity and its lauded author, this “active movement” through a subjective traducer makes sense, as it distances the iterations by emphasizing a particular hierarchy of original over derivative. However, in a Postmodern culture where global flows and exchanges have moved well away from the author function and the primacy of the work, it is helpful to understand the elements of translation that were lost “vectors” in the move to traduction.[6]

For Romance languages where ‘translation’ became ‘traduction,’ certain formal and temporal vectors have been lost and taken up by other concepts such as adaptation, repetition, convergence, and intertextuality. While all of these terms have their particulars, intertextuality is a useful example due to its link with postmodernity and the move away from grand theories.[7] With postmodern intertextuality there is no singularity of a work. Rather, every work is a text of borrowed themes, images, and sections. Intertextuality follows the formal vector of transformation, which has left translation, but it does not consider power and difference. In the early 21st century United States context, both power and difference are increasingly important and yet elided.

Some vectors were never actually lost in English, as it never switched over to the word traduction. As Berman notes, “English does not ‘traduce,’ it ‘translates,’ that is, it sets into motion the circulation of ‘contents’ which are, by their very nature, translinguistic.”[8] As the problematically designated world language, English sets itself up as a translinguistic universal, but it does so in opposition to a host of other languages that have switched over to thinking about translation as the necessary and active linguistic alteration to move a text from one place to another. Similarly, while there is an underlying energy that fuels the translational movement of a modern video game over space, there is a simultaneous understanding that nothing the game translator does can change the game as they are not changing the play level. Just like English, play is translinguistic and universal. Current forms of game translation, then, have retained a link to some of the anonymous vectors of translation.

I define translation as the ‘carrying over’ of a text from one context to another, where context can be understood as spatial, formal, or temporal. This broad definition begins to reclaim previously lost vectors, particularly a criticality necessary for the analysis of video games, which are currently exempt from such analysis as they are presumed to reside in an area of pure entertainment. This broad definition allows me to consider other forms of textual manipulation including video game localization — the process of translating games for new cultural contexts, which includes linguistic, audio, visual and ludic [play/action] alterations — that has theoretically and practically separated itself from simultaneous interpretation and literary translation. By doing so I wish to force open the definition to include what is already happening, localization, where much of the text is changed for the purpose of a “better” user experience. However, this move also opens a space for what might happen, such as new forms of translation that use unofficial production to destabilize the meaning of the text by building it up.

I link traditional foci of literary translation theory with some of Jacques Derrida’s theories of deconstruction (particularly of ‘trace,’ ‘living-on,’ and ‘relevance’), and Jay David Bolter and Richard Grusin’s concept of remediation, in order to reconnect ‘translation’ with its (not quite) lost vectors.[9] I begin with the standard tropes of translation theory — sense and word, source and target, domestication and foreignization — as they do well to show the different possibilities at play with translation. However, discipline-bound theories are never complete as they ignore extradisciplinary connections. One such connection is remediation. While the concept comes from a literary origin, remediation exists between literary and new media theories; I believe it can help combine the two areas’ understandings of translation and move those understandings toward new alternatives.

I argue that current practices of translation focus on only one side of the literary theories, thereby turning them into mutually exclusive binaries (sense or word, foreignization or domestication, immediacy or hypermediacy). However, Bolter and Grusin show that remediation is not a binary between hypermediacy and immediacy; rather, remediation utilizes both sides of the equation. Essential to new media is the simultaneous existence of both hypermediacy and immediacy. Current translations espouse only one of these sides and ignore the benefits of the other. Translation can learn from this simultaneity in new media theory. This paper argues toward a material instantiation of new media translation that takes both sides of these pairings into consideration.

In the second section I show how the dominant practice of translation at present utilizes a domesticating, immediate strategy that overwrites (and thereby renders falsely singular) texts, whether they are literary, filmic, or ludic. In contrast, I argue that a foreignizing, hypermediate strategy that layers texts, which has always existed despite its current marginality, can facilitate an alternate, much needed ethics of both translation and cultural interaction. I am not arguing for a simplistic multiculturalism where difference can be subsumed under mere celebration, but for a difficult, abusive, and often painful form of interaction with difference that can reveal the actual ways in which culture functions. As Derrida argues, there is violence and pain that comes with eating the other, but there is also a necessity to eat. One must thus eat [ethically] (bien manger).[10] The same holds for translating.

 

Tenets of Translation

In the following sections I will review the key principles that have been the focus of translators throughout Western translation history. These examples are primarily from a European/English perspective although I try to use alternative examples where available, applicable and known. I will begin with the impossibility of a perfect translation. Second, I will elaborate on the ways of escaping this core dilemma beginning with the argument between sense-for-sense and word-for-word, and ending with the concept of equivalence. Third, I will review the opposing tendencies of domestication and foreignization as an alternate focus on the author and user instead of equivalence’s focus on the text itself. Finally, I will bring up remediation as a concept that helps bridge literary translation with new media and video game translation and transformation. By linking translation with remediation I can, in the latter half of the paper, re-approach Berman’s ‘lost vectors’ of translation, recombine translation and localization, and point out alternate possibilities that are currently unconsidered due to the discursive dominance of fluent translations.

 

(Im)possibility of Translation

In an almost fetishistic move, translation is known for its parts in lieu of its whole. The whole in this case is a holistic notion of perfect translation that completely reproduces a text in a secondary context. As George Steiner notes:

A ‘perfect’ act of translation would be one of total synonymity. It would presume an interpretation so precisely exhaustive as to leave no single unit in the source text — phonetic, grammatical, semantic, contextual — out of complete account, and yet so calibrated as to have added nothing in the way of paraphrase, explication or variant.[11]

Steiner rightly notes such a task is impossible for both an original interpretation and a translational restatement. In fact, the sole example ever given for a perfect translation is the mythical Biblical Septuagint translation where 72 individually cloistered translators made 72 simultaneous translations of the Torah from Hebrew to Greek over 72 days. As the story goes, their translations were exactly the same, indicating divine intervention. However, if one considers the logic of the translation, it was the absence of any particular tenet, or focus, that enabled the translation to be considered perfect: God’s weight on some tenet or another was imperceptible, so it is the absence of a particular reference that marks the example of perfection. It is the unmarked translation that can be considered perfect, but this does not help with real translations. The practical lesson from the Septuagint is thus that perfect translation is impossible.

The impossibility of a perfect translation has forced all practical translation to focus on certain elements. These elements—sense, rhythm, original meaning, feel, length, and experience—are routinely marked as essential and elevated to primacy. The elements that are considered non-essential are then justifiably negated. One is hard-pressed to find any moment, including the present, where this fetishization of certain tenets does not happen.

In contrast to such a partial focus, I hope to encourage a use of materiality that can lead to a fragmented, built translation: imperfect and incomplete, but hopefully offering a partial picture of what could be. Such a postmodern translation is hardly ‘perfect,’ but unlike other forms of translation it does not assume the justifiable negligibility of unconsidered elements.

I argue that digital new media in particular can enable this form of translation. However, this new method is anything but new, just as new media is anything but new. Rather, it borrows from, and builds upon, both Jacques Derrida’s and Walter Benjamin’s theorizations of translation. Derrida, in strict opposition to the dream of perfect translation and meaning, argues for the slippery sliding of signifiers as a way to point back, but never get back, to an originary moment, text, or meaning. In contrast, Benjamin understands the failures of translation as a necessary part of the dream of messianic return in that they build up to perfection. These two provide theoretical groundwork for what can be made possible by the impossibility of translation.

Derrida’s concept of deconstruction is based in Ferdinand de Saussure’s semiotics taken up to postmodern instability instead of the Formalist dream of an ultimately stable meaning. In the Course in General Linguistics Saussure argues that the linguistic sign is arbitrary in that there is no natural relationship between signifier and signified;[12] it is both variable and invariable in that it changes, but nobody controls the change;[13] it exists as a system (la langue) and individual instances (parole), and this duality makes it both synchronic in its permanence related to langue and diachronic in its relation to parole.[14] As Jonathan Culler argues, what is interesting in Saussure’s linguistics is the relational nature of signs, and therefore how “[l]anguage is a network of traces, with signifiers spilling over into one another.”[15] Words do not equal each other. Rather, they stand in positions of relationality that depend on time and space.

While Saussure focused on both the synchronic and the diachronic, stable and unstable, system and individual, ways that language exists, the Russian Formalists after him dreamed of a study of stable signs, a Science. Formalists such as Shklovsky and Jakobson (against whom Mikhail Bakhtin later wrote) dreamed of an ultimate equality between signified and signifier, of a way that language made Scientific sense. This impetus toward stability and reason drives a great deal of language usage, and it informs practical translation. However, Derrida takes the instability of language, the ‘traces’ that Culler mentions, and runs with it.[16] There is no formal structure to language, there is no deep structure, there is simply the sliding of signifieds on signifiers as words change meaning over time and between utterances. Derrida represents this by the trace, the word under erasure (‘sous rature’). The word is unstable, but this does not indicate that it is free; rather, the word is loaded down with all of its past meanings, the traces of history (whether we recognize those past meanings or not). For Derrida, as with Saussure, meaning can never be pinned down, which means that words are never singular and always slide back along different signifiers; however, for Derrida, this instability means that a translation is twice as meaningful as the original text itself. It is an added sense above; it is a meaning after erasure, a meaning after the original. In light of such polysemy, translation ultimately does something different than simply move a text between form, time and space: it helps the text “live on.”

In “Des Tours de Babel” Derrida argues that the proper name (Babel, but all names) is the ultimate example of translation’s impossibility. Coming from the Biblical story, Babel is the tower, it is ‘chaos’ (the multiplicity of tongues), but it is also God, the Father.[17] Names remain as they are in translations, they are untranslatable, but this is further the case with God’s name, and the tower itself, both of which cannot be translated/written/completed. Ultimately, Derrida argues that translation is the ‘survie,’ the ‘living on’ and ‘afterlife’ of the original text through the translation, but not the dead, original author whose sole means of immortality is through ever transforming literary texts.[18] As he summarizes in his discussion of a ‘relevant’ [meaningful and raising] translation of Merchant of Venice’s Shylock:

It would thus guarantee the survival of the body of the original… Isn’t that what a translation does? Doesn’t it guarantee these two survivals by losing the flesh during a process of conversion [change]? By elevating the signifier to its meaning or value, all the while preserving the mournful and debt-laden memory of the singular body, the first body, the unique body that the translation thus elevates, preserves, and negates [relève]?[19]

Translation allows a text/body/father, to live on, to survive, but in so doing the original is necessarily changed.

The lesson from Derrida regarding translation is that it is impossible. This much is obvious. However, impossibility does not mean that it should not be done. Translation is a necessary act despite its flaws: a text would not ‘live on’ without translation, just as we cannot ‘live on’ without eating, consuming, translating the other into sustenance.[20] We can learn two things from Derrida: the first is that deconstruction is about the psychoanalytic working through of the trauma, the historical weight embedded in the word due to the impossible overload of meanings. The second, the lesson that I take, is that the failure of translation must be flaunted, highlighted. The Derridian methodology (not deconstruction per se, but the productive theory we may take from deconstruction) is about showing how language and texts have multiple meanings and in fact can never be pinned down to any single meaning. Translation, just like language and original texts, must show this built-in instability. As all language is sliding along unstable signifiers, and all texts float along the backs of others, translation too must show its layeredness, its historicity. However, the instability is not flexibility and freedom, but a painfully historical burden (a ‘haunting,’ even[21]), and Derrida shows this uncomfortable instability by writing with asides, marginal notes and what Philip Lewis has termed abusive translation.[22] Because this abusive, Derridian style of translation is painful and difficult to read, it is not often considered useful to translation practice, which focuses on clarity, consumption and entertainment.[23] However, the build-up of meaning through layering is a key method to bring together the various modes of translation that I will return to throughout this paper.

Like Derrida, Benjamin argues that perfect translation is impossible, but he does so toward a completely different end. In “The Task of the Translator,” Benjamin argues that the ‘Aufgabe’ [task, give up, failure] of the translator is impossible, but such failures add up to something more.[24] A translation must not reproduce the original, but must be combined with the original to approach something more. His master metaphor is of an amphora, representing language, which has been shattered into innumerable pieces:

[A] translation, instead of resembling the meaning of the original, must lovingly and in detail incorporate the original’s mode of signification, thus making both the original and the translation recognizable as fragments of a greater language, just as fragments are part of a vessel.[25]

The amphora is language, and in order to piece it together individual, failed translations (and the original) must be undertaken one by one so as to assemble the ‘reine Sprache’ [pure language]. Finally, translations are not necessarily possible in any given time; there is a timeliness, or “translatability,” that allows or prevents certain translations.[26] For Benjamin, no translation is necessarily possible and no translation does everything, but translations must be undertaken both for Messianic (they facilitate the return to a pure language) and logistic (they enable the spread of ideas and texts) reasons.

Individual translations do not do everything, but as particular translations in particular contexts they give a glimpse of the pure language. From Benjamin I take the notion of seeing something more even if the singular is not perfect, and I take the idea that particular translations are better in particular contexts. Both of these oppose the idea of a singular, perfect translation, which, like Derrida’s insistence on abuse, is little desired by practitioners of popular translation. However, it is something that has great importance in a world where the difference between believing in a perfect translation and understanding the problems of translation can be the difference between fun and boredom, but also between death and life.[27]

While I do not believe in a Messianic return of an Adamic language, I do agree with Benjamin’s insistence on the unequal benefit of different translations. Certain languages at certain times translate better than others due to contextual issues. This is not to say that translation at any given point is fundamentally impossible, but rather that translations are unequal. While Benjamin might hold that this renders useless certain translations at certain times, I believe that it is possible to use the materiality of new media to combine Derrida’s abusive slipperiness of language with Benjamin’s build-up of languages to create a more complete translation. Such a new form is where this paper will ultimately conclude.

 

Word, Sense, and Equivalence(s)

While Benjamin, Derrida, and a large number of other theoreticians of translation confront (and embrace) the impossibility of translation, practitioners of translation routinely deny the impossibility by necessity. Translation must (and does) happen, so instead of a holistic notion of perfection, individual elements are highlighted. Historically, the two primary tenets of translation have been the oppositional mandates of translating word-for-word and translating sense-for-sense. However, theorists in the 20th century expanded the either/or of word vs. sense to include a host of other correspondences and equivalences. In the following section I will go over these different forms of practical translation, but I will conclude by pointing out that at issue with all of them is that they naturalize a single element, blocking off the possibility of any other options.

The oppositional mandate between word and sense has been a major focus in Western translation since the Greeks, in part because of the importance of the Bible in Western translatology. The conundrum posed within the oppositional mandate is simple: does the translator translate the words in front of him/her [word-for-word], or the meaning of those words as a larger whole [sense-for-sense]? However, because this debate has been contextualized historically within the realm of Bible translations it has never been a simple question between sense and word, but between worldly sense and divine word.[28]

The ‘first’ Bible translation was the previously discussed Septuagint translation from Hebrew to Greek, which was done ‘by the hand of God,’ but manifested through the separate acts of 72 individual translators. In this instance, the translators create what is known thereafter as a perfect translation. The words are God’s words and can neither be altered nor denied. It is the perfect translation as there was unified meaning between original and translation in word and sense. Such claims for perfect word-for-word and sense-for-sense translation are quite problematic, but they go unquestioned until St. Jerome again translates the Bible, from Greek to Latin. The problem (or so it is claimed) is that he refers back to the old, pre-Septuagint Hebrew version of the Torah, and in so doing denies the primacy of God’s perfectly translated words. How can the Greek version be perfect, with all of the sense of the original in the new words, if Jerome must go back to the Hebrew?

While St. Jerome argues for sense-for-sense translation, he does so in an interesting bind, having translated the Bible while referring back to the older, pre-Septuagint version and highlighting the importance of particular words. He thus pays very close attention to word-for-word ideals, noting the importance of word order with mysteries, but ultimately argues, “in Scripture one must consider not the words, but the sense.”[29]

Word-for-word translation schemes never work, as there are never equivalent words. To show how this works I will take the word ‘wine’ between English and Japanese: wine is not blood; wine is not saké; saké is definitely not blood. Wine rhymes with dine and whine, but it is also either white or red and can be related to both debauchery and blood, and even metonymically to Christ’s blood. Of course, wine is the fermented liquid from grapes, but also just the general fermentation process itself, so that “rice wine” is fermented rice starch, and “plum wine” is fermented plum liquid, but “grape wine” would be considered redundant. On the other side, saké, the Japanese word from which “rice wine” is often translated, stands as a general word for all alcohol, but nihonshu, or Japanese alcohol, which is the more explanative Japanese word for saké, is unused in English. Finally, there is no link between saké and blood in color, rhyme, or any other mode of meaning. If one single word can cause this (and more) trouble it should come as no surprise that a word-for-word translational scheme must fail.

From Jerome through to the modern period there is a fixation upon sense-for-sense translation, and by the time of John Dryden sense-for-sense translation (except when dealing with mysteries of the divine word) is cemented. While metaphrase, word-for-word translation, is one of Dryden’s three types of translation, it is only used in extreme cases. The main debate is between paraphrase, sense-for-sense translation with fidelity to the author, and imitation, which is a type of adaptation that partially betrays the original author.[30] Dryden’s third form, imitation, is the divergence point toward what I note as a carrying over of form, where the translator hints at the style, form or sense of an author, but not the content. Between Dryden and the present this form has completely diverged into adaptation and intertextuality, which are considered entirely separate from translation. This is the final splitting point between translation’s original vectors and traduction’s linguistic and authorial focus in the modern period. Finally, Dryden’s second form of translation, paraphrase, is the most general concept of sense-for-sense translation, as it renders what the author said in one language into another language.

Paraphrase translation has enjoyed the primary role in translation from the time of Dryden to the present, and has only faced significant opposition during the 20th century from semiotics, formalism, and postmodern ideas of language. All three of these provided different oppositions, but all significantly affected the word/sense divide.

While Jakobson is mainly known within translation studies for his three types of translation (intralingual, interlingual, and intersemiotic), as a formalist he can be understood as one looking at the formal qualities of language, and therefore at what happens to those essential elements in the process of translation within and between languages and forms. Moving from a semiotic understanding of language where “the meaning of any linguistic sign is its translation into some further, alternative sign,” Jakobson argues that there is never complete synonymy as “synonymy, as a rule, is not complete equivalence.”[31] A translation, regardless of word or sense, cannot fully encapsulate the source text. As Jakobson claims, “only creative transposition is possible,” where this creative transposition focuses on something, but loses some other specificity.[32] While two possibilities coming from this failure of translation are Derrida’s and Benjamin’s, a more common one is to focus on creative transposition of one particular element of the text while ignoring the rest. This is most visible in Nida’s ideas of correspondence in Bible translations, Popovič’s four equivalences in literary translation, and finally the current style of game localization.

Eugene Nida is best known for his principles of correspondence, formal and dynamic (or functional) equivalence, which he has primarily enacted with Bible translations. As a translator closely linked to the American Bible Society, most of Nida’s work is also linked to principles of missionary work and the spread of Christianity through rendering the Bible understandable and close to a target audience. His two sides of translational equivalence, formal and dynamic/functional, are quite similar to Dryden’s metaphrase and paraphrase. In particular, formal equivalence focuses on fidelity to the source text’s grammar and formal structure. In contrast, dynamic equivalence seeks to make the text more readable to a target audience by adapting it to a target context. Nida’s scale of equivalence is similar to both the word and sense debate and the domestication and foreignization debate, which I will elaborate below; what is important for the current discussion, however, is that he uses the idea of equivalence in the singular and deliberately notes that one must sacrifice one side or the other.

In a slightly more expanded sense, Anton Popovič writes of four types of equivalence within a text: Linguistic, Paradigmatic, Stylistic (Translational) and Textual (Syntagmatic).[33] The first, linguistic equivalence, is the goal of replacing a word in the source language with another, equivalent word in the target language; it is different from the word and sense debate in that it simply indicates that the translator must pay attention to the phonetic, morphological and syntactic level of the text, which is to say the words that are written. The following three expand on the idea of equivalence in that a translation may focus on the grammar, the style, or the expressive feeling of the text.

Popovič’s focus is on a very literary understanding of the text. These four methods are for understanding the formal qualities of the written word, and therefore how to translate literary texts. Obviously, these four equivalences do not cover the entire realm of human experience. Other media involve different essential qualities, which have been the focus of those types of translation.

While any medium can offer an example of a different essence, I draw from my own focus on game translation. Game translation highlights experience. Games, as mass-produced commodities, are considered interactive entertainment, and the core of the game is the active, fun experience.[34] In light of this gaming essence, the equivalence sought by game translators is the experience of the player in the source culture. As Minako O’Hagan and Carmen Mangiron, two of the few theorists on game translation, write:

[T]he skopos of game localization is to produce a target version that keeps the ‘look and feel’ of the original… the feeling of the original ‘gameplay experience’ needs to be preserved in the localized version so that all players share the same enjoyment regardless of their language of choice.[35]

Because the optimal experience when playing a game is entertainment, a good game translation is one that entertains and nothing more.
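To make this experiential equivalence concrete, consider how a localized string might be stored and shipped. The following sketch is hypothetical (the string table, identifiers, and renderings are all illustrative rather than drawn from any actual localization kit), but it illustrates how current practice ships only the experiential rendering, discarding rather than layering the other equivalences:

```python
# Hypothetical game string table (illustrative only). Each entry keeps
# the source line plus two competing renderings: a literal one
# (linguistic equivalence) and a localized one (experiential equivalence).
strings = {
    "npc_greeting": {
        "ja": "いらっしゃいませ！",                  # source text
        "en_literal": "Welcome [to the shop]!",     # linguistic equivalence
        "en_localized": "Hey there, traveler!",     # experiential equivalence
    },
}

def render(string_id: str, mode: str = "en_localized") -> str:
    """Current localization practice ships only one rendering; the player
    never sees that a choice between equivalences was made."""
    return strings[string_id][mode]

print(render("npc_greeting"))                # the fluent, immediate version
print(render("npc_greeting", "en_literal"))  # the discarded alternative
```

The point of the sketch is structural: the table can hold multiple equivalences at once, but the render step collapses them to a single one before the player ever sees the text.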

While Popovič believes there is an “invariant core [meaning]”[36] that remains regardless of any translational variations, one may translate with the goal of rendering equivalent only one of the elements, and in so doing the other three are sacrificed. Such a sacrifice works directly off of the understanding that perfect translation is impossible. Choosing one equivalence over another does not, in principle, elevate it in importance over the others. However, in the practical integration of translation and reception only one rendering of one equivalence is ever seen, and it is thereby retrospectively elevated to the status of the true equivalence. The equivalence highlighted becomes the essence of the text, regardless of its being only one of many, and any other types of translation that highlight the other elements of the text are dismissed as useless. In the case of video games the fetishistic focus on the experience of the player renders invisible and invalid all other levels of the game. As a result, games become pure entertainment and all artistic, political, or cultural levels are ignored.

A text does not have a single essence; it has many different sites of differing importance to different people. The author might intend to highlight one thing; a reading takes another; one cultural context focuses on one element, but another focuses on another. While the essence of a text is spread across innumerable sites (rhyme, look, site, context, etc.), equivalence seeks to focus on one and sacrifices the rest. This sacrifice is naturalized, and the equivalent element is constructed (after the fact) as the ultimate/important thing to be translated. As Lawrence Venuti notes regarding Jerome’s Bible translation, “Jerome’s examples from the gospels include renderings of the Old Testament that do not merely express the ‘sense’ but rather fix it by imposing a Christian interpretation.”[37] Translation does not just move a text from one language, time or place to another; rather, it imposes particular meanings on that text and, through the text, on both the source and target cultures. Translational regimes and translations themselves exist within a political world. Translation is inseparable from power.

 

Domestication and Foreignization

While equivalence flows logically from the debate between sense-for-sense and word-for-word, it also comes from the other primary concern in translation: the choice between domestication and foreignization.

In an attempt to move beyond the debate between paraphrase (sense) and imitation (adaptation),[38] Friedrich Schleiermacher argued that there were two main ways of translating: either the translator makes the text in the style of the foreign original and forces the reader to move toward that source text and context [foreignization, or Source Text (ST) orientation], or the translator relocates the text into the target culture, pushing the text into the local context and making it easier for a reader to understand [domestication, or Target Text (TT) orientation].[39] Schleiermacher argued that the debate between sense and word was defunct as both fail to bring together the writer and reader. Instead, he contended that the translator needed to decide between foreignization and domestication, as the act of translation was necessarily related not to texts, but to cultures.

Schleiermacher argues that different types of translation are necessary to provoke different reactions in different audiences. Imitation and paraphrase must come first to prepare readers for the higher phases of true translational style: foreignization and domestication. He then argues that writers would be different people were they to write in, or be positioned as if they were writing in, foreign languages as domestication claims to do, and that such a repositioning would take the best elements out of the writers.[40] Thus, his argument ultimately supports foreignizing translation.

Antoine Berman understands Schleiermacher’s call for foreignization as a particular moment where an ethics of translation is visible. This ethics relates to the formation of a German language and culture. To Berman, domestication denies the importance of a mother tongue itself, and foreignization has the possibility that the mother tongue is “broadened, fertilized, transformed by the ‘foreign.’”[41] However, he also notes there are extreme risks to such nation building:

inauthentic translation [domestication] does not carry any risk for the national language and culture, except that of missing any relation with the foreign. But it only reflects or infinitely repeats the bad relation with the foreign that already exists. Authentic translation [foreignization], on the other hand, obviously carries risks. The confrontation of these risks presupposes a culture that is already confident of itself and of its capacity of assimilation.[42]

The prime assumption here is that Germany exists on the cusp of the ability to incorporate the foreign tongue in order to grow, but more importantly it also exists in a situation of being dominated by the French. In order to negate the French dominance over the German culture and tongue (that is extended through domesticating translations and bilingualism) it becomes necessary to take the dangerous plunge and move toward a foreignizing form of translation.

Texts do not exist outside of contexts, so any choice is necessarily related to political interests. In the case of Germany in the 19th century it was the relationship of a developing Germany against a dominant France. As Lawrence Venuti notes about Berman and Schleiermacher, “The ‘foreign’ in foreignizing translation is not a transparent representation of an essence that resides in the foreign text and is valuable in itself, but a strategic construction whose value is contingent on the current situation in the receiving culture.”[43] In the case of 19th century Germany, Venuti argues that “Schleiermacher was enlisting his privileged translation practice in a cultural political agenda: an educated elite controls the formation of national culture by refining its language through foreignizing translations.”[44] Venuti’s argument requires jettisoning the nationally chauvinistic quality of Schleiermacher’s call for foreignization, but maintaining foreignization’s oppositional quality. To Venuti such a foreignization is necessary to oppose the discursive regime of transparency that is dominant within the 20th and 21st century United States.

Venuti argues that the dominant discourse of translation within the United States is transparency: the translation must read as if it were written in the local language. This is a modern rendition of Schleiermacher’s domesticating translation that has been normalized to the extent that foreignization as a method is not an alternative, or different choice, but an awkward oddity.[45] As his subtitle “A History of Translation” indicates, Venuti lays out a genealogy that shows the rise of fluent translations in Europe between the early modern period and the late 19th century, and how during this period the translator’s status dropped. By pointing out the constructed nature of the ‘fluency is good’ discourse, Venuti argues for a move away from such fluency. He does so both to raise the status of the translator in relation to the author and originality, and to problematize the relationship of the United States and English to other countries and languages. As he writes in his conclusion:

A change in contemporary thinking about translation finally requires a change in the practice of reading, reviewing, and teaching translations. Because translation is a double writing, a rewriting of the foreign text according to values in the receiving culture, any translation requires a double reading… Reading a translation as a translation means not just processing its meaning but reflecting on its conditions – formal features like the dialects and registers, styles and discourse in which it is written, but also seemingly external factors like the cultural situation in which it is read but which had a decisive (even if unwitting) influence on the translator’s choices. This reading is historicizing: it draws a distinction between the (foreign) past and the (receiving) present. Evaluating a translation as a translation means assessing it as an intervention into a present situation.[46]

Writing, translating and reading are contextually contingent acts, and one must be aware of the contexts from which and to which such texts move. It is key that the discursive regime of domesticating/fluent translation does not allow such historicizing or cultural understanding, as the foreign is simply rendered invisible.

The current regime of translation is one in which the translator has become invisible, and this has negative effects regarding the translator’s status, but also in regard to couching the United States’ translational imperialism. Venuti argues, “Schleiermacher’s theory anticipates these observations. He was keenly aware that translation strategies are situated in specific cultural formations where discourses are canonized or marginalized, circulating in relations of domination and exclusion.”[47] The results of this naturalized, extreme form of domestication are transparent cultural ethnocentrism and domination. These are, as Venuti argues, “scandals” of translation.[48] In opposition to these scandals, a foreignizing translational regime can link up to an “ethics of difference” that “deviate[s] from domestic norms to signal the foreignness of the foreign text and create a readership that is more open to linguistic and cultural differences.”[49] It is Venuti’s argument that acknowledgement and accommodation of difference are sorely lacking in the late 20th and early 21st century United States context, thus requiring the switch to foreignizing translation. However, as previously stated, such a foreignizing method is completely opposite the dominant trend of the present.

Venuti argues for a switch to foreignization and away from the domestication that has been naturalized. He argues that “invisibility” refers both to the status of the translator as negated under the writer economically and functionally, and to the demand that translations be presented so fluently, as if they were made in the local language and culture, that the translator is rendered invisible. The invisibility of domestication overlaps in instructive ways with Bolter and Grusin’s concept of immediacy, the transparent side of remediation. Ultimately, remediation is a way out of the problematic discursive regime of translation that Venuti locates.

 

Remediation

In their seminal new media text Jay David Bolter and Richard Grusin coined the term remediation in response to what they saw happening with new media at the time, but also to how all media had been changing over the twentieth century.[50] For Bolter and Grusin all media is remediated: a medium remediates other media. Web pages have text, icons that tell people to ‘turn to the next page,’ and embedded movies with standard filmic controls; Microsoft Word has a ‘page’ as it remediates writing on paper. This remediation has two qualities, or sides. The first, immediacy, is where the fact of remediation is cut away, or rendered invisible. The HUD (heads-up display) of a game is lessened, removed, or rendered diegetically relevant. From a literary standpoint the content and diegesis is all that matters, and the user need not leave this place of immediate access to the text. As Bolter and SIGGRAPH director Diane Gromala write a few years later, “we…have lost our imagination and insist on treating the book as transparent…. We have learned to look through the text rather than at it. We have learned to regard the page as a window, presenting us with the content, the story (if it’s a novel), or the argument (if it’s nonfiction).”[51] The second, hypermediacy, can be seen in TV phenomena such as a miniature window in one corner of the screen and the scrolling information bar on the bottom of the screen, but it is also footnotes, side notes and commentary in books. For Bolter and Grusin remediation is simply something that happens with all media and has happened since writing remediated speech, much to Plato’s chagrin. However, it has interesting links with translation, particularly in how immediacy can link up with Venuti’s fluency, and how hypermediacy can link up with the possibilities of layered translation, which come from Derrida and Benjamin.

Venuti claims that the current regime of domesticating translation within the United States leads toward a fluency that renders invisible both the translator and the fact of translation. For the majority of American readers, who enjoy this type of translation and experience, such a goal is admirable. According to Venuti, fluency is quite problematic due to the translational ethics of difference involved. Within the logics of remediation, by rendering the translation invisible the original text is made an immediate fact for the reader even though it is not the original text, but the translated version. This type of immediacy materializes in particular ways with particular media: for books it is a one-to-one fluent translational strategy, with film it is dubbing and remaking, and with video games it is localization. While these fluent/immediate strategies are dominant at present, there are alternatives.

For Venuti, the opposite of translational fluency is a foreignization that highlights the ethics of difference. As cited above, most important in this is creating a new style of “double reading” that requires that the reader read the text as a translation. However, if we take Bolter and Grusin’s oppositional strategy of remediation, hypermediacy, we can see alternative methods of highlighting an ethics of difference. Translational hypermediation would entail highlighting the fact of translation; it could be abusive, Derridian translation; it could be Jerome McGann’s hypermedia work; it could be cinematic subtitles and metatitles; it could be game mods.[52] All of these interact with the medium in a way that utilizes its particular form.

Hypermediated translations of new media could easily exist because of the particularities of digital alterability, but they do not. In the following section I will elaborate the particular ways that translation happens materially with books, film and games. Primarily, these current ways are domesticating, fluent and immediate. Then, I will explain how translation could instead bring out a foreignizing, layered and hypermediate relationship with the text.
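As a sketch of what such a hypermediate relationship might look like in practice, consider a subtitle format that stacks several translational layers on screen rather than shipping a single fluent line. Everything below is hypothetical (the cue structure, layer names, and example dialogue are illustrative, not an existing subtitle standard):

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    kind: str  # e.g. "original", "fluent", "literal", "note"
    text: str

@dataclass
class Cue:
    """A timed subtitle cue that keeps every layer visible,
    foregrounding the fact of translation instead of erasing it."""
    start: float  # seconds into the film
    end: float
    layers: list[Layer] = field(default_factory=list)

    def render(self) -> str:
        # Stack the layers so the viewer reads the translation *as* a
        # translation: original on top, alternatives and notes below.
        return "\n".join(f"[{l.kind}] {l.text}" for l in self.layers)

cue = Cue(
    start=12.0,
    end=16.5,
    layers=[
        Layer("original", "それはダンスじゃない。"),
        Layer("fluent", "That isn't dancing."),
        Layer("note", "the line dismisses the dance hall as mere cabaret"),
    ],
)
print(cue.render())
```

Where a conventional subtitle file keeps only the fluent layer, a structure like this makes the discarded alternatives part of the viewing experience, which is precisely the layering described above.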

 

Specific Iterations in Media

While the above section has summarized tenets of translation primarily coming from literary studies, the following will elaborate how these different trends intersect with three particular media: books, film and games. These three media are chosen very deliberately. Gaming is my main focus in part because of industry and theoretical denial of its translated nature, and in part due to its ability to lead to new translational possibilities. However, books and film are necessary predecessor forms on the route to games. Books are important as the primary textual form in current Western literary culture. While poetry, newspapers, magazines and other printed forms are also relevant, I limit my analysis to the Modern novel both for reasons of space and for the novel’s focus on, and obsession with, the author. Second, film is important as games have been created in the wake of the 20th century’s cinematic revolution, where the language of games comes in part from the language of cinema: cut scenes, first person perspective, and an increasing obsession with realism.[53] While the link between gaming and cinema has been critiqued on the grounds of gaming’s material and experiential differences from cinema, this does not deny the historical and stylistic links, despite their unwieldy application in games.

 

Books, Supplementarity, and Digital Culture

Books in the modern period are singular objects created by singular authors. An author has an idea, struggles to bring this (original) idea to paper, and over time eventually uses his or her singular language to write the work. While books are made at one point in time, there is a belief in their timelessness: they are able to stand up to decades, centuries, and millennia (although such durability is also a test of worth) due to their original language (or rather, despite their original language, as it is translation that allows the text to ‘live on’). There is an essential link between author, nation and language, which is brought out in the book, and readers partake in this art when they read the book.

A translation is something that comes chronologically after the book. It is the result of taking the words and sentences (the content) and changing them into another language in order to facilitate the book’s movement over spatial-linguistic borders. The translation’s hierarchical relationship to the original book is derivative, but its material relationship has changed over time. Whereas translations are now a material replacement that comes chronologically after the original, they were at times both simultaneous and supplementary to an original work.

Certain texts needed to be written in certain languages (Latin for religious, philosophic and scientific texts; literary genres in Galician or Arabo-Hebrew; and travel accounts such as Marco Polo’s and Christopher Columbus’ in a hodgepodge), and the idea of deliberately altering a text from one language to another was not a high priority, or even acceptable in some instances.[54] At one point, in lieu of translation there was commentary, or Midrash in the case of the Torah. Such commentary was necessarily displayed alongside the original as a supplement. It complicated, but did not replace, the original.

This older form of supplementarity can be linked to the current, but uncommon, practice of side-by-side translations where the original resides on one page and the translation on the other. The original and translation face each other to enable comparison. While Biblical and philosophical material is often granted side-by-side translation, it is done so due to the importance of both individual words and overall sense, or because the question of just what is important is either undecided or unknowable. In the case of popular (low) cultural novels there is less reason to consider the original, and so there is little reason to print it. Other possible reasons that side-by-side translations of important biblical, philosophical, and literary texts still exist, while popular novels are almost never given such a translational method, are cost and size. Halving the pages printed significantly reduces the cost and size of the book. Only important texts, or political and religious ones where price is not an issue, can justify the additional cost of the doubled pages. And popular, semi-disposable entertainment texts are less entertaining as enormous, bulky tomes. What was a complementary relation between original and translation becomes a matter of replacing one with the other.

The shift from supplementary translation to replacement translation, where the translation stands on its own as a complete text, happens at the same time within modernity as the rise of translational equivalences. However, as discussed previously, it is impossible to conduct a perfect translation that conveys word, sense and all equivalences, so one element becomes the focus, and under that equivalence the translation replaces the original book. In the case of the 20th to 21st century United States this equivalence is roughly what the author would have written had he or she been from the United States and writing in English. Because the industry follows a replacement strategy that supports fluency and immediacy, books can only follow a single equivalence. However, the materiality of the book can support multiple equivalences through a translational supplementarity that supports an ethics of difference and hypermediacy.

Obviously, page-to-page translation and the works of Derrida are examples of how books can support this form of hypermediated translation.[55] The reader can be shown the different words that could have been used throughout the translation. While there are many possibilities for a hypermediated translation, there have been few opportunities throughout Western translation history. However, this hypermediated style might be coming back into fashion with the advent of new technologies including the digital book. These digital books also solve the cost and size issues that were partial reasons against side-by-side complementary translations.

While the digital book holds much potential, proprietary design, nation-based sales of content, and Digital Rights Management (DRM) issues plague current eReaders. They are simply an alternate way to read a book, which one must buy from a massive chain store in one language, and nothing more; they are monolingual devices that bring out the same trend of immediacy that I described above. However, the digital book could be programmed to show a multiplicity of versions, iterations, and translations. It could be programmed to be a truly hypermediating experience if only by linking different translations of a text. I will return to this in the final section of my paper, but a hint at this possibility is in Bible applications. YouVersion’s digital Bible application[56] has 49 translations in 21 languages, and this number increases as new versions are added. The Bible is not in copyright, but for copyrighted works it would be possible to use a micropayment system that would allow interested patrons to buy linked versions of different book translations in a similar manner. By integrating the different variations a hypermediated experience would be created.
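A minimal sketch of the data model such a hypermediating digital book might use follows; it assumes nothing about any actual eReader platform, and the class and field names are mine. Each version of a work is split into aligned segments (verses, paragraphs) so that any segment can be displayed in every available rendering at once, much as YouVersion already aligns Bible verses across translations:

```python
from dataclasses import dataclass

@dataclass
class Version:
    """One translation (or the original) of a work."""
    lang: str
    label: str           # e.g. "Vulgate", "KJV", or a translator's name
    segments: list[str]  # text split into aligned units

class LayeredBook:
    """A digital book that links versions segment by segment, so the
    reader sees a supplement to the original rather than a replacement."""

    def __init__(self, *versions: Version) -> None:
        self.versions = list(versions)

    def segment(self, i: int) -> dict[str, str]:
        # Every available rendering of segment i, keyed by version label.
        return {v.label: v.segments[i]
                for v in self.versions if i < len(v.segments)}

book = LayeredBook(
    Version("la", "Vulgate", ["In principio creavit Deus caelum et terram."]),
    Version("en", "KJV", ["In the beginning God created the heaven and the earth."]),
)
print(book.segment(0))  # both renderings of the first segment at once
```

Because every segment lookup returns all available versions rather than one, the replacement logic of the monolingual eReader is inverted: the supplement, not the substitute, becomes the default reading experience.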

 

Film, Dubs, Subs, Remakes and Metatitles

The contentious relationship between immediacy and hypermediacy is highly visible in film translation.[57] On the one hand there is a long history of replacement/transparency with multi-language versions (MLVs), dubbing and remaking, but on the other hand there is an equally long history of subtitles. While the debate between subtitles and dubbing is really only solvable by referring to local preference, I argue that the rise of remakes of foreign films, especially in the United States, is a sign of the dominance of replacement and immediacy strategies. In the following section I will outline the history of language in film, then how it intersects with remediation, and finally ways that the lesser-used hypermediacy might bring out alternate forms of film translation.

When cinema was first exhibited there was no call for translation. There was no attached sound and there was no dialogue. The original ‘films,’ like the Lumière Brothers’ La Sortie des Usines Lumière (1895), which depicts the workers leaving the Lumière factory, and L’Arrivée d’un train à La Ciotat (1896), which shows a train arriving at the station and people beginning to get off, are good examples of the limited structure and general ‘universality’ of the earliest films. Because there were no complicated plots or multiple scenes, it was believed at the turn of the 20th century that cinema, like photography, was merely the “reproduc[tion of] external reality.”[58] At the beginning of the 20th century, cinema was considered outside of language and universal.[59] This understanding was first troubled with the inclusion of intertitles, as they required translation to move the film from one place to another, and from one language to another. However, the rest remained ‘universal.’

The late 1920s brought embedded sound to cinema, and with it came talkies. These talkies necessitated a new level of translation, and both immediacy and hypermediacy translation styles were available: dubbing and subtitling, respectively. Subtitling is both hypermediating and foreignizing. It is hypermediating in that it accentuates the fact of translation by putting the translated dialogue on top of the film. It is foreignizing because of the constant, visible disjoint between the words of the actors and the subtitles at the bottom of the screen.[60] The viewer constantly hears the foreign other, and this brings to the forefront the issue of trusting a translator to have translated properly.

In contrast, dubbing is immediate in that it erases the voices of the visible actors and replaces them with other voices in the target language. However, dubbing is not perfectly domesticating as there is a discrepancy between the bodies on screen and the dialogue. This discrepancy is partially the result of lip-syncing issues, and partially the result of differently signified bodies and voices. One of the tasks of dubbers is to forcefully make the dialogue match the lips by altering the linguistic utterances, often quite significantly.

While dubbing can alter the words and voice coming out of the body it cannot change the bodies themselves. In a realm of racialized nationalism, or as Appadurai writes, when the hyphen between the nation and state is strong,[61] this discrepancy between racially different body and local language is a problem. Because it is assumed that only those with specific bodies speak specific languages, such discrepancy is highlighted.[62] Dubbing thus still has a hypermediated quality to it. A further step toward immediacy is changing the body. There have been two different methods used to make films more immediate by changing the bodies. The first was the early 20th century multiple and foreign language versions, and the second was the much more long-lasting remake.

The understanding of film as universal was challenged more fully in the 1929-33 period, which saw the inclusion of multiple and foreign language versions. In foreign language versions (FLV) the film was recreated after the fact in a different studio; in multiple language versions (MLV) it was recreated in the same studio, on the same set, with different actors, later the same day.[63] The M/FLV highlights that there were people who understood that culturally specific elements are writ large on the body. Not only was national culture inscribed with language, but with bodies, clothing, and even story. It was believed that by replacing the body, remaking the film into both the ‘local’ language and body, the film would be less foreign. This effort reveals the dominant trends of immediacy and domestication. By replacing both the language and body the text is made even more transparent for the audience. However, the M/FLV did not last long, largely due to the high costs involved. Then as now a high priority was given to business and the bottom line, and the cost of making multiple movies simultaneously was not economically justifiable, especially when the movie could flop.

While intertitles and the M/FLV incorporate linguistic and human alteration, what they do not consider are the cultural specifics. The content level was not translated or adapted; the stories were not altered. There were incredible numbers of stories adapted and remade again and again, but not because of cultural relativity. This oversight was rectified three decades later with films like Gojira (1954), which was reconceptualized away from the original’s atomic bomb logics. The remake, Godzilla, King of the Monsters! (1956), was reshot and reedited in order to feature an American journalist narrator and highlight the monster genre.[64] Following Godzilla, but primarily at the end of the 20th century, there was a resurgence of remakes that link with cultural translation.[65]

With remaking not only do the bodies in the film change to locally recognizable ones with their own voices, but the context of the film can be changed from foreign lands to local ones. An example of this is Shall We ダンス (1996), a Japanese movie about a salaryman going through a midlife crisis and learning to dance in an anti-dancing Japanese society, which was remade as Shall We Dance? (2004) with Richard Gere, Susan Sarandon and Jennifer Lopez in a Chicago context.

In one of the most important scenes in the original, Mai is lectured by a possible new dance partner, Kimoto. He proposes they give a demonstration at a local dance hall (night club), but she refuses to dance with “hosts and hostesses,” claiming it isn’t dancing, but cabaret.[66] Mai is obsessed with the foreign, European Blackpool competition and dance floor, which is opposed to the native dance hall with less history and lower culture. Kimoto claims not only that enjoying dance is of primary importance, but that the lowly Japanese dance hall has a history just as important as Blackpool. The opposition of high to low (hierarchical) and native to foreign (spatial) is stressed in this interchange. When Mai finally holds a party that signals the restart of her career, it is on the lowly dance hall’s floor, indicating the primacy (or at least equality, as she plans on returning to Europe) of the native over the foreign, and it stresses the equality of high and low.

In contrast, the remake opposes Miss Mitzi’s relatively unpopular dance studio with the hip Doctor Dance studio and club. The opposition is both temporal and hierarchical: Miss Mitzi is middle-aged and teaches various forms of professional dance, compared to the scenes in Doctor Dance that are almost all depicted as club/entertainment moments. And when Paulina, Lopez’s adaptation of the Mai character, decides to go study in England (a rather meaningless decision in the context of the remake), her going-away party takes place in an unrecognizable locale. In the original, the Japanese spirit and history are implied to be just as important and meaningful as the European ones. The film is highly nationalist in its context. The remake works to erase such nationalism by placing the theme of global/universal work and the international family man/nuclear family over that of foreign and native. Such movement complies with a universalization of remaking as domestication. The foreignness of the Japanese original is rendered domestic and immediate with the remake.

A domesticating translation takes the foreign text and moves it into the native context, making the reader’s job easier by forcing the text to speak in a manner the reader is used to. In Hollywood’s domesticating remake of Shall We ダンス, Japan’s troubled interaction with modernity and globalization is removed. The local socio-political particulars of the original film are erased in the service of “universal” generic narratives that satisfy an American audience that rarely interacts with foreign others. Hollywood’s remake process is a systematic erasure of difference and the foreign other that has been naturalized under the theory of the remake as cinematic translation, which only needs to render equivalent one essential element at the expense of all others.

So far I have discussed the current domesticating and immediate strategies of film translation. Even though I have claimed that subtitling is both foreignizing and hypermediating, it does not use the materiality of the filmic medium to really bring out the possibilities of hypermediation. So far no such creations exist, but it is not hard to imagine a type of “metatitles” that uses the capacities of the digital cinematic medium to layer translations on the screen in a hypermediating translational style.

In the last few pages of “For An Abusive Subtitling,” Nornes refers to the fan subtitling of Japanese animation that took place largely between the late 1980s and early 1990s in the United States.[67] With difficult-to-translate terms the fan subtitlers gave extended definitions that covered the screen with words. The translation effort goes well beyond the standard translation in that it starts with a foreignizing pidgin, but also provides an incredible amount of information that works to bridge the viewer and source. While this abusive subtitling is hypermediating in that it layers the text, it could be extended to use the medium further through DVD layers. These layers could move from the main textual layer (the visual film) and the verbal audible signs (dialogue and its subtitles), to the hypermediated translational layers: the visual audible signs (text on screen), the non-verbal audible signs (background noises that need explanation), the non-verbal visual signs (culturally derived, metaphoric camera usage), and any other semiotic layer possible.

Through such a layering commentary of the different signs the screen would quickly fill and overwhelm the viewer as a form of abusive translation, and while there is something admirable in completely disrupting visual pleasure, such disruption would never be taken up by the industry: to be viable, all film layers must be visible either alternately or simultaneously, and at the control of the viewer. As home video watching is generally at the command of a single user or a small number of viewers, the DVD format is a uniquely suited mode to enact metatitles. Due to the increased capacity to store information coming from DVD, Blu-ray and future technology, there is effectively no limit to the possibilities of layering.
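To make this concrete, the following is a minimal sketch, in Python, of how viewer-controlled metatitle layers might be organized; every name here (the layer list, the Cue and MetatitleTrack classes) is a hypothetical illustration of my own rather than an existing subtitle format:

    # Hypothetical sketch of viewer-controlled metatitle layers.
    from dataclasses import dataclass, field

    SEMIOTIC_LAYERS = [
        "verbal_audible",     # dialogue and its subtitles
        "visual_audible",     # text on screen (signs, letters)
        "nonverbal_audible",  # background noises that need explanation
        "nonverbal_visual",   # culturally derived, metaphoric camera usage
    ]

    @dataclass
    class Cue:
        start: float  # seconds into the film
        end: float
        text: str

    @dataclass
    class MetatitleTrack:
        layer: str                        # one of SEMIOTIC_LAYERS
        cues: list = field(default_factory=list)

    class MetatitlePlayer:
        """Reveals or hides translational layers at the viewer's command."""
        def __init__(self, tracks):
            self.tracks = tracks
            self.enabled = {t.layer: False for t in tracks}

        def toggle(self, layer):
            self.enabled[layer] = not self.enabled[layer]

        def cues_at(self, t):
            """All cues from enabled layers that overlap time t."""
            return [c.text
                    for track in self.tracks if self.enabled[track.layer]
                    for c in track.cues if c.start <= t <= c.end]

The point of the sketch is simply that each semiotic layer is stored separately and rendered only on demand, so the hypermediated mass never forecloses the option of an unobstructed viewing.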

A layered translation uses the capacities of current technology by hovering over the text. But just as a translation can never fully encapsulate the original, metatitling would never fully acknowledge every aspect of the original text: it is a failed translation, just as all translation is failure due to being incomplete. It fails, however, in a foreignizing and hypermediating style that acknowledges its failings, and builds toward some ethical ‘more.’

 

Games and L10n[68]

While film translation retained a complex but present relationship to translation theories and literary translation, the move to new media forms has created a chasm between theories and practice, which has resulted in new methods and industries of translating. Both translation theory and localization practice could benefit from cross-pollination, and that is the heart of my work. The shift to digital software has been accompanied by the rise of a software localization industry (of which gaming localization is an independent but related industry) with its own tools, standards committees and rhetoric. The following section begins by looking at how language intersects with games. I then consider what game localization is and how it succeeds in translating games, but also how it fails to address certain possibilities. One major element is how localization fails to utilize the possibilities of the digital medium to bring about a hypermediated translation despite the immense amount of hypermediation within the medium itself.

Like films, games have an interesting relationship with the idea of universality. The first computer/digital games such as Tennis for Two (1958) and Spacewar (1962), and even early arcade cabinet games like Pong (1972), Space Invaders (1978) and Donkey Kong (1981), were ‘language’ free. In a similar way that the early films were largely visual amazements, games were computer-programming amazements meant to show off the technology.[69] However, the programming was difficult and took up all or most of the available processing power and programming energy, which meant that early games had little of either to spare for story. Many held (and still hold) to a universal accessibility and understanding of these games due to the technological and programming limitations coupled with a belief in the universality of play as a social phenomenon. Even now the belief in ludic universality holds despite theorists problematizing that fact in a similar way to how a previous generation of visual culture theorists problematized the universality of vision.[70] For instance, Mary Flanagan has argued, “while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture.”[71] While she is largely discussing the spatial politics of games existing in certain spaces, the theory can be expanded to indicate that any game, or instance of play, is tied to a cultural context, be it Tennis for Two and the atomic-age weapons research lab in which it was created, Spacewar and masculine science fiction fantasies, Donkey Kong and the origins of the side-scroller as linked to a Japanese aesthetic, or any other game and context. Games are developed, produced and distributed in specific socio-political, temporal and spatial locations and are thus not universal.

However, this believed universality is only now coming into question; it was completely unquestioned from the 1960s to the early 1980s, during the first and second generations of computer games. There were no ‘words’ in the early computer games, just crude iconic representations. This meant that within the games themselves there was no ‘language’ needing ‘translation.’ What did need translation were the external titles and instructions. Titles were kept or changed at the desire of the producers and distributors. Pakkuman (1980) turned into Pac-Man instead of Puck Man for fear of malicious pranksters changing the P to an F, but other titles were kept as is or were programmed in roman characters. Instructions for arcades and manuals for home consoles needed more extensive translation, but it was a very limited, technical form of translation. The first generation of computer game translation was thus both limited and little different from the roughest of technical translations, neither ‘literary’ nor ‘political.’

The second generation of game translation came about when games utilized greater processing power and storage capabilities to tell extensive stories. These were early adventure games like Colossal Cave Adventure (1976) and Zork (1977-80), which told second-person adventure narratives, and the more graphical adventure descendants of the 1980s such as King’s Quest (1984) and Final Fantasy (1987). These broke ground in games by normalizing narrative along with play. They also necessitated a new type of game translation that could address more than just the paratextual elements of title and manual.[72] This generation of game translation led to the creation of an industry for game translation.

The rise of linguistic material (stories in and surrounding the games) led to an acknowledged need for translation and the beginnings of the localization industry. Originally, the primary method was what is now called partial localization, where certain things were localized, but most others were not. Thus, the manual, title, dialogue, and menus might be translated, but the HUD (heads-up display) might remain in the original language due to the difficulty of graphical alterations. The localization industry evolved in the 1990s to match the growing game industry, and localized elements were expanded from menus and manuals to graphics, voices and eventually even story and play elements.

While the current form of game localization is much expanded from early game translation the basics are the same. According to the Localization Industry Standards Association (LISA[73]), “Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country/region and language) where it will be used and sold.”[74] Localization is like translation in that it facilitates the movement of software between places, but it is different in that it also allows significant changes in the visual, iconographic and audio registers in addition to the linguistic alteration.

Regardless of how much is translated, game translation involves the replacement of certain strings of code with other strings of code. These strings are usually linguistic: The title The Hyrule Fantasy: Zeruda no densetsu (The Hyrule Fantasy: ゼルダの伝説) becomes ‘The Legend of Zelda,’ and within the game the line “ヒトリデハキケンジャ コレヲ サズケヨウ” [it’s dangerous by yourself, receive this] becomes the meme-worthy “It’s dangerous to go alone. Take this!” But alterations are also graphical: a Nazi swastika is changed into a blank armband for games in Germany. The first is a title, the second is a linguistic asset, and the third is a graphical asset. All assets exist as strings of text in the application code, and by altering the programmed code, each can be changed in the effort to move the game from one context to another. The ability to alter assets is an essential quality of new media.
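The shared logic of these three replacements can be sketched in a few lines of Python; the table below is a hypothetical illustration of my own, not any studio’s actual pipeline, and it quotes only the assets just mentioned:

    # Hypothetical sketch: every localizable asset, linguistic or graphical,
    # exists as a replaceable string keyed by locale.
    assets = {
        "title": {
            "ja": "The Hyrule Fantasy: ゼルダの伝説",
            "en": "The Legend of Zelda",
        },
        "old_man_line": {
            "ja": "ヒトリデハキケンジャ コレヲ サズケヨウ",
            "en": "It's dangerous to go alone. Take this!",
        },
        "armband_texture": {
            "default": "textures/armband_swastika.png",
            "de": "textures/armband_blank.png",  # altered for German releases
        },
    }

    def resolve(asset_id, locale):
        """Pick the locale's variant of an asset, falling back to a default."""
        variants = assets[asset_id]
        return variants.get(locale, variants.get("default"))

Localization, in this reduced view, is nothing more than swapping which string is called.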

Along with numerical representation, modularity, automation and transcoding, Lev Manovich argues that one of the primary elements of new media is their variability.[75] This variability exists because new media is tied to digital code, which is adaptable, translatable and transmediatable through the alteration of specific strings. Because the strings, especially linguistic strings, are modular, there is no specificity to games. With digital games this variability is combined with the discourse of play as universally understandable. Because play is considered universal, the trappings of games (form, content and culture) are considered inconsequential, variable, and localized to fit into a target context in a way that does not change the game’s ludic [play] essence. Thus, any level of alteration in the localization process is fully sanctioned in order to provide the equivalent “experience” to the user.[76]

While asset alteration is possible as an essential quality of digital media, it is not simple: a hard-coded application can only be changed through painstakingly altering innumerable strings throughout the program. In contrast, an application that calls up assets can change the individual assets into multiple variations and then choose which assets to call. This practice has been enabled in part by the game production industry embracing internationalization (i18n) as a necessary and regular practice.

Internationalization is the practice of keeping as many game assets as possible untied to and unmarked by cultural elements. In his guide to localization Bert Esselink provides an example of an image with a baby covered in blankets and a separate layer of undefined, localizable text.[77] Unlike pre-internationalization methods, the image and text are not compressed together, which makes it possible and easy to switch the text. While the words are changeable the images remain the same, as there is an assumption that a smiling child is universal. That such particular elements are not, in fact, universal is an issue. Games move beyond this by retaining almost all elements as changeable assets, whether they are dialogue, images, Nazi armbands, or realistic representations of military flight simulators, but this changeability brings out other problems.[78] It does not address the elements that are assumed to be universal but are not, and it also positions internationalization as a lead-in to domestication. Within the ideal of internationalization the practice becomes domesticating translation by material and practical necessity. No matter what happens there will be an immediate, replacing, domesticating translation.
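A minimal sketch of the contrast may be useful; the file layout and function names here are hypothetical, but the pattern is the standard i18n indirection of calling up external resources rather than hard coding them:

    import json
    from pathlib import Path

    # Pre-i18n: the text is hard coded; changing it means editing the program.
    def hard_coded_greeting():
        return "Welcome, hero!"

    # i18n: the text is kept as untied, swappable assets, e.g.
    #   strings/en.json -> {"greeting": "Welcome, hero!"}
    #   strings/ja.json -> {"greeting": "ようこそ、勇者よ！"}
    def load_strings(locale, root=Path("strings")):
        path = root / f"{locale}.json"
        if not path.exists():          # fall back to the source locale
            path = root / "en.json"
        return json.loads(path.read_text(encoding="utf-8"))

The fallback clause is where the assumed universals hide: anything not made into an asset, or not translated, simply stays as it was in the source.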

If expansive narratives opened games up to larger amounts of translation, a confluence of factors led to the third generation of game translation and the eventual rise of the game localization industry. These were the rise of the software localization industry with i18n standards, the understanding of variability and the ability to change the games, the creation of CD technology with larger amounts of storage capacity, and finally the use of that storage capacity to enable voice acting to highlight the narratives.

While compact disk technology was created in the 1970s and has been a means of distributing music since the early 1980s, it took until the 1990s for games to be distributed on CDs. Beginning in the early 1990s CD-ROM drives were attached to computers and the PlayStation gaming console, and games began to be distributed on CDs. This move from floppy disks to CDs greatly expanded the size of games, and with it came the inclusion of both cinematics and digitized voices. One famous early example is Myst (1993). Both cinematics and recorded vocals take a large amount of storage capacity, which the CD provides. However, the CD does not provide enough space to enable multiple languages of vocal dialogue. There was a justified necessity to limit the languages included with a game because of the limited space available. Even when games moved to multiple disks, providing multiple audio tracks would have significantly increased the number of disks required.

The lack of space for multiple languages forced game translators to decide between subtitling the audio and dubbing it over. While this might have led into an equal debate between dubbing and subtitling (as with film translation), the dominance of computer-generated (CG) video over live-action, full-motion video within the games actually led to the naturalized dominance of dubbing and replacing.[79]

As CG requires that voices be added, there is little understanding that localization replaces anything. There is no ‘natural’ link between the visible body and the audible voice for CG, so dubbing causes fewer problems in gaming than it does in cinema.[80] However, there was not enough space to provide multiple languages on a single CD, which meant that the majority of games had only one language on them. Certain European releases include multiple languages by necessity, but this is far from the norm. Even when the storage and distribution method changed from CD to DVD there was little movement toward the inclusion of multiple languages. This lack of included languages is also partially due to the region encoding business practice.

Linguistic multiplicity within games has also been stymied by the practices of video encoding for TV and the different region encodings for DVD disks. CDs and DVDs are region encoded in order to protect business interests by opposing ‘piracy,’ defined here as the unsanctioned copying, spread and use of software applications.[81] There are two general eras of this encoding. The first was the separation between NTSC (National Television System Committee) and PAL (Phase Alternating Line). These two methods were linked to the televisions distributed in different regions; the different gaming systems and disks needed to operate in the same encoded manner as the televisions. This made it impossible to play European games (PAL) on an American system (NTSC), but it did not necessarily block out Japanese games (NTSC). This initial form of encoding has less to do with piracy protection than it does with policing national airwaves. DVDs use a slightly different method in that they are divided among eight regional encodings, roughly as follows: US/Canada (1), Europe/Middle East/Japan (2), Southeast Asia (3), Central/South America/Oceania (4), Russia/Africa (5), China (6), undefined (7), and international venues such as airports (8). For video games these region encodings work with and against the standard PAL/NTSC distinction, so that while Europe and Japan are both region 2, the PAL/NTSC difference still blocks playback between them; conversely, while NTSC disks work easily in both Japan and the United States, the region encoding limits playback across the two. Both the PAL/NTSC distinction and region encoding have multiple purposes, including software piracy prevention, but in terms of translation they legitimize not translating for multiple regions.
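The gatekeeping itself is computationally trivial, as a hypothetical sketch of a player-side check shows:

    # Hypothetical sketch of disk region gatekeeping.
    DVD_REGIONS = {
        1: "US/Canada", 2: "Europe/Middle East/Japan", 3: "Southeast Asia",
        4: "Central/South America/Oceania", 5: "Russia/Africa",
        6: "China", 7: "undefined", 8: "international venues",
    }

    def can_play(disc_regions, player_region, disc_standard, player_standard):
        """A disk plays only if both the region and the video standard match."""
        return player_region in disc_regions and disc_standard == player_standard

    # Japan and Europe share region 2 but differ on the video standard:
    print(can_play({2}, 2, "NTSC", "PAL"))   # False

That so small a check can legitimize not translating for whole regions is precisely the convergence of business and translation described here.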

As piracy is a problem for the game industry[82] and large amounts of piracy happen in certain regions (Asian regions especially, due to economic disparity, gray markets and governmental bans on consoles), there is a general belief that not supporting multiple languages will put a block on game piracy: if the gray-market version is unintelligible because it is in another language, it is possible that a user will still buy the version in their own language. In other words, limiting the number of languages available limits the geographical range of a particular version of a game, which works against the black markets and for the game industry. Thus, there is an interesting convergence between business interests, the technology available, the developing techniques in programming games, and the general trend toward translational domestication and immediacy. The storage capacity limitations, coupled with the use of cinematics and voices and the standardized practice of dubbing and replacing, dovetail perfectly with the industry practice of localization as domesticating, immediate translation.

The goal of localization is to make the product ‘appropriate.’ This goal is heavily influenced by the business elements of the localization industry. Localization is about profit, the bottom line, so the goal is to fit with user desires. Game localizers identify game user desire as relating solely to entertainment.[83] Entertainment and appropriate translation here are identified as helping the target player to have the same experience that the source player had in the source context. Such a singular drive is quite different from literary translations that aim to abuse the user, or linguistic interpretation and political translations that deal with the problems of modern political interaction. However, at base localization is still a matter of equivalence: the equivalent experience/feeling/affect.[84]

Insofar as the localization industry is a business, there is little one can say negatively about the practices enacted. Only popular games are localized, so translating them with the same money-making “experience” is better business practice. However, when one attempts to move beyond such market logics it is hard not to see the problems. Just as translation needs to be understood as important, powerful and dangerous, so too must localization be understood as a weighty practice. An industry that has globalization (g11n) as one of its prime terms must be aware that there is more to globalization than “the business issues associated with taking a product global.”[85] Just as globalization is a fraught term in the world at large, it must be problematized beyond its purely business definition in localization.[86] Said simply, there is more to a game than the immediate localization of the foreign user’s experience.

One way in which localization has recently pointed toward both hypermediation and alternate forms of translation is the creation of multilingual editions of games. With the switch from CDs to DVDs and the move to downloadable software, there has been a move to include multiple languages: DVDs have enough storage capacity to house multiple audio language tracks, and downloadable software is limited (if time consuming to download) only by the system’s hard drive capacity. One particularly interesting case is Square-Enix’s “international editions.” Notably, the international editions started with only one language, Japanese, but included a few additional features (Final Fantasy VII: International Edition). They then turned into games that mixed English and Japanese, but were released solely in Japan. The audio tracks were English and there were Japanese subtitles, but the rest of the game was in Japanese (Final Fantasy X: International Edition, Kingdom Hearts: Final Mix). Part of the difference between the early and later international editions is the move from CD to DVD: there was little spoken dialogue in the early version, but even in the DVD versions there was only a replaced audio track (the Japanese was replaced with ‘international’ English). A third movement was when both English and Japanese audio tracks were available, but only after finishing the game once: the initial playthrough necessitated that the player have a mixed English/Japanese experience with Japanese menus, written dialogue and subtitles, but with English audio (Kingdom Hearts II: Final Mix+). Finally, a fourth movement is the full availability of both English and Japanese with various different subtitled languages (Star Ocean: The Last Hope International). This progression of different styles of international edition implies that what was originally a gimmick has changed into a marketing decision based on the knowledge that there is an audience and that this audience has spread outside of Japan.

These international editions have a tangled relationship to the concept of kokusaika [internationalization, or ‘international-transformation’] within Japan. Kokusaika itself is tied to ideas of westernization in the late Tokugawa and Meiji periods, and Americanization in the post-World War II period. Kokusaika was seen as an important step of modernization in much of the discourse of the 19th and 20th centuries, but it is troubled in nationalist and essentialist discourses in particular.[87] One might also argue that the Square-Enix games both support and trouble this kokusaika discourse: they support it, but they maintain the importance of Japanese within the games. While the international edition allows multiple languages it does so from a Japanese expansionist perspective. Language is never neutral, and by putting the lingua franca and Japanese forward as the only choices (with the other standard gaming languages such as French, German, Spanish and Italian as subtitle options) there is a definite movement to raise the importance and reach of Japanese as a language. Kokusaika is thus maintained, but with the exception of a continued presence (and even dominance) of Japanese. While I believe the international editions are on the right track toward a layered, foreignizing style of translation, they still exist in the context of Japanese politics.[88] This is similar to Venuti’s claim that Schleiermacher’s work offers a helpful corrective despite the German author’s 19th century chauvinism.

While the past thirty years have led to increased immediacy and region protections, new forms such as DRM routines and online portals such as Steam indicate a general belief that such region separations have ultimately failed to protect against piracy. Because the region encoding tactics to prevent piracy have failed, it is possible that a new era of localization is coming, but so far the movement has been relatively limited. Hopefully this is only momentary, and the same hypermediacy that has been blocked out since the beginning of gaming will become visible, along with the existence of difference that translations and layers make visible. I will discuss some of these possibilities in the final section of this paper.

 

Possible Futures

I would like to conclude this paper with a discussion of two new trends in translation. Both are postmodern, intentionally unstable, and utilize digital materiality. One trend destabilizes the translator, and the other destabilizes the translation. However, both trends can heighten the feeling of hypermediation and foreignization, which (according to Venuti) is helpful in the current translational climate.[89]

 

Destabilization of the Translator

The destabilization of the translator involves multiple translators, but a single translation. It has its history in the Septuagint, but its present locus is around divided tasks and the post-Fordist, assembly-line form of production. Like the Septuagint, where 72 imprisoned scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that, arguably, combines the knowledge of all translators involved. This trend of translation can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bible, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist due to the effort of their user base, which adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It is a democratic form of knowledge production and authorship that certainly has issues (among these questions is whether wikis are actually democratic and neutral), but for translation it enables new possibilities.[90] While wikis are generally produced in a certain language and rarely translated (as the translation would not be able to keep up with the constant changes), the chunk-by-chunk form of translation has been used in various places.

One form of wiki translation is the Lolcat Bible translation project, a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” itself is a form of pidgin English where nonstandard tenses and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can haz cheeseburger?”[91] The Lolcat Bible project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth

LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem.[92]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on your personal outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could also be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. According to the homepage, the Lolcat Bible project began online in July of 2007, and a paper version was published through Ulysses Press in 2010. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bible.[93]

FLOSS (Free Libre Open Source Software) Manuals and translations are a slightly more organized version of this distributed trend.[94] FLOSS is theoretically linked to Yochai Benkler’s “peer production,” where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and translations capitalize on this distribution of personal drives.[95] Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to the open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution, but also important is the initial belief that machine translations are currently unusable. It is the problems of machine translation that cause the need for human intervention in translation, be it professional or open.

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Any particular phrase, such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?”, was translated dozens to hundreds of times, and the most recurrent variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into alternate languages. In Japanese <LIKE> would have become <好き>, but was transformed to <いいね!> [good!]. Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate) with plenty of “arrrs” and “mateys.” The open process created ‘usable’ material, such as Facebook in Japanese, but also things that would never happen due to bottom-line considerations, such as pirate, Indian, UK, and upside-down ‘translations’ of English.
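The underlying mechanism is simple majority selection over repeated submissions; a hypothetical sketch (my own reconstruction, not Facebook’s actual implementation) might look as follows:

    from collections import Counter

    # Hypothetical sketch of crowd-voted string translation: each source
    # string accumulates submissions, and the most recurrent variant wins.
    submissions = {}

    def submit(source, translation):
        submissions.setdefault(source, Counter())[translation] += 1

    def implemented(source):
        return submissions[source].most_common(1)[0][0]

    submit("Like", "好き")
    submit("Like", "いいね!")
    submit("Like", "いいね!")
    print(implemented("Like"))   # いいね! wins as the recurring variant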

Wikis, FLOSS, and Facebook are translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook, functionality and user empowerment are highlighted, but profitability is always in the background; for FLOSS, user empowerment through translation and publishing are one focus, but a second focus is the movement away from machine translation; in all cases, but wikis particularly, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator.

 

Destabilization of the Translation

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up, the mass turned back into a literal tower of Babel: it is footnotes, marginal writing and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass implies the whole. The translation is destabilized through using new media’s digital essence to bring out a hypermediating translational style.

This style of translation is not new; it encompasses the hypermediated translations that I discussed previously. It is side-by-side pages with marginal notes; it is Derridian translations; it is NINES and other multilayered digital scholarship; it is fan translations and metatitles; it is multilingual editions of games; it is modding. All of these exist, but not as a unified methodology. The destabilization of the translation is a term for grounding these different styles as a new methodology that utilizes forms of peer production (similar to the destabilization of the translator), but fully layers things so that it is not the average that is visible to the user, but a mountain of possibilities available to the user to delve into or climb up. All of these types of translation exist, and the willing translators mentioned above are available, so the difficulty is not in making the many translations happen. Rather, the difficult task is in rendering the multiplicity visible.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read, watch, or play two things at once? Books, films, and games provide multiple examples of how to deal with such an attention issue, but in a limited way. Footnotes, side-by-side pages, and subtitles are all hypermediating layers. However, the digital form presents new possibilities in that there is no space issue and things may be revealed and hidden at the user’s command. There are interesting possibilities for how games can use their digital, programmed form and user/peer production to bring out new levels of the application and the experience. I will review the digital book and metatitle here, but I will focus on what I see as a new form of game translation that not only uses, but truly thrives off of, fan production.

Books are rather conservative. While many are in open circulation due to a lapse in copyright, there is little invention happening to bridge different versions. While resources such as Project Gutenberg have opened these thousands of texts to digital reader devices, they exist as simple text forms, just as the other purchasable books exist as simple, immediate remediations of the original book form. However, a hypermediating variation would link these different versions and translations. At a click the reader could switch between Homer’s Odyssey in Greek and every single translation into English made in the 20th century. Of course, French, Japanese, German and various other translations would also be available, and the screen could be split to compare any of the above. With a slightly different (slightly less academic) mentality, the reader could peruse Jane Austen’s Pride and Prejudice on the left-hand side of the screen and the recent zombie rewrite Pride and Prejudice and Zombies on the right-hand side. This does not advance the technology particularly; it simply has a different relationship with the text, the author, and the translator. The key is to link the texts and make them available, even if it is through small micropayments for each edition.
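A minimal sketch of how such linked editions might be structured follows; the alignment scheme is an illustrative assumption on my part, not an existing eReader format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Edition:
        language: str
        translator: Optional[str]  # None for a source text
        paragraphs: list           # aligned paragraph by paragraph

    # Hypothetical: the Odyssey as a bundle of aligned, linked editions.
    editions = {
        "greek":  Edition("el", None, ["ἄνδρα μοι ἔννεπε, μοῦσα..."]),
        "butler": Edition("en", "Samuel Butler", ["Tell me, O Muse..."]),
        "fagles": Edition("en", "Robert Fagles", ["Sing to me of the man..."]),
    }

    def side_by_side(left, right, i):
        """Return paragraph i of two editions for a split-screen view."""
        return editions[left].paragraphs[i], editions[right].paragraphs[i]

Each purchased (or micropaid) translation would simply add another entry to the bundle; the hypermediation lies in the linking, not in any new hardware.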

Films are interesting as there are already possibilities in play: multiple subtitle and audio tracks, and commentary tracks by stars, directors and others. Subtitles are a simple layer that has existed for almost a century. However, with the advent of digital disks the subtitle has been separated from the print itself, allowing the user to hide the subtitles or to choose which subtitles to view. Shortly after the introduction of DVD technology, better compression algorithms enabled multiple audio tracks, including commentary tracks. We are now in an era of Blu-ray disks with more storage capacity, and downloadable movie sites that allow the user to access content as desired. These already exist. What would be a step forward is the linking of fan translation and commentary tracks to the digital artifact itself. Files that are in sync with the film, but must be started independently, exist now. Three examples are the abusive subtitling that I discussed earlier through Nornes; RiffTrax, from the creators of Mystery Science Theater 3000,[96] which overdubs commentary onto various films, creating a sort of meta-humor; and fan commentary from the Leaky Cauldron,[97] one of many prolific Harry Potter fan sites on the Internet. All three of these are independent fan productions that are partially sanctioned by business. It would be highly beneficial to producers, prosumers and consumers to enable the direct inclusion of these modifications into the DVDs themselves. It would also enable a new understanding of the film where the meaning is not the surface, but the build-up of meaning provided by both the original creators and all others who play with and add to it.

Finally, we arrive at digital games, where some of the most interesting fan work has been done and partially integrated. This means that the way has been opened for a hypermediated translation, but it has, so far, remained unpaved. The destabilization of the video game translation would combine the burgeoning practice of multilingual editions, where there is a visible choice for the user between one language version and another, and the practice of allowing and integrating fan mods. Mods are game modifications, which could be additional maps, different physics protocols, alternate graphics, or a host of other types. Some of these, such as Team Fortress, have been wildly popular. However, ‘mods’ could be expanded to include alternate translations and dialogue tracks. The workers are there and available,[98] but so far these fan productions have faced nothing but cease-and-desist letters, virtual takedowns, and lawsuits.

With digital games the localization process has traditionally replaced one language, with its library of accompanying files, with another. However, as computer memory increases, the choice of one language or another becomes less of an issue, and certain platforms, such as the Xbox and the online portal Steam, provide multiple languages with the core software. This gives rise to the language option, where the game can be flipped from one language to another through an option menu. Some games put this choice in the options menu at the title screen. Examples[99] of this are Gameloft’s iPhone games (almost all of them, but including Block Breaker Deluxe, Hero of Sparta, and Dungeon Hunter) and Ubisoft’s Nintendo DS game Might and Magic: Clash of Heroes. Others have a hard switch that makes the language of the game correspond to the language of the computer system software, so that a computer running in English would have only English visible in the game, but if that computer’s OS switched to Japanese the game would boot with the Japanese language enabled. Square-Enix’s Song Summoner: Encore, Final Fantasy, and Final Fantasy II iPhone releases automatically switch between English and Japanese depending on which language the iPhone is set to. The Xbox 360 has a similar switch mechanism that requires the system to be set to the desired language.[100] Between these two types are games played on the Steam system, such as Valve’s Portal and Half-Life 2, which allow the user to launch the game in a chosen language, but do not require a system-wide switch. Finally, a few games allow the user to switch back and forth between languages. Square-Enix’s iPhone game Chaos Rings allows the user to switch between English and Japanese in the in-game menu, allowing a rapid switch between languages at any time not currently in conversation or battle. This last example is the closest to a destabilization of the translation, as it allows the near-simultaneous visibility of multiple languages.
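The three switching styles differ mainly in where the locale decision is made: by the operating system, at launch, or in the in-game menu. A hypothetical sketch (the class is my own illustration, not any of these games’ actual code):

    import locale

    class GameLocale:
        """Hypothetical sketch of the three switching styles described above."""
        def __init__(self, available, default="en"):
            self.available = available
            self.current = default

        def from_system(self):
            """Hard switch: follow the OS language (the iPhone/Xbox 360 style)."""
            sys_lang = (locale.getlocale()[0] or "en").split("_")[0]
            self.current = sys_lang if sys_lang in self.available else "en"

        def at_launch(self, choice):
            """Per-launch choice without a system-wide switch (the Steam style)."""
            if choice in self.available:
                self.current = choice

        def toggle_in_game(self, in_conversation_or_battle):
            """Flip languages at almost any moment (the Chaos Rings style)."""
            if not in_conversation_or_battle:
                i = self.available.index(self.current)
                self.current = self.available[(i + 1) % len(self.available)]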

Integrating fan-created translational mods into the software itself would further destabilize the already unstable base of multiple visible languages. This integrated form would allow the user to switch among official localization, fan translation and fan mod at their whim. The official version ceases to exist, and the user is allowed both to interact with other types of users and to create fully sanctioned alternative semiotic domains. The eventual ability to mix and match a HUD in English, subtitles in Japanese and a fan translation in Polish would be a true destabilization.[101]
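In programming terms this mixing is only a small step beyond the single switch sketched above: instead of one locale for the whole game, each semiotic channel resolves its own, and fan tracks register alongside official ones. The channel names and sample strings below are hypothetical:

    # Hypothetical sketch: each channel carries its own (source, locale) pair,
    # mixing an English HUD, Japanese subtitles, and a Polish fan translation.
    channel_choice = {
        "hud":       ("official", "en"),
        "subtitles": ("official", "ja"),
        "dialogue":  ("fan_mod",  "pl"),
    }

    tracks = {  # (source, locale, channel) -> asset table
        ("official", "en", "hud"):       {"menu": "Menu", "hp": "HP"},
        ("official", "ja", "subtitles"): {"line1": "ひとりでは危険じゃ"},
        ("fan_mod",  "pl", "dialogue"):  {"line1": "Samotna wędrówka jest niebezpieczna"},
    }

    def lookup(channel, key):
        source, loc = channel_choice[channel]
        return tracks[(source, loc, channel)][key]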

Both the destabilization of the translator and the destabilization of the translation use new forms of fan and peer production and create a foreignizing, hypermediated translation. Both could be good in the current political moment, which equates difference with terrorism and necessitates the translational replacement of all forms of difference with local variations. However, key to both destabilizations is that they are not simply utopian fantasies, but legitimately productive and ready to enact. It is my intent to build, and build upon, these possibilities for opening up new forms of translation in digital media in my dissertation project on games and localization.


[1] For an example of the lack of integration of alternate media in translation studies, see: Lawrence Venuti. The Translation Studies Reader. 2nd ed. New York: Routledge, 2004. On a particular attempt to integrate it, see: Anthony Pym. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004. On the distinct effort to consider ‘old’ media as ‘new’ see: Lisa Gitelman and Geoffrey B. Pingree, eds. New Media, 1740-1915. Cambridge: MIT Press, 2003.

[2] Antoine Berman. “From Translation to Traduction.” Richard Sieburth trans. (unpublished): p. 11.

[3] Serge Lusignan. Parler Vulgairement. Paris/Montreal: Vrin-Presses de l’Université de Montréal, 1986: pp. 158-9. Quoted in Berman. “From Translation,” p. 9.

[4] Berman, “From Translation,” p. 11.

[5] Berman, “From Translation,” p. 11.

[6] Roland Barthes. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007. Rosemary J. Coombe. The Cultural Life of Intellectual Properties: Authorship, Appropriation, and the Law. Durham: Duke University Press, 1998. Néstor García Canclini. Hybrid Cultures: Strategies for Entering and Leaving Modernity. Minneapolis: University of Minnesota Press, 2005. Koichi Iwabuchi. Recentering Globalization: Popular Culture and Japanese Transnationalism. Durham: Duke University Press, 2002. Koichi Iwabuchi, Stephen Muecke, and Mandy Thomas. Rogue Flows: Trans-Asian Cultural Traffic. Aberdeen, Hong Kong: Hong Kong University Press, 2004.

[7] See: Barthes, “From Work to Text.” Michel Foucault. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003. Lesley Stern. The Scorsese Connection. Bloomington; London: Indiana University Press; British Film Institute, 1995. Mikhail Iampolski. The Memory of Tiresias: Intertextuality and Film. Berkeley: University of California Press, 1998.

[8] Berman, “From Translation,” p. 14.

[9] I use literary theories due to their prevalence within academia, but also because of their political nature. While other conceptualizations of translation avoid politics and ethics (particularly practical understandings of translation) comparative literary theories of translation highlight them: my underlying belief is that translation is both politically and culturally important.

[10] Jacques Derrida. “‘Eating Well,’ or the Calculation of the Subject: An Interview with Jacques Derrida.” In Who Comes after the Subject?, edited by Eduardo Cadava, Peter Connor and Jean-Luc Nancy, 96-119. New York: Routledge, 1991.

[11] George Steiner. After Babel: Aspects of Language and Translation. 3rd ed. Oxford; New York: Oxford University Press, 1998: p. 428.

[12] Ferdinand de Saussure, Charles Bally, Albert Sechehaye, and Albert Riedlinger. Course in General Linguistics. Translated by Roy Harris. LaSalle: Open Court, 1983 [1972]: p. 67.

[13] Saussure, Course, pp. 71-78.

[14] Saussure, Course, pp. 79-98.

[15] Jonathan D. Culler. Ferdinand De Saussure. Rev. ed. Ithaca, N.Y.: Cornell University Press, 1986: p. 132.

[16] Jacques Derrida. Of Grammatology. 1st American ed. Baltimore: Johns Hopkins University Press, 1976.

[17] Jacques Derrida. “Des Tours De Babel.” In Difference in Translation, edited by Joseph F. Graham. Ithaca: Cornell University Press, 1985: pp. 165-7.

[18] Jacques Derrida. “Living On: Border Lines.” In Deconstruction and Criticism, edited by Harold Bloom, Paul De Man, Jacques Derrida, Geoffrey H. Hartman and J. Hillis Miller. New York: Seabury Press, 1979.

[19] Jacques Derrida. “What Is a ‘Relevant’ Translation?” In The Translation Studies Reader: p. 443. (italics and brackets in text)

[20] Derrida, “‘Eating Well.’”

[21] Jacques Derrida. Specters of Marx: The State of the Debt, the Work of Mourning, and the New International. New York: Routledge, 1994.

[22] Philip E. Lewis. “The Measure of Translation Effects.” In Difference in Translation.

[23] Ironically, Spivak’s Derridian translation of Derrida’s Of Grammatology was successful in its abuse, but unsuccessful in getting her further translation jobs of Derrida’s works. Derridian translations are successful when they are unsuccessful.

[24] On the relationship between task, giving up and failure see: Paul De Man. “Conclusions: Walter Benjamin’s ‘the Task of the Translator’.” In The Resistance to Theory. Minneapolis: University of Minnesota Press, 1986: p. 80. For more on Derrida, Benjamin and De Man see: Tejaswini Niranjana. Siting Translation: History, Post-Structuralism, and the Colonial Context. Berkeley: University of California Press, 1992.

[25] Walter Benjamin. “The Task of the Translator: An Introduction to the Translation of Baudelaire’s Tableaux Parisiens.” In The Translation Studies Reader: p. 81.

[26] Benjamin. “The Task of the Translator,” p. 76.

[27] Emily Apter brings this out well in her work on translation and politics. Emily S. Apter. The Translation Zone: A New Comparative Literature. Princeton: Princeton University Press, 2006.

[28] Specifically, Robinson argues for the long lasting presence of Christian asceticism (both eremitic and cenobitic) coming from religious dogma, but leading into the word/sense debate. See: Douglas Robinson. “The Ascetic Foundations of Western Translatology: Jerome and Augustine.” Translation and Literature 1 (1992): 3-25.

[29] Jerome. “Letter to Pammachius.” Kathleen Davis trans. In The Translation Studies Reader: p. 28.

[30] John Dryden. “From the Preface to Ovid’s Epistles.” In The Translation Studies Reader, pp. 38-42.

[31] Roman Jakobson, Krystyna Pomorska, and Stephen Rudy. Language in Literature. Cambridge: Belknap Press, 1987: p. 429.

[32] Jakobson, Language in Literature, p. 434. There are interesting connections between formalism and Laura Marks’ work on digital translation. Marks argues that digitization necessarily robs things of certain qualities and this means they can be translated in interesting, new ways, but that they are forever robbed of originary elements. The digital becomes a universal language. See: Laura U. Marks. “The Task of the Digital Translator.” Journal of Neuro-Aesthetic Theory 2 (2000-02).

[33] Anton Popovič. Dictionary for the Analysis of Literary Translation. Edmonton: Department of Comparative Literature, University of Alberta, 1975: p. 6. Also see Niranjana’s discussion in Siting Translation, p. 57.

[34] I am skipping over large debates within game studies involving the question of the core of gaming: ludology and narratology. Roughly, whether the core of gaming is the ‘play’ or the ‘story.’ I skip this to save space, because it is a dead end that has been generally concluded with the answer of ‘both,’ because ludologists and narratologists are academics, but finally because ‘experience’ encapsulates both play and story.

[35] Carmen Mangiron and Minako O’Hagan. “Game Localization: Unleashing Imagination with ‘Restricted’ Translation.” Journal of Specialized Translation, no. 6 (2006): 10-21. Also see, Minako O’Hagan and Carmen Mangiron. “Games Localization: When Arigato Gets Lost in Translation.” Paper presented at the New Zealand Game Developers Conference, Otago 2004.

[36] Popovič, Dictionary, p. 11.

[37] Lawrence Venuti. “Foundational Statements.” In The Translation Studies Reader: p. 15.

[38] Schleiermacher is working with Dryden’s tripartite: metaphrase, paraphrase and imitation. In his understanding, then, word-for-word has been subsumed (since Jerome) for sense-for-sense, but imitation has been opened up as a larger (maligned) possibility.

[39] Friedrich Schleiermacher. “On the Different Methods of Translating.” In The Translation Studies Reader: p. 49.

[40] Schleiermacher. “On the Different Methods of Translating,” pp. 60-61.

[41] Antoine Berman. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992: p. 150.

[42] Berman, The Experience of the Foreign, p. 149.

[43] Lawrence Venuti. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994]: p. 15.

[44] Venuti, Translator’s Invisibility, p. 86.

[45] Venuti, Translator’s Invisibility, p. 98.

[46] Venuti, Translator’s Invisibility, p. 276.

[47] Venuti, Translator’s Invisibility, p. 85.

[48] Lawrence Venuti. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.

[49] Venuti, Scandals of Translation, p. 87.

[50] J. David Bolter and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.

[51] In this later work the metaphor has shifted to interfaces being both windows with immediacy and mirrors with reflection, but it is still connected to remediation with both immediacy and hypermediacy. Jay David Bolter and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003: p. 82.

[52] Metatitles are an extended form of subtitles that I first discussed in my Master’s thesis; Jerome McGann’s work, including IVANHOE and his Rossetti work, can be found through his website <http://www2.iath.virginia.edu/jjm2f/online.html>; mods are fan/user created game modifications.

[53] Alexander R. Galloway. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006: pp. 70-84.

[54] Berman, “From Translation,” p. 6.

[55] Jacques Derrida. Glas. Lincoln: University of Nebraska Press, 1986 [1974].

[56] This application is for various ‘smart’ phones and the iPad, but the technology is still not utilized for eReaders. My point is that this lack is not for technological reasons, but because of the ways the eReader is both imagined and actualized.

[57] For a general, early look at film translation see: Dirk Delabastita. “Translation and the Mass Media.” in Susan Bassnett and Andre Lefevere eds. Translation, History and Culture. London: Pinter Publishers, 1990.

[58] Lawrence W. Levine. Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America. Cambridge: Harvard UP, 1988. Referenced in Jennifer Forrest “The ‘Personal’ Touch: The Original, the Remake, and the Dupe in Early Cinema,” In Jennifer Forrest and Leonard R. Koos eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002: p. 102.

[59] As has been stated by many people in the 20th century, there is nothing objective, or reflective, about representation, and there never was for early cinema; however, this belief has never really gone away. See: Ella Shohat and Robert Stam. “The Cinema after Babel: Language, Difference, Power.” Screen 26.3-4, 1985: 35-58.

[60] This holds regardless of the corruption of subtitles described in Abé Mark Nornes. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

[61] Arjun Appadurai. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press, 1996: particularly p. 39.

[62] For Japanese this is particularly a problem; for English this is less of a problem, especially for Americans, due to the assumption that English is a global language.

[63] On MLV see: Ginette Vincendeau. “Hollywood Babel: The Coming of Sound and the Multiple Language Version.” Screen 29.2 (1988): 24-39. On FLV see: Natasa Durovicová. “Translating America: The Hollywood Multilinguals 1929-1933.” In Sound Theory: Sound Practice, edited by Rick Altman, 138-53. New York: Routledge, 1992. Also, see: Nornes, Cinema Babel.

[64] See: Chon Noriega. “Godzilla and the Japanese Nightmare: When “Them!” is U.S.” Cinema Journal 27.1 (Autumn 1987): 63-77.

[65] These are visible in the United States, to which I largely refer, but there is another history within India’s Bollywood (often illegal/unofficial) remake practices.

[66] Ironically, the actual words she uses, ホスト, ホステス and キャバレー, are all foreign loan words in katakana. Thus, even her word choice is based in an awkward schizophrenia between local and foreign.

[67] Abé Mark Nornes. “For an Abusive Subtitling.” Film Quarterly 52, no. 3 (1999): 17-34.

[68] L10n is the industry shorthand for localization: there are 10 letters between the L and the n. In addition to localization, the industry uses i18n as shorthand for internationalization and g11n for globalization.

[69] For a discussion on the demonstration and visibility of these early games, see: Van Burnham. Supercade: A Visual History of the Videogame Age 1971-1984. Cambridge: MIT Press, 2003.

[70] In particular see Michel Foucault on the new regime of power/knowledge through a new way of seeing, and Lisa Cartwright on the problems of medical imaging technologies and truth. See: Lisa Cartwright. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press, 1995. Michel Foucault. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books, 1975. Marita Sturken and Lisa Cartwright. Practices of Looking: An Introduction to Visual Culture. Oxford; New York: Oxford University Press, 2001.

[71] Mary Flanagan. “Locating Play and Politics: Real World Games & Activism.” Paper presented at the Digital Arts and Culture, Perth, Australia 2007: p. 3.

[72] See: Gérard Genette. Palimpsests: Literature in the Second Degree. Lincoln: University of Nebraska Press, 1997; Gérard Genette. Paratexts: Thresholds of Interpretation, Literature, Culture, Theory. Cambridge; New York, NY: Cambridge University Press, 1997.

[73] LISA is “An organization which was founded in 1990 and is made up mostly [of] software publishers and localization service providers. LISA organizes forums, publishes a newsletter, conducts surveys, and has initiated several special-interest groups focusing on specific issues in localization.” Bert Esselink. A Practical Guide to Localization. Rev. ed. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2000: p. 471.

[74] LISA quoted in Esselink, A Practical Guide to Localization, p. 3.

[75] Lev Manovich. The Language of New Media. Cambridge: MIT Press, 2001.

[76] On experience as the core equivalence see the work of Carmen Mangiron and Minako O’Hagan: Carmen Mangiron. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23; Mangiron and O’Hagan, “Game Localization;” Minako O’Hagan. “Conceptualizing the Future of Translation with Localization.” The International Journal of Localization (2004): 15-22; Minako O’Hagan. “Towards a Cross-Cultural Game Design: An Explorative Study in Understanding the Player Experience of a Localised Japanese Video Game.” The Journal of Specialized Translation, no. 11 (2009): 211-33; O’Hagan and Mangiron, “Games Localization.”

[77] Esselink, A Practical Guide to Localization, p. 46.

[78] Frank Dietz. “Issues in Localizing Computer Games.” In Perspectives on Localization, edited by Kieran Dunne. Amsterdam; Philadelphia: John Benjamins Publishing, 2006. Also, Mangiron and O’Hagan, “Game Localization.”

[79] The move to CG from live action might also be a contributing factor to the rise of domesticating, replacement localization. Technically, gaming started with live action cut-scenes with big budgets and famous actors in the 1990s (Wing Commander III (1994); Star Wars: Jedi Knight: Dark Forces II (1997)), but it moved to CG cut-scenes using the game engine by the late 1990s and early 2000s (Half Life (1998), Star Wars: Jedi Knight II: Jedi Outcast (2002)). In part this could be seen as a budget issue, but in part it is an immersion issue, as live action cut-scenes could be considered more jarring due to their difference from the regular game.

[80] This is, of course, ironic as cinema often overdubs the dialogue into the film due to the difficulties of recording clear dialogue when filming.

[81] This is an incredibly rough definition especially due to how ‘piracy’ relates to fan production, modding and copyright.

[82] Piracy is rampant with PC games, due to the ease of duplicating CDs and DVDs, and only slightly better with console games where cartridges are harder to duplicate. For various views on game piracy see: Ernesto. “Modern Warfare 2 Most Pirated Game of 2009.” TorrentFreak. Posted: December 27, 2009. Accessed: June 6, 2010. <http://torrentfreak.com/the-most-pirated-games-of-2009-091227/>. David Rosen. “Another View of Video Game Piracy.” Kotaku. Posted: May 7, 2010. Accessed: June 6, 2010. <http://kotaku.com/5533615/another-view-of-video-game-piracy>. In general, also see the blog Play No Evil: Game Security, IT Security, and Secure Game Design Services, particularly the “DRM, Game Piracy & Used Games” category: <http://playnoevil.com/serendipity/index.php?/categories/7-DRM,-Game-Piracy-Used-Games>.

[83] Mangiron and O’Hagan, “Game Localization.”

[84] That the equivalent experience comes from, and aims toward, generic cultural attributes of a presumed group, and not a complex, real group, is another problem entirely.

[85] Esselink, A Practical Guide to Localization, p. 4.

[86] Appadurai, Modernity at Large. Toby Miller, Nitin Govil, John McMurria, Richard Maxwell, and Ting Wang. Global Hollywood 2. London: BFI Publishing, 2005. John Tomlinson. Cultural Imperialism: A Critical Introduction. Baltimore: Johns Hopkins University Press, 1991.

[87] Harumi Befu. Hegemony of Homogeneity: An Anthropological Analysis of “Nihonjinron.” Melbourne: Trans Pacific Press, 2001. Stephen Vlastos. Mirror of Modernity: Invented Traditions of Modern Japan. Berkeley: University of California Press, 1998. Tomiko Yoda and Harry D. Harootunian. Japan after Japan: Social and Cultural Life from the Recessionary 1990s to the Present. Durham: Duke University Press, 2006.

[88] I have written about both the politics of Square-Enix as a Japanese company and the International Edition as a political force elsewhere. See: William Huber and Stephen Mandiberg. “Kingdom Hearts, Territoriality and Flow.” Paper presentation at the 4th Digital Games Research Association Conference. Breaking New Ground: Innovation in Games, Play, Practice and Theory. Brunel University, West London, United Kingdom. September, 2009; Stephen Mandiberg. “The International Edition and National Exoticism.” Paper presentation at Meaningful Play. Michigan State University, East Lansing. October, 2008.

[89] There are serious issues regarding labor and these two trends of translation. The first is the labor of fans in creating translations, which sits dangerously close to exploitation; micro-payments for the additional localization packages could alleviate this, since fans must receive some compensation for their labor. The second issue is the de-skilling of professional translators and localizers as their work potentially disappears to the fans. This is a real concern, but micro-payments and companies’ continued need to pay localizers for the primary localizations should alleviate the de-skilling somewhat. These matters demand more attention than I give them in the present paper.

[90] See: Joseph Reagle. Good Faith Collaboration: The Culture of Wikipedia. Cambridge: MIT Press, 2010.

[91] Rocketboom Know Your Meme. <http://knowyourmeme.com/memes/lolcats>; I Can Has Cheezburger. <http://icanhascheezburger.com/>. Hobotopia. <http://apelad.blogspot.com/>.

[92] LOLCat Bible Translation Project. <http://www.lolcatbible.com/index.php?title=Genesis_1>.

[93] A slightly different translation project that utilized the masses is Fred Benenson’s Kickstarter project Emoji Dick. Benenson used Kickstarter, an online funding platform, to fund a translation of Moby Dick into emoticons using Amazon’s Mechanical Turk. Thousands of individual Mechanical Turk users were paid pennies to translate individual sentences into emoticons, and the results were published. See: <http://www.kickstarter.com/projects/fred/emoji-dick>.

[94] FLOSS Manuals. <http://en.flossmanuals.net/>.

[95] Yochai Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press, 2006.

[96] RiffTrax. <http://www.rifftrax.com/>

[97] The Leaky Cauldron. <http://www.the-leaky-cauldron.org/features/dvdcommentaries>.

[98] Fan translations and retranslations have both existed over the past decades. For instance, see the Chrono Trigger retranslation <http://www.chronocompendium.com/Term/Retranslation.html>, the Mother 3 fan translation <http://mother3.fobby.net/>, and the Seiken Densetsu 3 fan translation <http://www.neillcorlett.com/sd3/>.

[99] There are innumerable examples of each type. I am simply listing ones that come to mind.

[100] The Xbox 360 information comes from Rolf Klischewski. IGDA LocSIG mailing list. May 31, 2010.

[101] While Dyer-Witheford and De Peuter would likely relegate this industry-integrated solution to a form of apology for Empire, I prefer to think of it as a dialogic solution. See: Nick Dyer-Witheford and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009. Mikhail Bakhtin, The Dialogic Imagination: Four Essays. Austin: University of Texas Press, 1981.

On Translation and/as Interface

I. Windows, Mirrors and Translations

Within their book Windows and Mirrors, J. David Bolter and Diane Gromala discuss the interface of digital artifacts (primarily artistic ones, as their sample set is largely drawn from SIGGRAPH 2000) as having two trends. The first is the invisible window, where we see through the interface to the content; the other is the seemingly maligned (at least recently, and in much of the design literature) reflective mirror, which reflects how the interface works with/on us as users.

This is similar in ways to Ian Bogost and Nick Montfort’s Platform Studies initiative, where the interface exists between the object’s form/function and its reception/operation, and this interface can do many things depending on its contextual and material particulars. We need only look at the difference between Myst with its clear screen and Quake with its HUD, or between Halo with its standard gamepad and Wario Ware: Smooth Moves with its wiimote utilization, to see the range.

However, another thing that the discussion of windows and mirrors, immediacy and hypermediacy, seeing through and looking at all bring up when paired with interface is translation. A translation is also an interface. It can be a window or a mirror, transparent or layered: you can see through it to some content, or you can be forced to look at it, at the form and the translation itself.

But thinking of translation as an interface in the Bolter and Gromala sense, or as Bogost and Montfort’s interface layer, is unusual. The usual move is to place translation outside of the game as a post-production necessity that enables the global spread of the product, or, at best, as an integrated element of the production side that minimally alters the text so that it can be accepted in the target locale. Even researchers within the field of game studies generally ignore the language of the game: nobody asks what version the researcher played, because we all recognize that we play different versions; more important is that the researcher played at all.

So translation’s place is in question. Is it production? Post-production? Important? Negligible? And how does one study it? We can barely agree upon how to study play and games themselves, so surely this is putting the carriage before the horse (or maybe some nails on the carriage before both). But, no, I still wish to follow through with this discussion, as I believe it can be productive. My question is how translation relates to games, and hopefully I can come up with a few thoughts/answers, if not a single ‘truth.’

II. Translation and Localization

As Heather Chandler has so wonderfully documented, the translation of a game has a variable relationship to the production cycle. At one point it was entirely post-productive and barely involved the original production and development teams. At its earliest it was simply the inclusion of a translated sheet of instructions to aid the user in deciphering what was a game in a completely foreign language. This still exists in certain locations, especially those with weaker linguistic and monetary Empires (obviously not English, but ironically this includes China, where games are often gray or black market Japanese imports). This type of translation, called a non-localization, has slowly given way to more complete localizations, including “partial” and “full” localizations. Partial localizations maintain many in-game features: menus and titles switch language, and while audio may remain as is, subtitles will be included. In contrast, a full localization tends toward altering everything to a target preference, including voices, images, dialogue, background music, and even game elements such as diegetic locations. As the extent of localization increased, the position (temporally and in importance) of translation in the production cycle changed. It moved forward and needed pre-planning for nested file structures. It also grew in importance, so that more money might be spent to ensure a better product.
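
To make these depths concrete, the difference can be imagined as an asset manifest of what gets swapped at each level. A minimal sketch, with all keys and file names hypothetical rather than drawn from any actual pipeline:

```python
# Hypothetical manifests contrasting depths of localization.
# Keys and file names are illustrative only.

NON_LOCALIZATION = {
    "packaging": "translated_instruction_sheet.pdf",  # everything else ships as-is
}

PARTIAL_LOCALIZATION = {
    "menus": "ui_strings.fr.json",   # menus and titles switch language
    "subtitles": "dialogue.fr.srt",  # subtitles are added...
    "audio": "voices.ja.bank",       # ...but the original audio remains
}

FULL_LOCALIZATION = {
    "menus": "ui_strings.fr.json",
    "subtitles": "dialogue.fr.srt",
    "audio": "voices.fr.bank",       # re-recorded voices
    "images": "signage.fr.pak",      # in-game images and signage redrawn
    "music": "bgm.fr.bank",          # even background music may be swapped
    "levels": "locations.fr.dat",    # diegetic locations altered for the locale
}
```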

However, other than a few gaffes like “all your base” and other poor translations from the early years, game translation has increasingly become invisible. This invisibility, or transparency, has been written about extensively by Lawrence Venuti regarding literary translation, the status of the translator, and the relationship of global to national cultural production. For my purposes here I will simply note his claim that fluent translations are a problem (in the context of American Empire) and observe that current game localization practices (which are multi/international, but in many ways American-centric) do precisely what he decries. We don’t need to accept his arguments regarding empire and discursive regimes of translation (although I do), but we should be aware of the parallels between what he describes through literary analysis and translation reviews, and the way that nobody even talks about a game translation.

So the industry hides translation. But why does the academic community ignore it? Is it not a part of games? Maybe. But is it a part of play?

III. Ontology

Ontologies of play typically exclude translation. This is most obviously demonstrated in Jesper Juul’s summary of the common definitions of games that he uses to form his own classic game model. Rules are all well and good, but all games have a context, and it is this context that Juul misses when he dismisses the idea of “social groupings” (Juul 34). Juul pulls this from Huizinga, and it is key that it relates to Huizinga’s primary contribution of the magic circle and the ‘ins’ and ‘ofs’ of play and culture.

I would argue that games promote social groups, but they also form in social groups, and language is crucial to this as an important (perhaps primary) marker of a social group. However, in Juul’s final analysis “the rest of the world” has almost entirely been removed as an “optional” element (41). It is one thing to say that the outcome might affect the world, but it is another to say the game can only be created through that world, and that its mere playing affects the world. Juul even acknowledges this in the conclusion to the chapter, where he notes that pervasive and locative games break the rule. However, I would still argue that even the classic model does not obey the “bounded in space and time” principle.

The former can be demonstrated through Scrabble: a game created in English with strict rules, negotiable outcomes, player effort, attachment, valorization of winning, and many ways to do so. But the game is completely attached to English. Each letter’s point value is determined by its ease of use, and the scarcity of each letter in the tile set is based on its common usage. The game is designed around English, and one cannot play it with other languages. Take Japanese: even if one were to Romanize the characters, one wouldn’t have nearly enough vowels, and if one replaced all of the characters with hiragana, there would still be far too many homonyms to make a meaningful/difficult game. Japanese Scrabble might be possible, but it would need to be created by changing a great deal of the game. It is bounded in space and time, but contextually so.
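
The point about letter values can be made computationally. A toy sketch of the design logic, in which point values are derived from letter frequency in a given language’s word stock (the word list and scoring thresholds here are invented):

```python
# Derive Scrabble-like point values from letter frequency: the same
# procedure run over a different language's lexicon yields a different
# game. The word list and thresholds are toy stand-ins.
from collections import Counter

words = ["translation", "localization", "interface", "window", "mirror"]
counts = Counter(letter for word in words for letter in word)
total = sum(counts.values())

for letter, n in counts.most_common():
    share = n / total
    points = 1 if share > 0.08 else 4 if share > 0.03 else 8  # rarer = pricier
    print(letter, n, points)
```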

For the latter we can return to Huizinga and Caillois, who both locate play/games within a relationship to culture. Their teleological and Structuralist issues aside, it is important not to simply separate games (the text) from culture, time and place (the context) in a reductively formal analysis. Huizinga links play to culture as a functional element: the rules serve a purpose, even if that purpose has changed. Caillois notes a key association between types of play and particular societies. Games may be a separate place, but they affect the real world, and vice-versa.

IV. Platform Studies

So context is important. Essential even. Let’s tack it on and see what happens. Or better yet, let’s say it’s pervasive and inseparable, but also difficult to distinguish. This is much like Bogost and Montfort’s Platform Studies model, so let’s see how translation could be integrated into that model.

Here I will primarily use Montfort’s earlier conceptualization of platform studies from his essay “Combat in Context.” Montfort moves toward a slightly simplified five-layer model from Lars Konzack’s seven-layer model by moving cultural and social context from a layer to a surrounding element. However, it is interesting that while he moves context to a surrounding element, it is the platform that remains key: everything in his model relies on the platform.

As the base level, the platform enables what can be created upon it. It is the question of whether the game appears on a screen; whether the system plays DVDs or cartridges or downloads files; how big those are; and what size of game is allowed on them. It is the capabilities of the system and what those capabilities enable. However, the platform layer exists in a context both technological and socio-cultural. The processor chip of the platform is in a particular context and limits the platform, but the existence of a living room with enough space to move can also limit the platform.

Second is the game code. The switch from assembly to higher-level programming was enabled by platform advancements, but it in turn enabled great differences in the further layers. The way the code exists is also integrally related to linguistics/language. Translating assembly code is painstaking and almost always avoided; the era of assembly code was also the era of in-house translations and non- or partial localizations. In contrast, C and its derivatives enable greater linguistic integration, and as long as programs in higher-level code are written intelligibly, translating them is possible. Context at the game code level involves language. This much is obvious, as code is language. But I mean something further: there is a shift in allowances along the way that reveals how real-world “natural/national” languages become integrated, but always subsumed under machine languages.
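
As an illustration of that shift, consider how string externalization, common in higher-level development, turns translation into a data problem rather than a reverse-engineering problem. A minimal sketch; the file naming scheme and keys are hypothetical:

```python
# Once text is externalized into swappable resource files, the code
# never embeds the natural language itself, and a locale is just data.
import json

def load_strings(locale):
    # e.g. strings.en.json and strings.ja.json sit beside the binary
    with open(f"strings.{locale}.json", encoding="utf-8") as f:
        return json.load(f)

strings = load_strings("ja")
print(strings["menu.start"])  # hardcoded equivalents would require editing code
```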

Third is the game form: the narrative and rules. What we see, hear and play (if not ‘how’ we see, hear and play). This is the non-phenomenological game. The text, as it is. Of course, if it is the text then what is the surrounding context other than everything?

As we’ve seen from Juul, the rules profess languagelessness: we enter a world that has a set of rules separate from life, and this prevents one from linking the game to life. But the narrative, if one does not think it an inconsequential thing tacked onto the essential rules, is related to contextually relevant things and presented in linguistically particular ways. Language, then, is here as well, and translation plays an important role. In many ways this is the main place in which one might locate translation, but only if one is a narratologist. If the story is of prime importance, form is where translation exists.

The fourth level is the interface. Not the interface that I began with, at least not quite, or not yet, but the link between the player and the game: the “how” one sees, hears and plays the game. To Bogost and Montfort this is the control scheme, the wiimote and its phenomenological appeal compared to the gamepad or joystick, but it is also the way the game has layers of information that it must communicate to the user. The form of the game leads toward certain options of interface: a PvP FPS must have easily read information that allows quick decisions and a full game-time experience, but a slow RPG can have layers of dense, opaque interface that force the user to spend hours making decisions in non-game time.

The interface also enables certain things. A complicated interface is hard to pick up and understand, but a simple one is easy. This is a design principle that Bolter and Gromala contest, but it has levels of truth to it. A new audience is not likely to pick up the obscenely difficult interface layering of an RPG or turn-based strategy game, but a casual point-and-click may be easily picked up and learned (if just as easily put down and forgotten).

In some ways this is also where translation exists, and in some ways it isn’t. Certainly the GUI’s linguistic elements can be translated, but more often they are designed in a supposedly non-linguistic and universal manner: a heart symbol stands for life and a lightning bolt stands for magic or energy, or life is red and energy/magic is blue. Similarly, the audio cues are often untranslated. And controls mainly stay the same. Perhaps one of the few interface-level control changes is the PlayStation convention in which Japanese releases use O, or ‘maru,’ for ‘yes’ and X, or ‘batsu,’ for ‘no,’ while English releases swap them: X, or check, for ‘yes’ and O for ‘no.’
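
That one swap is easy to express. A sketch of how a region-aware input layer might encode it (button and region names are illustrative):

```python
# Regional confirm/cancel mapping on the PlayStation pad.
CONFIRM_CANCEL = {
    "ja": {"confirm": "circle", "cancel": "cross"},  # maru = yes, batsu = no
    "en": {"confirm": "cross", "cancel": "circle"},  # swapped for English releases
}

def handle_button(region, button):
    mapping = CONFIRM_CANCEL[region]
    if button == mapping["confirm"]:
        return "accept"
    if button == mapping["cancel"]:
        return "back"
    return "ignore"

assert handle_button("ja", "circle") == "accept"
assert handle_button("en", "circle") == "back"  # the same press now cancels
```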

The fifth level is reception and operation: how the user and society receive the game, how it has come from prequels and gone to sequels, its transmedial or generic reverberations, and even the lawsuits and news surrounding it. All of these point outside of the game, but how does one then separate context? Is the nation the receiver or the context? Is the national language or dominant dialect part of the level or of the surrounding context? Is it affected by the game, or can it then affect the game? And even if it affects the game from the top layer, is it negligible in its importance? Is this another material vs. ideological Marxist fight for a new generation?

A short answer is that Bogost and Montfort answer all of this by making context a surrounding element, but they thereby fail to highlight its importance. Pushing context out to the surrounding bits essentializes the core and approves of an analysis that does not include the periphery. The core can be enumerated; the periphery can never be fully labeled or contained.

Elements of importance are too destabilized to be meaningful when analyzed according to the platform studies model. Translation is a prime example, but race and sexuality are equally problematic. Their agenda is not contextual, but formal; mine is contextual and cultural.

V. Translation as Interface

The goal of localization is to translate a game so that a user in the target locale can have the same experience as a user in the source locale. For localization, then, translation is about providing a similar fifth-level reception and operation experience. However, to provide this experience the localizers must alter the game form level by physically manipulating the game code level. The interface, beyond minor linguistic alteration, is not physically altered, and yet it is the metaphor for what is being done to the game itself. The translation of a game, like the interface-as-window that Bolter and Gromala critique, attempts to transparently allow the user to look into a presumed originary text, or, in the case of games, into an originary experience. It reduces the text to a singular experience/text. However, the experience and text were never singular to begin with. In translations, too, we need mirrors as well as windows; so how can we make a translation that reads like a mirror by reflecting the user and his or her own experience?

First, all of Bolter and Gromala’s claims against design’s obsession with windows and transparency are completely transferable to games as digital artifacts and to the localization industry’s professed agendas. Thus, the primary necessity is to acknowledge the benefit of a non-window translation. Second, the translation must be put in place as a visible, reflective interface that shows the user’s playing particulars, the original’s playing particulars, and the way that the game form and code have been changed in the process. This could be enabled by a more layered, visible, foreignizing translational style. Instead of automatically loading one version of the game, the user should be required to pick a translation and be notified that they can pick another. Different localizations should be visibly provided on a single medium. Alternate, fan-produced modification translations should be enabled. If an uncomplicated translation-interface is an invisible and unproductive interface, then a complicated translation-interface is a visible and productive one. Make the translational interface visible.
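
A minimal sketch of what such a boot-time choice might look like; the translation list, labels, and menu flow are hypothetical:

```python
# Enumerate every available translation, official and fan-made, and
# require an explicit, revisitable choice instead of a silent default.
TRANSLATIONS = [
    {"id": "ja",     "label": "Japanese (original)", "source": "official"},
    {"id": "en",     "label": "English (localized)", "source": "official"},
    {"id": "en-lit", "label": "English (literal)",   "source": "fan mod"},
]

def choose_translation():
    for i, t in enumerate(TRANSLATIONS):
        print(f"[{i}] {t['label']} ({t['source']})")
    i = int(input("Pick a translation (you may switch at any time): "))
    return TRANSLATIONS[i]
```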

VI. References

  • Bolter, J. David, and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003.
  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.
  • Chandler, Heather Maxwell. The Game Production Handbook. 2nd ed. Hingham: Infinity Science Press, 2009.
  • Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge: MIT Press, 2005.
  • Montfort, Nick. “Combat in Context.” Game Studies 6, no. 1 (2006).
  • Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge: MIT Press, 2009.
  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994].

二ノ国: Impressions and Localization Expectations

Day 1: Initial Impressions

I was discussing Japanese manuals and their translation at a game developers/producers bar gathering. Specifically, I was being told that translating them is incredibly boring, as they are routine, have little of interest, et cetera. This struck me as odd at the time, because my informant was referring specifically to Japanese manuals (although he then added that English manuals have similarly become boring), but it also rang vaguely true, in that manuals are very chunked up in terms of translation. They are incredibly redundant and simplistic. As Gee has noted, they make no sense at first, but become sensible after playing. There are so many “problems” with manuals that it’s amazing they’re still there and haven’t been fully replaced by in-game education (by which they have already been partially replaced).

I asked my informant whether it had something to do with reading and Japan. His answer was that this was a relativistic assumption: thanks to furigana, it is no harder for a child to read Japanese than English. I demurred, but still questioned. I’m still not sure what the answer is, but having just seen Level 5 and Studio Ghibli’s upcoming Nintendo DS title 二ノ国 [ni no kuni], I’m writing about how that manual will turn out in relation to this whole idea of manuals in particular and translation in general.

Technically, I’m not even sure whether the 352-page book next to every DS unit, filled with characters, items, story, et cetera, is a “manual” that comes with the game or an extra for the Tokyo Game Show, but I can’t imagine the latter, as during my 15 minutes of play I was required to go into the tome (to page 61), retrieve the phrase “いでよなべまじん!” (roughly, “Come forth, pot genie!”), and input it into the game to summon the genie-like boss/enemy.

So, the question here is two-fold. First, is the book really an integral second half of the game? If it is, then what does a 352-page required-reading tome do to “video gaming”? Second, how will that tome be translated!?

Both of these questions are fantastically interesting on various levels. The first speaks to theories of “game” and “play.” Where is the story, and where is the play? They are overlapped, in that to play the game one must understand the story. Narratology has a vague revenge on ludology. Does this interaction of book and game encourage kids to read? Is all of this intentional?

The second is of course particularly interesting to me, in that a 352-page tome is so far from both the standard practice of manual translation and the standard type of game localization that to translate it almost requires a translation and not a localization. Will the job be distributed? Since Ghibli has previously gone to Neil Gaiman for a celebrity/professional rewriting-style translation, will that be the avenue of choice? And how will that then affect the actual localization element of the game?

Sure, 二ノ国’s manual is hardly “usual,” but its exceptional qualities bring out the very questions that came up in the original conversation about manual translation. Is reading ability, which is to say “literacy,” a target of this game, spearheaded by a company whose head has a penchant for hating new media in particular and technology in general?

Let us simply say that I am looking forward to the translation/localization of this title, and I hope I can talk with the localizers. For that matter, I’m not sure that a localization has even been announced…

Day 2: Further Thoughts and Localization Expectations

I went back to Level 5’s 二ノ国 DS title today and confused the hell out of the staff by not playing the game at all. No, I don’t want to put on the headphones, and no, I don’t want to choose one of the two demos. I just want to peruse the book. So here are my further impressions and expectations.

It’s a 352-page book divided into 7 chapters (魔法指南 [magic guide], 合成指南 [synthesis guide], 装備指南 [equipment guide], 道具と食べ物 [tools and food], イマージェンと魔物 [the game’s creatures and monsters], 伝説の物語 [legendary tales], and 色々な地方 [various regions]), and those chapters hold an amazing amount of material: from how magic and alchemy work, to information about equipment, tools, food, and creatures, to legends and stories of the world, and finally various extra information about characters and places. And of course there are pictures throughout. The book is really beautiful, but it is truly amazing in that it forces the player to read it! Players must peruse it at least enough to get information, but its beauty encourages them to read the rest. Yes, it’s a carrot and stick situation involving children and literacy.

This book alone would make translation an interesting task, as it would be translation, not localization. But the particular use of language within the game makes it even more complicated. The in-game alphabet is based off of the Japanese 46-character syllabary, with corresponding marks including the dakuten (゛), handakuten (゜), and small っ. Such a one-to-one choice is far from unknown: FFX had a similar trick with the Al Bhed (アルベド) alphabet, but it was largely a non-issue due to the bulk replacement and lack of visual use of the language in the game. The particular use in Final Fantasy is to take the language, mix it around, and voila, a “different language.” The issues with 二ノ国 are heightened by the visual representation of an alternate language and the writing of characters during play. If the player does not write them it is less of an issue, but still a great difficulty.

To give an example, the book itself is called Magic Master, which transliterates to まじっくますたー, which transliterates back to English as majikku masutaa, or Magic Master. This is on the cover of the book, and there are paragraphs of the game language throughout the book at various points. One expects it is in the game world as well. To localize the game, the ties between the in-world language and the player’s language must be untied and then retied. To do that for English, the 46 characters must be weeded down to 26, which is easy enough on a surface level, but more difficult if anything in the game uses some of the 20 deleted characters in an interactive way.
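
The remapping problem can be sketched directly. Under the assumption of a one-to-one substitution between kana and invented glyphs (the glyph IDs and the toy alphabet sizes below are hypothetical), the English localizer must re-tie a smaller alphabet to the same script and audit whatever glyphs fall out:

```python
# Toy model: a cipher script keyed to kana must be re-keyed to Latin
# letters, orphaning the leftover glyphs (about 20 in the 46-vs-26 case).
KANA = ["あ", "い", "う", "え", "お", "か", "き"]      # ... up to 46
GLYPHS = [f"glyph_{i:02d}" for i in range(len(KANA))]  # invented script art
JA_MAP = dict(zip(KANA, GLYPHS))                       # one-to-one mapping

LATIN = list("abcde")                                  # ... up to 26
EN_MAP = dict(zip(LATIN, GLYPHS))                      # re-tied, smaller alphabet

orphaned = set(GLYPHS) - set(EN_MAP.values())
# Any puzzle that makes the player *write* an orphaned glyph breaks
# under the remap and needs redesign, not just retranslation.
print(sorted(orphaned))
```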

So, who is taking on this task? I asked one of the Level 5 booth workers and was told it is not being localized. It’s possible he was missing my point and thinking I was asking for an English version on the spot, or he didn’t know, or he couldn’t answer due to legal restrictions, but I’ll take the general ‘no’ for now. After all, what company would want to take on a task that highlights the difficulties and unruly ties between localization and translation? This is not to say I don’t want it to be released in other countries, just that it will be both interesting and problematic when it eventually comes up for localization.

Destabilization of the Translator | Destabilization of the Translation

There are two new trends in translation that I would like to discuss. Both are postmodern and intentionally unstable, but they have opposite instabilities. One trend destabilizes the translator, and the other destabilizes the translation.

The destabilization of the translator has multiple translators, but a single translation. It has its history in the Septuagint, but its present locus is the division of tasks and the post-Fordist, assembly line form of production. Like the Septuagint, where 72 sequestered scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference, however, is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that arguably combines the knowledge of all translators involved. This trend can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bibul, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist through the effort of their user base, which adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It is a very democratic form of knowledge production and authorship that certainly has issues, but for translation it enables new possibilities. While wikis are generally produced in a certain language and rarely translated (as a translation could not keep up with the constant changes), the chunk-by-chunk form of translation has been used in various places.

The Lolcat Bibul translation project is a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” itself is a form of pidgin English in which nonstandard grammar and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can has cheezburger?”[1] The Lolcat Bibul project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth.
LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem. [2]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on your personal outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could also be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bibul.

FLOSS (Free Libre Open Source Software) Manuals and their translations are a slightly more organized version of this distributed trend [3]. FLOSS is theoretically linked to Yochai Benkler’s “peer production,” where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and their translations capitalize on this distribution of personal drives. Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to this open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution; also important, however, is the initial belief that machine translations are currently unusable, which makes such open translations necessary.
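
A minimal sketch of what such a translation memory amounts to: a shared store of previously translated segments with exact and fuzzy reuse (the segments and matching threshold below are invented):

```python
# Translation memory: exact reuse where possible, fuzzy suggestion
# otherwise, always leaving the final decision to the human translator.
import difflib

MEMORY = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "Open the File menu.": "Ouvrez le menu Fichier.",
}

def suggest(segment):
    if segment in MEMORY:  # exact reuse
        return MEMORY[segment]
    close = difflib.get_close_matches(segment, list(MEMORY), n=1, cutoff=0.8)
    return MEMORY[close[0]] if close else None  # fuzzy match, to be edited

print(suggest("Click the Save button."))
print(suggest("Click on the Save button."))  # close enough to reuse
```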

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Any particular phrase, such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?,” was translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into alternate languages. Thus, <LIKE> would have become <好き> (a literal “like”), but was transformed into <いいね!> (“good!”). Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate), with plenty of “arrrs” and “mateys.”
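
The selection mechanism can be sketched as a simple majority vote over submissions. This is my reconstruction of the scheme, not Facebook’s documented pipeline, and the string key and candidate lists are invented:

```python
# Crowd translation by recurrence: the most submitted variant wins.
from collections import Counter

submissions = {
    "like_button": ["好き", "好き", "いいね!", "いいね!", "いいね!"],
}

def elect(key):
    # Majority vote; "native" users could later revise the winner,
    # as with the 好き -> いいね! shift described above.
    return Counter(submissions[key]).most_common(1)[0][0]

print(elect("like_button"))  # いいね!
```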

Wikis, FLOSS, and Facebook are translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook, this functionality and user empowerment are highlighted; for FLOSS, user empowerment through translation and publishing is one focus, but a second focus is the movement away from machine translation; in all cases, but wikis particularly, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator [4].

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up: it is footnotes, marginal writing and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass intends to eventually lead to the whole (whether it gets there or not is a separate matter for Benjamin, Derrida and the like).

While this style of translation is less enacted at present, it is not completely new. Side-by-side pages with notes about choices are one variation, centuries old (Tyndale’s Biblical notations, Midrash, and side-by-side poetry translations); the DVD language menu drawing on multiple subtitle tracks is another; and these lead, finally, to new possibilities for multi-language software translations.

While in the myth the Septuagint leads to the creation of a single text, in reality 72 translators translating a single text would produce 72 different translations. The attempt to stabilize this inherent failure of translation holds that one of those translations is better and is to be used, though it can be replaced if a better translation comes around. The Bible translation is always singular, but it changes. Similarly, the Odyssey is translated quite often, but the translations are always presented alone. They are authoritative. In contrast, Roland Barthes’s comparison of modern works and postmodern texts and Foucault’s discussion of the author function both lead toward this destabilization of the author [5]. This discussion can be linked to translation studies’ debates over the intellectual production of author and translator. The destabilizations of translators and translations build off of both of these postmodern traditions, but the latter trend attempts to avoid weighing in on the issue by simultaneously exhibiting the conflicting iterations.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read two things at once, or, with film, how can a viewer understand two soundtracks at once? Books and films provide multiple examples of how to deal with such an attention issue. With literary works, endnotes are a minimal example of such attention divergence. Endnotes do not immediately compete for the reader’s attention, but the note markers indicate the possibility of voluntary switching. Footnotes are a slightly more aggressive form of attention management, as they tell the reader to switch focus to the bottom of the page, a smaller distance that makes the switch more likely to happen.

For film, subtitles, which layer the filmic text with both the original dialogue and the authorial translation, are a close equivalent to endnotes: they split the viewer’s attention, but do not force it toward a particular place. It is entirely possible to ignore subtitles, regardless of complaints against them (much harder to ignore would be intertitles filling the screen). Finally, the benshi, a simultaneous live translator/explainer, is an early-to-mid 20th century Japanese movie theater tradition that most resembles the more aggressive footnotes, as the benshi’s explanatory voice competes with the film’s soundtrack for the audience’s aural attention [6].

Unlike websites such as Amazon, which maintains language-dedicated pages (.com, .co.jp, .de) and blocks orders from addresses outside of their national coverage, or services such as the Sony PSPgo store, which disallows the purchase of alternate-region software, some sites utilize pull-down language options that change the language while remaining on the same page, or provide multiple linguistic versions for purchase.

With digital games, the localization process has traditionally replaced one language, with its library of accompanying files, with another. However, as computer memory increases, the choice of one language or another becomes less of an issue, and multiple languages are provided with the core software. This gives rise to the language option, where the game can be flipped from one language to another through an options menu. Most games put this choice in the options menu at the title screen, but a few allow the user to switch back and forth during play. The simultaneous visibility of multiple languages, or a language switch button, would be further advancements toward the destabilization of translations.
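
A sketch of the mechanism: once every language ships with the core software, switching is just re-pointing a lookup table, and nothing technical prevents binding it to an in-game toggle (the keys and strings below are hypothetical):

```python
# Runtime language switching over co-resident string tables.
STRINGS = {
    "en": {"greeting": "Welcome, traveler."},
    "ja": {"greeting": "ようこそ、旅人よ。"},
}

class Game:
    def __init__(self, locale="en"):
        self.locale = locale

    def text(self, key):
        return STRINGS[self.locale][key]

    def toggle_language(self):  # could hang off an options menu or a hotkey
        self.locale = "ja" if self.locale == "en" else "en"

g = Game()
print(g.text("greeting"))
g.toggle_language()           # mid-play, not only at the title screen
print(g.text("greeting"))
```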

Notes:

[1] Rocketboom Know Your Meme. <http://knowyourmeme.com/memes/lolcats>; I Can Has Cheezburger. <http://icanhascheezburger.com/>; Hobotopia. <http://apelad.blogspot.com/>.

[2] LOLCat Bible Translation Project. <http://www.lolcatbible.com/index.php?title=Genesis_1>.

[3] FLOSS Manuals. <http://en.flossmanuals.net/>.

[4] This conceptualization relates to Bolter and Grusin’s hypermediacy. Bolter, J. David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press, 1999.

[5] Barthes, Roland. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007; Foucault, Michel. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003.

[6] Nornes, Markus. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

Masochistic Translation

Painful Differance

I recently had a taste of a truly alienating translation: a translation that made me cry from lack of comprehension, and said incomprehension was intentional in the author’s method and theory as well as the translator’s. This text, if you haven’t guessed, is Jacques Derrida’s Of Grammatology, translated by Gayatri Chakravorty Spivak.

I am told that Of Grammatology is forever deferred both in fact and meaning. Nobody gets it enough to fully summarize it, but individual chunks might be worked through, as can terms such as ‘trace,’ ‘sous rature,’ ‘differance,’ et cetera. Writing exists in a particular relationship to language and to speech, and this relationship is the opposite of that believed by the formalists, structuralists and logocentrists. We cannot get to meaning and the signified; we can only slide around in trace relationships between various signifiers in one time, place, language: one moment. What can be made present is only a partial presence, the trace; what is lost, the arche-trace, can be slid back and around, but never regained.

Spivak furthers this theoretical endeavor by sliding around in her translation, by writing a 90-page translator’s preface that forces particular readings of the following 300 pages and challenges the relationship of original and translation through such placement. The preface, which comes after Derrida’s De la grammatologie, is placed before Of Grammatology and thereby becomes first. Derrida’s text is not the signified to her translation’s signifier; rather, there are only signifiers of signifiers, translations of translations, versions of versions. Spivak notes how related all of this is to translation, first in passing implications on lxxvii, then straight out on lxxxv-lxxxvii.

All of this taken as is, reading Of Grammatology is a painful experience of slippery wordplay and neverending deferral of understanding. Reading Spivak’s translation is just that much more painful.

The Derridian (and de Man and Spivak) translational project would lead to very unpleasant translations: Spivak’s case is a prime example. However, she got away with it as she is not writing for entertainment and pleasure. Only for the masochistically inclined is Derrida fun.

Masochism

Speaking of masochism, there are such things as masocore games (a term coming from Anna Anthropy’s blog entry on Auntie Pixelante). Not everybody likes or plays them, but they do exist. Said simply, masocore games are games that revel in mistreating the player.

Giant Bomb notes that masocore is “a postmodern indie game genre in which the designer intentionally frustrates the player. This frustration is typically accomplished by restructuring a preexisting game genre to place it in one of three categories of frustration.”

“Trial and Error” is the necessity of following an exact path and figuring out that path. This is easily seen in platformers that necessitate exact jumps, or adventure games that require an exact path, where any deviation leads to the inability to complete the game (such as an item that you needed to pick up in the opening scenes, without which the game cannot be completed).

“Confusion” is where generic conventions are broken (often resulting in the player having to relearn generic boundaries through Trial and Error). An example of this from Auntie Pixelante is “you jump over the apple, and the apple falls up and kills you. the apple falls up and kills you.” Auntie Pixelante goes on to reject the “merely super-hard” moniker and sides with the belief that masocore games are those that “[play] with the player’s expectations, the conventions of the genre that the player thinks she knows. they’re mindfucks.”

“Play,” Giantbomb’s third category, is the removal of play motivation (end, death, etc) in order to force the player to focus on (uncomfortable) play mechanics.

As Anna Anthropy states in the conclusion of her piece, masocore is visible now because of the intersections of independent gaming and free and easy distribution methods. She writes: “most of these games are simply unmarketable. which is why the masocore game, twenty years later, is starting to come into its own: now there are avenues for freeware games to reach wide audiences. these games have no need to sell themselves to the player, which allows them to be among the most interesting game experiences being crafted right now.”

Key to her statement, in my mind, is how the gaming aesthetic of masochism has been enabled by an early 21st century game industry that has expanded beyond the generic-as-marketable to the niche-as-marketable.

Difficult(ies)

Masocore is certainly a recently dubbed generic name, but it has persistent links to forms from past decades. While the third form of masocore frustration (Play) might be unique, the other two forms can be seen in earlier methods of differentiated difficulty (and in general the lineage can be traced back much further, to such “games” as gladiatorial combat, martial arts, war, et cetera).

Game difficulty exists for multiple reasons, only one of which is enjoyment. (The relationship between difficulty and profit, whereby arcade games necessitated difficulty to garner maximal profit while video/computer games necessitated ease to enable completion and the further purchase of another game, is ignored here.)

Due to the belief that difficulty is good for some reason (Flow, or any other theory), games have had various levels of difficulty and different methods of implementing said difficulty. Some games were simply really, really hard, such as Donkey Kong and Ghosts’n Goblins; some included the use of continues to enable the completion of a game (Teenage Mutant Ninja Turtles, Street Fighter); some offered different difficulty levels (Atari’s difficulty switch; the standardized Easy, Normal, Hard; Doom’s I’m Too Young to Die, Hey, Not Too Rough, Hurt Me Plenty, Ultra-Violence, and Nightmare!; Marathon’s Kindergarten, Easy, Normal, Major Damage, Total Carnage; Halo’s Easy, Normal, Heroic, Legendary; etc.); and some went in the full opposite direction and made it impossible to lose by re-spawning the player at one point or another through some diegetic method (Prey, Bioshock). All of these are based around the idea that there is some benefit in difficulty, but just what that benefit is, and what level of difficulty is good, remains uncertain.

One new variation is the use of achievements to create a masocore element in an otherwise reasonable game. For instance, one of Mega Man 10’s 12 achievements is Mr. Perfect, which requires the player to “Clear the game without getting damaged.” In a Mega Man-style platformer this is nearly impossible, and it is both a new proof of hardcore-ness and an implementation of masocore-ness.

Difficulty changes (as do implementations), but the tendency is to bow down neither to the masocores nor to the casuals. Instead, the game industry has increasingly attempted to provide access to both. Difficulty, even masochistic pleasure in the extremely difficult, is increasingly deemed acceptable. The inclusion of the masochistic Mr. Perfect achievement between Mega Man 9 (2008) and Mega Man 10 (2010), and its correspondence to Anna Anthropy’s post in 2008 and the present 2010, point to this process of incorporation. Translation should learn a lesson from this, especially when localization’s main defense of its problematic translational method is that games need to be fun, to be entertainment. Some people like masocore games; some people like Derridian translations. Let’s start having masochistic translations.

Sources:

Anthropy, Anna. “Masocore Games.” Auntie Pixelante. Posted: April 6, 2008. Accessed: February 14, 2010. <http://www.auntiepixelante.com/?p=11>

Derrida, Jacques. Of Grammatology. Gayatri Chakravorty Spivak trans. Baltimore: Johns Hopkins University Press, 1976.

Mega Man 10 Achievement List. X-Box 360 Achievements. Accessed: February 14, 2010. <http://www.xbox360achievements.org/game/mega-man-10/achievements/>

TheDustin. “Masocore: Mr. Gimmick: The Best NES Platformer You Haven’t Heard Of (and Sadly Haven’t Played).” Play This Thing. Posted: Thursday, January 28, 2010. Accessed: Sunday, February 14, 2010. <http://playthisthing.com/game-taxonomy/masocore>

Various Authors. “Masocore (video game concept).” Giant Bomb. Accessed: February 14, 2010. <http://www.giantbomb.com/masocore/92-1165/>.

The Task of the Translator; The Location of Localization

I’ve been encountering Walter Benjamin’s “Die Aufgabe des Übersetzers” [“The Task of the Translator”] a great deal in my reading lately. So much so that I also went back and (re)read the original. The question, of course, for everybody, or at least as I understand it decades later and after Paul de Man, is whether the focus is on ‘translation’ as the ‘failure’ or the ‘task’ of the translator, both of which are built into the German Aufgabe. This comes down to whether the translator tries to translate the ‘what’ or the ‘why’ of the original, the idea of touching and either deflecting or reforming the ‘vessel,’ et cetera. The voice in my head then asks what the relationship is for localization.

There’s an interesting thing that happens when I read translation work: I don’t feel like I’m barking up a crazy tree. This is nice. However, the other thing that happens is that I wonder exactly how I’m trying to tie things together, which doesn’t exactly work. Too many partial overlaps at once.

Things that are important here are, of course, the failure of the translation process, but also some of the other basics, such as translation being not just ontological and spatial, and not just historical and temporal (which Bermann and Wood rightfully point to in Nation, Language, and the Ethics of Translation), but also, for localization, specifically NOT temporal or spatial.

Or rather, that is of course what is the intent with localization.

Translation is a post-production effect: something is written, and then it is translated. Even if I am going to be difficult, or pomo, and say that repetition, adaptation and the like are also forms of translation, there is still a key difference, and that is the temporal aspect. However, localization specifically abuses that location of the translation. Game translation (localization) is increasingly moved from the post-production phase to the central production point. This follows through with the central claims of games as new media: that they have no original and are variable. This moving (temporal) position of localization also justifies the claim that games are not actually translated, as the text never officially existed in one language or another. And more, in the case of simultaneous releases (and better yet, releases with multiple languages) they are able to claim a full disabling of the temporal element of translation.

(New Media) Translation After Pound

The 20th century turn toward domestication essentially stems from Ezra Pound’s translations, but impurely, through the modern emphasis on the author mixed with the business of selling books.

According to Ronnie Apter in Digging for the Treasure: Translation After Pound, Pound influenced translation theory and practice in three major ways. First was the move from “Victorian pseudo-archaic translation diction” to modern style. Second was the argument for a criticism of the original in some form: not simply the objective transfer (an acknowledged impossibility for the Victorians as well), but a focus on some particular element that thereby “criticizes.” And finally, the creation of a new poem: not just something derivative.

These three were essential breaks with Victorian practice, which focused on three criteria: paraphrase with no additions (subtractions were inevitable, but additions were taboo), the reproduction of the author’s traits (just what those traits were was, however, up for grabs), and the reproduction of the overall effect of the text (whether the “effect” was that of the original on the original’s first audience, or of the original on the modern audience who can read the original text, is unclear). They were also a break with the contemporaneous translation theory professed by Matthew Arnold and F. W. Newman.

However, while Pound was translating against the Victorian grain, we have come full circle to a new norm. The fashion of the times has changed to one that embraces Pound’s basics, but not their depths. If “great translators transcend the fashion of their times [and] minor ones merely manipulate it,” then Pound was a great translator, many minor figures have since manipulated his transcendence, and Pound himself would today be simply one among many in the current fashion. As Lawrence Venuti has argued, the times and the dominant style have changed, and another transcendental shift is called for.

What I want to argue is that this shift is called for by the media itself. The move from literary page translation to multimedia and digital forms leads into new possibilities for and understandings of translation. In an interesting way, however, it is Pound’s logopoeia, his style of meta-translation, that can still lead the way. Whereas Pound focused on the meaning of words to bring into focus both the older era and the present, a type of dialectical juxtaposition, the move toward searchable, digital data in opposition to static, analogue data allows the simultaneous existence of both data sets and a new type of logopoeia. This new form of meta-translation involves the layering of translational tracks. Instead of juxtaposition, there is the coexistence of both tracks/languages/cultures.

This is similar to the possibilities evoked by subtitles and abuse (Nornes), but it considers the issue in relation to digital, new media and not simply film considered in an analogue manner. Instead of the ability simply to choose one track/language or another, it gives all of them, or switches between languages. It renders the possibility of putting three real languages into a game such as Command and Conquer: Red Alert 3 (English, Russian and Japanese, following the game’s fictive world), but, more meaningfully (and less deliberately/offensively stereotypically), of switching them on the fly so that one game has the US speaking English, the Russians Russian and the Japanese Japanese, while another switches so that the US speaks Japanese, the Russians English and the Japanese Russian. The media uses its ability to draw from swappable data files not to simply replace one with another, thereby changing one representation into another, but to abuse the user with a constantly active experience that questions the submerged normativity of language that exists with translated entertainment products (games in particular) at present.
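
To make the mechanics of this concrete, here is a minimal sketch, in Python, of how such on-the-fly reassignment of voice tracks might work. Everything here is hypothetical (the file names, the faction list, the data layout); it illustrates the principle of swappable language data, not any actual game’s implementation.

```python
import itertools
import random

# Hypothetical voice-track archive: each faction ships with a full set of
# recorded lines in every language, stored as swappable data files.
VOICE_TRACKS = {
    "English": "voices_en.dat",
    "Russian": "voices_ru.dat",
    "Japanese": "voices_ja.dat",
}

FACTIONS = ["US", "Soviets", "Empire"]

def assign_tracks(languages):
    """Map each faction to one language's voice files."""
    return {faction: VOICE_TRACKS[lang]
            for faction, lang in zip(FACTIONS, languages)}

# The 'normative' assignment: each faction speaks 'its own' language.
normative = assign_tracks(["English", "Russian", "Japanese"])

# An 'abusive' session reshuffles who speaks what for every match, so the
# player can never settle into a naturalized language/nation mapping.
def abusive_session(matches=3, seed=None):
    rng = random.Random(seed)
    permutations = list(itertools.permutations(VOICE_TRACKS))
    for _ in range(matches):
        yield assign_tracks(rng.choice(permutations))

for i, loadout in enumerate(abusive_session(seed=1)):
    print(f"match {i}: {loadout}")
```

The point of the sketch is that nothing in the data itself privileges one assignment: the ‘normative’ mapping is just one permutation among six.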

Playing In/With the Archive

Archives are everywhere. This much is obvious. Essays and books with ‘archive’ somewhere in the theme or title have increased in the past few decades, and recently I have seen a slew of archive-related conference CFPs.

Of interest to me is the archive’s intersection with gaming, history and memory. There are three points of intersection that I see: acknowledgment and creation of archives, manipulation of archives, and playing in archives. These three forms of archival play correspond to three types of games and gaming. One, massive ROMization (MAME, SNES9X, et cetera) and the recent (official) migration of old games to WiiWare, Xbox Live Arcade and the PlayStation Network. Two, sequels and remakes of older games and the creation of a particular genealogy of titles. Three, demakes and unofficial titles that problematize the greater archive.

The first is simply the building of an archive of obsolete platforms and games. Whereas it used to be possible to pull out one’s old NES and blow into the cartridge, more people are finding that the technology simply doesn’t work anymore. This goes doubly for games that were/are hard to find. As the technology has become increasingly unavailable, it has become obvious to more and more people that an archive/library is necessary for games. However, because the platforms themselves become similarly unavailable, it has also been necessary to create a means of playing the games.

Since the late 1990s there have been semi-legal efforts to play ROMs on personal computers. The game’s cartridge information is ripped to a computer, and emulators are programmed to play the games. While emulators have been made for some of the most advanced systems, such emulation has generally been problematic (buggy, slow, unable to play many games); the emulation of older systems like arcade boards, the Atari, NES, SNES, Sega et cetera has been highly successful. While this method has resulted in a massive archiving of games (even if the platforms and the materiality of play have disappeared: television, cartridge, console, controller), one thing that is undeniable is that this ROMization is less than fully legal, and companies are losing what they see as profit.
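
For readers unfamiliar with how this works: the core of any emulator is a loop that fetches, decodes and executes the ripped ROM’s instructions on the host machine. The sketch below, in Python, uses an invented toy instruction set (no real console works this way in detail) just to show the shape of the technique.

```python
# A toy fetch-decode-execute loop: the heart of any emulator. The
# instruction set here is invented for illustration; real consoles
# (the NES's 6502-family CPU, for instance) are far more involved.
class ToyCPU:
    def __init__(self, rom: bytes):
        self.rom = rom        # the ripped cartridge image
        self.pc = 0           # program counter
        self.acc = 0          # a single accumulator register
        self.halted = False

    def step(self):
        opcode = self.rom[self.pc]       # fetch
        operand = self.rom[self.pc + 1]
        self.pc += 2
        if opcode == 0x01:               # LOAD a literal into the accumulator
            self.acc = operand
        elif opcode == 0x02:             # ADD a literal (8-bit wraparound)
            self.acc = (self.acc + operand) & 0xFF
        elif opcode == 0xFF:             # HALT
            self.halted = True

rom = bytes([0x01, 0x2A, 0x02, 0x08, 0xFF, 0x00])  # load 42, add 8, halt
cpu = ToyCPU(rom)
while not cpu.halted:
    cpu.step()
print(cpu.acc)  # prints 50
```

A real NES emulator does the same thing at a much larger scale, reproducing the console’s processor, graphics and sound hardware in software, which is why the cartridge and console can disappear while the game remains playable.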

The second generation of the archive has recently been implemented, with the big three game companies (Nintendo, Sony and Microsoft) creating their respective means to play previous generations’ games. Nintendo releases downloadable versions of previous games, such as Mega Man, that one can play on the Nintendo Wii. Sony releases past games such as Spyro the Dragon through the PlayStation Network (it should also be noted that the PlayStation, as a DVD/CD-based console, is able to play older generations’ games: the PS3 can play PS2 and PS1 games; the PS2 can play PS1 games). Finally, Microsoft’s Xbox Live Arcade has more broadly made Sega and other companies’ games available. While all three of these forms are more legal than the ROM movement in that they prevent ‘piracy,’ they are far from successful as a simple archiving project: they pick and choose (and buy the rights for) what they archive in the three libraries.

However, both of these projects are, at base, simply the creation of an archive for the preservation and further use of games. This is what I refer to as the building and playing of the archive.

The second way in which the archive and gaming interact is something that has been going on since Tennis for Two, but it has begun to take a slightly more aggressive turn. While gaming has always been a form of remediation, and sequels have been around (at least) since Pac-Man turned into Ms. Pac-Man, there have more recently been remakes cropping up that are slightly different. Previously I discussed the remake’s interaction with a form of restorative nostalgia. The remake, as restorative, acts to whitewash the past and justify a particular reading of history: it is a form of playing with the archive. This is not new or unusual: all history is manipulative and restorative. However, it is interesting that this implementation occurs simultaneously with the rise of the second generation of company archiving. It makes one think of just what is happening when FPS and 3rd person action games are being remade: what happens to the archive when its contents are embellished and highlighted?

Third is the demake and what I want to call playing in the archive. It is also a type of playing in the past. Demakes work with reflective nostalgia and focus on the patina of the old. With games this crucially involves ideas like the retro graphics seen with the initial MAME movement, but it also includes attempts to translate/adapt modern games to older platforms. That most of these demakes can only be played through emulation is slightly problematic, but one might also point out the longer history of cartridge manipulation (Cory Arcangel’s Super Mario Clouds) and its progress into the present, with efforts being made to put demakes onto cartridges (D+Pad Hero). Here people are deliberately moving into the archive, taking present things and forcing them into the older sections of the archive. Unhappy with the look and smell of a new book, one gives it a fake patina and odor in order for it to pass, and to please, like an old book.

These three types of gamic archiving, then, are taking place. We know that the archive is a deliberate (if uncontrollable) cementation of knowledge and an indication of a certain mode of knowledge production. The question is really what sort of knowledge production is happening with these three types of gaming archives. What is the difference between the first and second generations of archiving? What sort of knowledge is opened up and closed off by remaking and demaking?

Remakes and Demakes: Logics of Repetition in Gaming

Note: The following is a work in progress on remakes and demakes, repetition, and remediation. I post now seeking comments and responses in order to help in the rewrite process.

Abstract: Remakes are a form of repetition well known and increasingly engaged with in cinema and literary studies. Recently remakes have spread to gaming, and alongside them has come a string of games with the opposing tendency. These games have been called demakes. This essay explores the oppositional logics of repetition at play in gaming remakes and demakes in terms of technological, representational and historical modes of knowledge production. Whereas remakes follow the dominant cultural trends and help whitewash the past, demakes are oppositional, playing with technology, the past and nostalgia in a different, if not better, way.

Written: March 21, 2009

Both remaking and demaking have a particular relationship to repetition and time. While their obvious relationship is with the present and the past, they also have a stake in the relationship between present and future. Remaking renews the past; demaking returns to the past. Both are crucially involved with concepts of history, memory and nostalgia. However, these aspects of the remake and the demake tend to be elided in the fetishization of realistic representation, technology and economics. In the following pages I will map out the techno-economic, representational/simulational and historico-nostalgic logics involved in the current repetition of gaming texts.

One relatively recent gamic remake is Tomb Raider: Anniversary. Produced by Eidos, the same company that released Tomb Raider in 1996, the remake marks the ten-year anniversary of the original game, which began both the genre and the property that have spread across multiple media (game, film, novelization, et cetera) over the past decade.

Tomb Raider is a 3rd person action adventure game. The player controls the now famous character Lara Croft as she explores various environments (Peru, Greece, Egypt and Atlantis) searching for the keys to unlock and eventually explore the lost city of Atlantis. The Anniversary remake uses the same 3rd person genre, narrative and particular locales as the original. The graphics are updated from clunky, early polygonal representation to high-count polygon graphics that produce a more “naturalistic” or “realistic” representation. [1]  The remake could be considered a “faithful” translation of the original as it reproduces the flow and specific scenes/levels of the original, but it works to erase the dated aspect: the graphics. [2]  By this logic the important, reproduced aspects are the play style and the property itself.

The Anniversary remake, as a highly commercial endeavor, reproduces the “good” elements of the original for simple economic reasons. It brings back what the audience has paid for time and again, with extras. The generic mode of 3rd person 3D action/adventure, itself an update of 2D platformers such as Super Mario Bros., was popularized in the original Tomb Raider. It has been revived as a genre in each Tomb Raider sequel, and the Anniversary remake flogs the tired generic horse enough to make a (significant) profit. The economic logic of “better-faster-more” is of central importance to the remake. Perhaps the only remakes that do not follow the threefold increase are those that attempt to take the game as it is visually and transfer it from a previous, now increasingly difficult to obtain, or simply obsolete, platform to a modern one. Examples of this are the Square Enix Final Fantasy games that are being remade from the 1990s Super Nintendo hardware for the 2000s Game Boy Advance and Nintendo DS hardware. In these remakes additional elements (“more”) are added, but the other two aspects remain as they were in the original (faithfully reproduced graphics and speed).

The remake is thus involved in a process of renewal where “old” is turned into “new” in a strictly linear fashion that posits less < more, slow < fast, abstract < naturalistic, and so forth. This techno-economic logic of “better-faster-more” dominates gaming on the top commercial layer, but it is directly opposed within the discourse surrounding the demake.

Demaking is a recent phenomenon in which a game is translated in the direction opposite to standard remaking. The term “de-make” was coined by Phil Fish on the TIGForums in August of 2007. In response to recent remakes, Fish writes:

what about the opposite? relatively new 3d games being remade for lesser platforms. like that guy who ported ocarina of time to SNES, or turning doom into a cellphone RPG… i fint [sic] it highly interesting to see what happens when that happens. see how far you can push a game backwards, and see what gameplay elements remain intact. what got cut, what got added? does it play better? can anybody think of other downgrades/de-makes? can anybody think of a better name for those games? (Fish 2007).

Instead of taking an old game and making it new, the demake takes a new game and makes it old, either through genre or through graphics. Fish uses the term “downgrade” in opposition to the normative “upgrade” assumed with the remake’s advancement along the previously mentioned techno-economic logic of “better-faster-more,” and then asks if anybody can think of another name: a year and a half later the hyphen has been removed, and numerous other demakes have been made, both independently by interested parties and within The Independent Gaming Source’s Bootleg Demakes Competition.

The demake that I will primarily be considering here, D+Pad Hero, was created by Kent Hansen and Andreas Pederson and is a demake of Guitar Hero, a popular music simulation game. Guitar Hero is one of many music simulation games (others include Rock Band, SingStar, GuitarFreaks and DrumMania) where the user “plays” an instrument (drum set, guitar) or sings in rhythm to music and beats displayed on the screen. With Guitar Hero the guitar is unplayable as a guitar and simply consists of five input buttons and a “strum” bar that the player uses in order to “play” the song. The player must press the buttons and use the strum bar in accordance with the cues on the screen, which are in tune/rhythm with the song being played, and a score is given at the end of the song based on accuracy. This logic was present in older games such as BeatMania (1997), Pop’n Music (1998) and Dance Dance Revolution (1998), but it has increasingly been combined with simulated musical production and commercial music. While Pop’n Music forced the player to press five large, colored buttons arranged on an arcade frame in accordance with the cues of a song, the later games put the buttons on a faux instrument. The original Guitar Hero and the other music simulation games have had dozens of expansions and sequels: Guitar Hero is in its third release, and the other games are at similar sequel numbers. The sequels follow a techno-economic logic similar to the remake’s in that they have more realistic controllers (‘guitars’ where you strum while holding the proper buttons down) and more popular music (the Beatles, Metallica, et cetera); the expansions simply have more songs.
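
The play logic these games share is simple enough to sketch. Below is a minimal, hypothetical version in Python (the chart, timing window and button names are invented for illustration; actual games add combo multipliers, grading tiers and so on): compare each player input against the song’s cue chart and score by timing accuracy.

```python
# A minimal sketch of the rhythm-game scoring loop: each cue in the song's
# chart must be matched by a player input of the right button within a
# timing window. All numbers and names here are invented for illustration.
HIT_WINDOW = 0.10  # seconds of tolerance around each cue

# cue chart: (time in seconds, button) pairs, in song order
chart = [(1.00, "A"), (1.50, "B"), (2.00, "A"), (2.50, "UP")]

# what the player actually pressed: (time, button) pairs
inputs = [(1.02, "A"), (1.61, "B"), (2.01, "A"), (2.49, "DOWN")]

def score(chart, inputs, window=HIT_WINDOW):
    hits = 0
    remaining = list(inputs)
    for cue_time, cue_button in chart:
        for i, (t, b) in enumerate(remaining):
            if b == cue_button and abs(t - cue_time) <= window:
                hits += 1
                del remaining[i]   # each input can match only one cue
                break
    return hits / len(chart)      # accuracy as a fraction of cues hit

print(f"accuracy: {score(chart, inputs):.0%}")  # 50% here: two hits of four
```

Seen at this level, swapping a faux guitar for an NES controller changes nothing in the scoring logic; what changes, as I discuss below, is the representational experience around it.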

D+Pad Hero takes the basic rhythm game formula and reproduces it with 8bit graphics and 8bit music for the Nintendo Entertainment System, a system first released in 1983. [3] The player uses the archetypal NES controller instead of the faux guitar, inputting directions on the D-pad and the A and B buttons in time with the music. The concepts of accuracy and score remain the same. Beyond simply reproducing the game visually on the NES system, the songs themselves have been rendered compatible with the hardware: the authors took various popular songs and converted/translated them into 8bit chip music. [4]  The techno-economic logic of the sequel and the remake is reversed on both sides: the game has old graphics, old music and old hardware, which reverses the technological logic; and the game is unsellable due to copyright, existing as an economic product solely through donations. The demake is thus a work of love, fun or fan/nerd culture.

The second logic is that of representation/simulation. The idea of naturalistic graphics or perceptual realism is related to the concept of “better-faster-more” within the techno-economic logic, but it extends beyond the simple technological fetish for photorealistic graphics. Both the remake and the demake are crucially related to re-presentation and the link (or lack thereof) between production and reproduction, image and meaning, and/or original and derivative. True meaning, lack of meaning and drawn meaning are in conflict within the original, the remake and the demake. One line of thinking extends from the Marxist desire to raise the veil of ideology and thereby reveal truth, through Althusser and the inability to ever escape ideology, to Baudrillard, the hyperreal, simulation and the inability to ever return to any ultimate truth. True meaning ends within the concept of simulation, but this path can be linked with a separate one that goes through semiotics and Barthes’ differentiation of the work and the text, Foucault’s destabilization of the author, and Manovich’s concept of transcoding from a base object. These two lines then culminate in Jenkins and the postmodern celebration of fan culture and the user’s ability to make his or her own meaning. Meaningless pluralism is reached through this path. [5]

A third logic at play in the remake is the historico-nostalgic. The past, as unusable due to technical incompatibilities (systemic differences in coding and hardware), is bracketed off and, depending on your outlook, either erased or put into the past. A few choice bits are taken out of this graveyard of the past and become “history.” While Doom is remade time and again, Pathways into Darkness (a contemporaneous 1st person shooter) is written out of the revived history: it remains in the past, while Doom is remade as Doom 3 and taken out of the past to be redeployed in the present as historical.

This third logic within the remake is thus one of history and memory. However, the nostalgia differs slightly between the remake and the demake. Key to understanding the difference between the two forms of repetition is Svetlana Boym’s conceptualization of restorative and reflective nostalgic tendencies. While restorative nostalgia attempts to fill in the holes of the past to produce a utopian present, reflective nostalgia dwells in the signs of the past itself. This distinction is crucial to finding some sort of useful meaning beyond the reductive celebration of postmodern repetition and pluralism.

Techno-Economic Logic

The initial logic within the remake is one of technology and economics. The remake is made in a particular way because it sells, and the particular way in which it is made reproduces the dominant trend of technological advancement. [6] Since the 1950s computers have grown exponentially more powerful. The course has generally followed Moore’s law, which posits that the number of transistors on a chip (and, by loose extension, the speed of a processor) doubles roughly every two years. While the technology has historically followed this logic, whether this is a natural development or a self-fulfilling prophecy driven by the industry’s desire to maintain the trend is unknowable. Similarly problematic is the chicken-and-egg effect of hardware being forced into obsolescence by Moore’s law and the public’s desire for faster computers. The result, however, is that computers either needed to, or simply could, run more complex applications.
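
To put a rough number on the doubling claim (a back-of-the-envelope heuristic, not Moore’s exact formulation): if processing power P doubles every two years, then after t years P(t) = P(0) × 2^(t/2). The decade separating Tomb Raider (1996) from Anniversary (2007) thus spans five doublings, 2^(10/2) = 32, or roughly thirty-two times the processing power.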

The general increase in computer speed needs to be understood in terms of the computer’s ability to process minute tasks faster, which translates equally into producing more tasks in the same amount of time, which can be further translated into the simple idea of more complex tasks and possibilities. The computer thus went from being able to calculate very simple algorithms to running through detailed, complex ones. One way to understand this logic is through gaming capabilities. Within the initial era of gaming there were text adventures like Zork, where input was limited and graphics were non-existent, and games like Pong, which had highly pixelated graphics and simple logic. There were limiting factors on both the storage and retrieval sides of computer applications: limited memory to store data and limited processor power to retrieve and use the information. As both increased, programs could become more complex, and it is this complexity that can be translated into more detailed graphics and games. A rather jagged progression of gaming consoles is Nintendo (1983) -> Super Nintendo (1990) -> PlayStation (1994) -> PlayStation 2 (2000) -> PlayStation 3 (2006); such a progression can be understood slightly better knowing that each step has a more powerful processor (8bit, 16bit, 32bit, et cetera) and increased storage (ROM cartridges, memory cards, CDs, DVDs and Blu-ray disks). [7] The 8bit graphics of the Nintendo were pixel dominated and had limited color and music. The Super Nintendo allowed far more detailed graphics due to the increase in processor speed, and the music and sound changed from a very limited supply of beeps and boops to a vast supply of beeps and boops. With the PlayStation, graphics progressed beyond combinations of pixels into polygons and the production of 3D representations instead of flat, pixelated scenarios. The PlayStation 2 and 3 utilized CD-quality sound due to the move to DVD and Blu-ray disks with greatly expanded storage, and increased the number of polygons involved in any graphical representation, so that a tube-legged, square-shoed, triangle-breasted, ovoid-headed polygonal Lara Croft eventually turned into the far more “naturalistic,” if not realistic, character of the Anniversary remake. [8] The graphics of the Anniversary remake are far from photo-realistic, but they are certainly an upgrade brought about by a decade of technological development.

The increase in calculation power has brought with it the possibility of increasingly realistic representation. Whether by chance or by design, there has been a parallel development between realistic representation and processor speed: games that increased their naturalistic representation as processing power increased have also sold better. This logic is best seen in the dominance of first person shooter (FPS) games such as Doom as compared to visual adventure games such as Myst (Hutchison 2008). One aspect of the techno-economic logic within the remake follows this assumed parallel between realisticness and economic well-being: making a remake with better graphics, the assumption goes, will make it sell even better than the original game.

A second aspect of the techno-economic logic within the remake involves the price of producing a game. Remaking something that already exists invariably costs less than making something new, as long as all other aspects of the production remain the same. This is the same reasoning behind Hollywood’s remaking industry: it is cheaper to remake an old movie than it is to commission a new script, figure out how to enact that script, and then hope in the end that the ideas behind the new script were not poor in the first place (Forrest and Koos 2002). The remake attempts to take out that uncertainty by simply remaking a well-worn, sure-shot idea/script. While it is often problematic to map a concept developed for one medium onto another, the logic should hold here. The costs of remaking an older game are lower than those of making a new game from scratch as long as at least some aspects of the storyboards, narrative, character ideas or even the code itself are reused. Only certain (popular) games are chosen for remaking, and both the following of technological evolution and the low(er) production cost ensure that they will be a better economic success than otherwise. [9]

While the remake follows the linear techno-economic logic, the demake exhibits an opposing logic that breaks with both the technological and the economic expectations. Unlike the parallel logic of upgrading computing power and realisticness, the demake’s “downgrade” simulates a previous level of computing power. This forces the creator of the demake to creatively reproduce tropes of the present in alternate ways; as Fish writes, coding a demake involves “see[ing] how far you can push a game backwards, and see[ing] what gameplay elements remain intact.” In the case of D+Pad Hero, the demake fundamentally questions the benefits of the increased graphics, as every element of the actual game is reproduced within the demake on a different level of perceptual representation. Gang Garrison 2, a demake of the popular Team Fortress 2, follows a similar logic by taking away the advanced graphics but maintaining the cooperative team play. The situation differs somewhat when the demake actually changes the genre. An example of the demake forcing a shift due to a lack of processing speed is Portal, an FPS game, which was demade first as an Internet browser game and then as an Atari 2600 game. The generic formula both demakes enacted was that of a 2D puzzle game, which, unlike the first person shooter genre, is reproducible in the current generation of web browsers and on a thirty-year-old console. This alteration questions the currently naturalized economic dominance of the FPS game.

The second aspect of the techno-economic logic is also problematized within the demake, as all demakes are fan-produced programs that are not designed to be (and in fact cannot be) sold. As the Independent Gaming Source proudly proclaims, the competition is one of bootleg demakes. Thus, the increase in monetary gain that might come from repetition followed by sale is stymied because the sale cannot happen. Because the demake breaks with the techno-economic logic, there must be some other logic that drives people to produce such programs. One answer is the old hacker love of taking something apart, figuring it out and putting it together again differently/better (Galloway 2004). A second answer is that such subcultural proclamations do not prevent cooptation: by riding the popularity of a current game, demake programmers might obtain enough popular support to get a break into the official industry. Bootleg subcultural pride often ends with selling out. A third reasoning, the logic that I believe the demake follows (and something I will return to later), is that of memory, nostalgia and pleasure. For now I will expound on the logic of representation, which connects most with the techno-economic logic of the remake, even if it does not quite meld well with the demake.

Representation/Simulational Logic

The techno-economic logic that flows in a single direction in the remake and is disrupted into unlinked stops and starts in the demake is paralleled by the second logic of representation and simulation. Representation is to re-present something from a different time/space, to bring something from a previous time/space into the here and now.

Roland Barthes writes that “the photograph profess[es] to be a mechanical analogue of reality, its first-order message in some sort completely fills its substance and leaves no place for the development of a second order message” (Barthes 1977, 18). As representation it claims perfect correspondence, but Barthes points to the different orders of meaning within the image that necessarily block any type of objective analogousness. The image has three levels of meaning: the linguistic message that relays a particular meaning, the non-coded iconic, denoted message that attempts to claim correspondence and objective innocence, and the many coded iconic, connotative meanings that disrupt any possibility of perfect representation (Barthes 1977). Thus, representation claims to simply re-present what was with a clear, singular meaning, but it in fact carries numerous meanings and never brings back the entirety of what was. Re-presentation is never one-to-one repetition, even though it claims to be.

D. N. Rodowick notes that representation is often “defined as spatial correspondence” (Rodowick 2007, 102), but I would add that the concept of temporal correspondence (present, presence) is just as important, if slightly more obviously impossible to achieve. Rodowick himself protests the image’s (analog and digital) link to both representation and perceptual realism, claiming that photography does not provide spatial semblance but in fact corresponds to “our perceptual and cognitive norms for apprehending a represented space” (Rodowick 2007, 103). Thus, that which is re-presented is not a physical reality, but a mental and psychological one obtained through perception (Rodowick 2007, 105). Obviously, this argument holds that representation is not reality, but that does not answer what it is, nor does it answer why it is both produced and consumed. One way of getting at the questions of ‘what’ and ‘why’ is primarily through Marxist analysis, and another is through psychoanalysis, but both include a dosage of semiotics.

Following Marx’s conceptualization of the proletariat’s existence within the capitalist mode of production as false consciousness, the Frankfurt school theorists extend from the economic base to also include superstructural false consciousness. Horkheimer and Adorno write, “Capitalist production so confines [the workers and employees, the farmers and the lower class], body and soul, that they fall helpless victims to what is offered them… the deceived masses are today captivated by the myth of success even more than the successful are” (Horkheimer and Adorno 1972, 133-4). The captivating myth in question is the culture industry’s representation of everyday life.

The whole world is made to pass through the filter of the culture industry. The old experience of the movie-goer, who sees the world outside as an extension of the film he has just left (because the latter is intent upon reproducing the world of everyday perceptions), is now the producer’s guideline. The more intensely and flawlessly his techniques duplicate empirical objects, the easier it is today for the illusion to prevail that the outside world is the straightforward continuation of that presented on the screen. (Horkheimer and Adorno 1972, 126)

The process they describe is one where the culture industries create a representational system in which the world of leisure and entertainment is inseparable from the real world, which results in the molding of people into unquestioning consumer citizens who believe the represented world is just out of their grasp, but still obtainable. For Horkheimer and Adorno the culture industry and the progression of representational technologies lead to increased mass deception, which it is the duty of critical theory to oppose and of Marxist theorists to denaturalize. The intent is, as with Marx, to raise the veil and thereby enable the teleological dialectic of progress to lead toward some (better) existence. While (justifiably) doom and gloom, the Frankfurt school hopes to open people’s eyes to the ideological brainwashing of the culture industry’s representational system.

Henri Lefebvre follows a similar Marxist methodology in his critiques of everyday life. Lefebvre argues that the abstract capitalist conceptualization has worked on and produced the lived, concrete space of life and experience. This is a similar Marxist idea of production, even if he has moved beyond the early Marxist declarations of false consciousness and mass deception. We are not being tricked, but we are living in a constructed world/consciousness that he believes needs to be protested. Thus in his multi-volume Critique of Everyday Life he proposes that everyday life is fleeting and sought after: utopian (Lefebvre 1991). Unlike de Certeau’s practical tactics of dealing with everyday life as it is, Lefebvre is unsatisfied with Situationist practicality and wants to point toward the utopian variation of everyday life, the variation that is just out of reach. Thus in his late work he proposes a new science of studying the rhythms of life. By looking at the rhythms we can see the disjunctions and recognize both the (Capitalist) system and the parts that fall outside the system (Lefebvre 2004). For Lefebvre representation is not the real, but there remains the possibility of an out, a denaturalization of the constructed space of Capitalism. Althusser’s work on ideology is one of the first steps in the removal of that out (even if it leaves the possibility of understanding).

Writing at approximately the same time as, and intertwined with, Lefebvre, Louis Althusser treats ideology as an always-already constituted element in which there ceases to be access to some untouched origin: the veil might be raised, but nothing but another veil will be seen; you might be outside of an ideology, but never outside ideology (Althusser 1986). Althusser’s work seeks to answer the question of just why the proletariat revolution never happened by drawing out Gramscian notions of hegemony and formulating a dual imposition of Repressive State Apparatuses and Ideological State Apparatuses. While ultimately side-stepping the ire of the orthodox Marxists by giving more importance to RSAs and the economic base in the last instance, Althusser’s analysis in fact identifies the superstructural process of interpellation through which the person is made one with the society so that he or she does not in fact want to rebel. He argues that turning around to a police officer’s hail shows a person’s interpellation within an ideological system; another example would be identifying with (or seeking to identify with) an advertisement. Even if one rejects the hailing of one particular ideology one cannot escape: such a rejection indicates a separate subjective ideology, not an existence outside ideology, and even then one can still intersubjectively imagine a subject that would be interpellated and thereby remain within the system. The second part of Althusser’s theory holds that because one’s identity is formed within an all-inclusive ideology one can never get back to some untouched, uninfluenced origin. We are always-already within ideology and we are always-already constituted as particular subjects. Because of this formulation of the always-already there is in fact no false consciousness from which we can escape: it is all we have and all we can ever have.

Althusser’s crucial break from the hopes of denaturalization was enabled in part by Jacques Lacan’s psychoanalytic work on reality, language and the three orders: imaginary, symbolic and real. The real is impossible to interact with/see/witness as it resides outside of language; any attempts to get back to a real (through representation) are necessarily fractured through the very structure of language. Language then is part of the symbolic order as it structures and stands between the real and the imaginary (experienced) world. Finally, Lacan’s third order, the imaginary, is the world that we inhabit, our subjective experiences. Althusser’s always-already corresponds to Lacan’s imaginary: it is the represented world. Thus, the problem posed by Althusser is not of getting back to the origin, but understanding and problematizing the (grammatical and material) means and conditions of production of desire, the imaginary, ideological world. He has taken away the Marxist out (the veil), but he has left the possibility of understanding the world.

The connection between Lacan, Lefebvre and Althusser culminates in Jean Baudrillard who, in his later work, radically broke with his Lefebvre-informed Marxist background and the possibility of an out by claiming, like Althusser and Lacan, that there is no recourse to the real. Baudrillard claims:

abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory – precession of simulacra – that engenders the territory. (Baudrillard 1994, 1).

Taking a step beyond Lefebvre, Baudrillard takes away the possibility of return, of an out from a life in a produced world. The territory does not survive the map because the hyperreal/simulation “threatens the difference between ‘true’ and ‘false,’ ‘real’ and ‘imaginary’” (Baudrillard 1994, 3). Baudrillard identifies a very particular moment when wars were being enacted in front of cameras, re-presented to the world, and reacted to as ‘real’ events: his argument thus states that there is no difference between real and fake events. [10] The representation “has no relation to any reality whatsoever: it is its own pure simulacrum” (Baudrillard 1994, 6), which leads to deterrence and to simply dealing with the hyperreal as the only reality around. Baudrillard takes Lacan’s imaginary order and runs with it as the only reality to which we have access. While Baudrillard was far from celebratory of the hyperreal world, a few years later Žižek tells us to love our symptom. We have no recourse to the real to denaturalize, so we might as well do the best we can. Unfortunately, this logic combines in an uncomfortable way with the destabilization of meaning within postmodern semiotic theory.

Linked in some ways to his work on meaning in photography, Barthes’ differentiation between a work and a text is important as it locates the point of meaning-making and what that means for the text itself (Barthes 2007). The work is singular, has a proper meaning that “closes upon a signified,” has “filiations” to field and author, and is ultimately an “object of consumption.” In contrast, the text “is experienced only as an activity in production,” resists classification as a “paradoxical” thing, is “radically symbolic,” plural, and understood within a network not linked to a father/author. Michel Foucault further questions the text’s link to an authorial function and claims that it is imperative to reverse the author function in order to change the discourse of research from subject-based to discourse-based knowledge (Foucault 2003, 390-1). Opposed to the modern work, the text is ultimately a postmodern, intertextual production. Both of these claims are important, but they become problematic when eventually linked up to the remake.

Lev Manovich does not explicitly bring up the concepts of work and text in his discussion of new media, but his five aspects of new media resemble Barthes’ tropes of the text. Manovich claims that there are five relevant principles within new media: numerical representation (digitization), modularity (workable chunks of data), automation (high and low forms of computer automation), variability (which is linked to modularity and post-Fordism, as well as to translation through the concept of the “base object”), and transcoding/programmability. The principles obviously move from old to new media in the same way that Barthes moves from work to text, but the ideas of variability and transcoding hold further interest. Manovich’s concept of variability implies a sort of base object that can be altered into varied new forms. However, unlike translation, where there is an original and a translation, a first and a second, the singular base object exists as an incomplete entity and therefore does not actually exist in a power relationship to the varied finished objects. Similarly, “to ‘transcode’ something is to translate it into another format” (Manovich 2001, 47). Manovich’s language of new media renders horizontal the last shred of authorial/original meaning and continues Barthes’ and Foucault’s intertextualization of the text, so that by the time we get to the original, the remake and the demake they are all simply transcodings of some other base object of simulation.
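
A toy sketch may make variability concrete. In the Python below, all names are invented for illustration: a single incomplete ‘base object’ is rendered into several finished variants, and nothing in the data marks any one rendering (original, remake, demake or localization) as prior to the others.

```python
# A toy version of Manovich's 'base object' and variability: one incomplete
# base object, many finished variants, none of them the 'original.'
base_object = {
    "level_layout": "tomb_level_1",
    "dialogue_keys": ["intro", "puzzle_hint", "ending"],
}

def transcode(base, language, graphics):
    """Render the base object into one finished, playable variant."""
    return {
        "level_layout": base["level_layout"],
        "dialogue": [f"{key}.{language}" for key in base["dialogue_keys"]],
        "graphics": graphics,
    }

# The 'original,' the remake, the demake and a localization are all
# equally transcodings of the same base object:
variants = {
    "1996 release": transcode(base_object, "en", "low_poly"),
    "remake": transcode(base_object, "en", "high_poly"),
    "demake": transcode(base_object, "en", "8bit"),
    "localization": transcode(base_object, "ja", "high_poly"),
}

for name, variant in variants.items():
    print(name, variant["graphics"], variant["dialogue"][0])
```

This is the horizontality described above: at the level of the data, every release is just another transcoding.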

It is at this point that I can relate the representational logic to the remake and the demake. The remake follows the simulational logic that stems from Baudrillard’s discussion of hypersimulation (Baudrillard 1994, 19-20). The remake seeks greater perceptual realism so that one cannot tell the difference between a game and life, and so that the difference in fact doesn’t matter. The simulated bank robbery becomes/is a real bank robbery; the person pushing the button to drop the bomb is the same whether he or she knows it or not. In contrast, the demake D+Pad Hero problematizes the simulational logic by moving from the simulated experience of playing a (fake) guitar to simply inputting the buttons on a controller. There is no difference at the coded level of the game itself, but the personal experience is completely different. The progression of music simulation games has increasingly tried to reproduce the experience of playing an instrument. While the movement from buttons to a fake guitar is still far from “real,” the movement from buttons to fake drums is slightly less problematic, as the action of drumming is present within the fake drumming. In contrast, D+Pad Hero removes the instrument and brings back the four-direction game pad from deep in the gamer’s memory. Similarly, the return to 8bit music from the era of CD-quality reproduction in the original Guitar Hero disrupts the standard representational experience by forcing the player to recognize that he or she is gaming and not actually playing music with an instrument.

If we follow Baudrillard’s simulation and the lack of recourse to any real and any meaning, we are abandoned in our attempt to differentiate between an original, a remake and a demake. As all three are transcodings there is no real difference at the level of simulation: game pad or instrument, photo-realism or 8bit, it is all hyperreal, postmodern pastiche and pluralism all the way down. This is the logic that Constantine Verevis follows in his analysis of cinematic remakes in the postmodern era. By understanding remakes as intertextuality he leads directly toward the removal of difference through an understanding of the remake as New Hollywood citationality. However, in doing so he limits the possibility of seeing the neutralization of foreign elements and the harsh economic realities of remaking, precisely because his framing emphasizes laissez-faire economics and global cinematic modernity (Verevis 2006). Such details are important in the unequal, disjuncture-filled world that we make sense of through personal experience. Like Vivian Sobchack’s critique of Baudrillard, which resorted to the phenomenological experience of a person who has lost a leg and remembers/knows the difference, the place I seek to reclaim meaning and differentiate between the remake and the demake is in history and the personal experiences of memory and nostalgia.

Historico-Nostalgic Logic

The third logic related to the remake and the demake is what I am calling historico-nostalgic. This logic concerns the relations of the game and player to the past and to the future. In order to tease out the relationships between remake, demake, player and meaning I will explore ideas of time, history and nostalgia. On nostalgia, Svetlana Boym writes:

At first glance, nostalgia is a longing for place, but actually it is a yearning for a different time – the time of our childhood, the slower rhythms of our dreams. In a broader sense, nostalgia is rebellion against the modern idea of time, the time of history and progress. The nostalgic desires to obliterate history and turn it into private or collective mythology, to revisit time like space, refusing to surrender to the irreversibility of time that plagues the human condition. (Boym 2001, xv)

Similar to Boym’s obliteration of history, Reinhart Koselleck writes of the constant need to rewrite history so that it aligns with the concept of the modern present (Koselleck 1985, 250). History is never simply dates in the past, one after another, but a specifically aligned genealogy that culminates in the present and leads to the future. It is this politics of alignment between past, present and future that can be seen in the crisis of memory of which the cinematic remakes of the end of the 20th century are a part, as are the gaming remakes and demakes of the beginning of the 21st century. [11]

Especially since the late 20th century there has been a crisis of memory. While some theorists write of the entanglement of memory and history in the production of national or cultural identity (Sturken 1997), others have written specifically of the rise of nostalgic, retro styles such as black and white film in the 1990s (Grainge 2002), or of the nostalgic consumption of mid 20th century film and television classics in the home (Klinger 2006). All of these engagements with memory and the past are slightly different: while some focus on cultural/national memory and the construction of cultural/national identity, others focus on personal forms of nostalgia and the individual’s own interaction with his or her past. My contention is that the difference between remakes and demakes lies on the line between memory and history, and that to conflate the two forms of repetition leads toward the representational logic but away from any ability to make useful meaning out of the texts: while memory and history might be tangled, they are not the same thing.

Boym writes of two tendencies of nostalgia that help differentiate between the two types of gaming repetition. She explores the concept of nostalgia related to place and time in her native Russia. Having left, as she thought, for good, Boym returned after the fall of the USSR and explored both her own and the nation’s interaction with nostalgia. Her framework establishes two general tendencies of nostalgia, restorative and reflective. “Restorative nostalgia puts emphasis on nostos and proposes to rebuild the lost home and patch up the memory gaps. Reflective nostalgia dwells in algia, in longing and loss, the imperfect process of remembrance” (Boym 2001, 41). The two tendencies do not map perfectly onto any single example, as they exist in relative amounts, but it is possible to use them to talk about the remake and the demake.

The restorative tendency aims toward a unified truth, generally understood as the national project. It “manifests itself in total reconstructions of monuments of the past” (Boym 2001, 41). Restorative nostalgia can be seen in the active use of the past to form a particular history. As said before, Tomb Raider: Anniversary reconstructs the entirety of Tomb Raider, and what it leaves out is in fact excised from history: the reconstruction becomes history, not the past/original itself. As Constantin Fasolt writes, against the historians’ rule that separates the past as immutable from the present as ongoing:

History is constitutive of modern politics, constitutive of the kind of modern state that claims sovereignty for itself and the autonomy of individuals subject to nothing except their conscience and the laws of the physical universe. The prohibition on anachronism? It merely seems to be a principle of method by which historians secure the adequacy of their interpretation. In truth the prohibition on anachronism defines the purpose for which the discipline of history exists: to divide the reality of time into past and present. History enlists the desire for knowledge about the past to meet a deeper need: the need for power and independence, the need to have done with the past and to be rid of things that cannot be forgotten. (Fasolt 2004, 13)

Similarly, the remake has unifying, restorative elements within it. Through remaking an old game the producer and industry create a specific history that highlights very specific aspects. Certain games (Doom and Tomb Raider) or genres (FPS and 3rd person action/adventure) are identified as important and a unified gaming history is created that further mirrors the techno-economic logic that both supports and is supported by the Capitalist mode of production. If, as Thomas Kuhn states, “history…disguises the nature of the work that produced it” (quoted in Fasolt 2004, 39), then the remake disguises the nature and meaning of its reproduction. Instead of questioning the logic that revels in increased realisticness instead of realism, the remake highlights the logic of realisticness and simulation, and justifies itself through the basic concept of economics.

In contrast, the reflective tendency can be seen in the demake, which dwells in the ruins, patina and dreams of the old genres, sights, sounds and experiences that the creators of the demakes themselves witnessed, remember(ed) and attempt to reflect on. The demake drive shuns any form of linear progress by flipping back and forth between present and past modes, and in fact brings the singular primacy of natural progression into question. Unlike the justification of certain games and genres, the demake highlights those genres that have been abandoned in the past (the text adventure, the side-scroller and 8bit sound) and those games that have been ignored due to their lack of economic appeal (artistic, serious and cult-classic games such as Portal and Shadow of the Colossus). [12] While there is every possibility that such reflective nostalgia will be co-opted back into the dominant mode, it is important to note the entire chain of meaning-making, including the differences in the loop of production, dissemination, consumption and alteration (Du Gay 1997). The production side is still important even if we have abandoned the reductive injection model of media effects.

It is the personal interaction with one’s own past, remembering the games and genres of childhood, that drives demakes and their interaction with reflective nostalgia. Unlike the ultimate reduction through destabilized postmodern meaning on the consumption end and through the simulational equality of forms of repetition, the impetus behind both those making and those playing demakes questions the dominant system. One of the rhythms in life is that of repetition, and the liminal moment of reflectivity, before it is whitewashed out under a restorative move, reveals the cracks in the dominant system. While nostalgia, restorative or reflective, is never literal in that it never actually brings back the past, such protest is important if one still believes in some sort of rendition of a dialectic, of progress, or even of simply understanding reality, be it imagined or Real.

Travelling in Place

Perfect memory is impossible, but it is also undesirable: there is a need on both the cultural and personal levels to forget in order to heal and be made whole as both a nation and an individual (Ricoeur 2004). This impossibility of perfection extends to all levels of repetition, including translation, memory, archiving, history and representation. Further, just as representation is never simply re-presenting something of the there and then in the here and now, repetition is never simply repeating something. Repetition always repeats some things but leaves other things out. The need to forget, to not repeat, implies the selection of particular pasts through a filtering of history, but this does not necessarily lead to the conclusion of postmodern pluralism where there is no difference between what is remembered, translated, archived, re-membered and written into history, or what is remade or demade. There are important differences that cannot and should not be ignored.

In his reconsideration of theory travelling between contexts, Edward Said wrote that “The point of theory… is to travel, always to move beyond its confinements, to emigrate, to remain in a sense in exile” (Said 2007, 252). While Said writes of an affiliation to Lukács beyond mere borrowing or adaptation, he also writes that conflating the Viennese twelve-tone music of Adorno with the Algerian resistance and French colonialism of Fanon would be grotesque. Similarly, the demake and the remake must be understood as tangled with repetition, but not as inextricably tied to it, something that is impossible through simply following concepts such as simulation, remediation and transcoding. Change is a type of production of knowledge, and knowledge is never innocent. As technological, representational and historical modes of knowledge production, the remake and the demake are neither objective nor innocent and must be understood as such.

Endnotes

[1]  Such realisticness is of course separate from concepts of social realism (Galloway 2006).
[2]  Within translation theory the trope of faithfulness is opposed by an assumed impossibility of perfect translation (also seen in the Italian adage traduttore/traditore), and both are bracketed by sub-methods such as source and target orientation, literary and literal styles, fidelity to meaning or word, and finally the opposition of domestication and foreignization. In my thinking translation is an unspoken/unacknowledged trope within the remake and the demake (See: Bassnett and Lefevere 1990, Venuti 1994 and 1998, and Berman 1992).
[3]  The interaction with 8bit culture is not limited to games and music, but extends into the art realm. Cory Arcangel’s Super Mario Clouds and the recent exhibit Ich Bin 8-Bit are examples of artistic engagement with 8bit culture (Arcangel 2002 and Ablan et al. 2009).
[4]  So far there are four playable songs: Guns N’ Roses, “Sweet Child o’ Mine”; Michael Jackson, “The Way You Make Me Feel”; Daft Punk, “Harder, Better, Faster, Stronger”; a-ha, “The Swing of Things.” Chicane’s “Low Sun” and Daft Punk’s “Aerodynamic” are used in the program, but are unplayable as songs.
[5]  The dead end of Baudrillard’s simulation can also be sidestepped by the logic of “social realism” and the phenomenological link between the simulation within the game world and in the player’s lived environment (Galloway 2006).
[6]  While I refer specifically to the computer’s development, it is more useful to link this technological trend to the teleological view of history and civilization that is dominant in modernity.
[7]  This is jagged as I am ignoring the personal computer’s processor, Nintendo’s later consoles and Microsoft’s consoles. I am using these particular consoles as they are the ones that I know the best at the moment.
[8]  It should also be noted that due to the “uncanny valley” programmers have in many instances attempted more stylized representation in lieu of unsettlingly real and yet not real CGI and polygonal characters (See: Mori 1970).
[9]  Like cinematic remakes (Psycho 1998 is a good example), gaming remakes can flop, but the logic remains that a remake is safer (with the emphasis on the relative aspect) than an entirely new game.
[10]  In fact, the term “event,” something real, becomes a rare entity within Baudrillard’s work (Galloway 2007).
[11]  This could also be related to Derrida’s discussion of archives as dealing with the future, through the sur-vival of the event as opposed to forgetting, which is superrepression/anarchive and dealing with the past (Derrida 1996).
[12]  Portal was demade twice as Super 3D Portals 6 and Portal: The Flash Version; Shadow of the Colossus was demade twice as Hold Me Closer, Giant Dancer and Shadow of the Bossus.

Sources

Ablan, Love, Jon M. Gibson and Derek Puleston (curators). Ich Bin 8-Bit. Neurotitan Gallery for the Pictoplasma Character Walk. Berlin. March 17, 2009 – April 4, 2009. <http://loveablan.com/exhibitions/IchBin8Bit/>
Althusser, Louis. “Ideology and Ideological State Apparatuses (Notes Towards an Investigation).” In Video Culture: A Critical Investigation, edited by John C. Hanhardt. Salt Lake City: G.M. Smith in association with Visual Studies Workshop Press, 1986.
Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization, Public Worlds. Minneapolis, Minn.: University of Minnesota Press, 1996.
Arcangel, Cory. Super Mario Clouds. 2002. <http://www.beigerecords.com/cory/Things_I_Made/SuperMarioClouds>.
Barta, Tony, ed. Screening the Past: Film and the Representation of History. Westport; London: Praeger, 1998.
Barthes, Roland. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
—. Image, Music, Text. New York: Hill and Wang, 1977.
Bassnett, Susan, and André Lefevere, eds. Translation, History and Culture. London: Pinter Publishers, 1990.
Baudrillard, Jean. “The Precession of Simulacra.” In Simulacra and Simulation. Ann Arbor: University of Michigan Press, 1994.
Berman, Antoine. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992.
Bolter, J. David and Richard Grusin. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press, 1999.
Boym, Svetlana. The Future of Nostalgia. New York: Basic Books, 2001.
Cook, Pam. Screening the Past: Memory and Nostalgia in Cinema. London ; New York: Routledge, 2005.
Deleuze, Gilles, and Claire Parnet. “Politics.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
Derrida, Jacques. Archive Fever: A Freudian Impression. Chicago; London: University of Chicago Press, 1996.
Du Gay, Paul. Doing Cultural Studies: The Story of the Sony Walkman. London: Sage, in association with The Open University, 1997.
Fasolt, Constantin. “A Dangerous Form of Knowledge.” In The Limits of History. Chicago: University of Chicago Press, 2004.
Fish, Phil. “de-makes.” TIGForums Independent Gaming Discussion. The Independent Gaming Source. Written: August 20, 2007. Accessed: March 12, 2009 <http://forums.tigsource.com/index.php?topic=448.0>.
Forrest, Jennifer and Leonard R. Koos, eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002.
Foucault, Michel. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003.
Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006.
—. Protocol: How Control Exists after Decentralization. Cambridge, Mass.: MIT Press, 2004.
—. “Radical Illusion (a Game against).” Games and Culture 2, no. 4 (2007).
Grainge, Paul. Monochrome Memories: Nostalgia and Style in Retro America. Westport, Conn.: Praeger, 2002.
Harvey, David. “Space, as a Key Word.” In Spaces of Global Capitalism: Towards a Theory of Uneven Geographical Development. London; New York, NY: Verso, 2006.
Hebdige, Dick. Subculture: The Meaning of Style. London: Methuen, 1979.
Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment. New York: Seabury Press, 1972.
Hutchison, Andrew. “Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom.” Game Studies 8, no. 1 (2008).
Klinger, Barbara. Beyond the Multiplex: Cinema, New Technologies, and the Home. Berkeley: University of California Press, 2006.
Koselleck, Reinhart. “‘Neuzeit’: Remarks on the Semantics of the Modern Concepts of Movement.” In Futures Past: On the Semantics of Historical Time. Cambridge, Mass.: MIT Press, 1985.
Lefebvre, Henri. Critique of Everyday Life. Translated by John Moore. London; New York: Verso, 1991.
—. Rhythmanalysis: Space, Time and Everyday Life. Translated by Stuart Elden. Athlone Contemporary European Thinkers. New York: Continuum, 2004.
Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.
Mori, Masahiro. “The Uncanny Valley.” Energy 7, no. 4 (1970): 33-35.
Ricoeur, Paul. Memory, History, Forgetting. Chicago: University of Chicago Press, 2004.
Rodowick, D. N. The Virtual Life of Film. Cambridge, Mass.: Harvard University Press, 2007.
Said, Edward W. “Traveling Theory Reconsidered.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
Sturken, Marita. Tangled Memories: The Vietnam War, the Aids Epidemic, and the Politics of Remembering. Berkeley: University of California Press, 1997.
Tagg, John. The Burden of Representation: Essays on Photographies and Histories. Minneapolis, Minn.: University of Minnesota Press, 1993.
Venuti, Lawrence. The Translator’s Invisibility. New York: Routledge, 1994.
—. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.
Verevis, Constantine. Film Remakes. Edinburgh: Edinburgh University Press, 2006.
Wardrip-Fruin, Noah, and Pat Harrigan. First Person: New Media as Story, Performance, and Game. Cambridge, Mass.: MIT Press, 2004.
Whalen, Zach, and Laurie N. Taylor, eds. Playing the Past: History and Nostalgia in Video Games. Nashville: Vanderbilt University Press, 2008.
Yu, Derek. “Bootleg Demakes Competition.” The Independent Gaming Source. Accessed: March 12, 2009 <http://www.tigsource.com/features/demakes/>.

Games

Bigpants. Hold Me Closer, Giant Dancer. The Independent Gaming Source. Accessed: March 16, 2009 <http://forums.tigsource.com/index.php?topic=2817.0>.
Bungie Software. Pathways Into Darkness. Bungie Software. 1993.
Core Design Ltd. Tomb Raider. Eidos Interactive. 1996.
Crystal Dynamics. Tomb Raider: Anniversary. Eidos Interactive. 2007.
Hansen, Kent and Andreas Pederson. D+Pad Hero. 2009. Accessed: March 11, 2009 <http://dpadhero.com/Home.html>.
Harmonix Music Systems. Guitar Hero. RedOctane. 2005.
Hinchy. Super 3D Portals 6. The Independent Gaming Source. Accessed: March 15, 2009 <http://forums.tigsource.com/index.php?topic=2391.0>.
Id Software. Doom. Id Software. 1993.
—. Doom III. Activision. 2004.
mrfredman and MedO. Gang Garrison 2. Gang Garrison. Accessed: March 20, 2009 <http://ganggarrison.com/>.
Saint. Shadow of the Bossus. The Independent Gaming Source. Accessed: March 16, 2009 <http://forums.tigsource.com/index.php?topic=2402.0>.
SCEI. Shadow of the Colossus. SCEI. 2005.
Tal, Ido (Dragy2005) and Hen Mazolski (Hen7). Portal: The Flash Version. Newgrounds. Accessed: March 15, 2009 <http://www.newgrounds.com/portal/view/404612>.
Valve Software. Portal. Valve Software. 2007.
—. Team Fortress 2. Valve Software. 2007.

Translation or Localization

Very little work has been done on translation and language within gaming. What work exists is mostly relegated to celebratory domestication studies claiming that games sell poorly when the “translation” does not take the target culture into consideration. This, of course, leads to an understanding that language is negligible next to sales figures; that the good translation hides behind the play; that translation is, in fact, simply localization.

Localization is taking a product and altering it to sell to a local audience. It is a business term intricately tied to economics and politics. On the economic side, a good localization is one that sells well: change is good as long as it sells more. On the political side, a good localization is one that is acceptable to its audience: censoring becomes a good thing. Within gaming, translation has always been a matter of localization due to the commercial nature of games. To problematize this combination one must either separate gaming from commercial endeavor (something constantly attempted by serious games, art games, et cetera), or problematize the aspect of new media theory that focuses on variability to the detriment of difference.

Manovich’s principle of variability holds that new media has no original. There is no original, but then again there is also no secondary, as all versions are parts of the same code and property. Thus, within a logic of variability, changing the language of a game is merely a matter of localizing a new media text that otherwise does not change.
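
To see this variability logic concretely, consider how game dialogue is commonly externalized into string tables keyed by locale. The sketch below, in Python, uses entirely hypothetical names (STRINGS, Game, greet) and describes no actual engine; it simply shows how, under variability, the code never changes and “localization” amounts to pointing that code at a different data table.

```python
# A minimal sketch of the variability logic: every locale shares one
# code object, and each language is an interchangeable data table.
# All names here are hypothetical illustrations, not a real engine API.

STRINGS = {
    "en": {"greeting": "Welcome, hero!", "quit": "Quit"},
    "ja": {"greeting": "ようこそ、勇者よ！", "quit": "終了"},
}

class Game:
    """The executable text of the game; identical for every locale."""

    def __init__(self, locale="en"):
        # Localizing the game is nothing more than selecting a table;
        # the code, the play, and the property remain one object.
        self.strings = STRINGS[locale]

    def greet(self):
        return self.strings["greeting"]

# Neither instance is original and neither is secondary:
# both are variants generated from the same code.
print(Game("en").greet())
print(Game("ja").greet())
```

It is precisely this architecture that makes language appear negligible: the writer’s words have no more standing than a texture file or a sound asset, which is the assumption contested below.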

The problem with this understanding is that it strips intention out of the language itself. Games have intentions other than play, and one aspect of this intentionality is the language used within them. By focusing on pleasurable flow toward an audience, and by justifying this through an understanding of games as variable new media, the intentionality of the original writer is removed. To reinsert an idea of intention, if not of an origin, it becomes necessary to focus on the concept of translation rather than localization.