Posts Tagged ‘humanities’

The Future of the (Imperceptible) Humanities

Thursday, January 12th, 2012

This is a cross-posted entry for something that I actually believe, which I wrote for a different website. Here’s the link: http://uchumanitiesforum.org/2012/01/02/the-future-of-the-imperceptible-humanities/

On Translation and/as Interface

Monday, February 14th, 2011

I. Windows, Mirrors and Translations

In their book Windows and Mirrors, J. David Bolter and Diane Gromala discuss the interfaces of digital artifacts (primarily artistic ones, but that is in large part due to their SIGGRAPH 2000 sample set) in terms of two tendencies. The first is the invisible window, where we see through the interface to the content; the other is the seemingly maligned (at least recently, and in much of the design literature) reflective mirror, which reflects how the interface works with/on us as users.

This is similar in ways to Ian Bogost and Nick Montfort’s Platform Studies initiative, where the interface exists between the object’s form/function and its reception/operation, and this interface can do many things depending on its contextual and material particulars. We need only look at the difference between Myst with its clear screen and Quake with its HUD, or between Halo with its standard gamepad and Wario Ware: Smooth Moves with its Wiimote, to see the range.

However, when paired with interface, the discussion of windows and mirrors, immediacy and hypermediacy, seeing through and looking at also brings up translation. A translation is also an interface. It can be a window or a mirror, transparent or layered: you can see through it to some content, or you can be forced to look at the form of the translation itself.

But thinking of translation as an interface in Bolter and Gromala’s sense, or as Bogost and Montfort’s interface layer, is unusual. The usual move is to place translation outside of the game as a post-production necessity that enables the global spread of the product or, at best, as an integrated element of the production side that minimally alters the text so that it can be accepted in the target locale. Even researchers within the field of game studies generally ignore the language of the game: nobody asks what version the researcher played, because we all recognize that we play different versions; more important is that the researcher played at all.

So translation’s place is in question. Is it production? Post-production? Important? Negligible? And how does one study it? We can barely agree upon how to study play and games themselves, so surely this is putting the cart before the horse (or maybe some nails on the cart before both). But, no, I still wish to follow through with this discussion, as I believe it can be productive. My question is how translation relates to games, and hopefully I can come up with a few thoughts/answers if not a single ‘truth.’

II. Translation and Localization

As Heather Chandler has so wonderfully documented, the translation of a game has a variable relationship to the production cycle. At one point it was completely post-productive and barely involved the original production and development teams. At its earliest it was simply the inclusion of a translated sheet of instructions to aid the user in deciphering what was a game in a completely foreign language. This still exists in certain locations, especially those with weaker linguistic and monetary empires (obviously not English, but ironically this includes China, where games are often gray- or black-market Japanese imports). This type of translation, called a non-localization, has slowly given way to more complete localizations, including “partial” and “full” localizations. Partial localizations maintain many in-game features: menus and titles switch language, and while audio may remain as is, subtitles will be included. In contrast, a full localization tends toward altering everything to a target preference, including voices, images, dialogue, background music, and even game elements such as diegetic locations. As the extent of localization increased, the position (temporal and in importance) of translation in the production cycle changed. It moved forward and required pre-planning for nested file structures. It also grew in importance, so that more money might be spent to ensure a better product.
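The difference between non-, partial, and full localization can be sketched as a question of which asset categories get swapped for the target locale. The following is purely illustrative: the category names and function are my own hypothetical shorthand, not any real localization pipeline.

```python
# Illustrative sketch (hypothetical names, not a real engine): the extent
# of localization determines which asset categories get swapped for the
# target locale.

SWAPPED_ASSETS = {
    "non":     set(),                            # translated sheet only; game untouched
    "partial": {"menus", "titles", "subtitles"}, # audio stays in the source language
    "full":    {"menus", "titles", "subtitles", "audio",
                "images", "music", "locations"}, # everything targets the locale
}

def asset_language(asset, level, source="ja", target="en"):
    """Return which language a given asset ships in at a localization level."""
    return target if asset in SWAPPED_ASSETS[level] else source

print(asset_language("audio", "partial"))  # partial localization keeps original voices
print(asset_language("audio", "full"))     # full localization re-records them
```

The point of the sketch is temporal: the more categories that get swapped, the earlier in production the file structure has to anticipate the swap.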

However, aside from a few gaffes like “all your base” and other poor translations from the early years, game translation has increasingly become invisible. This invisibility, or transparency, has been written about extensively by Lawrence Venuti regarding literary translation, the status of the translator, and the relationship of global to national cultural production. For my purposes here I will simply say that he argues fluent translations are a problem (in the context of American empire) and that current game localization practices (which are multi/international, but in many ways American-centric) do exactly what he claims is bad. We don’t need to accept his arguments regarding empire and discursive regimes of translation (although I do), but we should be aware of the parallels between what he describes through literary analysis and many translation reviews, and the way that nobody even talks about a game’s translation.

So the industry hides translation. But why does the academic community ignore it? Is it not a part of games? Maybe. But is it a part of play?

III. Ontology

Ontologies of play typically exclude translation. This is most obviously demonstrated in Jesper Juul’s summary of common definitions of games, which he uses to form his own classic game model. Rules are all well and good, but all games have a context, and it is this context that Juul misses when he dismisses the idea of “social groupings” (Juul 34). Juul pulls this from Huizinga, and it is key that it relates to Huizinga’s primary contribution of the magic circle and the “ins” and “ofs” of play and culture.

I would argue that games promote social groups, but they also form in social groups, and language is crucial to this as an important (perhaps primary) marker of a social group. However, in Juul’s final analysis “the rest of the world” has almost entirely been removed as an “optional” element (41). It is one thing to say that the outcome might affect the world, but it is another to say it can only be created through that world and that its mere playing affects the world. Juul even acknowledges this in the conclusion to the chapter, where he notes that pervasive and locative games break the rule. However, I would still argue that even the classic model does not obey the “bounded in space and time” principle.

The former can be demonstrated through Scrabble: a game created in English with strict rules, negotiable outcomes, player effort, attachment, valorization of winning, and many ways to achieve it. But the game is completely attached to English. Each letter’s point value is based on its ease of use, and its scarcity in the tile set is based on its common usage. The game is designed around English, and one cannot play it with other languages. Take Japanese: even if one were to romanize the characters one wouldn’t have nearly enough vowels, and if one replaced all of the characters with hiragana there would still be far too many homonyms to make a meaningful/difficult game. Japanese Scrabble might be possible, but it would need to be created by changing a great deal of the game. It is bounded in space and time, but contextually so.
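The vowel-shortage point can be illustrated with a back-of-the-envelope count (the sample phrases below are arbitrary; this is an illustration, not a linguistic study): romanized Japanese, built almost entirely from consonant-vowel syllables, runs far heavier on vowels than English prose, so an English tile distribution starves it.

```python
# Rough illustration of the vowel-shortage argument: compare the fraction
# of vowels in an English sentence versus a romanized Japanese phrase.

def vowel_ratio(text):
    """Fraction of alphabetic characters that are vowels (a, e, i, o, u)."""
    letters = [c for c in text.lower() if c.isalpha()]
    vowels = [c for c in letters if c in "aeiou"]
    return len(vowels) / len(letters)

english = "the quick brown fox jumps over the lazy dog"
romaji = "arigatou gozaimasu mata ashita aimashou"

print(round(vowel_ratio(english), 2))
print(round(vowel_ratio(romaji), 2))
```

On these samples the romanized Japanese comes out well over half vowels, against roughly a third for the English pangram; the standard Scrabble bag's vowel count is tuned to the latter.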

For the latter we can return to Huizinga and Caillois, who both locate play/games within a relationship to culture. Their teleological and Structuralist issues aside, it is important not to simply separate games (the text) from culture, time, and place (the context) in a reductively formal analysis. Huizinga links play to culture as a functional element: its rules serve a purpose, even if that purpose has changed. Caillois notes a key association between types of play and particular societies. Games may be a separate place, but they affect the real world and vice versa.

IV. Platform Studies

So context is important. Essential even. Let’s tack it on and see what happens. Or better yet, let’s say it’s pervasive and inseparable, but also difficult to distinguish. This is much like Bogost and Montfort’s Platform Studies model, so let’s see how translation could be integrated into that model.

Here I will primarily use Montfort’s earlier conceptualization of platform studies from his essay “Combat in Context.” Montfort moves toward a slightly simplified five-layer model from Lars Konzack’s seven-layer model by moving cultural and social context from a layer to a surrounding element. However, it is interesting that while he moves context to a surrounding element, it is the platform that is key for them. Everything in his model relies on the platform.

At the base level, the platform enables what can be created upon it. This includes whether the game appears on a screen; whether the system plays DVDs, cartridges, or downloaded files; how big those are; and what size of game is allowed on them. It is the capabilities of the system and what those capabilities enable. However, the platform layer exists in a context both technological and socio-cultural. The processor chip of the platform is in a particular context and limits the platform, but the existence of a living room with enough space to move can also limit the platform.

Second is the game code. The switch from assembly to higher-level programming was enabled by platform advancements, but it also enabled great differences in the further layers. The way the code existed is also integrally related to linguistics/language. Translating assembly code is painstaking and almost always avoided. The era of assembly code was also the era of in-house translations and non- or partial localizations. In contrast, C and its derivatives enable greater linguistic integration, and as long as programs in higher-level code are written intelligibly, translating them is possible. Context with the game code involves language. This much is obvious, as code is language. But I mean something further: there is a shift in allowances along the way that reveals how real-world “natural/national” languages become integrated but always subsumed under machine languages.
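The shift described above can be sketched as the difference between text hardcoded alongside program logic and text externalized into locale-keyed tables, so that translation touches data rather than code. Everything here (the strings, keys, and functions) is hypothetical illustration, not any real game's code.

```python
# Hardcoded text: translating means editing (and re-testing) the code itself,
# much like patching string bytes in an assembly-era ROM.
def greet_hardcoded():
    return "ぼうけんを はじめますか？"

# Externalized text: translating means adding a row to a table; the program
# logic never changes. Higher-level languages made this separation routine.
STRINGS = {
    "ja": {"start_prompt": "ぼうけんを はじめますか？"},
    "en": {"start_prompt": "Begin your adventure?"},
}

def greet(locale):
    return STRINGS[locale]["start_prompt"]

print(greet("en"))
```

Note that even in the externalized version the “natural” languages live inside a structure dictated by the machine language: integrated, but subsumed.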

Third is the game form: the narrative and rules. What we see, hear and play (if not ‘how’ we see, hear and play). This is the non-phenomenological game. The text, as it is. Of course, if it is the text then what is the surrounding context other than everything?

As we’ve seen from Juul, the rules present an illusion of languagelessness. We enter a world that has a set of rules separate from life, and this prevents one from linking the game to life. But the narrative, if one does not think it an inconsequential thing tacked onto the essential rules, is related to contextually relevant things and presented in linguistically particular ways. Language, then, is here as well, and translation bears an important role. In many ways this is the main place in which one might locate translation, but only if one is a narratologist. If the story is of prime importance, form is where translation exists.

The fourth level is the interface. Not the interface that I began with, at least not quite, or not yet, but the link between the player and the game: the “how” one sees, hears, and plays. To Bogost and Montfort this is the control scheme, the Wiimote and its phenomenological appeal compared to the gamepad or joystick, but it is also the way the game has layers of information that it must communicate to the user. The form of the game leads toward certain options of interface: a PVP FPS must have easily read information that allows quick decisions and a full game-time experience, but a slow RPG can have layers of dense, opaque interface that force the user to spend hours making decisions in non-game time.

The interface also enables certain things. A complicated interface is hard to pick up and understand, but a simple one is easy. This is a design principle that Bolter and Gromala contest, but there are levels of truth in it. A new audience is not likely to pick up the obscenely difficult layered interface of an RPG or turn-based strategy game, but a casual point-and-click may be easily picked up and learned (if just as easily put down and forgotten).

In some ways this is also where translation exists, and in some ways it isn’t. Certainly the GUI’s linguistic elements can be translated, but more often they are programmed in a supposedly non-linguistic and universal manner: [heart symbol] stands for life and [lightning bolt] stands for magic or energy, or life is red and energy/magic is blue. Similarly, the audio cues are often untranslated. And controls mainly stay the same. Perhaps one of the few control changes of interface is the PlayStation’s alteration of O, or ‘maru,’ for ‘yes’ and X, or ‘batsu,’ for ‘no’ in Japanese to X, or check, for ‘yes’ and O for ‘no’ in English.
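That O/X swap can be sketched as a region-keyed mapping. The table and names below are my own illustration of the convention, not any console's actual API:

```python
# Sketch of the confirm/cancel swap: the same two physical buttons carry
# opposite meanings under Japanese and Western interface conventions.

CONFIRM_CANCEL = {
    "jp": {"confirm": "O", "cancel": "X"},  # maru = yes, batsu = no
    "na": {"confirm": "X", "cancel": "O"},
}

def button_for(region, action):
    """Which face button signals a given action in a given region's convention."""
    return CONFIRM_CANCEL[region][action]

print(button_for("jp", "confirm"))
print(button_for("na", "confirm"))
```

The sketch makes the point compact: this is one of the rare places where localization reaches past the GUI’s text and into the control scheme itself.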

The fifth level is reception and operation: how the user and society receive the game, how it has come from prequels and gone to sequels, its transmedial or generic reverberations, and even the lawsuits and news surrounding it. All of these point outside of the game, but how does one then separate context? Is the nation the receiver or the context? Is the national language or dominant dialect part of the level or the surrounding context? Is it affected by the game, or can it then affect the game? And even if it affects the game by being on the top layer, is it negligible in its importance? Is this another material-versus-ideological Marxist fight for a new generation?

A short answer is that Bogost and Montfort answer all of this by placing context as a surrounding element, but they also fail to highlight its importance. Pushing context out to the surrounding bits essentializes the core and approves of an analysis that does not include the periphery. The core can be enumerated; the periphery can never be fully labeled or contained.

Elements of importance are too destabilized to be meaningful when analyzed according to platform studies. Translation is a prime example, but race and sexuality are equally problematic. Their agenda is not contextual but formal. Mine is contextual and cultural.

V. Translation as Interface

The goal of localization is to translate a game so that a user in the target locale can have the same experience as a user in the source locale. For localization, then, translation is about providing a similar fifth-level reception and operation experience. However, to provide this experience the localizers must alter the game form level by physically manipulating the game code level. The interface, beyond minor linguistic alteration, is not physically altered, and yet it is the metaphor of what is being done to the game itself. The translation of a game, like Bolter and Gromala’s critique of the interface as window, attempts to transparently allow the user to look into a presumed originary text or, in the case of games, into an originary experience. It reduces the text to a singular experience/text. However, the experience and text were never singular to begin with. In translations, too, we need mirrors as well as windows, so how can we make a translation that reads like a mirror by reflecting the user and his or her own experience?

First, all of Bolter and Gromala’s claims against design’s obsession with windows and transparency are completely transferrable to games as digital artifacts and to the localization industry’s professed agendas. Thus, the primary necessity is to acknowledge the benefit of a non-window translation. Second, the translation must be put in place as a visible, reflective interface that shows the user’s playing particulars, the original’s playing particulars, and the way that the game’s form and code have been changed in the process. This could be enabled by a more layered, visible, foreignizing translational style. Instead of automatically loading a version of the game, the user should be required to pick a translation and be notified that they can pick another. Different localizations should be visibly provided on a single medium. Alternate, fan-produced modification translations should be enabled. If an uncomplicated translation-interface is an invisible and unproductive interface, then a complicated translation-interface is a visible and productive one. Make the translational interface visible.
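A minimal sketch of what such a visible translation-interface might demand of a game loader, assuming hypothetical translation IDs and labels throughout: no locale is loaded silently, the alternatives (official and fan-made alike) stay listed, and the player's choice is an explicit act.

```python
# Hypothetical loader sketch: translation selection as a visible, mirror-like
# interface rather than an invisible window.

AVAILABLE_TRANSLATIONS = [
    {"id": "ja",     "label": "Japanese (original)"},
    {"id": "en-off", "label": "English (official localization)"},
    {"id": "en-fan", "label": "English (fan retranslation)"},
]

def choose_translation(choice_id):
    """Require an explicit choice; never default invisibly."""
    ids = [t["id"] for t in AVAILABLE_TRANSLATIONS]
    if choice_id not in ids:
        raise ValueError(f"Pick one of {ids}; no silent default is loaded.")
    # Keep the unchosen versions visible so the player knows they exist.
    return {"loaded": choice_id, "alternatives": [i for i in ids if i != choice_id]}

session = choose_translation("en-fan")
print(session["loaded"])
print(session["alternatives"])
```

The design choice is the whole argument in miniature: raising an error on a missing choice is the loader refusing to be a transparent window.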

VI. References

  • Bolter, J. David, and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003.
  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.
  • Chandler, Heather Maxwell. The Game Production Handbook. 2nd ed. Hingham: Infinity Science Press, 2009.
  • Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge: MIT Press, 2005.
  • Montfort, Nick. “Combat in Context.” Game Studies 6, no. 1 (2006).
  • Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge: MIT Press, 2009.
  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1995].

What Difference an M Makes

Tuesday, January 19th, 2010

One of my pleasures is reading. It is also one of my guilty pleasures, as I tend to read books of a speculative nature. My thoughts have always dwelled near the question: why would I want to read about the world I live in? Where’s the fun in that? Where’s the escape? Yes, I’m an escapist, and that has included worlds of alternative reality, fantastic worlds, futuristic worlds, and even alternatively represented worlds such as animation. With that (probably unsurprising) admission out of the way, I can get to a topic that has bothered me for quite a while, and which has recently had a new development (new if only in that I recently noticed it).

Authors, genres, sorting and status.

An author I’m rather fond of is Iain Banks. He writes fiction. Most of it could be in this world, although some of it is a bit iffy, or at least somewhat psychotic. Okay, that describes most fiction; how “real” is the Illuminati in comparison to Area 51 and extraterrestrials? I first read Banks’ Dead Air, which I borrowed from a friend in 2004. I loved it, but I couldn’t remember who had written it after I gave it back, and didn’t read anything else of his for half a decade. When I finally did figure out who that Scottish writer my Scottish friend had loaned me was, I was confronted with two things. The first was Iain Banks. I proceeded to read The Steep Approach to Garbadale, The Business, and Whit. The second thing I found was Iain M. Banks, the author of the Culture series of science fiction and various one-offs. Those of you who have bothered to guess will probably realize that Iain Banks and Iain M. Banks are the same person.

Average logic seems to hold that people cannot write for multiple genres at once. Or that audiences don’t shop for multiple genres.

But maybe logic should think of all the pseudonyms out there, and then question the purpose of those alternate pen names. Banks wrote three books as Banks, then got his publishers to publish a sci-fi book. It came out under Iain M. Banks so as not to confuse audiences (or so holds the Wikipedia entry). Maybe it’s for the readers. It’s definitely not because Banks cannot write for both genres, as he does well and has done well for over two decades and twenty books.

So why is it that the United States publisher (Orbit) has chosen to publish Banks’ latest novel, Transition, by Iain M. Banks? It was published in the United Kingdom as a book by Iain Banks, and both versions are visible, unproblematically, on Banks’ website, showing the different covers and different names.

Banks has no problem with his name separation (and integration). So why do I care? What is it that I see as troubling and annoying about both the separation and integration of a science fiction identity and a fiction identity? Mainly status.

Salman Rushdie is a good, similar example. Rushdie’s works are fantastic. They question reality. But they’re “Fiction.” Even one of his earliest works, Grimus, a very “science fiction and fantasy” novel if ever there were one, is happily labeled “Fiction” and sorted alongside Rushdie’s other, “serious” books. While it is labeled “Fantasy novel/Science Fiction” on Wikipedia, the Amazon entry (like most other booksellers’) has ignored this and simply lists it as “Literature & Fiction.”

In bookstores’ sorting systems, especially those of 20 years ago, when both the M and Rushdie’s singular straying happened, Fiction was the high genre, and anything more “generic,” anything that needed a modifier, be it fantasy, science fiction, thriller, or romance, was the low move toward rubbish, or at least toward special audiences (where special has all of its connotations, good and bad).

Rushdie rode his barely (and yet very) “Fiction” style out to become one of the most influential writers of the late 20th century. This has much to do with his status as a postcolonial, and yet British, subject, as well as the politico-religious issues surrounding The Satanic Verses. However, as his work was “serious,” it brought the very unserious early novel along with it. This preserved the singular location of an author within a store and, essentially, the analogue archive.

In contrast stands Neal Stephenson, a second prime example, whose early work was in fiction (The Big U, Zodiac, and a few disavowed co-written works) before he smashed onto the scene with Snow Crash and The Diamond Age, two cyberpunk highlights. Stephenson is located in the science fiction section. Again, this is in contrast to his incredibly popular (alternative) historical fiction, Cryptonomicon and the Baroque Cycle. Because his original hits were in science fiction he has remained in that area. This has not prevented him from garnering support and sales, but it has prevented him from winning awards other than those in science fiction, which his popular historical fiction novels do not fit. It has placed him, marked him, classified him as a science fiction author.

The placement within the archive, one’s labeling/identifying, denotes the status of the author. Rushdie is respected, as he is in Fiction. Stephenson is less respected, as he is in Science Fiction. Banks avoided this very possibility with the little M., which separated his identities and forced his presence into both places of the archive (and store). With the doubled name, Banks broke the status game.

But that is exactly where I see the problem now. My guess is that within the United States, where sci-fi is low but popular, M. Banks and the Culture novels sell better. This might be switched in the UK, where Banks is known as a Scottish author and gets additional sales because of that and the brogue of his Fiction novels.

The collapse of Banks into M. Banks within the US does a few things. It attempts to ride M. Banks’ greater popularity so as to increase the Fiction sales. This is fine as far as anything capitalistic goes. However, it also problematizes the location, and therefore the status, of Banks in the Fiction section. As M. Banks, his previous Fiction books stand to be reissued and relocated to the sci-fi section. In some ways this makes no sense; in others it’s good business. But I see it simply as the denigration and codification of generic borders.

Thoughts on DAC – Programmers and Humanitists

Thursday, December 17th, 2009

The conference Digital Arts and Culture is meant to combine various people from various fields in order to talk and work. It’s interdisciplinary. It also seems to combine people who are in multiple disciplines. It’s multidisciplinary. Unfortunately, the result has pitfalls similar to the standard woes of disciplines. Namely: a) you go to what you know; b) there are multiple sessions at any given time; c) these sessions roughly break down into humanities, arts, and computer science. Together these mean that the three groupings have much more limited interaction than might otherwise happen.

I have two examples to elaborate.

1.

On the first day there was a panel on Software Studies. In it Aden Evens gave a talk on Programming and the Fold (or Edge, as he changed it to). The talk was interesting, but it was delivered to a room filled with programmers who were mumbling and stirring in anger during the first half, saying “wrong wrong WRONG!” to themselves and each other. This was offset by the second half, when something clicked and they suddenly became interested as he moved to the second part that he was trying to connect. However, the Q&A consisted primarily of people taking him to task in various ways.

Now, there are two things that are important here. The first is that Aden Evens is, apparently, a humanitist (yes, it’s a clunky word, and there might be something better). He has spent time coding, so he’s done his work enough to talk about the programming side and, importantly, to present on a panel that’s slightly more focused on the programming side. The second is that he was largely alone in that room, as his fellow humanitists were likely off in the embodiment and performance session.

The result is that the two sides did not really interact and the place where they did interact was as if in enemy territory. Even though there was discussion, it was slightly at odds.

2.

The second example is when I presented on the second day. My own talk had been accepted in both the Future of Humanist Inquiry (humanitist) and Software Studies (programmers) sections, but for various reasons I went with the Software Studies side. At my panel there were four people:

  • Scholarly civilization: utilizing 4X gaming as a framework for humanities digital media
    [Elijah Meeks, University of California, Merced]
  • Shaping stories and building worlds on interactive fiction platforms
    [Alex Mitchell, Communications and New Media Programme, National University of Singapore]
    [Nick Montfort, Massachusetts Institute of Technology]
  • Translation (is) not localization: language in gaming
    [Stephen Mandiberg, University of California, San Diego]
  • Seriality, the Literary and Database in Homestar Runner: Some Old Issues in New Media
    [Stephanie Boluk, Department of English, University of Florida]

Of note is that my panel consisted of one “refugee from the humanities theme,” one critical geographer, myself (who chose this option and tailored my talk accordingly), and a CS-oriented twosome doing slightly more typically programmer things.

The result (and the opposite of the first example) was that questions and discussion were geared completely toward the programming talk. There were some humanitists in the room (I saw them), but they tended to drift in and out. Of the 20 minutes of Q&A, 18 or so were discussion about the IF presentation. No questions went to the critical geographer. Now, this might be considered a matter of bitching, but I’m really trying to say it’s a matter of disconnect.

For instance, one of the lines of questioning about the IF discussion focused on the movement and particularities of platforms, programming, and the possibilities of moving between platforms. This is localization, the exact topic that I had been discussing, and in a very similar way to how I had been discussing it. So similar, in fact, that the chair and I glanced at each other before I felt the need to jump in and point out some of the problems with what they were discussing. This is discussion as it should be, yes, but it is also a disconnect that the completely obvious link went unremarked until I forced it.

DAC, then, is interdisciplinary, fine. But it’s also very fragmented by the mentality of doing what’s comfortable, going to the talks that you know, and, of course, mingling with the people who are like you.