Monday 11th November saw the British Academy host Translation in a Digital Age, an AHRC-supported event open to the public and forming part of Language Festival 2013, run by the British Academy in conjunction with the Guardian.
The presentations of the five-speaker panel built on discussions from a similar event last year about multilingualism and the internet, and the new ways the web has opened up for language users to interact. This holds particular relevance for the AHRC Translating Cultures theme, as the phenomenon of translation is undergoing radical change driven by technological advances and the internet.
In the broadest of terms, each presentation engaged with the following questions:
- How do phenomena such as computational linguistics, crowdsourcing, and translation applications and services revolutionise the way translations are consumed?
- What is, and what is not, translatable?
- What is the place of human agency in translation?
The panel opened with Stephen Pullman’s presentation on the progress of machine translation (MT). Reflecting on seminal research published in 1993, Stephen traced the development of translation software over the past two decades. Perhaps the most important landmark has been the shift from word-based to phrase-based alignments, by which online translators can now process language in terms of segments of meaning rather than individual words. Whilst this has yielded significant improvements, phrase-based translation works less effectively between structurally different languages. Hence, whilst English-Arabic translations may be relatively straightforward for MT, the syntactic distance between English and Mandarin, for example, poses more of a challenge.
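To make that distinction concrete, here is a minimal sketch in Python, assuming an invented toy dictionary (none of it is drawn from any real MT system), contrasting word-by-word lookup with greedy longest-phrase matching:

```python
# A toy contrast between word-based and phrase-based translation.
# The dictionaries below are invented for illustration only.

word_table = {"kicked": "pateó", "the": "el", "bucket": "cubo"}

phrase_table = {
    "kicked the bucket": "estiró la pata",  # idiom handled as one segment
    "the bucket": "el cubo",
}

def word_based(sentence: str) -> str:
    """Translate one word at a time: idioms and word order are lost."""
    return " ".join(word_table.get(w, w) for w in sentence.split())

def phrase_based(sentence: str) -> str:
    """Greedily match the longest known phrase at each position."""
    words, out, i = sentence.split(), [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest span first
            span = " ".join(words[i:j])
            if span in phrase_table:
                out.append(phrase_table[span])
                i = j
                break
        else:  # no phrase matched: fall back to a single word
            out.append(word_table.get(words[i], words[i]))
            i += 1
    return " ".join(out)

print(word_based("kicked the bucket"))    # pateó el cubo (literal nonsense)
print(phrase_based("kicked the bucket"))  # estiró la pata (the idiom survives)
```

The idiom survives only in the phrase-based pass, which is, in miniature, the improvement phrase-based alignment brought over word-based models.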
According to Andy Way, Director of Machine Translation at Lingo24, however, such limitations are one of the main reasons why MT does not threaten the future of the human translator. Whilst many criticise computational translation for its inaccuracies, others fear the opposite: the so-called ‘death of the translator’. Yet every day, Google Translate processes a billion translations for 200 million users: more than a million books’ worth of words, and more than a year’s worth of the workload of professional human translators worldwide. Therefore, whilst defending the necessity of human translation for specific and accurate communication, Andy also advocated increased investment in MT to harness the ever-expanding collection of ‘Big Data’ stored on the internet today.
Given that Google Translate is one of the leading global MT platforms, the floor then passed to Monika Podsiadlo, technical lead for text-to-speech at Google. She outlined Google’s position in the above debates clearly and efficiently, carefully balancing respect for the accuracy of human translation against her professional interest in expanding the capabilities of MT. Google seeks to address the need to ‘feel local in global spaces’, and to encourage more frequent and closer communication between people of differing nationalities, ethnicities, and social backgrounds. In terms of software outputs, the Google 60 initiative aims to cover 99% of the world’s internet users, whilst Google 70 will look to include under-represented languages in its translation offering. It goes without saying that Google is an empire constantly looking to expand, and Monika outlined its strategies for integrating MT across many of its leading platforms, including Google Maps, Web Search, Gmail, and YouTube.
Yet, for all the possibilities that MT expansion holds, it is unclear whether the place of human agency in translation remains unthreatened. This was the focus of the presentations by Jeremy Munday, of the Centre for Translation Studies (CTS) at the University of Leeds, and Sarah Ardizzone, an award-winning French-English literary translator based in London. The current focus of the CTS is to hone the collaborative processes of human translation, in which a translation is managed from requester to publication via a project manager, translator, reviser, reviewer, and proofreader. This workflow is one of the outputs of the IntelliText system, developed during an AHRC-funded project in 2010-11. One of the most interesting advantages of IntelliText over existing MT is that it works on the basis of collocations rather than phrase-based translation; an illustrative sketch of collocation scoring follows at the end of this post. A useful example of its benefits is the translation of Barack Obama’s phrase “the patchwork heritage in the US”. When rendered into Spanish, ‘patchwork’ was generalised, yet IntelliText surfaced the distinctly negative collocations of ‘patchwork’ for the translated text. Whilst not necessarily claiming superior accuracy, such methods, combined with the human translation practised at the CTS, display the benefits of rethinking models of translation.

For her part, Ardizzone stressed the importance of the diverse thinking that only human translation can accommodate. When translating novels, the full range of cultural difference must be addressed whilst preserving the author’s voice. In this way, she argued, MT cannot account for the creative process of translation. Other potential roadblocks for digital translation include slang, familiar language, and localised or contextualised vernaculars, all of which are perishable phenomena. If, as Sarah argued, many language forms imply identity and exclusivity, how can the translator make the text accessible to a wider audience? Sarah’s project, the Spectacular Translation Machine (curated at London’s South Bank Centre as part of the London Literature Festival), provided a thought-provoking conclusion on this score: members of the public displayed significant variation both in their approaches to translation and in their understanding of the core principles and outcomes of cross-cultural translation.
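The talks did not specify how IntelliText scores collocations, but a standard way to quantify collocational strength is pointwise mutual information (PMI). The sketch below is a generic, hypothetical illustration of that idea, not IntelliText’s actual code; the miniature Spanish corpus and the candidate renderings of ‘patchwork’ are invented:

```python
import math
from collections import Counter

# Tiny invented target-language corpus; a real system would use
# millions of sentences.
corpus = [
    "un mosaico de culturas",
    "mosaico cultural de tradiciones",
    "retazos mal cosidos de políticas",
    "un mosaico de culturas diversas",
]

# Count unigrams and unordered within-sentence co-occurrences.
unigrams, pairs = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            pairs[frozenset((a, b))] += 1

total = sum(unigrams.values())

def pmi(a: str, b: str) -> float:
    """Pointwise mutual information: higher values mark stronger collocations."""
    joint = pairs[frozenset((a, b))] / total
    if joint == 0:
        return float("-inf")  # never observed together
    return math.log2(joint / ((unigrams[a] / total) * (unigrams[b] / total)))

# Rank two candidate renderings of 'patchwork' by how strongly each
# collocates with a context word from the target sentence.
for candidate in ("mosaico", "retazos"):
    print(candidate, pmi(candidate, "culturas"))
```

In this invented data, ‘mosaico’ collocates strongly with ‘culturas’ while ‘retazos’ never co-occurs with it, so a collocation-aware system would prefer the former in that context.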
Guest blog by Will Amos (AHRC Theme Leadership Administrator, ‘Translating Cultures’, and PhD student at the University of Liverpool working on ‘The Linguistic Landscape of the Pays d’Oc: examining attitudes to regionalism, commercialism, and globalisation in southern France’)