“Textkritik som analysemetod” (textual criticism as a method of analysis) was the title of this year's conference of the Nordic Network for Edition Philology (NNE), held in beautiful Gothenburg in the first week of October. Bi-annually, the NNE gathers editors, edition philologists, book historians and literary scholars from all the Nordic countries to discuss recent developments in research and editorial method and to present scholarly editions.
This year's conference was the 14th in a series of successful gatherings in the North and marked the 20th anniversary of the NNE – with 60 participants (and an amazing 50/50 gender distribution!) and 12 talks in three languages (Swedish, Norwegian & Danish) on various subjects more or less closely tied to this year's topic. The talks will be published in the NNE book series and made available digitally (XML-TEI P5 encoded!) afterwards.
What became obvious in the discussions and debates – not only at the NNE meeting, but in edition philology generally – is that the scholarly editions we editors prepare in such a sophisticated manner and with a special eye for detail are not really suited for computer-aided corpus analysis such as topic modeling, text mining or stylistics. The issue is not that (digital) scholarly editions are under-complex, but rather that their encoding and enrichment are so complex and deep. In a corpus of 100,000 books, a single textual error is statistically insignificant – there is no need to make the effort of emendation or to provide an explanation and possible rectification.

I think it has to 'sink in' that quantitative (digital) literary or text studies in particular ask very different questions from those commonly anticipated by edition philologists (that is, the questions of traditional literary studies). And since editions are not an end in themselves but user-oriented, what do we have to change in order to meet the needs (also) of those literary scholars who are interested in quantitative, corpus-based analyses and distant reading?
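To make the mismatch a little more concrete: a richly encoded TEI P5 edition records sic/corr pairs, editorial notes and apparatus entries, while a distant-reading pipeline usually just wants one flat stream of words. The following Python sketch is my own illustration, not anything presented at the conference; the file name, the choice of library (lxml) and the small set of elements handled are assumptions made purely for the example.

```python
# Minimal sketch: flatten a TEI P5 edition into plain text for corpus analysis.
# File name and the set of elements handled here are illustrative assumptions.
from lxml import etree

TEI = "http://www.tei-c.org/ns/1.0"

def flatten_tei(path, prefer="corr"):
    """Return the body of a TEI P5 file as plain text.

    <choice> elements are resolved by keeping only the preferred reading:
    prefer="corr" keeps the editor's emendation, prefer="sic" the original
    (erroneous) reading. The header and editorial notes are dropped.
    A real edition would require many more such decisions (apparatus
    entries, abbreviations, normalisations, ...), which is exactly the point.
    """
    tree = etree.parse(path)
    drop = "sic" if prefer == "corr" else "corr"
    # Remove unwanted elements but keep the tail text that follows them,
    # so the running text is not torn apart (with_tail=False).
    etree.strip_elements(
        tree,
        f"{{{TEI}}}teiHeader",
        f"{{{TEI}}}note",
        f"{{{TEI}}}{drop}",
        with_tail=False,
    )
    body = tree.find(f".//{{{TEI}}}body")
    if body is None:
        return ""
    # Collapse whitespace into single spaces for tokenisers and word counts.
    return " ".join("".join(body.itertext()).split())

# Hypothetical usage:
# print(flatten_tei("my_edition.xml", prefer="corr")[:300])
```

Even in this toy example the editor's decisions do not simply disappear – someone still has to decide whether the corpus gets the emended or the original readings, which is precisely where edition philology and quantitative text studies could meet.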